The Science

We didn’t invent the science. We applied it.

The frameworks that drive good clinical training have existed for decades. Deliberate practice. Therapeutic alliance research. Empathy measurement. Common factors. The research has been saying the same thing for years — we just built the tool that makes it usable, session by session.

Every dimension we measure, every piece of feedback we generate, is grounded in what the field already knows works. We didn’t start with technology and look for a problem. We started with what supervisors already evaluate and built a way to see it more clearly.

Students underreport clinical difficulties. They overestimate their own competency in self-assessments. Not because they're hiding things, but because supervision is evaluative, and humans are human. And the real clinical learning often goes unseen precisely because the student doesn't yet have the framework to see it.

“Educationally opportune encounters with real patients are finite. We need a scalable approach to emulating authentic patient-clinician interactions.”

— Kämmer et al., Journal of Medical Internet Research, 2025

What We Measure

The dimensions clinical training has always cared about.

These aren’t metrics we invented. They’re drawn from the same body of research supervisors have relied on for decades — empathy, alliance, challenge, conceptualization. We operationalized them into structured, session-level feedback so you’re not relying on a gut check at mid-term review.

Empathic Accuracy

Can the therapist read what the client is actually feeling beneath the surface? This isn’t warmth — it’s perceptual accuracy. Decades of research show it’s the foundation everything else is built on.

Empathic Depth

Did the client feel understood enough to go deeper? Includes calibrated restraint — knowing when not to name an emotion is as important as naming it well. We measure the response, not just the attempt.

Challenge & Support

Was the level of challenge right for this client’s capacity in this moment? A sharp question that cracks a defense is as valuable as a warm reflection that lands. We score both independently.

Rupture & Repair

How well does the therapist detect and respond to ruptures in the working relationship? The research is clear: it's not whether ruptures happen, it's how they're handled that predicts outcomes.

Clinical Intentionality

Is the therapist actively shaping the session with purpose, or passively following wherever the client leads? Intentionality separates competent practice from reactive conversation.

Case Conceptualization

Do the therapist’s decisions reflect a coherent read of this specific client — their context, resistance, and capacity — or could those same moves have been made with anyone?


How It Works

Structured practice. Structured feedback. Visible growth.

Simulated Practice

Students practice with simulated clients presenting real diagnostic patterns in a safe environment. Mistakes become learning moments, not client harm. Available anytime, not just during scheduled role play.

Structured Feedback

Every session is scored across the dimensions clinical training has always cared about. Not a vague overall grade — specific, actionable insight grounded in the frameworks supervisors already use.

Targeted Growth

Supervisors assign cases that address identified gaps. A student who struggles with alliance ruptures practices with a resistant client. Practice becomes purposeful. Growth becomes visible.


Built On

Research traditions that have shaped clinical training for decades.

We didn’t start from scratch. Every scoring dimension, every feedback framework, and every clinical signal we track draws from research traditions that the field has spent decades developing and validating.

Deliberate Practice

Ericsson’s research showed that expert performance doesn’t come from experience alone — it comes from structured repetition with feedback, targeting specific weaknesses. Rousmaniere and others have since applied this directly to psychotherapy training, demonstrating that therapists improve faster when practice is intentional and feedback is immediate.

Therapeutic Alliance Research

Bordin’s working alliance model, Safran and Muran’s work on rupture and repair, and Eubanks-Carter’s Rupture Resolution Rating System all point to the same thing: the therapeutic relationship is the strongest predictor of outcomes across modalities. We measure the micro-moments where it forms, breaks, and gets repaired.

Empathy & Common Factors

Elliott’s meta-analyses consistently show therapist empathy predicting client outcomes. Wampold’s common factors research demonstrates that what matters most isn’t the specific technique — it’s the human elements: empathy, alliance, the therapist’s ability to be present. We built our feedback around those elements.

Simulation-Based Clinical Education

Issenberg and others proved decades ago in medical education that simulation accelerates clinical skill development. Cook’s recent work in Medical Teacher shows that virtual patients built on large language models make this scalable and accessible for the first time. Mental health training is applying these same principles.

Try it yourself before you recommend it

Start with 3 free sessions. Do a role play. Read the feedback. See if it sounds like something a supervisor would actually write.