
Health systems are drowning in metrics. We can tell you how many portal messages were answered within 24 hours, how many patients closed a care gap, and how many clinicians are using ambient AI in their notes. But there is one question our dashboards rarely answer:
Did the patient actually understand what we told them well enough to act on it?
We have turned “patient education” into a checkbox: an after-visit summary printed, a link to an online library, a line in the note that says, “risks and benefits discussed; patient verbalized understanding.” On paper, the job is done. In real life, patients go home confused, overwhelmed, and quietly hoping Google or a relative can fill in the gaps.
Teach-back—the simple practice of asking patients to explain instructions in their own words—has been recommended for years as a core health literacy safety strategy in the Health Literacy Universal Precautions Toolkit. Heart failure and medication safety studies show that it can improve knowledge, self-care, and even reduce readmissions and errors when used consistently. Yet most organizations still treat it as a communication tip, not something that belongs in system design, workflows, and data models.
If digital innovation is supposed to make care safer and more equitable, it’s time for health IT leaders to treat patient understanding as a first-class use case.
The blind spot in our “patient engagement” stack
Most “engagement” metrics track activity, not comprehension:
- Portal logins and message counts
- Click-through on educational content
- Completion of digital care plans
- Attendance at virtual visits
These are easy to instrument and count. None of them tells you whether a patient can correctly explain when to take a new anticoagulant, what to do when their breathing worsens, or how to use an inhaler or GLP-1 pen.
When something goes wrong—wrong dose, missed titration, avoidable readmission—the EHR usually shows that education was provided. The failure is invisible in data, so it gets labeled “nonadherence” instead of “we never verified they understood.”
That’s not just a clinical risk; it’s an IT design failure. We instrument everything except the moment that actually closes the loop.
What it means to operationalize teach-back in health IT
Treating teach-back as a standard, not a script, doesn’t mean turning every conversation into a 20-question quiz. For the CIOs, CMIOs, and digital health leaders who own the workflows and data, it means designing systems so that key understanding checks are expected, supported, and visible.
There are pragmatic ways to do that:
- Flag high-risk scenarios in the EHR.
New anticoagulants, insulin starts, heart failure discharges, complex titrations, new antidepressants, and device teaching (inhalers, injections, CPAP) should trigger a standard “teach-back required” moment in the visit template or discharge workflow.
- Replace vague “education provided” checkboxes.
Instead of generic education fields, add specific ones such as “Teach-back of anticoagulant dosing and red-flag bleeding symptoms completed” or “Teach-back of asthma action plan completed.”
- Extend teach-back into digital channels.
After visits, portals or SMS can ask one or two plain-language questions like “In your own words, what is the main thing you’re supposed to do differently after this visit?” or “Is there anything in your plan you’re still unsure about?”
Those answers don’t need to be perfect essays. They just need to be good enough to surface misunderstandings early, while they are still cheap to fix.
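To make the idea concrete, here is a minimal sketch of what a “teach-back required” trigger rule could look like. The event codes and field names are illustrative placeholders, not real EHR order identifiers or a specific vendor’s documentation model:

```python
# Minimal sketch of a "teach-back required" trigger rule.
# Event codes and field names below are hypothetical placeholders,
# not real EHR order or documentation identifiers.

HIGH_RISK_TRIGGERS = {
    "order:anticoagulant_new": "Teach-back of anticoagulant dosing and red-flag bleeding symptoms completed",
    "order:insulin_start": "Teach-back of insulin dosing and hypoglycemia symptoms completed",
    "discharge:heart_failure": "Teach-back of heart failure action plan completed",
    "device:inhaler_teaching": "Teach-back of inhaler technique completed",
}

def required_teach_back(encounter_events):
    """Return the specific teach-back documentation fields this encounter should surface."""
    return [HIGH_RISK_TRIGGERS[e] for e in encounter_events if e in HIGH_RISK_TRIGGERS]

visit = ["order:anticoagulant_new", "order:statin", "device:inhaler_teaching"]
for field in required_teach_back(visit):
    print("REQUIRED:", field)
```

The point of the sketch is that the output is a small set of specific, discrete fields rather than a generic “education provided” checkbox, which is what makes the teach-back moment measurable later.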
Once teach-back is represented in discrete fields and simple post-visit flows, it becomes something you can actually measure, stratify, and improve—the same way you already treat sepsis bundles or time-to-antibiotics.
AI as a teach-back copilot, not a replacement
Generative AI can already translate discharge summaries into patient-friendly language and formats that are significantly more readable and understandable than the originals, provided clinicians review and correct the outputs. That’s exactly the kind of cognitive load health IT can take off clinicians’ plates.
Instead of expecting busy clinicians to reinvent explanations from scratch dozens of times a day, health IT teams can use AI for:
- Plain-language visit and discharge summaries
Generate a first draft of the patient-facing plan in clear, jargon-free language at an appropriate reading level and in the patient’s preferred language. The clinician reviews it, edits it as needed, and uses it as the starting point for teach-back.
- Context-aware analogies and examples
AI can look at existing medications and conditions to create explanations that fit the patient’s reality: “This new inhaler is your daily seat belt to prevent crashes, not just an airbag after they happen.”
- Suggested teach-back questions
For a given diagnosis and treatment plan, suggest two or three targeted teach-back prompts the clinician can use in the room or via telehealth, and one follow-up question for the portal.
- Surfacing likely confusion points
By learning from prior instructions and follow-up questions, AI can flag elements that commonly cause errors—taper schedules, overlapping meds, sliding scales—and highlight them in the clinician’s workflow as “spend an extra 30 seconds here.”
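One way to ground this pattern is a small prompt builder that packages clinical context into a structured request for the AI. The model call itself is out of scope here; the function name and parameters are assumptions for illustration, not a specific vendor API:

```python
# Illustrative prompt builder for an AI "teach-back copilot".
# The function and its parameters are hypothetical; the sketch only shows
# how context (diagnosis, plan, language, reading level) could shape the
# request, with clinician review stated as a hard requirement.

def build_teach_back_prompt(diagnosis, plan, language="English", reading_level="6th grade"):
    return (
        f"Rewrite this care plan for a patient with {diagnosis} in plain "
        f"{language} at a {reading_level} reading level. Then suggest two "
        f"teach-back questions a clinician can ask in the room, and one short "
        f"follow-up question suitable for a patient portal message.\n\n"
        f"Care plan: {plan}\n\n"
        f"This output is a draft only; a clinician must review and edit it before use."
    )

prompt = build_teach_back_prompt(
    "new atrial fibrillation",
    "Start apixaban 5 mg twice daily; report any unusual bleeding.",
)
```

Keeping the “draft only, clinician must review” constraint inside the prompt itself, rather than only in policy, is one simple guardrail against AI output reaching patients unreviewed.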
In one JAMA Network Open study of 50 discharge summaries, AI-generated versions were significantly more readable and understandable than the original EHR notes, although physician review was still required for accuracy and safety. The goal is not to replace human conversation. It’s to automate drafting, tailoring, and pattern recognition so clinicians can spend their limited time on the uniquely human part: reading the room, listening, and adapting.
Designing for equity, not just convenience
Health literacy is not evenly distributed. It tracks closely with income, educational opportunity, language, and structural racism. National fact sheets and Healthy People 2030 describe limited health literacy as both a prevalent problem and a driver of inequities in outcomes, costs, and trust. People with limited literacy and English proficiency face higher rates of preventable hospitalizations, worse chronic disease control, and more difficulty navigating care.
Recent national summaries estimate that roughly one in three adults in the US has limited health literacy, with much higher rates in marginalized communities. If we don’t build teach-back and comprehension checks into our digital workflows, we’re effectively saying: “People who already know how to navigate us will do fine; everyone else is on their own.”
Health IT can help reverse that by:
- Making interpreters and multilingual staff part of the standard teach-back flow, not an afterthought.
- Ensuring AI-generated explanations are available in multiple languages and accessible formats (audio, large text, screen-reader friendly).
- Stratifying teach-back metrics by race, ethnicity, language, disability status, and neighborhood deprivation to see who is routinely getting confirmation—and who is not.
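As a sketch of the stratification step, the computation itself is simple once teach-back lives in discrete fields. The records below are invented for illustration; in practice they would come from documented teach-back fields joined to patient demographics:

```python
# Sketch of stratifying teach-back documentation rates by preferred language.
# The encounter records are illustrative, not real data.
from collections import defaultdict

encounters = [
    {"language": "English", "teach_back_documented": True},
    {"language": "English", "teach_back_documented": True},
    {"language": "Spanish", "teach_back_documented": False},
    {"language": "Spanish", "teach_back_documented": True},
    {"language": "Vietnamese", "teach_back_documented": False},
]

def teach_back_rate_by_group(records, group_key):
    """Fraction of encounters with documented teach-back, per group."""
    totals, done = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        done[r[group_key]] += int(r["teach_back_documented"])
    return {group: done[group] / totals[group] for group in totals}

print(teach_back_rate_by_group(encounters, "language"))
# → {'English': 1.0, 'Spanish': 0.5, 'Vietnamese': 0.0}
```

The same function works for any grouping column, so race, ethnicity, disability status, or neighborhood deprivation index can be swapped in without new code, which is what makes routine equity reporting cheap once the underlying fields exist.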
Healthy People 2030 now explicitly frames health literacy as a national priority and part of the social determinants of health agenda. If the only people getting documented teach-back and follow-up clarification are those already advantaged, your “engagement” stack is quietly amplifying inequity instead of reducing it.
A practical roadmap for CIOs and CMIOs
This doesn’t require a massive transformation program on day one. A realistic roadmap for a health IT or digital health leader might look like:
- Pick three high-risk use cases.
For example: anticoagulants, heart failure discharges, and a high-volume behavioral health medication.
- Co-design explanations with patients and clinicians.
Partner with patient advisors and frontline teams to create short, plain-language templates and teach-back prompts for those scenarios.
- Embed teach-back into EHR and telehealth templates.
Add specific fields and prompts into visit and discharge workflows so the “teach-back moment” is visible and expected.
- Layer in AI where it reduces friction.
Pilot plain-language summary generation and teach-back question suggestions, with tight guardrails and mandatory clinician review.
- Measure and stratify.
Track how often teach-back is documented, what issues it uncovers, and whether there is any early signal on callbacks, medication problems, or readmissions. Look for disparities across patient groups.
- Scale and standardize.
If it works in three use cases, expand to more. Over time, make “documented understanding for high-risk decisions” part of your quality and digital strategy, not just an education policy.
The payoff is not just fewer preventable errors. It’s building digital infrastructure that aligns with what patients already assume: that if something is important, their care team will make sure they truly understand it.
Health IT has spent a decade wiring up portals, APIs, ambient scribing, and virtual visits. The next wave should be simpler and more radical: make patient understanding visible in our systems, and design AI to protect it, not bypass it.
About Aayush Sisodia
Aayush Sisodia, MSHI, BDS, is a consultant serving as a business analytics advisor in pharmacy analytics at a national health services and pharmacy benefit management company in the United States. His work focuses on turning pharmacy claims and real-world data into clearer, more actionable information for patients, clinicians, and payers.
