Monthly Community Conversation: Alert Overrides Are the System Talking Back
January Recap
Written by:
Cait Doherty
Head of Communications
Patient Safety Community
In our latest Community Conversation meeting, we learned an astonishing fact:
For one electronic health record (EHR) vendor in a very large health system, ~92% of clinical decision support system (CDSS) alerts are overridden by clinicians.
Often, high override rates are attributed to alert fatigue, inadequate clinician training, or complacency. But we aren’t here to default to the usual attributions.
In systems thinking, human behavior is rarely a helpful explanation. It is a predictable response to the conditions people are working within. When alerts are overridden at scale, it doesn’t mean clinicians are ignoring safety. It means the system has taught them, over time, that alerts are good at interrupting work but may not support it.
Repeated overrides are not random.
They are the system telling us something isn’t aligned.
The following case is used because it is well documented, not because it is unique. The patterns described here exist across healthcare. Our intent is not to assign blame, but to make visible how hazards can accumulate in complex systems that continue to function on the surface.
The VA as a Case Study
In January’s Community Conversation, Nancy Leveson and John Thomas of MIT’s Partnership for Systems Approaches to Safety and Security shared work conducted with the U.S. Department of Veterans Affairs (VA) using a modern hazard analysis technique, System-Theoretic Process Analysis (STPA), to examine the effectiveness of their CDSS.
The MIT team conducted a comprehensive study and asked the VA a simple but meaningful question: “How often are alerts being overridden?”
No one at the VA initially knew.
When the data were pulled, the answer was striking: between 85% and 92% of alerts from the CDSS’s approximately 4,000 rules were overridden by clinicians at the point of care.
The VA case isn’t an outlier or a uniquely broken system. It’s a magnified example of patterns that exist across healthcare, showing how hazards can quietly accumulate even while a system appears to be functioning.
For those outside of healthcare, consider this analogy:
Imagine a building with a failing foundation. Instead of fixing it, maintenance props up floors, tenants avoid certain rooms, and staff learn where not to step. The building doesn’t collapse, but it also isn’t safer. The hazards haven’t gone away; it is simply up to each person to be more careful. When harm inevitably occurs, attention focuses on the person who happened to be there.
Going Deeper: What the VA Data Revealed
Feedback existed but wasn’t being used.
Raw data isn’t useful unless it is meaningful. In many systems, feedback exists, but it isn’t used. Data on alert firing and overrides were available, but they weren’t routinely reviewed as a measure of system health. A system can be rich in data and still be blind.
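To make this concrete, here is a minimal sketch, in Python, of what routinely reviewing override data as a measure of system health could look like. The event format, field names, and rule name are hypothetical, not drawn from any real EHR or CDSS schema.

```python
# Hypothetical sketch: treating override data as a system-health measure.
# The (rule_id, outcome) event format and the rule name are illustrative
# assumptions, not a real CDSS log schema.
from collections import Counter

def override_rates(events):
    """Return {rule_id: override_rate} from a list of alert events.

    Each event is a (rule_id, outcome) pair, where outcome is
    "overridden" or "accepted".
    """
    fired = Counter()
    overridden = Counter()
    for rule_id, outcome in events:
        fired[rule_id] += 1
        if outcome == "overridden":
            overridden[rule_id] += 1
    return {rule: overridden[rule] / fired[rule] for rule in fired}

# Example: a rule overridden 9 times out of 10 yields a 0.9 rate, the kind
# of signal the VA data contained but no one was routinely reviewing.
events = [("renal-dose-check", "overridden")] * 9 + [("renal-dose-check", "accepted")]
print(override_rates(events))  # {'renal-dose-check': 0.9}
```

The point is not the arithmetic, which is trivial, but that nothing this simple was wired into anyone’s routine review.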
No one owned effectiveness over time.
Rules were carefully created, but there was no clear responsibility for evaluating whether they remained useful, relevant, or harmful:
A small IT team handled technical maintenance of the roughly 4,000 rules, but ownership of clinical relevance, that is, whether the rules still made sense in practice, fell into a gap.
Clinicians encountered the rules in practice but did not own lifecycle management.
Leadership was not structurally positioned to routinely receive or review effectiveness over time.
Vendors shaped the system’s constraints, and without feedback and lifecycle accountability, unresolved hazards accumulated within those constraints.
The system operated reactively.
Problems were discovered through tickets, incidents, or chance, not through proactive monitoring of weak signals such as rising override rates or declining usefulness.
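As a rough illustration of what proactive monitoring of a weak signal could look like, the sketch below compares a rule’s recent override rate against its baseline. The window sizes and threshold are illustrative assumptions, not recommendations.

```python
# Hypothetical sketch: flagging a *rising* override rate as a weak signal,
# rather than waiting for a ticket or an incident. Window sizes and the
# threshold are illustrative choices only.

def rising_override(history, baseline_n=100, recent_n=20, delta=0.15):
    """history: chronological list of booleans (True = alert overridden).

    Returns True when the override rate in the recent window exceeds
    the baseline rate by more than `delta`.
    """
    if len(history) < baseline_n + recent_n:
        return False  # not enough data to compare the two windows
    baseline = history[-(baseline_n + recent_n):-recent_n]
    recent = history[-recent_n:]
    baseline_rate = sum(baseline) / len(baseline)
    recent_rate = sum(recent) / len(recent)
    return recent_rate - baseline_rate > delta

# Example: a rule drifting from ~50% to ~90% overrides trips the flag.
history = [i % 2 == 0 for i in range(100)] + [True] * 18 + [False] * 2
print(rising_override(history))  # True
```

A check like this does nothing on its own; someone with authority still has to receive the flag and act on it, which is exactly the gap described next.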
Complexity accumulated.
New rules were added in response to problems, while old or ineffective rules were rarely removed. In complex systems, adding controls without removing old ones often increases hazard, even when intentions are good.
The team’s findings show there was no lack of data or effort at the VA. What was missing was a mechanism. Signals existed, but there was no designed way to ensure they reached people with the authority to act. Override data was generated during care, encountered by clinicians, handled operationally by IT, and then went no further. Without a path from signal to decision to action, the system could not learn; it could only adapt. Staff kept the system functioning through extra effort and workarounds, not because it worked, but because there was no alternative.
Systems Thinking Happening in Our Community
Treating alert overrides as a signal allows us to move past surface explanations and ask better questions:
What signals in our systems are we currently trained to ignore?
What hazards sit beneath them?
When organizations take these questions seriously, they change how alert systems are designed and governed. Several PSC members shared examples where alert systems were intentionally pared back, rebuilt around real harm, and monitored over time. When alerts were tightly scoped, clinically grounded, owned by clinicians, reviewed regularly, and supported by IT as safety partners, they worked.
In these cases, IT was involved upstream, surfacing patterns, monitoring performance, and supporting decisions to revise or retire alerts, rather than reacting to tickets after the fact. Participants said that turning off poorly performing alerts led to no increase in harm, and in some cases improved outcomes, because clinicians were no longer fighting noise. Together, these stories reinforce that people are not rejecting safety. They are rejecting systems that no longer make sense in practice.
Leadership in Complex Systems
As the conversation continued, participants acknowledged that many of the hazards surfaced in the VA analysis are widely distributed across healthcare organizations and other EHR vendors. Often, they’re normalized. High override rates, reactive workflows, and growing complexity are not the exception. They are the norm.
Our deeper challenge:
Even when signals exist, complex organizations are often not designed to receive, interpret, or act on them, so feedback is generated without reliably reaching those with the authority to change the system. In environments like this, resistance to systems thinking often shows up as confidence that existing approaches are already sufficient, leaving little room for deeper inquiry. Furthermore, a true systems thinking approach asks leaders to shift from certainty to curiosity, and to look beneath familiar frameworks rather than defend them.
In this context, the hardest work is not identifying signals. It is creating the conditions to take them seriously. Rather than demanding better compliance or more vigilance, leadership is invited to ask a different question: What mechanism do you have or need that ensures signals like these are noticed, interpreted, and acted on before harm occurs?
Please reach out if you are interested in learning more about how STPA is applied.
Leveson, N. G., & Thomas, J. P. (2025). System safety for health information technology: A systems-theoretic hazard analysis of clinical decision support systems at the U.S. Department of Veterans Affairs. MIT Partnership for Systems Approaches to Safety and Security (PSASS). https://psas.scripts.mit.edu/home/papers/2025_System_Safety_for_Health_Information_Tec.pdf
Until next time,
C
Better Healthcare by Design.
Better Together.
Patient Safety Community


