Part 6 of the “Preparing for Joint Commission AI Certification” series
The RUAIH guidance makes a striking recommendation: treat AI-related issues like patient safety events, and participate in voluntary, blinded industry-wide reporting to accelerate collective learning.
This represents a significant shift in how healthcare thinks about AI problems: not as IT issues to be logged in a help desk ticket, but as safety events worthy of root cause analysis, trend tracking, and cross-organizational learning.
Why Voluntary Reporting Matters
The guidance articulates the rationale clearly: “Voluntary reporting will reduce the potential for stifling regulatory burden that could limit the potential innovations that AI can deliver to healthcare, while providing opportunities for learning and quality improvement across healthcare organizations.”
There’s a pragmatic calculation here. If the healthcare industry can demonstrate effective self-governance of AI safety through voluntary reporting and shared learning, it may forestall more prescriptive regulation. But if AI-related patient harm accumulates without transparent industry response, regulatory intervention becomes more likely.
Beyond regulatory strategy, voluntary reporting serves a practical purpose. An AI tool causing problems at one health system may be deployed at dozens of others. Without information sharing, each organization discovers issues independently, often after patients have been affected. Collective learning accelerates problem identification and solution development.
What Counts as an AI Safety Event
The guidance describes AI safety events broadly: “AI tools may contribute to a near miss or harm, such as unsafe recommendations, major performance degradation after an update, or biased outputs.”
Consider these categories:
Direct Safety Events
AI outputs that directly contributed to or could have contributed to patient harm:
- A clinical prediction that was significantly incorrect and influenced care decisions
- An AI-generated documentation error that propagated to clinical decision-making
- A diagnostic AI that missed findings later identified as significant
- Treatment recommendations from AI that were clinically inappropriate
Near-Misses
AI errors caught before affecting patient care:
- AI-generated content that was clearly incorrect but identified during review
- Alert failures that were detected through other means
- Bias patterns identified through monitoring before harm occurred
- System behaviors that could have caused problems but didn’t
Performance Degradation Events
Significant changes in AI behavior that affect reliability:
- Substantial accuracy decline following a model update
- New patterns of errors not previously observed
- Systematic failures for specific patient populations
- Extended system outages affecting clinical workflow
Bias Events
Identification of differential AI performance or outcomes across patient groups:
- Significantly different accuracy across racial or ethnic groups
- Age-related performance disparities
- Systematic differences in AI behavior based on patient characteristics
- Disparate impact on care decisions for specific populations
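As a rough illustration of what “identified through monitoring” can look like in practice, here is a minimal sketch that compares model accuracy across patient groups in a hypothetical monitoring extract. The column names, the sample data, and the idea of a locally agreed gap threshold are all assumptions for illustration, not part of the guidance.

```python
import pandas as pd

# Hypothetical monitoring extract: one row per AI prediction, with the patient
# group, the model's binary prediction, and the observed outcome.
# All column names and values here are illustrative.
events = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   0,   1,   0,   1,   1,   0,   1],
    "outcome":    [1,   0,   1,   0,   1,   0,   0,   0],
})

# Accuracy by group; a persistent gap beyond a locally agreed threshold would
# be logged as a bias event rather than dismissed as noise.
accuracy = (
    events.assign(correct=events["prediction"] == events["outcome"])
          .groupby("group")["correct"]
          .mean()
)
gap = accuracy.max() - accuracy.min()
print(accuracy)
print(f"Accuracy gap between groups: {gap:.2f}")
```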
Integrating AI Reporting with Existing Systems
The RUAIH guidance recommends using existing infrastructure: “When possible, organizations should use existing structures to track and report AI-related incidents and consider updating the kinds of incidents being tracked and reported.”
For most health systems, this means extending patient safety event reporting to capture AI-related issues.
Update Your Incident Categories
Add AI-specific options to your incident reporting system:
- “AI/Algorithm-Related Event”
- “Clinical Decision Support Error”
- “AI Documentation Error”
- “AI System Malfunction”
Without explicit categories, AI events often get buried in general “equipment” or “IT” classifications, making trend identification impossible.
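As a minimal sketch, assuming your reporting system’s taxonomy can be extended programmatically, the new categories might be represented like this (the names simply mirror the list above):

```python
from enum import Enum


class AIIncidentCategory(Enum):
    """AI-specific incident categories mirroring the list above.

    These names are illustrative; map them onto whatever taxonomy your
    incident reporting system already uses.
    """

    ALGORITHM_RELATED_EVENT = "AI/Algorithm-Related Event"
    CLINICAL_DECISION_SUPPORT_ERROR = "Clinical Decision Support Error"
    AI_DOCUMENTATION_ERROR = "AI Documentation Error"
    AI_SYSTEM_MALFUNCTION = "AI System Malfunction"
```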
Train Staff to Recognize AI Issues
Many clinicians may not realize when an AI system is involved in their workflow. Training should cover:
- Which tools in your environment incorporate AI
- What AI-related issues look like
- How to report AI concerns through patient safety channels
- The importance of reporting near-misses, not just harm events
Adjust Your Investigation Process
Traditional root cause analysis may need adaptation for AI events:
- Include technical review of algorithm behavior, not just human factors
- Engage the vendor in the investigation where appropriate
- Consider whether the issue is local (your data, your workflow) or systemic (algorithm problem)
- Document model version and any recent updates
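To make those adaptations concrete, here is a minimal, illustrative sketch of the extra fields an AI-focused investigation record might capture. The field names are assumptions and would map onto your existing safety event record rather than replace it.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AISafetyEventRecord:
    """Minimal, illustrative fields for an AI-focused root cause analysis."""

    event_id: str
    event_date: date
    category: str                   # e.g. "Clinical Decision Support Error"
    ai_tool: str                    # which AI tool was involved
    model_version: str              # version in use at the time of the event
    recent_update: Optional[date]   # date of the most recent model/software update, if any
    scope: str                      # "local" (your data/workflow) or "systemic" (algorithm problem)
    vendor_engaged: bool = False    # whether the vendor participated in the investigation
    contributing_factors: list[str] = field(default_factory=list)
    description: str = ""
```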
External Reporting Pathways
The RUAIH guidance identifies several channels for external reporting:
Patient Safety Organizations (PSOs)
PSOs, established under the Patient Safety and Quality Improvement Act of 2005, provide a mechanism for confidential, protected reporting of safety events. Information submitted to a federally listed PSO receives legal privilege and confidentiality protections, reducing liability concerns that might otherwise deter reporting.
The guidance specifically recommends “confidential reporting to federally listed Patient Safety Organizations (PSOs)” as a channel for AI-related events.
If your organization works with a PSO, ensure AI-related events are included in your reporting. If you don’t currently have a PSO relationship, consider establishing one, particularly as AI deployment expands.
FDA Reporting (When Applicable)
For AI tools that are FDA-regulated medical devices, the guidance notes that “serious incidents should also be reported via FDA’s reporting pathways.”
This applies to:
- AI tools with FDA 510(k) clearance or approval
- Software as a Medical Device (SaMD) meeting FDA’s definition
- AI embedded in regulated medical devices
Serious events involving regulated devices require reporting through the FDA’s MedWatch system or, for manufacturers, through Medical Device Reporting (MDR) requirements.
CHAI Health AI Registry
The guidance references “CHAI’s upcoming Health AI Registry” as a mechanism for industry-wide learning. This registry, announced in early 2025, is designed to facilitate anonymous sharing of AI safety information across organizations.
Key features:
- De-identified reporting protecting patient privacy and organizational identity
- Pattern identification across multiple health systems
- Shared learning without liability exposure
- Vendor-agnostic insight into common AI failure modes
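The registry’s submission format has not been published, so the sketch below only illustrates the blinding principle, reusing field names from the illustrative event record above. Free-text fields would still need manual review for identifiers before anything leaves the organization.

```python
# Fields assumed safe to share: they describe the tool and the failure mode,
# not the patient or the institution. This allow-list is illustrative; note
# that free-text descriptions are deliberately excluded because they may
# contain identifying details.
SHAREABLE_FIELDS = {
    "category", "ai_tool", "model_version",
    "recent_update", "scope", "contributing_factors",
}


def blind_report(event: dict) -> dict:
    """Return only the fields on the allow-list; everything else is dropped."""
    return {key: value for key, value in event.items() if key in SHAREABLE_FIELDS}
```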
As this registry becomes operational, participation will provide both learning opportunities and evidence of governance maturity.
Creating Psychological Safety for Reporting
A reporting system only works if people use it. The guidance emphasizes that reports should be “blinded, protecting patient privacy and the reporting institution’s identity, to reduce fear of liability and encourage openness.”
Internally, apply similar principles:
Non-punitive culture: AI errors are often system issues, not individual failures. Staff should feel safe reporting AI concerns without fear of blame.
Anonymity options: Allow anonymous reporting for sensitive concerns. Some issues may not be reported if staff must identify themselves.
Visible follow-through: When issues are reported, communicate actions taken. If staff report problems that disappear into a void, reporting will decline.
Leadership engagement: When leaders visibly engage with AI safety reporting, it signals organizational priority.
The Special Challenge of AI Errors
AI errors can feel different from traditional safety events. If an AI tool makes an incorrect recommendation that a clinician follows, who is responsible: the clinician, the AI vendor, or the organization that deployed it?
This ambiguity can suppress reporting. Clinicians may feel embarrassed that they “should have caught” an AI error. They may worry about liability implications. They may not know how to categorize what happened.
Address this proactively:
- Acknowledge that AI errors are expected, not shameful
- Clarify that reporting AI issues is valued, not problematic
- Separate reporting from blame assignment
- Recognize that humans can’t perfectly validate every AI output; that’s why monitoring systems exist
What to Do With the Data
Reporting without action is theater. Your AI safety reporting program should drive:
Trend identification: Are you seeing recurring issues with specific AI tools? Are certain error types increasing? Are particular patient populations affected more frequently?
Vendor communication: Aggregate concerns to share with vendors. Individual events may seem like outliers; patterns demand vendor response.
Governance input: Safety data should inform governance decisions about AI tools, including whether to continue using problematic tools.
Process improvement: Some AI safety issues reflect workflow problems, not algorithm problems. Use safety data to improve how AI integrates with clinical processes.
Industry contribution: De-identified data contributed to PSOs or registries helps the broader healthcare community learn.
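As a minimal sketch of the trend-identification step, assuming your incident system can export AI-related reports to a CSV with event_date, ai_tool, and category columns (the filename and column names are illustrative):

```python
import pandas as pd

# Hypothetical export of AI-related reports from the incident reporting system.
reports = pd.read_csv("ai_safety_events.csv", parse_dates=["event_date"])

# Monthly event counts per AI tool and category: a simple view that makes
# recurring issues and rising error types visible for governance review.
trend = (
    reports.groupby([reports["event_date"].dt.to_period("M"), "ai_tool", "category"])
           .size()
           .rename("event_count")
           .reset_index()
)
print(trend.sort_values(["ai_tool", "event_date"]))
```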
Getting Started
If AI safety reporting doesn’t exist in your organization, start here:
Week 1-2:
- Review current incident reporting categories; identify where AI events would fit
- Draft proposed AI-specific incident categories for your reporting system
- Identify who will triage AI-related reports (quality, IT, clinical informatics?)
Week 3-4:
- Update incident reporting system with new categories
- Create brief guidance document: “How to Report AI-Related Safety Concerns”
- Share guidance with clinical and IT staff in high-AI areas
Month 2:
- Conduct training sessions for staff using AI tools
- Designate responsibility for AI safety event review
- Establish cadence for reviewing AI safety reports (monthly for high-risk areas)
Month 3:
- Add AI safety reporting to governance committee agenda
- Evaluate PSO relationship for AI event inclusion
- Develop vendor communication process for AI-related concerns
Ongoing:
- Monitor for CHAI Health AI Registry availability and participation requirements
- Track and trend AI safety events
- Iterate on processes based on experience
You won’t have perfect reporting coverage immediately. The goal is building the organizational habit of treating AI issues as safety events worthy of systematic tracking-before a serious event forces reactive change.
Next in the series: Evaluating AI Bias in Your Patient Population: A Practical Framework
Harness.health helps regional health systems build AI governance programs aligned with Joint Commission RUAIH guidance. Learn more about our platform