Part 8 of the “Preparing for Joint Commission AI Certification” series
The RUAIH guidance frames the education challenge directly: “With the proliferation of AI tools in healthcare, clinicians and staff members are encountering a growing number of AI tools throughout the workplace.”
A vendor demo and a quick reference card aren’t training. And AI literacy (understanding what these tools are and aren’t capable of) matters as much as knowing which buttons to click.
This final post in our series addresses what effective AI training looks like, and how to build it without creating another unwieldy competency requirement.
Two Layers of AI Education
The RUAIH guidance distinguishes between two types of education:
General AI literacy: Foundational understanding of what AI is, how it works, its capabilities and limitations, and organizational policies governing its use.
Tool-specific training: Practical instruction on using specific AI tools correctly, interpreting their outputs, and knowing when to seek help.
Both are necessary. A clinician who understands how to use an AI documentation tool but doesn’t understand that AI can hallucinate information is incompletely trained. Conversely, abstract AI literacy without practical application doesn’t improve care.
What AI Literacy Should Cover
AI literacy training doesn’t need to make everyone a data scientist. The goal is ensuring staff can be informed users of AI tools and effective participants in AI governance.
Core Concepts (All Staff)
What AI is and isn’t:
- AI tools learn patterns from data and apply those patterns to new situations
- AI doesn’t “understand” in the human sense; it generates statistically probable outputs
- AI can be wrong, sometimes confidently wrong
- Different types of AI have different capabilities and limitations
How AI works in healthcare:
- Overview of AI applications in your organization (which tools, where deployed)
- The difference between AI-assisted and AI-automated processes
- Human oversight requirements and why they exist
AI risks and limitations:
- AI can reflect biases present in training data
- Performance can vary across patient populations
- AI outputs require clinical judgment, not blind acceptance
- Model behavior can change over time or after updates
Organizational policies:
- Where to find your AI policies and procedures
- How to report AI concerns or errors
- Who to contact with AI questions
Enhanced Concepts (Clinical Staff Using AI Tools)
Interpreting AI outputs:
- Confidence scores and what they mean
- Why AI might be more or less reliable in specific situations
- Recognizing when AI output seems inconsistent with the clinical picture
Human-AI collaboration:
- AI as decision support, not decision maker
- When to trust AI output vs. when to verify independently
- The concept of “automation bias” and how to avoid it
Documentation and accountability:
- How to document AI involvement in clinical decisions
- Who is accountable when AI contributes to care decisions
- Your responsibility to review and verify AI-generated content
Tool-Specific Training Requirements
The guidance states that healthcare organizations should “define and document how users of the AI system will be given relevant AI tool and system documentation and role-specific training to ensure that AI systems are used and monitored appropriately, safely, and effectively.”
For each AI tool deployed, define the following (a minimal record sketch follows this list):
Who needs training: Not everyone needs training on every tool. Identify roles that will directly interact with the AI system.
What they need to know:
- The tool’s purpose and intended use
- How to access and operate the tool
- How to interpret outputs correctly
- Known limitations and contraindications
- How to report issues or concerns
When training happens:
- Initial training before first use
- Refresher training (if required) and at what interval
- Update training following significant tool changes
How competency is verified:
- Completion attestation
- Demonstrated skill assessment
- Supervisor sign-off
- A combination of the above, depending on risk level
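One way to keep these per-tool definitions consistent and auditable is to capture them as structured records rather than scattered free text. Below is a minimal sketch in Python; the names (TrainingRequirement, VerificationMethod) and fields are illustrative assumptions, not part of the RUAIH guidance or any specific platform.

```python
from dataclasses import dataclass, field
from enum import Enum


class VerificationMethod(Enum):
    """How competency is verified; combine methods for higher-risk tools."""
    ATTESTATION = "completion attestation"
    SKILL_ASSESSMENT = "demonstrated skill assessment"
    SUPERVISOR_SIGNOFF = "supervisor sign-off"


@dataclass
class TrainingRequirement:
    """Role-specific training requirement for a single AI tool (illustrative structure)."""
    tool_name: str
    role: str                                       # e.g., "Physician/APP"
    topics: list[str]                               # purpose, operation, limitations, reporting
    required_before_first_use: bool = True
    refresher_interval_months: int | None = None    # None = no scheduled refresher
    retrain_on_significant_update: bool = True
    verification: list[VerificationMethod] = field(
        default_factory=lambda: [VerificationMethod.ATTESTATION]
    )


# Example: ambient documentation tool, physician users
ambient_md = TrainingRequirement(
    tool_name="Ambient clinical documentation tool",
    role="Physician/APP",
    topics=[
        "How the tool captures and processes conversation",
        "Review and attestation workflow",
        "Known limitations and common error patterns",
        "How to report accuracy concerns",
    ],
    refresher_interval_months=12,
    verification=[VerificationMethod.ATTESTATION,
                  VerificationMethod.SUPERVISOR_SIGNOFF],
)
```

Records like this make it easy to answer the questions a reviewer is likely to ask: which roles require training on which tools, when, and how competency is verified.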
Example: AI Documentation Tool Training
For an ambient clinical documentation tool, role-specific training might include:
Physicians/APPs:
- How the tool captures and processes conversation
- Review and attestation workflow
- Common error patterns to watch for
- How to correct AI-generated content
- Patient notification/consent procedures
- How to report accuracy concerns
Medical Assistants:
- How to initiate/pause recording (if applicable)
- How to explain the tool to patients
- Who to contact if technical issues arise
Training Coordinators:
- How to add new users
- How to pull usage and accuracy reports
- How to troubleshoot common issues
The Change Management Challenge
Training isn’t just skill transfer; it’s change management. Staff may have concerns about AI that training alone won’t resolve:
“AI will replace me”: Address job security concerns directly. Be honest about which tasks AI affects and how roles may evolve. Emphasize AI as augmentation, not replacement: clinicians still make decisions.
“I don’t trust it”: Skepticism isn’t always wrong. Acknowledge AI limitations. Explain your organization’s validation and monitoring processes. Emphasize that healthy skepticism (verifying AI outputs) is appropriate.
“It’s just more work”: Some AI tools do add workflow steps. Be honest about this tradeoff. Explain the intended benefits. Solicit feedback on workflow improvements.
“I’m not technical”: AI use doesn’t require technical skill. Training should demystify the tools without requiring understanding of algorithms or data science.
Effective training addresses these concerns explicitly. If you only teach mechanics without addressing mindset, adoption will suffer.
Training Delivery Approaches
Regional systems don’t have unlimited training time or dedicated instructional designers. Practical approaches include:
Vendor-Provided Training
Most AI vendors offer training resources. Leverage these as a foundation:
- Vendor webinars and recorded training
- Quick reference guides and job aids
- Train-the-trainer sessions
- Ongoing support resources
Supplement vendor training with organization-specific content (policies, reporting procedures, local workflows).
Embedded Training
Build training into the workflow where possible:
- Tooltips and contextual help within AI tools
- Quick reference cards at workstations
- Peer champion support for real-time questions
- “Tip of the week” in existing communication channels
Tiered Training
Not everyone needs the same depth:
Tier 1 (All staff): Brief AI literacy module (30-60 minutes). Can be part of annual competency training.
Tier 2 (Direct users): Tool-specific training (1-2 hours per tool). Required before first use.
Tier 3 (Super users/champions): Deep training including troubleshooting, advanced features, and ability to train others (4-8 hours).
Just-in-Time Training
Don’t front-load everything. Provide basic training upfront, then:
- Follow-up training after users have experience
- Refreshers triggered by updates or identified issues
- Advanced training for those who want to go deeper
Documentation Requirements
The RUAIH guidance requires documentation: “Healthcare organizations should, at a minimum, define and document how users of the AI system will be given relevant AI tool and system documentation and role-specific training.”
Document the following (a minimal tracking sketch follows these lists):
For the organization:
- AI literacy training curriculum and completion tracking
- Which roles require which tool-specific training
- Training materials for each AI tool
- Competency verification approach
For each AI tool:
- Training requirements by role
- Training materials (or links to vendor resources)
- Update training requirements following changes
- Completion records
For each staff member:
- Completed AI training (general and tool-specific)
- Competency verification date
- Refresher training due dates
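If a learning management system doesn’t already track this, even a simple structured record per completion keeps the per-staff documentation queryable. A minimal sketch, again in Python, with illustrative names (TrainingRecord, overdue) that are assumptions rather than any particular product’s data model:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class TrainingRecord:
    """One completed training: one staff member, one AI tool (or the general literacy module)."""
    staff_id: str
    tool_name: str                              # or "AI literacy (general)"
    completed_on: date
    verified_on: date | None = None             # competency verification date, if applicable
    refresher_interval_months: int | None = None

    def refresher_due(self) -> date | None:
        """Approximate date the next refresher is due; None if no refresher is required."""
        if self.refresher_interval_months is None:
            return None
        return self.completed_on + timedelta(days=30 * self.refresher_interval_months)


def overdue(records: list[TrainingRecord], as_of: date) -> list[TrainingRecord]:
    """Records whose refresher date has passed, for follow-up by the tool's training owner."""
    return [r for r in records
            if (due := r.refresher_due()) is not None and due < as_of]
```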
This documentation serves both internal management and potential certification or accreditation review.
When to Retrain
Training isn’t one-and-done. Circumstances requiring retraining (a rough trigger-to-response sketch follows this list):
Model updates: When vendors update AI models, assess whether the changes warrant user notification or retraining. Significant behavioral changes call for at least an update communication; major changes may require formal retraining.
Workflow changes: If how the AI integrates into clinical workflow changes, affected users need updated training.
Performance issues: If monitoring identifies problems (see Post 5), training may be part of the response-especially if issues relate to how users interact with the tool.
New use cases: Extending an AI tool to new clinical areas or user groups requires training for those new users.
Periodic refreshers: For high-risk AI tools, consider annual refresher training to reinforce key concepts and update for any changes.
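These triggers can also be written down as an explicit mapping so responses are consistent rather than ad hoc. A rough sketch, with illustrative names and a simplified trigger-to-response mapping that a real governance committee would refine:

```python
from enum import Enum


class RetrainTrigger(Enum):
    MODEL_UPDATE = "vendor model update"
    WORKFLOW_CHANGE = "workflow integration change"
    PERFORMANCE_ISSUE = "monitoring-identified problem"
    NEW_USE_CASE = "new clinical area or user group"
    PERIODIC_REFRESHER = "scheduled refresher (high-risk tools)"


def retraining_action(trigger: RetrainTrigger, major_change: bool = False) -> str:
    """Simplified trigger-to-response mapping; actual decisions rest with governance."""
    if trigger is RetrainTrigger.MODEL_UPDATE:
        return "formal retraining" if major_change else "update communication to users"
    if trigger is RetrainTrigger.PERFORMANCE_ISSUE:
        return "targeted retraining if the issue relates to how users interact with the tool"
    if trigger in (RetrainTrigger.WORKFLOW_CHANGE, RetrainTrigger.NEW_USE_CASE):
        return "updated or initial training for affected users"
    return "annual refresher covering key concepts and recent changes"
```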
Keeping Humans in the Loop
Throughout the RUAIH guidance, a theme recurs: human oversight. The guidance explicitly emphasizes “keeping humans in the decision-making process.”
Training must reinforce this principle:
- AI provides input; humans make decisions
- Override AI when clinical judgment indicates
- Always review AI-generated content before it becomes part of the record
- Report AI errors; they help everyone learn
The goal isn’t training staff to follow AI recommendations. It’s training them to be informed partners who leverage AI appropriately while maintaining clinical responsibility.
Building Training Infrastructure
If you’re starting from zero:
Month 1:
- Compile vendor training resources for current AI tools
- Draft AI literacy curriculum outline
- Identify training owners for each AI tool
Month 2:
- Develop or adapt AI literacy module
- Document training requirements by role and tool
- Create training documentation templates
Month 3:
- Pilot AI literacy training with select departments
- Validate tool-specific training completeness
- Establish tracking mechanism for training completion
Ongoing:
- Roll out AI literacy training organization-wide
- Monitor training completion rates
- Update training for new tools and changes
- Collect feedback and improve based on experience
Series Conclusion
Over eight posts, we’ve walked through the RUAIH framework’s seven elements: governance structures, patient transparency, data use protections, quality monitoring, safety event reporting, bias assessment, and education.
The common thread is intentionality. AI in healthcare isn’t something that should happen to your organization; it’s something you should consciously manage.
The Joint Commission’s voluntary certification program, expected in 2026, will formalize expectations. But the real value of RUAIH alignment isn’t a certificate. It’s building the organizational capacity to deploy AI safely, effectively, and equitably.
Regional health systems face real constraints: limited budgets, stretched IT teams, and competing priorities. The guidance acknowledges this. Perfect isn’t the standard; “organizationally appropriate” is.
Start where you are. Build incrementally. Document what you do. And keep your focus on the ultimate goal: AI that serves your patients, not just your operations.
This concludes the “Preparing for Joint Commission AI Certification” series.
Harness.health helps regional health systems build AI governance programs aligned with Joint Commission RUAIH guidance. Our platform streamlines AI registry, quality monitoring, safety event capture, and compliance reporting. Learn more