Part 3 of the “Preparing for Joint Commission AI Certification” series
The RUAIH guidance is unambiguous: patients should know when AI is involved in their care. But what does that mean in practice? Do you need consent forms for every algorithm? Signs in every exam room? The answer is more nuanced than the headlines suggest.
What the Guidance Actually Says
The RUAIH framework addresses patient transparency in two key passages:
“When appropriate, patients should be notified when AI directly impacts their care and how their data may be used in the context of AI.”
“Where and when relevant, consent should be obtained.”
Notice the qualifiers: “when appropriate,” “directly impacts,” “where and when relevant.” The guidance recognizes that transparency requirements should scale with how significantly AI affects patient care and data.
The framework also calls for organizations to “develop a mechanism to disclose and educate patients and their families on the use and benefit of these tools.” This is about building understanding, not just checking disclosure boxes.
A Risk-Based Approach to Patient Notification
Not all AI use warrants the same level of patient communication. Consider three tiers:
Tier 1: General Awareness (Low-Risk AI)
AI tools that support operations without directly influencing clinical decisions typically require only general awareness. Examples include:
- Scheduling optimization algorithms
- Revenue cycle and coding assistance
- Supply chain predictions
- Administrative workflow automation
For these tools, a general statement in your Notice of Privacy Practices or patient welcome materials may suffice: “We use technology including artificial intelligence to support our operations and improve the care we provide.”
Tier 2: Proactive Disclosure (Clinical-Adjacent AI)
AI tools that inform clinical workflows but don’t autonomously make decisions warrant proactive disclosure without necessarily requiring individual consent. Examples include:
- Ambient clinical documentation (AI scribes)
- Clinical documentation improvement suggestions
- Risk stratification for care management outreach
- Appointment no-show predictions affecting scheduling
For these tools, consider:
- Clear signage in relevant clinical areas
- Mention during intake or registration
- Information in patient portal or educational materials
- Staff training to answer patient questions
Tier 3: Explicit Notification and Consent (High-Risk AI)
AI tools that directly influence diagnostic or treatment decisions warrant explicit notification and, in many cases, documented consent. Examples include:
- Diagnostic imaging AI that flags findings for radiologist review
- Sepsis or deterioration prediction alerts that trigger clinical intervention
- Treatment recommendation algorithms
- AI-assisted pathology or genomic interpretation
For these tools, notification should be specific to the encounter, and patients should have the opportunity to ask questions or decline AI involvement where clinically appropriate.
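To make the tiering concrete, here is a minimal sketch of how a governance team might encode this classification in an internal tool inventory. Everything in it (the NotificationTier enum, the AITool fields, the classify function) is an illustrative assumption, not terminology from the RUAIH guidance:

```python
from dataclasses import dataclass
from enum import Enum


class NotificationTier(Enum):
    GENERAL_AWARENESS = 1      # Tier 1: general statement in privacy notices
    PROACTIVE_DISCLOSURE = 2   # Tier 2: signage, intake mention, portal info
    EXPLICIT_NOTIFICATION = 3  # Tier 3: encounter-specific notice; documented consent where required


@dataclass
class AITool:
    name: str
    informs_clinical_workflow: bool          # e.g., risk stratification, AI scribes
    influences_diagnosis_or_treatment: bool  # e.g., imaging AI, sepsis alerts


def classify(tool: AITool) -> NotificationTier:
    """Map a tool's clinical footprint to a notification tier.

    Mirrors the risk-based logic above: the more directly a tool
    touches diagnostic or treatment decisions, the stronger the
    patient-notification obligation.
    """
    if tool.influences_diagnosis_or_treatment:
        return NotificationTier.EXPLICIT_NOTIFICATION
    if tool.informs_clinical_workflow:
        return NotificationTier.PROACTIVE_DISCLOSURE
    return NotificationTier.GENERAL_AWARENESS


# Illustrative examples drawn from the lists above
print(classify(AITool("scheduling-optimizer", False, False)))  # Tier 1
print(classify(AITool("ambient-scribe", True, False)))         # Tier 2
print(classify(AITool("sepsis-predictor", True, True)))        # Tier 3
```

A simple rule like this won't capture every edge case (an AI scribe processes sensitive conversation data without influencing decisions, which is part of why it lands in Tier 2), but it forces every new tool through the same risk questions before deployment.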
The AI Scribe Question
Ambient clinical documentation (AI that listens to patient encounters and generates notes) sits at an interesting intersection. It doesn't make clinical decisions, but it's deeply embedded in the patient encounter and involves processing sensitive conversation data.
Most health systems implementing AI scribes are landing on Tier 2 disclosure: proactive notification without requiring individual consent for each encounter. Common approaches include:
- Signage in exam rooms: “This practice uses AI-assisted documentation technology to help your provider focus on your care rather than typing during your visit.”
- Verbal mention by staff: “Dr. Smith uses an AI assistant to help with note-taking. Do you have any questions about that?”
- Information in patient welcome packets or portal
Some systems offer patients the ability to opt out, though this adds workflow complexity: if a patient declines, the clinical note must still be completed by other means.
The key is ensuring patients aren't surprised, and that they understand the AI is a documentation tool, not a diagnostic one.
Addressing Patient Concerns
Surveys consistently show that most patients want to know when AI is used in their care, but they don't necessarily object to it. What matters is feeling informed and confident that human clinicians remain in control.
Common patient concerns and how to address them:
“Is a computer making decisions about my health?” Emphasize that AI assists clinicians; it doesn’t replace them. Clinical judgment remains with your care team.
“Is my data being sold or shared?” Be prepared to explain your data use agreements with AI vendors. Patients should know their information is protected and used only for their care.
“What if the AI makes a mistake?” Explain that AI recommendations are reviewed by clinicians before any action is taken. Human oversight is built into the process.
“Can I opt out?” Have a clear answer. For some AI tools, opting out is straightforward. For others (like embedded EHR algorithms), it may not be practically possible without opting out of care entirely.
Staff Transparency: The Often-Overlooked Element
The RUAIH guidance also addresses transparency for hospital staff: “Hospital AI policies should address transparency for both hospital staff and patients, including how AI tools are used.”
Clinicians need to know:
- Which tools in their workflow incorporate AI
- What the AI is designed to do (and not do)
- How to interpret AI outputs appropriately
- Where to find more information or report concerns
A common failure mode is deploying AI tools without adequate staff awareness. Clinicians may not realize an EHR alert is AI-generated, or they may over-trust (or under-trust) AI recommendations without understanding their limitations.
Staff transparency is a prerequisite for effective patient transparency. A nurse who doesn’t know an AI scribe is running can’t explain it to a patient who asks.
Template Language for Patient Communications
General Privacy Notice Addition: “[Organization Name] uses artificial intelligence and other advanced technologies to support clinical care, improve safety, and enhance operational efficiency. These tools assist our care teams but do not replace clinical judgment. Your care decisions are always made by qualified healthcare professionals. We protect your information in accordance with applicable privacy laws, and information processed by AI tools is subject to the same safeguards as all your health information.”
Exam Room Signage (AI Documentation): “To help your care team focus on you during your visit, we use AI-assisted technology to support clinical documentation. Your provider reviews and approves all notes. If you have questions, please ask.”
High-Risk AI Specific Disclosure: “As part of your care today, we may use an AI tool called [Tool Name] to assist with [specific purpose, e.g., analyzing your imaging results]. This tool helps our clinicians by [brief explanation]. Your physician will review the AI analysis along with other clinical information to make decisions about your care. If you have questions or concerns about this technology, please let us know.”
Building a Transparency Policy
Your AI transparency policy should address:
- Tiered notification requirements based on AI tool risk classification
- Consent requirements for high-risk AI tools
- Disclosure mechanisms (signage, verbal notification, written materials)
- Staff training on AI transparency obligations
- Patient questions and opt-out procedures
- Documentation requirements for consent/notification where required
The policy doesn’t need to be lengthy. One to two pages covering these elements, with appendices listing specific tools and their notification tier, provides a practical foundation.
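For the appendix itself, a machine-readable registry can keep the policy and the tool list in sync. Below is a minimal sketch, assuming a hypothetical registry maintained alongside the policy; the tool names and fields are illustrative, not a prescribed format:

```python
# Hypothetical policy appendix: each AI tool with its notification tier
# and required disclosure mechanisms. Names and fields are illustrative.
TOOL_REGISTRY = [
    {
        "tool": "Scheduling optimization engine",
        "tier": 1,
        "disclosure": ["Notice of Privacy Practices statement"],
    },
    {
        "tool": "Ambient clinical documentation (AI scribe)",
        "tier": 2,
        "disclosure": ["Exam room signage", "Intake mention", "Patient portal FAQ"],
        "opt_out": "Provider documents manually on request",
    },
    {
        "tool": "Diagnostic imaging triage AI",
        "tier": 3,
        "disclosure": ["Encounter-specific notification"],
        "consent": "Documented in the EHR before use",
    },
]
```

Keeping a registry like this in version control also gives you a documented history of when each tool's tier and disclosure mechanisms changed, which is useful evidence during a survey.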
What Not to Do
Don’t bury AI disclosure in unreadable consent forms. If patients need to parse legal language to understand AI is involved in their care, you’ve failed at transparency.
Don’t over-notify to the point of fatigue. If every interaction includes warnings about AI, patients will tune out, and they won’t pay attention when disclosure really matters.
Don’t promise what you can’t deliver. If a patient can’t realistically opt out of a particular AI tool, don’t imply they can.
Don’t wait for perfect guidance. Transparency practices can evolve. Starting with reasonable, good-faith disclosure now is better than waiting for definitive rules.
Next in the series: Data Use Agreements for Healthcare AI: What Your Legal Team Needs to Negotiate
Harness.health helps regional health systems build AI governance programs aligned with Joint Commission RUAIH guidance. Learn more about our platform.