Part 2 of the “Preparing for Joint Commission AI Certification” series
The RUAIH guidance is clear: health systems need a formal AI governance structure. But for regional systems already stretched thin, the prospect of standing up yet another committee can feel overwhelming.
The good news? The Joint Commission explicitly states that AI governance “does not need to be its own standalone team.” The key is building accountability and oversight into structures you likely already have.
What the Guidance Actually Requires
The RUAIH framework calls for governance that addresses:
- Risk-based oversight of AI tools across clinical, operational, and administrative functions
- A designated individual with appropriate technology expertise to lead AI implementation
- Policies covering selection, implementation, risk management, and lifecycle oversight
- Regular updates to the organization’s fiduciary board on AI use and outcomes
Notice what’s not required: a dedicated Chief AI Officer, a 15-person committee, or weekly meetings. The framework emphasizes being “organizationally appropriate,” which means your governance model should fit your size, resources, and existing structures.
Three Models That Work for Regional Systems
Based on how health systems are approaching RUAIH alignment, three practical models emerge:
Model 1: Expand an Existing Committee
Many regional systems successfully add AI oversight to an existing body, typically a clinical informatics, quality/patient safety, or compliance committee. This works well when:
- You have an active committee with relevant expertise already meeting regularly
- AI deployment is still early-stage (fewer than 10 tools in production)
- You can carve out dedicated agenda time for AI topics
The advantage is speed to implementation. The risk is that AI becomes a perpetual “other business” item that never gets adequate attention.
Model 2: Create a Focused Subcommittee
A dedicated AI subcommittee that reports to an existing governance body offers more focus without full standalone overhead. Structure might include:
- Monthly meetings focused exclusively on AI
- 5-8 members representing key stakeholder groups
- Charter defining scope, authority, and escalation paths
- Formal reporting line to the board (often through quality or compliance)
This model balances attention with efficiency. It works particularly well for systems actively scaling AI deployment.
Model 3: Hub-and-Spoke Integration
Larger regional systems or those with multiple facilities sometimes distribute AI governance across functional areas (clinical AI to medical staff, operational AI to IT, administrative AI to finance) with a coordinating council that ensures consistency.
This can work, but it requires clear standards and strong coordination. The risk is fragmentation: different parts of the organization applying different rigor to similar risks.
Who Needs to Be at the Table
The RUAIH guidance suggests governance should include expertise in:
- Executive leadership (decision authority and resource allocation)
- Regulatory/ethical compliance
- Information technology
- Safety/incident reporting
- Clinical/operational domains relevant to AI use
- Cybersecurity and data privacy
- Stakeholders reflecting impacted populations (staff, providers, patients)
For a regional system, this might translate to a practical roster of 6-8 people:
- CMO, CMIO, or CNO - Clinical authority and physician engagement
- CIO or IT Director - Technical implementation and security
- Compliance Officer - Regulatory alignment and risk management
- Quality/Patient Safety Lead - Incident integration and monitoring
- Privacy Officer - HIPAA and data use oversight
- Nursing or Clinical Informatics Representative - Frontline workflow perspective
- Finance Representative - Budget and vendor contract input (as needed)
You don’t need all roles at every meeting. Core membership of 4-5 with others joining for relevant topics keeps meetings manageable.
Keeping the Board Informed Without Overwhelming Them
The guidance requires that “the fiduciary board of the healthcare organization should be regularly updated on AI use and its outcomes.”
This doesn’t mean monthly deep-dives into algorithm performance. Effective board reporting on AI typically includes:
Quarterly updates covering:
- Inventory of AI tools in production (new additions, retirements)
- Summary of monitoring results (tools performing as expected vs. flagged for review)
- Any safety events or near-misses involving AI
- Upcoming AI initiatives and associated risks
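If your governance committee keeps even a simple structured inventory, the quarterly board summary above can be assembled mechanically rather than by hand. The sketch below assumes hypothetical field names (`status`, `added_this_quarter`) that your own inventory would need to define:

```python
def quarterly_summary(tools: list[dict]) -> dict:
    """Roll an AI tool inventory up into board-packet numbers.

    Assumed (hypothetical) fields on each tool record:
      - "name": display name of the tool
      - "status": "performing" or "flagged" from monitoring results
      - "added_this_quarter": True if newly deployed this quarter
    """
    return {
        "total_in_production": len(tools),
        "new_this_quarter": sum(1 for t in tools if t["added_this_quarter"]),
        "flagged_for_review": [t["name"] for t in tools if t["status"] == "flagged"],
    }
```

The point is not the code itself but the discipline it implies: if the inventory is kept current, the quarterly update is a report, not a research project.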
Annual strategic review covering:
- AI governance program maturity assessment
- Alignment with evolving regulatory expectations
- Resource needs for the coming year
Frame AI as you would any other patient safety or quality topic. Boards don’t need to understand how neural networks function; they need confidence that management has appropriate oversight in place.
Sample Governance Charter Language
A charter doesn’t need to be elaborate. Key elements include:
Purpose: “The AI Governance Committee provides oversight of artificial intelligence tools used in clinical, operational, and administrative functions to ensure safe, effective, and ethical deployment aligned with organizational values and regulatory expectations.”
Scope: “All AI and machine learning tools deployed within [Organization Name], including vendor-provided solutions, EHR-embedded algorithms, and internally developed models.”
Authority: “The Committee is authorized to approve, defer, or decline AI tool deployments; establish monitoring requirements; and recommend policy changes to executive leadership. Tools posing significant clinical risk require Committee approval prior to production deployment.”
Reporting: “The Committee reports to [Quality Committee/Board] quarterly on AI program status and immediately for significant safety events.”
Common Mistakes to Avoid
Treating AI governance as purely an IT function. IT implements; governance requires clinical, operational, and compliance voices. The most common failure mode is an IT-dominated committee that misses clinical workflow implications.
Waiting for perfect structure before starting. Begin with what you have. A CMO and CIO meeting monthly to review AI tools is infinitely better than a theoretical 12-person committee that never convenes.
Underestimating the inventory challenge. Many systems discover AI tools they didn’t know existed, embedded in imaging systems, EHR modules, or revenue cycle platforms. Your first governance task is understanding what’s already deployed.
Creating governance that slows everything down. The goal is appropriate oversight, not bureaucratic obstruction. Risk-stratify your review processes so low-risk administrative tools don’t require the same scrutiny as clinical decision support.
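Risk stratification is easier to enforce when the tiers and their review requirements are written down somewhere machine-readable rather than living in individual heads. Here is a minimal sketch of one way to encode it; the tier names and review steps are illustrative assumptions, not RUAIH requirements, and should be replaced with your own policy:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # e.g., administrative tools like scheduling
    MODERATE = "moderate"  # e.g., operational tools touching workflows
    HIGH = "high"          # e.g., clinical decision support


@dataclass
class AITool:
    name: str
    vendor: str
    function_area: str  # "clinical", "operational", or "administrative"
    risk_tier: RiskTier


# Hypothetical oversight steps per tier; tune these to your own charter.
REVIEW_REQUIREMENTS = {
    RiskTier.LOW: ["annual inventory check"],
    RiskTier.MODERATE: ["committee notification", "semiannual monitoring review"],
    RiskTier.HIGH: [
        "committee approval before deployment",
        "quarterly monitoring review",
        "incident reporting integration",
    ],
}


def required_reviews(tool: AITool) -> list[str]:
    """Return the oversight steps this tool must pass, by risk tier."""
    return REVIEW_REQUIREMENTS[tool.risk_tier]
```

A table like this makes the “appropriate oversight, not obstruction” principle concrete: a low-risk scheduling tool gets an annual check, while a clinical decision support tool can’t reach production without committee approval.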
Getting Started This Month
If you’re starting from zero, here’s a 30-day action plan:
Week 1: Identify an executive sponsor (CMO or CIO) and schedule an initial planning conversation. Draft a preliminary AI tool inventory by surveying department heads.
Week 2: Determine your governance model (expand existing committee vs. create subcommittee). Identify 4-6 potential members and gauge availability.
Week 3: Draft a simple charter (one page is fine). Circulate for feedback.
Week 4: Hold your first meeting. Agenda: review inventory, identify highest-risk tools, and establish meeting cadence.
You won’t have everything figured out. That’s fine. The point is establishing the muscle memory of regular AI oversight before the certification program launches and before a safety event forces reactive governance.
Next in the series: AI Transparency Requirements: What You Actually Need to Tell Patients
Harness.health helps regional health systems build AI governance programs aligned with Joint Commission RUAIH guidance. Learn more about our platform