Part 1 of the “Preparing for Joint Commission AI Certification” series


In September 2025, the Joint Commission and Coalition for Health AI (CHAI) released their first joint guidance on the Responsible Use of AI in Healthcare (RUAIH). For regional health systems watching AI reshape clinical workflows—from ambient documentation to clinical decision support—this framework offers both a roadmap and a preview of what accreditation expectations may look like in the years ahead.

Why This Guidance Matters Now

The timing isn’t coincidental. By 2024, nearly half of U.S. healthcare organizations had begun implementing generative AI, yet practices varied wildly. Some health systems had mature governance programs; others deployed AI tools with little more than a vendor demo and a hopeful IT team.

The Joint Commission recognized this gap. As Dr. Jonathan Perlin, the Joint Commission’s president and CEO, noted at the guidance launch: “We understand how quickly AI is changing healthcare—and at a scale I’ve never seen in my time as a leader.”

The RUAIH framework is the first product of the Joint Commission-CHAI partnership, with governance playbooks expected in late 2025 and a voluntary AI certification program launching in 2026. That certification will be available to over 22,000 Joint Commission-accredited organizations nationwide.

The Seven Elements of Responsible AI Use

The guidance identifies seven core elements that health systems should address:

  1. AI Policies and Governance Structures — Establish formal oversight with board-level visibility
  2. Patient Privacy and Transparency — Protect data and inform patients about AI use in their care
  3. Data Security and Data Use Protections — Go beyond HIPAA basics with AI-specific safeguards
  4. Ongoing Quality Monitoring — Continuously evaluate AI tool performance in your environment
  5. Voluntary, Blinded Reporting of AI Safety-Related Events — Share learnings across the industry
  6. Risk and Bias Assessment — Identify and mitigate algorithmic bias before and after deployment
  7. Education and Training — Ensure staff understand both the benefits and limitations of AI tools

Each element will be covered in depth in subsequent posts in this series.

What This Means for Regional Health Systems

Here’s the good news: this guidance was written with the understanding that a 500-bed academic medical center and a 50-bed community hospital face very different realities.

CHAI CEO Dr. Brian Anderson emphasized this point directly: “The real challenge I put to our team is to build these playbooks in such a way that they meet individual health systems where they’re at. We do not want to create playbooks that are only going to be supporting major big health systems.”

The upcoming playbooks will be tailored to different organization sizes, and the certification program will account for available resources. But regional systems shouldn’t wait for perfect guidance to start building governance foundations.

Voluntary Today, Expected Tomorrow

The RUAIH guidance is explicitly non-binding—for now. But the trajectory is clear. The Joint Commission has signaled that voluntary certification is a stepping stone toward future accreditation expectations, and the framework draws directly from authoritative sources including:

  • NIST AI Risk Management Framework
  • National Academy of Medicine’s AI Code of Conduct
  • ONC’s HTI-1 requirements for predictive decision support

Health systems that proactively adopt these practices gain several advantages: reduced risk of AI-related patient safety events, a stronger position in vendor negotiations, and a head start on whatever compliance requirements emerge.

As one legal analysis noted, the guidance “will likely inform future accreditation and certification pathways related to AI use.” Translation: what’s voluntary in 2025 may look different by 2028.

Where to Start

If your health system hasn’t begun formal AI governance, don’t panic—but don’t delay either. Focus on three immediate priorities:

First, inventory your AI tools. You can’t govern what you don’t know exists. Many health systems discover AI embedded in EHR modules, imaging systems, and administrative tools they hadn’t tracked.
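
To make that concrete, here is a minimal sketch of what a single inventory entry might capture, assuming you track tools in a simple structured format. The field names are illustrative, not something the RUAIH guidance prescribes.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory (illustrative fields only)."""
    name: str                             # e.g. "Ambient documentation assistant"
    vendor: str                           # supplier, or "internal" for homegrown tools
    embedded_in: str                      # host system: EHR module, PACS, scheduling, etc.
    intended_use: str                     # what the tool is meant to do, in plain language
    influences_clinical_decisions: bool   # drives how much monitoring it gets later
    clinical_owner: str                   # accountable clinician or department
    go_live: Optional[date] = None        # None if still in evaluation
    notes: list[str] = field(default_factory=list)

# Example entries surfaced during an inventory sweep
inventory = [
    AIToolRecord(
        name="Sepsis risk score",
        vendor="EHR vendor",
        embedded_in="EHR inpatient module",
        intended_use="Flags patients at elevated risk of sepsis",
        influences_clinical_decisions=True,
        clinical_owner="Chief Medical Informatics Officer",
        go_live=date(2023, 6, 1),
    ),
    AIToolRecord(
        name="Ambient visit documentation",
        vendor="Third-party scribe vendor",
        embedded_in="Ambulatory clinic workstations",
        intended_use="Drafts visit notes from recorded encounters",
        influences_clinical_decisions=False,
        clinical_owner="Ambulatory operations",
    ),
]
```

Even a spreadsheet with these columns is enough to start. The point is a single place where every tool, including AI quietly embedded in existing systems, is visible to whoever owns governance.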

Second, identify your governance home. The guidance doesn’t require a standalone AI committee. Many systems effectively integrate AI oversight into existing quality, compliance, or clinical informatics structures.

Third, assess your highest-risk tools. Not all AI requires equal scrutiny. Tools that directly influence clinical decisions warrant more rigorous monitoring than those handling scheduling or documentation.
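
One way to operationalize that triage, again as an illustrative sketch rather than anything the framework mandates, is a simple tiering rule that maps each inventoried tool to a review intensity:

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"          # directly drives diagnosis or treatment decisions
    MODERATE = "moderate"  # touches clinical workflow, but a clinician reviews the output
    LOW = "low"            # administrative or operational use only

def assign_risk_tier(influences_clinical_decisions: bool,
                     clinician_reviews_output: bool) -> RiskTier:
    """Illustrative tiering rule; adapt the criteria to your own governance policy."""
    if influences_clinical_decisions and not clinician_reviews_output:
        return RiskTier.HIGH
    if influences_clinical_decisions:
        return RiskTier.MODERATE
    return RiskTier.LOW

# A high tier might mean quarterly performance and bias review;
# a low tier might mean an annual check-in with the business owner.
print(assign_risk_tier(True, False))   # RiskTier.HIGH
print(assign_risk_tier(False, True))   # RiskTier.LOW
```

The specific criteria matter less than writing them down, so that every new tool gets sorted the same way.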

What’s Next in This Series

Over the next seven posts, we’ll dive deep into each RUAIH element with practical guidance tailored for regional health systems:

  • Post 2: Building Your AI Governance Team Without Hiring a Small Army
  • Post 3: AI Transparency Requirements: What You Actually Need to Tell Patients
  • Post 4: Data Use Agreements for Healthcare AI: What Your Legal Team Needs to Negotiate
  • Post 5: Monitoring AI Performance: A Risk-Based Approach for Resource-Constrained Health Systems
  • Post 6: AI Safety Events: Building a Reporting Culture Before Something Goes Wrong
  • Post 7: Evaluating AI Bias in Your Patient Population: A Practical Framework
  • Post 8: AI Training for Clinical Staff: Beyond the Vendor Demo

The goal isn’t perfect compliance with a framework that’s still evolving. It’s building the organizational muscle to deploy AI safely—and being ready when certification expectations crystallize.


Harness.health helps regional health systems build AI governance programs aligned with Joint Commission RUAIH guidance. [Learn more about our platform →]