OCR & AI Enforcement
What OCR is saying about AI, enforcement trends, and how to prepare for AI-related audits
What OCR Has Said About AI & HIPAA
"Covered entities and business associates are responsible for HIPAA compliance when using AI and other technologies involving protected health information." — OCR, 2024 AI Guidance
"The use of new technologies, including AI, does not lessen a covered entity's compliance obligations under HIPAA." — OCR Director Melanie Fontes Rainer
Translation: "We don't care that AI is new or that you're 'still figuring it out.' HIPAA applies to AI just like everything else. Get compliant or face penalties."
OCR AI Enforcement Trends
What we're seeing in investigations and settlements
OCR is actively investigating AI-related complaints
Multiple breach notifications involve 'use of third-party AI services.' AI is no longer under the radar — OCR is looking for it specifically in audits and investigations.
Why it matters: In 2023-2024, 15+ breach reports mentioned AI or machine learning tools, up from 2 in 2021-2022.
OCR settlements increasingly cite missing BAAs with technology vendors
Using ChatGPT without a BAA = same violation as using an email provider without a BAA.
Why it matters: $4.75M settlement (2024) included violation for 'failure to obtain BAAs with all vendors processing PHI'
OCR expects demonstration of what PHI was accessed/disclosed
Shadow AI = no logs = can't demonstrate compliance = violation even if no breach occurred.
Why it matters: Multiple corrective action plans require 'comprehensive logging of all PHI access, including automated systems'
OCR distinguishes 'didn't know' from 'should have known'
Higher penalties for organizations that knew staff were using AI but took no action. Ignoring shadow AI = willful neglect.
Why it matters: Willful neglect penalties: $50,000+ per violation, up to $1.5M per year for repeat violations
Organizations must identify and assess risks from new technologies including AI
Not conducting shadow AI discovery = failure to perform required risk assessment.
Why it matters: Risk assessment gaps are most common finding in OCR audits (present in 70%+ of audits)
AI tools training on patient data viewed as patient rights violation
Using consumer AI tools (e.g., free-tier ChatGPT) that may train on inputs = unauthorized disclosure.
Why it matters: OCR guidance emphasizes patient right to know how PHI is used, including for AI/ML purposes
What OCR Will Ask About AI in Audits
Questions to expect and how to answer them
What AI tools does your organization use that may access PHI?
Wrong: "We don't allow AI tools / We're not sure what staff are using." Right: "Here's our AI inventory from our shadow AI discovery. All shadow AI has been eliminated and replaced with our governed platform."
Why it matters: Conduct shadow AI discovery (surveys, network monitoring, credit card review). Document findings. Deploy governed platform.
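One of the discovery methods above, network monitoring, can be partly automated. The sketch below scans a proxy log export for hits to known consumer AI endpoints; the domain list, column names, and CSV format are illustrative assumptions, not a complete inventory.

```python
import csv
from collections import Counter

# Illustrative list of consumer AI endpoints; a real inventory
# would be broader and maintained over time.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def find_shadow_ai(proxy_log_path):
    """Count hits to known AI services per user in a proxy log.

    Assumes a CSV export with 'user' and 'domain' columns
    (hypothetical column names; adjust to your proxy's format).
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row["domain"])
            if tool:
                hits[(row["user"], tool)] += 1
    return hits
```

The resulting (user, tool) counts become the documented findings OCR expects to see from a discovery effort.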
Do you have Business Associate Agreements with all AI vendors?
Wrong: "We're working on getting those / Some staff use personal accounts." Right: "Yes. Our governed AI platform provider holds BAAs with OpenAI, Anthropic, and Google on our behalf. Here's documentation."
Why it matters: Choose platform provider that executes BAAs with all model providers. Maintain copies of all BAAs.
How do you ensure PHI isn't improperly shared with AI systems?
Wrong: "We trained staff not to share PHI / We have a policy against it." Right: "Automatic PHI detection scans every AI interaction before data reaches models. We can demonstrate effectiveness with testing results."
Why it matters: Deploy PHI protection technology. Validate with red team testing. Document detection accuracy.
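To make "automatic PHI detection" concrete, here is a minimal pre-send gate. The patterns below are a tiny illustrative set; production PHI detection typically combines NER models with curated pattern libraries and must be validated for recall, as the text above notes.

```python
import re

# Minimal pattern set for illustration only; real systems detect
# many more of the 18 HIPAA identifiers with far better recall.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "date_of_birth": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b",
                                re.IGNORECASE),
}

def scan_for_phi(prompt: str) -> list[str]:
    """Return the PHI categories detected in a prompt.

    Intended as a pre-send gate: block or redact the prompt
    before it reaches any model if the list is non-empty.
    """
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]
```

Red-team testing then means feeding known-PHI prompts through this gate and documenting the detection rate.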
Can you produce audit logs of AI interactions involving PHI?
Wrong: "We don't have logs / Only for some systems." Right: "Yes. Here are complete logs for all AI usage: user, timestamp, model, PHI detection results, for the requested period."
Why it matters: Implement comprehensive audit logging. Test log retrieval and reporting capabilities.
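The log fields listed in the "Right" answer (user, timestamp, model, PHI detection results) can be captured as one structured record per interaction. This is a sketch with hypothetical field and model names; note it stores a hash of the prompt rather than the prompt itself, so the log is not a second PHI repository.

```python
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    """One JSON line per AI interaction: who, when, which model,
    and the PHI-detection outcome. Field names are illustrative."""
    user: str
    model: str
    phi_detected: bool
    phi_categories: list
    prompt_sha256: str  # hash of the prompt, not the prompt text
    timestamp: str = ""

    def to_json(self) -> str:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

def log_interaction(user, model, prompt, phi_categories, log_file):
    rec = AIAuditRecord(
        user=user,
        model=model,
        phi_detected=bool(phi_categories),
        phi_categories=phi_categories,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
    )
    log_file.write(rec.to_json() + "\n")
    return rec
```

JSON-lines logs like this are easy to filter by user, model, or date range when OCR requests records for a specific period.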
How do you control who has access to AI tools?
Wrong: "Anyone can use ChatGPT / We blocked everything." Right: "Role-based access controls through our governed platform. SSO integration. Automated de-provisioning when employees leave."
Why it matters: Deploy governed platform with RBAC. Integrate with identity provider (Active Directory, Okta, etc.).
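Role-based access control for AI capabilities reduces to a deny-by-default lookup. The roles and capability names below are hypothetical; in practice the role would come from your identity provider (Okta or Active Directory groups) at SSO login.

```python
# Hypothetical role-to-capability map; real deployments pull roles
# from the identity provider at login and de-provision on exit.
ROLE_PERMISSIONS = {
    "clinician": {"clinical_summary", "general_chat"},
    "billing": {"general_chat"},
    "admin": {"clinical_summary", "general_chat", "audit_export"},
}

def is_allowed(role: str, capability: str) -> bool:
    """Deny by default: unknown roles and capabilities get nothing."""
    return capability in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default design matters for audits: a departed employee whose role is removed loses all AI access automatically, with no special-case code.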
Have you conducted a risk assessment for AI usage?
Wrong: "AI wasn't part of our last risk assessment / It's on our to-do list." Right: "Yes. Our annual risk assessment includes AI technologies. Here's our AI risk analysis and mitigation plan."
Why it matters: Update HIPAA risk assessment to include AI. Document risks identified and controls implemented.
What training have you provided to staff about AI and HIPAA?
Wrong: "We mentioned it in our general HIPAA training." Right: "AI-specific HIPAA training is required for all users. Here are our training materials and completion records."
Why it matters: Develop AI + HIPAA training module. Track completion. Update annually.
How do you validate that AI outputs don't contain PHI?
Wrong: "We trust the AI provider's controls." Right: "We scan AI outputs for PHI before staff use them. Plus, staff review all outputs before finalizing (documented in workflows)."
Why it matters: Implement output scanning. Document review workflows. Train staff on output validation.
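Output scanning can reuse the same PHI detector applied on the input side, wrapped in a release gate. This is a sketch: `phi_scanner` is any callable returning a list of detected PHI categories, and flagged outputs are held for the documented human-review workflow rather than released.

```python
def gate_output(model_output: str, phi_scanner) -> dict:
    """Run a PHI scan on model output before staff see it.

    `phi_scanner` is assumed to be a callable returning a list of
    detected PHI categories (e.g. the prompt-side scanner reused
    on outputs). Flagged text is withheld pending human review.
    """
    findings = phi_scanner(model_output)
    return {
        "released": not findings,
        "findings": findings,
        "text": model_output if not findings else None,
    }
```

Keeping the gate's decision (`released`, `findings`) in the audit log ties output validation to the logging control above.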
HIPAA Penalty Structure for AI Violations
Penalties scale with culpability (statutory HITECH figures; OCR adjusts the amounts annually for inflation):
- Tier 1 - Did not know and could not reasonably have known: $100-$50,000 per violation
- Tier 2 - Reasonable cause, not willful neglect: $1,000-$50,000 per violation
- Tier 3 - Willful neglect, corrected within 30 days: $10,000-$50,000 per violation
- Tier 4 - Willful neglect, not corrected: $50,000+ per violation
- Annual cap: $1.5 million per violation category
OCR AI Audit Readiness Checklist
- Shadow AI discovery completed and documented
- All shadow AI tools eliminated or governed
- BAAs executed with all AI model providers
- Automatic PHI protection validated and tested
- Complete audit logs for all AI interactions
- Role-based access controls implemented
- AI-specific policies documented and approved
- Staff training on AI + HIPAA completed
- Risk assessment updated to include AI
- Incident response procedures include AI scenarios
- Can produce documentation for all controls within 48 hours
- Quarterly governance reviews conducted and documented
Get OCR Audit-Ready
Book a Shadow AI Risk Check and we'll assess your OCR audit readiness, identify gaps, and create a compliance roadmap before you get investigated.