
Major Upcoming Change: HIPAA Security Rule Overhaul

By RJ O'Connor
April 29, 2026

The most substantial update to HIPAA in over a decade is the proposed rewrite of the HIPAA Security Rule for electronic protected health information (ePHI).

  • NPRM issued: released in December 2024 and published in the Federal Register in January 2025.
  • Final rule expected: May 2026 (per OCR’s regulatory agenda).
  • Compliance timeline: Typically 180–240 days after publication in the Federal Register, likely pushing full compliance to late 2026 or early 2027.

This update is driven by the surge in cyberattacks on healthcare organizations and aims to modernize requirements that have remained largely unchanged since 2003 (with the last major tweak in 2013).

Key proposed changes (many of which shift from flexible to mandatory):

  • Elimination of the “required” vs. “addressable” distinction for most implementation specifications → Nearly all safeguards become strictly mandatory, with very limited exceptions.
  • Mandatory technical controls:
    • Encryption of ePHI at rest and in transit.
    • Multi-factor authentication (MFA) for all systems accessing ePHI.
    • Enhanced audit logging, vulnerability scanning/patch management, and system hardening.
  • Stronger administrative requirements:
    • Annual (or more frequent) security risk analyses (SRAs) and ongoing risk management.
    • Comprehensive, continuously updated asset inventories and network maps for all technology assets interacting with ePHI.
    • Written policies and procedures that are regularly reviewed, tested, and updated.
    • Improved incident response, backup, and disaster recovery processes.
  • Tighter Business Associate (BA) obligations, including stronger oversight and faster incident reporting.
  • Explicit focus on emerging technologies, including risks from artificial intelligence, quantum computing, and other advanced tools.

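Of the mandatory technical controls above, MFA is the easiest to make concrete. Below is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), one common second factor, using only the Python standard library; a production deployment should use a vetted authentication library rather than hand-rolled code like this:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = timestamp // step               # 30-second time window
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference vector: at t=59s the SHA-1 test secret yields 287082
print(totp(b"12345678901234567890", 59))  # → 287082
```

In live use the timestamp would be `int(time.time())`; the printed value matches the RFC 6238 reference vector, which is a quick sanity check for any TOTP implementation.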
Status note: The NPRM received thousands of comments. While OCR is still reviewing them, the rule remains on track for a May 2026 finalization target. Industry pushback on costs and prescriptiveness could lead to some scaling back, but the overall direction toward stronger, more enforceable cybersecurity is clear. Organizations should begin gap assessments against the proposed rule immediately, as OCR has already shown increased enforcement focus on risk analysis and basic safeguards.

Other HIPAA Changes (Lower Immediate Impact)

  • 42 CFR Part 2 alignment (Substance Use Disorder records): Covered entities must update their Notice of Privacy Practices (NPP) by February 16, 2026. OCR gains enforcement authority over Part 2 on the same date. This is one of the few concrete deadlines already in place.
  • Reproductive health care privacy rule (finalized 2024): Most substantive provisions (restrictions on using/disclosing PHI for investigations into lawful reproductive care) were vacated nationwide by a federal court in June 2025. Only limited NPP-related updates tied to Part 2 remain relevant.
  • Patient access and other Privacy Rule tweaks: Ongoing discussions around shorter response times for access requests and improved interoperability, but no major new final rules are imminent beyond the NPP updates noted above.

HIPAA remains technology-neutral overall, meaning the existing Privacy, Security, and Breach Notification Rules fully apply to new tools like AI unless and until specific guidance is issued.

Impact on Companies Working with Protected Health Information (PHI) + AI Integration

Companies that are covered entities (providers, health plans, clearinghouses) or business associates (AI vendors, analytics platforms, cloud services, telehealth) will face heightened compliance burdens once the Security Rule finalizes. AI introduces unique risks: ingesting large PHI datasets for training, potential re-identification through model inversion or prompt leakage, biased outputs affecting care, “shadow AI” (unauthorized use of tools like public LLMs), and adversarial attacks.

Key Impacts from the Proposed Security Rule Changes:

  • Risk Analysis & Asset Inventory: AI systems (SaaS, custom models, generative tools) must be explicitly included in your technology asset inventory and undergo dedicated risk assessments covering confidentiality, integrity, availability, plus AI-specific threats (e.g., data leakage in outputs, training data provenance, hallucination risks).
  • Mandatory Controls: Encryption, MFA, and detailed logging will apply directly to any AI pipeline handling PHI. Audit trails must capture prompts, responses, and data flows. Shadow AI use becomes a clearer compliance violation.
  • Business Associate Agreements (BAAs): Required for any AI vendor processing PHI. Contracts should demand transparency on model training data, bias testing, security controls, and flow-down obligations to subcontractors. Update existing BAAs to align with the new mandatory safeguards.
  • Privacy Considerations: Uses of PHI for treatment, payment, or healthcare operations (TPO) generally remain permitted without authorization, but the minimum necessary standard still applies. Training AI models on identifiable PHI often does not qualify as TPO and may require patient authorization or robust de-identification (safe harbor or expert determination). Re-identification risks rise sharply with powerful AI.
  • Breach & Incident Response: AI-related incidents (e.g., unintended PHI exposure via model outputs) trigger standard breach notification rules. Faster expected reporting under the new Security Rule will demand tighter processes.
  • Transparency & Oversight: Expect pressure for patient disclosures about AI use in care (via NPP or state laws) and strong human oversight of AI outputs, especially for clinical decision support.
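The audit-trail point above (capturing prompts, responses, and data flows) can be sketched as an append-only JSONL log. Storing SHA-256 digests instead of raw text keeps the log itself free of PHI while still making every AI interaction attributable and tamper-evident; the field names below are illustrative assumptions, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id: str, model: str, prompt: str, response: str) -> dict:
    """Build one audit-log entry for an AI interaction. Raw prompt/response
    text is replaced by SHA-256 digests, so the log never stores PHI but can
    still prove exactly what was sent and received."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }

def append_audit(path: str, record: dict) -> None:
    """Append the entry as one JSON line (an append-only JSONL audit trail)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A real deployment would also need log integrity controls (write-once storage, log chaining or signing) and retention aligned with HIPAA's six-year documentation requirement.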

Best Practices for PHI-Protecting Companies Using AI:

  1. Start Gap Assessments Immediately: Map all current and planned AI uses against PHI workflows. Treat the proposed Security Rule as your target standard and identify gaps in encryption, MFA, risk analysis, and inventory processes.
  2. Prioritize Data Minimization & De-Identification: Use de-identified or synthetic data for AI training and development whenever possible. Techniques like differential privacy or federated learning can further reduce risks.
  3. Robust Vendor Management:
    • Only work with AI vendors that sign strong BAAs with AI-specific clauses (no unauthorized training on your PHI, zero or minimal data retention, audit rights).
    • Require detailed documentation on model governance, training data provenance, and security measures.
  4. Technical & Governance Safeguards:
    • Enforce end-to-end encryption and MFA across AI pipelines.
    • Implement comprehensive logging/monitoring of AI interactions with PHI.
    • Establish clear internal AI governance policies, including approval processes and prohibitions on using public generative AI tools with PHI.
    • Conduct regular AI-specific risk assessments and maintain human-in-the-loop review for high-stakes uses.
  5. Documentation & Training: Update policies, conduct workforce training on AI risks, and ensure ongoing testing of controls. Smaller organizations may need phased implementation plans focusing first on high-risk AI uses.
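The differential privacy mentioned in item 2 can be made concrete with the Laplace mechanism, the textbook way to release an aggregate statistic (here, a simple patient count) with a formal privacy guarantee. This is a toy sketch with illustrative function names, not a production implementation, which would need careful privacy-budget accounting:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) by inverting the CDF."""
    u = rng.random() - 0.5                    # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, seed: int) -> float:
    """Release a count with epsilon-differential privacy. A counting query
    has sensitivity 1 (one patient changes it by at most 1), so Laplace
    noise of scale 1/epsilon suffices. Seeded here only for reproducibility
    of the sketch; real releases must use unpredictable randomness."""
    rng = random.Random(seed)
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Smaller epsilon means more noise and stronger privacy; the right value is a policy decision, and repeated queries consume the privacy budget cumulatively.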