
New York's RAISE Act Is Now Law: What Healthcare Executives Need to Know

  • Writer: Andreea Bodnari
  • Dec 22, 2025
  • 4 min read

On December 19, 2025, Governor Hochul signed New York's RAISE Act (Responsible AI Safety and Education Act) into law, making New York one of the first states to establish comprehensive regulatory requirements for frontier AI models. The law takes effect 90 days after signing.


For healthcare leaders already navigating a complex patchwork of AI regulations—from ONC HTI-1 to the EU AI Act—this raises an immediate question: Does this affect us?

The short answer: probably not directly. But the implications for your AI governance strategy are significant.


What the RAISE Act Actually Regulates

The RAISE Act targets "large developers" of "frontier models"—a narrow category defined by extremely high thresholds:

  1. Training compute exceeding 10²⁶ computational operations

  2. Compute costs exceeding $100 million for a single model

  3. Aggregate compute spending over $100 million on frontier models


These thresholds describe a handful of companies globally: OpenAI, Google, Anthropic, Meta, Microsoft, and a few others. Even the largest integrated delivery networks with sophisticated in-house data science teams are nowhere near these thresholds.

Important exception: Accredited colleges and universities engaged in academic research are explicitly excluded from the definition of "large developers."
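
For procurement or compliance teams that want to turn this screening into a repeatable step, the thresholds above translate into a simple check. The Python sketch below is purely illustrative: the field names, the treatment of any single threshold as sufficient, and the handling of the academic exemption are working assumptions to confirm with counsel, not official tooling.

```python
from dataclasses import dataclass

# Illustrative figures taken from the RAISE Act thresholds described above.
# This is a screening sketch for internal diligence, not legal advice.
COMPUTE_OPERATIONS_THRESHOLD = 1e26        # training compute, in operations
SINGLE_MODEL_COST_THRESHOLD = 100_000_000  # USD, compute cost for a single model
AGGREGATE_COST_THRESHOLD = 100_000_000     # USD, aggregate spend on frontier models


@dataclass
class DeveloperProfile:
    """Self-reported figures a vendor might share during due diligence."""
    training_operations: float        # operations used to train the largest model
    single_model_compute_cost: float  # USD spent training that model
    aggregate_compute_spend: float    # USD spent across all frontier models
    is_accredited_academic: bool      # accredited colleges/universities are excluded


def may_be_large_developer(profile: DeveloperProfile) -> bool:
    """Rough screen for whether a vendor could fall under the Act's definition.

    Assumption: any one threshold is treated as sufficient here; the statute's
    exact combination of these tests should be confirmed with counsel.
    """
    if profile.is_accredited_academic:
        return False
    return (
        profile.training_operations > COMPUTE_OPERATIONS_THRESHOLD
        or profile.single_model_compute_cost > SINGLE_MODEL_COST_THRESHOLD
        or profile.aggregate_compute_spend > AGGREGATE_COST_THRESHOLD
    )
```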


Why Healthcare Enterprises Should Still Pay Attention

While the RAISE Act won't impose direct compliance burdens on healthcare companies, it fundamentally changes the landscape in three important ways:


1. Vendor Diligence Gets Easier—and More Expected

If you're procuring AI tools from large developers, those vendors must now:

  • Publish their safety and security protocols (with appropriate redactions)

  • Report safety incidents to NY authorities within 72 hours

  • Maintain detailed records of testing procedures

  • Designate senior personnel responsible for compliance

Practical implication: Your legal, compliance, and IT teams should start asking vendors: "Are you subject to the NY RAISE Act? Can we see your published safety protocol?"
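
One lightweight way to make that diligence repeatable is to track each vendor's answers in a structured record rather than in ad hoc email threads. The Python sketch below is a hypothetical internal tracker; none of the field names or questions come from the statute or from any particular procurement system.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RaiseActVendorCheck:
    """Hypothetical internal record for tracking RAISE Act diligence on one AI vendor."""
    vendor_name: str
    subject_to_raise_act: bool | None = None       # None until the vendor confirms either way
    safety_protocol_url: str | None = None         # link to the published (redacted) protocol
    incident_reporting_contact: str | None = None  # who at the vendor handles NY incident reports
    compliance_officer_named: bool = False         # has the vendor designated senior personnel?
    last_reviewed: date | None = None
    open_questions: list[str] = field(default_factory=list)


# Example: seed a record with the questions suggested above (vendor name is made up).
check = RaiseActVendorCheck(vendor_name="Example Imaging AI, Inc.")
check.open_questions += [
    "Are you subject to the NY RAISE Act?",
    "Can we see your published safety and security protocol?",
    "How will you notify us if you report a safety incident to NY authorities?",
]
```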


2. Downstream Liability Awareness

The law's definition of "critical harm" sets a marker for what regulators consider catastrophic AI failure:

  • Death or serious injury to 100 or more people

  • At least $1 billion in damages

  • Caused by AI acting with "no meaningful human intervention"

If you deploy a vendor's AI tool and something goes wrong at scale, understanding whether that vendor had proper protocols in place becomes directly relevant to your own risk exposure.


3. A Signal of Regulatory Direction

New York often leads on regulation, and other states frequently follow. This law signals that:

  • AI governance documentation will become standard

  • Safety incident reporting will become normalized

  • Regulatory thresholds may lower over time

Building governance infrastructure now positions your organization ahead of inevitable requirements.


Key Definitions: What the Law Actually Says About AI Safety

Notably, the RAISE Act does not provide a standalone definition of "AI security and safety." However, it operationalizes these concepts through specific requirements:

"Safety and Security Protocol" Must Include:

  • Reasonable protections and procedures to reduce the risk of critical harm

  • Administrative, technical, and physical cybersecurity protections

  • Detailed testing procedures to evaluate critical harm risk

  • Assessment of whether models could be misused or modified, or could evade developer control

  • Designated senior personnel responsible for compliance

"Safety Incident" Includes:

  • A known incidence of critical harm

  • A frontier model autonomously engaging in behavior other than at user request

  • Theft, misappropriation, or unauthorized access to model weights

  • Critical failure of technical or administrative controls

  • Unauthorized use of a frontier model

These definitions provide a useful framework for thinking about AI risk categories—even for organizations not directly subject to the law.
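
Organizations that want to reuse this framework internally can translate the incident categories into a small taxonomy for tagging AI-related events. The Python sketch below is one illustrative way to do that; the category labels paraphrase the statutory list, and the escalation mapping is an assumption, not anything the law requires.

```python
from enum import Enum, auto


class AISafetyIncidentCategory(Enum):
    """Incident categories paraphrased from the RAISE Act's 'safety incident' list.

    Labels are illustrative shorthand for internal tagging, not official terminology.
    """
    CRITICAL_HARM = auto()        # a known incidence of critical harm
    AUTONOMOUS_BEHAVIOR = auto()  # model behavior other than at user request
    WEIGHTS_COMPROMISE = auto()   # theft, misappropriation, or unauthorized access to weights
    CONTROL_FAILURE = auto()      # critical failure of technical or administrative controls
    UNAUTHORIZED_USE = auto()     # unauthorized use of a frontier model


def triage_priority(category: AISafetyIncidentCategory) -> str:
    """Toy mapping from category to an internal escalation tier (an assumption, not statutory)."""
    if category is AISafetyIncidentCategory.CRITICAL_HARM:
        return "page-executive-on-call"
    return "route-to-ai-governance-committee"
```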


What Health Systems Should Do Now

  • Inventory your AI tools: know which vendors might be "large developers" under this law.

  • Update vendor contracts: add provisions requiring disclosure of RAISE Act compliance status and safety incidents.

  • Monitor safety disclosures: if a vendor reports an incident to NY, you'll want to know immediately.

  • Document your AI governance: even if not required now, this is where regulation is heading.
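
To make the inventory step concrete, the Python sketch below shows one way an AI tool register could be structured so that tools from potentially covered vendors are easy to surface. The fields, helper function, and example entries are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AIToolRecord:
    """Hypothetical entry in an internal AI tool inventory."""
    tool_name: str
    vendor: str
    clinical_use_case: str
    vendor_may_be_large_developer: bool  # flag set during vendor diligence
    safety_protocol_on_file: bool        # have we archived the vendor's published protocol?


def tools_needing_raise_act_followup(inventory: list[AIToolRecord]) -> list[AIToolRecord]:
    """Surface tools whose vendor may be covered but whose protocol we have not yet reviewed."""
    return [
        record for record in inventory
        if record.vendor_may_be_large_developer and not record.safety_protocol_on_file
    ]


# Example usage with made-up entries.
inventory = [
    AIToolRecord("NoteSummarizer", "Example LLM Vendor", "discharge summaries", True, False),
    AIToolRecord("SchedulerBot", "Small Analytics Co.", "OR scheduling", False, False),
]
print([r.tool_name for r in tools_needing_raise_act_followup(inventory)])  # ['NoteSummarizer']
```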



How ALIGNMT AI Helps You Navigate This Landscape

The RAISE Act is just one piece of a rapidly evolving regulatory mosaic. Healthcare organizations must already reconcile AI governance guidance ranging from CHAI transparency standards and ONC HTI-1 requirements to, for those with international operations, the EU AI Act.


ALIGNMT AI's governance platform helps healthcare enterprises:

  1. Maintain continuous visibility into every AI tool deployed across your organization

  2. Automate compliance workflows across multiple regulatory frameworks simultaneously

  3. Detect risks in real-time before they become incidents—flagging bias, judgment errors, and safety issues

  4. Generate audit-ready documentation that demonstrates due diligence to regulators, boards, and patients

  5. Monitor vendor AI tools to ensure third-party systems meet your governance standards

Healthcare companies using our platform have cut compliance preparation time by up to 50%, enabling them to scale AI initiatives with confidence while maintaining the trust essential to patient care.


The Bottom Line

The RAISE Act won't send compliance officers scrambling at most healthcare companies. But it does mark a turning point: AI governance is becoming codified into law, transparency requirements are expanding, and the organizations that build robust oversight infrastructure now will be best positioned as regulation inevitably reaches further.


The smart move isn't to wait for direct mandates—it's to use this moment to strengthen your vendor oversight and AI governance posture while the regulatory landscape is still forming.


Ready to build AI governance infrastructure that scales with evolving regulations? Request a demo to see how ALIGNMT AI can help your organization deploy AI responsibly.

 
 
 
