
Australia's TGA Signals What's Coming for AI in Healthcare: Why US and EU Organizations Should Pay Attention

  • Feb 11

The global regulatory net is tightening around AI medical devices—and Australia just gave us a preview of what's ahead.

Update: February 5, 2026 — The TGA has refreshed its guidance on AI and medical device software regulation, reaffirming its technology-agnostic, risk-based approach and clarifying expectations for developers, deployers, and sponsors of AI-enabled medical devices. This update reinforces the enforcement stance outlined in the July 2025 outcomes report.

In July 2025, Australia's Therapeutic Goods Administration (TGA) published its landmark report on medical device software and AI regulation, followed by an aggressive compliance enforcement announcement targeting unapproved AI-enabled medical devices. Most recently, on February 5, 2026, the TGA updated its core guidance to further clarify regulatory expectations. While this might seem like a regional development, it's actually a critical signal for organizations operating in the US and EU markets. Here's why it matters—and what you should be doing now.


The TGA's Move: What Just Happened

Australia has taken a decisive stance on AI in healthcare with three key actions:

  1. Strengthened Regulatory Framework: The TGA's comprehensive review confirmed that existing medical device regulations apply to AI, but with important clarifications on definitions, roles, and responsibilities throughout the AI lifecycle.

  2. Active Enforcement: The TGA announced targeted compliance actions against software products—particularly AI digital scribes with diagnostic or treatment features—that should be registered as medical devices but aren't.

  3. Closing the Wellness Loophole: The regulator is urgently reviewing exclusions for digital mental health tools, signaling that apps and chatbots claiming "wellness-only" status may soon face medical device requirements.

What Changed in the February 5, 2026 Update

The TGA's latest guidance update reinforces and clarifies several critical points:

  • Technology-Agnostic Regulation: The framework explicitly regulates products based on their intended purpose, not the technology used—whether it's a watch, phone, tablet, cloud service, or laptop. If it diagnoses, treats, monitors, predicts, or provides clinical decision support, it's regulated.

  • LLM Developer Liability: The update makes crystal clear that if you "adapt, build on, or incorporate" an LLM (like ChatGPT, Claude, or other foundation models) into a product with medical purpose for Australian users, you are deemed the manufacturer with full regulatory obligations under the Therapeutic Goods Act 1989.

  • Scope Creep Monitoring: Manufacturers must actively monitor how system updates affect functionality. New features that change the intended purpose—sometimes called "scope creep" or "feature creep"—cannot be implemented until the device receives proper regulatory approval.

  • Generative AI in Clinical Decision Support: Tools using generative AI to provide diagnostic or treatment recommendations are explicitly called out as regulated medical devices.

The message is clear: if your software diagnoses, treats, monitors, or makes clinical recommendations, it's a medical device—regardless of what technology powers it or how you've marketed it.

Why This Matters Beyond Australia

Regulatory Timeline: The Tightening Noose

Here's how Australia has systematically strengthened AI oversight:

  • February 2021: New classification rules for software-based medical devices take effect

  • July 2025: Comprehensive outcomes report on AI and medical device software regulation published

  • August 2025: Compliance enforcement announced targeting unapproved AI medical devices

  • October 2025: Digital mental health tools identified for urgent regulatory review

  • February 2026: Updated guidance reinforces technology-agnostic approach and clarifies LLM manufacturer liability

This isn't a one-time announcement—it's a sustained regulatory campaign.


The Global Regulatory Convergence

What's happening in Australia isn't an isolated event. It's part of a coordinated international movement toward stricter AI governance in healthcare:

United States (FDA)

  • The FDA's AI/ML-based Software as a Medical Device (SaMD) Action Plan introduced the framework for predetermined change control plans

  • The FDA's final guidance on clinical decision support software (2022) tightened the boundary between regulated devices and "wellness" tools

  • The FDA is actively working with international partners through the International Medical Device Regulators Forum (IMDRF)

European Union

  • The EU AI Act (in force since August 2024, with high-risk obligations phasing in) classifies medical AI systems as "high-risk," requiring conformity assessments, risk management, and transparency

  • The Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) already impose rigorous requirements on software medical devices

  • Combined, these create a dual-compliance burden: devices must satisfy both AI Act requirements AND medical device regulations

The IMDRF Connection

Australia, the US, and EU regulators all participate in the IMDRF's Software as a Medical Device working group. The TGA's report explicitly references IMDRF principles and FDA guidance, demonstrating the intentional harmonization happening globally. When one major regulator moves, others are watching—and often coordinating.

Four Trends Accelerating Across Markets

1. The "Wellness Exception" Is Dying

For years, many AI health apps operated in a regulatory gray zone by claiming "general wellness" status. Australia's urgent review of digital mental health tools mirrors similar scrutiny in the UK (which issued guidance in February 2025) and growing skepticism from the FDA.

What this means: If your product makes any clinical claims—even subtle ones about mental health screening, symptom tracking, or personalized health insights—expect regulators to challenge your non-device status.

2. LLMs and Generative AI Are Regulatory Wildcards

The TGA explicitly addressed Large Language Models (LLMs)—the AI systems behind tools like ChatGPT, Claude, and medical chatbots—and the February 2026 update doubled down on this message with unmistakable clarity.

Here's the regulatory reality: If you incorporate an LLM into a product with any medical purpose (clinical documentation, symptom assessment, care recommendations), YOU are the manufacturer. You own full regulatory responsibility—not OpenAI, not Anthropic, not the foundation model provider.

This creates unprecedented compliance challenges:

The Black Box Problem: LLMs are probabilistic systems that can generate unpredictable outputs. Traditional medical device validation assumes deterministic behavior—the same input produces the same output. LLMs break this assumption. How do you validate a system that might respond differently to identical queries?

The Training Data Dilemma: Medical device regulations require transparency about data sources and quality. But most commercial LLMs are trained on vast, undisclosed datasets that may include:

  • Outdated or inaccurate medical information

  • Biased or unrepresentative data

  • Content that wasn't validated for clinical use

  • Data that doesn't reflect your target patient population (especially non-US populations, as the TGA noted)

The Liability Chain Problem: Consider a typical scenario:

  1. Your startup builds a clinical documentation tool using GPT-4's API

  2. A physician uses it to draft a differential diagnosis

  3. The LLM hallucinates a contraindication

  4. Patient harm occurs

Who's liable? Not OpenAI (their terms explicitly disclaim medical use). YOU are—because you're the manufacturer who deployed it in a medical context.

The Update Treadmill: Foundation model providers update their models regularly, sometimes without notice. Each update is potentially a new medical device requiring validation. How do you maintain regulatory compliance when your core technology changes outside your control?

What regulators expect:

  • Validation evidence showing your LLM-powered product is safe and effective for its intended medical purpose

  • Risk management for hallucinations, bias, inappropriate outputs, and system failures

  • Clinical studies demonstrating real-world performance (not just benchmarks)

  • Monitoring systems to detect when model updates or drift affect clinical safety

  • Transparency about model limitations that physicians and patients can understand

The FDA, EU, and now TGA are aligned: using an LLM doesn't exempt you from medical device requirements—it adds to them.

3. Post-Market Surveillance Is Getting Serious

All three regions are demanding:

  • Real-world performance monitoring for deployed AI

  • Tracking of model drift and degradation

  • Mechanisms to identify and remove non-compliant products

  • Greater transparency about datasets, training methods, and updates

The TGA report emphasizes "data from in-clinic performance of deployed AI models" as critical for ongoing safety. The EU AI Act requires similar post-market monitoring. The FDA's predetermined change control plans assume continuous learning and updates.


4. Off-Label Use: The Hidden Compliance Landmine

Here's a scenario playing out right now across healthcare AI:

You develop an AI clinical decision support tool approved for interpreting chest X-rays in adults with suspected pneumonia. A hospital starts using it for pediatric imaging. A physician uses it to screen for lung cancer. A radiologist uses it in ICU settings you never tested. None of this was your intended use—but you're still on the hook.

Why AI amplifies off-label risk:

Unlike a physical medical device (a scalpel can only cut), AI software is infinitely adaptable. The same algorithm can be applied to different:

  • Patient populations (adults → pediatrics, general → specialty)

  • Clinical conditions (pneumonia → cancer → COVID)

  • Care settings (outpatient → emergency → ICU)

  • Workflow positions (screening → diagnosis → treatment planning)

Each variation may have different risk profiles, but you only validated and got approval for one narrow use case.

The regulatory expectation: Manufacturers must:

  1. Clearly define intended use in labeling, marketing, and contracts

  2. Monitor how the product is actually used in the real world

  3. Detect off-label patterns that create safety risks

  4. Take action when you discover problematic use—even if you didn't authorize it

The TGA made this explicit: their compliance actions target products being used clinically when they were only approved for administrative purposes (like digital scribes adding diagnostic features). The FDA has similar concerns, particularly around Clinical Decision Support Software (CDSS) that crosses into diagnostic territory.

Practical challenges:

  • Discovery: How do you know your product is being used off-label? Many manufacturers have limited visibility into actual clinical workflows

  • Contractual limits: Your terms of service prohibit off-label use—but that doesn't eliminate your regulatory obligations

  • Market pressure: Customers want flexible tools; restricting use may hurt competitiveness

  • Global variation: What's "intended use" may differ across regulatory jurisdictions

What you must do:

Technical controls: Implement guardrails that prevent or flag off-label use:

  • User authentication and role-based access

  • Workflow restrictions (e.g., only available for specified patient age ranges)

  • Alerts when use patterns deviate from intended parameters

  • Audit logs that track actual vs. intended use
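The technical controls above can be sketched in a few lines. This is a minimal illustration only, not a production guardrail: the role name, age cutoff, and approved care settings are assumptions standing in for whatever your device's approved indication actually specifies.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative assumptions only — substitute your device's actual
# approved indication, roles, and settings.
APPROVED_MIN_AGE = 18                            # assumed adult-only indication
APPROVED_SETTINGS = {"outpatient", "emergency"}  # assumed approved care settings

@dataclass
class UsageEvent:
    user_role: str
    patient_age: int
    care_setting: str
    timestamp: str

audit_log: list[dict] = []

def check_intended_use(event: UsageEvent) -> bool:
    """Return True if the request falls inside the approved indication,
    and log every decision so actual vs. intended use can be audited."""
    in_scope = (
        event.patient_age >= APPROVED_MIN_AGE
        and event.care_setting in APPROVED_SETTINGS
        and event.user_role == "clinician"
    )
    audit_log.append({
        "timestamp": event.timestamp,
        "in_scope": in_scope,
        "care_setting": event.care_setting,
        "patient_age": event.patient_age,
    })
    return in_scope

# A pediatric request is flagged rather than silently served.
event = UsageEvent("clinician", 9, "outpatient",
                   datetime.now(timezone.utc).isoformat())
allowed = check_intended_use(event)
```

The point of the sketch is the pattern, not the specifics: every request is evaluated against the approved indication, and every decision—in scope or not—lands in an audit trail you can compare against your labeling.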

Labeling and training: Make intended use crystal clear:

  • Explicit statements of what the device IS and IS NOT designed to do

  • Prominent warnings about risks of off-label application

  • Training materials that reinforce appropriate use

  • Documentation physicians can reference

Monitoring and response: Build systems to detect and address off-label use:

  • Analytics that track usage patterns across customers

  • Complaint and adverse event systems that surface misuse

  • Rapid response protocols when problematic patterns emerge

  • Willingness to restrict or revoke access when necessary

Proactive communication: When you discover off-label use, regulators expect you to:

  • Notify affected users immediately

  • Issue corrective actions if safety is at risk

  • Update labeling or seek expanded indications if there's legitimate clinical need

  • Report to regulatory authorities as required

The bottom line: "We told them not to" is not a defense. If your AI is being used clinically in ways you didn't validate, you have a compliance problem—and potentially a patient safety problem.

What US and EU Organizations Should Do Now

The February 2026 TGA update isn't just administrative housekeeping—it's a signal that regulators are tightening their grip on AI in healthcare globally. Here's what you need to do:

1. Conduct a Regulatory Status Assessment

Don't assume your product is exempt. Ask:

  • Does our software have any medical purpose or claim?

  • Do we incorporate AI, ML, or LLMs in any customer-facing features?

  • Could a regulator interpret our marketing as making clinical claims?

  • Are we positioned as "wellness" but actually being used clinically?

If you answered "yes" or "maybe" to any of these, you likely need a regulatory pathway.

2. Map Your AI Lifecycle Roles

The TGA highlighted confusion around "manufacturer," "deployer," and "developer" in AI contexts. The EU AI Act draws similar distinctions between providers and deployers.

Action: Document who in your supply chain is responsible for:

  • Initial development and training

  • Deployment and integration

  • Ongoing updates and maintenance

  • Performance monitoring

  • Incident response

This isn't just paperwork—it determines your legal obligations and liability exposure.

3. Implement Controls for LLMs and Off-Label Use

If you're building on foundation models or facing off-label use risks, act now:

For LLM-based products:

  • Vendor due diligence: Document what you know (and don't know) about your foundation model—training data, update frequency, performance characteristics

  • Validation protocols: Establish systematic testing for medical accuracy, bias, hallucinations, and edge cases BEFORE deployment

  • Version control: Track which model version is deployed, monitor for provider updates, and have a process to re-validate when models change

  • Output monitoring: Implement real-time checks for dangerous outputs (contradictory recommendations, harmful advice, privacy breaches)

  • Human oversight: Design workflows that require qualified clinical review before AI outputs reach patients

  • Contractual protections: While they won't eliminate liability, ensure your agreements with LLM providers clearly define responsibilities and indemnification
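The version-control point deserves special attention, since foundation model providers ship updates without warning. One minimal pattern is a deployment gate: pin the model versions covered by your validation evidence and refuse to serve inference on anything else until re-validation completes. The registry and version strings below are hypothetical, not a real vendor API.

```python
# Illustrative sketch: the validated-version set and version strings are
# hypothetical placeholders, not real vendor identifiers.

VALIDATED_VERSIONS = {"example-model-v1"}  # versions covered by validation evidence

class RevalidationRequired(Exception):
    """Raised when the deployed model no longer matches validated evidence."""

def gate_model_version(reported_version: str) -> str:
    """Allow inference only on a model version with validation evidence."""
    if reported_version not in VALIDATED_VERSIONS:
        raise RevalidationRequired(
            f"Model {reported_version!r} has no validation evidence; "
            "re-run the clinical validation protocol before deployment."
        )
    return reported_version

gate_model_version("example-model-v1")       # validated version: passes through

try:
    gate_model_version("example-model-v2")   # provider update: blocked
    update_allowed = True
except RevalidationRequired:
    update_allowed = False
```

The design choice here is deliberate: a silent provider update becomes a loud, auditable failure in your system rather than an unvalidated change reaching patients.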

For off-label use management:

  • Technical guardrails: Build restrictions directly into your product (age-range locks, indication-specific workflows, usage alerts)

  • Usage analytics: Monitor how customers actually use your product vs. how you intended it to be used

  • Clear documentation: Create unmistakable labeling about intended use, contraindications, and limitations

  • Customer agreements: Include explicit terms prohibiting off-label use and requiring reporting of adverse events

  • Response playbook: Have a plan for what you'll do when (not if) you discover off-label use

  • Consider broader indications: If legitimate clinical need exists for broader use, pursue regulatory pathways to expand your approved indications rather than turning a blind eye

4. Build Governance Before Compliance Becomes Urgent

Here's the pattern we're seeing: regulators identify a gap (digital scribes, mental health apps, generative AI), then announce enforcement intentions. Organizations scramble to achieve compliance retroactively, often discovering they lack:

  • Documentation of design decisions and risk assessments

  • Audit trails for model training and updates

  • Processes for monitoring real-world performance

  • Systems to manage regulatory change across jurisdictions

The opportunity: Build robust AI governance infrastructure now, while you still have runway. This means:

  • Risk Management Framework: Implement systematic processes for identifying, evaluating, and mitigating AI risks throughout the product lifecycle

  • Documentation Systems: Create and maintain technical files, design dossiers, and clinical evaluation reports that satisfy multiple regulatory regimes

  • Change Control: Establish predetermined change control plans for AI updates, aligned with FDA thinking and EU requirements

  • Transparency Mechanisms: Prepare to disclose training data sources, model limitations, and intended use parameters

  • Monitoring Infrastructure: Build systems to track model performance, detect drift, and capture real-world outcomes
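One lightweight way to start on the monitoring infrastructure is a rolling clinician-agreement metric: record whether clinicians accept or override each AI output, and alert when agreement over a recent window drops below a floor derived from your validation data. This is a sketch under stated assumptions—the window size and threshold here are placeholders, not regulatory values.

```python
from collections import deque

# Assumptions for illustration: set these from your own validation data.
WINDOW = 100            # number of most recent cases considered
ALERT_THRESHOLD = 0.90  # minimum acceptable clinician-agreement rate

recent_outcomes: deque = deque(maxlen=WINDOW)

def record_case(clinician_agreed: bool) -> bool:
    """Record whether the clinician accepted the AI output; return True
    when rolling agreement falls below the alert threshold."""
    recent_outcomes.append(clinician_agreed)
    agreement = sum(recent_outcomes) / len(recent_outcomes)
    return agreement < ALERT_THRESHOLD

# Simulate mostly-accepted outputs followed by a run of overrides.
alerts = [record_case(True) for _ in range(50)]
alerts += [record_case(False) for _ in range(10)]
drift_detected = alerts[-1]
```

Agreement rate is a proxy, not a substitute for formal performance monitoring—but even a simple signal like this gives you something to act on between scheduled re-validations.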

5. Prepare for Multi-Jurisdictional Complexity

If you operate globally (or plan to), you're facing:

  • US: FDA premarket requirements + state-level AI regulations + potential federal AI legislation

  • EU: AI Act + MDR/IVDR + national implementation variations

  • Other markets: Australia, UK, Canada, and others all developing their own nuances

Smart strategy: Build to the highest common standard. If you can satisfy EU AI Act requirements and FDA expectations simultaneously, you're positioned for global success.

The Bottom Line

Australia's regulatory actions—culminating in the February 2026 guidance update—aren't a warning shot. They're confirmation that global AI healthcare regulation has entered active enforcement mode. The "move fast and break things" era is over for medical AI.

The challenges are real and complex: validating black-box LLMs, managing off-label use you can't fully control, satisfying divergent regulatory requirements across markets, and maintaining compliance while your underlying technology changes. These aren't problems you can solve with a single audit or checklist.

But here's the opportunity: organizations that get ahead of this curve don't just achieve compliance—they build competitive advantages. Robust governance enables:

  • Faster regulatory approvals across markets

  • Reduced time-to-market for updates and new features

  • Lower risk of enforcement actions or product recalls

  • Greater trust from healthcare customers who face their own compliance pressures

  • Clearer accountability that protects your organization and leadership

  • Ability to confidently leverage cutting-edge AI (including LLMs) where competitors remain paralyzed by uncertainty

The question isn't whether AI healthcare products will be regulated more strictly. They will be. The question is whether you'll be ready—and whether you'll turn regulatory excellence into a market differentiator.

About Alignmt AI

Alignmt AI provides an AI governance platform designed for organizations building and deploying AI systems in regulated industries. Our platform helps teams maintain compliance across evolving regulatory frameworks while accelerating responsible innovation.

Want to discuss how these regulatory trends impact your organization? Contact our team for a consultation.

 
 
 
