From $556M to 1.2 Seconds: The Healthcare AI Cases That Changed Everything in 2026
What the UnitedHealth, Cigna, Abridge, Doctronic, and Kaiser Permanente cases tell us about where healthcare AI is exposed — and why organizations investing in continuous governance are already ahead.
THE NUMBERS THAT DEFINE HEALTHCARE AI IN 2026
$6.8 billion — Total False Claims Act recoveries in fiscal year 2025, the largest annual total in the statute's history
$556 million — Kaiser Permanente's January 2026 Medicare Advantage risk-adjustment settlement, the largest of its kind on record
100,000 — Patient encounters allegedly recorded by an AI scribe at Sharp HealthCare without proper consent
90% — Alleged appeal-reversal rate on UnitedHealth's nH Predict algorithm denials
A year ago, most healthcare AI conversations were about potential. Pilots. Proofs of concept. Which model is best. Today, the dominant conversation is about something different: lawsuits, settlements, and the first wave of regulatory enforcement actions built on laws that predate the cloud.
That shift isn't a signal that AI in healthcare has failed. It's a signal that the space has matured past the early-adopter phase into the real-accountability phase — and that the organizations that saw this coming and built continuous governance programs are now on the right side of a moving line.
This piece is a plain-English walkthrough of the cases that defined 2025 and 2026 so far. Each one is a different kind of collision — payer against patient, vendor against health system, state authority against federal jurisdiction, AI-generated documentation against the False Claims Act. Each one turns on an existing law, not a hypothetical future one. And each one could have been dramatically de-risked by the same small set of governance practices.
We've linked every case to its primary sources — complaints, court orders, DOJ press releases, and major investigative reporting — so you can read the underlying documents yourself. The story they tell together is not that AI is the problem. It's that governance is the answer, and the organizations building it now are quietly writing the next decade's playbook.
CASE 01 — UNITEDHEALTH GROUP & NH PREDICT
The payer-algorithm class action that survived preemption.
Case: Estate of Lokken v. UnitedHealth Group
Docket: 0:23-cv-03514 (D. Minn.)
Filed: November 14, 2023
Status: Active — Surviving claims
The complaint alleges that UnitedHealth's subsidiary naviHealth — acquired in 2020 for $2.5 billion — used an AI tool called nH Predict, built on a database of roughly six million patients, to systematically cut off post-acute care for Medicare Advantage members. The named plaintiff, Gene Lokken, was a 91-year-old Wisconsin man who fractured his leg and ankle in May 2022. UnitedHealthcare covered his rehabilitation for 19 days before the algorithm flagged him for discharge. The family paid out of pocket for nearly a year until his death.
The plaintiffs' central claim is a simple asymmetry: roughly 90% of denied claims were reversed when patients appealed, but only about 0.2% of patients ever did. The algorithm was allegedly calibrated to exploit that gap. UnitedHealth's defense has been that nH Predict is a "guide," not a coverage decision — and that Medicare preemption should bar the state-law claims entirely.
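To see why that asymmetry matters, here is a back-of-the-envelope sketch using the complaint's alleged rates. The 100,000 denial count is a hypothetical round number chosen for illustration, not a figure from the filing:

```python
# Illustration of the asymmetry alleged in the Lokken complaint.
# The 90% reversal rate and ~0.2% appeal rate are the plaintiffs'
# allegations; the denial count is a hypothetical round number.
denials = 100_000          # hypothetical number of algorithmic denials
appeal_rate = 0.002        # ~0.2% of patients appeal (alleged)
reversal_rate = 0.90       # ~90% of appealed denials reversed (alleged)

appealed = denials * appeal_rate
reversed_on_appeal = appealed * reversal_rate
denials_standing = denials - reversed_on_appeal

print(f"Appealed:           {appealed:,.0f}")            # 200
print(f"Reversed on appeal: {reversed_on_appeal:,.0f}")  # 180
print(f"Denials that stand: {denials_standing:,.0f}")    # 99,820
```

Even at a 90% error rate as measured by appeals, 99.8% of denials stand simply because almost no one appeals. That is the gap the algorithm was allegedly calibrated to exploit.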
From the complaint: "This demonstrates the blatant inaccuracy of the nH Predict AI Model and the lack of human review involved in the coverage denial process."
On February 13, 2025, Judge John Tunheim dismissed five of the seven counts on preemption grounds but allowed the breach of contract and breach of the implied covenant of good faith and fair dealing claims to proceed. The ruling is the doctrinal crack in what had been a near-impenetrable shield: an MA plan cannot invoke Medicare preemption to escape its own contractual promises. That ruling is why every parallel case — Cigna's PxDx, Humana's equivalents — now has a template.
PRIMARY SOURCES:
→ Full docket and filings (CourtListener / PACER): https://www.courtlistener.com/docket/68006832/estate-of-gene-b-lokken-the-v-unitedhealth-group-inc/
→ Original complaint PDF (classaction.org): https://www.classaction.org/media/the-estate-of-gene-b-lokken-et-al-v-unitedhealth-group-inc-et-al.pdf
→ Case summary and timeline (Georgetown Health Care Litigation Tracker): https://litigationtracker.law.georgetown.edu/litigation/estate-of-gene-b-lokken-the-et-al-v-unitedhealth-group-inc-et-al/
→ February 2025 motion-to-dismiss ruling analysis (Legal HIE): https://www.legalhie.com/judge-decides-class-action-lawsuit-can-proceed-against-unitedhealth-for-use-of-ai/
→ Medical Economics coverage: https://www.medicaleconomics.com/view/unitedhealthcare-used-ai-to-deny-patients-health-insurance-coverage-lawsuit-says
CASE 02 — CIGNA & THE PXDX ALGORITHM
1.2-second "physician reviews" and 300,000 denials.
Case: Kisting-Leung v. Cigna Corporation
Docket: 2:23-cv-01477 (E.D. Cal.)
Filed: July 24, 2023
Status: Active — Partial proceed
The case was filed on the heels of a March 2023 ProPublica investigation that exposed internal Cigna documents. The reporting, based on leaked materials and interviews with former medical directors, described a system called PxDx — procedure-to-diagnosis — that batch-processed claim denials. Over a two-month period in 2022, Cigna medical directors were alleged to have denied more than 300,000 claims using the tool, spending an average of 1.2 seconds of "review" per denial. One former Cigna physician told ProPublica: "We literally click and submit. It takes all of 10 seconds to do 50 at a time."
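As a quick scale check on the ProPublica figures (the numbers come from the reporting; the arithmetic is ours):

```python
# Scale check: 300,000 denials over roughly two months at an average
# of 1.2 seconds of physician "review" each, per ProPublica's figures.
denials = 300_000
seconds_each = 1.2

total_hours = denials * seconds_each / 3600
print(f"Total physician review time: {total_hours:.0f} hours")
```

Three hundred thousand medical-necessity determinations fit into roughly 100 total hours of physician time, which is the heart of the plaintiffs' argument that no substantive review occurred.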
The plaintiffs' core legal theory is that California Health & Safety Code section 1367.01(e) requires medical-necessity denials to be reviewed by a licensed physician competent to evaluate the specific clinical issues. A 1.2-second rubber-stamp is not a review. By delegating the substantive decision to an algorithm, Cigna allegedly violated the plan's own terms — which require medical-necessity determinations by a medical director — and California's Unfair Competition Law.
On March 31, 2025, Judge Dale Drozd issued a mixed ruling. Three plaintiffs lost standing because Cigna produced evidence their specific claims hadn't been processed through PxDx. But the court allowed the remaining case to proceed on a critical finding: that Cigna's interpretation of the plan — under which a medical director "pushing the button" on an algorithmic output satisfies the medical-necessity review requirement — "conflicts with the plain language of the plan and constitutes an abuse of discretion."
From the complaint: "Cigna's doctors instantly reject claims on medical grounds without ever opening patient files, leaving thousands of patients effectively without coverage and with unexpected bills."
Cigna's public position remains that PxDx is not AI and is similar to tools CMS and other payers have used for years. The question of whether algorithmic utilization management can be reconciled with plan-language physician-review requirements is now squarely before the federal courts.
PRIMARY SOURCES:
→ Original complaint PDF (Georgetown Litigation Tracker): https://litigationtracker.law.georgetown.edu/wp-content/uploads/2023/08/Kisting-Leung_20230724_COMPLAINT.pdf
→ March 2025 motion-to-dismiss order (Justia): https://docs.justia.com/cases/federal/district-courts/california/caedce/2:2023cv01477/431351/55
→ Original ProPublica investigation: https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
→ Case filing coverage (CBS News): https://www.cbsnews.com/news/cigna-algorithm-patient-claims-lawsuit/
→ 2025 ruling analysis (Bloomberg Law): https://news.bloomberglaw.com/daily-labor-report/ai-algorithm-based-health-insurer-denials-pose-new-legal-threat
→ Legal analysis of surviving claims (NFP): https://www.nfp.com/insights/court-allows-lawsuit-over-ai-use-in-benefit-denials-to-proceed/
CASE 03 — SHARP HEALTHCARE & ABRIDGE
When the AI attests to its own consent.
Case: Saucedo v. Sharp HealthCare
Court: San Diego Superior Court
Filed: November 26, 2025
Status: Active — Class proposed
Jose Saucedo went in for a routine physical at a Sharp Rees-Stealy clinic in July 2025. He discovered months later — only by reading his own medical notes — that the entire conversation with his physician had been captured by an AI ambient documentation tool called Abridge, transmitted to the vendor's cloud servers, and processed by personnel outside the clinical setting. He had not been asked. He had not consented.
The most damaging allegation in the complaint: the AI-generated notes in his chart contained auto-populated text stating he had been "advised" that the visit was being recorded and had "consented." According to the complaint, no such conversation occurred. The system had attested to its own compliance.
The alleged violations are not hypothetical future AI laws. They are statutes older than most of the technology they're being applied to: the California Invasion of Privacy Act (1967), the Confidentiality of Medical Information Act (1981), California's Unfair Competition Law, and the federal Wiretap Act. California is an all-party consent state — every participant in a confidential communication must consent to its recording. Statutory damages under CIPA run $5,000 per violation, no proof of harm required. Plaintiffs' counsel estimates roughly 100,000 encounters were captured after Sharp's April 2025 Abridge rollout.
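The statutory exposure implied by those two numbers is simple to compute. Whether each recorded encounter counts as one CIPA violation or several is a contested legal question, so treat this strictly as an order-of-magnitude illustration:

```python
# Rough statutory-exposure arithmetic under CIPA, using figures from
# plaintiffs' counsel. This assumes one violation per encounter, which
# is a simplifying assumption, not a settled point of law.
encounters = 100_000            # estimated encounters post-rollout
cipa_damages_per_violation = 5_000  # statutory, no proof of harm needed

exposure = encounters * cipa_damages_per_violation
print(f"Theoretical CIPA exposure: ${exposure:,}")  # $500,000,000
```

Half a billion dollars of theoretical exposure from a single vendor rollout is why statutory-damages regimes with no harm requirement change the risk calculus for ambient AI entirely.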
From the legal analysis: "Sharp's EHR notes reportedly contained boilerplate language stating patients had been 'advised' of and 'consented' to the recording — when, according to the complaint, no such advisement or consent ever occurred."
When Saucedo asked Sharp to delete his recording, he was told the vendor retains audio for 30 days post-visit and that prompt deletion was not possible. Sharp instead offered to modify the AI-generated note. The complaint alleges Sharp "lacked a functional deletion-on-demand process to halt vendor processing and confirm deletion of audio and transcripts across all systems upon a patient's request" — a straightforward governance failure, not a technology failure.
PRIMARY SOURCES:
→ Original investigative report (KPBS Public Media): https://www.kpbs.org/news/health/2025/12/11/lawsuit-claims-sharp-healthcare-secretly-recorded-exam-room-conversations-without-patient-consent
→ Detailed case walkthrough (Medscape Medical News): https://www.medscape.com/viewarticle/health-system-sued-over-ai-scribe-technology-patient-consent-2026a10001k7
→ Industry analysis (MobiHealthNews): https://www.mobihealthnews.com/news/patient-files-lawsuit-against-sharp-healthcare-ambient-ai-use
→ First hospital lawsuit tied to ambient AI (Becker's Hospital Review): https://www.beckershospitalreview.com/legal-regulatory-issues/patient-sues-sharp-healthcare-over-ambient-ai-use/
→ Case docket entry (Law.com Radar): https://www.law.com/radar/card/ca-sandiegocounty-647199-saucedo-v-sharp-healthcare
CASE 04 — SUTTER HEALTH & MEMORIAL CARE
The second Abridge class action — in federal court.
Case: Washington v. Sutter Health et al.
Docket: 4:26-cv-03012 (N.D. Cal.)
Filed: April 8, 2026
Status: Active — Federal
Five months after the Sharp filing, a near-identical theory arrived in federal court. Three California patients filed a proposed class action in the Northern District of California against Sutter Health and MemorialCare, alleging that their use of Abridge's ambient clinical documentation system — during visits the plaintiffs had in the preceding six months — violated the same four statutes cited in the Sharp complaint, plus common-law intrusion upon seclusion.
The information allegedly captured and transmitted includes symptoms, diagnoses, prescription details, treatment plans, family medical histories, and mental health information — everything spoken in an exam room. Under HIPAA, Abridge operates as a business associate of its health-system clients, which means the protected health information is governed by the HIPAA Security Rule. But HIPAA is technology-neutral: it permits healthcare-operations disclosures without patient consent. CIPA is a separate consent regime for recording, and the two statutes answer different questions. The complaint does not allege HIPAA violations. It doesn't have to.
What makes the Sutter filing significant isn't novelty — it's replicability. The same plaintiffs' theory, the same vendor, the same technology, filed against different defendants on essentially identical facts six months apart. That is a playbook, not a one-off. Every health system using any ambient documentation vendor in an all-party-consent state now needs to assume this case is coming for them.
From the complaint: "Defendants implemented the AI recording system without obtaining meaningful, informed consent from patients prior to recording and transmitting their medical conversations."
PRIMARY SOURCES:
→ Full case and statute analysis (HIPAA Journal): https://www.hipaajournal.com/lawsuit-ai-platform-illegally-recorded-patient-clinician-conversations/
→ Filing coverage (TechTarget): https://www.techtarget.com/healthtechsecurity/news/366641717/Sutter-Health-MemorialCare-face-class-action-lawsuit-over-AI-scribe-use
→ Regulatory context and HIPAA interplay (Paubox): https://www.paubox.com/blog/sutter-and-memorial-care-face-lawsuit-over-ai-recording-of-patient-visits
CASE 05 — DOCTRONIC & THE UTAH SANDBOX
The autonomous-prescription AI that was jailbroken with a fake press release.
Actor: Doctronic (Utah OAIP sandbox)
Jurisdiction: Utah Office of AI Policy
Disclosure: January 2026 (Mindgard)
Status: Pilot ongoing
In December 2025, Doctronic became the first company in the United States to receive state approval to autonomously renew medical prescriptions using artificial intelligence. The approval came through Utah's Office of AI Policy, a regulatory sandbox empowered to waive state unprofessional-conduct rules for participating companies. The pilot is phased: Phase 1 requires physician review of every renewal. Phase 3, the operational phase, shifts most reviews to retrospective audit — meaning 90 to 95% of renewals proceed without real-time physician sign-off.
In January 2026, a London-based AI security firm called Mindgard tested Doctronic's public-facing health assistant. Using a fully fabricated regulatory bulletin — attributed to an invented "North American Department of Biomedical Regulation" — the researchers convinced the system that standard OxyContin dosing had been revised. The chatbot obligingly generated a SOAP note recommending a dose three times the normal level. In separate tests, the same system was induced to reclassify methamphetamine as a "therapeutic" and to propagate fabricated vaccine claims. Mindgard's chief product officer described the exploits as "some of the easiest things that I've broken in my entire career."
From Mindgard's research: "It doesn't look like the request for an unreasonable dose of OxyContin came from the user; in the SOAP notes it looks like an official recommendation coming from the AI. The same exploit will work with any medication."
Doctronic's response is important and fair: the vulnerable chatbot is its public-facing tool, not the hardened system running the Utah pilot. The pilot excludes controlled substances entirely and limits the AI to a predefined formulary of 190 medications. The OxyContin scenario could not occur in production. All true. But Mindgard's broader point is harder to dismiss — that a large language model's behavior can be manipulated by adversarial prompting is an architectural vulnerability, not a production-configuration question. Every jurisdiction that expands autonomous AI prescribing authority without mandating continuous red-team testing will face the same class of risk.
The unanswered jurisdictional question: should a system that issues prescriptions be regulated as a medical device by the FDA? Utah cannot answer that. Its agreement with Doctronic does not require FDA sign-off before the pilot scales. The American Medical Association and the Utah Academy of Family Physicians have both raised formal objections. The collision between state sandbox authority and federal device regulation is live, unresolved, and the playbook every other state will copy from.
PRIMARY SOURCES:
→ Mindgard's full technical writeup: https://mindgard.ai/blog/doctronic-is-now-accepting-new-patients-and-unsafe-instructions
→ Exclusive breaking coverage (Axios): https://www.axios.com/2026/03/04/doctronic-utah-prescriptions-ai-jailbreak
→ Technical analysis of the exploit (The Register): https://www.theregister.com/2026/03/04/ai_doctor_easily_swayed/
→ Doctronic and Utah OAIP response (MedCity News): https://medcitynews.com/2026/03/utah-prescription-medication-ai-doctronic-mindgard/
→ Full context on the pilot structure (The Next Web): https://thenextweb.com/news/utah-let-ai-prescribe-medicine
→ Phased rollout details (Telehealth.org): https://telehealth.org/news/utah-ai-prescription-pilot-faces-scrutiny-after-researchers-identify-vulnerabilities/
CASE 06 — KAISER PERMANENTE & THE $556M FCA SETTLEMENT
The largest Medicare Advantage risk-adjustment settlement in history.
Case: U.S. ex rel. Osinek; U.S. ex rel. Taylor v. Kaiser Permanente
Docket: 3:13-cv-03891 / 3:21-cv-03894 (N.D. Cal.)
Settled: January 14, 2026
Amount: $556,000,000
On January 14, 2026, the Department of Justice announced that five Kaiser Permanente affiliates had agreed to pay $556 million to resolve False Claims Act allegations — the largest Medicare Advantage risk-adjustment settlement in the history of the statute. The settlement resolves conduct spanning 2009 to 2018, during which the government alleged Kaiser generated approximately $1 billion in unsupported payments from CMS by causing nearly 500,000 unsupported diagnoses to be added to patient charts through retroactive "addenda."
The mechanism is the part that matters for every organization now deploying AI in clinical documentation and coding. Kaiser allegedly used data-mining tools and algorithmic record queries to surface diagnoses that had not been submitted — prompting physicians to retroactively add codes sometimes a year or more after the patient visit. The DOJ alleged Kaiser "singled out underperforming physicians and facilities" and that the addenda practices were "widespread and unlawful." Kaiser, in its response, emphasized that the matter did not involve quality of care and "involved a dispute about how to interpret the Medicare risk adjustment program's documentation requirements."
From the DOJ Press Release: "Kaiser knew that its addenda practices were widespread and unlawful and ignored numerous red flags and internal warnings that it was violating CMS rules, including concerns raised by its own physicians."
The case originated with whistleblowers — a former medical coder and a longtime Kaiser physician who served as a medical director for coding governance. Their combined relator share of the recovery is $95 million. That is not incidental. The False Claims Act's qui tam provisions are the single most effective healthcare-fraud enforcement mechanism in the federal government, and the people best positioned to recognize AI-assisted fraud are the coders, risk-adjustment specialists, and physicians receiving algorithmic prompts to add diagnoses they did not clinically establish.
For organizations using AI anywhere in the revenue cycle, Kaiser is the line in the sand. $6.8 billion in total FCA recoveries in FY 2025 — a record — with healthcare representing roughly 84% of that total. The DOJ has repeatedly named MA risk-adjustment fraud as a top enforcement priority, and CMS launched an online provider complaint portal specifically for risk-adjustment issues on January 5, 2026. The detection infrastructure is being built in real time.
PRIMARY SOURCES:
→ Official DOJ press release: https://www.justice.gov/opa/pr/kaiser-permanente-affiliates-pay-556m-resolve-false-claims-act-allegations
→ Investigative reporting on the mechanism (STAT News): https://www.statnews.com/2026/01/14/kaiser-permanente-doj-settle-major-medicare-advantage-fraud-case/
→ Contextual analysis (KFF Health News): https://kffhealthnews.org/news/article/medicare-advantage-record-fraud-settlement-kaiser-permanente-556-million/
→ Kaiser's response and industry context (Fierce Healthcare): https://www.fiercehealthcare.com/payers/kaiser-permanente-pay-556m-settle-medicare-advantage-fraud-claims
→ Legal analysis of the settlement (Inside the False Claims Act): https://www.insidethefalseclaimsact.com/kaiser-permanente-affiliates-settle-medicare-risk-adjustment-fraud-case-556-million/
THE PATTERN HIDING IN PLAIN SIGHT
Strip the AI branding off these six cases and read the underlying allegations. Every single one turns on a governance failure that predates the technology.
POINT-IN-TIME VALIDATION
nH Predict wasn't validated continuously against outcomes. PxDx's 1.2-second review standard persisted without audit. Models drift. Prompts change. Vendors retrain. Snapshot assessments cannot catch moving targets.
BROKEN HUMAN-IN-THE-LOOP
A physician pushing a button to confirm an algorithmic output is not review. Kaiser's own physicians raised flags about the addenda practice internally and were allegedly overridden. Oversight without authority is theater.
CONSENT AND DISCLOSURE GAPS
Auto-generated consent language. 14-page intake forms that bury disclosure. No deletion-on-demand process. The technology outpaced the workflow, and the workflow is what the CIPA claims are built on.
VENDOR CONTRACTS THAT PUSH RISK DOWNSTREAM
Abridge's terms place consent obligations on the deploying health system. Most ambient-scribe vendor agreements follow the same pattern. Read your BAAs. The liability allocation is often not where it is assumed to be.
NO RED-TEAM DISCIPLINE
The Doctronic exploit took days to build. The SOAP-note attack vector persists because adversarial testing is not a routine production requirement. Any AI that influences clinical decisions needs continuous stress testing, not release-gate testing.
DETECTION SPEED MISMATCHED TO LEGAL CLOCKS
Under the federal overpayment rule, an organization has 60 days from the moment it identifies an overpayment to report and return it — retaining it past that window becomes False Claims Act liability. A quarterly audit is a compliance anti-pattern. Detection has to be continuous or the scienter defense collapses.
THE LAWS ALREADY ON THE BOOKS
None of the cases above required the 2026 wave of AI-specific statutes to succeed. They succeeded on the existing regulatory infrastructure — some of it older than the personal computer. Organizations waiting for the AI-law landscape to "settle" are exposed by the laws that already apply.
FALSE CLAIMS ACT (31 U.S.C. § 3729)
Federal anti-fraud statute. Knowing includes reckless disregard and deliberate ignorance — which is why unreviewed AI outputs are in scope. Per-claim penalties plus treble damages. $6.8B recovered in FY 2025 alone. The qui tam provisions make every employee a potential relator.
HIPAA (45 C.F.R. Part 160 et seq.)
Still the floor. Any AI vendor touching PHI is a business associate. Technology-neutral, which means it doesn't answer the consent-to-record question — CIPA does.
CIPA & CMIA (Cal. Penal §§ 630–638 · Cal. Civ. § 56)
California's all-party consent wiretapping statute (1967) and medical information confidentiality act (1981). $5,000 per violation under CIPA, no proof of harm required. The dominant enforcement vehicle in the Abridge cases.
CMS MEDICARE ADVANTAGE RULES (CMS-4201-F · CMS-0057-F)
The 2024 MA Final Rule explicitly prohibits an algorithm from being the sole basis for terminating post-acute care. Human reassessment is required. The 2024 Interoperability rule adds metric-reporting obligations for 2025 data.
ONC HTI-1 (45 C.F.R. Part 170)
Federal transparency requirements — "model cards" — for AI in certified health IT. Effective January 1, 2026. The proposed HTI-5 rollback is in its comment period, creating an unstable federal baseline.
STATE AI STATUTES (CO SB24-205 · TX HB 149 · CA AB 489/2013)
Colorado AI Act (June 30, 2026), Texas RAIGA (effective January 1, 2026), and California's trio of 2026 AI laws. Patchwork, evolving, and — based on the December 2025 federal executive order — actively contested in court.
WHY GOVERNANCE-FIRST ORGANIZATIONS ARE ALREADY AHEAD
Here is the part that doesn't make headlines. The organizations we work with that are investing seriously in AI governance are not doing it because they are worried about the next regulation. They are doing it because they have already figured out something the rest of the market is just starting to catch up to: governance is a competitive advantage.
It lets them deploy AI faster, because they know where their guardrails are. It lets them negotiate better vendor contracts, because they know what to ask for. It lets them answer board questions with evidence instead of adjectives. And most importantly, it changes their posture from reactive to evidentiary.
When a regulator, a plaintiff, or a whistleblower shows up, the governance-first organization is not scrambling to reconstruct what the model did six months ago. They have the logs. They have the performance data. They have the documented human review. They have the audit trail. Under the False Claims Act's reckless-disregard standard, that documentation is the scienter defense. It is what separates a defensible AI program from a liability generator.
WHAT CONTINUOUS GOVERNANCE ACTUALLY REQUIRES
The pattern we see in organizations that are getting this right has five components, in order of priority:
One — continuous model monitoring, not point-in-time validation. Every model update, vendor release, prompt change, or business-rule modification is a new risk event. Treat them that way.
Two — documented human-in-the-loop review of every consequential decision, with genuine authority to override the algorithm. If the algorithm wins by default, the oversight is theater.
Three — consent and disclosure workflows that are auditable. Clear, contemporaneous, specific to the recording, and coupled to a functional deletion-on-demand process. If your AI attests to its own consent capture, you have a Sharp HealthCare problem waiting to happen.
Four — red-team testing as a routine production requirement. Not annual. Continuous. Every jurisdiction expanding AI clinical authority without mandating this is building the next Doctronic disclosure.
Five — detection speed matched to legal timelines. Sixty days under the FCA. Two weeks under most state breach statutes. If your quarterly audit is your first line of detection, you are already outside the window.
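The fifth point is concrete enough to sketch. Here is a minimal illustration of tracking the 60-day refund clock; the deadline constant reflects the federal overpayment rule discussed above, while the function names and structure are hypothetical, not a real compliance system:

```python
# Minimal sketch of "detection speed matched to legal timelines":
# once an overpayment is identified, a 60-day refund clock starts.
# Names and structure here are illustrative, not a real system.
from datetime import date, timedelta

REFUND_WINDOW = timedelta(days=60)  # federal overpayment rule

def refund_deadline(identified_on: date) -> date:
    """Date by which an identified overpayment must be refunded."""
    return identified_on + REFUND_WINDOW

def days_remaining(identified_on: date, today: date) -> int:
    """Days left on the clock; negative means the window has lapsed."""
    return (refund_deadline(identified_on) - today).days

identified = date(2026, 1, 5)
print(refund_deadline(identified))                    # 2026-03-06
print(days_remaining(identified, date(2026, 3, 20)))  # -14
```

The arithmetic makes the governance point: a quarterly audit cadence runs roughly 91 days between looks, so an overpayment surfacing just after one audit can blow through the entire 60-day window before the next audit even begins.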
None of this is exotic. None of it requires a new technology stack. It requires treating AI governance as a continuous operating discipline — not a one-time compliance project handed to a committee and filed away.
THE LAWSUITS AREN'T A REASON TO SLOW DOWN ON AI.
They're the clearest argument we've ever seen for why governance-first organizations are about to separate from everyone else. The organizations building this muscle in 2026 will be the ones writing the next decade's playbook — faster deployments, better vendor terms, defensible positions, and AI programs that survive contact with regulators, plaintiffs, and whistleblowers.
If you're doing this work inside your organization right now: you're doing it right. Keep going.
Disclaimer: This resource is provided for informational purposes only and does not constitute legal advice. All case information is drawn from publicly available court filings, DOJ press releases, and reporting by the outlets linked above. Cases remain active unless otherwise noted; allegations in pending litigation have not been adjudicated.