AI Is Deciding Whether You Get Care — And Nobody’s Really Reviewing It
The first cases are now in the courts, and some are finding a (limited) way to survive federal preemption.
There’s a pattern I keep seeing in healthcare AI policy — and it’s starting to show up in my studies at law school.
AI systems get deployed in two very different roles. The first is efficiency: AI helps sort, route, organize, surface. It speeds things up. It reduces administrative burden. Nobody gets hurt when it misclassifies a scheduling request.
The second role is consequential: AI makes the call. Approve or deny. Go home or stay admitted. Treatment now or treatment after you appeal for three months.
The problem is that the insurance industry keeps sliding AI from the first role into the second one — and then acting surprised when anyone notices.
The Washington State AI Task Force just noticed.
What the Task Force Found
The Washington State AI Task Force released its Interim Report in December 2025. It’s a substantive document — eight subcommittees, covering everything from K-12 education to law enforcement AI disclosure. The healthcare prior authorization section is the one that should be keeping every healthcare law student up at night.
Here’s the core finding, in plain language:
Insurance companies are using AI “black boxes” to deny, delay, or modify medical care — and patients and providers often have no meaningful way to understand why or challenge it effectively.
The Task Force found three specific risks that deserve your attention:
The black box problem. AI models making prior authorization decisions “function as ‘black boxes,’ making decisions based on complex algorithms that are not transparent to patients, providers, or even payors.” The insurer deploys the tool. The tool says no. Nobody can fully explain why.
The bias problem. AI trained on historical claims data inherits the disparities embedded in that data. If certain populations were historically under-treated, the AI learns that pattern and encodes it as the baseline.
The automation bias problem. This one is insidious. The Task Force specifically called out the risk that humans stop critically evaluating AI outputs — that clinical staff start deferring to whatever the algorithm says because it feels authoritative. The algorithm is never authoritative on clinical questions. It’s a tool. A sophisticated, expensive tool with no medical license.
The Real Problem (And It’s Not the AI)
The AI isn’t the villain here. This is important to understand, and it’s the point most coverage misses.
AI can absolutely be used in prior authorization. It should be. The administrative burden of prior authorization is genuinely broken — a 2021 study by Washington's Office of the Insurance Commissioner found that 75% of health care service codes requiring prior authorization were approved 100% of the time. Think about that. Three-quarters of the things insurers were requiring pre-approval for were never actually going to be denied. That's pure administrative overhead that falls on clinical staff and delays care for patients.
AI that routes and approves requests faster? Great. AI that catches clear-cut approvals and processes them without human review? Makes sense.
The problem is when the same system that approves care is also empowered to deny care — without a licensed clinician actually reviewing the case.
That’s the line. AI can say yes. AI cannot say no.
UnitedHealth's NaviHealth subsidiary built a tool called nH Predict. The tool predicted how many days of post-acute care a patient should need, and staff were allegedly required to follow its outputs under threat of termination, regardless of what the treating physician recommended. When patients and families appealed, roughly 90% of the denials were reversed.
Ninety percent of the denials that got appealed were wrong. Not edge cases. Not close calls. Wrong.
This is now federal litigation. Estate of Gene B. Lokken, et al. v. UnitedHealth Group, Inc., No. 23-cv-03514 (D. Minn.), is a class action brought by the estates and families of Medicare Advantage members who were denied post-acute care coverage. Here’s where it gets legally interesting: the court didn’t let all the claims through. In February 2025, Judge Tunheim held that most state law claims — unjust enrichment, bad faith insurance — were preempted by the Medicare Act because evaluating them would require the court to second-guess coverage determinations already regulated under 42 C.F.R. §§ 422.101 and 422.566.
But two claims survived: breach of contract and breach of the implied covenant of good faith and fair dealing. The reason they survived is something worth sitting with. UnitedHealth’s own Evidence of Coverage documents told members that claim decisions would be made by “clinical services staff” and “physicians.” The court held that asking whether UHC lived up to that specific promise — without touching federal Medicare standards — is a pure contract question that state law can still reach.
The case is now in class-wide discovery, with the court denying UnitedHealth’s motion to bifurcate and limit discovery to the named plaintiffs only. As of September 2025, it’s moving forward.
UnitedHealth's core defense: it denies that nH Predict was ever used to make these coverage decisions at all. The plaintiffs' counter: 90% appeal overturn rates don't happen when licensed physicians are actually reviewing individual cases.
Cigna had its own version of this problem. The PXDX system allegedly allowed physicians to deny claims without reviewing individual patient files — 300,000 claims denied over roughly two months, at about 1.2 seconds per claim. A lawsuit in California argues this violated state physician-review laws. Cigna disputes the characterization, but the underlying question is the same: Was a licensed clinician making a medical necessity determination, or was an algorithm?
What the Task Force Recommends
The Task Force’s recommendations are clear and, honestly, more careful than I expected:
AI can approve care without human review. AI cannot deny care without human review. This asymmetry is deliberate and important. The Task Force specifically said: “AI systems may be used to facilitate approving prior authorization requests or to overturn prior denials without additional human review.” But any adverse determination — any denial, delay, or modification based on medical necessity — must be made by a licensed physician or licensed health professional working within their scope of practice.
Clinical criteria must match. AI systems used by payors must apply the same clinical review criteria that entity-employed licensed health professionals use. You can’t have the AI applying looser or differently weighted criteria than what a human reviewer would apply.
Mandatory impact assessments and independent auditing. Payors must conduct periodic assessments to identify and mitigate unfair disparate impacts, keep clinical guidelines current, and measure administrative burden on providers and patients. And critically — independent auditors, not the payor’s internal team, should assess transparency, accuracy, and compliance.
Plain-language explanations for denials. When AI is used to support a denial, the payor must provide clear, understandable explanations accessible to both patients and providers, referencing relevant clinical guidelines.
Where This Sits Federally (And Why It Matters That It Isn’t Settled)
Here’s the frustrating context: the federal government is not moving on this quickly.
The Improving Seniors' Timely Access to Care Act — the PRIOR Act — has been sitting in Congress for years. The 2025 reintroduction (S. 1816 in the Senate, with a companion bill in the House) has 64 Senate cosponsors and 248 House cosponsors, which is extraordinary bipartisan support. It passed the House in a prior session. And it has still not become law, largely due to cost estimates and industry opposition.
What did pass was a January 2024 CMS final rule (CMS-0057-F) that requires electronic prior authorization APIs for Medicare Advantage, Medicaid, CHIP, and qualified health plans, with a deadline of January 1, 2027. Beginning January 1, 2026, the same rule also requires specific denial reasons drawn from a standardized list and sets decision timelines: 7 calendar days for standard requests, 72 hours for expedited ones. Annual public reporting on denial rates begins in 2026 as well.
That’s meaningful progress. But CMS’s rule addresses process — speed, electronic format, denial reasons. It doesn’t directly address who makes the adverse determination or whether AI can substitute for a licensed clinician.
Washington State is attempting to fill that gap. The Task Force’s recommendations, if enacted by the legislature, would create one of the clearest state-level standards for AI in prior authorization in the country.
The Trump Administration’s federal approach is explicit deregulation. The Task Force report says it plainly: federal regulators are “emphasizing deregulation, while state regulators have explored new legislation to address specific AI risks.” This regulatory gap is exactly why Washington’s action matters — and why you’re going to see more of it from state AGs and legislatures in the next two to three years.
Why This Matters for Law Students
I spend a lot of time in this space thinking about where AI is being deployed to help humans do their jobs better versus where it’s being deployed to replace human judgment in ways that affect other people’s lives.
Prior authorization is the clearest example I know of AI crossing that line — and then the industry responding to scrutiny by arguing that the AI is just a tool, not actually making the decisions. The nH Predict lawsuit allegations suggest otherwise. The PXDX timelines suggest otherwise. The 90% appeal overturn rate suggests otherwise.
A denial that an algorithm generates and a human rubber-stamps in 1.2 seconds is not a human determination. It’s an algorithmic determination with paperwork on top.
The Lokken litigation also reveals something that matters for how future cases will be fought. The Medicare Act’s broad preemption clause — covering “any law or regulation” — swallowed most state law theories. Bad faith. Unjust enrichment. Gone, because evaluating them would require courts to revisit coverage determinations that federal law already regulates. The only way plaintiffs kept anything was by anchoring to UnitedHealth’s own contractual language: you promised decisions would be made by physicians. That’s the surviving hook.
This is why the Task Force’s legislative recommendations matter so much. When courts can’t reach these practices through existing state tort law, legislation becomes the backstop. If Washington — and other states — enact the standards the Task Force recommends, they create new legal obligations that aren’t preempted because they’re not regulating coverage standards. They’re regulating who decides — and that’s a different question.
The Task Force got this right. The question is whether the legislature acts on it — and whether other states follow.
For those of you heading toward health law, this is your practice area for the next decade. The questions that flow from AI in prior authorization touch insurance regulation, administrative law, ERISA, state healthcare law, civil rights and disparate impact, and the emerging law of algorithmic accountability. There is a lot of work to be done. We’re going to be busy.
The WA AI Task Force Interim Report is publicly available.
The PRIOR Act (S. 1816) can be tracked on congress.gov.
Estate of Gene B. Lokken, et al. v. UnitedHealth Group, Inc., No. 23-cv-03514 (D. Minn.) — the February 2025 opinion on preemption is at 766 F. Supp. 3d 835.
The Cigna PXDX litigation was filed in California: Kisting-Leung v. Cigna, No. 2:23-cv-01477-DAD-CSK (E.D. Cal.).
CMS-0057-F (the CMS Interoperability and Prior Authorization Final Rule) was issued January 17, 2024.