Humana, UnitedHealth: Class Action Suits over AI Use in Coverage Determinations

It was only a matter of time before litigation came forward regarding the use of Artificial Intelligence (AI), primarily in Medicare Advantage plans, where coverage denials/determinations are at issue. This week, a class action suit was filed in the U.S. District Court for the Western District of Kentucky against Humana. A link to the suit is here: https://www.scribd.com/document/692182281/Humana-AI-NaviHealth-MA-lawsuit

In April, I wrote about the advancing use of AI in coverage determinations/claims denials and the associated risks, including the disconnect between AI-driven determinations and Medicare coverage requirements, especially in light of the Jimmo decision. That post is here: https://rhislop3.com/2023/04/17/medicare-claims-audits-denials-and-ai/

The crux of the Humana suit is stated in the first paragraph of the introduction:

“This putative class action arises from Humana’s illegal deployment of artificial intelligence (AI) in place of real doctors to wrongfully deny elderly patients care owed to them under Medicare Advantage Plans. The AI Model, known as nH Predict, is used to override real treating physicians’ determinations as to medically necessary care patients require. Humana knows that the nH Predict AI Model predictions are highly inaccurate and are not based on patients’ medical needs but continues to use this system to deny patients’ coverage.”

The suit comes from two SNF patients who allege their rehab was cut short and their coverage (Medicare benefits) was limited because of the use of nH Predict, the AI model. The claim alleges that “Humana knows that the nH Predict AI Model predictions are highly inaccurate and are not based on patients’ medical needs but continues to use this system to deny patients’ coverage.” The physician of one of the plaintiffs, Barrows, was recommending further rehabilitation treatment and that she stay off her feet for another month at the time coverage was cut. Recall, per the Jimmo decision, progress is not a determining factor for coverage; the coverage determinant is “medical necessity” for skilled services. Medicare Advantage plans are required by law to provide no less than the same level and amount of coverage as fee-for-service Medicare.

In November, a similar class action suit was filed against UnitedHealth over the use of algorithms to deny rehabilitation services to patients. The suit alleges that UnitedHealth pressured employees to follow an algorithm, which predicts a patient’s length of stay, to issue coverage denials to patients within Medicare Advantage plans. The UnitedHealth suit is available here: class-action-v-unitedhealth-and-navihealth

What is interesting is that both suits allege that the use of AI, specifically the nH Predict system, creates coverage denials with a high error rate AND that both insurers knew this. The UnitedHealth suit alleges that the insurer knew the algorithm had an extremely high error rate and denied claims knowing that only a small percentage of patients (0.2%) would file an appeal of the decision. The complaint further alleges that the AI system, nH Predict, has a 90% error rate, a figure based on the percentage of denied claims that are overturned, internally or via administrative proceedings.
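For readers trying to follow the math the complaints appear to rely on, here is a minimal sketch with hypothetical numbers. The denial count below is a placeholder, not a figure from either filing; only the 0.2% appeal rate and the ~90% overturn rate come from the allegations themselves.

```python
# Hypothetical illustration of how the complaints frame the "90% error rate":
# of the small share of denials that are appealed, roughly nine in ten are
# overturned, and the plaintiffs treat that overturn rate as the error rate.
# The denial total is a made-up placeholder, not data from the lawsuits.

denials = 10_000        # hypothetical number of nH Predict-driven denials
appeal_rate = 0.002     # complaint: only ~0.2% of patients appeal
overturn_rate = 0.90    # complaint: ~90% of appealed denials are reversed

appealed = denials * appeal_rate        # 20 denials actually contested
overturned = appealed * overturn_rate   # 18 of those reversed

print(f"Appealed: {appealed:.0f} of {denials} denials")
print(f"Overturned on appeal: {overturned:.0f} ({overturn_rate:.0%})")

# The suits' inference: if ~90% of contested denials are wrong, and only 0.2%
# of denials are ever contested, the vast majority of erroneous denials stand.
```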

In May, the Senate Committee on Homeland Security and Governmental Affairs sent a letter demanding that the nation’s largest Medicare Advantage plans (Humana, UnitedHealth) provide more transparency on how AI is used in coverage decisions.

Additional calls for transparency and accountability continued into November, when 30 House Democrats urged CMS to investigate the ways algorithms are being used in healthcare claims management and coverage decisions.
