AI in Healthcare

AI Fights Back Against Algorithmic Health Insurance Denials

Rising insurance denials in the US, fueled by AI-powered algorithms, are prompting lawsuits against major insurers like UnitedHealth and Cigna that allege widespread improper claim denials. Very few patients appeal, even though a high share of appealed denials are overturned, underscoring the system’s flaws and how difficult the appeals process is to navigate. New AI tools are emerging to automate appeals, but lasting change requires broader healthcare reform that addresses high costs and ensures equitable access to coverage. Experts emphasize the need for human oversight of automated systems and for industry standardization to reduce denials stemming from administrative errors.

Read More

California Bans AI-Driven Insurance Claim Denials

Senate Bill 1120, the “Physicians Make Decisions Act,” prohibits California health insurers from denying claims based solely on AI algorithms. Prompted by a high rate of claim denials (approximately 26% in California) and concerns about AI misuse, the law ensures human oversight of coverage decisions for medically necessary care. While not banning AI outright, SB 1120 mandates that human judgment remain central, safeguarding patient access to quality care. The Department of Managed Health Care will enforce the law, auditing denial rates and imposing deadlines for authorization decisions, with potential fines for violations. The law is drawing national attention, with other states and Congress considering similar legislation.

Read More

New Law Bans AI-Driven Healthcare Denial by Insurers

The Physicians Make Decisions Act (SB 1120) mandates that licensed healthcare providers, not AI algorithms, make final determinations of medical necessity for treatments in California. The law addresses concerns about algorithmic bias and inaccuracies in insurance claim processing, aiming to prevent harm from AI-driven denials of care. SB 1120 requires physician review of any AI-influenced decision affecting patient care, ensuring human oversight and equitable standards. Effective January 1, 2025, the act sets a national precedent for responsible AI implementation in healthcare.

Read More

Lawsuit Claims UnitedHealthcare Used Faulty AI to Deny Claims

A lawsuit alleges that UnitedHealthcare used an AI tool with a purported 90% error rate to wrongfully deny medically necessary claims, contributing to patient deaths. The company denies that the AI makes coverage decisions, describing it as merely a guidance tool. Even so, UnitedHealthcare’s claim denial rate is reportedly far higher than its competitors’, prompting some hospitals to stop accepting its insurance. The company’s CEO was later killed in a targeted shooting in which the bullets bore inscriptions linked to a book criticizing insurance practices; any connection between the lawsuit and the shooting remains under investigation.

Read More