Investor Statement on AI in Healthcare
While there are many concerning provisions in the “One Big Beautiful Bill Act” (OBBBA), one stands out for us: the provision imposing a 10-year moratorium on state and local regulations of artificial intelligence (AI). This measure, embedded within the broader budget reconciliation package, aims to centralize AI oversight at the federal level, effectively overriding existing state laws and preventing new ones from being enacted. The bill would nullify over 60 existing state-level AI regulations and halt the progress of hundreds of proposed bills attempting to erect sensible guardrails around the use of AI to address issues such as algorithmic bias, privacy breaches, and automated decision-making. If enacted, the ban would have far-reaching impacts on Americans’ human rights and civil rights, and would likely undermine our citizens’ public health well into the future.
Overall, we know that technology companies are working to influence politicians to weaken oversight, through the OBBBA and other means; some technology companies donated over $1 million each to the inauguration fund. Subsequently, the President rescinded former President Biden’s executive order, which had called for balancing the positive impacts of AI’s growth against the need for ethical guardrails and attention to risks such as privacy violations and job loss. This kind of relationship highlights the need for state and local governments, which may be less compromised than federal agencies, to take the lead in enacting common-sense regulations.
Some of the existing regulations that would be rendered useless include laws that protect against AI-generated explicit material, prohibit “deep-fake” content that misleads voters and consumers, require companies to inform consumers when they are interacting with AI systems, and provide other protections. Importantly, many of the state laws establishing comprehensive data privacy protections include provisions that give consumers the right to opt out of certain types of automated decision-making and require businesses to conduct risk assessments before engaging in high-risk automated profiling.
Investors have been calling for AI accountability and transparency by pressing healthcare companies to acknowledge where the risks of algorithmic harm exist and to disclose their plans for preventing, identifying, and mitigating these harms throughout their product lifecycles. As the use of AI in healthcare continues to proliferate, so too do concerns about privacy and the potential for algorithmic bias to negatively impact patient safety, equity of care, and treatment decisions.
Companies that create and market healthcare technologies, such as clinical decision support tools and electronic health records, have a responsibility to ensure that these products don’t unintentionally contribute to or exacerbate healthcare inequities or discriminatory practices. By completely removing a state’s ability to implement practical regulations around the use of these products to safeguard the rights of its constituents, Congress is effectively creating a wild west in which companies will essentially write their own rules, to which we will all be subject.
While much of how healthcare corporations use AI remains hidden from view, the harms have been far more public. Just a few examples include:
- UnitedHealth Group / Optum – AI Algorithm and Denial of Care. Lawsuits alleged that an Optum algorithm was used to systematically deny rehabilitation care to Medicare Advantage patients based on projected outcomes, not clinical evaluations.
- IBM Watson Health – Misleading Cancer Treatment Recommendations. IBM’s Watson for Oncology, marketed as an AI tool for cancer diagnosis and treatment planning, was criticized for producing unsafe or incorrect treatment suggestions.
- Cigna – Algorithmic Claim Denials at Scale. In 2023, ProPublica and The Capitol Forum reported that Cigna doctors used a proprietary algorithm known as “PXDX” to automatically deny thousands of insurance claims without reviewing individual patient records.
- CVS Health / Aetna – AI-Based Fraud Detection and Claim Denials. AI tools used to detect fraud and assess claims have allegedly led to wrongful denials or delays in legitimate insurance claims.
- Teladoc Health – AI Chatbot Safety and Miscommunication Risks. Teladoc and similar virtual care platforms have faced scrutiny over their use of AI chatbots for triage or behavioral health assessments, raising liability and safety concerns.
Said Lydia Kuykendal, Director of Shareholder Advocacy for Mercy Investment Services, “Regulations play a vital role in patient and customer safety. Giving corporations a blank slate to create whatever AI-driven technology they can imagine without having to worry about any consequences is alarming and possibly deadly. In the absence of comprehensive federal regulation, we must allow for common-sense state and municipal oversight of these potentially harmful innovations.”
The risks here are clear and present for healthcare companies choosing to push the boundaries of what AI should and should not do. What happens when something goes wrong? If an AI tool makes an incorrect clinical decision, how is the aggrieved party compensated for their loss? Data privacy issues also abound. We know that every year corporations make billions of dollars selling our data, a problem that becomes even more severe when personal health information is included. Finally, we know that AI systems are trained on publicly available data, data that has proven time and again to carry racial, ethnic, gender, and age biases. There is no question that tools and public health systems built on these data sets will fall prey to the same biases that plague the data used to train them.
When it comes to our health, speed should never come at the cost of safety, fairness, or human dignity. Removing states’ ability to regulate the use of AI to protect their citizens will have serious consequences for both patients and businesses.