Health Technology and Algorithmic Harm

Companies that create and market healthcare technologies, such as clinical decision support tools and electronic health records, have a responsibility to ensure such tech is built to be equitable and to reduce—if not eliminate—potential harms to people. Despite several attempts by government agencies to enact accountability policies, the tech industry’s impact on the healthcare sector remains largely unregulated, exacerbating inequities among BIPOC communities, low-income communities, and Queer and gender non-conforming communities. ICCR members call for AI accountability and transparency by pressing companies to acknowledge where risks of algorithmic bias exist, and to disclose their plans for preventing, identifying, and mitigating these harms throughout their product lifecycles.

Current Initiatives

As part of a newly launched campaign, our members are engaging a select number of companies to ensure that they are actively developing policies and processes to prevent harms created by algorithmic and machine learning bias that may result in adverse health outcomes for certain groups of people.

Targeted sectors include:

  • Medical devices and diagnostics
  • Health insurers
  • Health technology
  • Industrials


Partnership on AI’s Responsible Practices for Synthetic Media

Gender Shades: Uncovering Gender and Skin-Type Bias in Commercial AI Products

