AI and algorithmic decision tools in healthcare may create gaping digital divides and threaten health equity, placing marginalised groups at particular risk. Hannah van Kolfschooten, Pin Lean Lau, and Janneke van Oirschot sound the alarm and call for the EU to act now.

Imagine you need critical heart surgery and your life is in the surgeon’s hands. Will the outcome be affected by your (biological) sex? The answer is yes: women have an increased risk of short-term mortality after cardiac surgery compared to men. Now, imagine you need a liver transplant. Could racialised characteristics affect whether you get the transplant? Yes: black patients with chronic liver failure are less likely to be placed on transplant waiting lists. Does your monthly income influence waiting times for an ambulance? Again, yes: socioeconomic status is related to waiting times in emergency care. Health inequity is ubiquitous: in reality, social, economic, demographic, and geographic circumstances determine how healthy one is.

Throwing artificial intelligence (AI) into the mix will make the situation worse. The global market for AI in healthcare is projected to reach 164.10 billion USD by 2029, and AI tools are already used to allocate medical resources, make diagnoses, and recommend treatments in healthcare settings around the globe.

Using AI technologies in healthcare will reinforce deeply rooted systemic and societal patterns of bias and discrimination. AI can cause health discrimination in three ways. (1) AI systems are trained on ‘biased’ data that, for example, underrepresents certain population groups or encodes harmful stereotypes and patterns of discrimination. There are many documented instances of racial bias in health-sector AI tools, e.g., AI skin cancer diagnosis performing worse for persons of colour. (2) The way AI systems are designed can exclude certain population groups from accessing healthcare, for example because of digital literacy levels or language barriers. Finally, (3) the way AI systems are used can be unfair too, for example when only the affluent or large, well-funded hospitals can afford them.
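
To make mechanism (1) concrete, the toy sketch below shows how a model fitted mostly to one group’s data can end up performing worse for an under-represented group. Everything in it is a made-up assumption for illustration only, the simulated ‘biomarker’, the baseline shift between groups, and the 95/5 split; it does not model any real clinical AI system.

```python
# Illustrative sketch only: synthetic data, not a real clinical model.
# A single decision threshold is fitted on data that is 95% group A,
# then evaluated separately on each group.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, shift):
    """Toy biomarker: disease raises the value by 2.0, but the healthy
    baseline differs by `shift` between population groups."""
    disease = rng.integers(0, 2, n)                       # 0 = healthy, 1 = diseased
    biomarker = shift + disease * 2.0 + rng.normal(0, 1, n)
    return biomarker, disease

# Training data: 95% group A (baseline 0.0), 5% group B (baseline 1.0).
xa, ya = simulate(950, shift=0.0)
xb, yb = simulate(50, shift=1.0)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Model": the single threshold that maximises accuracy on the training data.
thresholds = np.linspace(x_train.min(), x_train.max(), 200)
accuracies = [((x_train > t).astype(int) == y_train).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accuracies))]

# Evaluate on fresh data from each group: the under-represented group
# gets systematically worse performance from the very same model.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (under-represented)", 1.0)]:
    x_test, y_test = simulate(5000, shift)
    accuracy = ((x_test > best_t).astype(int) == y_test).mean()
    print(f"{name}: accuracy {accuracy:.2f}")
```

The same dynamic, a model tuned to the majority of its training data, underlies real-world findings such as the skin cancer example above.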

Disadvantaged and marginalised groups are already more vulnerable to the risks of AI discrimination. This is not only because algorithms may simply not work as well when the data does not represent one’s skin colour, or because one does not own a smartphone and so cannot access the results of a medical examination. It is also because the potential harms of health AI reach further for these groups than for others. For undocumented persons, a privacy infringement by a health AI tool may lead to an unwanted confrontation with law enforcement. For elderly persons in care homes, AI tools for health monitoring may further limit their autonomous decision-making power. People experiencing mental health issues may be offered the “easy fix” of AI therapy, which can deepen anxiety and loneliness.

In the EU, we currently operate in a regulatory wild west, without protection against the intersectional risks of health AI. This may benefit private, profit-driven companies, but patients from marginalised groups pay the price. The EU has extensive regulatory frameworks for medical devices and data protection, and is developing one for AI systems, but none of these is adapted to the multitude of harms for which health AI will be responsible. If we do not sound the alarm now, AI can and will widen health inequities.

What should be done? EU institutions, Member States, and other actors in health must take concrete measures, endorsed by a diverse group of stakeholders within civil society. These include:

  • Rules that ensure robust protection of personal data and confidentiality, and that strengthen the protection of personal health data in the EU.
  • Fostering ways to minimise the risks of discrimination and bias, such as: i) requiring transparency and disclosure about how algorithms were built; ii) assessing the impact of potential biases and abuses resulting from algorithms; iii) assessing the quality of all data collected (including how the data was collected, labelled, and used); iv) ensuring algorithms can be meaningfully explained to allow informed decisions.
  • Improving the breadth and quality of datasets for AI in healthcare to counter algorithmic bias and the under-representation of key population groups, especially intersectional ones, in research literature and data, whilst acknowledging that risks can still occur for people with unique and rare conditions; a minimal representation check of this kind is sketched after this list.
  • Training for duty-bearers and health practitioners on the risks of bias in AI and the dangers for key populations.
  • Stronger market approval procedures for medical devices using AI.
  • A public, EU-wide system to record the relevant use of health AI systems and communicate that use to the patients concerned.
  • Mandatory impact and ethics assessments of all health AI, including assessment of fundamental rights impacts, subject to competent and regular oversight.
  • Improved infrastructure, opportunities, means, access, and services so that innovative treatments and therapies reach all who need them across Europe.
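
As an illustration of the data-quality and representation points above, a first-pass check can be as simple as comparing each group’s share of a training dataset with its share of the population the tool is meant to serve. The group labels, counts, reference shares, and the 0.8 flagging threshold below are hypothetical assumptions, not recommended values.

```python
# Illustrative sketch only: hypothetical dataset and reference shares.
# Flags groups whose share of the training data falls well below their
# share of the population the AI tool is meant to serve.
from collections import Counter

# Hypothetical group labels attached to the records in a training dataset.
dataset_groups = ["A"] * 820 + ["B"] * 150 + ["C"] * 30

# Hypothetical reference shares, e.g. drawn from census or registry data.
population_share = {"A": 0.70, "B": 0.20, "C": 0.10}

counts = Counter(dataset_groups)
total = sum(counts.values())

for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    ratio = observed / expected
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"group {group}: {observed:.1%} of data vs {expected:.0%} of population -> {flag}")
```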

These are some of the steps that would pave the way for health AI which works equitably for everyone.

Hannah van Kolfschooten researches patients’ rights protection in the regulation of AI at the Law Centre for Health and Life, University of Amsterdam. She is an independent consultant for the non-profit Health Action International.

Dr. Pin Lean Lau is a lecturer in Bio-Law at Brunel Law School, and the Centre for AI: Social & Digital Innovations. Her research broadly encompasses the intersections of law and human rights, society, and emerging technologies.

Janneke van Oirschot is a researcher at the independent non-profit Health Action International and coordinator of its AI and Medicines programme.