On 22 May 2023, the European Union (EU) Health Policy Platform Thematic Network 'Navigating Health Inequalities in the EU through Artificial Intelligence (AI)' published its final Joint Statement. The statement focuses on the actions needed to benefit from the potential of AI in health while protecting key populations. It is endorsed by a diverse group of over 30 organisations and individuals. This work is a collaboration between the Centre for Artificial Intelligence: Social & Digital Innovations – Brunel University London and Health Action International, together with network partners.


The use of AI in the health sector is growing rapidly. While AI in health has shown demonstrable benefits in specific areas, such as health data management and the diagnosis of cardiovascular diseases and certain types of cancer, there is growing evidence that, in the absence of an appropriate legislative or governance framework, AI poses risks to patients’ health, wellbeing, and fundamental rights. These risks fall especially on key populations, who may face specific vulnerabilities and multiple, layered patterns of inequality related to age, gender identity, sexual orientation, cultural identity, ethnicity and race, (digital) literacy, disability and (mental) health status, and residence status. Because these populations may already face barriers and inequalities in accessing healthcare, AI has the potential to increase health inequities in the EU.


The Joint Statement highlights specific concerns about health AI, which include:

  • AI systems in healthcare may be trained with sub-optimal-quality health data.
  • AI used in healthcare has potential for life-saving innovation but may pose health risks, especially for key populations.
  • In the absence of strong safeguards, AI used in healthcare may pose risks for erosion of privacy and data protection rights of patients.
  • AI used in healthcare may pose risks for autonomy of patients.
  • AI used in healthcare has the potential to address differential access to health but may also pose risks for furthering healthcare inequalities.
  • Possible lack of transparency of AI in healthcare could further exacerbate unethical practices.


The Joint Statement calls for a number of general and specific measures to be taken by the EU institutions, including:

  • Introduce safeguards against unethical uses of AI in healthcare.
  • Ensure robust protection of personal data and confidentiality.
  • Set up a public EU-wide system to record the relevant usage of health AI systems and communicate their use to the patients concerned.
  • Strengthen the market approval procedure for medical devices using AI.
  • Require all health AI systems used in the EU, not merely high-risk systems, to be registered in the EU public database.
  • Require all health AI systems to undergo an impact and ethical assessment and be subject to oversight.
  • Strengthen the protection of personal health data in the EU.
  • Invest in the development of education in digital literacy.
  • Improve the infrastructure, opportunities, means, access and services that bring innovative treatments and therapies to all those who need them across Europe.
  • Raise awareness among duty-bearers and health practitioners, and implement professional training, on the risk of bias in training data and the dangers this poses for key populations.
  • Improve the breadth and quality of datasets used for AI in healthcare.

For more information on key populations, concerns and recommendations, and for a list of endorsing organisations and individuals, please see the Joint Statement.

We would like to thank all who contributed to and supported the statement. For further information about the Joint Statement, please contact Janneke van Oirschot (Janneke@haiweb.org).