Artificial Intelligence (AI) and Health Project Coordinator Janneke van Oirschot explains how the European Medicines Agency (EMA) aims to tackle the challenge of AI in medicines regulation, and what it should not forget.

In a world in which AI is seeping into every space, few sectors hold as much potential, or as many complicated risks, as medicine. Thankfully, the EMA is taking a forward-looking regulatory approach that aims to match the complexity and scale of the challenge. It has launched several initiatives to address the opportunities and risks AI brings to regulatory science, including the HMA/EMA Big Data Steering Group, a reflection paper on AI in the medicinal product lifecycle, and a multi-annual AI workplan. The EMA’s two-pronged strategy focuses on using AI to improve regulatory processes and on governing its use in medicines research and development (R&D). This strategy is laying essential groundwork for the responsible use of AI while protecting public health.

The path ahead for the EMA, however, is a perilous one. AI technologies are diverse, and the risks vary significantly depending on the technology in question and the context in which it is used. The sheer variety of AI applications in medicine demands a nuanced, granular approach, grounded in technical expertise across regulatory science, biomedicine, data science, and machine learning. That expertise, combined with legal, ethics, and fundamental rights expertise, should ensure that AI applications comply with regulatory frameworks and safeguard patient rights.

Public confidence in AI-driven medicine hinges on the EMA’s ability to guarantee that AI applications, especially those with high-risk profiles, receive appropriate scrutiny. To that end, AI systems that pose risks to patient safety and regulatory integrity must be subject to public oversight.

While claims abound that AI-assisted medicines R&D is having a major impact, it remains unclear just how many medicines available on the European market today were developed with the assistance of AI, what types of AI were involved, and which AI applications in use pose the greatest risk to regulatory science or patient safety. Without clarity on these issues, patient trust and broader public acceptance remain at risk. Transparency is therefore urgent: as AI’s influence on medicines R&D inevitably expands, so too will the need for public accountability.

Yet the challenges of AI in this field don’t end with the trustworthy use of the technology. Beyond the regulator’s purview lie wider societal implications, including questions of health equity. As medicine becomes more personalised, the risk grows that AI systems will disproportionately benefit certain populations, leading to gaps in treatment and unequal access across demographic groups. The potential for AI to amplify biases and inequities within healthcare further underscores the importance of developing frameworks to assess and mitigate such impacts.

The EMA is taking bold steps, but its success will depend on transparency, public trust, and frameworks for health equity. With these elements at its core, the EMA’s approach to AI governance can help steer the future of medicine toward better outcomes for patients and public health.

To find out more about AI in medicines regulation, visit our AI project page.