When People Become Datapoints: The potential impact of AI in mental healthcare

Artificial intelligence (AI) holds great promise for transforming mental healthcare. From personalised treatment plans to early detection of mental health conditions, AI could make mental health services more accessible and effective. AI systems are already being developed for a range of applications, including diagnostic assessment, therapeutic support (such as chatbots), mental health monitoring, and educational tools aimed at promoting mental health literacy. These applications span both clinical and non-clinical settings, addressing a broad spectrum of needs, from depressive and anxiety disorders to non-medical issues such as loneliness. And yet, despite all its potential, AI introduces new risks that extend beyond individual patients to broader societal concerns, raising questions about equity, safety, and ethics.

This policy brief outlines potential risks of AI in mental healthcare, which can be identified at three levels. At the individual level, concerns include misdiagnosis, inappropriate treatment recommendations, and privacy breaches. At the collective level, issues such as biased datasets, accessibility barriers, and the marginalisation of vulnerable groups come to the forefront. At the societal level, challenges including over-surveillance, erosion of trust in healthcare, and the commodification of mental health services emerge, revealing broader implications for equity and justice.

As well as expanding on the risks at these three levels, this brief gives a number of recommendations for action to address them, including:

  1. Ethical Impact Assessments
  2. Algorithmic Accountability Laws
  3. Community Advisory Boards
  4. Public Awareness Campaigns
  5. Preventing Commodification

To read the full findings and recommendations, download the policy brief here.