The HAI team took to Amsterdam’s Vondelpark to speak to members of the public about artificial intelligence (AI) in mental healthcare. It was fascinating to hear different opinions on an issue of growing societal importance, and to learn to what extent the public are comfortable (or not) with the use of AI in mental healthcare. In our video below, our AI and healthcare expert Dr. Hannah van Kolfschooten responds to the public’s comments, debunking some myths and providing up-to-date information.

Recent years have seen a marked increase in the development and use of AI in healthcare, including in the mental healthcare sector. AI may be used in many different ways across the mental healthcare spectrum, from diagnosis and monitoring to education and therapy.

Some applications may be beneficial: chatbots can improve accessibility, reaching people who might not otherwise access any form of care. AI may also improve the efficiency of care and reduce administrative and financial costs on the healthcare provider side.

However, as Dr. Van Kolfschooten and members of the public note, it is not without risks. The deployment of any AI system in mental healthcare must be accompanied by stringent oversight to safeguard patients’ health, as well as the privacy of their data.

Users who turn to chatbots for help with their mental health risk receiving misleading information, or even misdiagnoses. This could exacerbate feelings of distress and delay the pursuit of professional care.

Chatbots, of course, rely only on the information users provide, namely the words typed into their system. As members of the public highlighted, this raises questions about what a chatbot will miss: the full contextual understanding, physical cues, and other signs that a traditional therapist would be able to perceive.

Another essential point in mental healthcare is trust. A patient must be able to trust their healthcare provider and be certain that their private information remains between them. What happens when that provider is a chatbot? Here, too, there are clear privacy risks: sensitive information shared with chatbots could be subject to breaches or unauthorised sharing.

From a wider perspective, the use of chatbots for mental healthcare also raises broader societal questions. Would it lead to, or normalise, more dehumanised care? Could it result in a trend of prioritising time-saving and cost-saving over patient care?

Ultimately, one thing is clear: AI must be developed and deployed in ways that enhance care, not replace the humans who provide it.

Read our fact sheet on mental health and AI for more information.