How AI is shaping the management of severe mental illness

June 25, 2025

By Professor John Ainsworth, Director and Co-Founder, CareLoop Health  

Artificial Intelligence has firmly secured its place in the management of mental health in recent years, particularly through digital therapy solutions. The use of AI-powered chatbots to deliver cognitive behavioural therapy (CBT) is growing, helping to provide fast and accessible treatment that eases rising demand and growing waitlists for those who desperately need help.

However, the use of AI to help manage severe mental illness (SMI) is still in its infancy. At CareLoop Health, we’re already using AI to power our platform in simple ways, and we’re undertaking significant research to develop more sophisticated AI methods to prevent relapse in people living with psychosis.  

Using technology to collect data from people with SMI brings unique challenges – they may find it harder to self-report data and have less trust in technology compared with those experiencing other, less severe mental health issues. Our research at the University of Manchester seeks to fully understand these challenges and how we can overcome them, and to anticipate how the future inclusion of AI in these platforms might affect them.

Predicting relapse through multimodal data 

A key question is whether we can use multimodal data to improve relapse prediction in SMI. Personal sensing can collect physiological and behavioural data from smartphones and wearables without requiring the user to do anything. If we can leverage this data with AI algorithms, we can minimise the reliance on self-reporting, particularly at times when that is a challenge. This passive approach to data collection could also yield a more consistent data stream and, in turn, more reliable relapse prediction.
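
To make this concrete, below is a minimal sketch of how a handful of passively sensed features might feed a relapse-risk classifier. The feature names, example values, and decision threshold are illustrative assumptions only – they are not the models being developed in CONNECT or used in CareLoop's platform.

```python
# Illustrative sketch: scoring relapse risk from passive multimodal features.
# All feature names, data, and the threshold below are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Each row is one person-day of passively collected features, e.g.
# [resting heart rate, sleep hours, step count, outgoing calls, screen unlocks]
X_train = np.array([
    [62, 7.5, 9000, 5, 40],
    [70, 6.0, 4000, 2, 85],
    [65, 7.0, 8000, 4, 50],
    [78, 4.5, 1500, 0, 120],
])
y_train = np.array([0, 1, 0, 1])  # 1 = relapse within the follow-up window

# Standardise the features, then fit a simple classifier.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Score a new day of passive data and flag elevated risk for clinical review.
today = np.array([[75, 5.0, 2000, 1, 110]])
risk = model.predict_proba(today)[0, 1]
if risk > 0.7:  # threshold chosen purely for illustration
    print(f"Elevated relapse risk ({risk:.2f}) - flag for clinical review")
```

In practice a model like this would be trained on far richer longitudinal data and validated clinically; the point of the sketch is simply that passive signals can be scored without asking the person to report anything.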

At the University of Manchester, the CONNECT study, funded by The Wellcome Trust and led by Professor Bucci, aims to assess the effectiveness of passive data collection in individuals with SMI. The study combines physiological measurements from wearable devices with behavioural data collected via smartphones (communication patterns, social interactions). We expect this passive data collection to reduce the burden of self-reporting and allow us to predict relapse more accurately, and the CONNECT study will help quantify that impact.

Overcoming hesitancy 

The ethics and trustworthiness of AI are hot topics in the news, workplaces, and everyday life. For people living with SMI, these concerns can be even more pronounced, particularly for those experiencing symptoms like paranoia, hallucinations, and delusions often associated with schizophrenia. 

We know that a small proportion of individuals with conditions like schizophrenia may develop paranoia about remote monitoring technologies. However, a previous study showed that active symptom monitoring was safe and acceptable among people with SMI, with 45% of the eligible sample agreeing to enter the trial and 90% of users continuing to use it regularly at three months. This makes it important to identify who would benefit from interventions like this and who would not.

The closed-loop model 

Another interesting future development of AI in SMI is the potential to combine data collection, symptom prediction, and therapeutic intervention in what’s called a “closed-loop model”: 

  1. The system collects data passively from the patient 
  2. AI algorithms predict what symptom domains are getting worse (e.g., hearing voices, anxiety, hopelessness, depression) 
  3. The system automatically delivers targeted CBT content specific to those symptoms 
  4. This creates what researchers term a “just-in-time adaptive intervention” 

This approach effectively customises therapeutic content based on detected symptoms, potentially without direct therapist involvement. However, this work is still nascent and there are serious considerations around how far the technology can go; for example, with the therapist effectively removed from the loop, how can we ensure patient safety and be certain that the AI will respond appropriately in scenarios such as thoughts of self-harm or suicidal ideation?
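
As a rough illustration of the pattern, the sketch below strings the four steps together, with placeholder heuristics standing in for trained prediction models and a hard-coded content map in place of a real CBT library. None of the names, rules, or content here reflect an actual CareLoop implementation.

```python
# Toy sketch of a closed-loop, just-in-time adaptive intervention.
# Every rule and name below is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class PassiveData:
    sleep_hours: float
    outgoing_calls: int
    night_time_phone_use: float  # hours of use between midnight and 6am

# Map each symptom domain to targeted CBT content (placeholder library).
CBT_CONTENT = {
    "hearing_voices": "Coping strategies module: voices",
    "anxiety": "Breathing and grounding exercises",
    "hopelessness": "Behavioural activation module",
}

def predict_worsening_domains(data: PassiveData) -> list[str]:
    # Step 2: placeholder heuristics standing in for trained AI models.
    domains = []
    if data.night_time_phone_use > 2.0 and data.sleep_hours < 5:
        domains.append("hearing_voices")
    if data.outgoing_calls == 0:
        domains.append("hopelessness")
    return domains

def closed_loop_step(data: PassiveData) -> list[str]:
    # Steps 3-4: deliver content matched to the predicted symptom domains.
    return [CBT_CONTENT[d] for d in predict_worsening_domains(data)]

# Step 1 (passive collection) is assumed to have produced this record.
print(closed_loop_step(PassiveData(sleep_hours=4.0, outgoing_calls=0,
                                   night_time_phone_use=3.5)))
```

A real system would also need explicit escalation paths to a clinician, which is exactly the safety question raised above.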

Beyond research 

While we continue our research and wait for the outcomes of studies like CONNECT, we are continually working to make these systems reliable, trustworthy, explainable, and free from bias, and – crucially – to communicate these qualities to patients who may have additional, specific concerns and challenges.
