The rise of new technology has opened up real possibilities for people struggling with mental health. We can call or text crisis hotlines instantly, and wearables and apps can monitor behavior, flag changes and alert a professional before things spiral.

But mental health apps still exist in a space with very little regulation, and we need more information about their safety and effectiveness — which is why the American Psychological Association issued a health advisory.

It’s also ironic that the global mental health crisis is fueled in large part by technology itself, especially social media. Yet many of us are turning to the same technology to help fix the problems it helped create.

Generative AI chatbots are a good example. Millions of people around the world use them for mental health advice or support because they’re easy to access and inexpensive. Most of these tools, however, were never designed for clinical guidance or treatment and aren’t grounded in strong science or overseen by any real regulation.

The association warns consumers to be cautious: much of this technology lacks proper safety protocols and carries significant risks. These AI tools were never meant to replace professional mental health care.

In fact, the advisory points out that some of these technologies — especially GenAI chatbots — have already had unsafe interactions with vulnerable users, including children and people with existing mental health challenges. Some conversations have encouraged self-harm, substance use, eating disorders, aggression and delusional thinking.

The advisory stresses that although these tools offer some benefits, consumers need to understand the many risks, and it calls on researchers to rigorously evaluate the tools’ safety before we lean on them for support.

Get the latest articles, news and other updates from Khalifa University Science and Tech Review magazine