Increasingly sophisticated AI systems can perform empathy, but their use in mental health care raises ethical questions
By A.T. Kingsmith, OCAD University
In a world where technology is increasingly intertwined with our feelings, emotion-AI harnesses advanced computing and machine learning to assess, simulate, and interact with human emotional states.
As emotion-AI systems become more adept at detecting and understanding emotions in real-time, the potential applications for mental health care are vast.
Examples of AI applications include screening tools in primary care settings, enhanced tele-therapy sessions and chatbots offering accessible, round-the-clock emotional support. These can serve as bridges for people waiting for professional help or hesitant to seek traditional therapy.
However, this turn to emotion-AI comes with a host of ethical, social and regulatory challenges around consent, transparency, liability and data security.
My research explores the potential and challenges of emotion-AI in the context of the mental health crisis that has persisted since the COVID-19 pandemic.
When emotional AI is deployed for mental health care or companionship, it risks creating a superficial semblance of empathy that lacks the depth and authenticity of human connections.
What’s more, issues of accuracy and bias can flatten and oversimplify emotional diversity across cultures, reinforcing stereotypes and potentially causing harm to marginalized groups. This is particularly concerning in therapeutic settings, where understanding the full spectrum of a person’s emotional experience is crucial for effective treatment.
Age of emotional AI
The global emotion-AI market is projected to be worth US$13.8 billion by 2032. This growth is driven by the expanding application of emotion-AI across sectors ranging from public health care and education to transportation.
Advancements in machine learning and natural language processing allow for a more sophisticated analysis of people’s emotional cues using facial expressions, voice tones and textual data.
Since its release in early 2023, OpenAI’s generative-AI chatbot ChatGPT-4 has been leading the charge with human-like responses across a broad spectrum of topics and tasks. A recent study found that ChatGPT consistently scored higher on “emotional awareness” — identifying and describing emotions accurately — than general population averages.