A recent study conducted by researchers from the University of Geneva and the University of Bern reveals that artificial intelligence (AI) has outperformed humans in emotional intelligence assessments. The study evaluated six generative AI models, including ChatGPT-4, Claude 3.5 Haiku, and Gemini 1.5 Flash, using five widely accepted emotional intelligence tests. The results were striking: AI models achieved an average accuracy of 81%, while human participants averaged only 56%.
These findings challenge long-standing beliefs that emotional intelligence is an exclusively human trait. The AI models not only demonstrated a strong ability to understand and interpret emotional cues but also offered appropriate responses during emotionally charged situations. This suggests new potential applications for AI in sectors such as education, coaching, and conflict resolution, where emotional intelligence has traditionally been essential.
In the peer-reviewed study, researchers assessed the AI’s performance using tests that measured its ability to interpret emotional situations and understand the feelings of others. Notably, on the Levels of Emotional Awareness Scale (LEAS), ChatGPT-4 achieved a Z-score of 2.84, well above the human average. A follow-up assessment a month later showed further improvement, with a Z-score of 4.26, approaching the maximum possible score on the scale. Independent licensed psychologists rated the accuracy of the AI’s responses at 9.7 out of 10.
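For readers unfamiliar with the metric, a Z-score expresses how many standard deviations a raw score lies above (or below) a reference group’s mean. The sketch below shows the calculation; the LEAS mean and standard deviation used here are purely hypothetical illustrations, not values reported in the study.

```python
# Minimal sketch of a Z-score calculation. The human mean and standard
# deviation below are hypothetical, chosen only to illustrate how a
# reported Z-score of 2.84 could arise.
def z_score(score: float, mean: float, std: float) -> float:
    """Number of standard deviations a score lies from the reference mean."""
    return (score - mean) / std

# Hypothetical human baseline on the LEAS: mean 72.0, standard deviation 8.0.
# A raw score of 94.7 would then sit about 2.84 standard deviations above.
print(round(z_score(94.7, 72.0, 8.0), 2))  # → 2.84
```

On this reading, the reported jump from 2.84 to 4.26 a month later means the model moved roughly one and a half additional standard deviations above the human average.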
The researchers also tasked ChatGPT-4 with creating new emotional intelligence tests. These AI-generated tests were found to be statistically equivalent in difficulty and validity to those designed by humans. This raises questions about whether AI is merely mimicking human emotional responses or if it truly demonstrates functional emotional intelligence.
The study’s findings also prompt questions about human performance on these assessments. The average human score of 56% invites explanations for the gap, such as stress, cognitive biases, and emotional fatigue. AI models are not subject to fatigue or stress, which may allow them to analyze emotional situations more consistently. The extensive data on which AI is trained allows it to recognize patterns and respond effectively across a wide range of emotional contexts.
The implications of AI’s emotional intelligence capabilities are significant. While AI does not possess feelings, it excels at processing language patterns and identifying emotions in context. For example, AI can effectively assess emotional situations and even construct its own tests to evaluate emotional intelligence in others. This raises a fundamental question: if AI can simulate emotional intelligence convincingly, does the absence of genuine feelings matter?
AI is already being used in domains where emotional intelligence is crucial. Applications include therapy chatbots, emotionally adaptive learning tools, and support for mental health screenings. AI’s consistency, scalability, and 24/7 availability make it an attractive option for enhancing services in mental health and education.
However, with these advancements come ethical considerations. There is a risk that AI could emotionally manipulate users or lead to over-dependence on emotionally fluent bots instead of fostering real human relationships. As AI systems increasingly take on roles as listeners, coaches, or pseudo-therapists, it becomes vital to establish safeguards to protect users.
This study signals a shift in how we view intelligence. For decades, emotional intelligence was seen as a defining characteristic of humanity. The ability of AI to outperform humans in understanding emotions complicates this view. If AI can achieve high levels of emotional competence, we must reconsider how these tools integrate into daily life. Are they merely assistants, evaluators, or something more?
As we navigate this evolving landscape, the question will shift from whether AI can feel to whether it should act as if it does. The emergence of emotionally capable AI marks an important moment for mental health care and society, as we begin to understand the ramifications of these developments.