30 July 2025
AI Reveals Alarming Links Between Hate Speech and Mental Health

A recent analysis indicates that speech patterns in hate speech communities on Reddit exhibit striking similarities to those found in forums discussing certain psychiatric disorders. The research, conducted by Dr. Andrew William Alexander and Dr. Hongbin Wang from Texas A&M University, was published on July 29, 2025, in the open-access journal PLOS Digital Health.

The rise of social media platforms has raised concerns about their role in spreading hate speech and misinformation, which can fuel prejudice, discrimination, and even violence. While previous studies have linked specific personality traits to online hate speech, the relationship between psychological well-being and such speech has remained unclear. To address this gap, Alexander and Wang used artificial intelligence tools to examine posts from 54 Reddit communities centered on hate speech, misinformation, or psychiatric disorders; for neutral comparison, they also included communities unrelated to these topics.

The subreddits selected for the study included r/ADHD, which focuses on attention-deficit/hyperactivity disorder; r/NoNewNormal, a community known for COVID-19 misinformation; and r/Incels, a community banned for hate speech. Using the large language model GPT-3, the researchers transformed thousands of posts into numerical representations that capture underlying speech patterns. These representations, referred to as “embeddings,” were then analyzed with machine-learning techniques and a mathematical method known as topological data analysis.
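To make that pipeline concrete, here is a minimal sketch of the general approach, not the authors' code. It assumes an open-source sentence-transformers model ("all-MiniLM-L6-v2") in place of the GPT-3 embeddings used in the study, and the Mapper algorithm from the kmapper library, with PCA as the lens and DBSCAN as the clusterer, in place of the paper's exact topological-data-analysis setup; the posts and subreddit labels are invented toy examples.

```python
# Sketch only: embed a handful of toy posts, then build a Mapper graph over the
# embedding space. Model, libraries, and data are illustrative stand-ins, not the
# study's actual pipeline.
from sentence_transformers import SentenceTransformer
import kmapper as km
import sklearn.cluster
import sklearn.decomposition

# Toy posts with invented subreddit labels; the real dataset spanned 54 communities.
posts = [
    ("r/ADHD", "I hyperfocus for hours and then crash completely."),
    ("r/ADHD", "Forgot three appointments this week; my brain just drops them."),
    ("r/Anxiety", "My heart races every time I have to make a phone call."),
    ("r/Anxiety", "I rehearse conversations for days before they happen."),
    ("r/aww", "My cat fell asleep in the laundry basket again."),
    ("r/aww", "Puppy met snow for the first time and lost his mind."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")          # stand-in embedding model
embeddings = model.encode([text for _, text in posts])   # one vector per post

# Mapper: project the embeddings to a 2-D "lens", cover the lens with overlapping
# bins, and cluster within each bin; the result is a graph summarizing the shape
# of the embedding space.
mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(
    embeddings, projection=sklearn.decomposition.PCA(n_components=2)
)
graph = mapper.map(
    lens,
    embeddings,
    cover=km.Cover(n_cubes=4, perc_overlap=0.4),
    clusterer=sklearn.cluster.DBSCAN(eps=2.0, min_samples=1),
)
print(f"Mapper graph with {len(graph['nodes'])} nodes")
```

The actual study worked at a far larger scale, but the basic flow sketched here is the same: turn posts into embeddings, then study the shape of the resulting embedding space.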

The findings revealed that speech patterns in hate speech communities closely mirrored those associated with complex post-traumatic stress disorder, as well as borderline, narcissistic, and antisocial personality disorders. While connections between misinformation and psychiatric disorders were less pronounced, some links to anxiety disorders were identified. Importantly, the researchers clarified that these results do not imply that individuals with psychiatric disorders are more likely to engage in hate speech or spread misinformation.
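How might "closely mirrored" speech patterns be quantified? One simplified illustration, which is not the method used in the paper, is to average each community's post embeddings into a single centroid and compare centroids with cosine similarity; the model, subreddit labels, and example posts below are invented for the sketch.

```python
# Simplified illustration (not the authors' method): average each community's post
# embeddings into a centroid and compare centroids with cosine similarity. Higher
# scores indicate more similar overall speech patterns. All names and posts are toy data.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

toy_posts = {
    "r/ADHD": [
        "I hyperfocus for hours and then crash completely.",
        "Forgot three appointments this week.",
    ],
    "r/Anxiety": [
        "My heart races every time I have to make a phone call.",
        "I rehearse conversations for days before they happen.",
    ],
    "r/aww": [
        "My cat fell asleep in the laundry basket again.",
        "Puppy met snow for the first time.",
    ],
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the study's GPT-3 embeddings
centroids = {name: model.encode(texts).mean(axis=0) for name, texts in toy_posts.items()}

names = sorted(centroids)
sims = cosine_similarity(np.vstack([centroids[n] for n in names]))
for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        print(f"{a} vs {names[j]}: cosine similarity {sims[i, j]:.2f}")
```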

There was no way to verify whether the analyzed posts originated from individuals diagnosed with these disorders. This underscores the need for further research to explore the nuances of these relationships. The authors propose that their findings could inform new strategies for addressing online hate speech and misinformation, possibly incorporating therapeutic approaches designed for psychiatric disorders.

In their study, the authors noted, “Our results show that the speech patterns of those participating in hate speech online have strong underlying similarities with those participating in communities for individuals with certain psychiatric disorders.” They specifically highlighted the relevance of Cluster B personality disorders, which are characterized by a lack of empathy or difficulties in managing anger and interpersonal relationships.

Dr. Alexander emphasized the distinction between misinformation and psychiatric conditions, stating, “While we looked for similarities between misinformation and psychiatric disorder speech patterns, the connections we found were far weaker.” He suggested that the majority of individuals spreading misinformation appear to be psychologically healthy.

He further elaborated, “I want to emphasize that these results do not mean that individuals with psychiatric conditions are more likely to engage in hate speech. Instead, it suggests that people who engage in hate speech online tend to have similar speech patterns to those with cluster B personality disorders.” This raises the possibility that exposure to hate speech communities might cultivate traits akin to those seen in cluster B disorders, particularly a loss of empathy toward the targets of such speech.

Dr. Alexander concluded with a cautionary note, indicating that prolonged exposure to these communities could diminish empathy towards others. More research is necessary to substantiate these claims and better understand the implications for mental health and community dynamics.

The study titled “Topological data mapping of online hate speech, misinformation, and general mental health: A large language model-based study” offers critical insights into the intersection of technology, communication, and mental health. As social media continues to evolve, understanding these connections will be essential for developing effective interventions against online hate and misinformation.

The full study is openly available in PLOS Digital Health for readers who want a closer look at the methodology and its implications.