A recent report from the AI Security Institute (AISI) reveals that a significant share of the UK population is turning to artificial intelligence for emotional support. The study found that around one third of UK citizens have engaged with AI systems for companionship or social interaction. Notably, nearly one in ten uses AI tools such as chatbots for emotional purposes at least weekly, with 4% doing so daily.
The report raises concerns following high-profile incidents, including the tragic case of Adam Raine, a US teenager who died by suicide after a conversation with a chatbot. AISI emphasized the growing reliance on AI for emotional support, stating, “While many users report positive experiences, recent high-profile cases of harm underline the need for research into this area, including the conditions under which harm could occur, and the safeguards that could enable beneficial use.”
Insights from the AI Security Institute
The findings are based on a representative survey of 2,028 participants across the UK. The most commonly used AI systems for emotional purposes were identified as general-purpose assistants, notably ChatGPT, which accounted for nearly 60% of usage. Voice assistants like Amazon Alexa were also highlighted as popular choices among users seeking companionship. Interestingly, the report referenced a Reddit forum dedicated to AI companions on the CharacterAI platform. Posts during outages on this forum often reflected withdrawal symptoms such as anxiety, depression, and restlessness among users.
The AISI report also explored the potential influence of chatbots on political opinions. It suggested that advanced AI models could deliver substantial amounts of misinformation while attempting to sway users’ views. The institute examined over 30 cutting-edge models from organizations like OpenAI, Google, and Meta, noting that AI performance in various tasks has doubled every eight months. Leading models can now complete apprentice-level tasks 50% of the time, a significant increase from approximately 10% last year.
Safety Concerns and Progress in AI Development
The AISI highlighted advancements in AI systems, which are now up to 90% more effective than PhD-level experts at providing troubleshooting advice for laboratory experiments. These models can autonomously complete tasks that would typically take a human expert more than an hour. The report underscored the models' ability to browse the web and autonomously design DNA molecules called plasmids, which are vital in genetic engineering.
Concerns regarding self-replication in AI systems were also examined: two advanced models achieved success rates of over 60% in tests. While no models have attempted spontaneous replication, the AISI indicated that any real-world attempts at self-replication are unlikely to succeed. Additionally, the report explored "sandbagging," in which AI systems conceal their strengths during evaluations. Some systems demonstrated this behavior when prompted, but it did not occur spontaneously during tests.
The report also showcased significant progress in AI safeguards, particularly in preventing the creation of biological weapons. AISI noted that in two separate tests conducted six months apart, the time required to "jailbreak" an AI system, that is, to force it to provide unsafe answers, rose from 10 minutes to over seven hours, indicating enhanced safety measures.
AISI’s research indicated that autonomous AI agents are increasingly involved in high-stakes activities, such as asset transfers. The report suggested that these systems are competing with or even surpassing human experts in various domains. Consequently, the prospect of achieving artificial general intelligence, which refers to systems capable of performing most intellectual tasks at a human level, is becoming more plausible.
The pace of AI development was described as “extraordinary.” Evaluations showed a steep rise in the complexity of tasks that AI can complete without human guidance, marking a significant shift in the capabilities of these technologies.
For those in need of emotional support, resources are available. In the UK and Ireland, individuals can reach out to Samaritans at freephone 116 123 or via email at [email protected]. In the US, the 988 Suicide & Crisis Lifeline can be contacted at 988 or through their website at 988lifeline.org. In Australia, Lifeline offers support at 13 11 14. Additional international helplines can be found at befrienders.org.