Concerns surrounding artificial intelligence (AI) have intensified as workers with direct experience in the field raise alarms about its potential dangers. Recently, an article by The Guardian highlighted the views of AI trainers who caution others against trusting these technologies. These interviewees have firsthand knowledge of the biases, inadequate training, and unrealistic deadlines that often accompany AI development.
Many of these AI workers, who have been employed to fine-tune algorithms, expressed their unease regarding their roles. They revealed their struggles with unclear instructions and the pressure to complete tasks in unreasonably short timeframes. As a result, several have taken steps to warn friends and family about the risks associated with AI, even banning their children from using it. This cautionary stance reflects a growing sentiment among those involved in the industry, highlighting a disconnect between AI developers and the realities faced by workers on the ground.
The article sheds light on the perspectives of individuals often overlooked in discussions about AI. While high-profile experts typically dominate the conversation, it is crucial to consider the insights of those who contribute to the foundational work behind AI products. Pause AI, a campaign group, maintains an “AI Probability of Doom” list that ranks the likelihood of severe consequences stemming from AI technologies, underscoring the urgency of addressing these concerns.
During a podcast in June 2025, Sam Altman, CEO of OpenAI, acknowledged the public’s high level of trust in AI systems like ChatGPT. He cautioned listeners, saying, “People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don’t trust that much.” Altman’s comments emphasize the importance of a balanced approach to AI usage, encouraging users to remain vigilant while benefiting from the technology.
Experiences shared by AI workers resonate with many freelance writers who have engaged in similar tasks during periods of low demand. These workers often conduct evaluations of AI responses or create prompts to test its capabilities, frequently under tight deadlines. One AI worker noted, “We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training, and unrealistic time limits to complete tasks.” Such statements highlight the challenges of maintaining quality while operating within a system that prioritizes speed.
Despite these concerns, it is essential to recognize that human AI raters are just one component of the broader process involved in training AI models. The development of a GPT large language model typically occurs in two main stages: language modeling and fine-tuning. During the language modeling phase, AI learns from vast datasets, including web pages and books, to understand language patterns. The fine-tuning stage engages human testers who review and rank responses, aiming to enhance safety and relatability.
Companies like OpenAI employ senior research engineers for specialized tasks, while routine evaluations are often outsourced to third-party workers globally. This ongoing testing seeks to identify and rectify errors, biases, and unsafe behaviors in AI systems. The term “red-teaming” refers to workers who actively probe AI models to discover vulnerabilities, using their findings to inform future training.
Despite these rigorous processes, AI systems are not infallible. A recent investigation by The Guardian into medical advice provided by Google AI revealed instances where the AI incorrectly addressed questions related to liver function tests. Such inaccuracies could lead individuals with serious health concerns to misinterpret their conditions. Following the report, Google updated the AI and removed specific overviews to mitigate potential harm.
As the AI landscape continues to evolve, the voices of those who work directly with these technologies are becoming increasingly vital. Their insights serve as a cautionary tale, urging stakeholders to recognize the complexities and potential pitfalls of AI. Balancing the benefits of innovation with the need for responsible oversight will be key in navigating the future of this transformative technology.