17 November, 2025
Media Professor Warns AI Weakens Critical Thinking Skills

Petter Bae Brandtzæg, a media professor at the University of Oslo, has raised concerns about the impact of artificial intelligence (AI) on critical thinking skills. In a recent study, he argues that AI’s ability to generate thoughts and statements on our behalf can supplant human judgment, eroding our capacity to think critically. The research comes as AI technologies such as ChatGPT have rapidly gained popularity, reaching approximately 800 million users globally.

Brandtzæg’s findings are part of a project titled “An AI-Powered Society,” conducted in collaboration with the research institute SINTEF. This initiative represents Norway’s first in-depth examination of generative AI and its effects on users and society. Brandtzæg’s concerns were partly prompted by the 2022 report from the Norwegian Commission for Freedom of Expression, which he felt inadequately addressed the implications of generative AI.

AI’s Influence on Society

Brandtzæg emphasizes that AI technology disrupts our cognitive processes, affecting how we read, write, and think. He notes, “We can largely avoid social media, but not AI. It is integrated into social media, Word, online newspapers, email programs, and the like. We all become partners with AI—whether we want to or not.”

His research indicates that reliance on AI may weaken critical thinking and moral judgment. The professor highlights existing studies suggesting that AI alters our language, comprehension of the world, and ethical decision-making processes. The launch of ChatGPT shortly after the Freedom of Expression Commission’s report has made Brandtzæg’s research increasingly relevant.

Brandtzæg and his team have introduced the concept of “AI-individualism,” inspired by the earlier framework of “network individualism.” This framework described how technology enables individuals to create personalized social networks beyond traditional boundaries. With AI, however, the relationship between humans and systems is evolving, as AI begins to fulfill roles typically held by people.

Changing Relationships and Preferences

The project’s findings reveal that generative AI can cater to personal, social, and emotional needs. Brandtzæg, who has a background in psychology, has studied human-AI interactions, particularly with chatbots like Replika. He asserts that while AI can enhance individual autonomy by fostering self-reliance, it may simultaneously weaken community bonds. “A shift toward AI-individualism could therefore reshape core social structures,” he warns.

To better understand AI’s impact, the researchers surveyed 166 high school students, many of whom said they preferred AI assistance over traditional resources. One student remarked, “ChatGPT helps me with problems; I can open up and talk about difficult things, get comfort and good advice.” A separate study found that a significant share of participants preferred chatbot responses to mental health questions over answers from professionals, underscoring AI’s growing influence.

Brandtzæg also introduces the notion of “model power,” which pertains to the influence wielded by those controlling significant models of reality. This concept, rooted in sociologist Stein Bråten’s theories from the 1970s, now applies to AI systems that generate content widely used in public discourse. “A kind of AI layer is covering everything,” Brandtzæg observes, warning that this could lead to monopolistic influences on human beliefs and behaviors.

As AI technologies become increasingly integrated into daily life, concerns about misinformation have grown. A survey by the Norwegian Communications Authority in August 2025 found that 91% of Norwegians are worried about AI’s potential to spread false information. Notably, a report used by the Tromsø Municipality to justify school closures was found to cite fabricated, AI-generated sources, illustrating the risk in concrete terms.

Brandtzæg questions how many other municipalities may have similarly relied on inaccurate AI-generated content. He notes that while many individuals profess to be critical of AI, they often follow its guidance, illustrating the model power inherent in these systems. “It’s the first time in history that we’re talking to a kind of almighty entity that has read so much,” he explains. “But it gives a model power that is scary.”

Furthermore, Brandtzæg points out that the dominant AI firms are based in the United States and rely heavily on American data. He estimates that less than 0.1% of the training data in AI models like ChatGPT is Norwegian, raising questions about cultural bias and the potential for an American monoculture to shape global values and norms.

Brandtzæg concludes that the world has never encountered such pervasive technology, stressing the need for regulation to ensure that AI aligns with human values and needs. “We must not forget that AI is not a public, democratic project. It’s commercial, and behind it are a few American companies and billionaires,” he cautions.