14 January, 2026
Google halts AI health summaries after misinformation warnings

Google has removed its AI-generated health summaries from certain search results over concerns that they contained inaccurate and potentially dangerous medical information. The move followed an investigation that highlighted significant reliability issues with the company’s “AI Overviews,” particularly in relation to liver function tests, and underscores the risks posed by inaccurate health-related content online.

The AI Overviews tool, which uses generative artificial intelligence to produce brief summaries in response to search queries, has been criticized for delivering misleading health information. The summaries concerning liver function tests in particular raised alarm among medical experts, who noted that they presented incorrect figures without adequate context. For instance, when users searched for the normal ranges of liver blood tests, the AI displayed values that failed to account for crucial factors such as the patient’s nationality, sex, ethnicity, and age.

Sue Farrington, chair of the Patient Information Forum, emphasized the gravity of the situation, warning that such inaccuracies could lead patients to misinterpret their test results. “The removal of these summaries is a positive outcome, but it is just the first step,” she said. Experts are particularly concerned that patients with serious liver conditions might wrongly perceive abnormal results as normal and neglect necessary follow-up appointments.

The removal of the AI Overviews specifically addresses searches like “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” A spokesperson for Google stated that the company does not comment on individual content removals but emphasized that they strive for improvements when their AI lacks the necessary context.

Despite the removal of some summaries, many AI Overviews that have drawn similar criticism remain active, including summaries on cancer and mental health topics accused of presenting incorrect information. When asked why these had not been taken down, Google responded that they link to reputable sources and that its internal team of clinicians had reviewed the content and deemed it accurate.

Farrington pointed out that millions of adults globally face challenges accessing reliable health information. This existing issue makes it imperative for Google to direct users toward well-researched health resources from trusted organizations. The investigation into the AI Overviews has raised broader questions about the responsibility of tech companies in providing accurate health information, especially as reliance on digital platforms for medical guidance continues to grow.

As the landscape of online health information evolves, the importance of ensuring accuracy in AI-generated content cannot be overstated. Google’s actions reflect a growing recognition of the need for accountability in the digital health space, particularly when human lives may be at stake.