11 December, 2025
Librarians struggle as AI chatbots generate fake references

Librarians are facing an increasing number of inquiries about non-existent books and articles generated by artificial intelligence (AI) chatbots. According to a report from Scientific American, a significant portion of reference requests received by libraries now stem from AI-generated content. Sarah Falls, the chief of researcher engagement at the Library of Virginia, estimates that around 15% of their emailed reference questions are based on misleading information produced by AI tools like ChatGPT.

Many library professionals are growing exhausted by these requests, which often involve tracking down fictitious citations. Librarians at multiple institutions report the same pattern, along with frustration at the erosion of trust in their expertise: Falls notes that patrons frequently prefer a chatbot's answer over a human librarian's correction.

In a related statement, the International Committee of the Red Cross (ICRC) issued a warning regarding AI-generated archival references. The organization clarified that if a reference cannot be found, it does not mean that information is being withheld. They highlighted that the inaccuracies could arise from incomplete citations, documents residing in other institutions, or increasingly, AI-generated hallucinations. Users may need to investigate the administrative history of references to confirm their authenticity.

The proliferation of fake books and articles attributed to AI has become evident throughout the year. In one case, a freelance writer for the Chicago Sun-Times produced a summer reading list in which ten of the fifteen recommended titles were fictitious. In another, at least seven citations in a report released in May 2025 by Health Secretary Robert F. Kennedy Jr.'s “Make America Healthy Again” commission referred to studies that did not exist.

While AI-generated misinformation has contributed to the current challenges, false citations predate these technologies. In 2017, a professor at Middlesex University discovered more than 400 academic papers citing a non-existent study, a phantom reference that was essentially publisher filler text. These citations, concentrated in lower-quality papers, were most likely the product of carelessness rather than deliberate deception.

The growing reliance on AI tools raises the question of why users trust chatbots over human professionals. One likely reason is the authoritative tone in which AI communicates, which leads users to favor the chatbot's answer over a librarian's. Some users also believe they can improve AI reliability through specific prompts, such as instructing a chatbot not to “hallucinate” or to “write clean code.” If such instructions reliably worked, the major technology companies would presumably apply them universally by default.

This situation underscores the need for users to critically evaluate information sources, especially when engaging with AI. Librarians remain committed to their role in providing accurate information, but the increasing prevalence of AI-generated misinformation presents a significant challenge for the profession.