
Librarians Are Being Asked To Find AI-Hallucinated Books (404media.co)

Libraries nationwide are fielding patron requests for books that don't exist after AI-generated summer reading lists appeared in the Chicago Sun-Times and Philadelphia Inquirer earlier this year. Reference librarian Eddie Kristan told 404 Media the problem began in late 2022 following GPT-3.5's release but escalated dramatically after the newspapers published lists created by a freelancer using AI without verification.

A Library Freedom Project survey found patrons increasingly trust AI chatbots over human librarians and become defensive when told their AI-recommended titles are fictional. Kristan now routinely checks WorldCat's global catalog to verify titles exist. Collection development librarians are requesting digital vendors remove AI-generated books from platforms while academic libraries struggle against vendors implementing flawed LLM-based search tools and AI-generated summaries that undermine information literacy instruction.
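The verification workflow Kristan describes, checking a requested title against a real catalog, can be approximated at the front desk without a live WorldCat lookup (which requires an API key). A minimal sketch, assuming the library has a plain-text export of its own catalog titles; the sample catalog here is hypothetical:

```python
from difflib import get_close_matches

# Hypothetical catalog export; a real deployment would load the
# library's own title list from its ILS or a WorldCat export.
CATALOG = [
    "The Grapes of Wrath",
    "Beloved",
    "The Left Hand of Darkness",
]

def check_title(requested: str, catalog=CATALOG):
    """Return ('found', title), ('near_miss', [suggestions]), or ('missing', None)."""
    lowered = {t.lower(): t for t in catalog}
    if requested.lower() in lowered:
        return "found", lowered[requested.lower()]
    # A close-but-wrong match is a common hallucination pattern:
    # real author, slightly mangled or invented title.
    near = get_close_matches(requested.lower(), lowered, n=3, cutoff=0.6)
    if near:
        return "near_miss", [lowered[t] for t in near]
    return "missing", None
```

A "near_miss" result is the useful case: it gives the librarian real titles to offer the patron instead of a flat "that book doesn't exist."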
  • Lie-brarians (Score:4, Insightful)

    by Registered Coward v2 ( 447531 ) on Saturday September 20, 2025 @07:20AM (#65672488)
    It never ceases to amaze me how people will accept as correct whatever output a computer provides while disbelieving someone who is likely an expert in their field or at least has information available to validate or to attempt to correct the computer's output. I suspect AI's flattering way of providing answers makes people feel more connected to their 'friend' versus some random librarian and thus get defensive when told something doesn't exist.
    • by too2late ( 958532 ) on Saturday September 20, 2025 @08:13AM (#65672526) Journal
      It's Lie-barian
    • Re: (Score:3, Insightful)

      by gweihir ( 88907 )

      It is a general trend: People believe some random crap over expert statements.

    • by Anonymous Coward

      People also think the first result on the Google result page must be the truth.

    • by jd ( 1658 )

      Oh, absolutely. These days, I spend so much time checking the output from computers, it would normally have been quicker to do searches by hand. This is... not useful.

    • What level of fact-checking requirement should be put on newspapers, and publishers in general, for what should be factual data (a list of books, the birth dates of presidents, and other things which are not opinions)?

    • Like your presentation but not your Subject--even though I had a couple of negative encounters with rule-based librarians recently.

      Unfortunately I think libraries are losing there relevance and it's related to the AI reference in your FP. However I just started thinking about a more insidious version of the problem. You can say that it's a big problem that generative AIs will fabricate BS, but even when we realize an answer is BS, we may learn the wrong lesson from it. After all, many of the AI answers are

      • Like your presentation but not your Subject--even though I had a couple of negative encounters with rule-based librarians recently.

        Yeah, it wasn't meant as a shot at librarians but rather at how AI is making people view them (and clickbait).

        Unfortunately I think libraries are losing there relevance and it's related to the AI reference in your FP. However I just started thinking about a more insidious version of the problem. You can say that it's a big problem that generative AIs will fabricate BS, but even when we realize an answer is BS, we may learn the wrong lesson from it. After all, many of the AI answers are pretty good (on the theory you can make sufficient allowance for your own tendency to believe what you want to believe), so there's a kind of reinforcement in favor of those questions and prompts.

        Good point. The reinforcing nature of AI, due to prompt choices as well as design, is an insidious feature that is no doubt viewed as a positive by companies, since it keeps people coming back.

        Most people like oracles and want to get "authoritative" answers to their questions.

        Yet it's not so much that we may learn to think like machines (which is still a big problem), but rather that we may learn not to ask certain kinds of questions. We won't even be able to ask why those questions are so problematic because we already "know" the oracular AI can't handle them. (Even if the government or some greedy megalomaniac intervened to make sure the question was unanswerable.) Hallucinated books may be the smallest of our future worries.

        I think it's not just the authoritative nature but the belief that somehow AI is unbiased in the answers it provides. I have friends who truly believe, because AI has so much data the answers must be correct and unb

        • by shanen ( 462549 )

          Mostly the ACK and concurrence, though I noticed and regret my typo of "there" where I meant "their".

          I think it's not just the authoritative nature but the belief that somehow AI is unbiased in the answers it provides. I have friends who truly believe, because AI has so much data the answers must be correct and unbiased, and GIGO is no longer a problem even though they are fishing in a data sewer.

          This prompted me, as an experiment, to ask ChatGPT "why did Putin invade Ukraine?" Response:

          "Russia’s full-scale invasion of Ukraine on 24 February 2022 was driven by a mix of strategic, political, and ideological motives. Analysts often highlight several overlapping factors:

          "1. Stopping NATO and Western Alignment

          "Security concerns (stated reason): The Kremlin claimed that Ukraine’s growing ties with NATO and the European Union threatened Russia’s security.

          "Reality: NATO posed no immi

          • ChatGPT does correctly capture the attitude of the US mainstream news media, so I'll give it credit for that.

            Interesting insight. I suspect the results are due to the data used for training. Unless it scrapes and is able to parse a large number of languages, any output will be biased toward its data and provide a viewpoint slanted to one geopolitical area. In addition, if one POV is overrepresented, I think it would tend to favor that one, even if the amount of data is not well correlated with the % of a population who holds that view.

            Ah, but are there cute cat videos?

            "It logically follows if there are no cute cats there can be no cute cat videos" --

    • A lot of people using computers lived through an age where computers only dealt with data and were generally speaking more reliable than humans. Now that computers mainly deal with dogshit things some hell idiot invented, they're slow to change their view.
  • Not just defensive (Score:5, Interesting)

    by JoshuaZ ( 1134087 ) on Saturday September 20, 2025 @07:23AM (#65672490) Homepage
    My wife works in a library. Some of these people become not just defensive, but outright hostile. Part of the problem is socioeconomic and education based. A lot (not all but a lot) of people using libraries on a daily basis don't have much formal education and have little experience with computers. Much of my wife's work is just helping people do very basic tasks, like showing someone how to open a Word document, or how to download or upload a file for a job application. So for probably some of these patrons, ChatGPT must seem like magic. The interface is simply typing what they want, and even highly misspelled or garbled requests will generate something like a coherent response from it, so they don't even need to know what any icon means. And if one is dealing with people who often literally don't understand the difference between a file stored on a computer and a file on the cloud (to use one common example) then even explaining the idea of an AI hallucination is going to be an uphill battle.
    • My wife works in a library. Some of these people become not just defensive, but outright hostile.

      I suspect part of it is also being told something they asked for is incorrect and taking it as being told they are wrong, and thus taking it personally, even though that is not the librarian's intent.

      • Part of it is learning to be diplomatic with ignorant people such as those you mention. Don't say: you're wrong, the book you are looking for doesn't exist. Say instead: sorry, the library computer can't find it right now, maybe it was misfiled, come back another day. You will seem helpful and mildly incompetent to them, and then they will go away.
      • I’ve had multiple people tell me ridiculous things they heard from an LLM and, when shown that it is wrong, fly off the handle, doubling and tripling down because the AI is “super intelligent”. The world is legitimately in danger of weaponized confidently incorrect idiots.
    • Wait, you mean the files are ... IN the computer?!
    • Libraries should roll out their own LLM agent that's sandboxed to the library's book list, then point the dinguses to use that...... On a side note, I know a lot of libraries and librarians in Canada also have the issue of people with severe mental health and substance abuse issues using library facilities as a substitute for packed homeless shelters. So I'm wondering if that's adding to the weird mix of being a crisis counselor as well as IT support and librarian.
    • Librarians have a list of books in print: if someone asks for one that isn't on it, they can say "it's out of print, if it exists", and/or "it hasn't been printed yet", ditto. Followed by "did ChatGPT tell you about it? If it did, you need to tell it not to tell you fibs". (:-))
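The "sandboxed to the library's book list" idea a couple of comments up amounts to never letting a model's recommendation reach a patron unless the catalog can confirm it. A minimal post-filter sketch (the catalog and recommendation lists are hypothetical, and a real agent would also do retrieval before generation, not just filtering after):

```python
def filter_to_catalog(recommended, catalog):
    """Keep only model-recommended titles that actually exist in the catalog.

    Titles are compared case-insensitively; anything the catalog cannot
    confirm is dropped rather than shown to the patron.
    """
    known = {t.lower(): t for t in catalog}
    kept, dropped = [], []
    for title in recommended:
        canonical = known.get(title.strip().lower())
        (kept if canonical else dropped).append(canonical or title)
    return kept, dropped

# Hypothetical example: one real title, one plausible-sounding hallucination.
kept, dropped = filter_to_catalog(
    ["Beloved", "The Last Algorithm"],
    ["Beloved", "The Grapes of Wrath"],
)
```

The design choice is deny-by-default: the model can phrase the recommendation, but only catalog-confirmed titles survive the filter.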
  • by devslash0 ( 4203435 ) on Saturday September 20, 2025 @08:22AM (#65672538)

    Russia's favourite weapon is misinformation. They have a saying that if you want to conquer a nation, instead of sending tanks you just plant a disagreement and let the nation collapse on its own.

    They must really love what's happening in the world right now. AI is doing all the dirty work for them. Persistent disinformation.

  • by gweihir ( 88907 ) on Saturday September 20, 2025 @08:23AM (#65672546)

    So we now have idiots of the 2nd order: They believe the AI hallucinations and defend them as if they were their own hallucinations. Nice.

  • Set up a computer that you type anything into and it names a fake library some distance away. Tell them it's ChatGPT, and it says to go there.

    Then if they return, "defensive", tell them they must have not searched correctly, because the chatbot says it's real.

    • by ffkom ( 3519199 ) on Saturday September 20, 2025 @09:00AM (#65672600)
      Or, just lower your voice, and whisper to the person asking: "I should not tell you this, but that book has been black-listed by the government, so we are not allowed to speak of it anymore."
      • Or, just lower your voice, and whisper to the person asking: "I should not tell you this, but that book has been black-listed by the government, so we are not allowed to speak of it anymore."

        Oh, yeah, THAT is going to make the overall situation in that locality so much better!

    • by ffkom ( 3519199 )
      Another option: set up a computer that actually does use some (locally hosted, reasonably cheap) LLM, but with a system prompt that instructs it to make up "chapter 1" of any book it is asked for, based on the book title - but make it boring to read, then offer to generate the next chapter upon request.
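A sketch of what that system prompt might look like, assuming any locally hosted chat-style model; the backend call itself is out of scope here, so only the (hypothetical) prompt construction is shown:

```python
def build_chapter_prompt(book_title: str, chapter: int = 1) -> list[dict]:
    """Build a chat-style message list instructing a local LLM to invent
    a deliberately dull chapter for a title that does not exist."""
    system = (
        "You are a book-generation kiosk. For any title the user supplies, "
        "write chapter {n} of a plausible but intentionally boring book with "
        "that title. Flat prose, no cliffhangers. End by offering to generate "
        "the next chapter on request."
    ).format(n=chapter)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Title: {book_title}"},
    ]
```

The message list follows the common system/user chat convention most local LLM servers accept.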
  • by Anonymous Coward

    > Libraries nationwide are fielding patron requests for books that don't exist

    Books that don't exist yet.

  • by burtosis ( 1124179 ) on Saturday September 20, 2025 @09:18AM (#65672636)
    The librarian just needs to put the request into a LLM prompt and generate the book, problem solved!

    Oh god, the world is screwed.
  • When AI was just starting to become popular, a co-worker asked me to review code they could not get to work. I took one look and quickly saw that the core library functions being used did not exist, which is why the code failed. He had used AI and then hadn't even bothered to look at the API to verify the code.
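The failure mode described above, generated code calling library functions that don't exist, is cheap to screen for before running anything. A minimal sketch using only the standard library (the dotted names checked below are illustrative):

```python
import importlib

def verify_calls(calls):
    """Check that each 'module.attribute' name an AI suggested actually exists.

    Returns a dict mapping each dotted name to True/False. Import failures
    count as False rather than raising.
    """
    results = {}
    for dotted in calls:
        module_name, _, attr = dotted.rpartition(".")
        try:
            module = importlib.import_module(module_name)
            results[dotted] = hasattr(module, attr)
        except (ImportError, ValueError):
            results[dotted] = False
    return results

# Illustrative: one real stdlib function, one plausible-sounding fake.
report = verify_calls(["math.sqrt", "math.fast_inverse_sqrt"])
```

This only proves a name exists, not that the signature or semantics match what the AI claimed, but it catches the grossest hallucinations in seconds.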
  • by jenningsthecat ( 1525947 ) on Saturday September 20, 2025 @09:49AM (#65672694)

    Libraries nationwide are fielding patron requests for books that don't exist after AI-generated summer reading lists appeared in the Chicago Sun-Times and Philadelphia Inquirer earlier this year.

    Step 1: Find out which fake titles received the most requests
    Step 2: Have AI write books to go with the titles
    Step 3: Profit?

  • by groobly ( 6155920 ) on Saturday September 20, 2025 @10:49AM (#65672772)

    Libraries will now have 3 categories instead of 2: Non-fiction, fiction, and fictional.

  • Asimov's “The Endochronic Properties of Resublimated Thiotimoline” (1948).

    In it, he invented a fictional chemical — thiotimoline — that dissolves before it touches water. He wrote it in the form of a serious scientific paper, with footnotes, jargon, and references, but the subject was pure nonsense.

    Because the style was so convincing, some people at the time wondered if it was real. Asimov himself later joked that librarians and scientists would get requests for the “thiotimoli

  • We need to implement a full stop on any data that is less than about a year old. EVERYTHING after that is suspect. Flag it in red, so even the "casuals" can see that it's not verified. I know, how do you do that? No idea. But I'm sure people will want to throw AI at finding the AI fakes. And that will work really well...

  • Are any Berenstein Bears books on the shelves? Did they check the Quantum Card Catalog?
