
National Archives Pushes Google Gemini AI on Employees

An anonymous reader shares a report: In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called "AI-mazing Tech-venture" in which Google's Gemini AI was presented as a tool archives employees could use to "enhance productivity." During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.

In December, NARA plans to launch a public-facing AI-powered chatbot called "Archie AI," 404 Media has learned. "The National Archives has big plans for AI," a NARA spokesperson told 404 Media. "It's going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future."

Chat logs from the presentation show that National Archives employees are concerned about the idea that AI tools will be used in archiving, a practice that is inherently concerned with accurately recording history. One worker who attended the presentation told 404 Media: "I suspect they're going to introduce it to the workplace. I'm just a person who works there and hates AI bullshit." The presentation was given about a month after the National Archives banned employees from using ChatGPT, saying it posed an "unacceptable risk to NARA data security," and cautioned employees that they should "not rely on LLMs for factual information."
  • by Frobnicator ( 565869 ) on Tuesday October 15, 2024 @11:51AM (#64866247) Journal

    I really have to wonder what triggered the complete turnaround in policy. Going from "unacceptable security risk" and "cannot be relied upon", over to "big plans" and "essential to how we conduct our work", would require some massive changes.

    The only two I can think of are truckloads of money that allowed the business team to override the technical team, or people who were focused on the technical risk getting sacked. I don't see either in the headlines, so I wonder what was happening behind the scenes.

    • by Njovich ( 553857 )

      Maybe they learned what the tool actually can do and found out their initial concerns were overstated?

      • by Junta ( 36770 ) on Tuesday October 15, 2024 @12:35PM (#64866365)

        Note that this is often a marketing trick too.

        We have an executive, and I don't think he sees any benefit in throwing his main job under the bus to appease a vendor, at least not at a price point that would be worthwhile to the vendor.

        So he came in, seemingly relayed the pitch, collected generally negative feedback about how unreliable the output was, and sent out a broad mail acknowledging that it didn't work, based on everyone's feedback.

        Then a couple of weeks later, he had seen the light after reviewing demo material from the vendor: we just must not understand how to use it to generate code if the code is largely useless. The proof was basically a series of Comp Sci 101-level prompts generating well-trodden demos, and the demo seemed powerful because he, a non-technical executive, was able to adjust some behaviors with simple prompt edits. His tweaks were fairly mundane and *well* within the general variation of these sorts of copy-paste problems; *however*, he has no context to know that this doesn't scale to harder code. To him, coding is coding. He thinks AI code generators are limited merely by how long a contiguous sequence of code can be generated, so humans just need to string the pieces together.

        The thing is, it can help navigate the 'hello world' phase and it can augment, to some extent, code completion. Usually the 'complicated' stuff that can be successfully generated is better served by explicitly linking to a well-maintained library implementation rather than that moment's LLM output, which comes with no promise of ongoing compatibility. However, he imagines that it flips the programmer job around, so that folks should be churning out software 25-30x faster, as he assumes the tedium of typing code is the long pole.

        • by Njovich ( 553857 )

          I think if the concern is sending in sensitive data, why don't they start by doing projects without sensitive data? If the concern is that Google would train on their public data, it's just nonsense. If the concern is accuracy, then they don't understand that Gemini supports grounding in various ways, including RAG. It sounds like they think it can only work by dumping literally all their data into public Gemini and then getting it to hallucinate stuff, which is not how this type of project works.
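
          To make the grounding point concrete, here is a minimal sketch of the RAG pattern in plain Python. The toy keyword retriever, the Passage type, and the prompt wording are all illustrative assumptions, not Google's actual API.

            from dataclasses import dataclass

            @dataclass
            class Passage:
                source: str  # e.g. an archival record identifier (illustrative)
                text: str

            def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
                # Toy keyword-overlap ranking; a real deployment would use a vector index.
                terms = set(query.lower().split())
                ranked = sorted(corpus, key=lambda p: -len(terms & set(p.text.lower().split())))
                return ranked[:k]

            def grounded_prompt(query: str, passages: list[Passage]) -> str:
                # The model only ever sees the retrieved passages, not the whole archive,
                # and is told to cite them or abstain.
                context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
                return ("Answer using ONLY the passages below, citing the bracketed "
                        "source for every claim. If the passages do not contain the "
                        "answer, say \"I don't know.\"\n\n"
                        f"{context}\n\nQuestion: {query}")

            # grounded_prompt(...) is what gets sent to whatever hosted model is in use.

          The point is that grounding constrains what the model can assert to a handful of retrieved records per query, which is a different setup from free-form public chat.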

          • by Junta ( 36770 )

            The broad concern is that it was a bit more frustrating to try to use than to do without for a lot of the work. Even as someone gets a feel for what it's *probably* going to be able to cover, it's relatively little, and executives are expecting *massively* faster progress when the result is not appreciably faster. The marketing guys set expectations sky-high, and then there's something "wrong" with the developers when they don't get any faster despite the company shelling out a lot of subscription money.

    • The answer likely is: "Truckloads of money, and by the way, do the thing we asked you to do: find all the wrong-think you can in our National Archive and 'correct' it."

      I don't see a good outcome from this. Best case, our tax money is wasted. Worst case, they rewrite or otherwise shame, ridicule, and "community note" history to suit the current whim.

      The pandering, it's going to kill this nation.

      • No, stupidly, the truckload of money is likely going the other way: from taxpayers to Google, not from Google to government employees.

        If you've paid attention to the normie net in the past little while, there are roughly even numbers of people who get what the "new" AI does through metaphors like "fancy autocorrect" and people who think "AI is here and can do anything! Look at this picture it made!"

        I think us technical folks lean much more disproportionately to the skeptical side. Government managers who ma

    • by narcc ( 412956 )

      When the National Archive is working for the people, AI is an unnecessary and unreliable security risk. When they're working for Google, AI is essential to operations.

      Like all public-private partnerships, the sole purpose is to funnel as much taxpayer money as possible into corporate coffers. This won't change until we get money out of politics.

    • by EvilSS ( 557649 )

      Going from "unacceptable security risk"

      For this I suspect it's the difference between users using their personal ChatGPT accounts vs NARA using commercial Gemini. There are much tighter policies when you pay for enterprise access vs some Joe Schmoe personal or free account.

      and "cannot be relied upon"

      Depends on what they are using it for. NARA controls how the model is used, unlike with, again, some personal account, so they have control over what it is being used for. If they are using it to answer common operational questions, then it could do fine. If they are trying to

    • There's a difference between something operating on a public cloud and something operating on a private government cloud, which should be isolated.

    • Based on the way my current CEO talks about AI, I think that lots of private and public organizations are really optimistic that AI can lower labour costs.

      It's not even always necessarily about replacing head count with automation, it's just as much about solving hard-to-solve problems and making existing head counts more productive through AI.

      For example, our CEO is convinced that within a year or two we'll be able to use AI to find the root cause of bugs in highly complex systems.

      Do I share the optimism?

    • They have started to set up separate servers for the government, so what is remembered on those servers is limited.
  • ...useful data will be gathered in order to make future versions better.
    It's always a hard question whether or not to use very early versions of tech that are known to have problems.

  • by CEC-P ( 10248912 ) on Tuesday October 15, 2024 @01:10PM (#64866451)
    Black nazis.
  • One of the primary things I want, as a service, from archive sites in general is summaries. LLMs like Perplexity have worked quite well at that task for me. I haven't really played with Gemini much outside of Google Search results, but it seems OK too. The only reservation I have is the reported "hallucinations": if the LLM doesn't understand or know the information prompted for, it will sometimes make up information. If we can just teach an LLM to say "I don't know" instead, it could be a useful tool.
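
    For what it's worth, the usual first step toward that is an explicit abstention instruction in the prompt. A hypothetical Python sketch follows; prompting like this reduces, but does not eliminate, made-up answers.

      SUMMARY_PROMPT = (
          "Summarize the document below in three sentences, using only "
          "facts stated in the document. If the document does not contain "
          "enough information, respond with exactly: I don't know.\n\n"
          "Document:\n{document}"
      )

      def summary_request(document: str) -> str:
          # Fill the template; the result goes to whichever LLM is in use.
          return SUMMARY_PROMPT.format(document=document)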
