Adobe Acrobat Adds Generative AI To 'Easily Chat With Documents' (theverge.com)

Adobe is adding a new generative AI experience to its Acrobat PDF management software, which aims to "completely transform the digital document experience" by making information in long documents easier to find and understand. From a report: Announced in Adobe's press release as "AI Assistant in Acrobat," the new tool is described as a "conversational engine" that can summarize files, answer questions, and recommend more based on the content, allowing users to "easily chat with documents" to get the information they need. It's available in beta starting today for paying Acrobat users.

The idea is that the chatbot will reduce the time-consuming tasks related to working with massive text documents -- such as helping students quickly find information for research projects or summarizing large reports into snappy highlights for emails, meetings, and presentations. AI Assistant in Acrobat can be used with all document formats supported by the app, including Word and PowerPoint. The chatbot abides by Adobe's data security protocols, so it won't store data from customer documents or use it to train AI Assistant.
The new AI Assistant experience is available for Acrobat customers on Standard ($12.99 per month) and Pro ($19.99 per month) plans.
  • by davide marney ( 231845 ) on Tuesday February 20, 2024 @09:48AM (#64254320) Journal

    ... and pay for the privilege. Generative models summarize precisely the way their modelers and trainers want them to. It will be trivially easy to spin a narrative through summaries. There's no silver-bullet shortcut to real knowledge. But there is a shortcut to propaganda.

    • by zekica ( 1953180 )
      It's not propaganda, it's just the bias inherent in the training set. Anyway, I wouldn't trust any output an LLM generates, because it is inherently just a very, very advanced prediction engine.
      • by cbm64 ( 9558787 )

        Anyway, I wouldn't trust any output an LLM generates, because it is inherently just a very, very advanced prediction engine.

        Humans get things wrong all the time too. The bar isn't for generative models to always be right, but to do at least as good a job (error- and quality-wise) as an average human doing the same work. There is a reason we already have quality control; LLMs will need it too.

      • "Inherent" bias isn't really a thing, though. A training set is selected to elicit a desired output. The bias is therefore as "selected" as any other parameter.

        • by ceoyoyo ( 59147 )

          Ha, if only. Training sets are generally selected because they're the easiest to obtain. It's called a convenience sample, and it often results in unexpected selection biases.

    • I can't wait to see the spin on a 30 page contract.

  • So, once shit like this takes root, will we be seeing stories in ten years about how graduating students no longer understand what books are, just like today we hear that students don't understand what files and folders are? This seems a bit scarier than the current issue, since we're literally talking about filtering full information into easily digested tidbits, and the way these AI systems work today, it'd be completely random whether they're getting the important bits, or the bits deemed important by the model.

  • But it had better run locally, and goddammit, it had better not "find" what I'm looking for when it's not actually there (disappointed glance in Google's direction...)!

  • by argStyopa ( 232550 ) on Tuesday February 20, 2024 @10:20AM (#64254406) Journal

    "The idea is that the chatbot will reduce the time-consuming tasks related to working with massive text documents -- such as helping students quickly find information for research projects or summarizing large reports into snappy highlights"

    I mean...we are just going to steadily (or increasingly...) get stupider now, yeah?

    Isn't the point of being a student that you learn the ability to, y'know, create and read things like reports or projects, using your own knowledge, experience, and human brain to do things like summarizing or imagining "snappy highlights" (Jesus Christ...) in order to communicate those facts to other people?

    • It's creating management summaries
    • by qbast ( 1265706 )

      "The idea is that the chatbot will reduce the time-consuming tasks related to working with massive text documents -- such as helping students quickly find information for research projects or summarizing large reports into snappy highlights"

      I mean...we are just going to steadily (or increasingly...) get stupider now, yeah?

      Isn't the point of being a student that you learn the ability to, y'know, create and read things like reports or projects, using your own knowledge, experience, and human brain to do things like summarizing or imagining "snappy highlights" (Jesus Christ...) in order to communicate those facts to other people?

      Yes, we are. Imagine a generation that has lost the ability to search for information unless spoon-fed by a chatbot, or even to understand any longer text without asking for a summary. We are basically outsourcing our brains, and any good manager will tell you that no company should ever outsource its core business. Imagine a whole generation of people moulded by the biases embedded in AI tools.

    • "The idea is that the chatbot will reduce the time-consuming tasks related to working with massive text documents -- such as helping students quickly find information for research projects or summarizing large reports into snappy highlights"

      I mean...we are just going to steadily (or increasingly...) get stupider now, yeah?

      Isn't the point of being a student that you learn the ability to, y'know, create and read things like reports or projects, using your own knowledge, experience, and human brain to do things like summarizing or imagining "snappy highlights" (Jesus Christ...) in order to communicate those facts to other people?

      That used to be the point of being a student. Education now targets "make them pliable for the workforce." If people graduating are expected to be baseline, barely-functional worker bees, this is perfect. And it removes yet more barriers for power-grabbers by removing still more incentive to learn anything real about history, culture, politics, power structures, or anything else deemed "dangerous" by those currently in power and scared to death of losing it. It's win-win for those at the top.

    • Don't worry, very soon there will be a greater need than ever for us all to use our brains. Once AI-generated video/pictures/text are indistinguishable from the real thing, you're back to having to use your brain to assess whether information is factual. But perhaps I'm an optimist.

  • This is a terrible idea; all it's going to enable is lazy people continuing to find ways to be lazy. The number of times I've seen someone "read" a 20+ page document in 2 minutes is annoying. Those people always believe they're world champions at speed reading, and then, with 9X% accuracy, misquote the document, misunderstand it, and raise nonsense questions or points from it.

    The places this will be applied:

    1. RFPs.
    2. Compliance documentation.
    3. Policy documentation.
    4.
    • The number of times I've seen someone "read" a 20+ page document in 2 minutes is annoying.

      There is your use case!
      I doubt the AI will be much worse than those people :-P

      Seriously, though, I agree with you.
      Context is everything in many documents, so I do not have high hopes of a "generic AI" getting that right most of the time.

      • My point was more that people never take the time to do a detailed read of documents. A 20-page document that is information-dense might take me several hours to read, and I'll probably come out of it with a page of questions for research. One time I wrote into a 40-page document, "If you accept this, you're a stupid bum head," and sure enough 6 people, literally 6 people, all signed off that the document was good, had been reviewed, and could be sent. I was in the last meeting before the email with the
  • by Mr. Dollar Ton ( 5495648 ) on Tuesday February 20, 2024 @11:03AM (#64254496)

    I've read, written, edited, annotated and even printed and shredded documents, but I've never "experienced" them. Why TF is this imbecile jargon necessary? What's wrong with "work with documents"?

    • I've read, written, edited, annotated and even printed and shredded documents, but I've never "experienced" them. Why TF is this imbecile jargon necessary? What's wrong with "work with documents"?

      It's a new concept. If the AI allows you to "Experience" the document without reading it, it's so much MORE AMAZING!

      I see nothing at all good in this, and several ways it will be leveraged in schools to continue dumbing down the majority of the population who aren't curious enough to seek their own continuing education outside of the regulated school system. Myself? I checked out a LOT of science and history books, and bought a lot of them for long-term research as well. I don't see that being a thing in the future.

    • I've read, written, edited, annotated and even printed and shredded documents, but I've never "experienced" them.

      Oh, you clearly haven't worked with a large EPCM contractor in engineering before. Trust me, it is an "experience" to be handed documentation by them. You ask for a single loop drawing and you get a 1500-page PDF with a mixture of anything from A5 to A0 page sizes, images, and scans, which takes 10 minutes to open and is not indexed, and you're told "It's in there."

      That is an "experience". I rank it somewhere between getting 4 root canals and being forced to translate a Trump speech into a foreign language.

      • Yes, the general experience of being shafted as hard as possible by the guy who has the money is not unknown (from large banks to ESA), but in the Adobe "document experience" case we're discussing the mere act of opening a PDF file.

        It is kinda grand to blow it so much out of proportion :)

  • Even the early versions, with their biases and blind spots, may be useful as a "did I miss anything when I read the whole document myself" tool. Of course, this assumes they don't "hallucinate."

    Once these are "almost as good as humans" at making "meh-quality" "executive summaries" or "custom executive summaries, with only information relevant to the query" they will be much more useful. Not as accurate as a human, but much faster.

    Sometimes, a fast meh-quality answer is better than a slow high-quality answer.

  • Being able to query large documents is actually pretty awesome in a business context. Of course, it's dependent on the quality of the PDF... if it's a bad OCR scan, you're going to have the same GIGO problems (a rough sketch of that kind of pipeline follows below).
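
    What "chatting with a document" generally boils down to is pulling the text out of the PDF and handing it to a language model along with the question. Below is a minimal sketch of that pipeline in Python, assuming the third-party pypdf and openai packages and an OpenAI-compatible chat endpoint; Adobe has not published how AI Assistant works internally, so none of this reflects its actual implementation.

      # Generic "chat with a PDF" plumbing -- an illustrative sketch, not Adobe's
      # AI Assistant. Assumes pypdf and the openai client are installed and that
      # OPENAI_API_KEY is set in the environment.
      from pypdf import PdfReader
      from openai import OpenAI

      def ask_pdf(path: str, question: str, model: str = "gpt-4o-mini") -> str:
          # Extract plain text page by page. A badly scanned PDF with no text
          # layer yields garbage here -- the GIGO problem mentioned above.
          reader = PdfReader(path)
          text = "\n".join(page.extract_text() or "" for page in reader.pages)

          # Naive version: stuff the whole document into the prompt. Real tools
          # chunk the text and retrieve only the passages relevant to the
          # question so they fit in the model's context window.
          client = OpenAI()
          response = client.chat.completions.create(
              model=model,
              messages=[
                  {"role": "system",
                   "content": "Answer only from the supplied document. "
                              "If the answer is not in it, say so."},
                  {"role": "user",
                   "content": f"Document:\n{text}\n\nQuestion: {question}"},
              ],
          )
          return response.choices[0].message.content

      # e.g. ask_pdf("contract.pdf", "Which clauses allow unilateral termination?")

    The quality of the answer is bounded by the quality of the extracted text and by whatever the model decides to make up, which is exactly the hallucination and bias concern raised elsewhere in this thread.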

  • Do you think you have the time to read and understand the 2024 US budget omnibus bill? No? Then wouldn't it be nice to ask, "How much of this bill allocates money to unspecified or highly general purposes, such that it would be hard to track how it's spent, and where in the bill are those allocations listed?" Or "What's the total amount to be spent on building new VA hospitals?" Or whatever your concern is? How about scanning in one of those massive boilerplate contracts and asking "Which parts of this document ar
    • To a large degree, I agree.
      Many documents are really databases, with only a small subset of the document really useful for any given reader.
      Being able to query the document for the relevant subset would indeed be very useful.

      But I am worried about the system actually delivering the right result.
      There is a reason why there is all that other text in there, so it is important to preserve all the bits and pieces that are relevant.
      I don't have high hopes that a "generic AI" will get it right every time.

      BTW: The

  • How quickly the AI shark is jumped.
  • by hackertourist ( 2202674 ) on Tuesday February 20, 2024 @12:00PM (#64254680)

    Acrobat reached a peak of usability more than a decade ago. Since then, all Adobe has done is make the interface worse, make Acrobat slower to start up and run, and insert useless ads everywhere. This new addition will make things worse again.

  • by 80N ( 591022 )

    This conversation discusses concerns and opinions about the use of generative models and chatbots for summarizing large text documents. The initial post by Davide Marney expresses skepticism about generative models, stating that they can be easily manipulated to present a specific narrative, potentially leading to propaganda. A response suggests that bias in generative models is due to the training set, not necessarily intentional propaganda.

    The discussion delves into the reliability of generative models,
