
 




Google Launches AI-Powered Notes App Called NotebookLM

Google is launching its AI-backed note-taking tool to "a small group of users in the US," the company said in a blog post. Formerly referred to as Project Tailwind at Google I/O earlier this year, the new app is now known as NotebookLM (the LM stands for Language Model). The Verge reports: The core of NotebookLM seems to actually start in Google Docs. ("We'll be adding additional formats soon," the blog post says.) Once you get access to the app, you'll be able to select a bunch of docs and then use NotebookLM to ask questions about them and even create new stuff with them. Google offers a few ideas for things you might do in NotebookLM, such as automatically summarizing a long document or turning a video outline into a script. Google's examples, even back at I/O, seemed primarily geared toward students: you might ask for a summary of your class notes for the week or for NotebookLM to tell you everything you've learned about the Peloponnesian War this semester.

These are the kinds of features you'll hear about in practically any AI product, but Google is hoping that by limiting the underlying model only to the information you've added yourself, it can both improve the model's responses and help mitigate its tendency to confidently lie about everything. (Google's not unique in this idea, either: Dropbox, Mem, Notion, and many others are pursuing similar hyper-specific AI tools of their own.) NotebookLM also has citations built in, which should make it easier to quickly fact-check the automatically generated responses. But Google does warn that NotebookLM might still hallucinate and that the model won't always get it right. It also, of course, depends on the information you provide -- if you wrote down the wrong dates for the Peloponnesian War in class, it can't help you.

Google says that the NotebookLM model only has access to the documents you choose to upload and that your data is neither available to others nor is it used to train new AI models. This is one of the trickiest parts of a product like this: Google is asking users to give their private information to an AI model in exchange for some convenient and useful features, and that tradeoff gets more complicated the more sensitive the information becomes.

  • by flyingfsck ( 986395 ) on Thursday July 13, 2023 @05:31AM (#63682151)
    CEOs will use AI to convert PowerPoint bullets into long and windy blurbs, while the receiving minions will use AI to summarize the blurbs again and tell them whether there is anything worth reading.
  • by Rosco P. Coltrane ( 209368 ) on Thursday July 13, 2023 @05:35AM (#63682153)

    to some cellphones and laptops that have nothing to do with AI research and development?

    Like why do Pixel phones have a Tensor processor in them?

    The killer feature Google wants is client-side scanning - i.e. the ability to have a mini semi-smart avatar running on your device, constantly checking out your documents and photos in the background, without having to send the files over the internet to one of Google's servers for processing, thereby sidestepping end-to-end encryption entirely.

    Apple already does it, allegedly to look for pedo material. But think about it: Apple is vetting YOUR document on YOUR device without your consent. And after pedo material, what else will they look for? Are you cool with that?

    That's what Google wants too. Stay away from devices made by Google that have smarts in em. The smarts ain't there for your benefit.

    • by real_nickname ( 6922224 ) on Thursday July 13, 2023 @06:13AM (#63682189)

      Like why do Pixel phones have a Tensor processor in them?

      Image processing, so you get better photos. Privacy issues don't get better or worse with an AI chip; they can already send whatever data they want to their cloud and process it there if they want to.

      Apple already does it, allegedly to look for pedo material. But think about it: Apple is vetting YOUR document on YOUR device without your consent.

      I think it has been canceled, but the idea was to compute a hash on encrypted data (using homomorphic encryption) and check it against already-known illegal photos; Apple can't see the content of your photos.

      That's what Google wants too. Stay away from devices made by Google that have smarts in em. The smarts ain't there for your benefit.

      Not everything is black or white.
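
The matching scheme described above can be sketched very roughly. This is an illustrative toy only, not Apple's actual protocol (which used a perceptual hash, private set intersection, and threshold secret sharing so that neither side learns about non-matches); the function names and the use of a cryptographic hash here are assumptions made purely to keep the sketch self-contained.

```python
# Toy sketch of on-device matching against a blocklist of known images.
# NOT Apple's NeuralHash/PSI design: a real system uses a *perceptual*
# hash that survives resizing and re-encoding, and cryptographic blinding
# so the device never learns the blocklist. A plain SHA-256 stands in
# here only so the example runs with the standard library.
import hashlib

# Hypothetical blocklist of digests of already-known images.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash function.
    return hashlib.sha256(image_bytes).hexdigest()

def scan(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches a known entry."""
    return image_hash(image_bytes) in KNOWN_BAD_HASHES

print(scan(b"known-image-bytes"))   # an exact byte-for-byte match
print(scan(b"my-vacation-photo"))   # anything else does not match
```

Note the key property the commenters are arguing about: the scanner only ever sees hashes, so in principle the vendor learns nothing about non-matching photos, but the scan still runs on your hardware against a list you cannot inspect.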

      • the idea was to compute a hash on encrypted data(using homomorphic encryption) and check for already known illegal photos

        Who mandated Apple to search for illegal photos - or indeed anything - on people's phones? Are they law enforcement? Do they have a warrant?

        Not to mention, what gives them the right to use CPU and battery to compute hashes without permission?

        apple can't see the content of your photos.

        That doesn't prevent them from calling the fuzz on you. Or Google [nytimes.com].

        Not everything is black or white.

        No, but when it comes to Big Data - and Google in particular - it's pretty safe to assume whatever they come up with is bad news for privacy.

      • by AmiMoJo ( 196126 ) on Thursday July 13, 2023 @08:23AM (#63682397) Homepage Journal

        Pixel phones also use the Tensor chip to do on-device voice recognition, so that sound doesn't need to be sent to the cloud. It also reduces latency.

        The AI features are also used for object recognition with Google Lens.

        Apple has indeed abandoned their attempt to detect child abuse images. They published the hashing algorithm, and people quickly found they could create hash collisions with fairly arbitrary images. In other words, someone could send a victim an innocent-looking image, and their iPhone would report them to the police.

        It was fairly obvious that would happen, so my guess is that Apple did it to demonstrate that the technology doesn't work. When politicians come asking, they can say they tried and couldn't do it.
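
Why collisions against such a scheme are findable is worth a concrete illustration. The sketch below is a toy, not the actual NeuralHash attack (which exploited the neural network's structure, not brute force): truncating SHA-256 to 16 bits stands in for the small, collision-prone output space of a loose perceptual hash, an assumption made only so the search finishes instantly.

```python
# Toy demonstration of why a small hash space invites collisions:
# brute-force two distinct inputs sharing the same truncated digest.
# The 16-bit truncation is an illustrative stand-in for a weak
# perceptual hash; by the birthday bound a collision is expected
# after only a few hundred attempts.
import hashlib

def weak_hash(data: bytes) -> str:
    # Keep only the first 4 hex chars = 16 bits of output space.
    return hashlib.sha256(data).hexdigest()[:4]

seen = {}  # digest -> first input that produced it
i = 0
while True:
    msg = f"image-{i}".encode()
    h = weak_hash(msg)
    if h in seen:
        a, b = seen[h], msg  # two different inputs, same digest
        break
    seen[h] = msg
    i += 1

print(f"collision after {i + 1} inputs: {a!r} and {b!r} -> {weak_hash(a)}")
```

The real attack did not brute-force anything; it crafted images that steered the neural hash toward a target value. But the consequence is the same one the parent describes: once arbitrary collisions are cheap, a match no longer proves the photo is the blocklisted one.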

      • Privacy issues do not get better or worse with a ai chip, they already can send whatever data they want to their cloud and process them if they want to.

        The more data they send home, 1) the greater chance they will get caught, 2) the more data they have to process, 3) the more hardware they have to maintain to process it. If they put the processing on your device, then you're actually paying for it — you pay for the hardware development, then you pay for the hardware, then you pay in power for the processing itself.

    • While various uses are possible, the most obvious advantages of local processing power are to provide features while preserving a higher level of privacy. If they want to upload stuff most phones have enough bandwidth to just upload it. On the other hand, if you have significant local processing power you may be able to run models like image enhancement or pretty good voice recognition right on the phone. With enough hardware it can even become possible to train customized models on the phone itself. Many
    • by mjwx ( 966435 )

      Apple already does it, allegedly to look for pedo material. But think about it: Apple is vetting YOUR document on YOUR device without your consent. And after pedo material, what else will they look for? Are you cool with that?

      Erm, you've got that backwards... It's "your" Apple iPhone, so that you never forget it's Apple's device; you're just using it with permission. So they own everything: you, your documents, your eyeballs, your dick, the lot.

  • by nagora ( 177841 ) on Thursday July 13, 2023 @05:45AM (#63682165)

    Has Google announced the shutter date for this project yet?

  • Because AI and the Peloponnesian War are how you get space pirates. You can't fix AI's hallucination problem because it is the result of statistical averaging of the flotsam and jetsam on the Internet.
  • "show my Clipboard"

    "According to How-To Geek: How to Access Your Clipboard on Android. Open your keyboard and tap the small clipboard icon to display your clipboard on Android. The clipboard will store only a limited number..."

    Just great. And that's on the latest Pixel with the "AI Powered" SoC and all permissions, including an IOU for my soul given to Google.

  • by cstacy ( 534252 ) on Thursday July 13, 2023 @07:38AM (#63682307)

    No, there is another...

    Google is hoping that by limiting the underlying model only to the information you've added yourself, it can both improve the model's responses and help mitigate its tendency to confidently lie about everything.

    Yes and I hope that monkeys quit flying out of my ass.

    Are your notes so bad that you can't skim over them yourself? You need an idiot who doesn't understand the subject matter (or in fact, anything whatsoever) and who is a compulsive liar to summarize them for you? I really don't get why everyone suddenly wants a confusion/misinformation amplifier stuck onto everything.

    • Are your notes so bad that you can't skim over them yourself?

      Yes, their notes are. Next question.

      You need an idiot who doesn't understand the subject matter (or in fact, anything whatsoever) and who is a compulsive liar to summarize them for you?

      Yes, they do.

      I really don't get why everyone suddenly wants a confusion/misinformation amplifier stuck onto everything.

      Because they are just barely smarter than said idiot themselves. To the point you'd need to question why you can't just replace their broken ass brain with a CPU and call it a day. (Hell, for some the CPU would be an upgrade as it can do basic arithmetic and logical operations that their brains seem to struggle with immensely.)

  • Google says that the NotebookLM model only has access to the documents you choose to upload and that your data is neither available to others nor is it used to train new AI models.

    Without any data privacy law like the GDPR, what can users do when one day Google tells us, "Oopsie, you know the data we promised we wouldn't use to train AI? It was *mistakenly* fed into our latest model. Sorry about that!"? Good luck convincing any judge you were harmed by this breach.

    This promise is worth less than the memory used to hold it.

  • I look forward to starting my new notebook, adding notes, and then figuring out what to do with them when Google kills the project in a year.

"There is no statute of limitations on stupidity." -- Randomly produced by a computer program called Markov3.
