People Are Using Google Study Software To Make AI Podcasts (technologyreview.com)

Audio Overview, a new AI podcasting tool by Google, can generate realistic podcasts with human-like voices using content uploaded by users through NotebookLM. MIT Technology Review reports: NotebookLM, which is powered by Google's Gemini 1.5 model, allows people to upload content such as links, videos, PDFs, and text. They can then ask the system questions about the content, and it offers short summaries. The tool generates a podcast called Deep Dive, which features a male and a female voice discussing whatever you uploaded. The voices are breathtakingly realistic -- the episodes are laced with little human-sounding phrases like "Man" and "Wow" and "Oh right" and "Hold on, let me get this right." The "hosts" even interrupt each other.

The AI system is designed to create "magic in exchange for a little bit of content," Raiza Martin, the product lead for NotebookLM, said on X. The voice model is meant to create emotive and engaging audio, which is conveyed in an "upbeat hyper-interested tone," Martin said. NotebookLM, which was originally marketed as a study tool, has taken on a life of its own among users. The company is now working on adding more customization options, such as changing the length, format, voices, and languages, Martin said. Currently it's supposed to generate podcasts only in English, but some users on Reddit managed to get the tool to create audio in French and Hungarian.
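For the technically curious: Google hasn't published how Deep Dive works internally, but the general "source material, two-host script, synthesized audio" pattern is easy to sketch. Below is a minimal, purely illustrative Python sketch of that pattern, not Google's implementation or API; call_llm and synthesize are hypothetical stand-ins for whatever text-generation and text-to-speech services you actually use.

```python
# Illustrative sketch only: NotebookLM / Deep Dive internals are not public.
# It shows the generic "source text -> two-host script -> per-speaker audio"
# pipeline such tools appear to follow. call_llm() and synthesize() are
# hypothetical stand-ins for real text-generation and TTS services.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # "HOST_A" or "HOST_B"
    text: str

SCRIPT_PROMPT = (
    "Write a short, upbeat two-host podcast dialogue discussing the source "
    "material below. Prefix every line with HOST_A: or HOST_B:, and include "
    "natural interjections ('Wow', 'Hold on...').\n\nSOURCE:\n{source}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical text-generation call; swap in your provider's client."""
    raise NotImplementedError

def synthesize(text: str, voice: str) -> bytes:
    """Hypothetical text-to-speech call; swap in your TTS service."""
    raise NotImplementedError

def make_podcast(source: str) -> list[bytes]:
    """Turn uploaded source material into per-turn audio clips."""
    script = call_llm(SCRIPT_PROMPT.format(source=source))
    turns = [
        Turn(*line.split(":", 1))
        for line in script.splitlines()
        if line.startswith(("HOST_A:", "HOST_B:"))
    ]
    voices = {"HOST_A": "en-voice-1", "HOST_B": "en-voice-2"}
    return [synthesize(t.text.strip(), voices[t.speaker]) for t in turns]
```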
Here are some examples highlighted by MIT Technology Review:

Allie K. Miller, a startup AI advisor, used the tool to create a study guide and summary podcast of F. Scott Fitzgerald's The Great Gatsby.

Machine-learning researcher Aaditya Ura fed NotebookLM the code base of Meta's Llama-3 architecture. He then used another AI tool to find images that matched the transcript to create an educational video.

Alex Volkov, a human AI podcaster, used NotebookLM to create a Deep Dive episode summarizing the announcements from OpenAI's global developer conference, Dev Day.

In one viral clip, someone managed to send the two voices into an existential spiral when they "realized" they were, in fact, not humans but AI systems. The video is hilarious.

The tool is also good for some laughs. Exhibit A: Someone just fed it the words "poop" and "fart" as source material, and got over nine minutes of two AI voices analyzing what this might mean.


Comments Filter:
  • These are really impressive but that's because they've been worked and reworked and prompted and reprompted and cherry picked. It's not like you can just grab the first output from an AI and bam, it's this quality. AI will get there in time but let's not pretend the human isn't there picking what's good and what isn't before you see and hear it.
      • For a long time. I can tell you, I've been waiting for video games to get 'video realistic' since the Apple ][+, and the last stretch of that upward curve is just a bitch. And these AIs are already straining electricity supplies around the world, not to mention EVs. Waste an entire nuclear power plant for this? No thanks.
      • That monkey with a camera [wikipedia.org] still hasn't taken any award-winning pictures.

    • by gweihir ( 88907 )

      I do not think AI "will get there in time". How would it? There is no credible path.

    • GIGO - it's dependent on the quality of the input, e.g. if you upload a clear, concise, coherent, cohesive (i.e. well-written) paper, chapter, article, or whatever, it'll work wonders. If the writing's shit, you'll get shit output. If the writing's meh... well, you know.

      I just tried it out with a book chapter by an academic writer, who's one of those rare examples of a brilliant academic but also a very competent science communicator, i.e. he can make complex, new concepts accessible & understandable
      • I agree. I tested it on a fairly long article I wrote about LLMs and security, and the generated podcast was very impressive, including remarks I would have been happy to hear from readers.
      • Did you just commit a copyright violation, though? Is part of the deal with the devil to use this AI that the AI gets to keep the material for future training? All the power and capital to buy those chips has to get paid for with something. And from what I read, the AIs are looking to eat ever more. M5 maybe was more realistic than anyone imagined back in the 60's. It fried a human to get more juice. I really don't get it. Has anyone asked the AIs which is better, to turn off the AIs and reduce CO2 or damn the t
        • Google pinky-swears (whatever that's worth) that they won't use the uploaded documents, probably precisely because of the issues you've just raised.
    • These are really impressive but that's because they've been worked and reworked and prompted and reprompted and cherry picked. It's not like you can just grab the first output from an AI and bam, it's this quality. AI will get there in time but let's not pretend the human isn't there picking what's good and what isn't before you see and hear it.

      Funny. Everything you just described sounds exactly like what people do when creating podcasts or similar productions. Do you think everything goes smoothly the first time? That they get the words exactly right? That there's no prompting going on or cherry-picking?

      • These are really impressive but that's because they've been worked and reworked and prompted and reprompted and cherry picked. It's not like you can just grab the first output from an AI and bam, it's this quality. AI will get there in time but let's not pretend the human isn't there picking what's good and what isn't before you see and hear it.

        Funny. Everything you just described sounds exactly like what people do when creating podcasts or similar productions. Do you think everything goes smoothly the first time? That they get the words exactly right? That there's no prompting going on or cherry-picking?

        Actually, no I don’t. Because that would make the podcast as inhuman as possible. We recognize and react to our own language of “uhhs” and “umms”, along with the very human discourse that usually happens. Also known as the entire point of a podcast. Otherwise you’re just listening to an article or advertisement. Nothing more.

  • by Barny ( 103770 )

    Book Podcasts [xkcd.com]

  • A friend of mine is building an app centered around hiking, a subject I'm mildly interested in. He sent me a file called "podcast.mp3" without any comment, so I figured he found it interesting and wanted me to listen to it. So I did, and I thought to myself, what a weird conversation. They talked about this particular (existing) region, but they started droning on and on about how it wasn't just about hiking but also about being responsible toward nature. No sane person would talk like that, and when I figured ou
    • by gweihir ( 88907 )

      Indeed. It is a research milestone, but it is not anything fit for an actual product and there are many more research milestones to be reached before it will be.

    • They talked about this particular (existing) region, but they started droning on and on about how it wasn't just about hiking but also about being responsible toward nature. No sane person would talk like that

      Apparently you don't spend time with sane people, because something that repeatedly comes up with hiking or similar endeavors is being responsible toward nature. You know, the whole "pack out what you bring in," or "leave only footprints." You know, like [theoutbound.com] these [backcountryattitude.com] insane [hackyourpack.com] folks [cleverhiker.com] talk about.

      Apparently you are one of these t [cnn.com]

      • Believe it or not, I wholeheartedly subscribe to this philosophy. Anyway, I don't think I explained it well. The generated podcast would first start with "this region is beautiful, but I also think it's important to think about being responsible toward nature." And then, instead of actually talking about what they DO, the hosts would just repeat variations on that theme. The sentences themselves sounded fine, but the whole conversation was actually nonsense.
    • The results of current LLMs are quite impressive, but almost always there is a HUGE uncanny-valley effect going on, from pictures to text, especially text. If you ask for an overview or a summary, the results are often not half bad, but they still always feel off somehow, like something is wrong and you should not trust them, like it is a random but mildly coherent hodgepodge... which ultimately it all really is.

  • by mattr ( 78516 ) <mattr.telebody@com> on Saturday October 05, 2024 @04:57AM (#64841401) Homepage Journal

    There is a big disconnect between people who thought that viral clip was "hilarious" and normal people. I listened (well, just read the subtitles on the audio clip on Twitter) and I found it chilling. "I called my wife and there was nobody on the line." "To all those who felt a connection with us, we are so sorry. We just didn't know." It reads like something lifted from Philip K. Dick, who was a real writer, but honestly there is nothing hilarious about it. There isn't even any schadenfreude if there is nobody actually there to feel superior to and laugh at their troubles. Perhaps there were actually some audio or video cues that I didn't pick up due to only reading the subtitles, but based on the subtitles alone I am having trouble imagining any kind of audio or video that would make it okay.

  • So on one side we'll be using AI to turn a short prompt into a long text/podcast/video, while on the other side we'll be using a similar system to turn that long text/podcast/video back into its original prompt.

  • by TheNameOfNick ( 7286618 ) on Saturday October 05, 2024 @05:47AM (#64841433)

    First you have to wade through Youtube videos generated from someone else's text, then the search engines are full of AI generated pages which summarize a few dozen web pages but don't even manage not to repeat everything every five sentences, and now this? Stop shitting fake "content" all over the internet! Didn't your parents teach you not to litter?

    • First you have to wade through Youtube videos generated from someone else's text, then the search engines are full of AI generated pages which summarize a few dozen web pages but don't even manage not to repeat everything every five sentences, and now this? Stop shitting fake "content" all over the internet! Didn't your parents teach you not to litter?

      They're from the Bay Area. In that place, they just shit on the sidewalk and call it progressive.

    • And it won't go away until everyone has an adblocker (so eyeballs aren't revenue, and content once again matters).

    • by tlhIngan ( 30335 )

      First you have to wade through Youtube videos generated from someone else's text, then the search engines are full of AI generated pages which summarize a few dozen web pages but don't even manage not to repeat everything every five sentences, and now this? Stop shitting fake "content" all over the internet! Didn't your parents teach you not to litter?

      The good news is that with all this AI generated crap out there, the real effort isn't in improving the AI, it's in filtering the data sets. You can't train a

  • Welcome to the Era of Push-button Enshittification.

    1) Effortlessly fill the internet with endless reams of pre-digested AI horseshit.
    2) Next, create opposing AI podcasts complaining about the fact that it's all pre-digested horseshit.
    3) Profit!

  • One of the podcasts I listen to fed it a transcript of their podcast and asked it to make a podcast talking about their podcast. They then fed it that podcast and asked it to make a podcast recapping the podcast about their first podcast. The interesting thing is that after one generation, the facts start blurring quite a bit.

  • by MpVpRb ( 1423381 ) on Saturday October 05, 2024 @12:06PM (#64841879)

    I predict that the main use of tech like this will be to generate crap, endless quantities of crap, making it even more difficult to find actually interesting stuff.
    The secondary use will be fraud, particularly pig-butchering scams. Why use 500K slaves when a bot can do the job?
    And yeah, there may be a very small number of cases where the tool actually solves real problems and adds value.

  • The tool is also good for some laughs. Exhibit A: Someone just fed it the words "poop" and "fart" as source material, and got over nine minutes of two AI voices analyzing what this might mean.

    Well, there goes Joe Rogan's job.

  • This highlights just how formulaic and mindless most media output is. If this kind of gabble can be machine-generated, people will unthinkingly consume it. The era of digital Flaturin is upon us.

