
Google's Text-To-Image AI Model Imagen Is Getting Its First (Very Limited) Public Outing (theverge.com) 22

Google's text-to-image AI system, Imagen, is being added to the company's AI Test Kitchen app as a way to collect early feedback on the technology. The Verge reports: AI Test Kitchen was launched earlier this year as a way for Google to beta test various AI systems. Currently, the app offers a few different ways to interact with Google's text model LaMDA (yes, the same one that the engineer thought was sentient), and the company will soon be adding similarly constrained Imagen requests as part of what it calls a "season two" update to the app. In short, there'll be two ways to interact with Imagen, which Google demoed to The Verge ahead of the announcement today: "City Dreamer" and "Wobble."

In City Dreamer, users can ask the model to generate elements from a city designed around a theme of their choice -- say, pumpkins, denim, or the color blerg. Imagen creates sample buildings and plots (a town square, an apartment block, an airport, and so on), with all the designs appearing as isometric models similar to what you'd see in SimCity. In Wobble, you create a little monster. You can choose what it's made out of (clay, felt, marzipan, rubber) and then dress it in the clothing of your choice. The model generates your monster, gives it a name, and then you can sort of poke and prod the thing to make it "dance." Again, the model's output is constrained to a very specific aesthetic, which, to my mind, looks like a cross between Pixar's designs for Monsters, Inc. and the character creator feature in Spore. (Someone on the AI team must be a Will Wright fan.) These interactions are extremely constrained compared to other text-to-image models, and users can't just request anything they'd like. That's intentional on Google's part, though. As Josh Woodward, senior director of product management at Google, explained to The Verge, the whole point of AI Test Kitchen is to a) get feedback from the public on these AI systems and b) find out more about how people will break them.

Google wouldn't share any data on how many people are actually using AI Test Kitchen ("We didn't set out to make this a billion user Google app," says Woodward) but says the feedback it's getting is invaluable. "Engagement is way above our expectations," says Woodward. "It's a very active, opinionated group of users." He notes the app has been useful in reaching "certain types of folks -- researchers, policymakers" who can use it to better understand the limitations and capabilities of state-of-the-art AI models. Still, the big question is whether Google will want to push these models to a wider public and, if so, what form that would take. Already, the company's rivals, OpenAI and Stability AI, are rushing to commercialize text-to-image models. Will Google ever feel its systems are safe enough to take out of the AI Test Kitchen and serve up to its users?

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Typing a few words into a command line and waiting for software to generate an image derived from other available images ... won't make you an artist.
    • Where do we draw the line? Are you still an artist if you produce your art in Photoshop?

      Do you think human art isn't derived from other available images?
      Do you think anything is truly original?
    • Re: (Score:2, Flamebait)

      No, but it'll be a lot of fun when what you type in is "Marjorie Taylor Greene being buggered by a goat with a QAnon brand on its fur, with a background light display of Jewish space lasers".
      • by dgatwood ( 11270 )

        No, but it'll be a lot of fun when what you type in is "Marjorie Taylor Greene being buggered by a goat with a QAnon brand on its fur, with a background light display of Jewish space lasers".

        I'm guessing that for legal reasons, any software like that will flat out reject any attempt to show any photo of a real person, but if you're really lucky, you might be able to pull off something like "Marjorie Taylor Greene as a Simpsons character being smothered by a pillow held by the cartoon Jesus from South Park." :-D

        • It's a good question- the legal aspect.
          Anyone know what the legality of such works is?

          Because ML can absolutely provide a photographic-quality image of MTG being fucked by a goat with a Q-Anon brand on its fur, with a background light display of Jewish space lasers.
        • That's part of the fun, you see. The horse has already left the barn. We can run these models locally. No need to beg for API access. No need to worry that you get an algorithmic scarlet letter from making fun of MTG's antisemitism (well, not from just running the model at least).

    • I think the big application of this stuff is video games and movies. Everywhere that the rate of artist content creation is actually an expensive bottleneck.

      The classic video game Elite used a procedurally generated universe. Movies still use Terragen.

      If they can get queries to these diffusion networks fast enough, or make the networks small enough....
      • I've seen several AI-upscaled texture improvement mods for games... And frankly, they're fucking amazing.
        It's only a matter of time before the textures really are generated with nothing but a text description.
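The parent's point about Elite is worth unpacking: the game stored no galaxy data at all and re-derived every star system on demand from a fixed seed. A minimal sketch of that idea (illustrative only, not Elite's actual Fibonacci-sequence generator; all names and fields here are made up):

```python
import random

def generate_system(galaxy_seed: int, index: int) -> dict:
    """Deterministically derive a star system from a seed, Elite-style.

    The same (galaxy_seed, index) pair always produces the same system,
    so nothing needs to be stored -- the galaxy is recomputed on demand.
    """
    # Fold both inputs into a single int seed (random.Random requires
    # an int/str/bytes seed; a large prime keeps indices from colliding).
    rng = random.Random(galaxy_seed * 100_003 + index)
    syllables = ["la", "ve", "di", "so", "za", "ti", "qu", "re"]
    name = "".join(rng.choice(syllables) for _ in range(3)).capitalize()
    return {
        "name": name,
        "economy": rng.choice(["Agricultural", "Industrial", "High Tech"]),
        "population": rng.randint(1, 60) * 100_000_000,
    }

# Re-deriving the same system twice yields identical data -- zero storage.
assert generate_system(42, 7) == generate_system(42, 7)
assert generate_system(42, 7) != generate_system(42, 8)
```

A diffusion model used the same way would be a (much more expensive) seeded function from prompt to asset, which is why query latency and model size are the bottlenecks the parent mentions.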
    • 'print twenty dollar bill' - now if that worked I might be interested.
      • 'print twenty dollar bill' - now if that worked I might be interested.

        They print just fine. It’s the error handling that’s a big problem.

    • Typing a meaningless opinion into Slashdot won't make you a pundit, either.

    • Oh no! My goal all along was to convince Slashdot first posters that I was a real artist and all ;(

    • Typing a few words into a command line and waiting for software to generate an image derived from other available images ... won't make you an artist.

      Neither will smearing some oil paint on a canvas, so that's fine.

    • Typing a few words into a command line and waiting for software to generate an image derived from other available images ... won't make you an artist.

      Of course it won’t, you have to win an art contest [nytimes.com] first.

    • My hobby is typing a few words into a command line and waiting for software to generate an image derived from other available images.
  • by niff ( 175639 ) <{moc.liamg} {ta} {kciretfinnavretuow}> on Thursday November 03, 2022 @01:52AM (#63021107) Homepage

    For Google to kill it?

    Queue it up here already:
    https://killedbygoogle.com/ [killedbygoogle.com]

  • That's closer to art than anything I ever saw John Cage create.
