Google AI

Google To Relaunch Tool For Creating AI-Generated Images of People

Google announced that it will reintroduce AI image generation capabilities through its Gemini tool, with early access to the new Imagen 3 generator available for select users in the coming days. The company pulled the feature shortly after it launched in February when users discovered historical inaccuracies and questionable responses. CNBC reports: "We've worked to make technical improvements to the product, as well as improved evaluation sets, red-teaming exercises and clear product principles," [wrote Dave Citron, a senior director of product on Gemini, in a blog post]. Red-teaming refers to a practice companies use to test products for vulnerabilities.

Citron said Imagen 3 doesn't support photorealistic identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes. "Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we'll continue to listen to feedback from early users as we keep improving," Citron wrote. "We'll gradually roll this out, aiming to bring it to more users and languages soon."
  • by jrq ( 119773 ) on Wednesday August 28, 2024 @05:46PM (#64744982)
    ...got tired real quick.

    Bring back search, you dolts!
    • by Tablizer ( 95088 )

      I wish they'd just be honest and say, "We can't compete against other evil companies if we are not evil ourselves."

      Not sure I entirely agree that's true, but it would at least be honest about their reasoning.

      Anyhow, I'm sure there are new glitches in the revised gizmo to laugh and cringe at. Let's see how Google falls the second time around. Hey, new slogan:

      Google: Fail Different

    • People keep complaining that Google search sucks now, but they never give examples of their search terms and results (screenshots).

      • ... because they don't use Google search, because it sucks?

        (I don't go on about how "Google search sucks". I just don't use it. My browsers are set to use DDG for search. My phone has the DDG app installed, though some shite from my phone manufacturer manages to go direct to Bing for some reason; not that I use the phone manufacturer's recommendations if I can possibly avoid it.) I guess the greatest fear for Google is being considered irrelevant.

        Also - what is this "online advertising" thing? Ad-Blocker.

        • So you claim Google search sucks but can't provide any objective evidence that your claim is true. If you don't use it anywhere, how can you say it isn't good?

          • Typo aside, I'll repeat (for the hard-of-reading): 'I don't go on about how "google search sucks".' I just don't use them.

            I know that other people are vociferous about how they don't use Google search "because (in their humble, or even well-founded, opinion) Google search sucks". That's their opinion, to which they have a right. It's not my opinion, or my reasoning.

            Since I don't use them, and don't go on about it, why would I need to collect statistics and results about a product that I don't use?

      • by Kartu ( 1490911 )

        Welp, a couple of days ago, for the first time in my life, I tried Bing instead of Google.

        I searched for something rather basic regarding the next football match of a certain nation.

        A lot of search nowadays goes straight to ChatGPT.

        Google's dominance is no longer as solid and untouchable as it seemed.

  • Both of which were a common complaint.
  • I remember a day sometime in 2001-2003, when I was talking with my schoolmates, and I mentioned that I had the feeling Google's search results had suddenly gotten much worse: less diverse, less relevant... And my schoolmates confirmed they had noticed the same. Something broke back then; something happened to the internet.
    • They overtook Altavista Search as the popular engine. There was no longer a reason to be good, just remain popular.
      • That's what I'd guess, too. I noticed a lot of bad shit happened in 2003. The Alpha EV7 was canceled for good. SGI under the leadership of Rick Belluzzo started to flounder and make terrible decisions, like trying to make Intel-based workstations and Itanium-based servers (both of which failed miserably). Shit man, if you are going to fail doing stupid shit, we could have at least gotten a Fuel2 or Tezro2 instead. Intel was still convinced in 2003 that the Itanic was going to sail, but in 2004 they licensed AMD64
    • I'm convinced Google's entire business model is to make search suck as much as possible and only have their algorithms provide results for things which are already popular. That way, anyone new hoping to get noticed has to buy paid ads on Google.

  • Every one of these AI stories claims the AI can't do this and won't do that. We are not stupid. We have already seen that these measures can be subverted by pedophiles, criminals, hackers, and teenagers. It's almost a joke. The only way to prevent anti-social behavior is to end AI completely, which is impossible, or to embrace the new paradigm of disinformation. From this point forward, nothing we see, touch, taste, or smell is real. Sounds like the Matrix to me.
    • You're right that corporate controls may not (currently) be able to corral ALL the nutty stuff LLMs do ... but when you then try to argue that means either disinformation or the death of AI, you lose me.

      It's like arguing "locks don't always stop all criminals, so we can only have anarchy or a police state". As a matter of fact, our society does just fine with locks protecting most doors ... even though criminals do pick locks sometimes.

      Similarly here, LLMs can continue to exist, despite the fact that "guardrails" are imperfect.

    • In the physical world, our laws can't stop people from shoplifting, or abusing children, or being just plain stupid. We don't respond to this impossible situation by throwing up our hands and doing nothing, because that would be even worse. So as imperfect as the guardrails are, we still need to have them, and to keep improving them, forever. That's how guardrails or laws work: we won't ever get it "perfect", but we have to keep trying.

      • In the physical world, our laws can't stop people from shoplifting, or abusing children, or being just plain stupid. We don't respond to this impossible situation by throwing up our hands and doing nothing, because that would be even worse. So as imperfect as the guardrails are, we still need to have them, and to keep improving them, forever.

        Laws are based on the consent of the governed and apply consequences to actions. People don't put locks on their doors because the law says so. They do it because they understand the difference between the state's legal system and reality.

        You could run around trying, at great expense, to prevent items such as knives, bats, guns, cars, and cast-iron skillets from being capable of killing people. However, just because society has reached a consensus that murder should not be allowed does not imply it is also willing to

        • Nothing you said about AI guardrails distinctly applies to AI. Corporations behave this way in everything they do, not just in their deployment of AI.

          No, of course you don't "have" to keep trying to make everything safer. But there are consequences for not doing so.

          Originally, cars didn't require keys of any kind. If you could crank the engine (with an actual crank), you could start it up and potentially steal the car. But as cars became more popular, people and corporations began to realize that they needed to add ignition keys. Crooks got smarter and figured out ways to hot-wire cars, even if they didn't have a key. So manufacturers added computer chips to keys. Criminals figured out how to defeat the chips. So manufacturers added electronic engine immobilizers. Notably, Hyundai failed to add engine immobilizers when the rest of the industry did so, and suffered significant setbacks as a result.

          AI guardrails work like this too. It will always be an arms race. With each iteration, the crooks will get smarter, and the AI manufacturers will make their defenses more robust. This is how it always works.

            Nothing you said about AI guardrails distinctly applies to AI.

            Why does it matter? What is the relevance?

            Corporations behave this way in everything they do, not just in their deployment of AI.

            Again, what is the relevance?

            No, of course you don't "have" to keep trying to make everything safer. But there are consequences for not doing so.

            Nothing objective is being communicated with these remarks. There are consequences for all action and all inaction. See also: the Serenity Prayer.

            Originally, cars didn't require keys of any kind. If you could crank the engine (with an actual crank), you could start it up and potentially steal the car. But as cars became more popular, people and corporations began to realize that they needed to add ignition keys. Crooks got smarter and figured out ways to hot-wire cars, even if they didn't have a key. So manufacturers added computer chips to keys. Criminals figured out how to defeat the chips. So manufacturers added electronic engine immobilizers. Notably, Hyundai failed to add engine immobilizers when the rest of the industry did so, and suffered significant setbacks as a result.

            AI guardrails work like this too. It will always be an arms race. With each iteration, the crooks will get smarter, and the AI manufacturers will make their defenses more robust. This is how it always works.

            As the OP pointed out, this is nonsense: there is no arms race, there is only futility. There is no known way to make models robust against augmentation, even if you could design a model that was sufficiently resistant to social engineering, with front-end-only access, for it to have materially mattered.


            • You really have bought the hype, haven't you?

              The truth is, every single AI product out there, is created by people. LLMs aren't *actually* intelligent. If you think the output of AI can't be shaped and guardrails applied, try this sarcastic AI: https://flowgpt.com/p/sarcasti... [flowgpt.com] AI is software--it's fancy software, but still software--and it can be made to do what humans want it to do.

  • by Powercntrl ( 458442 ) on Wednesday August 28, 2024 @06:55PM (#64745124) Homepage

    I recently had ChatGPT write a song about Musk tilting at windmills, as a reference to him vowing to fight the "woke mind virus". I wanted some funny artwork to go along with the song; I was thinking something like Musk riding in a Tesla Roadster with a big lance, charging towards a windmill. Nope, couldn't get any of the image generators to do that, because apparently we're not allowed to poke fun at celebrities anymore.

    Remember when Saturday Night Live used to take celeb pictures and do a really bad chroma keying effect to make it look like their lips moved when they spoke? Suddenly now that you can do the same thing but slightly better with a computer, it's no longer okay.

    The stupid thing is, I'm not really that bad at Photoshop, and if I really did want the silly Musk image, I certainly could do it myself. AI image generation is just a less time-consuming means to the same end.

    • Remember when Saturday Night Live used to take celeb pictures and do a really bad chroma keying effect to make it look like their lips moved when they spoke?

      Yes. I also remember folks in the 1970s and 1980s would take photos of celebs and politicians, tear out the mouth part of the photo, then use the lips to create stop-motion videos of them saying X, Y, or Z. I remember folks used to worry about stop-motion features in VHS cameras because of that stuff.

      • by mjwx ( 966435 )

        Remember when Saturday Night Live used to take celeb pictures and do a really bad chroma keying effect to make it look like their lips moved when they spoke?

        Yes. I also remember folks in the 1970s and 1980s would take photos of celebs and politicians, tear out the mouth part of the photo, then use the lips to create stop-motion videos of them saying X, Y, or Z. I remember folks used to worry about stop-motion features in VHS cameras because of that stuff.

        The problem is, that shit used to take actual talent to do... Which cost money.

        Now it can be done for minimal cost with a computer and zero skill.

    • by AmiMoJo ( 196126 )

      Might be a specific Elon Musk thing, because he is the number 1 choice for crypto scammers. Twitter is full of deep fake video of Musk hawking shitcoins and other scams.

      • by tlhIngan ( 30335 )

        Might be a specific Elon Musk thing, because he is the number 1 choice for crypto scammers. Twitter is full of deep fake video of Musk hawking shitcoins and other scams.

        Social media is full of it. YouTube, Facebook, etc are all filled with hacked accounts featuring nothing but Musk and Tesla hawking crypto scams.

        Why, I really don't know - scammers somehow hijack some high-profile account with a lot of followers, and all the content gets replaced by a Musk deepfake with some crypto scam or another.


  • Flux is way better and open source models are unencumbered by ideology.

    • There is no such thing as "unencumbered by ideology." Everyone, and every AI tool, has a particular slant or perspective. When it comes to AI, this "slant" is the result of the specifics of the training set, and the software that harnesses that model.

      • There is no such thing as "unencumbered by ideology." Everyone, and every AI tool, has a particular slant or perspective. When it comes to AI, this "slant" is the result of the specifics of the training set, and the software that harnesses that model.

        There are literally hundreds of AI models and adaptations to pick from, all in common formats compatible with the same rendering software and front ends. The point of the "unencumbered" statement is that there are wide-ranging choices not dictated by the ideology of a single corporation.

        • Sure, and how do you pick which AI model you are going to use? You will make that choice based on some set of criteria that are important to you, and that is where the slant comes from.

  • Finally, Slashdotters can generate images of Natalie Portman naked and petrified and covered in hot grits! *sound of person striking a guardrail at 55mph*
  • So Google is determined to create a vast supply of junk data to feed its AI, so it can produce more fakes to supply more junk data, until the internet looks like one giant over-copied Xerox sheet?

  • Google demonstrated that we can't even begin to trust them. Their new fake thing isn't going to help them overcome that.
