Meta's New 'Movie Gen' AI System Can Deepfake Video From a Single Photo (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand. The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to "enhance their inherent creativity" rather than replace human artists and animators. The company envisions future applications such as easily creating and editing "day in the life" videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta's previous work in video synthesis, following 2022's Make-A-Video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos. [...] Movie Gen's video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.
You can view example videos here. Meta also released a research paper with more technical information about the model.

As for the training data, the company says it trained these models on a combination of "licensed and publicly available datasets." Ars notes that this "very likely includes videos uploaded by Facebook and Instagram users over the years, although this is speculation based on Meta's current policies and previous behavior."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by slowdeath ( 2836529 ) on Friday October 04, 2024 @04:25PM (#64840551)

    Great. We have finally been able to get to the point where nothing you see online is real. It will all be AI generated. Congratulations to us. We have just destroyed the human race and given in to our robot overlords.

    • by Rinnon ( 1474161 ) on Friday October 04, 2024 @04:35PM (#64840579)
      We're just moving the internet from being an historical non-fiction to being an absurdist semi-fiction, where there's just enough reality to make everything tangentially less real.
      • We're just moving the internet from being an historical non-fiction to being an absurdist semi-fiction, where there's just enough reality to make everything tangentially less real.

        Yeah. Including humans.

    • Well, consider the benefits: I can easily have a full Facebook profile to the envy of all my followers or friends or subscribers or whatever, without lifting a finger or sharing anything about myself.

      Complete privacy.

  • because people don't expect movies to have an interesting plot anymore.

  • What a gift! (Score:2, Insightful)

    by gillbates ( 106458 )

    What a wonderful gift to vindictive ex-spouses and divorce attorneys everywhere! Who wouldn't believe your ex cheated on you, why, when we have the video right here?

    Just because you can do something doesn't mean you should. There have always been ways of faking photographs, etc., but those generally required advanced skills and substantial amounts of time and money. This makes it possible for anyone with a grudge and a lawyer to use the legal system to punish anyone they dislike.

    • Re:What a gift! (Score:5, Insightful)

      by Rinnon ( 1474161 ) on Friday October 04, 2024 @05:25PM (#64840665)
      You know who's going to be making bank here? The digital forensic analysts that law firms will have to hire to provide an expert opinion on the validity of any given piece of digital evidence, such as this. But don't worry! The law firms will pass along those costs to the litigants, they'll be ok!
    • Re:What a gift! (Score:5, Interesting)

      by test321 ( 8891681 ) on Friday October 04, 2024 @05:26PM (#64840667)

      I think civil lawsuits will not be the major problem, because
      1) videos produced by Meta models will be watermarked with their own Stable Signature https://ai.meta.com/blog/stabl... [meta.com] or a future equivalent. So if you're hit with a fraudulent filing, your attorney will suggest an expert examination, which will trivially reveal the fraud. This will happen naturally, as in a year or two everybody will know videos can be faked and expert examination is required.
      2) Nobody other than the very big actors (Meta, OpenAI, Adobe, cinema studios, etc.) can afford to train such a model, so there probably won't be a rogue (un-watermarked) model for small-scale fraudsters to download.

      A major problem could be fake news and political/business influence. Have your competitor (political or business) say certain stupid things, and leak it to a complacent social network (one that won't check AI watermarks). Even if the video is uncovered as fake, the damage is done. There's also a chance the video isn't uncovered as fake at all, because hostile foreign agencies can afford to train high-quality models that won't have easy flaws.
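[Editor's note: the watermark-verification idea above can be sketched in a few lines. This is not Meta's Stable Signature, which embeds an invisible signature through the image decoder itself and survives edits; it is a deliberately minimal least-significant-bit scheme, with made-up pixel values, that only illustrates how a verifier recovers hidden provenance bits from pixel data.]

```python
# Toy watermark sketch (NOT Stable Signature): hide provenance bits in the
# least-significant bits of pixel values, then read them back to verify.

def embed_watermark(pixels, bits):
    """Hide `bits` (list of 0/1) in the LSBs of the first len(bits) pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels, n_bits):
    """Read back the first n_bits LSBs as the provenance code."""
    return [p & 1 for p in pixels[:n_bits]]

signature = [1, 0, 1, 1, 0, 0, 1, 0]           # hypothetical provenance code
frame = [200, 13, 97, 54, 180, 33, 66, 91, 7]  # made-up 8-bit pixel values
marked = embed_watermark(frame, signature)
assert extract_watermark(marked, 8) == signature
```

A real scheme must survive re-encoding, cropping, and screenshots, which is exactly why naive LSB marks are not used in practice; the sketch only shows the verification flow an expert examination would automate.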

      • You're standing under an avalanche saying, "Look, it's just a little snow falling here where I am. As it falls it will shed off of my coat. Everything is fine."
      • This assumes that these models will remain the purview of big actors. For other AI generative systems, the pattern has been that the cost of generating a model of a given caliber has gone down drastically. Roughly speaking, it seems that about 2 years after a proprietary model comes out, an open model with nearly the same capabilities will be available.
  • by Anonymous Coward

    Porn

  • My friends will be shocked to find out how long I've been dating Emma Watson.

  • All the comments so far seem to assume this is the end of the human race (one literally claimed it). People also did the same "preacher's wife in The Simpsons" bit when photo and audio editing came out.

    But neither Photoshop nor Pro Tools have led to the end of trust in photos/audio. You might be a little more skeptical of things that seem fake ... but otherwise life goes on. The same thing is true here: we've had video editing for years now and society hasn't collapsed.

    Meanwhile, an entire generation of

    • by dfghjk ( 711126 )

      "But neither Photoshop nor Pro Tools have led to the end of trust in photos/audio."
      Because they are not content generators, they are content editors, and no one made the arguments for them you claim they did.

      "Meanwhile, an entire generation of incredible film makers from all over the world ... kids who can barely afford a computer,..."

      Sure, kids who cannot afford a computer are "incredible film makers" that will make the next "Apocalypse Now" despite being unable to afford access to the technology that will ena

  • by Tablizer ( 95088 ) on Friday October 04, 2024 @06:32PM (#64840783) Journal

    Current AI doesn't really model in 3D, but rather apes patterns it sees in 2D frames. There are lots of subtle things this approach gets wrong in terms of consistency and perspective.

    I suspect future AI will build a 3D model of the scene, not unlike for gaming, then map the movement of objects in the model, then re-render it as 2D frames. It'll still make stupid shit sometimes, but at least the 3D aspects will "add up".

    • by Tablizer ( 95088 )

      Addendum: making a 3D model first also allows human editors to more easily tweak it.

    • Current AI doesn't really model in 3D, but rather apes patterns it sees in 2D frames. There's lot of subtle things this approach gets wrong in terms of consistency and perspective.

      One of the ways to spot a deepfaked image of a person is to look at the eyes. Normally there will be a light spot in the same position on each eye, which is a reflection from whatever is lighting the scene. Image generators don't/can't take this into account so you'll often see the light spots in different places.
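[Editor's note: the highlight-consistency check described above can be sketched as follows. Real forensic tools model the reflection geometry and the light source; this toy version, operating on small made-up brightness grids, just finds the brightest pixel in each eye crop and compares positions.]

```python
# Rough sketch of the specular-highlight check: in a real photo, both eyes
# reflect the same light source, so the bright spot sits in roughly the
# same relative position in each eye crop.

def highlight_pos(gray):
    """Return (row, col) of the brightest pixel in a 2D brightness grid."""
    _, r, c = max((v, r, c)
                  for r, row in enumerate(gray)
                  for c, v in enumerate(row))
    return r, c

def highlights_consistent(left_eye, right_eye, tol=0):
    """True if the highlight sits in (roughly) the same spot in both eyes."""
    lr, lc = highlight_pos(left_eye)
    rr, rc = highlight_pos(right_eye)
    return abs(lr - rr) <= tol and abs(lc - rc) <= tol

left  = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]   # highlight at (1, 1)
right = [[10, 10, 10], [10, 250, 10], [10, 10, 10]]   # highlight at (1, 1)
fake  = [[240, 10, 10], [10, 10, 10], [10, 10, 10]]   # highlight at (0, 0)

assert highlights_consistent(left, right)       # plausible real photo
assert not highlights_consistent(left, fake)    # mismatched: deepfake tell
```

Note that newer generators increasingly get this detail right, so the heuristic is a red flag, not proof, in either direction.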

      • Not many people want perfect fidelity in their porn, so it's kind of a good thing these tiny inconsistencies can be included. Fakes will be more easily proven to be fakes, and we get our porn with less risk of it being used to harm anyone.
    • Hard to say... look at the top video at the meta site, of the swimming baby hippo(?) Look how it turns from one side profile to the other, swimming all the while, and looks awfully good.

      Or the dancing 'ghost' (in a bedsheet) twirling around and around, under "Generate videos from text."

      The inconsistency issue is real and probably will never be perfect, but if it's good enough...

      • by Tablizer ( 95088 )

        Those sites likely feature the best videos and are thus not representative of average quality. I'm not claiming the problems will be apparent on *all* 2D-based generative approaches, only too many.

  • to this could be amazing. Being it's Meta, they will probably shit the bed but the technology itself could be nothing short of amazing.

    Imagine being able to feed your book and some pictures to represent various characters and creatures into this and it pops out a series for you. It would be incredible. It won't need to come up with a story or script, as you provide that. It would need to "read the book" to interpret the scenes for the movie and you would want to train it on images that went with a specific

  • Welcome to the dawning of the Age of Unreality.

  • by ndykman ( 659315 ) on Friday October 04, 2024 @09:40PM (#64841097)

    Granted there is value in showing that it can be done so that people can understand the value of having a reputable source, but since Facebook is already a mass of disinformation, I don't know how adding to the completely made up BS is helping in any way.

    Honestly, are we going to have to build a complete hardware and software ecosystem to establish a trust network for journalism? At this point, it seems so.

  • There are a whole bunch of reasons why this matters. The only thing one can reasonably do is identify when it doesn't matter, and have the internal fortitude to simply dismiss it.

    Nude fake - R or X - appears in the wild? "Not me." Move on. I know... easier said than done. But once it exists, you can choose whether to expend your mental health on it or not.

    When the fake is used as evidence of infidelity in divorce proceedings... then, it matters.

    I don't know what to think about the technical world anymore, a

  • by votsalo ( 5723036 ) on Saturday October 05, 2024 @06:40AM (#64841521)
    Was it AI generated and he posted it to see if anyone would notice?
  • Let's see this tech do something basic - convert 4:3 video into 16:9 and upscale it, in a way that is consistent and doesn't create fill that doesn't actually exist when the camera pans.

    Then, of course, I want a nude patch so I select who I want to see in simulated nudity.

    After that we can get into editing out laugh tracks and the dialog pauses to allow for them, adding in blood that's been kept out for a lower rating, etc.

    Then I want to be able to make random actor substitutions including voice and manneri

  • While it is easy to take an anti-Meta stance, we all know there will be others, currently researching, developing, and preparing much the same but competing generative AIs.

    To me, we haven't got to the bad shit yet; it's yet to come, once AI without oversight can start to interact with the physical world: posting videos, stories, and comments to each and every website, sending texts and emails, making phone calls. The average person has difficulty now being able to identify "fake"; the below average, yes that is 50%
