AI Technology

Getty Images Sues AI Art Generator Stable Diffusion in the US For Copyright Infringement (theverge.com) 98

Getty Images has filed a lawsuit in the US against Stability AI, creators of open-source AI art generator Stable Diffusion, escalating its legal battle against the firm. From a report: The stock photography company is accusing Stability AI of "brazen infringement of Getty Images' intellectual property on a staggering scale." It claims that Stability AI copied more than 12 million images from its database "without permission ... or compensation ... as part of its efforts to build a competing business," and that the startup has infringed on both the company's copyright and trademark protections. The lawsuit is the latest volley in the ongoing legal struggle between the creators of AI art generators and rights-holders. AI art tools require illustrations, artwork, and photographs to use as training data, and often scrape them from the web without the creators' consent.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • The images were downloaded (fair use) and processed into the training set. No copies were retained -- can't get the originals back from the model. Might not be what was intended by putting the images online, but hard for me to see how this violates copyright law as that law is written today. It'll be interesting to see this one play out in court.

    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Monday February 06, 2023 @03:53PM (#63270197)
      Comment removed based on user account deletion
      • by narcc ( 412956 ) on Monday February 06, 2023 @04:07PM (#63270261) Journal

        They're creating derivative works

        I don't think so. Ignoring how the technology works for a moment, if you didn't know a work was generated, could you put it side-by-side with one or more training images and make the same "derivative work" claim? I highly doubt it.

        That, to me, is the only reasonable test.

        The only implied license was the right to view it as part of the webpage it was on.

        Did they really do more than that? That's not an argument I would want to defend. Do artists sometimes look at Getty images? If so, what's the difference here? If anything, the AI retains significantly less information about any individual image than the artist does.

        • by account_deleted ( 4530225 ) on Monday February 06, 2023 @04:11PM (#63270287)
          Comment removed based on user account deletion
          • by narcc ( 412956 )

            Yeah, I know what a derivative work is. To make that claim, however, you'd need to demonstrate that one work is derived from another. That can only happen on the basis of similarity.

            If you want to argue from the technology, that's fine with me. You're going to find yourself on the losing side of that, however, as the derivative work claim is really, really weak.

            You're kidding, right?

            Nope. That's just reality. Maybe you should learn something about the technology before making grand pronouncements?

            • There are other ways to demonstrate that it's derivative. In this case, they might use the test of "if the material in question were expunged from the training set, would it still generate the same result?" -- which would be a no: the current result relies on the source. I'm personally on the fence about this fight; it's obvious the results are not even close to the originals, but it's also obvious that without scraping their image libraries the "AI" wouldn't have any sort of effective dataset.
              • by taustin ( 171655 )

                The legal question here is whether the end result is a derivative work - which requires permission - or a transformative work - which does not.

                Since the technology involved has never existed before, there's no precedent to rely on. Guidance can be taken from other, similar cases - Google's scanning of millions of books and putting short snippets up in their search engine was ruled transformative, and thus legal, for instance - but that cuts both ways.

                It's a question with no clear answer until the courts get

                • AI is just a calculation with no creativity. Unless you can demonstrate that the AI used creativity in the work, it will always be derivative. You can only prove creativity if you can prove it derived from something that was never part of the training set at all.
              • by narcc ( 412956 )

                In this case, they might use the test of "if the material in question is expunged from the training sets would it still generate the same result" which would be a no...

                I wouldn't bet on it. Virtually no information from any individual image in the training set is retained. There simply isn't room in the model! It's basic math.

              • by catprog ( 849688 )

                Can you show the part of the model that uses the original image so it can be removed?

          • by Anonymous Coward

            Suppose a human student has learned to paint by studying pre-existing paintings. Can we say that now, when he is an accomplished painter in his own right, his output is based on those existing paintings and that every painting of his is a derivative work of those pre-existing paintings? Now replace the student with an AI, and point out the differences between the two scenarios.

          • > You're kidding, right?

            No, not kidding. Indeed, the whole crux of the argument from the pro-AI side rests on this exact thing: that what a human artist retains in their head about an image is the same kind of information the AI is retaining, although the human stores it in a different format and in different quantities. There are human schools where students spend days reverse engineering how the brush strokes of a painting are done. There are plenty of people who have done similar for digital ar

            • There are human schools where students spend days reverse engineering how brush strokes of a painting are done. There are plenty of people who have done similar for digital art to recreate a given brush stroke. The AI has *not* made a copy of the original work. It downloaded it for temporary viewing, analyzed it, and then threw the download away. The adjusted weights of the net are not the original image.

              FWIW, there are artist studio "factories" where students are trained to reproduce the masters' work brushstroke by brushstroke, and it is sold as original work by the studio, sometimes even as if it were work from the master. I became aware of this when studying Hudson River School artists. I've lost the reference, but at least one of them had this kind of art factory running, reproducing his works with biological machine-learning systems.

              When the students started painting on their own, the imprint o

          • by Spekdah ( 804218 )
            It's not an amalgamation of images; there are no images stored in the diffusion model. It contains text descriptions and weightings. It's more akin to a student reading textbooks and then being able to recommunicate the knowledge if asked (prompted). It will be interesting to see how that gets argued as derivative work.
            • That's like saying a photocopy is not a copy but merely an impression of ink that a machine has made on paper by observing the original work.
          • by mysidia ( 191772 )

            Did they really do more than that? That's not an argument I would want to defend. Do artists sometimes look at Getty images?

            It doesn't matter, since they don't require a license of any kind in the first place to process data posted publicly -- Copyright doesn't protect against other people processing data you post publicly; just ask Google Image Search, TinEye reverse image search, etc. They have been training AIs off images for many years for the purposes of answering Search queries.

            There is not an

        • There are some special cases where this is not true. If you constrain the instructions to the AI engine such that the only possible answer is a very specific element of a training set, like an artist's signature, you will see something from the original artist. But this is not the fault of the AI or an indication that it is copying images; it is an artifact of how the AI interprets instructions on what to do.

          You get the same behavior with a human artist, trained on a set of images, and given a constrained
        • The various views expressed in this thread are all interesting, and probably arguable in court, but the only thing we really know for sure is that now that technology has changed the landscape for IP law, the vastly profitable corporations that benefit the most will need to get the laws changed so that they can continue to profit from IP laws.
          • I don't think it has changed IP law at all. IP law is a weighing of how much creativity and skill was applied between an original work and a new one. While it certainly takes skill to program an AI, the AI itself has no creativity or skill. It is merely a complex calculation. Its ability to produce results, as varied as those results may be, will always be bound completely by the set of images it has used to learn.

            If it has never seen a car in its training set it will never create a picture of a car,
            • IP law is a weighing of how much creativity and skill was applied between an original work and a new one.

              Except it's not really.
              IP law is something put together by a bunch of lawyers who work for vast corporations who profit from IP laws. Anytime it looks like those profits might slip, the lawyers write some new laws and pay the US government to enact them, then the US government puts pressure on everyone else to change their laws too in the interests of "consistency".
              That's why a bunch of songs that Mick Jagger and Keith Richards wrote in the 1960s won't come out of copyright until 2092 (and counting).
              I

        • Do artists sometimes look at Getty images? If so, what's the difference here?

          I look at a Getty image and then I consciously ensure that my inspired/referenced work is sufficiently different to not be a rip-off. Usually, the safest bet is to just not use Getty as reference at all and take the risk.

          • by narcc ( 412956 )

            We're a long way from something like a reference image! Some simple math shows that the average amount of information retained in the model from any given training image is just a few pixels worth or less.

            This is much more like you having once seen a Getty image.

            The idea that it's copying the training images or making derivative works is preposterous. This is as silly as claiming that anything an artist creates is a derivative work of anything they've ever looked at. It simply doesn't work that way.
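The back-of-the-envelope arithmetic behind the "few pixels" claim above can be sketched in a few lines. Note that the parameter count and training-set size below are ballpark public figures assumed for a Stable-Diffusion-scale model, not numbers stated anywhere in this thread:

```python
# Capacity argument (a sketch, on assumed ballpark figures): how many
# bytes *could* the model retain per training image, on average?
params = 890_000_000             # assumed: ~890M parameters (SD v1 scale)
bytes_per_param = 4              # float32 weights
training_images = 2_300_000_000  # assumed: LAION-scale training set

model_bytes = params * bytes_per_param
bytes_per_image = model_bytes / training_images

# One uncompressed RGB pixel is 3 bytes, so on average the model has
# room for well under one pixel's worth of data per training image.
pixels_equivalent = bytes_per_image / 3
print(f"~{bytes_per_image:.2f} bytes per image, ~{pixels_equivalent:.2f} pixels-worth")
```

This is only an average-capacity argument; heavily duplicated content in the training set (like a ubiquitous watermark) can still be disproportionately memorized.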

      • It's still a losing battle for Getty. All Stability did was use Getty for its training set. Makes sense -- high quality images with lots of tagging. But hardly a deal breaker if they have to source images elsewhere.
        • Maybe not. It might be decided by a jury of humans who would say to themselves, "If the law supposes that, then the law is a ass." -- Charles Dickens
      • The images are publicly available to view with varying licensing terms for commercial use. Viewing the images does not constitute copyright infringement, so why would scraping, processing, and deleting the images?

        The Getty shakedown process is basically taking advantage of this ability, only StableDiffusion doesn't need to retain or re-display the original images.

        Copyright law needs to evolve to address this, but as of today I think it is reasonable to label it as fair use.

      • Well, one major way is that your definition of "Fair use" is arbitrary, bizarre, and I suspect wouldn't stand up in court. Another is that even if one part turned out to be "fair use" if done by itself, that doesn't mean the entire thing is fair use.

        The download of images for training purposes is likely transformative, and therefore a fair use, similar to how Google was found to be engaged in a fair use when it downloaded copyrighted books en masse for Google Books. You should read Authors Guild, Inc. v. Google, Inc., 804 F.3d 202 (2d Cir. 2015) and Authors Guild, Inc. v. HathiTrust, 755 F.3d 87 (2d Cir. 2014). The training data issue is probably not a serious concern.

        They're creating derivative works

        That's the real question. Are they? Does the law protect the elements of the underlying

        • by mysidia ( 191772 )

          Consider an artist with a very distinctive style, such as Al Hirschfeld. Is his artistic style copyrightable separately from his creative works?

          No.. Artist styles are not copyrightable. Part of the protectable expression in a particular work can include the style it is made in - but a style or method on its own doesn't have copyright protection.

          All those so-called "Closed species" created by artists who want to charge other artists for the privilege of designing a character within their species are in

      • by mysidia ( 191772 )

        It seems open and shut to me. They're creating derivative works and selling them

        It's not open and shut. The plaintiffs will have to prove in court that they are creating and distributing a legally derivative work in order to succeed in that claim.

        Just using a prior work to create a new one does not make something derivative; it won't be derivative unless it is an adaptation or translation that substantially reproduces the original work -- for example an author referencing a prior work and utilizing som

    • by Junta ( 36770 ) on Monday February 06, 2023 @04:01PM (#63270229)

      The images were downloaded (fair use) and processed into the training set. No copies were retained - can't get the originals back from the model.

      I don't think "fair use" has a provision for something like this.

      Further, there's a lot of room for interpretation in the claim that no copies were retained. *Technically*, if you re-save as JPEG, you can't get the original back either. If you turn down the quality and lose detail, that copy is still considered potentially infringing. Visit the article and see their cited example: Stable Diffusion reproduces many specific parts of the source material, including the watermark, in this case combining it with other things.

      Later it shows a closer-to-verbatim reproduction of the Getty Images watermark on a screwed-up picture, which, if a human did it, would amount to attributing subpar pictures to the Getty Images brand.

      Training images into a model can be (and has been) considered a form of interesting, lossy compression. It may not resemble an encoding one would make for the purpose of faithful reproduction, but the fact that portions of the training set "leak through" in obvious ways supports their stance that this is a violation.

      • by taustin ( 171655 )

        I don't think "fair use" has a provision for something like this.

        That's for the court to decide. Fair use does cover "transformative" works, which Stability will argue their output is. Getty will argue it's derivative, and thus requires permission.

        It's a new technology, and there's no direct precedent available. SCOTUS will decide what definition applies today, and Congress will decide whether or not to change it.

        • by Junta ( 36770 )

          I think they have a very hard uphill battle for "transformative". In the case that established that criterion, it was transformative because the borrowed phrase proposed new meaning, and the use of the phrase was more satirical than an attempt to just reiterate the same thing.

          The AI output as cited is trying to make the same thing as the Getty Images source, but somehow not being that while basing it on that. It's right up the alley of a "knock-off", which is not seen as a transformative thing, despite the fact

      • *Technically* if you re-save as JPEG, you can't get the original back either. If you turn down the quality and lose detail, that copy is still considered potentially infringing.

        Making thumbnails of copyrighted images has been held to be fair use.

        • by Junta ( 36770 )

          Under what specific circumstance though? If it's making a thumbnail to indicate the original in the form of a link to the original, ok.

          If I change someone else's image to a thumbnail format, and claim that the thumbnail is my original work without any relation to the original, I'd be at risk.

          The AI output is not making something to facilitate summation in an attribution...

          • by vivian ( 156520 )

            No problem -- all the AI has to do is make available a list of URLs to all the source images that were ever used to train it.

    • by cob666 ( 656740 )
      Not sure if not retaining a copy absolves them of any wrongdoing here, as the images could be considered as being used for commercial purposes. Another strike against Stability AI is that the Getty Images logo is still visible on the AI-generated images, which makes it pretty obvious that the images were 'borrowed'.
      • > makes it pretty obvious that the images were 'borrowed'
         
          It makes it obvious that the strokes of the Getty logo were trained into the generator. Viewing a piece of publicly available art and learning from it is fair use for any human being. The AI tools are just a more codified form of learning. The image might fall afoul of trademark for use of the Getty logo, but not copyright.

    • More importantly, no images were redistributed as-is, and no claim of ownership of the as-is images was made in any way. There is established case law for collages, which is a very similar physical situation to this digital one. I do not see Getty winning this lawsuit.
    • No copies were retained -- can't get the originals back from the model.

      You can get portions of them back, though; there have been off-and-on examples of "generated" images that include watermarks from places like Getty!

      I am all for AI-generated images, but it does seem some models like Stable Diffusion are simply reproducing large portions of images wholesale, which does seem wrong to me, although I have to admit some modern art is collage, which reuses portions of images, and it seems like those are accepted

    • can't get the originals back from the model.

      Anyone who actually knew anything about AI image generation knew that this claim was total bullshit.

      And now we have proof:
      https://arstechnica.com/inform... [arstechnica.com]

      Is the number of memorized images low? Yes, but that's like saying "our film only infringed copyright in a couple of seconds out of a three-hour movie" -- it doesn't matter. It also calls into question to what degree everything else is memorized as well. Maybe not a full image, maybe just a pattern... still copyright infringement.

      • It depends upon where you live in the USA. The 9th and 6th Circuit courts have split on sampling, and as of today I do not know of any case that has resolved the matter at SCOTUS. But in some places, yes, a few seconds in a three-hour movie would be completely legal. Also, parody is largely allowed to borrow heavily from the original source material.
        https://www.ip-watch.org/2016/... [ip-watch.org]
        https://repository.law.uic.edu... [uic.edu]

        • Having said the above, to reply to your main point: yes, that discovery does raise problems for the defense. But I don't think it severs the defense. There doesn't appear to be rhyme or reason to which images were memorized, and they aren't exact. This, I would argue, is no different from an artist having memorized some works while learning how to paint. They didn't set out to record those images exactly, and it required extreme effort by the users to force 109 images (none of them exact) to be reproduced. T
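Taking the 109-image figure from the comment above at face value, the rarity argument can be made concrete with a quick calculation. Note that the total-generations figure below is an assumption about the scale of the kind of extraction study being discussed, not a number stated in this thread:

```python
# Hypothetical hit-rate calculation (a sketch, not sourced figures):
# near_copies comes from the parent comment; `generated` is an assumed
# scale for the extraction study being discussed.
near_copies = 109
generated = 175_000_000  # assumption: images generated while probing the model

rate = near_copies / generated
print(f"memorization hit rate: {rate * 100:.6f}% "
      f"(~1 in {generated // near_copies:,} targeted generations)")
```

On these assumed numbers, a near-copy shows up roughly once per 1.6 million targeted generations; whether that rarity matters legally is exactly the "couple of seconds out of a three-hour movie" dispute above.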

  • They aren't selling the images; they are using the (stolen) images to train their AI. When the camera at every Microsoft entrance takes my picture as I badge in to train their facial recognition AI, can I sue them for copyright infringement? Better question: does using an image to train an AI fall under fair use? It's about the same thing as quoting a single line out of a book, isn't it?
    • When the camera at every Microsoft entrance takes my picture as I badge to train their facial recognition AI, can I sue them for copyright infringement?

      The taker of the picture owns the copyright to the photo. So, in your case, that would be Microsoft.

    • by narcc ( 412956 )

      It's about the same thing as quoting a single line out of a book, isn't it?

      If we're talking about the amount of information retained by the model, significantly less.

      • by vux984 ( 928602 )

        If that were the case, the model wouldn't be spitting out a recognizable Getty Images watermark on some of its images.

        Your sense of 'what the model retains' isn't all that relevant when "the model" uses what it retains to reconstruct significant and obviously recognizable elements of the training data.

        In other words, it doesn't really matter if "the model" doesn't have more than fragments of sentences of any particular work literally stored.

        I can't quote lord of the rings at you either; so I guess my model

        • by narcc ( 412956 )

          If that were the case

          It is. You can easily do the math yourself.

          the model wouldn't be spitting out a recognizable Getty Images watermark on some of its images.

          That's simply not true. The Getty watermark appears in countless images. It is exactly the kind of thing you would expect to be reproduced.

          Your sense of 'what the model retains'

          Is based on fact. That you don't understand the technology doesn't change the reality.

          I can't quote Lord of the Rings at you either; so I guess my model retains no copyrighted work, right?

          What are you babbling about?

          I'd be sued into oblivion.

          No, you wouldn't. You seem to lack a basic understanding of both AI and copyright law.

  • by nonicknameavailable ( 1495435 ) on Monday February 06, 2023 @03:50PM (#63270181)

    The same Getty Images that demands money for pictures they do not own?

  • If they can claim a machine learning algorithm that's been trained on a mix of copyrighted (but openly available) and public domain imagery commits copyright infringement when it produces a work of art, how would the same not in principle apply to humans? Our brain takes in all it sees, including tons of copyrighted imagery, and outputs new artworks based on all that input.

    Conversely, if it's legal for a human to visit an art gallery or browse art sites to be inspired, why should the same principle not appl

    • Yes, that is why you do not have the Right to Read or view or listen until you've been injected with memory squelching nanites.

      The brain may or may not work similarly. It's relatively safe to say that the way the brain actually "stores information" is many orders of magnitude removed from ML models. Research has not gone that far, and this subject is another one of those "about 20 years away" hypothetical piles.

      • by Mascot ( 120795 )

        I get the impression you may have missed my point. It was not in how similar machine learning is to how the brain technically functions ("not even remotely", seems like an extremely safe bet), but the principle of it with regards to using copyrighted materials as inspiration for new artwork.

    • by TheMESMERIC ( 766636 ) on Monday February 06, 2023 @06:05PM (#63270647)

      that is what non-artists keep on spouting.

      Artists are not constantly gulping down images from the internet, but sketching scenery and drawing statues or live nude models -- often on a daily basis.

      The way they learn the craft is absolutely nothing at all like AI -- there is no "visual osmosis". Such an ignorant assumption.

      It's eye-mind-hand coordination training. It's not entirely intellectual but akin to a sport. You teach your eyes how to see; you teach your hands how to paint and draw; how to render values (light, shading). You constantly practice in order to become faster and more accurate (proportions, edges, lines). Then years of studying human and animal anatomy. Then colour theory and perspective.

      No internet scraping, no seeking "inspiration" .... just a few recommended books, sharing experiences and techniques with fellow artists, observing nature and the real world.

      They may have tutors, they may purchase expensive DVDs. Those are all paid for - not stolen - not free - thus supporting the community.

      That's how real artists learn.

      But as it goes, when people don't know about a subject they come up with utterly ignorant assumptions in order to validate their own arguments. Instead of shutting up.

      It's an ignorant echo chamber and non-artists as a group now think they are right .... with their agreed fallacy. No, you have no experience, you don't know what you're talking about. That is not how artists learn the craft at all.

      • My hat's off to you sir. Your post is like dropping a whoosh nuke.

      • by Mascot ( 120795 )

        You seem to be conflating the technical aspects of creating art with what I was talking about, which is the inspiration for what to create.

        Musicians listen to music, painters look at other people's paintings and drawings, writers tend to read a lot. Someone walks past a store window, sees anime playing, and with no conscious choice finds artwork they create weeks later to have an anime influence to them. They certainly did not "steal" anything from the creators of the anime they glimpsed.

        We all get inspirati

    • They'd most certainly sue the art galleries, and your grandmother, if they thought they could get away with it.

  • by thesjaakspoiler ( 4782965 ) on Monday February 06, 2023 @04:09PM (#63270271)

    Me :
    Explain to me how an AI generator called Stable Diffusion can illegally download millions of images from Getty Images if Getty Images publishes those pictures with watermarks online for everyone to see?

    ChatGPT :
    The act of using an AI generator such as Stable Diffusion to download millions of images from Getty Images, even if those images are published with watermarks, would likely be illegal. This is because it would likely involve the unauthorized use and reproduction of Getty Images' copyrighted works. Just because an image is marked with a watermark does not mean that it is freely available for use without obtaining a license from Getty Images or the copyright holder. Unauthorized downloading, reproduction, and distribution of copyrighted works can result in legal liability, including fines and damages.

  • by mspohr ( 589790 ) on Monday February 06, 2023 @04:23PM (#63270317)

    From the US Copyright Office
    https://www.copyright.gov/fair... [copyright.gov]

    Amount and substantiality of the portion used in relation to the copyrighted work as a whole: Under this factor, courts look at both the quantity and quality of the copyrighted material that was used. If the use includes a large portion of the copyrighted work, fair use is less likely to be found; if the use employs only a small amount of copyrighted material, fair use is more likely. That said, some courts have found use of an entire work to be fair under certain circumstances. And in other contexts, using even a small amount of a copyrighted work was determined not to be fair because the selection was an important part—or the “heart”—of the work.

    Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work.

    Since the AI Art Generator uses a very small portion of any copyrighted work and undertakes significant transformation, I would think this would be "fair use".
    (I am not a lawyer)

  • SD is developed by CompVis @ LMU Munich. Released by Stability, CompVis & Runway. (source: Wikipedia)

    Only Stability AI is named in the suit.

  • by Growlley ( 6732614 ) on Monday February 06, 2023 @05:36PM (#63270557)
    on bucket loads of images they had no right to?
  • Fact is, there is just about no prior law on this technology. The one thing we know is that it's disruptive to the status quo.
    That is going to be on lawmakers' and politicians' minds when they start legally reacting.

    In many cases, situations like this are mitigated until it can be seen in a better light.

    Mind you, AI art is not really disruptive to artists, as without an artist to filter it, most of what it outputs is either trash, porn, or trashy porn. Not to mention, it's not good with complex

  • One of the rare occasions I find myself supporting Getty.
    The article mentions that Getty is launching its own AI efforts.
    I'm sure *cough cough - read that EULA* photographers will be compensated when their images are used for training Getty's AI.
  • Getty pretends to "own" things they have no rights to, SELL or LEASE or RENT those rights to others they have no right to transact with on "properties" they have no rights to and THEY have the TEMERITY to sue an AI firm?

    FUCK Getty and FUCK Getty Images, and Getty if you don't like my opinion, come sue me. I have a trademark on suing me and you'll be violating it, you shit-eating stinky-ass pieces of lowlife scum pretending that "intellectual property" is either property OR intellectual.

    E

    • I never liked the way Getty infects search results with problematic non-free images; it makes my job as a creative mind harder.
  • Judge: Okay. Show me the trail of ownership for these images that starts with the creator and ends with Getty Images.
    Plaintiff's counsel: Ah, can we approach the bench? We thought this was just going to be a routine shakedown operation.

  • Their Site Terms of Use specifically outlaw data mining. If SD produces a very similar copy of a stock image (pretty likely at some point), that might be easy to catch, but if they were scraping Getty, then Getty can say that whatever SD produces is a derivative of their library as a whole and try to demand compensation or a cease and desist. Compensation might never get to any of the photographers, though. After all, Getty could also use SD, and as long as they run a visual comparison against their library woul

  • It 'looked' at them and remembered, there's no looking-right violation.

  • I'd like to suggest that most people here don't understand how human emotions and life experiences go into the creative process for a work. That part is lacking in an AI, and without that part it will always be a derivative work.

    An artist, whether they are a songwriter, painter, or sculptor, creates based on everything they have been through their entire lives. When they create, they aren't just creating something based on every painting they have seen or every song they have heard. If Getty can make
