

Thousands of Artists Allege Midjourney Used Their Work To Train AI Software (theguardian.com)
An anonymous reader shares a report: Since the emergence of Midjourney and other image generators, artists have been watching and wondering whether AI is a great opportunity or an existential threat. Now, after a list of 16,000 names emerged of artists whose work Midjourney had allegedly used to train its AI -- including Bridget Riley, Damien Hirst, Rachel Whiteread, Tracey Emin, David Hockney and Anish Kapoor -- the art world has issued a call to arms against the technologists. British artists have contacted US lawyers to discuss joining a class action against Midjourney and other AI firms, while others have told the Observer that they may bring their own legal action in the UK.
"What we need to do is come together," said Tim Flach, president of the Association of Photographers and an internationally acclaimed photographer whose name is on the list. "This public showing of this list of names is a great catalyst for artists to come together and challenge it. I personally would be up for doing that." The 24-page list of names forms Exhibit J in a class action brought by 10 American artists in California against Midjourney, Stability AI, Runway AI and DeviantArt. Matthew Butterick, one of the lawyers representing the artists, said: "We've had interest from artists around the world, including the UK."
The human brain does the same thing... (Score:4, Insightful)
Are they going to sue my brain because it stores a copy of their artwork and used it for inspiration in a derivative work?
Re: (Score:2)
Are they going to sue my brain because it stores a copy of their artwork and used it for inspiration in a derivative work?
Well yeah, but your lone brain gets fatigued before too long and then it ain't worth shit. Plus you'll charge us for your time and we're really cheap. Especially after the passage of time!
Re:The human brain does the same thing... (Score:5, Insightful)
Your brain does not store a copy of their artwork. Your brain keeps an incomplete memory - even less complete than the models do. Pedantry aside... Considering the time it would take for you to create that derivative work, the fact that you can't jump to proficiency quickly, and the high likelihood of you not actually being up to the task, even if they could pursue you for it, I doubt they would.
Re: The human brain does the same thing... (Score:3, Informative)
"Your brain keeps an incomplete memory - even less complete than the models do"
In a large model there is typically less than one byte per training image.
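A back-of-the-envelope check of that figure, using rough public ballparks (a ~860M-parameter diffusion UNet stored in fp16, trained on a ~2.3B-image dataset; these numbers are approximate illustrations, not values from the thread):

```python
# Rough weights-per-training-image arithmetic; all figures are
# coarse public ballparks, not exact values.
params = 860_000_000             # ~860M parameters
bytes_per_param = 2              # fp16 storage
training_images = 2_300_000_000  # ~2.3B training images

bytes_per_image = params * bytes_per_param / training_images
print(f"~{bytes_per_image:.2f} bytes of weights per training image")
```

Under those assumptions the model holds well under one byte of weights per training image, which is where the claim comes from.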
Re: (Score:2)
The question is more complicated (and fascinating) than that. There's been quite a lot of work that demonstrates you can pull "ghosts" of the training data out. And the more times the image entered the training cycle, the clearer the ghost. No, you can't pull bit for bit out, and the success rate is low... but it demonstrates that the "less than one byte" claim per training image is actually a misdirection.
Re: (Score:2)
Just as a followup: https://www.kaspersky.com/blog... [kaspersky.com]
It's cool stuff. Nerdy.
Re: The human brain does the same thing... (Score:4, Insightful)
So, yeah, it's going to learn a pretty solid conception of the Mona Lisa. Not unlike a person.
Re: The human brain does the same thing... (Score:2)
I am referring to a single source image as "an image" because I am a nerd and specificity matters to me.
Multiple different images of the same work can produce what you are talking about, but it will NEVER produce a pixel perfect reproduction of ANY of the source images. It may produce something recognizable, but that is not the claim.
Re: The human brain does the same thing... (Score:4, Interesting)
People can get in trouble for this too. There is currently a lawsuit going on over a tattoo of a famous jazz player. The photographer alleges that the tattoo artist copied his work, with the same composition and pose etc. The tattoo artist is saying she used his photo as a reference, but made enough changes that it qualifies as transformative and not infringing.
That's basically the issue with AI. Is the output different enough from the original to qualify as a separate, non-infringing work? Google just published a video about their new video generation AI, and in it you can see Iron Man in his standard Disney-favourite arm-outstretched pose, so the tattoo case seems extremely relevant.
Outside of the US the situation is similar in many countries. Some years ago a photographer in the UK won a copyright case against another photographer who copied his composition of a bus driving over a bridge in London.
Re: (Score:2)
Please correct me if I'm wrong, but I believe these AI systems do NOT, in fact, store the images.
They pretty much only use them to set weights.
Re: (Score:2)
Kinda. I mean yes, but it turns out that in many cases there is enough information in the weights that original data can be pretty accurately reproduced.
Kinda Sorta (Score:2)
As the image generators begin with noise and pattern match out of the noise, you will never get an exact reproduction of the original image. It's statistically possible, but you are talking about millions of values having to line up exactly right.
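For a sense of scale on "millions of values having to line up": counting the channel values in a single 512×512 RGB image. This is a toy uniform-chance calculation; real samplers are nothing like uniform draws, so it only illustrates magnitude:

```python
import math

# Channel values in one 512x512 RGB image with 8-bit channels.
values = 512 * 512 * 3
# log10 of the probability that every value matches a target by
# uniform chance. Purely illustrative: diffusion sampling is not
# a uniform draw over pixel values.
log10_p = values * math.log10(1 / 256)
print(f"{values:,} channel values; uniform match odds ~ 10^{log10_p:.0f}")
```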
DOES the human brain do the same thing? (Score:2)
Are they going to sue my brain because it stores a copy of their artwork and used it for inspiration in a derivative work?
This is important: "artificial intelligence" is not intelligent. It is pattern-matching and replication software. There is no "intelligence" involved.
When you say "the human brain does the same thing"-- really? You know how the human brain works? So, publish in a peer reviewed journal and you will be the most famous brain scientist in the world, because in fact no one knows how your brain works.
Re: (Score:2)
This is important: "artificial intelligence" is not intelligent. It is pattern-matching and replication software. There is no "intelligence" involved.
When I view AI images I see quite a bit of intelligence. If I ask the machine to draw a lake with mountains, trees and a castle the machine produces something with convincing GI, lighting, shadows, and reflections. In the water you can see realistic specular reflections of the castle and surrounding trees and mountains. These systems don't have fancy libraries full of material properties, ray tracers or fancy lighting / illumination systems. Nobody programmed these systems to do any of that. They learned.
a good appearance of intelligence (Score:2)
This is important: "artificial intelligence" is not intelligent. It is pattern-matching and replication software. There is no "intelligence" involved.
When I view AI images I see quite a bit of intelligence. If I ask the machine to draw a lake with mountains, trees and a castle the machine produces something with convincing GI, lighting, shadows, and reflections.
Producing the appearance of intelligence (enough to fool you) is not the same as actually being intelligent.
...
While AI image generators almost entirely lack any sort of high level understanding of scenes
Exactly. It doesn't have any understanding of what it's producing a picture of; it doesn't even know that there exists a real world with actual physical objects in it. It is producing an image matching what it calculates the pattern for images described by the words "lake with mountains and trees" should be.
Re:DOES the human brain do the same thing? (Score:4, Insightful)
intelligence (n)
the ability to acquire and apply knowledge and skills.
Dictionary.com: intelligence: noun
the capacity, especially of a particular person or animal, for learning, reasoning, understanding, and similar forms of mental activity; relative aptitude in grasping truths, relationships, meanings, etc.
Re: (Score:2)
Are they going to sue my brain because it stores a copy of their artwork and used it for inspiration in a derivative work?
If you memorize how to draw Bart Simpson and then create a cartoon whose main character has yellow skin and spiky hair, rides roller skates, and shouts things like "Bite my shirt!", then you will have a big problem.
Copyright violations don't have to be verbatim. They just have to be close enough. And what constitutes "close enough" is very fluid from court to court. Look at Harry Potter, for instance. It is a spitting image of Star Wars, yet has never been hit by Disney for copyright violations.
Re: (Score:2)
YES. People that don't know what a derivative work is should probably stay out of discussions on copyright. They won't sue you over your brain, they'll sue you for having "used it for... a derivative work."
Re: (Score:2)
no, the human brain doesn't.
if you want to draw traditionally and well, you need to do a set of very boring exercises like shading platonic solids under different light conditions, learn about human surface anatomy and its landmarks, learn about perspective, and then after countless practice drawing from life ... you may start getting somewhere, that is, if you are lucky with said strategy.
looking (as in deeply studying) other people's artwork and photographs non-stop will only turn you into a pretty mediocre artist.
Re: The human brain does the same thing... (Score:3, Informative)
No, and neither can LLMs.
Re: The human brain does the same thing... (Score:5, Informative)
Neither can JPEG, yet somehow compressing an image doesn't exempt you from copyright.
Also they clearly copied the images to train their model.
Re: The human brain does the same thing... (Score:5, Informative)
In your mind, LLMs just produce jpeg-like degraded copies of originals? *facepalm*
They're not compositors. You can overtrain a diffusion model to learn a specific image, but for general models, they struggle even with things replicated as commonly in the dataset as e.g. flags and the Mona Lisa. LAION-5B is 2TB in size, compressed. Like 20TB raw image data. Models are a couple GB. They're not "images compressed at a 20000 to 1 ratio". Indeed, the more images you add to training, the better the model gets - 200TB would yield an even better model - would you say they're just being compressed at 200k to 1? What about 200 petabytes of training images? 200 exabytes? At what point will you accept that it's not containing the training images?
Try searching LAION [github.io]. Enable full captions. Get a very distinct, detailed caption for an image. Then render it in Stable Diffusion, which was trained on LAION. Generate a hundred images, as many as you want. You'll quickly discover two things.
One: you DON'T get the image whose caption you used.
Two: all 100 (or however-many) images are fundamentally different from each other, let alone from the training image.
If your prompt is "firebreathing dragon on an iceberg", it's not looking up images of dragons and piecing them together with images of icebergs. It has a model of what "fire" looks like, what "firebreathing" looks like and how it relates to fire, what dragons look like, how they relate to firebreathing, what icebergs look like, etc. The iceberg isn't any specific iceberg, it's its understanding of the shape of all icebergs. The dragon isn't a specific dragon, it's based around its understanding of all dragons, and same with the fire.
What comes up starts out as an image of random noise, which the model, like seeing shapes in clouds, tries to push and pull on to make look more like the elements of your prompt. It's basically the reverse of image recognition. And just like how image recognition doesn't require having had a specific image in its dataset to recognize it, neither do diffusion models create specific images. In neither case does the model have specific images - what they have is an understanding of the general shapes and properties of the whole category of objects.
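The "push and pull on noise" process described above can be sketched as a denoising loop. This is a toy sketch of the general diffusion idea, not any vendor's actual sampler; `predict_noise` here is a placeholder for the trained, prompt-conditioned network:

```python
import random

random.seed(0)

def predict_noise(x, t, prompt_embedding):
    """Stand-in for a trained denoiser: a real model is a neural net,
    conditioned on the prompt embedding, that predicts the noise in x."""
    return [0.1 * v for v in x]  # placeholder prediction

def sample(prompt_embedding, steps=50, size=64 * 64 * 3):
    # Start from pure random noise (flattened 64x64 RGB "image")...
    x = [random.gauss(0.0, 1.0) for _ in range(size)]
    for t in reversed(range(steps)):
        noise = predict_noise(x, t, prompt_embedding)
        # ...and repeatedly subtract the predicted noise, nudging the
        # values toward whatever the model scores as matching the prompt.
        x = [v - n for v, n in zip(x, noise)]
    return x

image = sample(prompt_embedding=None)
print(len(image))  # 12288 values (64 * 64 * 3)
```

Image recognition run "in reverse", roughly: at no point does the loop look up or paste in a stored picture.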
Re: (Score:2)
Again, it's not impossible to learn an image. Like with a flag or the Mona Lisa, if something accidentally appears enough in the training dataset, or via training bugs, it might be overtrained to a specific thing. But this is obviously not the general case, it physically can't be the general case, and is not desirable by any party (not the trainers, not the users, not the owners of the training images).
The main case where you'll see overtraining is, rather, custom models made by users. They usually train
Re: (Score:3, Informative)
In your mind, LLMs just produce jpeg-like degraded copies of originals? *facepalm*
How on earth did you get to that interpretation of what I said?!
Absolute perfect reproduction is not required to violate copyright. For example, JPEG does not perfectly reproduce images, but JPEG compression does not exempt you from copyright. If you can reproduce a significant enough chunk of a copyright work well enough from a network then it's a pretty clear copyright violation.
JPEG is an example of imperfect reconstruction.
Re: (Score:3)
Not being "nothing at all like a work", however, IS.
Any given work's impact on the weightings is veritably homeopathic. Like 1e-5 is a typical value.
The generalization of reality == understanding. Converting something from just "pixels" to a latent so coherent that you can do math on concepts themselves. The classic example being "king - man + woman = queen".
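The classic concept-arithmetic demonstration in word embeddings is king − man + woman ≈ queen. A toy sketch with invented 3-d vectors (real embeddings are learned from data and have hundreds of dimensions):

```python
import math

# Toy illustration of concept arithmetic in an embedding space.
# These 3-d vectors are invented for the example, not real embeddings.
emb = {
    "king":  (0.9, 0.8, 0.1),
    "man":   (0.1, 0.9, 0.0),
    "woman": (0.1, 0.9, 0.9),
    "queen": (0.9, 0.8, 1.0),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

result = tuple(k - m + w for k, m, w in
               zip(emb["king"], emb["man"], emb["woman"]))
# The stored vector nearest to king - man + woman should be "queen".
nearest = max(emb, key=lambda word: cosine(emb[word], result))
print(nearest)  # -> queen
```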
Re: (Score:2)
Neither can JPEG
Comparing generative AI to algorithmic compression of an original work for the purposes of copyright has to be the dumbest thing I've read on this subject to date.
Also they clearly copied the images to train their model.
No. There's no copying. AI models do not store the source material any more than you copied the words off your screen reading this. You appear to have no clue what so ever how this works.
Re: (Score:2)
I love it when a moron screeches that someone else is a moron then just gets shit wrong.
Feel free to read any of the papers I posted in replies to sibling posters who aren't aggressively stupid, which do in fact show that these AI models memorize some of the training data.
I now predict you'll either go silent, or react with anger with more screaming about stupidity rather than reading the paper and learning you're wrong. Nothing like someone entrenched in their own, deeply deluded sense of superiority.
Re: (Score:2)
Yes but brains are not mechanical devices subject to the laws of copyright. Computers are.
Re: (Score:2)
Yes but brains are not mechanical devices subject to the laws of copyright. Computers are.
What is the basis in copyright law for this statement? Can you cite relevant text of either copyright law or applicable case law to support your claim?
Re: (Score:2)
What is the basis in copyright law for this statement?
No claim to the contrary has ever been raised in copyright law. Can you cite a text or case law to say the brain is covered?
Re: (Score:2)
No claim to the contrary has ever been raised in copyright law. Can you cite a text or case law to say the brain is covered?
So you have no basis to support the claim "Yes but brains are not mechanical devices subject to the laws of copyright. Computers are." in either the text of copyright law itself or relevant case law?
If this is in fact the case what is the objective basis for your claim?
Can you cite a text or case law to say the brain is covered?
I'm not making any assertions in this regard. I can't prove a negative. I can't say that such a distinction does not in fact exist. I am unaware of any objective basis to affirmatively support its existence.
17 USC 106 sets out the rights of copyright holders.
Re: (Score:2)
You have no basis to support the claim that brains ARE covered under copyright law.
I'm not making any assertions in this regard. I can't prove a negative.
And yet you are demanding the same of me. There has never been a judgment made that asserts that brains are covered.
They all seem to be output oriented rather than modality oriented.
How do you get it into a machine without reproducing it inside the machine in some manner?
Re: (Score:2)
I can't. And neither can this AI.
Re: (Score:3)
No, what those "hippies" would say is that if the data being fed to these models is free and open source, then Midjourney and all its algorithms and all its training data should be open source and free as well. If Midjourney is totally free and open source then you can point to hypocrisy all day long, but not until then.
Re: (Score:2)
Well, for one, the modern concept of copyright law didn't exist then, so if Mozart was doing a public performance, for his own profit, of someone else's work, note for note, without credit or licensing, well then he might be in trouble.
Re: (Score:3)
Human brain 'storage' is also subject to one's perception ... such as that dress being gold and white (blue? what blue?!). That makes the process at least somewhat transformative and thus your expression in creating future works can be considered unique. (unless they're directly derivative of course)
OTOH song covers, while often being clearly unique from the original, are not considered original works under US copyright law and instead would be derivative works. (they do have a different, compulsory license)
The Argument is Complex (Score:5, Insightful)
I've been following discussions on here, in musicians' forums, and sometimes on author blogs. There's usually a mishmash of responses that break into a few camps. "That's no different than how people learn", "If it can be reached on the internet it's fair game", "No it isn't fair game, copyright holds"... etc. People want to shove it cleanly into a nicely packaged dichotomy - especially those who believe that AI "training" is the same as human "learning". Problem is, it's not. This is a new thing. It has parallels, but it's very fundamentally different, and it now touches copyright and fair use in ways never considered.
So this is what has to happen - it has to be negotiated. In courts, in public opinion, in policy... the world will need to come to a new paradigm, under new consideration. Precedent should be a very minor player in this one - we shouldn't leap to demanding "which old way is best". I think "learning licenses" is a dodgy compromise... but at least it's a compromise.
Re: (Score:2)
The problem as I see it is that a human, no matter how skilled and which artists they have learned to imitate, is limited by time. We need to eat and sleep, and we can only draw so fast.
AI does not need to eat or sleep (at least, not in ways that affect productivity) and can churn out such mass amounts of content that no human can ever dream of competing.
Re:The Argument is Complex (Score:4, Interesting)
Also, I would argue that the artists and writers really contributed very little to the final product. No artist or writer ever created their work with the intent that it would be used to inform an algorithm’s understanding of vision or language. If none of these AI companies ever existed, none of these artists or writers would ever collaborate to produce these models out of their pooled creative output. The real invention here was the machine learning algorithms and infrastructure that produced this model. Somebody’s deviantart portfolio of anime elf paintings, that they posted publicly online for free for people to look at, is only valuable insofar as it slightly inched the AI’s understanding of anime elf paintings in one direction or another.
One other thing that annoys me about these lawsuits, is that they only exist now, after the technology has been developed to a level of maturity that there are viable profit-generating businesses to be made. For years academics have been publishing papers and showing off demos based on the same level of flagrant “copyright infringement,” and the public’s response has been “oh that’s neat.” Now that it can become a profitable product, the knives come out and everybody wants their piece.
Re: (Score:2)
This might be a reason to update copyright law.
That is the only possible end game, but it has a journey through the courts to get there. Once the Supreme Court rules, Congress has to decide if and how to update Title 17 to account for it.
There's plenty of historical examples of how this works.
Re: (Score:2, Informative)
This might be a reason to update copyright law.
Everyone needs to be very careful what they wish for, because you are absolutely 100% correct.
Copyright law only protects distribution and public performance, neither of which have anything to do with training.
(Note I mean actual *training* here, I am not referring to "AI" that spits out reproductions)
In the US this would require a constitutional amendment, which is hard (for a reason), and these complainers KNOW it's hard, thus the endless failed lawsuits.
Yet the last times we did update copyright law were
Re: (Score:3)
Even if it's the same as how the brain works, if I 'generated' mickey mouse, Disney could still sue me.
Re: (Score:2)
Yes. They couldn't sue the pen and paper you used to create the infringing image. They can't sue the tools, only the people that used them, and even then only if they used the image in a way that infringed upon the copyrights. If they hung their drawing of Mickey in their house for their private enjoyment, there's no infringement.
Re: (Score:2)
Even if it's the same as how the brain works, if I 'generated' mickey mouse, Disney could still sue me.
Exactly this. The training, outputs, and uses are three separate issues that need to be dealt with individually. There should be no restrictions whatsoever on training because all they're doing is building a tool that in no way resembles the source material. If the tool is intended for illicit use or fails to take adequate precautions to preclude illicit use (e.g. "secure" phones marketed to cartels), we already have laws and precedent that cover those sorts of situations, so you go after the product, not t
Re: (Score:3)
The fundamental definition of technology is that which amplifies the effects of a person's labor, what the military calls a "force multiplier." Where at one time, 95%+ of humanity had to spend their lives growing food to feed the other 5%, now the percentages are reversed - and everyone whose ancestors farmed had to find new work. A century or two ago, every village had a blacksmith, because you needed one. Now, we all use industrially produced steel products every day of our lives, and most of us have never met one.
Re: (Score:2)
You seem to have a grudge against artists making a living. That living is under threat - of course they should try to protect it.
Re: (Score:2)
That's like buggy whip manufacturers "protecting their living" by passing laws making cars illegal.
You don't get a fucking hall-pass for toxic behavior because you were "protecting yourself."
Re: (Score:2)
You seem to have a grudge against artists making a living.
Do you have a grudge against buggy whip makers making a living? Then don't drive a car, ever.
Do you have a grudge against blacksmiths making a living? Then don't use anything made out of iron that wasn't hand forged by one, ever.
Technology makes jobs obsolete. It always had, and always will, because that's the whole point.
Resorting to personal attack is an admission that you have nothing else to offer.
Re: (Score:2)
You seem to have a grudge against artists making a living. That living is under threat - of course they should try to protect it.
30 years an artist, 20 in VFX. You are wrong.
Re: (Score:2)
Appeal to authority doesn't work well with me. My PhD in physics, my second in mathematics, and my 175 years as a glass blower all make me mostly immune. :)
But even if you are who you say you are, your post still makes you "seem" to have a grudge against artists making a living. That's literally what I said.
Re: (Score:3)
The legal argument is the same as the Google book scanning lawsuit [wikipedia.org], that is, whether what they are doing is derivative (which requires permission from the copyright holder) or transformative (which does not). Google argued that a full text searchable database that only delivers short (fair use length) samples is fundamentally a different thing than the book itself.
Note that, after a $100 million+ settlement (plus ongoing payments) was rejected, Google won at trial and appeal.
Midjourney (etc) is producing vi
Re: (Score:2)
One of the biggest bullshittery to be trotted out is Van Gogh, because of "Starry Night". Yet most of these "artists" don't know that he literally copied other artists' paintings, including exact compositions, in something like 90% of his paintings, only changing the painting by using his particular brush strokes. And this is completely legal and covered under copyright laws as they stand.
The application of law across examples needs to be set in context. You're assuming 19th century French copyright law parallels modern global copyright...which seems rather unlikely but maybe you're an historian and know better.
Re: (Score:2)
This is a useless rearguard fight against a huge liability minefield. Apparently no one bothered talking to a lawyer about LLM issues.
I suppose they are now, and finding out that they'll need legislation at the very least and probably a constitutional amendment in the US to get around this one.
Re: (Score:2)
NYT's examples were of 5 year old articles that had been copied on thousands of websites.
With enough prompt magic, and enough over-training, you can reproduce input data.
This is not unlike a person who has seen something so many times they can recite it from memory, verbatim.
When trying to make your point, try not to use evidence with seventeen asterisks of caveats.
Oh, and fuck you.
Re: (Score:2)
And ChatGPT is a different thing than Midjourney. It's not nearly as usable as text to image AI. And the reason one is useful and the other isn't is the same reason: They both hallucinate. The difference is, that's good when generating images, and bad when generating news articles.
I suspect we'll see the lawsuits against OpenAI and Midjourney end up in different rulings.
Re: (Score:2)
This is not the case at all with the default Stable Diffusion (and -XL) models.
Re: (Score:2)
That a tool can be used to do illegal things does not make the tool illegal. A Xerox machine can be used to make copies of a novel, which can then be illegally distributed, but Xerox machines are not illegal. A VCR can be (and has been) used to make copies of movies which can then be illegally distributed, but - despite lawsuits back in the day - VCRs are not illegal. The same - including the lawsuits - is true of DVRs.
If someone has to "overtrain" Stable Diffusion to produce infringing images, that's on them as the user of the tool, not the tool, any more than Putty is at fault because a hacker used it to break into someone's network.
Re: (Score:2)
If someone has to "overtrain" Stable Diffusion to produce infringing images, that's on them as the user of the tool, not the tool, any more than Putty is at fault because a hacker used it to break into someone's network.
We agree entirely.
The point is, LLMs can be similarly overtrained to do the same thing, but correct training will not.
Re: (Score:2)
The question isn't whether Midjourney can be trained to do that, but rather whether ChatGPT can be trained to not do it.
Re:The Argument is Complex (Score:5, Interesting)
Yes, I saw the paper where they were arguing about the model memorizing Edgar Allan Poe's "The Raven", as if ChatGPT hasn't seen "The Raven" thousands upon thousands of times. As if it *shouldn't* know "The Raven". As if tons of humans don't know "The Raven".
NYT spent weeks trying to figure out how to hack the model to spit out specific NYT articles which had been replicated all over the internet. And? "Model memorizes text that it was repeatedly given over and over" isn't exactly the damning indictment you seem to think it is, when what people actually care about is the general case (whether it outputs arbitrary things from its training in response to normal user prompts), not the exception (whether it can be made to output specific widely-replicated things when given a highly tuned prompt containing a large chunk of the article that they want it to replicate so they can file a lawsuit). The general case is "no". If I ask it, "List numerous similarities between medieval broadswords and Betty Crocker", it's not quoting a paper on the topic, it's dealing with conceptual interrelationships.
We'll just ignore that the ratios of dataset sizes to model sizes are orders of magnitude smaller with LLMs (text) vs. diffusion models (images).
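That ratio can be made concrete with coarse public ballparks (all figures illustrative, chosen only for order-of-magnitude comparison, not from the thread): ~10TB of deduplicated text against a ~14GB fp16 LLM, versus ~240TB of raw image data against a ~4GB diffusion checkpoint:

```python
# Training-data-to-model-size ratios, text vs. images.
# Every figure here is a coarse ballpark, used only for scale.
text_corpus_bytes = 10e12     # ~10 TB of training text
llm_bytes = 7e9 * 2           # ~7B params in fp16, ~14 GB

image_corpus_bytes = 240e12   # ~240 TB of raw image data
image_model_bytes = 4e9       # ~4 GB diffusion checkpoint

print(f"text  corpus / LLM weights  ~ {text_corpus_bytes / llm_bytes:,.0f}:1")
print(f"image corpus / image model  ~ {image_corpus_bytes / image_model_bytes:,.0f}:1")
```

Under these assumptions the text-to-model ratio is roughly two orders of magnitude smaller than the image-to-model ratio, which is the point being made.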
Re: (Score:3)
I guess it's easier to combat arguments people aren't making. Hit that straw man even harder!
Re: (Score:3)
Except the NYT fed long prompts designed to recreate the articles: they gave it prompts designed so that the "predict the next tokens" would recreate the article, regardless of whether the article even appeared in the training set. If you try *real hard* you can force the AIs to do things they wouldn't normally do, but isn't that a problem stemming from the user, not the AI?
Re: (Score:2)
The point is that the representation of the original still exists within the model - not as a bit for bit reproduction, but conceptually. Saying it "isn't there" is a bit of a dodge.
When judging whether a copyright violation exists, one of the standard criteria is whether or not the alleged violator could reasonably be expected to have seen the original work. This is in place to protect against cases where the infringement is coincidental. It's why the old straw man of copyrighting every permutation of notes doesn't work - even if you could do it and file it all, there's no way somebody writing music would ever see it, and so no infringement would be possible.
Re: (Score:2)
The point is that the representation of the original still exists within the model - not as a bit for bit reproduction, but conceptually. Saying it "isn't there" is a bit of a dodge.
Copyright protects works, not underlying facts. I can OCR the entire phonebook into a publicly searchable database and there isn't jack the yeller pages can do about it.
When judging whether a copyright violation exists, one of the standard criteria is whether or not the alleged violator could reasonably be expected to have seen the original work. This is in place to protect against cases where the infringement is coincidental. It's why the old straw man of copyrighting every permutation of notes doesn't work - even if you could do it and file it all, there's no way somebody writing music would ever see it, and so no infringement would be possible.
This is not the case. Even if you have never seen a work and by sheer coincidence generate a derivative committing "innocent infringement" it is still copyright infringement nonetheless and you can still be held legally liable for it.
The only thing innocent infringement does at the discretion of the judge is influence the awarded damages.
Re: (Score:2)
There's a pot of gold at the end of this rainbow.
AI is capital owned by the 1% of the 1% (Score:3, Insightful)
It really doesn't matter what the artists say or do here. There's just too much money involved. The laws will be changed to accommodate the owner class. Most likely using the courts as was done when Uber, Lyft, et al wanted to redefine what it means to be an employee. That's why folks like the Koch Brothers (now singular) spent 40 years packing the courts with pro-corporate judges while we were busy worrying about gay frogs and whatnot.
What is the legal basis for copyright claims? (Score:2)
What is the basis in copyright law for asserting rights over being influenced by or learning from works whether the thing doing the learning be a computer or a person? Copyright protects the reproduction and performance of works and derivatives. It isn't a patent. It isn't an exclusive grant of authority over use, styles, information...etc.
Re: (Score:2)
I think this will be argued by attorneys for years. I'm sure it will be a way they will be earning their keep for a long time to come, with laws shifting one way or another.
In any case, I'm sure billions to trillions will be given to the legal eagles to sort this one out, similar to how MP3s went from completely illegal to thrashed out... but on a much larger scale. Of course, while Western companies battle for rights, I'm sure there will be LLMs from countries that don't care about IP vacuuming up any images they can.
Re: (Score:3)
US Constitution: Article I, Section 8, Clause 8: [The Congress shall have Power . . . ] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
As language goes, that's pretty cut and dried.
Re: (Score:2)
US Constitution: Article I, Section 8, Clause 8: [The Congress shall have Power . . . ] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
You are going to have to be a lot more specific than that to provide an answer responsive to my question. "What is the basis in copyright law for asserting rights over being influenced by or learning from works whether the thing doing the learning be a computer or a person?"
As language goes, that's pretty cut and dried.
You didn't even bother to quote any applicable copyright law.
Re: (Score:2)
Ah, but that is only the vague first cut of law: it all hinges upon the definition of "the exclusive Right".
This gets elaborated upon in 17 U.S. Code 107 - Limitations on exclusive rights: Fair use:
"Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright."
Re: (Score:2)
The basis of the copyright claims is that what is being produced is derivative, which requires permission. The defense is that it's not, it's transformative, which does not.
Same argument as the Google book scanning [wikipedia.org] lawsuit (which Google won at trial).
It will be several years before the Supreme Court hears it, and they will, and several more while Congress dithers over whether or not to change the law in light of that ruling. Both sides make good enough arguments to keep it from being simple.
Re: (Score:2)
The basis of the copyright claims is that what is being produced is derivative, which requires permission.
It will be several years before the Supreme Court hears it, and they will, and several more while Congress dithers over whether or not to change the law in light of that ruling. Both sides make good enough arguments to keep it from being simple.
Personally I think this aspect is a lost cause because AI models themselves are clearly transformative works. What I think will continue to happen is what has been happening. A claim is made, the judge asks for evidence to support that claim, none is forthcoming.
The more realistic legal questions have to do with whether model outputs from companies offering AI as a service outputting content deemed to be derivative constitutes a fixed copy, whether it is protected by fair use or whether it constitutes an
Re: (Score:2)
I'm inclined to agree, but as things stand right now, unless there's a major change at the Library of Congress, only Congress can provide copyright protection for AI art. Whether or not they will is probably the biggest obstacle to widespread adoption of AI art.
Re: (Score:2)
Since that right was, in fact, disputed, in court, I disagree. They won, but it was certainly disputed.
And everybody has the right to look at other people's art and be inspired by it. And that is undisputed.
All the same (Score:2)
As far as I'm concerned, the training doesn't really matter. What matters is the output. If someone uses it to generate a fairly accurate reproduction of a copyrighted work, that's copyright infringement. How accurate counts as "fairly accurate" is an arbitrary distinction, left up to the courts or a jury.
If you use it to make a pastiche, or critique of that work, that's fair use and it's fine.
It's not the input that matters in copyright, it's the output.
The AI is bogus .... (Score:2)
The reason all of this keeps coming up is that these AI art generator sites seem to be promising one thing, but doing something very different in reality.
The "claim" is that they ingest all of this human artwork and then "learn" from it so they're able to synthesize their own unique works of art based on the collective knowledge gained from processing the sum total of existing art it scraped or had fed into it.
The reality? It seems to simply match up existing art to a user's requests for the type of art th
Greater good. (Score:2)
Hundreds of artists butthurt. Millions of people benefit. None of your original work is infringed on, and you still have the right to sell it in its (un)original form.
If you were locked in a box your whole life and somehow spontaneously generated your magnificent art, then you can talk.
Re: (Score:2)
oh, because girls really dig "prompt engineers"
the embarrassment,
of not being able to draw another human being well,
with simple paper and pencil, while many of us still can code.
once again: art is not learnt via osmosis.
it is practice and theory. drawing from nature.
the wide ignorance is astounding.
FINE (Score:2)
From now on, every one of them is BANNED from ALL art galleries. Nor may they own ANY books with pictures in them. Would hate to think that they'd infringe on some other artists' intellectual property by incorporating a color palette or scene they liked into their work.
Human artists are trained... (Score:2)
...by seeing the work of others
My browser cache stores a copy too (Score:2)
People say, "But they made a copy in order to train their model."
About that: my browser makes a copy of images to the local cache when I visit websites.
If making a local copy of something online is copyright infringement, then we're all guilty.
If you don't want it seen... (Score:2)
Re: (Score:2)
Nobody can put this toothpaste back into the tube. Accept it. If you quash Midjourney good friggin luck quashing Stable Diffusion. Adapt and move on, artists.
No, but the FDA can rule that your toothpaste doesn't follow the applicable laws and can no longer be sold. Look at crypto. Wild, wild west ... until it came to attention of various regulatory agencies and now there's lots of regulations.
Re: (Score:2)
Nobody's selling SD though... It's free. How do you regulate free?
Re: (Score:2)
There will always be a niche market for art produced the old fashioned way, just like today, in a market of digital media, there are still collectors of oil paintings.
But the industry will go with what's most efficient for their business needs, and that will be AI art because it's a) cheaper, b) a matter of waiting seconds, minutes or hours instead of days or weeks, and c) at least as easy to tweak to exact specifications as working with a live artist.
The future is coming, and those who stand in the way wi
Re: (Score:3)
Really? If I sell one book to one library, now anybody can make a zillion copies of my book? ...
Why do you get to go to the library instead, make a copy of my book, leave with that copy
No, copyright law grants the copyright owner exclusive control over the production of fixed copies.
In the US... title 17 USC 106
"Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:
(1) to reproduce the copyrighted work in copies or phonorecords
(2) to prepare derivative works based upon the copyrighted work;
"
then feed it into your AI for your own profit, then deny I have any standing?
Feeding the book into an AI is not making a fixed copy. The AI model is a transformative work and therefore
Re: (Score:2)
The toothpaste IS still out of the tube though. I have Stable Diffusion running locally right now. Nobody can take it from me. It literally is too late.