Originally published at: Do AI images violate copyright? A lawyer explains the Stable Diffusion lawsuit | Boing Boing
This is going to be interesting to watch, and I really don’t know how I feel on the subject. I’m kind of torn between “copyright is already too strict” and “wow, this is going to hurt artists, who already seem to get shafted by society at large.”
It’s important to note that the lawsuit is NOT about the images that are being created. It is about the process of “viewing” the existing artwork to train the AI.
I think it would be very difficult to show that the training process itself is causing any harm. It is really no different than art students viewing a lot of art to study and learn the styles of other artists. The only reason they have any case at all is that the bits must be copied as part of the “viewing” process. The AI then “looks” for the common traits of similar images (whether it be images of the same subject, artistic style, artist, theme, color scheme, etc.).
Exactly. A finding against the defendants would seem to mean you can’t take the students of an art class to a museum to see art as you might be violating the copyright of the artists by viewing art in a training context. It seems on its face pretty ridiculous.
That equates showing people things to help establish ideas that they can then try to implement on their own with fitting a hypersurface to a bunch of vectors so that new ones can be calculated between them. I think that’s either very dismissive of the first or greatly overestimates the amount of creativity in the second. It might or might not be a copyright violation – at this point it’s new enough that I think it’s basically a new decision – but it is not taking a student to art galleries.
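For what it’s worth, the “calculating new vectors between existing ones” part can be illustrated with a toy sketch: plain linear interpolation in a made-up embedding space. The vectors and names here are invented for illustration; real diffusion models are vastly more involved than this.

```python
import numpy as np

# Two invented "style" vectors in a toy 3-dimensional embedding space.
style_a = np.array([1.0, 0.0, 2.0])
style_b = np.array([0.0, 1.0, 0.0])

def interpolate(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linearly blend two embedding vectors: t=0 gives a, t=1 gives b."""
    return (1.0 - t) * a + t * b

# A "new" point halfway between the two learned ones.
midpoint = interpolate(style_a, style_b, 0.5)
print(midpoint)  # [0.5 0.5 1. ]
```

The crux of the disagreement is whether generating points like `midpoint` — which exist only because of the training examples around them — counts as creativity or as recombination.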
This worries me, too. This already happens today with patents - we’re told to never, ever read patent applications for anything you might have even a tangential relationship with because if you are found in violation, even having looked at the patent can make that violation “willful” (treble damages) because now it’s in your brain.
We’ve already seen this happen with sampling, and, now most recently, copyright lawsuits because a song “has the same musical theming” as another. I fear that we’re headed towards a US copyright system where those who can afford to defend their copyrights (increasingly successful artists or corporations) are going to be able to lock out smaller creators entirely because they are at risk of “looking” or “sounding” like another artist because they referenced their works. And that’s humans, let alone AI!
I can see the merits of both viewpoints. I do think that having one’s style copied by an AI without permission is pretty unethical, regardless of whether it’s legal. And I can see how it can devalue or lessen demand for a given artist, because an AI can generate a near-endless amount of similar-looking work for a fraction of the effort. Even a student would need to spend a lot of time working on their own skills to emulate or imitate a professional artist or a master, unlike an AI. That said, the direction of technology has been moving toward automating processes in photography, drawing, and animation, so an AI being able to copy people’s styles was going to happen sooner or later.
This is such a weird time/thing.
So people copy other people’s styles all of the time in art. Sometimes as homage, sometimes as inspiration with their own twist, sometimes at the direction of the editor or company to help maintain a consistent look or brand, sometimes just to rip off a popular artist.
But all of those examples still have the person outputting some of themselves in the new work (exceptions being people doing straight copies/swipes that are near identical).
AI isn’t a person. There is no adding some of themselves. There is no self. They are outputting based on the input fed into them. If there was no original artist to input, there would not be this output. So I can see how artists are feeling ripped off by this latest technology.
An example: Mike Mignola has a very distinct style and uses it in making Hellboy comics. Duncan Fegredo does a lot of Hellboy comics too, and his style is very similar to Mignola. But it isn’t identical, and you can see Fegredo’s own style poking through.
An AI drawing Hellboy from Mignola inputs isn’t adding anything to it. It is just learning what a Mignola Hellboy looks like (sorta) and outputs it.
Whatever protections they add need to be specific to AI, not to humans. Everything is a remix, and humans should have some leeway to borrow and chop up influences into something new.
Wait, isn’t that Jake from Corridor Crew? They’ve done their fair share of AI stuff for fun, including impersonating Jake.
Is there a parallel with music in the 80s and sampling? Copyrighted art in as feedstock, new art out in a way that (ideally) bears an artistically significant difference to the original?
It’s a good breakdown. I agree with his point, even if I personally think AI is not transformative, because it literally uses the original art as part of the process… it’s not just trying to mimic the style; it literally starts with that art. And really, AI is a soulless nothing machine being created by a bunch of tech guys to undermine yet another creator market, and I have no sympathy if they lose.
You’d think they’d have better things to train their AI to do, like root out corruption among the wealthy elite, or track back all the dark money spending on politics to its owners. But no… let’s take art away from the people and give it… to other people? I guess. Whatever.
No, not exactly. The AI is not a person. It is not a student, and it is not alive. It is a machine. In this case, it is a machine designed to create new works from old works, a task it cannot do without first copying the old works of people who are not machines.
There’s one big, glaring legal difference: while I’m sure that Pearson SuccessGene™ will eventually drag a contrary position to the Supreme Court, art students are people rather than ‘works’. They may or may not do infringing things, but we don’t treat changes to them (possibly excepting tattoos; those seem like a potential carve-out, anyone?) as works, derivative or otherwise.
Stable Diffusion is a work, rather than a person, so it faces the possibility of being an unauthorized derivative work or the product of an unauthorized use of copyrighted works in a way that no human can.
It seems worth noting, since you mentioned art students, that ‘educational’ has often been cited as a class of fair-use defense, which is pretty much an admission that certain things students do would be illicit on copyright grounds but for the special deference we give to students and education; a deference that does not extend to something that isn’t a student (and which appears to exist for commercial purposes, also an unhelpful detail when ‘non-commercial’ is another common fair-use factor).
This is not to say that I would definitely take the position that such training activities are infringement; but the analogy between a machine learning model and a student is…more novel than strong…
all decided by juries who aren’t familiar with the compositional constraints imposed by “Western music theory”.
Reasonable arguments can be made arguing that it is fair use and that it isn’t. This is a pretty classic example of technology having moved ahead of the law. Copyright and, especially, the fair use exception, did not anticipate this situation. It is unsurprising, then, that there is concern and that people aren’t exactly sure what they think the right thing to do is. I’m not sure, either. On the one hand, I think there are some valid benefits to this technology. Also, it’s inevitable at this point, so banning it or declaring it to be infringement is not going to make it go away, so we need to deal with this. On the other hand, I completely understand why artists who oftentimes are struggling to make a living are angry and concerned. I would be, too, if I were in their shoes.
If we had a functional legislature, I would say this is where the legislature needs to step in, form a committee, hold some hearings, have some debates, and update our copyright law, but I’m not delusional. I know that’s not going to happen. So the courts are going to have to define this and place some limits on it. It will be interesting to see what happens with AI creative works over the next few years, because this isn’t going to be the only lawsuit. I’m sure a lawsuit over ChatGPT is right around the corner.
I’m just curious: what do you think happens when a human views something? That data is copied and stored inside our brains. The process is lossy and uses chemical and organic reactions, but the fact is that data is stored in a medium and then analyzed by numerous processes in our brains.
Regarding art - humans are in fact stupidly good at pattern matching and identifying similarities between different images - something that computers are currently pretty crappy at. That’s why so many CAPTCHAs are ‘find the hydrants in these pictures’.
I’m not sure if it should be limited, but the idea that copying bits that never leave the inside of the computer chip being some kind of crime is absurd.
I do remember when this site used to defend Aaron Swartz - this is exactly the kind of thing he was fighting against.
And I’ve always felt the “information wants to be free” crowd is absurd: people who didn’t spend years of their lives creating the data they so blandly steal and share with others without permission. But hey, what do I know? I’m just a writer who has watched my peers have their works stolen by others who then profit from it while they beg for money on Patreon.
I don’t think you really know what Aaron would have supported in this situation, nor does it serve you to invoke the dead’s name here. He definitely supported Creative Commons licenses, which give copyright holders the CHOICE to release their works for free use. That’s not the same as demanding “information wants to be free” and giving permission for folks to scrape terabytes of data from hard-working artists to train their computer AI model to put them out of work.
This case and ones like it are a golden opportunity to draw a bright line between individual human beings defined as legal persons with associated rights and algorithms that are the agents of corporations (that happen to act in ways that our legal system has previously assumed only human entities can).
This has bigger ramifications for our future society than the decisions that were made around corporate personhood.
The significant difference from the aforementioned transformative-use examples is that in those cases a human agent (essentially a legal singularity rather than a mechanism or work) is effecting the transformation, which makes sense in that they are contributing a new interpretive spin that adds value in a non-mechanistic fashion. Performing a statistical analysis is not that.
Content rights holders should be able to choose whether or not their materials will be added to a training database - they should be able to withhold it from that database if they feel that the market value of their works would be diminished by inclusion. Something like a robots.txt tag would be a commonsense approach to a future standard. Ability to audit and excise existing training sets would be a good remedy for the present systems.
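A hypothetical sketch of what that could look like, borrowing robots.txt syntax. The `TrainingBot` user-agent and the `X-No-AI-Training` header are invented names for illustration, not an existing standard:

```
# /robots.txt — hypothetical opt-out from model training
User-agent: TrainingBot
Disallow: /gallery/

# Or, per-response, a hypothetical HTTP header:
# X-No-AI-Training: 1
```

Like robots.txt itself, this would only work if dataset builders honored it, which is why an audit-and-excise remedy for existing training sets would need legal teeth behind it.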
yes! this.
there’s obviously value in using the works without the artists’ permission; otherwise the developers would have either a) asked for permission, or b) only used works in the public domain
the truth is they want the value out of these artists and their works, and they want it without payment or license.
copyright was created to try to honor the balance between the artists (authors) and the public interest. this new situation deserves to fall under that same umbrella
(and while i’m wishing for platonic ponies: copyright should be yanked out of disney’s hands and itself be brought back into balance)
Look out, lawyers—you’re next