…Not the point.
The point is “I know [a derivative work] when I see it” doesn’t fly, and so there are specific tests, and so fanfic infringes.
It’s also frequently protected by fair use, as long as it’s not published for profit.
Yes. But if the copyright holder sends a threatening letter who’s going to defend their fanfic in court?
Anyway, I digress.
EFF? The major fanfic sites have attorneys.
Then you have to file off all the serial numbers so no one knows that your porno books are Twilight fan fiction.
And I donate to the OTW. I’m still not holding my breath.
…I assume FF.net doesn’t count as a major site anymore; they don’t even seem to have staff.
Laches is in a weakened state for copyright.
It is absolutely a copyright infringement. Just like making a film adaptation of a novel under copyright would be infringing, regardless of trademarks. Adaptation rights are a very old part of the copyright bundle.
Not necessarily. It depends on how much you use. Land and Freedom borrowed heavily from Homage to Catalonia, but I haven’t seen any copyright lawsuits in the last 25 years. It’s only a copyright violation if it’s ruled so in a court of law. Not every adaptation goes to court. Not every adaptation that goes to court is ruled a copyright violation.
The training program and the model that generates new art are separate, copyrightable programs. The latter is the result of a combination of the former and the copyrighted art.
Except it’s not. The latter is the result of the training, not the art itself. It’s an important distinction, like the difference between remembering a painting you saw and cutting up a copy of a painting to make a new derivative work. The program is only remembering the process, not saving and using/copying the original work in the creation of a new work.
As for transformative, I think you’re confusing a colloquial and legal definition. A non-human is legally incapable of having purpose, or intended expression, meaning, etc. If the model is found not to be infringing, this would be a good argument that the copyright should go to the person who selected the prompts.
Some have argued exactly that. Some AI art websites claim that users are creating works the users now own the copyright to. But again, that’s irrelevant, because the use of the original work is only in the training, so you could only argue that the training is not transformative. And if you understand how it is trained, it’s difficult to say it’s not transformative.
The “new work” here is not the artwork generated by the end user, but the model that results from the initial art selection and the Machine Learning program. (Note: whether this creates a new program is technically moot, as RAM doctrine applies).
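The training-versus-model distinction being argued here can be made concrete with a toy sketch. This is deliberately oversimplified (linear regression, not a diffusion model, and the names are purely illustrative): the point is only that what survives training is a handful of learned parameters, not copies of the training inputs.

```python
import random

random.seed(0)

# Stand-in for "training images": 200 (x, y) pairs drawn from y = 3x + 2.
training_data = [(x := random.uniform(-1, 1), 3 * x + 2) for _ in range(200)]

# "Training" by gradient descent updates only the two parameters w and b.
w, b = 0.0, 0.0
for _ in range(1000):
    gw = sum((w * x + b - y) * x for x, y in training_data) / len(training_data)
    gb = sum((w * x + b - y) for x, y in training_data) / len(training_data)
    w -= 0.5 * gw
    b -= 0.5 * gb

# The finished "model" is just these two numbers. The 200 training
# pairs are not stored in it; what survives is the learned relationship.
print(round(w, 2), round(b, 2))  # ≈ 3.0 2.0
```

Whether a court would treat those parameters as a derivative of the inputs is exactly the open legal question in this thread; the sketch only shows the mechanical fact that the inputs are discarded after training.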
That’s what I was saying, but your comment was pretending that the works created by the AI were a part of the four factors analysis. If you’re honestly only addressing the program and the model, then the four factors are even more in favor of the AI training being fair use.
The photographer did not offer sculptures for sale, and had no plans to do so. The potential market for such sculptures was proven by the sculptor’s sales. The court found in the photographer’s favor.
Because the sculpture was a derivative work. The training model is not.
More broadly, it’s self-evident how generating art in the style of particular artists, especially those who are open to commission, hurts those artists’ sales.
But you’re talking about the AI-generated art here again. You keep bouncing back and forth between talking about the resulting works as the infringement or the training of the AI as the infringement. You’re muddling your arguments.
I’m a paralegal who works in IP. I’m not an expert in copyright (I mostly work with trademark), but I’m not just talking out of my ass.
And there are copyright lawyers who lose copyright lawsuits. Working in the field doesn’t mean you can predict how judges and juries will go or what lawyers will argue.
Yes, I absolutely was. The comment I responded to with that analogy was specifically about sampling:
My point wasn’t about recording a new version of a song when I referred to playing by ear, but the process of learning how to create art or music from being exposed to it. You took it out of context. You can play a song by ear without recording it. You can learn to create new songs by hearing other songs and understanding how chord progressions work, among other things.
I think that you are not understanding my point - we already have something in place for allowing commercial entities (which all of the machine learning companies are) to take the work of others and repurpose it. It’s a mechanical license that the creator has no ability to deny and that the party using the work has to pay for.
You seem to not consider the dataset to be a derivative work. I am not so sure that a court will agree. So, we can talk about our various opinions all we want, but this is new territory and the courts will work it out. There is a lot of existing case law that makes me think it is not going to go great for the machine learning companies, but that is just my opinion, and we will have to see what happens. This is not a slam dunk for the machine learning companies.
I understand your point. It’s irrelevant to the analogy I was making.
we already have something in place for allowing commercial entities (which all of the machine learning companies are) to take the work of others and repurpose it.
There isn’t a mechanical license for images, so the analogy falls apart completely here. There’s no point in going down this thread.
commercial entities (which all of the machine learning companies are)
But not everyone who trains the AI is an employee of said companies. I’ve trained AI with images (my own). Anyone can. You can create your own models for Stable Diffusion.
You seem to not consider the dataset to be a derivative work.
No, it isn’t. How can you have a derivative work that contains no part of the original work?
I am not so sure that a court will agree.
That’s an entirely different issue. Courts don’t rule consistently or logically. It often has to do with who has the better lawyers or which company is more wealthy and powerful. Law isn’t always (or often?) decided by logic or ethics.
They are ruled by precedent, and my statements have all been based on how courts have ruled on these issues. Simply put, most of your claims are directly contradictory to decades of binding precedent.
The artists should have been paid for the training that was done prior to any AI launch; there I agree.
Saying that nothing can ever look or sound anything like someone else’s work without it being theft of intellectual property… well, that stance is more than a bit problematic…
Like that ^^^
(eta - switched out for better example)
Digital media blurs any concept of “containing” to an arbitrary degree of abstraction. But, I would think that the important factor isn’t really to what degree apparent aspects of the original works are present, but rather the facts of “mechanical ingestion” that went into building the apparatus.
Even more than an issue of artist rights and recompense, this seems like an important test case (or opening salvo?) around questions of authorship and responsibility regarding algorithmic personhood and rights. The courts recognize humans as uniquely transformative filters in a number of cases: “clean room” engineering, like the legally exacting process used to reverse engineer IBM’s BIOS (using programmers who were given a spec but had never themselves looked at the original code), or Senator Mike Gravel transmogrifying the Pentagon Papers from classified to public record by reading them aloud into the congressional record.
If an algorithm can have creative and transformative powers, can it enter into contracts? be charged with crimes (shielding its owners from legal effects?)
Re ChatGPT, what are its sources for, say, American history? Does it “know” how to avoid plagiarizing someone else’s work? And how long will it take for writers on said subject to take notice?
There’s a lot of hot air blowing around, so use an earlier, less fraught example: databases of photos of a specific popular tourist destination (the Arc de Triomphe, Devils Tower, etc.). One can aggregate all those images, build a sort of virtual 3-D-ish model, and then use that to make new pictures of the monument: fly around it, get closer, see more detail. BUT: a photographer who has copyrighted and already taken steps to protect a specific photograph (which is part of our current law) absolutely can claim that they don’t want their copyrighted work included in that database. These developments raise technical and ethical issues that are new and have to be debated (and we aren’t solving them here), but there is precedent for a photographer refusing to allow their copyrighted work to be used or reproduced. Let’s start there, instead of with our hot feelings about art and exploitation, etc.
Except photomosaics arguably do contain the works used to compose the replica of the original, so it doesn’t qualify.
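The “contains” point can be made literal with a toy sketch. The tiles here are tiny 2x2 grids of brightness values, purely illustrative: a photomosaic’s output holds its source tiles pixel for pixel, which is exactly what a trained model’s parameters do not do.

```python
# A toy "tile library" standing in for source photographs.
tiles = {
    "dark":  [[0, 0], [0, 0]],
    "mid":   [[5, 5], [5, 5]],
    "light": [[9, 9], [9, 9]],
}

# The target image, described as a grid of desired tile names.
layout = [
    ["dark", "light"],
    ["mid",  "dark"],
]

# Assemble the mosaic by copying tiles verbatim into the output.
mosaic = []
for tile_row in layout:
    for pixel_row in range(2):
        mosaic.append([v for name in tile_row for v in tiles[name][pixel_row]])

for row in mosaic:
    print(row)
```

Every source tile survives intact inside the finished mosaic, so a court can point at the copied pixels; that is the structural difference being argued between a mosaic and a model.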
Except I’m not arguing for algorithmic personhood or copyright protection of anything it creates. You don’t have to argue in favor of those to argue against factually inaccurate depictions of the technology and legal theories derived therefrom.
They are ruled by precedent
If that were strictly true, no precedent could ever have been established in the first place. New precedents can and will be established. Old ones will be overturned.
and my statements have all been based on how courts have ruled on these issues. Simply put, most of your claims are directly contradictory to decades of binding precedent.
Except this issue hasn’t been ruled on specifically as far as I’m aware. If you want to cite some caselaw of this exact scenario, I’m up for a deep dive.
There are artists who have been blind from birth and produce very interesting, attractive paintings. Eşref Armağan, for example.
What might an AI produce if it had never been given sample image data? Could it be trained a different way?
(honest question, IANACL) Would a photomosaic made using public domain photos but of a copyrighted work be protected? Would the resolution matter? The commercial/commentary/satire aspects?
Contain could mean a lot of very different things here. I’d argue that the important thing is that the model or output or what-have-you incorporates elements derived from protected work. It’s definitely very interesting that these techniques can produce output that doesn’t have direct obvious relationships to specific elements of the input data, but that shouldn’t mislead us into thinking that it isn’t a purely mechanical derivation.