Never mind. I’ll delete the comments.
They are fine - I really don’t understand and I would like to, though maybe here is not the place.
(It’s not.)
Just sayin’.
There are all sorts of reasons any person of any background might choose to remain anonymous, and yet they all come back to one core truth: far too many people are just hateful assholes.
Sad but true.
And I seriously doubt that the people DEFENDING AI are the ones who are gonna suffer real harassment, whatever their gender. It’s the toxic wing of the tech-dude-bro community and their hangers-on that are the source of most online harassment campaigns.
Would a human that can do what the AI can do be considered derivative if they looked at existing art and emulated a style while producing completely different art or subject matter? Artists throughout history have learned from other artists. Heck, the credits on a lot of Renaissance art are something like: “[master artist] and [apprentices/assistants].” Often the master would sketch a mural and the assistants would paint it. Da Vinci learned from Verrocchio, specifically learning his style and then how to improve upon it. Do we consider Da Vinci derivative because he didn’t become an artist in a vacuum, having never experienced any art before?
Not really. If a jpeg or mp3 codec rendered an entirely different work, you’d consider it to be broken.
Humans extrapolate and in theory are taught to avoid direct plagiarism. Machine learning only interpolates and has no concept of whether it happens or not. I feel like those two differences make it a difficult parallel.
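To make that interpolation point concrete, here’s a toy numeric sketch (my analogy only, nothing to do with any real image model’s internals, just `np.interp` on made-up points):

```python
import numpy as np

# Toy illustration of interpolation vs. extrapolation.
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = x_train ** 2               # the "art" the system was shown

# Between known points, interpolation fills in plausibly...
inside = np.interp(1.5, x_train, y_train)    # 2.5, between 1 and 4

# ...but outside the data it has nothing: np.interp just clamps to
# the nearest edge value instead of producing anything new.
outside = np.interp(10.0, x_train, y_train)  # stuck at 9.0
```

The machine can only ever land somewhere inside the region its training data spans; a human can deliberately step outside it.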
That’s a good argument against copyrightability of AI generated art as it being uncreative, but not for whether the art is considered a copyright violation of existing works that aren’t contained in either the model or the result of a rendering.
You’re also ignoring that humans are involved in the process of machine learning, so the system isn’t left to its own devices. Word weights are associated with the images the model is trained on, tying the meaning of an image to a word. This is why more detailed prompts with more descriptive, associated words get you better images.
@smulder is right that a jpg doesn’t really contain the image in question either, you know. It’s an algorithm designed to replicate it very closely, just like a neural network trained on a single image would be. The difference is these ones interpolate between many such data points, so do give new outcomes but only within the space they define.
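The “algorithm designed to replicate it very closely” idea can be sketched with a toy lossy codec (a hand-rolled stand-in for the concept, not the actual JPEG algorithm): keep only the strongest frequency components of a signal and rebuild an approximation from them.

```python
import numpy as np

# A signal standing in for an image: two sine cycles plus noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 64, endpoint=False))
signal = signal + 0.1 * rng.standard_normal(64)

# "Compress": keep only the 4 strongest frequency components.
spectrum = np.fft.rfft(signal)
keep = np.argsort(np.abs(spectrum))[-4:]
compressed = np.zeros_like(spectrum)
compressed[keep] = spectrum[keep]

# "Decompress": rebuild an approximation of the original.
reconstruction = np.fft.irfft(compressed, n=64)

# Close to the original, but not identical -- the signal isn't
# stored, only a recipe for approximating it.
err = np.max(np.abs(reconstruction - signal))
```

The reconstruction tracks the original closely without containing it, which is the sense in which neither a jpg nor a trained network “contains” the work.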
And of course humans are involved in setting up the machine learning, but that…doesn’t change anything I said? It’s not like they teach it whether it’s copying or remixing.
My personal opinion is that it’s hard to call the results plagiarism or original works because it’s doing things in a way very different from where those concepts came from, mixing together data without understanding. From an ethical standpoint, though, I do know it very much depends on artists’ hard work and so far is being done without permission or compensation for that. I hope that part can be changed, because it’s not fair.
… it’s a digital simulation of Rob Liefeld
Sort of?
Much like a lot of the painting-style images come out with “signature” artifacts, if you ask Midjourney to create a stock image of something, some of the results have artifacts that resemble watermarks.
Still more accurate than the original.
This is beside the point. I just don’t follow the implied logic here that seems to want to grant rights of agency and creativity to an algorithm just because it has superficial similarities to things that we’ve historically considered uniquely human. The only point of doing that would be to shield the owners of said systems from accountability.
Machine learning, neural net, and G.A.N. techniques are really cool and are surely going to form the basis of exceptionally useful and increasingly powerful creative tools. (They aren’t Artificial Intelligence either; calling them that is just a B.S. marketing malapropism that popped up more recently than some items in the back of my fridge.)
The problem here isn’t the technology generally speaking - it’s the specific case of it being used as a vehicle for mechanically harvesting other people’s material without permission and building a generative system from that information. Pretending that feeding it datasets is just like human beings having experiences is a conceptual degradation of both the technology and the humans involved.
Nothing and no.
I’m steering clear of the bulk of this discussion since people are covering the bases just fine, but since I’m an AI engineer who worked in machine learning, neural networks and expert systems, and you asked a specific technical question, I’ll wade in.
These algorithms are most easily understood as statistical averages of all the artwork they were trained on. Without the input artwork, they produce nothing. The term “AI” confuses lay folk because it makes it sound like there’s some decision-making process over and above the mathematical average of all the inputs. There isn’t. A weighted neural network trained on the input images is just curve-fitting against each pixel of each relevant image it was given as training. If there’s no training data, it can’t do anything.

It’s the same reason it always generates weird hands, as someone asked above. It has no understanding or conceptual grasp of anything. It’s just a statistical average applied at each pixel of the image as it goes along. That’s all. People imagine a lot more “intelligence” than is there.
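A toy sketch of that curve-fitting point, with least-squares fitting standing in for training (an analogy on my part, not how any real generator is implemented):

```python
import numpy as np

# "Training" is curve-fitting: a least-squares fit to sample data.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)            # stand-in for the training data
coeffs = np.polyfit(x, y, deg=5)     # fit the curve

# "Generation" is just evaluating the fitted curve; every output is
# determined entirely by what the fit was given.
sample = np.polyval(coeffs, 0.25)    # close to sin(pi/2) = 1.0

# And with no training data at all, there is no model, period.
try:
    np.polyfit(np.array([]), np.array([]), deg=5)
    trained = True
except Exception:
    trained = False                  # fitting nothing fails outright
```

No data in, nothing out - which is exactly the dependence on artists’ work that the ethical argument is about.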
Also, since nobody has said it yet…
… but arguably most importantly, judges do not understand the technology. This is the real problem IMHO, and why laws like the awful DMCA get through. The law has, historically, done a really poor job of keeping up with technology because the judges and politicians deciding this stuff still think digital watches are a pretty neat idea (with apologies to Douglas Adams).
This topic was automatically closed after 5 days. New replies are no longer allowed.