Most of the ‘you stole my cat pics!’ folks are laboring under what may be a fundamentally incorrect understanding of how diffusion models, LLMs and other ‘AI’ shoggoths actually work. Sorry if this is uncomfortable to hear, but MidJourney (the model I have some experience with, so I can’t say this is true for all models) isn’t simply a giant catalog of pics where they randomly cut ’n’ paste pixels from A to B and voilà. Reproducing an istock.com watermark or a near-exact duplicate of an existing work is what I’d call a failure state (result?). MidJourney won’t let you create porn or gore etc., so presumably they trained it on some of that in order to ‘poison’ the model against delivering results that cross MJ’s lines?
I suppose I need to use more (possibly stolen) animated .gifs to explain things.
It’s hard to have a conversation with someone who is more interested in trying to make people sound stupid by talking down to them.
Honestly, I hate AI because it has unleashed a wave of people like this in a space I used to enjoy participating in. Tech assholes ruin everything with their personalities, art included… but ultimately… everything. That’s my opinion fwiw. Whatever happens legally or financially, as far as I can tell it doesn’t matter: it will still suck for everyone who doesn’t enjoy the ethics and humanism that brought us WeWork.
None of them are, they’re all complex functions based on fitting the input vectors. But it’s still entirely based on them. Like, I think we all realize that if you trained a model on a single image, then all it would do is give that image back, exactly like a JPEG does, because that’s all it would learn to generate. Instead it develops a way to generate things as close to as many of the input vectors as possible.
I guess we are supposed to pretend that’s fundamentally different, but it’s not really. I mean, it’s an experimental fact that these models can sometimes reproduce their inputs, and it’s kind of infuriating that advocates keep dismissing that as if it weren’t true.
But either way, the value of the model comes from the data set. That’s really cool when it’s something like the predictions jhbadger mentioned, where I assume they worked hard to find the input data and now have a great tool to apply to other situations. I’ve worked with similar. But it’s not cool when the data set is stuff you ripped off from other people who worked hard to make it and are now getting nothing for it.
People here don’t hate “AI” or whatever nonsense you want to sling at us. I will repeat, neural networks are a great way to do more things with your data sets. We just don’t accept that means you can take said data sets without caring about where they came from.
It’s all a game with no real world consequences to some people; all they seem to care about is “winning” and looking like they are the smartest person in the room.
ITFY: Improved that for ya.
My prediction? This is going to end really badly for genuine indie artists, due to a Byzantine web of laws supposedly put in place to protect artists that will, somehow, only enrich corporations rather than the individual artists themselves.
Meanwhile the “state of the art” will cross a point where average folks stop caring whether a work was generated by human hands or not - it’ll be “good enough” and cheap, even if these systems are eventually trained only on public domain works.
In other words, Cory’s Eastern Standard Tribe got it exactly right. Shit.
That seems likely, unfortunately, at least until gridfall eventually happens.
No electricity = no digital images.
As long as humanity still exists, there will always be those who create.
Whether they can make a decent living off what they create is another story.
Oh, I can imagine that discerning readers will be willing to pay more for “artisanal” sci-fi novels.
A large media company can train an in-house model on their own copyrighted content, which sidesteps most of the issues with existing models. They probably won’t be shipping raw output, though. The royalty obligations and copyright status of any such output is still up in the air, and there’s a whole host of potential reputational issues with publishing raw output that may be infringing, offensive, or otherwise off-brand.
The near-term effect is likely to be companies pushing their artists to use bespoke genAI models to be more productive, either to ship more content (cool!) or to downsize their art staff (uncool!) without affecting overall productivity. In fact, Square-Enix recently announced plans to do exactly that.
If they don’t steal from humans, this is what happens:
It’ll probably happen to humans who get fed the same diet.
"Dear Disney and/or Paramount: I am writing today to complain that the latest entry in your series of pseudonymously written media-tie-in novels is
- TOO SIMILAR
- NOT SIMILAR ENOUGH
to the source material which appeared on screen several decades ago. Please adjust your models to produce a less or more familiar product."
“Dear Disney, why does Bambi have six legs?”
Needs to be a stormtrooper. At least that way, Bambi will be safe.