Winner of a prestigious literary award unabashedly used AI to write it

Capitalism is the elephant in the room here. The many issues with AI cannot be divorced from the need for artists, like everyone else, to “earn a living”.

12 Likes

No worries, capitalism has that covered. If all you can get is a typical gig-economy McJob, you won’t have any time to be creative. Problem ~~solved~~ eliminated. Now back to work; you’re stealing from the shareholders and may negatively impact your manager’s bonus.

5 Likes

Yeah, I remember when Bill Gates, as part of his “philanthropist” shtick, was suggesting that labour from robots should be taxed because the workers they replace were. (How’s that giving all your money away working out, Billg? Because last I saw you had ten times as much as when you said you were giving it away. No, Bill. We need to tax capital. Hard. Fuckers like you need to be taxed out of existence.)

Adversarial, wasteful, class-war uses of capital like AI would not exist without ultra-wealthy people with nothing useful to do with vast tranches of wealth.

LLMs will not cure cancer, which was suggested above as a reason to allow capital to steal from artists. Fuck it, we’ve been through this already. What they will do when applied to cancer healthcare is suck up money that should be spent on things that work, and leave nothing behind except misery and a huge pile of personal patient data that somehow manages to avoid the rules on privacy and becomes private-equity fodder.

Watson: good at some game show shit, kills people when used instead of real medicine.

6 Likes

Are you aware that there are attacks that extract the training data?

This demonstrates that the training data is stored in the neural network. It just so happens that the neural network isn’t readable by humans, which lets people launder works they didn’t make.

This would be ethical to use only if the works contained in the neural network were all no longer protected by copyright.
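
For anyone curious what “extract the training data” looks like in practice, here is a minimal sketch of the prefix-continuation probe used in the published extraction attacks. Everything in it is a stand-in: `generate` is whatever completion function a model exposes (not a real API), and the “model” in the demo is a fake that has memorised exactly one sentence.

```python
# Toy sketch of a verbatim-memorisation probe, in the spirit of published
# training-data extraction attacks. `generate` is a stand-in for any text
# model's completion function -- an assumption, not a specific API.

def memorization_score(generate, document, prefix_len=50, probe_len=50):
    """Feed the model the start of a candidate training document and measure
    how much of the true continuation it reproduces verbatim."""
    prefix = document[:prefix_len]
    truth = document[prefix_len:prefix_len + probe_len]
    completion = generate(prefix)[:probe_len]
    matches = sum(a == b for a, b in zip(completion, truth))
    return matches / max(len(truth), 1)

if __name__ == "__main__":
    # Fake "model" that has memorised one sentence, just to show the mechanics.
    corpus = ("It is a truth universally acknowledged, that a single man in "
              "possession of a good fortune, must be in want of a wife.")
    fake_model = lambda prompt: corpus[len(prompt):] if corpus.startswith(prompt) else ""
    print(memorization_score(fake_model, corpus))  # 1.0 -> reproduced verbatim
```

A score near 1.0 on text the model was never supposed to retain is exactly the kind of evidence those attacks produce.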

9 Likes

Clog Season 3 GIF by The Simpsons

3 Likes

You have not. People don’t remember or actually even see things that way. Your brain makes up a bunch of concepts that it fits over what your eyes are scanning. That’s why you can find details you never noticed in something you’ve seen before, and if I ask you to draw any of them from memory, you’ll probably find there are parts you just didn’t pick up.

You know American Gothic, with the farmer and his daughter? Maybe take a quick glance to remind yourself. Now…do you know what kind of curtains the windows have? Most of us probably do not. We saw them but we weren’t looking at them.

These models don’t work like that. They treat images as points and fit a hypersurface through them. There are no parts they notice and parts they don’t, because they don’t break the images down at all. And really it should be no surprise to anyone that the better ones can sometimes recreate the exact same points: that’s them doing a good job of fitting.

The two types of processing are fundamentally dissimilar, and that so many advocates of these models describe them otherwise feels like a really bad sign to me just on its own. Like, what does it say when all the defenses of something are actually defending something else?
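
To make the curve-fitting point concrete in one dimension: give a fit enough capacity and it passes exactly through the points it was trained on, which is precisely “recreating the exact same points”. A toy numpy sketch, purely illustrative and not any particular image model:

```python
# Toy illustration of "fit a surface through the training points": with enough
# capacity, a good fit passes exactly through them, i.e. it can reproduce the
# training points verbatim while also blending "new" ones in between.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # stand-ins for training examples
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])

coeffs = np.polyfit(x, y, deg=len(x) - 1)  # enough capacity to interpolate exactly
recreated = np.polyval(coeffs, x)

print(np.allclose(recreated, y))   # True: the exact training points come back out
print(np.polyval(coeffs, 1.5))     # a "new" point, blended purely from the fit
```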

10 Likes

Word. Another of my annoying tropes is demanding people explain what they mean by “AI” and how it actually works in the exact context they are trying to sell it in. If they conflate generative large language models with, say, image processing, I know they are a fraud.

Who am I kidding? They are all frauds. Every fucker of them is a fraud.

7 Likes

Apropos for artists:

3 Likes

To bring it back to writing (versus AI visual-art generators): the ones I’ve tried are deeply flawed.
I can kind of see using one to generate an outline, much as I might look at the table of contents of a good textbook to get an idea of how I want to structure my own content. You can ask ChatGPT to outline a class or a story and it comes up with a bullet list (a quick sketch of that is below). Can be helpful.
The bigger problem I see is where it’s generating actual prose or technical writing that gets copy/pasted into final products. Because not only is it stealing from actual content creators/writers, it doesn’t even provide credit. When I’ve added a prompt asking it to show the sources of the info, then checked those sources, they don’t match up. It’s stealing the content and then mis-attributing it. It’s a real mess.

Claudine Gay got ignominiously ousted for far, far less.
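
Here’s roughly what the outline-only use looks like, as a minimal sketch with the OpenAI Python client (v1+); the model name and the prompt wording are just assumptions, and the usual caveat applies to anything factual it produces: check it yourself.

```python
# Minimal sketch of "ask it for an outline, not finished prose", using the
# OpenAI Python client (>= 1.0). Model name is an assumption; swap in whatever
# you have access to. Treat the output as a starting point to restructure by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Outline a one-day introductory class on relational databases "
                   "as a short bullet list of topics. Topics only, no prose.",
    }],
)

print(response.choices[0].message.content)  # a bullet-list outline, nothing more
```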

9 Likes

Or it just makes stuff up, as at least one lawyer found out when using an LLM to generate court filings.

4 Likes

I’m not sure it really matters how Gen AI image creation works technically in comparison to human generated images, does it? What seems to matter is whether there is a difference between the following two example scenarios:

A: I am a human exposed to a body of work which includes (but is not limited to) pointillism. I decide to use a method of making images by using my brush in a certain way that makes my art look very similar to the pointillism of Georges Seurat. I paint a picture of a space shuttle such that somebody might even think I was Seurat if they did not know he never painted a picture of a space shuttle. I am not regarded as a “real” artist because of this unoriginality, but my work is protected by copyright in the US.

B: I am a robot exposed to a body of work which includes (but is not limited to) pointillism. I can now output images that look like the pointillism of Georges Seurat. I am told by a human to make a picture of a space shuttle such that somebody might even think it was by Seurat if they did not know he never painted a picture of a space shuttle. My “prompter” is not regarded as a “real” artist because of this unoriginality, and my/their work is not protected by copyright in the US.

They seem like very similar scenarios to me. :person_shrugging: I suppose if the scenarios talked about a living artist and not Seurat, that would be different? But copyright can protect those too (successful court cases against Jeff Koons, for example).

I dunno. I am confused. Is it just down to the fact that in B we could be talking about hundreds of Seurat-style pictures produced in hours? I note Ai Weiwei has some rather stark views on this, BTW.

And AI will soon be creating many new gig job opportunities! (In the form of solving captchas for them)

1 Like

Yeah, Google “stochastic parrot”. It’s a pretty good analysis of this problem.

2 Likes

What is that exactly?

It’s like a sort of active watermark. So if you are on some “not for AI use” list, you run your images through the tool and upload them wherever you normally would (websites, Insta). Then, if the AI scrapers come along, ignore the “no use” flag and put your art in their corpuses anyway, the tech acts to disrupt the models so they behave weirdly. The hope is that this disruption will force the tech bros to respect the “no AI use” lists and the rights holders, who right now just don’t get paid.

At least I think that’s the idea. Rather cool!

EDIT: Now reading about how it’s been defeated multiple times. And worse, its defeat means that the model makers who are worried about training on their own output get to see which images aren’t AI-created: the ones that have been run through Nightshade :frowning:
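
For a feel of the underlying trick, here is a toy sketch of the general adversarial-perturbation idea: nudge pixels by an amount too small to notice but chosen to push a model’s features the wrong way. This is emphatically not Nightshade’s actual algorithm, just the generic idea demonstrated on a made-up linear “feature extractor”.

```python
# Toy sketch of the adversarial-perturbation idea behind tools like Nightshade.
# NOT Nightshade's real algorithm -- just the generic trick, on a toy linear
# "feature extractor": tiny pixel changes, large shift in what the model "sees".
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for an artwork, pixels in [0, 1]
w = rng.standard_normal((64, 64))     # toy "feature extractor": one linear projection

epsilon = 0.02                        # per-pixel budget: visually negligible
poisoned = np.clip(image + epsilon * np.sign(w), 0.0, 1.0)  # FGSM-style step

print(float(np.abs(poisoned - image).max()))                   # <= 0.02, invisible to us
print(float((w * image).sum()), float((w * poisoned).sum()))   # but the "feature" jumps
```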

7 Likes

Ah, excellent! thanks!

And thanks for posting the link @LurksNoMore!

7 Likes

I think that link deserves a onebox:

2 Likes

… what are you sorry for exactly :confused:

There has for a long time, perhaps for as long as the copyright industry has existed since the 1900s, been a taboo around art and artists that this AI debate will increasingly have to wrestle with too: as necessary and important as art is to human life (it makes life worth living, after all!)… if the vast majority of it disappeared tomorrow, if legions of artists gave up and never came back, nobody would notice. I mean, look at Spotify.

Artists won’t ever stop making art.

Art publishing may collapse.

The return of galleries and physical art for sale sounds like a fabulous result of all this faux-AI fuckery.

8 Likes