Originally published at: Sci-fi magazines are getting inundated with AI fiction submissions | Boing Boing
…
Has there been a use case for “AI making art” that doesn’t involve “so we don’t need to pay creative humans?”
Because so far, I’m not a big fan…
It’s also gibberish. And not the sort of gibberish that actually means something / is deliberate, à la “Common Time”.
The answer seems obvious - set a bot to filter out the bots, then a bot to check the first bot’s work, and a third bot to keep an eye on the second bot to make sure it’s not slacking off on the job.
Eventually we can replicate an entire publishing house.
With those bots ultimately answering to other bots aka the stock market trading algorithms.
As bad as this is, it’s still human spammers sending in the chatbot submissions. Eventually the algorithms are going to start doing that on their own too.
this makes me wonder about one of the big differences between these current AI ‘it just predicts the next word / denoises noise to make a picture’ systems, and humans.
Maybe our brains do some of these techniques to generate language/images in our heads. However, ChatGPT and Stable Diffusion (picking on them) just sit on their butts until someone makes them do an action.
Human brains, on the other hand, are continual thought factories. You have an enormous number of thoughts a day - they are happening constantly, and you have little control over them beyond how you might react to them. You also have the ability to observe and react to your thoughts intentionally, which is probably a part of what we consider human consciousness. (I’m really into this topic lately as I learn about mindfulness, acceptance, the default mode network in the human brain, etc.) TLDR, these content generation AI things have no ‘default mode network’ and their intent is only provided externally by humans providing triggering input.
What would it take to add that ‘functionality’ to an LLM? What would happen if we did?
b4t ai iZ tH3 F4TurE!!! /S
bOTS FIX everything!!! They’ve done wonders on the twitters! /s
AI text generation companies should start voluntarily keeping a (super-secret, scrubbed-of-who-created-it) database of all text created with their models (if not forever, then at least for a period of time). They could then offer (probably as a paid service) a way to search the database for specific text strings to see whether their model created them (of course this won’t be of use if people are running a model locally on their computer, or using a shady AI outfit). Oh well, just a thought.
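A minimal sketch of how such a registry-and-lookup scheme might work, assuming the provider fingerprints overlapping word n-grams rather than storing raw text (all names and the shingling approach here are illustrative assumptions, not any vendor’s actual service):

```python
# Hypothetical "generated-text registry": the provider records a fingerprint
# of every output, and a lookup reports how much of a submission matches.
import hashlib
import re

def shingles(text: str, n: int = 8):
    """Yield overlapping n-word 'shingles' from normalized text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def fingerprint(text: str, n: int = 8) -> set:
    """Hash each shingle so the registry never stores raw text."""
    return {hashlib.sha256(s.encode()).hexdigest() for s in shingles(text, n)}

class Registry:
    def __init__(self):
        self._seen = set()

    def record(self, generated_text: str) -> None:
        """Called by the provider on every piece of model output."""
        self._seen |= fingerprint(generated_text)

    def overlap(self, submission: str) -> float:
        """Fraction of the submission's shingles matching recorded output."""
        fp = fingerprint(submission)
        if not fp:
            return 0.0
        return len(fp & self._seen) / len(fp)
```

An editor’s side of the service would then be a single call like `registry.overlap(submission_text)` and a threshold; hashing the shingles is one way to address the “super-secret” part, since the registry holds digests rather than readable text.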
This topic is the number one issue members of SFWA are discussing right now. They stood up an AI forum on their discord server late yesterday. I posted once this morning, and an hour later there were already 151 additional replies. People are very concerned about this trend, and no one has a good answer. For every “we could try this” someone comes up, there’s a bunch of reasons why it may not work.
I had to mute that thread, frankly. I’ll never keep up with the volume there, I have too much work to do to think about it right now. AI art is a real problem that’s only going to get worse and potentially push out human artists.
As I always say: bots aren’t The problem with Twitterz, people are. I used to follow bots deliberately.
This isn’t a new problem either, though. Conferences have been spammed with AI-written submissions for years, and journals too. That many of them were sleazy and accepted them is another (human) story. Amazon used to be full of Wikipedia reprints until it stopped them, and then it was pumped full of books written by finding failed searches via its API and feeding the results to a Markov chain - thousands of books a month from some “authors”. Not to mention that any essay question a first-year might search for will spit out a spew of nearly identical, unsourced, SEO-optimised clickbait, some written by humans, some not. Or ask a search engine which streaming service near you carries a show or film and watch the bot spew out confident nonsense.
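For anyone who never ran into that era of book spam, the word-level Markov chain behind it is almost trivially simple - a sketch (purely illustrative; the real pipelines were wired up to search APIs and print-on-demand, not this toy):

```python
# Toy word-level Markov chain: train on scraped text, then chain together
# words that were observed to follow each other in the corpus.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words seen immediately after it."""
    words = corpus.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain: dict, start: str, length: int = 20, seed: int = 0) -> str:
    """Walk the chain from a start word, picking a random follower each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)
```

Point a loop like this at a failed-search keyword, wrap the output in a cover, and you can see how “thousands of books a month” was never an exaggeration.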
Google tooling up for its IPO in 2003 and ditching PageRank - that’s when this started in earnest for me.
I’d argue it’s both. Or rather that people make bots to be as disruptive as possible, which riles up some of the actual people, who start posting racist, misogynistic, transphobic, etc, etc posts in response.
Indeed!
And… they’ve closed submissions because of the flood of AI garbage.
Looks like the noise created by AI-generated submissions is going to overwhelm a lot of different systems, leading to similar outcomes - services and opportunities being closed off to most people in favor of only allowing a hand-picked list of the already known to participate.
The thing is, the AI-text generating companies also have a sideline in selling AI-generated text detectors. They’re creating the problem and monetizing the solution. (Except the detectors don’t always work, especially for text generated by someone else’s AI, so it is, at best, only a partial solution and everyone still gets flooded by AI garbage.)
The AI detectors aren’t very good. If they wanted to make money, they’d make more by selling access to search the database of their generated text. And on top of that, they could make even more by charging customers a hefty fee to keep the text those customers generate out of the database (some corporations, for example, would pay that fee so it’s never revealed how much they rely on AI).
It’s true - the grifting opportunities with AI are fractal! (It’s grifts all the way down!)
… early adopters adopt early
… that’s called “extortion,” right?
I thought the internet and ubiquitous computation were supposed to democratize everything and level the playing field
but they seem to have destroyed any economic value in amateur expertise and made connections and credentials more important than ever
Speaking of fractals, have you seen The Light Herder? (just a little shameless self promotion ;-))
Turns out those are two sides of the same thing - the tools have democratized things so much that you can produce (something at least vaguely resembling) an original story without any writing ability, or even real literacy, and the playing field is so level that the millions of people for whom a magazine short-story fee is a substantial amount of money, and who don’t care how they make that money, are on equal footing with people who actually want to write stories. Soon the playing field will be so democratized and level that writers will be competing directly with the software itself, which can spit out millions of submissions in the time it takes a writer to make one…
I doubt very much that this will ever happen.
Publishers will start doing it themselves and eliminate the writers before that.
I mean the tools will get streamlined to the point where users can have the software itself send out effectively infinite submissions of auto-generated garbage “content” without having to bother with the manual work themselves.
I can’t see publishers getting into it because if it ever hits the point where the output isn’t garbage (or publishers think it might be worth printing), the publishers themselves are going to be obsolete as well. Why pay for stories when you can pay directly for (access to) a story-producing machine? There’s no room for a publisher in that dynamic.