Originally published at: BuzzFeed discloses use of AI to write stories | Boing Boing
…
The first and last sentences of this post were written by a human.
I sensed that something was off fairly quickly, if that’s any comfort.
Still, the AI and ML are only getting better. And so as a former journalist I’ve accepted that I’ve joined the ever-expanding ranks of those who were lucky enough to once get paid for doing work that will be handled by a machine in the future.
That Fully Automated Luxury Communism is coming any day now, right?
That, or Fully Automated Capitalist Starvation.
Yes, it was much too verbose and lacked sharp edges.
Also, none of it made any sense.
Plus it didn’t talk about eye-mouths, which is surprising considering how much AIs like eyes.
Seriously, though, it went off-topic rapidly and then veered into incoherence, leaving me scratching my head about what was going on.
Playing around with GPT-3, I've found it can write surface-level posts about most things, but it lacks the skill to go in-depth on any topic. That means if you want a listicle or a summary of an event (a past event, since the model was trained in 2021), it can do it, but only superficially. For example:
In today’s world of digital communication, the use of artificial intelligence (AI) to power forum conversations is becoming increasingly popular. AI-driven forums can offer users a wide range of features such as automated moderation, natural language processing, and personalized recommendations. However, the Boing Boing BBS forum has chosen to stay true to its roots and remain a human-only forum. This decision comes with several benefits that can’t be replicated by an AI-driven forum.
First, Boing Boing BBS offers users an open platform to discuss topics in a more natural way. AI-driven forums are often programmed to only recognize certain topics, which can limit the range of conversation and lead to more homogenous discussions. This can be a major issue for more specialized topics, where the AI may not recognize the nuances of the conversation. With Boing Boing BBS, users can have conversations on any topic in a more free-flowing manner.
Second, the human-only nature of the forum allows for a more engaging experience. AI-driven forums often lack the ability to converse in the same way that humans can. This can lead to conversations that feel more like a monologue than a dialogue, which can be off-putting for some users. With Boing Boing BBS, users can have a more engaging experience as they can have real conversations with other users.
Finally, the human-only nature of Boing Boing BBS gives users more control over the content they are exposed to. AI-driven forums can be prone to bias, as the AI may not be able to recognize the nuances of certain topics. This can lead to users being exposed to content that they may not be comfortable with. With Boing Boing BBS, users can ensure that they are only exposed to content that they are comfortable with as the content is moderated by humans.
In conclusion, Boing Boing BBS has chosen to remain a human-only forum and this decision comes with several benefits. It offers users an open platform to discuss any topic in a more natural way, a more engaging conversation experience, and more control over the content they are exposed to. As AI-driven forums become more popular, it’s important to remember the benefits of a human-only forum such as Boing Boing BBS.
I still choose to believe that Buzzfeed articles are written by a room filled with chimpanzees and typewriters.
Funny. I wonder how an AI trained on a corpus of your actual comments here would have fared in generating the same comment.
Also, while we joke about AI- vs human-driven comment forums, bots are a very real problem. I wish more sites took the example of BB BBS and treated them all as what they are: spam.
The article was definitely weird. It almost felt like a random walk, moving from one topic to a peripherally related one by the luck of the dice— a series of somewhat related sentences, but without a well-defined path through the forest of ideas, so in the end you don’t really know what the point was. But I wonder if it might actually act like a confusion induction to some fraction of the readers, and put them in a suggestible state.
It’s pretty clear the internet and search results are rapidly going to devolve into AI-generated nonsense, often trained on text that wasn’t created by humans to begin with and makes no sense. There are already examples of plausible-appearing but completely wrong “answers” being provided by AI to simple factual questions, answers derived from website SEO gibberish text - and that’s just the tip of the onrushing iceberg. (Especially as search engines include more AI responses to queries.) The argument has been made - probably correctly - that the economic model for the big AI gold rush will largely consist of AI companies generating enormous amounts of AI-generated information pollution and selling tools to detect the garbage their AIs are producing, just in order to navigate around it. Which most people won’t. Search engines, which are already struggling (and often failing) to provide relevant results, will just collapse.
And that’s not even getting into the impact AI bots will have while clogging up social media and email, etc. I notice that post-Elno, Twitter replies are getting spammed with pretty obvious bots running the latest scam. I suspect something like Twitter, a mass social media environment where anyone can talk to anyone, will be impossible soon, replaced by highly curated, heavily gated, and much smaller, online social spaces.
Which makes it a bad (or, depending on intent, relatively entertaining) result. What the AI seems to be getting really good at is creating bland, plausible text that doesn’t really say much of anything - which means when it includes falsehoods, they’re harder to spot.
If any legislation emerges from the current swirl of fear, confusion, and ignorance around politicians over AI, the simplest and most effective measure would be requiring that all editorial content not generated by humans be clearly labeled as such. Put it under the well-understood rubrics of truth in advertising or accurate product labeling.
So much “content” would necessarily fall outside the law that AI text would still have huge impacts, even if all articles were required to be labeled. Also, I could see such a law getting snarled up in 1st Amendment issues as well… (plus, it would potentially just force content onto non-US-based sites).
That’s why I specified editorial content. It has the potential to do the most harm and is easier to regulate than user-generated content. If the U.S. regulates on this issue in any way, this should be the initial approach.
It’s far more likely that the EU will once again take the lead here, as it did with the GDPR that’s now an aspect of most Americans’ on-line lives. I can see them coming up with a standard mark or marks that have to be slapped on any content initially generated by AI.
It will also dilute the reliability of sites like Wikipedia, which may be crowdsourced and human-edited but rely on third-party citations from places like BuzzFeed. A fitting word I’ve been hearing a lot lately to describe what is going on is “enshittification”.
This type of AI generated text interestingly has the same sort of feel as a typical AI generated image - if you squint your eyes at it and take the overall view, it has the appearance of being a certain type of picture, but then if you look more closely at any of the specific details you see that none of them really turn out to be what they were supposed to be.
I think @beschizza has commented on this effect in his old AI generated ‘Dune’ images. It forces your eye/attention to wander around frustratingly- unable to properly latch on to the finer details.
I have a hunch that this effect, while annoying in the current crop of AI works, will likely be harnessed in the near future to make a new type of art image that makes this never-ending attempt of the brain to properly analyze the image somehow pleasurable. Imagine people staring at these images enraptured for hours on end, uneaten food dishes piling up around them as their families struggle in vain to break their attention…
They love eyes and are overly sensitive about hands, which they can’t seem to get right.
Wait a minute…overly sensitive about hands, rapidly veers off-topic. Trump is an AI!
I feel like editorial content - leaving aside 1st Amendment issues - is a drop in the bucket, but also I suspect we’re going to get into a “California Prop 65” situation (which required labeling everything with carcinogens, but resulted in labeling… everything). That is, we’re going to see the adoption of “AI-powered” tools for writing (and art, etc.), so there’ll be a very blurry line between AI-created (and human-edited) and AI-assisted work that will just result in pretty much everything getting the label, making it useless.
The first and last sentences of this post were written by a human
For no good reason, let’s do a little ChatGPT experiment.
Request: rewrite this post in the style of Dr Seuss.
Actual output:
Oh, the things that BuzzFeed said,
Using AI to make stories for their bed,
Their stock price soared, investors so glad,
To replace human writers, it’s not that bad!
But wait, said CEO Jonah Peretti,
Creating fake news could be quite messy,
False stories also from big outlets flow,
Like the AP’s report, “Jailed and Fired For Being Pro-Fascism: A Review”
The New York Times also had a say,
On a case where the government went astray,
Violating the Constitution, they did it with glee,
Creating a body where employers could fire those who disagree
Vox also chimed in, a new tool to see,
Bias in job decisions, oh my, oh me,
And more than 10% of Fortune 500,
Changed their policies, all thanks to data done right
But the online community, they had a hoot,
Saying the algorithms were too snazzy and cute,
They’d have to change them, toot sweet,
When BuzzFeed will deploy, still incomplete.
I think that even more than this Buzzfeed example, the recent CNET stuff points to how this is going to destroy the internet. At CNET, Red Ventures is using AI to generate money-management articles that it uses to insert credit-card affiliate links into. The articles are intended to be generic and essentially useless; all that matters is getting that link in front of you.
What I think makes AI so dangerous is the ease and scale at which anyone can set themselves up generating huge amounts of web presence for affiliate sales while providing no value to the internet from a content standpoint - just shooting out lots of chaff (which will eventually also get used to train machine learning algorithms, making them worse…)
If I wanted to, I could probably write something overnight that crawls Amazon for popular items, takes the name of each item, pipes it to ChatGPT with the prompt “Write a positive review of [product name].”, takes the output, turns every mention of the product name into an affiliate link, and publishes it to a review website filled with nothing but positive reviews my AI has generated.
I could generate, let’s say, 1 every two minutes (which is actually pretty slow… I could probably generate more). At the end of my first day that’s 720 reviews. That never slows down.
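The volume math above is easy to check. A minimal sketch, assuming the commenter's hypothetical rate of one review every two minutes (no actual scraping or API calls involved):

```python
# Back-of-the-envelope volume for a fully automated review mill.
# The two-minute interval is the commenter's assumption, not a measured rate.
SECONDS_PER_REVIEW = 120  # one generated review every two minutes

reviews_per_hour = 3600 // SECONDS_PER_REVIEW   # 30 reviews/hour
reviews_per_day = reviews_per_hour * 24         # 720 reviews/day
reviews_per_year = reviews_per_day * 365        # 262,800 reviews/year

print(reviews_per_hour, reviews_per_day, reviews_per_year)
```

So a single person running one slow pipeline matches the 720-per-day figure, and a year of unattended operation yields over a quarter million pages of chaff.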
And this is easy to do. Really easy. So eventually the internet is overwhelmed by AI-generated cruft and becomes useless. This is what I’m really worried about with all this stuff, even more so than the copyright issues.