Google shows off AI "news article" writer to newspapers, leaving newspapers to wonder who did the fact-finding, investigating and reporting

I guess the idea is ‘there is nothing new under the sun’ and all the news has already been written. You just need a basic AI to meld current names with archived stories.


I was reading a list of small cities in California, one of which was Ft. Bragg. The AI-generated article was confusing Ft. Bragg in the Carolinas with the one in California, even though that fort has since been renamed Ft. Liberty.


It’s incredible how many people seem oblivious to this basic fact of journalism.

The world has been sinking ever deeper in a sea of misinformation in part because so many people don’t understand the distinction between an opinion writer / blogger and a journalist. Apparently Google doesn’t get it either.


It’s 793, except that these hubristic math bros are absolutely in a position to burn down the field they fail to understand. Seems likely to go well.


I think it’s pretty clear that this, or something like it, will become standard in every newsroom in five years’ time.

It will probably be used by (fewer) journalists, who do the actual legwork. Who knows? Maybe it will create a whole new type of successful journalist who’s incredible at investigating but bad at writing. These tools could close that gap.

But my suspicion is that it will make journalism worse. Writing is thinking. The best journalists present the facts in a way that impacts us emotionally and mentally. There is a marriage between message and meaning that will not be captured through a union of man and machine.


Jebus Christ, the AI people are astoundingly clueless. It’s one thing to pitch something to an industry without fully understanding its business, but it’s something else entirely to pretend that basic business doesn’t even exist, as if journalism comes into being through spontaneous generation. I have to say, if I had been a journalist hearing that presentation, I’d have been moved to (verbal) violence, loudly questioning the basic intelligence of the presenters and strongly suggesting that perhaps their jobs would be better replaced by AI.

Coincidentally, I just this minute learned of multiple “gamer news” sites which are apparently fully AI-generated, scraping Reddit for content. Reddit users figured this out and started having discussions about nonsense game features to see if they could get the AI to post articles about them, which apparently they have. One site’s “World of Warcraft” news seems to be about half nonsense, thanks to this: “articles” about how players are looking forward to the introduction of “Glorbo” and the new map of Colombia in the game. I just noticed another, identical website that clearly got all its content the same way, ironically with an article about how Diablo players were concerned about AI-written articles scraped from Reddit…

Nah, because the investigative journalist still has to write down what they found out, and running it through an AI isn’t remotely going to make it better. The journalist has to be able to convey what’s important and why, as that’s not information the AI can even figure out. All the AI can really do is arrange the text in a conventional way.


Finally, quality newspapers are freed from the tyranny of having to manually paraphrase the same AP and Reuters releases that everyone else republishes verbatim anyway. Hallelujah!


All this AI techbro bullshit is just a cover for mass plagiarism and copyright violation. There is no intent, purpose, or meaning conveyed in the writing, because the whole thing is word salad, meant to be consumed in a two-second-attention-span culture. As long as it looks coherent enough at first glance, it passes the smell test. I mean, they have no practical, realistic idea of what to do with it to solve any real-world problem. The whole technology is about pattern recognition and bullshit generation more than anything else. The funny thing about generative AI is that it needs real, legitimate patterns from real people to create something that looks human, but what’s happening here is that one system vomits crap and the other swallows it up. And there will always be people looking to exploit these dumb “AIs” for fun and profit by poisoning the input, like the Reddit users you mentioned.

Yes, I admit they made something impressive at first glance, but the problem is that these AIs lack the capacity to detect nuance and implication in human interaction. That’s why content moderation on social media is not easy. Hell, I misunderstand or am misunderstood at times, because communication is hard, even for humans. The most comical thing about this upsell bullshit is that it’s all about replacing the “humans” on the payroll while trying to do the thing humans are best at: communicating with other humans.

I mean, if the tech is so good, why not do something like build an unmanned labor force to send to the moon or Mars, where the environment is hostile to humans? But no, we have to replace humans on Earth first. All of this is just techbros projecting their inner desire to replace anyone they consider “disposable,” because we ask for rights, benefits, and a guaranteed standard of living, and they want none of that. Well, fuck them. Society can exist without them, but they can’t exist without us. They can fuck off to the fourth dimension for all I care.


It’s almost like Timnit Gebru was fired because she was warning about this sort of thing, as if this was the goal they had in mind the whole time.


It’s the main use of “AI,” and it’s only useful for these bogus “news” sites that just have to lure people in and serve the ads; they don’t care if readers immediately recognize that it’s all garbage. The owner has their fully automated revenue stream, and they don’t care that the only thing they’re actually doing is polluting the internet with garbage that makes it harder for the real thing to exist.

I’ve seen a couple of legit (very limited) uses for AI-generated text, but largely it’s just information pollution.

Don’t need an ethics team if your plan is to be deliberately unethical!


Exactly. The scary part is that so many small papers were put out of business that in many areas there are now no journalists to report on events. Those in power who want to keep residents ignorant about decisions being made by the folks in charge of local government, schools, and law enforcement have no problem with this. So the people affected will get word salad on social media, or nothing at all, with no one to point out issues like mismanagement, corruption, and other shenanigans.

:thinking: Maybe our only hope is to game the system in the same manner as those Reddit users.

Of course, hinting that pols “seem misguided/overpaid/incompetent/corrupt/etc.” could be a substitute for “looks tired.” :smiling_imp:




This AI journalist sounds like a sub-editor.

You probably could get rid of sub-editors as long as you still had reporters and press releases to feed raw stories into the machine. Like someone said above, you’d just run the feed from AP and Reuters through the machine to “polish” it.

As a one-time sub-editor, I’d like to think sub-editors made a valuable contribution to news production, but maybe the machine can do it cheaply enough that no one will care.


I feel like this is basically what politics has already turned into, with the Republican party counting on it working this way. With few of their voters aware of what positions they actually take, it’s all about wild social media claims they try to propagate, both in their favor and libelous claims about their opponents. Their voters seem to just accept whatever claims align with whatever they wanted to believe anyways.

What I find even funnier is that one AI article about the phenomenon seems to have happened without the Reddit users trying to manipulate the system - the AI turned their complaints about itself into an “article” just because they were in a game subreddit. The end result of these systems seems to be to eat their own tails.


If your AI can’t sit down and interview Joe Biden in person and get some salient quotes, then what good is your AI to me? Wake me when it can do that, at which point I’ll have it replace my staff, you nutless fecks.


Now you’ve got me hoping he never used any speech-to-text owned by Big Tech. They’d even generate his sound bites. He could deny anything they made up, and maybe most of the public would believe him. :weary: jk.

:thinking: No, using current systems for recording meetings and interviews is easier, like all the videoconference tools we have now. Questions will lose nuance, with less follow-up and fewer challenges to vague answers and deflection. Is the cost of creating articles from that type of data less than paying people to attend meetings, ask questions, and report the results? :woman_shrugging:t4: I’m sure some tech firms would like to convince folks that it saves money, without clarifying who benefits and who loses something (or the difference between cost and value).


I’d take this one step further. News organizations need their own Certified Organic label that identifies their stories as 100% A.I.-free.


Exactly this. Anyone who’s written down the facts of a thing has likely experienced the sensation of “hmm…I wrote that A leads to B, and C leads to D, but how did B relate to C? I need to understand this better because my description has a big hole in it.”

As we’re seeing in so many places, an AI just papers right over those kinds of gaps, either by ignoring them or by making up complete fabrications to bridge the differences. That’s not reporting, that’s somewhere between fiction, reckless indifference, and libel.


Quality is less and less anyone’s idea of a USP anymore. Sigh.

@angusm Strange how your post works equally well both with and without an /s at the end… :wink:

So, when an AI inevitably attributes a quote to someone incorrectly, who gets sued? If a story it writes directly attributes false information about a person, who is guilty of slander/libel? Google? The paper?

Since separate fact-checkers and copy editors went away a while ago at the majority of publications, writers are now mostly expected to fact-check and copy-edit their own work, and therefore bear ultimate legal responsibility for what’s published. If a publication chooses to use an AI, I fail to see how that publishing company won’t be held to the same legal standards as any other publisher making such claims. This will probably result in a landmark case establishing exactly who is to be held liable in such instances, but I can only imagine the legal mess that would result.

Now, this could bring back fact-checkers and copy editors, just to make sure that what’s being said isn’t actionable, but I view this as just what Neal Stephenson describes in Anathem: Artificial Inanity polluting all known information sources, rendering the truth itself unknowable and, since it’s usually less entertaining, largely irrelevant to the general population.