Google shows off AI "news article" writer to newspapers, leaving newspapers to wonder who did the fact-finding, investigating and reporting

Now you’ve got me hoping he never used any speech-to-text owned by Big Tech. They’d even generate his sound bites. He could deny anything they made up, and maybe most of the public would believe him. :weary: jk.

:thinking: No, it’s easier to use existing systems for recording meetings and interviews, like all the videoconference tools we have now. But questions will lose nuance, with fewer follow-ups or challenges to vague answers and deflection. Is the cost of generating articles from that kind of data really lower than paying people to attend meetings, ask questions, and report the results? :woman_shrugging:t4: I’m sure some tech firms would like to convince folks that it saves money, without clarifying who benefits and who loses (or the difference between cost and value).

I’d take this one step further. News organizations need their own “Certified Organic” label that identifies their stories as 100% A.I.-free.

Exactly this. Anyone who’s written down the facts of a thing has likely experienced the sensation of “hmm… I wrote that A leads to B, and C leads to D, but how does B relate to C? I need to understand this better, because my description has a big hole in it.”

As we’re seeing in so many places, an AI just papers right over those kinds of gaps, either by ignoring them or by fabricating connections to bridge them. That’s not reporting; that’s somewhere between fiction, reckless indifference, and libel.

Quality is less and less anyone’s idea of a USP. Sigh.

@angusm Strange how your post works equally well both with and without an /s at the end… :wink:

So, when an AI inevitably attributes a quote to the wrong person, who gets sued? If a story it writes presents false information about a person as fact, who is guilty of slander/libel? Google? The paper?

Since separate fact-checkers and copy editors went away a while ago at the majority of publications, writers are now mostly expected to fact-check and copy-edit their own work, and therefore bear ultimate legal responsibility for what’s published. If a publication chooses to use an AI, I fail to see how that publishing company won’t be held to the same legal standards as any other publisher making such claims. It will probably take a landmark case to establish exactly who is liable in such instances, but I can only imagine the legal mess that would result.

Now, this could bring back fact-checkers and copy editors, if only to make sure that what’s being said isn’t actionable, but I see this as just what Neal Stephenson describes in Anathem: Artificial Inanity polluting all known information sources, rendering the truth unknowable and, since it’s usually less entertaining, largely irrelevant to the general population.

This topic was automatically closed after 5 days. New replies are no longer allowed.