Originally published at: Associated Press "clarifies" its AI rules | Boing Boing
…
I didn’t feel like composing my own comment, so I had ChatGPT do it:
The Associated Press’s attempt to downplay the impact of its partnership with OpenAI is concerning. While claiming that AI won’t replace journalists, the AP’s decision to train AI with its news archive contradicts this assertion. Treating AI-generated output as “unvetted source material” raises questions about reporting reliability, and the acknowledgment of AI’s potential for mis- and disinformation highlights the risks. Encouraging skepticism in journalists is important, but it’s insufficient against the sophistication of AI-generated content. The sarcastic reference to “$5.29 settlements” adds bitterness. The AP should address ethical and societal concerns genuinely instead of offering superficial clarifications.
Not bad, I guess.
It’s definitely concerning, but I’m not sure how training it on past stories would help it write an article on a breaking story. The AI can’t go do the research and conduct interviews, so it’s just going to either make those up (something it has done before) or copy them from other news sources. I don’t see how doing the latter would benefit the AP unless they no longer care about being the first to publish a story. All this would seem to provide for is crafting an AP-like story with facts and details taken from articles published by other sources. In other words, it’s just going to take articles from CNN, Fox News, the NY Times, the Guardian, etc. and rewrite them in the style of the AP. That seems like the fast track to irrelevancy.
As a former writer, I’ll admit that a lot of journalistic writing is formulaic, even on breaking stories. That’s especially true of wire service copy. What an AI can’t do is go beyond that base to search out the poignant human elements of the story or tease out a nuanced conclusion from the gestalt of all the facts and quotes of a particular report. AI also won’t be able to exercise judgment on how to classify and treat a human source or subject.
That lack of judgment extends to the ability to distinguish a reputable citation from a disreputable one – a media literacy skill that’s critical in our current Age of the Grifter. In addition, AIs don’t have awareness of ethical and moral societal norms.
All of which won’t stop the AP from using AI to reduce costs further, eliminating a whole bunch of already underpaid employees (many of whom might have used their experience to develop their careers as investigative or long-form journalists or as editors).
It sounds like they’re just using ChatGPT as an automated research assistant. ChatGPT is able to pull information together from more sources (although not necessarily better sources) than a human researcher can and provide a decent summary.
I think this is exactly why a lot of writers who’ve been doing this for a while aren’t super concerned about losing their jobs (myself included). They bring so much more to the table than simply stringing words together.
At a low level, people are going to lose out. But those folks who’ve been doing this for a long time aren’t easily replaceable any time soon.
I’d like to think so, but I don’t discount the power of the MBA mindset or the greed of many shareholders. There are a lot of suits salivating at the prospect of reducing headcount just in time for the next quarterly report.
A good (if hypocritical) goal, but how the hell are they supposed to do that? They’re creating a ton of AI-generated content of their own but hope to somehow not be exposed to the AI content created by others?
There’s been a lot of short term fallout (I’m seeing a TON of very burned out and tired writers on social right now), but it also seems to be pushing people to get creative with how they earn a living as a writer.
Suits have been trying to kill journalism for as long as I can remember. They always seem to forget that people want quality. There are so many talented writers floating around out there right now looking for work and a lot of organizations eager to scoop them up. The desire to improve the bottom line is going to come back and bite these MBAs in the ass when all that talent they let go starts publishing with the competition.
Maybe I’m just in the overly optimistic stage of everything at the moment. I’ve been watching this play out for months and it generally feels like it’s getting better (despite crap like this).
I’m a bit more pessimistic when it comes to the effects of AI in journalism or any other creative industry. I see it pushing the career path for human writers even more in the direction of a “star system”, one where the selection of the handful of winners is still informed by privilege and nepotism as well as talent and effort.
Yes. In any field, there are stars who will rise to the top and be successful. But it’s a pyramid, and most workers are not stars. AI is going to be cutting off the base of the pyramid. How far up will the cut go? 20%? 80%? The impact on society will be large.
It’s always felt like that kind of system to me.
I think most of my optimism comes from the fact that this doesn’t feel all that much different from how journalism reacted to the internet as a whole. My first couple years of school were filled with career writers wringing their hands about the impact of the internet on the industry (there was a ton of negativity and no one seemed to know how to work with this newfangled tech). Things are still far from perfect, but we came through those years okay.
Unfortunately, I see Neal Stephenson once again being prescient. In Fall, he describes a future where the wealthy employ human editors to cut through the mountain of garbage media (much of it presumably created by AI). I can see a lot of those who might have once been journalists doing this kind of gig work instead.
It always has been. What’s different here is that AI is a technology that in large part overcomes the suits’ “problem” of creating a simulation of a human writer that’s convincing to the majority of consumers. That in turn will cut off traditional avenues of opportunity to apprentice and journeyman writers (especially those from less advantaged backgrounds), as @muser notes.
A disconcerting amount of “urging” and “encouraging” in those guidelines…
This is the clear response to the argument of “Don’t worry, AI can’t do the top tier stuff, real workers with something to contribute will be fine.” Every job has a pipeline and a farm system. Developing and preparing a skilled workforce is the thing that successful societies do best. Social democracies are remarkable in particular for their investment in training and retraining workforces for changing economies. AI seeks to nuke opportunities for development at intern and entry-level positions, which will ripple through the system, particularly in a heavily non-socialist society where everyone is expected to scrap for opportunities. Meritocracy was always a joke, but this removes even the fig leaf, and pushes us further toward not having many options besides “well, I golf with his father, so let’s hire him.”
more of an umami i’d say
Yeah. That’s very true. I definitely owe a good chunk of my early success to an older writer who needed an editorial assistant. He opened my eyes to what the life of a working writer really looked like.
If there’s an upside to what @muser said about cutting off the bottom of the pyramid, it’s that that world is incredibly exploitative at the moment. There are content mills and whatnot that are destroying the people who get sucked into them.
Alongside that, I’m seeing more people from those places that are traditionally exploited pushing back. I know a few people who are working really hard to improve the conditions and treatment of young/new writers in those regions, which is awesome. The more the playing field gets leveled for everyone, the better.
It’s a weird time to be a writer. I’m trying to remain hopeful, while also trying to find ways to not rely quite so heavily on my words and experiences (which could end up being worthless down the road).
can it? because i’d question whether it can adequately detect novel information, or whether it would ignore that novel information in favor of similar summaries of previous content
@gracchus above highlights the issue
what you’d have to do is conduct blind studies on summaries of new information - not data that’s already in its training set. (rough sketch of what i mean at the bottom of this post.)
for instance, look at the generated summary in this thread:
this is superficial regurgitation of existing talking points. one novel element it glosses over:
that’s jobs right there. this means they are fine chopping human-made graphics from their articles so long as it’s labeled. skilled graphic artists and photographers will be displaced under these guidelines, and it’s not mentioned in the ai-generated summary
unless that’s on purpose… cue dun dun dun noises
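to make the blind-study idea concrete, here’s a rough sketch - everything in it is hypothetical (the article pool, the field names), but the two things that matter are that the articles post-date the model’s training cutoff and that the raters never know which summary came from the model:

```python
import random

# hypothetical inputs: articles published AFTER the model's training cutoff,
# each paired with a model-written summary and a human-written one
articles = [
    {"headline": "example headline",
     "model_summary": "the model's summary",
     "human_summary": "a staff writer's summary"},
]

def make_blind_trials(articles, seed=42):
    """shuffle each pair so raters can't tell which summary is the model's."""
    rng = random.Random(seed)
    trials, answer_key = [], []
    for i, art in enumerate(articles):
        pair = [("model", art["model_summary"]), ("human", art["human_summary"])]
        rng.shuffle(pair)
        trials.append({"id": i, "headline": art["headline"],
                       "summary_a": pair[0][1], "summary_b": pair[1][1]})
        answer_key.append({"id": i, "a": pair[0][0], "b": pair[1][0]})
    return trials, answer_key

# raters score the trials blind; only unblind with answer_key afterward
trials, answer_key = make_blind_trials(articles)
```

if the model only looks good on stories its training data already covered, that would show up here fast.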
Considering sometimes those sources are completely made up, yeah… it’s not better… I also doubt it’s any good at vetting those sources, while a trained researcher should be able to do that…
They could use GPT to polish their prose, which is what LLMs are made for, and good at.
Having ChatGPT write the articles in the first place would be counter-productive, because by definition a news agency deals in “the news” rather than what word, according to The Internet™ as of late 2021, will be most likely to show up next in this unfinished sentence.
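For what it’s worth, the polishing use case is nearly trivial to wire up. A minimal sketch, assuming the 2023-era openai Python client; the model name and the system prompt here are placeholders of mine, not anything the AP has said it uses:

```python
import os
import openai  # 2023-era client, e.g. pip install openai==0.27.8

openai.api_key = os.environ["OPENAI_API_KEY"]

def polish(draft: str) -> str:
    """Ask the model to tighten a human-written draft without adding facts."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("You are a copy editor. Fix grammar and tighten the "
                         "prose. Do not add, remove, or alter any facts.")},
            {"role": "user", "content": draft},
        ],
        temperature=0.2,  # low temperature: we want edits, not invention
    )
    return response.choices[0].message.content
```

The reporter still supplies every fact; the model just rearranges words, which is the one thing next-word prediction is genuinely built for.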