Daily Caller, Newsmax, TruthSocial, etc. … TAKE OUR MONEY!
“I made this myself!” – D. Nunes, probably.
I presume the intended use is as a way to organize information a reporter has already collected into a first draft for an article, but perhaps I’m giving Google too much credit.
I’m sensing the ultimate propaganda spreader here.
“We’ll give others that competitive advantage if you don’t want it.”
Competitive advantage? In ad revenue and subscriptions? Assuming that AI-aided or AI-created journalism would be identified as such in every publication (and there had better fucking be that requirement, because readers would be asking), why would any reader – given a choice and being aware of what could go wrong – opt for AI journalism, beyond the occasional peek for shits and giggles?
I guess the idea is ‘there is nothing new under the sun’ and all the news has already been written. You just need a basic AI to meld current names with archived stories.
I was reading a list of small cities in Calif, one of which was Ft Bragg. The AI-generated article was confusing Ft Bragg in the Carolinas with the one in California, and the Carolina fort has since been renamed Ft Liberty anyway.
It’s incredible how many people seem oblivious to this basic fact of journalism.
The world has been sinking ever deeper in a sea of misinformation in part because so many people don’t understand the distinction between an opinion writer / blogger and a journalist. Apparently Google doesn’t get it either.
It’s 793, except that these hubristic math bros are absolutely in a position to burn down the field they fail to understand. Seems likely to go well.
I think it’s pretty clear that this or something like it will become standard in every newsroom in five years’ time.
It will probably be used by (fewer) journalists, who do the actual legwork. Who knows? Maybe it will create a whole new type of successful journalist who’s incredible at investigating but bad at writing. These tools could close the gap.
But my suspicion is that it will make journalism worse. Writing is thinking. The best journalists present the facts in a way that impacts us emotionally and mentally. There is a marriage between message and meaning that will not be captured through a union of man and machine.
Jebus Christ, the AI people are astoundingly clueless. It’s one thing to pitch something without fully understanding their business, but it’s something else entirely to just pretend their basic business doesn’t even exist, as if journalism comes into being through spontaneous generation. I have to say, if I had been a journalist hearing that presentation, I’d be moved to (verbal) violence, loudly questioning the basic intelligence of the presenters and strongly suggesting that perhaps their jobs would better be replaced by AI.
Coincidentally I just this minute learned of multiple “gamer news” sites which are apparently fully AI-generated, scraping Reddit for content. Reddit users figured this out and started having discussions about nonsense game features to see if they could get the AI to post articles about them, which apparently they have. One site’s “World of Warcraft” news seems to be about half nonsense, thanks to this - “articles” about how players are looking forward to the introduction of “Glorbo” and the new map of Colombia in the game. I just noticed another, identical website that clearly got all its content the same way, ironically with an article about how Diablo players were concerned about AI-written articles scraped from Reddit…
Nah, because the investigative journalist still has to write down what they found out, and running it through an AI isn’t remotely going to make it better. The journalist has to be able to convey what’s important and why, as that’s not information the AI can even figure out. All the AI can really do is arrange the text in a conventional way.
Finally, quality newspapers are freed from the tyranny of having to manually paraphrase the same AP and Reuters releases that everyone else republishes verbatim anyway. Hallelujah!
All this AI techbro bullshit is just a cover for mass plagiarism and copyright violation. There is no intent, purpose, or meaning to be conveyed in the writing, because the whole thing is just word salad, intended to be consumed by a two-second attention-span culture. As long as it looks coherent enough at first glance, it passes the smell test. I mean, they have no idea what to do with it that is practical and realistic to solve any real-world problem. The whole technology is about pattern recognition and bullshit generation more than anything else. The funny thing about generative AI is that it needs real, legitimate patterns from real people to create something that looks human, but what’s happening here is one vomits crap and the other swallows it up. There are also people who will look to exploit those dumb “AIs” for fun and profit by poisoning the input, like you mentioned with the Reddit users.
Yes, I admit they made something impressive at first glance, but the problem is that those AIs lack the capacity to detect nuance and implication in human interaction. That’s why content moderation on social media is not easy. Hell, I misunderstand or am misunderstood at times because communication is hard, even for humans. The most comical thing about this upsell bullshit is that it’s all about replacing humans on the payroll while trying to do what humans are best at, communicating with other humans.
I mean, if the tech is so good, why not do something like create an unmanned labor force to send to the Moon or Mars, where the environment is hostile to humans? But no, we have to replace humans on Earth first. All these things are just techbros projecting their inner desire to replace anyone they consider “disposable” because we ask for rights, benefits, and a guaranteed standard of living, and they want none of that. Well, fuck them. Society can exist without them, but they can’t exist without us. They can fuck off to the fourth dimension for all I care.
It’s almost like Timnit Gebru was fired because she was warning about this sort of thing, as if this was the goal they had in mind the whole time.
It’s the main use of “AI,” and that’s only useful for these bogus “news” sites that just have to lure people in and serve the ads – they don’t care if readers immediately recognize that it’s all garbage. The owner has their fully-automated revenue stream, and they don’t care that the only thing they’re actually doing is polluting the internet with garbage that makes it harder for the real thing to exist.
I’ve seen a couple of legit (very limited) uses for AI-generated text, but largely it’s just information pollution.
Don’t need an ethics team if your plan is to be deliberately unethical!
Exactly. The scary part is that so many small papers were put out of business that in many areas there are now no journalists to report on events. Those in power who want to keep residents ignorant about decisions being made by the folks in charge of local government, schools, and law enforcement have no problem with this. So the folks affected will get word salad or nothing at all on social media, with no one to point out issues like mismanagement, corruption, and other shenanigans.
Maybe our only hope is to game the system in the same manner as those Reddit users.
Of course, hinting that pols “seem misguided/overpaid/incompetent/corrupt/etc.” could be a substitute for “looks tired.”
This AI journalist sounds like a sub-editor.
You probably could get rid of sub-editors as long as you still had reporters and press releases to feed raw stories into the machine. Like someone said above, you’d just run the feed from AP and Reuters through the machine to “polish” it.
As a one-time sub-editor, I’d like to think sub-editors made a valuable contribution to news production, but maybe the machine can do it cheaply enough that no one will care.
I feel like this is basically what politics has already turned into, with the Republican party counting on it working this way. With few of their voters aware of what positions they actually take, it’s all about wild social media claims they try to propagate, both in their favor and libelous claims about their opponents. Their voters seem to just accept whatever claims align with whatever they wanted to believe anyways.
What I find even funnier is that one AI article about the phenomenon seems to have happened without the Reddit users trying to manipulate the system - the AI turned their complaints about itself into an “article” just because they were in a game subreddit. The end result of these systems seems to be to eat their own tails.
If your AI can’t sit down and interview Joe Biden in person and get some salient quotes, then what good is your AI to me? Wake me when it can do that, at which point I’ll have it replace my staff, you nutless fecks.