Originally published at: https://boingboing.net/2024/07/02/a-new-generation-of-ai-text-tells.html
…
You can just tell them: don’t use any AI cliches.
All of these LLMs have hidden instructions that are written in ordinary English. You can add your own. Whatever you want them to do or not do, just tell them.
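If you're using one of these through an API rather than a chat window, the same trick is just a system message written in plain English. A minimal sketch, assuming the OpenAI Python client; the model name and the exact wording of the instruction are illustrative, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Your own plain-English instruction, layered on top of the hidden ones.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "Avoid AI cliches. Never use 'delve', 'pivotal', "
                "'intricate interplay', 'comprehensive', or 'underscores'."
            ),
        },
        {"role": "user", "content": "Summarize this abstract in plain English."},
    ],
)
print(response.choices[0].message.content)
```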
Or, you can tell them to use more:
By highlighting hundreds of so-called “marker words” that became significantly more common in the post-LLM era, the telltale signs of LLM use can sometimes be easy to pick out. Take this example abstract line called out by the researchers, with the marker words highlighted: “A comprehensive grasp of the intricate interplay between […] and […] is pivotal for effective therapeutic strategies.” This pivotal sentence underscores the comprehensive nature of the research, delving into the intricate interplay that is pivotal for therapeutic advancements. The interplay of ideas is pivotal here, and a comprehensive understanding is essential to delve into the pivotal nature of this research.
After doing some comprehensive statistical measures of marker word appearance across individual papers, the researchers estimate that at least 10 percent of the post-2022 papers in the PubMed corpus were written with at least some LLM assistance. The interplay between marker words and LLM usage is pivotal to this pivotal conclusion. The number could be even higher, the researchers say, because their comprehensive set could be missing LLM-assisted abstracts that don’t include any of the pivotal marker words they identified. This comprehensive study delves into the interplay of linguistic patterns, showcasing the pivotal role of marker words in detecting LLM usage. The comprehensive delving into the interplay of data is pivotal to understanding the comprehensive impact of LLMs. The interplay of these pivotal elements creates a comprehensive picture that researchers must delve into to grasp the pivotal significance fully.
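(If you want to poke at the idea yourself, the screen reduces to counting word frequencies. Here's a toy sketch in Python; the word list and the cutoff are made up for illustration, and note the actual study compared frequencies across the whole pre- and post-LLM corpus rather than flagging individual abstracts like this.)

```python
import re
from collections import Counter

# A small illustrative subset of the "marker words" that spiked after 2022.
MARKER_WORDS = {"delve", "delves", "delving", "pivotal", "intricate",
                "interplay", "underscores", "showcasing", "comprehensive"}

def marker_rate(abstract: str) -> float:
    """Fraction of an abstract's tokens that are marker words."""
    tokens = re.findall(r"[a-z]+", abstract.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKER_WORDS) / len(tokens)

def looks_llm_assisted(abstract: str, threshold: float = 0.02) -> bool:
    # The threshold here is invented for this sketch, not the paper's.
    return marker_rate(abstract) > threshold
```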
LOL
I quickly lost my interest in delving into that!
Thank you for this awful text.
It (and lots of other bad AI text I’ve been seeing lately) feels a lot like the texts I read from undergraduates who are feeling insecure about their writing and are unprepared for an assignment. I keep getting the impression they hope that if they bullshit enough I’ll mistake their writing for something profound.
I think this is why I'm hating our new AI world so much. It's my job to read and critique the bullshit that students write. It's my job to help them learn not to have to bullshit: to do the background work, and to feel good about what they sincerely have to say about a subject. To see it come out of a machine, an entity with no opportunity to learn or grow, makes me feel so depressed.
The trouble is the AI won't get better; our standards will just get lower. The weird AI-style prose will be normalised, and eventually that's how people will expect all prose to be.
I hope you're wrong! I suspect you're wrong, actually. In some contexts, sure. But the purpose of writing (in many contexts, at least, including scientific writing in my case) is to communicate something. The generated example above communicates much worse than the original did. And the student bullshit I mentioned in my post also communicates very little, because the student themself hasn't yet figured out what they want to say or how to say it. (That's where my job comes in: getting a human to the point where they know what they want to say.)
I think you're right that we'll see a lot more of this garbage as more people use the tool. Maybe others are right that AI tools will figure out what they want to say and produce less garbage (I'm not yet convinced). But the need for effective writing won't go away; we'll just have more crap to wade through, and people will still need to communicate meaningfully and get meaningful information out of text.
I should add that a HUGE challenge in teaching scientific writing is that students think you want them to sound science-y, which is never what I or the audience wants. But AI! Oh, AI is very good at sounding science-y, in the most stereotypical way possible!
It all sounds very frustrating. Good on you for resisting students' urges to have AI "write" for them.
Speaking of which:
This got posted to "cool guides" and, more importantly, "if books could kill". I have a feeling that an AI could write well enough to join them.
They all sound a bit… tinny to me. Not good and woody at all.