If anything, this experiment should demonstrate to Happy Mutants how bad current “AI” is. If I were to hire a human to summarise discussion threads, I wouldn’t choose one with poor reading comprehension skills, who attributed quotes to the people citing them, etc.
I thought of that, too.
More work for @orenwolf to moderate the summaries?
What if an AI-generated summary violates the community guidelines?
Some time ago I made a (mostly tongue-in-cheek) suggestion that AI could be used as a moderation assistant. It was laughed at then, but who’s laughing now?
Maybe the AI thinks we’re all addicts?
Perhaps all the comments related to the new AI gizmos and whatzits would best be split off into a separate topic? AI features feedback? I think this topic could get cluttered up otherwise.
“Would you prefer Consumer, meat-bag?”
Can you imagine, if in the 1960s someone introduced a pocket calculator that could instantly compute sine/cosine and square roots with astounding accuracy, but sometimes returned “5” for 2+2, and other times returned “-299792458”? Who would use such a thing? That’s how I feel about a lot of the so-called AI stuff being proffered.
Absolutely nobody.
Great analogy.
It’s been estimated to be about 70% accurate, at best.
And here’s the thing about that percentage:
Just because HK-47 is fun:
Back on topic!
Thank you, that’s validating for someone who’s inclined toward perfectionism.
Having higher standards isn’t perfectionism; I’d happily settle for mere accuracy and accountability.
Modern software release cycles have numbed us to bugs and glitches. We accept them as a normal part of releasing software nowadays. Look at how buggy the average game is on initial post-beta release. Mobile games especially are now using their own player base’s feedback in place of internal QA when releasing updates. I understand why BoingBoing’s BBS makes a good test site for Discourse, but at what expense to our experience here? This thing doesn’t seem ready for real-world testing yet. I feel like this was rolled out prematurely.
Or perhaps have some other way to tell the AI “You’re hallucinating and/or drunk, try again” to get it to revise its summary?
Perhaps instead of making the summaries available via a special button on the post, they should be posts from an “AI_Summarizer” user that reacts to replies to its messages to update its summary (though implementing that in a way that’s hard to fool or game could be difficult).
So if a summary said that the main topic of the post was that “Joe Biden is the Republican nominee for President of the United States in 2024”, a reply could ask “Are you sure Biden is the Republican nominee?” and the summarizer could then reevaluate and revise its summary.
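A rough sketch of how that reply-driven revision loop might look. To be clear, none of these names come from the actual Discourse API; `SummaryPost`, `challenge`, and `naive_resummarize` are all hypothetical placeholders, and the “model” here is a toy that just flags a summary when a reply questions it:

```python
# Hypothetical sketch of a reply-driven summary revision loop.
# All names here are placeholders, not real Discourse API calls.

from dataclasses import dataclass, field


@dataclass
class SummaryPost:
    """A summary posted by the hypothetical AI_Summarizer account."""
    text: str
    revisions: list = field(default_factory=list)

    def challenge(self, reply: str, resummarize) -> str:
        """Treat a reply as a prompt to re-evaluate the summary.

        `resummarize` stands in for whatever model call would
        regenerate the summary with the challenge as extra context.
        """
        revised = resummarize(self.text, reply)
        if revised != self.text:
            self.revisions.append(self.text)  # keep an audit trail
            self.text = revised
        return self.text


def naive_resummarize(summary: str, reply: str) -> str:
    # Toy stand-in for the model: if the reply questions the
    # summary, withdraw it for regeneration rather than defend it.
    if reply.strip().endswith("?"):
        return "(summary flagged as possibly wrong; regenerating)"
    return summary
```

The audit trail in `revisions` matters: without it, a gamed bot could silently rewrite history, which is exactly the fooling problem mentioned above.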
This “thing” should never have been rolled out at all. It’s rather a sour pill for the site.
Oh! It’ll all suddenly make sense when I’m Ridiculary Functionary Officer of a firm that ships all the things based on a blanket license of MDPI research. (Or F1000?) Where can I tithe to Gearbox Studios anyhow, I finally got Tiny Tina’s Wonderlands (Chaotic Great Edition) and that’s the Writer’s Rx I can line up for properly I think.
Flossi’s sudden incorporation of gifs has me concerned/hopeful for the future of impressionist bot discourse. They are learning. But what are they learning?