30% inaccurate
Yeah, inaccurate. But the reason I use hallucinatory here is because it’s not JUST inaccurate. It can be dangerously inaccurate. It can make up things that can get people hurt. Imagine a thread where we’re discussing serious topics such as religion or gay rights or Epstein’s sexual assaults and rapes and the model hallucinates someone saying exactly the opposite of what they said. “@anon85524460 said that Jeffrey Epstein was a good guy and was likely framed” when maybe what I said was “His friends first insisted that he was innocent or framed but we know they’re wrong if not involved.” Or worse, it could take MY statements, misread them, and assign them to a completely different party. Like… a sex abuse survivor?
Inaccurate means to me like, “Oops, guessed wrong.” Hallucinatory implies to me that it just completely made shit up. And honestly, I’d use lying, but it’s not a real living thing. It has no intelligence whatsoever. It’s a complex mathematical model. It doesn’t know what a lie is. It doesn’t know what the truth is. It just knows that the next word has a score of 90 and the other options don’t come close, so it found its next word.
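Something like this toy sketch, in spirit (a completely made-up three-word example with invented scores; a real model computes scores for ~100,000 tokens with a neural network and usually samples from them rather than always taking the top one, but the principle is the same):

```python
# Toy sketch of "picking the next word": the model only ever sees
# scores, never truth or falsehood. All numbers here are invented.
scores = {
    "innocent": 90.0,  # hypothetical model score for this candidate
    "guilty": 12.5,
    "framed": 8.0,
}

# Greedy decoding: take whichever candidate scored highest.
next_word = max(scores, key=scores.get)
print(next_word)  # -> "innocent", whether or not it's true
```

Nowhere in that process is there any check against reality, which is why “lying” doesn’t quite fit either.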
Yeah, we agree. “Funny” though that your explanation is exactly why I called out “hallucination” - that word seems to minimize the impact to me. I just think that it helps to keep the focus on exactly what you and I are discussing here - that LLMs are just another algorithm, and that algorithm frequently generates untruths. Inevitably some of those untruths will be harmful.
What exactly drives “related” in an article?
Currently the top related on “List of Epstein Associates Drops” is “The absolute joy of pressure washing”
Weirdly, I see the connection!
It does feel good to power wash all the gross dirt away.
Like, it’s NOT okay to ‘get it wrong’ that much; clear communication is important, and folks glibly acting as if it’s not is deeply disturbing to me.
I’m not a fan of the “related” AI. It looks like it dredges up years-old articles. If you just read a pleasant topic it can be okay, but if the article was “watch nazis kick kittens” it serves up similarly monstrous articles with no warning.
I’m just waiting for them to at least move the damn widget thingy; it gets in the way of editing comments.
Agreed, very strongly.
I’ve been thinking about this a lot for the past day or so, struggling to get my thoughts together enough to post.
I understand the folks of the BBS wanting to help Discourse develop their product. It’s a very strong platform, probably one of the best examples of forum software I’ve come across in all my years of Web-wandering. The moderation system in particular has contributed to the health and strength of this site, and I appreciate it. It’s natural to want to contribute to that platform in return for the benefits it’s given us all.
All that having been said, I’m not enthusiastic about this implementation of AI summaries.
I understand not wanting to wade through a thread of 50, 100, 200 posts before jumping in, and a summary could be helpful in that regard. But it seems a little contradictory to encourage participating in a manner that rewards a lack of interaction with previous posts. (Or maybe that’s just my brain making things more complicated than they have to be?) It seems to me that part of the value of this site is the variety of voices that post here, and the AI summaries I’ve seen so far do a very poor job of conveying that experience. That might change as the process develops, of course, especially if it’s more weighted toward popular posts or those that receive more replies than others… but it’s definitely not there yet.
What truly disturbs me is how incredibly badly it fails to correctly interpret the essence of people’s words. As @anon67050589, @anon85524460, and @anon72357663 pointed out above, the current AI is not accurately repeating what users are saying, and there is a distinct probability that in doing so it will harm users, readers, and the website itself, with emotional and possibly legal consequences.
(A potential example: the AI summary distorts a user’s direct quote of a BBS article on, say, Grimes, into something defamatory, and her lawyers take notice… resulting in hefty court fees and a shitstorm of negative press. Ouch.)
If an average user puts words in another’s mouth that they did not say, words that are nowhere near what that user was attempting to say, they are highly likely to get flagged by the Commentariat and have the offending post deleted by a Moderator. I don’t think AI should be given a pass to do that in summaries. That’s against the Guidelines.
And yes, I read the terms of service before I posted here (and many times since). While I acknowledge they still hold, this is something way, way above and beyond the scope of what they implied almost nine years ago. We’ve gotten opt-outs for other site changes throughout those years, from layout elements to fundamental components like DMs. With that in mind, I think an opt-out for AI should be at least seriously considered, even if it still winds up dismissed in the end.
I concur.
When I first joined BB, this is what I heard more than anything else as a n00b: “Have you read the whole article/discussion?”
(And so I did, from that point on.)
Fast forward 7 years, and now we’re actively encouraging people not only to read less, but to rely on really poor machine interpretations of what was discussed?
Again:
“Strange Days, indeed…”
It seems to me that the original purpose of building this particular commenting system was to make online conversations better and to give communities greater flexibility with regards to moderation… there are some great things to like about the platform overall. But at this point, I’m not sure that AI has really improved any of the core problems that this platform was meant to help deal with? It’s certainly not made online conversations deeper, or smoother, or more empathetic… the only thing that’s going to do that is if we continue with good quality moderation that’s even handed, and that’s got to be by people, not by an AI that most certainly does not have empathy…
To your point about the AI widget being a problem… this just happened to me when I brought up the window to type…
Not helpful!
Setting my own general dislike of current LLMs aside, the placement of that widget is very poorly chosen; it blocks the text that is being input.
I know this is the beta phase, but that’s one of the first things they ought to adjust - don’t make the feature so invasive.
Quite honestly, that seems to be the going template for so much of the modern internet… invasive… I’d love it if this platform could avoid that…
Gimmicks come and they go.
Can’t wait until this one fades away.
It’s going to be like the whole “hiring offshore contractors for software projects because they get it done quicker” fad. There’s going to be a great business in fixing things that LLMs set on fire. There’s going to be a LOT of things set on fire, and a lot of people paying a lot of money to desperately put the fires out.
Can “users” flag AI summaries? And can the feature be banned?