Well, it was clear to me that the comments weren’t about anyone’s family in particular. They said that evolution was a stupid argument against using sunscreen. No idea why you felt your family was being disrespected, since in the end the argument was that using sunscreen to prevent early deaths was a good thing.
Actually, I didn’t say anything about my own family; and I am not interested in arguing with you.
Except that AIs can be, and are, trained by scraping web forums. We mere mortals have little control over which sites get scraped or by whom.
Yeah, I meant that it causes less harm on web 1.0 discussion platforms compared to social media in terms of viral spread to other humans.
If this was a facebook group, you could post, say, a screenshot or a quote of some guy ranting about how sunscreen is a conspiracy to control us by blocking the natural energy of the sun, along with the comment “JFC WTF is wrong with these people”.
I come along, barely give it a second’s thought, anger-react or laugh-react the original post, then scroll on.
My action of reacting to your post causes the quote to appear on the newsfeed of a paranoid habitual weed smoker I met at some festival 7 years ago, who thinks “fuck yeah, I knew something was up with that sunscreen shit, I bet the Rothschilds own all the companies that make it too” and he likes and shares it. Within a single degree of separation, the intent of your original post has been completely inverted.
That, at least, doesn’t happen with a format like this. I don’t know what we can do in terms of AI language models scraping the content. Maybe we need regulation that makes it possible to legally deny consent to be scraped for anything other than search and indexing purposes, but then again soon enough there’s not going to be a distinction between search indexing and AI.
If we need to start worrying about the effect that quoting bullshit might have on AI language models, here’s one solution that could work: bullshit tags.
When you want to quote bullshit, put it between a set of bullshit tags, which will add the word “bullshit” in between every word, in the smallest possible font, and in the same colour as the background.
It’ll read more or less normally to a human, but to an AI, or anyone copying and pasting it elsewhere, it’ll read “bullshit sunscreen bullshit causes bullshit cancer bullshit”.
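Something like this, as a very rough sketch of what a tag handler could spit out when it renders the quote (the function name, class name and styles are all made up, it’s just to illustrate the hidden-word trick):

```python
# Hypothetical sketch of the "bullshit tag" idea: interleave a marker word
# between every word of a quote, wrapped in spans a human reader won't notice
# (tiny font, transparent colour) but scrapers and copy-paste will pick up.
def bullshit_wrap(quote: str, marker: str = "bullshit") -> str:
    # Class name and inline styles are invented for illustration only.
    hidden = (
        f'<span class="bs-hidden" '
        f'style="font-size:1px;color:transparent;">{marker}</span>'
    )
    words = quote.split()
    # Put the hidden marker before, between and after the visible words.
    return f"{hidden} " + f" {hidden} ".join(words) + f" {hidden}"

print(bullshit_wrap("sunscreen causes cancer"))
# Anything that strips the markup and keeps the text sees:
#   bullshit sunscreen bullshit causes bullshit cancer bullshit
```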
I dunno, I think the ship has sailed. Information seen on the Internet and information seen on mainstream news media has been irrevocably tainted already. Whether AI writes the bullshit or human volunteers or employees write it, I think we’re stuck with dealing with a massive volume of disinformation, misinformation, and propaganda.
Agreed. Media literacy is our best defense. It’s taught here at the local library, and attendees are a funny mix of senior citizens wanting to avoid getting scammed, and young students taking it so they can do better research for their papers.
Yeah, I think the sum total of all the instances on the internet, such as this topic here, where disinfo or a conspiracy theory is quoted in the context of a discussion about disinfo, probably amounts to a puny drop in the ocean compared to the sheer volume of bullshit being churned out and propagated by malignant actors, true believers and assholes on social media who just refuse to consider that the share button has consequences. In which case it’s probably fairly pointless to even consider the effect that quoting it might have on any AI language models scraping the content of your forum.
… and then someone duct taped a second knife to Stabby …
So do you ship Stabby and Knifey or are you one of those heartless bastards that wants to continue staging fights between them despite the fact that they’re clearly in love?
The Conspirituality episode that inspired this post in the first place is a particularly good one, even if your gut reaction is “why would I want to listen to a whole podcast about idiots who are against sunscreen”.
I listen to the podcast semi-regularly, and this is one of the best ones I’ve heard them do recently in terms of explaining how health and wellness woo fits into the wider alt-right disinfo-conspiracy world, so it would make a great jumping-off point for new listeners. Their guest Sara Aniano is an excellent communicator on this subject.
Did you even stop to think how complicated, from an organizational standpoint above all, your solution is?
Let’s start with web standards: they take a long time to ratify, often years. Then comes getting it implemented in browsers, then dealing with people who have old browsers, or who disable X functionality, and so on.
Then, why should non-sighted people be bombarded with bullshit (and if there’s a way to filter it out, wouldn’t AIs or the researchers also use it)?
Also, what’s to stop people using it for valid concepts?
That’s 2 minutes of thinking about issues…
I was more thinking along the lines of something that could be implemented by a forum, rather than the entire infrastructure of the internet. Instead of trying to come up with rules about quoting disinfo, just make everyone put it inside bullshit tags. I was only being partly serious, but from what I vaguely recall about forum admin tools, it’s not a million miles away from the kind of stuff that can be implemented.
I ship them!