Q&A on Non-Moderation Matters (Mechanics, How-To, etc.)

[jon snow college GIF]
Did you use :sparkles: to find out?
[game of thrones television GIF by Saturday Night Live]

That said, what are the variables that predict which topics I can use the sorcerer’s summary on, and which I can’t?

ETA:

@Falco1, in regard to the user-provided imagery, the model just pulled a deep link to one and put it in a summary:

2 Likes

TL3+ users can request new summaries. TL2- users can only see cached summaries that were requested by TL3+ at some point.
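To make that mechanic concrete, here is a minimal sketch of the gating logic in Python; the cache, the function names, and the LLM call are hypothetical stand-ins for illustration, not Discourse’s actual implementation:

```python
# Hypothetical sketch of the trust-level gating described above;
# not Discourse's actual code.
from typing import Optional

summary_cache: dict[int, str] = {}  # topic_id -> last generated summary


def get_summary(topic_id: int, trust_level: int) -> Optional[str]:
    """TL3+ may request a fresh summary; TL2 and below see cached ones only."""
    if topic_id in summary_cache:
        # Anyone can read a summary a TL3+ user already requested,
        # even if the topic has grown since it was generated.
        return summary_cache[topic_id]
    if trust_level >= 3:
        summary = generate_llm_summary(topic_id)  # hypothetical model call
        summary_cache[topic_id] = summary
        return summary
    return None  # nothing cached, and not allowed to trigger generation


def generate_llm_summary(topic_id: int) -> str:
    raise NotImplementedError("stand-in for the real LLM call")
```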

2 Likes

Okay, I clicked it.

I can’t really see the point of it, bar poking fun at it.
We get the gist of the topic from the headline, and although the summary can be entertaining, I don’t find it anywhere near as enlightening as simply scrolling through the topic.

Having said that, it’s early days, new tech etc., so I’m interested to see where this heads.

3 Likes

Ah, I see. Us lower deckers are not worthy of triggering the ship’s positronic system’s higher functions ourselves, even if it does say so on the display.

May I be so bold as to ask that, in accordance with common sense and possibly fleet regulations, a button state its true function? Can you put “show LLM-generated summary (possibly already outdated)” on the button?

/s, obvsl.

Thanks, anyway. I always enjoy novelty (too much).

Can you tell us what type of model you are using under the hood, and how much of an effort it is to train? Maybe at some point in a front page post? Could be fun to see some of the spectacular fails, and actually great fun to learn how to use a custom-trained model.

Currently BBS uses BAAI/bge-large-en-v1.5 (Hugging Face) for embeddings (Related Topics / Semantic Search) and mistralai/Mixtral-8x7B-Instruct-v0.1 (Hugging Face) for the LLM (Summary / AI Helper / AI Search), all hosted on our servers.
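For anyone curious how the embedding half fits together, here is a rough sketch using the sentence-transformers library; the topic titles and the query are made up, and this shows the general approach rather than BBS’s actual pipeline:

```python
# Illustrative semantic search with bge-large-en-v1.5; not BBS's pipeline.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

# Hypothetical topic titles standing in for the real corpus.
topics = [
    "AI summaries come to the BBS",
    "Post your unicorn chasers here",
    "Cyber-kidnapping: a weirdly 21st-century crime",
]
topic_embeddings = model.encode(topics, convert_to_tensor=True, normalize_embeddings=True)

query_embedding = model.encode(
    "How do the new topic summaries work?",
    convert_to_tensor=True,
    normalize_embeddings=True,
)

# Cosine similarity over normalized vectors; the highest-scoring topics
# are the most semantically related.
hits = util.semantic_search(query_embedding, topic_embeddings, top_k=2)
print(hits[0])
```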

We are using off-the-shelf already trained models, as training models was explicitly out-of-scope for us last year.

We may toy with it this year, but honestly, training costs a lot of money and effort, and there is a new SotA model every few days; it’s already hard enough to keep up even without any sunk costs in training.

3 Likes

OK, time to go to bed, since this left me giggling:

Note that <s> and </s> are special tokens for beginning of string (BOS) and end of string (EOS)
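If you want to see where those tokens come from in practice, here is a quick illustration using the transformers chat template for that model (assuming you have access to download the tokenizer); the tokenizer inserts the BOS token itself, and the model emits EOS when it has finished its answer:

```python
# Illustration of Mixtral-Instruct's prompt format; <s> is added by the
# tokenizer, and the model generates </s> when it finishes an answer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
messages = [{"role": "user", "content": "Summarize this topic in one sentence."}]
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)
# -> <s>[INST] Summarize this topic in one sentence. [/INST]
```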

2 Likes

Do we have the option or ability to opt out of the AI summarization of our specific posts, as well as the use of our posts as training data? I know it was mentioned that the LLMs are currently run locally, but I would like my posts not to be used as training data for future incarnations.

5 Likes

[Me Too Gus Cruikshank GIF by NETFLIX]

3 Likes

In the meantime, you can fill your post with wackadoo text to see if it throws off the AI.

2 Likes

Feedback:

I would like to opt out, if at all possible.

I find the new AI widget’s appearance when I highlight text to be very distracting, and it seems that the results frequently misattribute which members made what comments.

Then there’s incorrect mashup of info:

One summarization said that a frequent commenter in the Unicorn Chasers topic was suffering from “the plague.”

4 Likes

I understand that AI can be extremely energy-intensive. Do you have any kind of ballpark figures on what the introduction of AI summaries is doing in terms of the carbon footprint of the BBS?

6 Likes

Nope.

1 Like

I’m glad the kid is okay.
This is a weirdly 21st-century type of crime.

“Victim of cyber-kidnapping.”
Then this:

[AI]

And then:

All your replies have been kidnapped by the AI.
Tremble, yoomans…

9 Likes

[Schitts Creek Lol GIF by CBC]

6 Likes

All your comments “are belong to us?”

:thinking:

Not digging this new “addition…” like, at ALL.

10 Likes

[Zero Wing Art GIF by Bitwave Games]

4 Likes

Me either. I don’t exactly believe I consented to it, but probably did in the TOS. Can we opt out of it summarizing or being trained on our posts?

2 Likes

Considering that “AI” didn’t exist as it ‘does’ now back in 2016 when I joined BB, I’m pretty sure I didn’t willingly consent to be a crash test dummy.

Yeah, free site, we’re the product, they can do what they want, etc., etc.; it’s still not an example of that highly coveted ‘good faith’ we always hear about from TPTB.

Some of the ‘summarizations’ I’ve read thus far have been incredibly flawed, attributing comments to folks who didn’t make them and getting all sorts of other data incorrect…

That’s a good question, one better posed to IT.

@Falco1 @sam

Care to comment?

5 Likes

Ha! So, the AI function is mimicking a bunch of the bad faith posters?
What fun. I was just thinking, what we need ‘round these parts is more of that.
/s

5 Likes

Worse; it can’t seem to differentiate between posters who made the original comments and the instances when someone else quoted them in reply.

4 Likes