Android's keyboard will no longer autocomplete "sit" with "on my face" thanks to me

Great minds think alike – I too have been hoping to hear back from the pizza.


Party pooper.


yep, i just went through a stack of android devices we have here for testing… the word sit doesn’t autocomplete or autosuggest to “sit on my face” on any of them so far either.

…specifically which android versions were affected for which languages and character sets? i’m guessing none.

oh you just mean the live google predictive suggest? lol. that is personalized, and the algorithm is related to google search suggest: it works off your personal history as well as a generalized aggregation of geographical and other target-group data.

funny/immature/racist/sexist/bizarre stuff is constantly popping up here and in search suggest and in autocorrect and is being added to the filters as fast as it pops up. it is there because people type that crap in. they get THOUSANDS of these requests daily and there are entire websites dedicated to capturing and laughing at them.

Not sure what the story is…has anyone here not had this experience? It is pretty common, my parents post ones they capture all the time.


This thread…




It is, but it is also the only way to handle it. How do you predict what someone is going to want to type when that varies based on where you live, what language you speak, your age, and so many other factors? it is impossible to curate relationships like that manually to the nth personal degree. they also want to include even more personal data, like what people have typed previously, their search history, their emails, and their purchase history, to glean how and what they are most likely to say. that gives the very best prediction possible.

the only real way to ever keep up with that huge, constantly changing surface is to mine it as it is used, and aggregate as much data as possible.
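a toy sketch of the kind of prediction-from-usage being described: a bigram model mined from a user's own message history, suggesting likely next words. the corpus here is made up for illustration; real keyboards use far larger models and many more signals.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the user's own typing history."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def suggest(model, word, k=3):
    """Return up to k most frequent next words after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# made-up "personal history" standing in for mined usage data
history = [
    "see you at the pizza place",
    "running late see you soon",
    "see you tomorrow at work",
]
model = train_bigrams(history)
print(suggest(model, "see"))   # -> ['you']
print(suggest(model, "you"))
```

aggregate the same counts across millions of users and you get the generalized suggestions; keep a per-user table on top and you get the personalized ones.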

manual filters are problematic because, again, “sit on my face” is a very specific term with a very specific context for a specific group of people; it isn’t universal. the same goes for what should be filtered and what is acceptable, which varies just as much, so they are training up AI filters that are as flexible as the data and the problem being solved. when is a cleveland steamer just a train reference, and when is it most certainly not? when is free oj the price of juice, and when is it about releasing the man? these contexts are so diverse and specific that if you travel there is nothing universal about them, even across a city/county/country.

some people might want to live in a sex positive culture and have their device not filter based on another cultures moral objections to sexual references.

very few things make the universal fixed filters.

the funniest is when translated sayings that are very common in one language or context mean something very different in another language or context.

cross cultural stuff is even more complex.

AI translate faces a similar challenge humans do when coming from another language…
Q: which of these is the sound you make with fingers and thumb, which is for hurting yourself, which two are for the knuckle sound, and which is sex related: a) finger banging, b) finger breaking, c) finger popping, d) finger snapping, e) finger cracking. (Extra credit: is this even universal for all english speakers? hint: NO)

things are still in early stages and it already is working far better than we might expect, but thousands and thousands of things slip through daily and have to be added to the training, which makes it better for the future.


I hope the next one is “tell me that you love me”



I know! Now I’m going to have to type out the whole damn thing when sending rude texts…


which is really hard to do when you aren’t using your hands!


We need to start training Markov chains on erotica… maybe someone could make money on a predictive keyboard for kinky texting.
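for what it’s worth, a word-level Markov chain really is only a few lines of python. a toy sketch with a made-up training string (substitute whatever corpus you like; the text here is illustrative, not a real dataset):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from `start`, picking a random successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# made-up corpus standing in for the erotica training set
corpus = "tell me that you love me tell me that you care"
chain = build_chain(corpus)
print(generate(chain, "tell"))
```

duplicated followers in the lists act as weights, so the generator naturally favors the most common continuations in the training text, which is exactly why the output reads like a parody of its corpus.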


I used to have a client named Alex Gorden; autocorrect would always suggest Ale Gorgon. I wonder why I have not heard from him in a while…

I have more fundamental issues with this approach. Leaving aside the questionable things people search for, the overlap between the contents of search text and what people want to say in texts seems like it would be near zero. It’s just the wrong data set to use as a basis for text prediction.

Oh, a whole lot of people are way, way ahead of you.
(Although it’s actually more sophisticated than a Markov chain: )
Actual Markov chain erotica, read out:


If I could text without using my hands, I’d have a line of suitors waiting, and would definitely not be on BB on a saturday evening…


with smart digital assistants and in-home smart devices, none of these datasets are silos anymore; they cross-pollinate with all the other datasets in a multitude of useful ways. it isn’t the wrong dataset, it is one piece of the larger dataset necessary to do this work accurately and digitally, which is why it is used.

in a world where typed text can be reconstructed from raw accelerometer data, or audio recovered from window vibrations, i’m always careful not to discount what data can be useful for. predictive text has been getting better by leaps and bounds recently for a good reason.

while search usage is only one part of the larger datasets used, to a clever data engineer it is incredibly useful.

direct content mining for prediction is only one of a multitude of ways it can be used; why stop there? it can also be used for user classification, speech patterning, term choice and preference information, interest information, dialect information, contextual branching, etc.


I am the Oak in Kays hand
the doctor will see you now

Can I get a predictive text module consisting entirely of Iain Banks’ AI ship names?


Stop kinkshaming me! smh


“Sit on my face and squirm” is easily in the top-10 songs by The Mentors!

El Duce was in rare form.