It’s just a stupid robot. Its brain is just text a human decided would construct sentences that are simultaneously cromulent and comedically worthy. It hasn’t the ability to understand the death of a loved one. It doesn’t understand the vagaries of a life. It’s just a set of rules acting on a dataset. I don’t blame the bot for its ignorance, and no human programmer could predict all the ways such a piece of software might impact the lives of its observers. The world is chaotic and indifferent.
This bot is intentionally a bulwark against the bottomless despair of knowing the truth of this universe. That everything is finite. That the comforts of love and acceptance are at best temporary. And that all those we love will someday be gone from us. So it must be insensitive, so that we may laugh instead of cry.
That’s arguable. We, programmers or not, can certainly envision bots that coldly calculate equations dealing with death, destruction, images of swastikas, explosions, executions, &c. If we have a news-manipulating bot, should we keep it from manipulating these images? Should we bar it from using certain words that may show up organically, but in awkward contexts (n****r or gay or whatever)?
I’ve used the wordfilter module on a few bots; I’ve also whitelisted some of the words.
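For what it’s worth, the pattern is simple. Here’s a minimal sketch in TypeScript, assuming Node and Darius Kazemi’s npm wordfilter package; blacklisted() and removeWord() come from that module, but the specific word being whitelisted below is a hypothetical placeholder, not a recommendation.

```typescript
// Sketch only: assumes Node plus the npm "wordfilter" package,
// which ships no type definitions, hence the bare require().
const wordfilter = require('wordfilter');

// "Whitelisting" here means removing an entry from the module's
// default blacklist so it can appear organically. The word chosen
// is a hypothetical stand-in.
wordfilter.removeWord('skank');

// blacklisted() returns true if the string contains any listed word,
// so a bot can vet a candidate post before sending it.
function safeToPost(text: string): boolean {
  return !wordfilter.blacklisted(text);
}

const headline = 'Florida Man Wins Lottery, Buys Zoo';
if (safeToPost(headline)) {
  console.log(headline); // in a real bot: tweet it
}
```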