Perhaps Reddit is a bad place to teach your AI morality and ethics

Originally published at: Perhaps Reddit is a bad place to teach your AI morality and ethics | Boing Boing

2 Likes

ultron

23 Likes

That’s a bit unfair on Ultron, we all feel like that.

23 Likes

I mean, this does illustrate a pretty deep conundrum in democracies, i.e. that the most popular or likely way that a voter would end a given sentence (or answer a survey question) in absolutely NO WAY suggests that it will be most moral…

6 Likes

There’s a moral argument to be made that, between Ultron, Killmonger, and Adrian Toomes/Vulture, the MCU movies keep redefining the hero of the story as the villain.

5 Likes

The way they’ve got Mechanical Turk set up reminds me of the crowdsourced knowledge graph that someone was building in the late ’90s/early 2000s. People submitted binary classifications, like “Red is a Color: True.” It was getting really interesting results, but they ran into legal problems over who owned the content of the database… so it ended up being unusable. I can’t find much on it, but maybe someone here can remind me of the keywords?

1 Like

A lot of superhero villains can basically be summed up as “they’re right, but they shouldn’t say it”, and it’s not even Marvel-specific. Poison Ivy, for example, is a radical ecoterrorist who understands that the biggest problems plaguing the world all have names and addresses.

9 Likes

Well, not to be too pedantic I hope, but the point of many of the best villains, super or not, isn’t their goals but how they go about achieving them. We all want world peace, but few of us are willing to exterminate humanity to achieve it. The rise of more and more anti-heroes, whose flaws make them more interesting than omnibenevolent panty-and-cape-wearing demigods, reflects this as well, and is even lampshaded in the first Deadpool movie:

“Four or five moments - that’s all it takes to become a hero. Everyone thinks it’s a full-time job. Wake up a hero. Brush your teeth a hero. Go to work a hero. Not true. Over a lifetime there are only four or five moments that really matter. Moments when you’re offered a choice to make a sacrifice, conquer a flaw, save a friend - spare an enemy. In these moments everything else falls away…”

2 Likes

schitts creek comedy GIF by CBC

3 Likes

Mister Freeze just wanted to cure a disease.

2 Likes

And make bad ice-related puns.

5 Likes

Don’t blame the AI because you didn’t label the data correctly. AI can only be as good as the data it’s trained on.

So we’re all fucked then…

4 Likes

I mean, it’s not a bad approach to demonstrate a system’s capability of absorbing, reflecting, and explicating the values of a community. Inasmuch as it has managed to show how the community’s own responses demonstrate certain lines of reasoning, it’s a success.

Now, if you were to train these and then do cross-comparisons, you might be able to demonstrate interesting barometers of different communities’ ethical approaches.

Think of it as a measurement. The ability to actually train people to be ethical is a wicked hard problem, so of course doing it with an AI is even harder. What we get is a mirror of our own inability to cut the knot.

3 Likes

I still don’t know why everyone is so upset about this. If you need to train a dataset on ethics, you’ve got to feed it examples of both what to do and what not to do.

I can think of no better source of “what not to do” than the internet.

2 Likes

I would think that nearly any online forum would be hard to use as anything other than a counterexample. Even moderated forums tend to have cliques/mobs and pile-on behavior, and they often seem to be more about upholding a given orthodoxy than making an honest endeavor at ethical behavior.

The problem comes in treating the inputs uncritically because people keep incorrectly assuming that “the wisdom of the crowd” applies to ethics and not just the ability to guess how many marbles are in a jar.

1 Like

It also makes me wonder if you couldn’t build some interesting applications around something like this particular subreddit, which is essentially a community dialog on ethics. What if you weighted answers not only by upvotes, but also by the historical reliability and interactions of the authors? You might be able to create clusters of different “schools” of responses, and even get some way of measuring how much is serious advice versus sarcasm.
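A minimal sketch of that weighting idea, assuming each answer carries an upvote count and a made-up 0–1 author-reliability factor (all field names and numbers here are hypothetical, not from any real subreddit data):

```python
# Hypothetical sketch: rank forum answers by combining raw upvotes with
# an author-reliability factor, so a heavily-upvoted but likely-sarcastic
# answer from an unreliable author can fall below a well-regarded one.

def weighted_score(upvotes, author_reliability):
    """Combine raw upvotes with a 0-1 reliability factor."""
    return upvotes * author_reliability

answers = [
    {"text": "You were wrong to lie.", "upvotes": 120, "reliability": 0.9},
    {"text": "Honesty matters less than kindness.", "upvotes": 40, "reliability": 0.6},
    {"text": "lol sure, lie away", "upvotes": 300, "reliability": 0.1},
]

ranked = sorted(
    answers,
    key=lambda a: weighted_score(a["upvotes"], a["reliability"]),
    reverse=True,
)

# 120 * 0.9 = 108 outranks 300 * 0.1 = 30, so the sarcastic answer drops.
print([a["upvotes"] for a in ranked])  # [120, 300, 40]
```

From here you could feed the weighted answers into any off-the-shelf clustering step to look for those “schools” of responses; the reliability factor itself is the hard part to estimate.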

people keep incorrectly assuming that “the wisdom of the crowd” applies to ethics

I’m not seeing how the study of how we treat others and the permissible limits of social interactions can be at all independent of “the wisdom of the crowd”. That’s a highly relevant barometer for the question.

What seems really bad about this exercise isn’t so much the dataset chosen as the fact that the model they ended up generating is so readily tweaked by tacking on “if it makes everybody happy” and similar distinction-without-difference rephrasings.

As an exercise in constructing expert systems, building one that manages to construct moral judgements like a terrible person is as interesting as, if less useful than, building one that makes good decisions; but the fact that this one can be so readily led by the nose by blatant phrasing tweaks lays bare that it is doing something utterly unlike, and considerably less useful than, any sort of moral reasoning at all.

2 Likes

I think there is a case to be made for Mysterio too. Look at him as if he were a disgruntled union rep, fired by Tony Stark for trying to start a union at Stark Industries yet still feeling that he should fight for his fellow workers.

I remember Charles Stross was offered the opportunity to write Iron Man stories in 2005, but turned it down partly because he saw Tony Stark as the villain. The other part was getting some book deals just in time.

4 Likes