Study finds that robots can pressure people to do risky things

Originally published at: https://boingboing.net/2020/12/17/study-finds-that-robots-can-pressure-people-to-do-risky-things.html

17 Likes

Pepper by Marlboro: Why did you stop smoking? Have another. Light up.

4 Likes

Perfected the thread in one post!

5 Likes

FWIW, I don’t know the authors, but I have used this task in research, and here is my take. I haven’t read the complete paper, but based on the second-hand coverage, the methods and conclusions seem problematic.

This task, the BART (Balloon Analogue Risk Task), was designed and has been validated to correlate with personality variables related to risk-seeking. But the typical subject in the task is risk-averse. The balloons pop, on average, at some level (depending on condition, maybe around 16 pumps), and that average turns out to be roughly the number of pumps you should aim for to maximize payoff. But the pops are salient, and people usually stop and collect their winnings earlier than the optimal point, maybe 8-10 pumps on average when the balloon doesn’t pop. Part of this is that the optimal strategy is not easy to see: you don’t know you are leaving money on the table unless you do the math.

Having run through this task myself many times, I have found that it is really hard to maximize the bonus, even though I have done the math. Even when paid real money, an average undergrad will not optimize, because the extra pay is not worth the additional effort. The subjects are really in the dark here, and I would expect any encouragement or training to have an impact. What the robot is most likely doing is getting them to behave closer to optimally, which in this task happens to mean being less risk-averse. I would be surprised if more than a handful of their subjects actually made it into genuinely risk-seeking behavior.
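
To make the “do the math” point concrete, here is a rough sketch of the payoff calculation. The numbers are purely illustrative assumptions on my part (a pop point uniform over 1 to 32 pumps, one penny per pump), not taken from the paper:

```python
# Rough sketch of the expected-payoff math in a BART-style balloon task.
# Illustrative assumptions (not from the paper): the pop point is uniform
# over 1..N pumps, each surviving pump banks one penny, and a pop forfeits
# everything earned on that balloon. With N = 32 the average pop is ~16 pumps.

N = 32  # assumed maximum pop point

def expected_payoff(k, n=N):
    """Expected pennies for a player who always plans to stop after k pumps.

    The balloon survives all k pumps with probability (n - k) / n, in which
    case k pennies are banked; otherwise it pops and nothing is earned.
    """
    return k * (n - k) / n

for k in (4, 8, 10, 16, 20, 24):
    print(f"plan {k:2d} pumps -> expected payoff {expected_payoff(k):.2f} pennies")

best = max(range(1, N + 1), key=expected_payoff)
print(f"optimal plan: {best} pumps, i.e. roughly the average pop point")
```

Under those assumptions, stopping at 8-10 pumps earns an expected 6 to 6.9 pennies per balloon versus 8 at the optimum, so the typical subject really is leaving money on the table, but you would never notice without working it out.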

Most people who use this task don’t understand these issues (I had a friendly argument with someone at a research conference this fall about exactly this), so a business-school prof like Yaniv Hanoch (the first author) probably didn’t know it, and the reviewers at this journal probably weren’t really evaluating it either. I’ve only read the press coverage, though, not the actual paper.

Anyway, beyond that, the appropriate control condition for understanding the social influence of a robot on risky behavior is not a robot standing by saying nothing, but a non-robot giving the same encouragement. I doubt the result would have been any different if it were a human, or just cues on the computer screen, or an invisible talking rabbit. Then the conclusion would be ‘robots giving instructions are no worse than humans giving instructions at moving undergrads toward optimality by reducing risk-aversion in a computer balloon-popping task’, which is so much more boring than ‘robots can pressure people into taking risks’.

8 Likes

… Yeah. I had to stop that about two minutes in; it was rage-inducing.

Well, I wonder. I suspect that a robot (itself a computer) telling a human to take more risks on a computer raises a question about how humans perceive the instructions. Are humans seeing the two electronic devices as somehow “in sync”, in the sense that they’re subconsciously thinking “this robot surely knows more about computers than I do”?

It’d be interesting to run the experiment with variously realistic simulacra of humaniform robots. Do humans get pushed into risk more by robots that look more human, or less human?

1 Like

Story checks out.

6 Likes


No wonder they took the robot’s advice: the only “penalty” is that their virtual balloon pops and they get no more pennies. Wake me up when the robot exhorts them to inflict pain on a live human.

If this interests you, look into the work on social robotics (like at the MIT Media Lab), which essentially asks your question, though usually about things like emotional response rather than risk. The answer seems to be that robots with eyebrows do better. To demonstrate that the form of the robot matters, you need to compare it to a legitimate non-robotic information source. That is possible (and relatively easy) to do, but apparently it is not necessary in this field, because they drew the conclusion without the evidence to support it.

1 Like

There’s a pandemic of this sort of thing, right now.

Edit: Not to ignore the other stuff you wrote. Much appreciated. MIT Media Lab is great, and the questions they’re asking are brilliant. The whole “teaching a robot when it’s OK to kill a human” question is a wonderful ethical rabbit hole.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.