Sure okay, it’s a “hoax”… but oddly enough, if you give one person complete dominance over another, they usually do end up abusing that position of power.
Funny how that works.
“Zimbardo is both unethical and incompetent” is pretty much the consensus view within the psych research community these days.
Don’t forget about the Milgram experiment; it has some issues too…
It’s almost as if psychology is complicated.
Looks like just more toxic masculinity.
While that most definitely is a huge factor in a repressive misogynistic society like ours, it’s a sad fact that women can also be abusive when given the chance.
That’s one way false information gains traction. We are less critical of information that confirms our preconceptions. And then even potentially defensive of it if it turns out to be flawed.
The SPE may have made some accurate conclusions and I find Zimbardo persuasive in his arguments about diffusion of responsibility enabling abuse, but I really don’t know what basis to use to cite them other than as mere opinion since the SPE is so rife with bias and errors.
And to complete the trifecta, Phineas Gage might not have been as screwed up by getting a spike through his head as every first-year psychology textbook would have you believe.
Maybe, but even before I heard about the problems with the credibility of the Stanford Prison Experiment (which were kind of evident from the events even to inexpert me) and Milgram (which were less obvious), I was deeply skeptical; it seemed likely that enough subjects had guessed the ruse to skew the results. Yet I knew actual power really is corrupting. And I think what @Melizmatic is saying is not that the experiment confirms a bias, but that the evidence that power over others tends to corrupt is so overwhelming that the spuriousness of the experiment doesn’t actually call it into doubt.
Narrator: She knew that he could.
At the risk of going slightly askew the topic, yoinkity yoinky yoink. Athena I love The Warriors!
De nada; I’m surprised you don’t already have that one, it’s a classic.
Yeah, it’s a pretty compelling takedown. One guy decided to intentionally cosplay Cool Hand Luke and a convenient, simplistic narrative was formed around that.
This takedown is also consistent with a lot of other social research which shows “one bad apple does spoil the bunch”, meaning if you work with a total a-hole, that person will make the experience miserable for everyone else such that they eventually turn into a-holes too.
I’ve really been struck at how common bad apples are. Truthfully, I’ve been kind of haunted by my conversation with Will Felps. Hearing about his research, you realize just how easy it is to poison any group […] each of us have had moments this week where we wonder if we, unwittingly, have become the bad apples in our group.
Listen to the first 11 minutes. It’s quite compelling, the experiment they did, and it got repeated with multiple groups.
I don’t think so. I think the majority of people, when given a duty to care for others, do their best to care for those people. I feel like Stanford (and Milgram) tell us to think of everyone as a monster, instead of thinking that 5% of people are monsters and we don’t know which ones. Maybe that feels like a dumb distinction, but I just have the feeling that we had set our baseline idea of how bad other people are so low that the worst of the worst people have managed to step over it.
But the Milgram experiment is one of the world’s most repeated experiments, with multiple tests of its conclusions. Because the conclusions it presented were unexpected at the time, it was picked over extensively to try and refute them. That hasn’t happened:
Also, I’d be wary of the study linked in the main description. The body text doesn’t live up to the hyperbolic headline, and subsequent studies of the phenomenon illustrated in the first prison experiment have already moved the science on from that initial work.
Crass “DEBUNKED” headlines don’t mean that everything we thought we knew is now uncertain. Constant re-examination of the evidence and repeated observation of phenomena is how Science works, and it still does work, regardless of how many people want to see it fail and will use any excuse to kick it over.
I never said anything about “debunked”. It’s just an example where the results and methodology of the original experiment weren’t exactly how people remember them.
A lot of times famous experiments have their nuances erased to turn them into just-so stories. It’s good to remember their specific context and that they’re the start of a longer inquiry, as you say.
I thought I read at some point that it was exceptionally difficult to repeat simply because so many people have heard of it already. Or I would at least expect that a larger segment of the population has developed a certain distrust or suspicion of psychology experiments in general. But then, one mustn’t let one’s expectations get in the way of what the data says.
It’s really difficult to repeat now, both because it was so well known, and because it wouldn’t get past any experiment ethics board nowadays.
However, it was extensively repeated at the time, with multiple independent labs able to look at the factors that increased compliance (carrying out the experiment at a prestigious university, the experimenter wearing a lab coat when he gave the instructions) and those that decreased compliance (having other people refuse to comply, giving people a chance to “cheat” and not administer the shocks).
There was even one study that tested the hypothesis that people were seeing through the ruse. It was called “Obedience with a genuine victim”, and featured a dog as the recipient of the (safe, current-limited) shocks. People still complied.
The primary problem with Milgram, per the article linked by @tuhu, was that the majority of the participants realized it was fake, so they never genuinely believed they were harming anyone. That would hardly have decreased in the decades since the experiment was conducted, given how well known it is, so replicating it wouldn’t undermine that criticism unless we could somehow verify that the person truly believed they were harming another human being (and actually harming them, not inflicting transient pain on a person who is being paid for their time experiencing transient pain). But if we really convinced people they were doing real harm to another human, then the experiment would be unethical.
The point of Milgram being “wrong” isn’t that no one will follow orders when told by an authority to kill someone. A world where 25% of people are authoritarian followers vs. one where 80% are isn’t just a quibble over quantity; they’re two alien universes in which we have to make totally different assumptions about our fellow humans.