Today it meant that the kiosk screen in the elevator lobby started autoplaying whatever is on TikTok. (I meant to get video; maybe next time I’m there.)
Another time, at work, it meant that I locked myself in the sub-basement (where they keep the building’s UPS batteries, & with which I have no business). I’m still known for that, even among those hired years afterward.
“Day 47: accepted as subject in another ill-advised scientific experiment. Leaned in to it, to no avail; still no superpowers. Arms smell of burnt hair.”
It always seems hard to justify the jump to conclusions in studies like this. If you leave me alone with my thoughts, I’ll gladly sit with them for long periods. If you ask, I’ll probably say that I would try to avoid electric shocks. If you give me a button to safely administer shocks to myself, I’ll probably press it like a Skinner-box-trained rat. I like novel stimuli. The shock button gives me a new button to press and a shock to experience. My thoughts will still be there later, but the safety-calibrated electricity toy will be gone at the end of the study.
The first sentence is directly contradicted by the second, which says that over half of people rated the experience as enjoyable.
After skimming this embarrassing study (seriously, why would Science ever publish this?), I see that the “enjoyment” assessment asked whether they enjoyed their time, whether they were entertained, and whether they were bored, then averaged the three answers and called the result “enjoyment”. I spend hours a day alone with my thoughts, but if asked whether sitting quietly alone was entertaining, I’d say no.
This is all just directed reasoning. People in a psych experiment told to sit still didn’t take it seriously and fidgeted so “being alone with their thoughts for 15 minutes was apparently so aversive that it drove many participants to self-administer an electric shock.”
Of course I would have tried the shocker.
At least twice.
In technical use by statisticians, “average” is a generic term for any measure of “central tendency”. The most commonly encountered are the arithmetic mean and the median. Others include the mode, geometric mean, and harmonic mean, but they are less useful, and the last two can be undefined for some datasets.
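A minimal sketch of the distinction, using Python’s standard-library `statistics` module (the negative value in the last call is there only to show where the geometric mean becomes undefined):

```python
import statistics

data = [1, 2, 2, 4, 8]

print(statistics.mean(data))            # arithmetic mean: 17 / 5 = 3.4
print(statistics.median(data))          # middle value: 2
print(statistics.mode(data))            # most frequent value: 2
print(statistics.geometric_mean(data))  # (1*2*2*4*8) ** (1/5) ≈ 2.639
print(statistics.harmonic_mean(data))   # 5 / (1/1 + 1/2 + 1/2 + 1/4 + 1/8) ≈ 2.105

# Undefined case: the geometric mean requires strictly positive data.
try:
    statistics.geometric_mean([-1, 2, 4])
except statistics.StatisticsError:
    print("geometric mean undefined for non-positive values")
```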
This highlights a serious issue with much of the work in psychology: the test groups are often very small and composed of college students. The data is then extrapolated to everyone. But when others try to replicate the study, the results don’t match up. And a deeper look into the subjects often reveals confounding factors: when the (in)famous marshmallow study was re-examined, it was found that the results correlated much more strongly with the economic status of the subjects than with anything else.
It’s more pervasive than medical tests mostly being done on and calibrated for males.
That reminds me of my time at a Math department, where administrative staff had introduced “cake day” - on a random day of the month, at 11 there was free cake for everybody in the coffee room. The thinking was that even the most secluded Mathematicians could thus be coaxed to leave their offices, come to the coffee room, and actually talk to each other, such that collaborations and joint grant proposals could emerge…
Few people are as incensed by this as psychologists. We try SO HARD to convince people we’re real scientists… yet most of us only get to do our research on captive populations like college freshmen. Then so many of us make up bullshit generalizations from our little dissertations that it undermines our own argument for hard-science legitimacy. It’s a vicious cycle. I try counseling my research-oriented peers, but they tell me I’m a clinician with insufficient power.