Originally published at: https://boingboing.net/2014/06/30/facebooks-massive-psychology.html
…
Schadenfreude, I haz it.
I get frustrated with the “I have nothing to hide” school of privacy. It’s not just what they’re pulling. It’s what they’re pushing based on that.
I think I’m going to have to fall into the “shrug” camp for this one. Facebook is constantly running this experiment on everyone who uses the service, but instead of trying to figure out if they can make people happy, they are trying to figure out if they can make people buy things. And instead of publishing the results, they are keeping them to themselves.
Grimmelmann says “I predict, or at least I hope, that the Facebook emotional manipulation study will [shock the public and raise awareness] for invisible personalization.” I hope so too. But that’s the point of studying it: to understand whether and how it works. Now we have real evidence that invisible personalization affects people.
Is this why one day nothing was on my feed besides videos of scruffy-looking puppies and Sarah McLachlan’s “Arms of an Angel” playing on repeat?
The requirement for informed consent can be waived in some cases:
Section d.
Now, it’s certainly up for debate whether Facebook’s study should have qualified for a waiver, but evidently the IRB that reviewed the study felt that it did.
Granted, I personally don’t think it should have qualified, and at the very least, Facebook should have been required to provide information to participants after the study concluded per point d.4 under the waiver section.
It’s obvious that showing people depressing shit will depress them further. And since you think this is all a big snore, I hope the depressed person who is manipulated into killing him or herself by Facebook’s “harmless” experiments is someone you care about. In psychology this is hugely unethical. Manipulating people to buy shit is one thing; it’s questionable enough as it is. But to manipulate people’s emotional states without their knowledge or consent is a travesty. Wake up.
What I want to see is a social media app that is owned by the membership and that is controlled by the membership. Each member will have total control of the data they post, period.
The truth is Facebook is a simple app. A few programmers could cobble together the same functionality (without all the manipulation) in a few weeks. It’s a joke. It’s past time we create an open source social media app to replace this garbage.
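Just to put a rough shape on that claim, here is a toy sketch (nothing below is a real project; the names are made up for illustration) of the core post-and-follow mechanics in a few dozen lines:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical toy model of the basic "post and follow" functionality.
@dataclass
class Post:
    author: str
    text: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ToySocialNetwork:
    def __init__(self):
        self.posts: list[Post] = []
        self.follows: dict[str, set[str]] = {}  # user -> users they follow

    def follow(self, user: str, target: str) -> None:
        self.follows.setdefault(user, set()).add(target)

    def post(self, author: str, text: str) -> Post:
        p = Post(author, text)
        self.posts.append(p)
        return p

    def feed(self, user: str) -> list[Post]:
        # Plain reverse-chronological feed: every post by someone the user
        # follows, newest first -- no ranking, no filtering, no manipulation.
        followed = self.follows.get(user, set())
        return sorted(
            (p for p in self.posts if p.author in followed),
            key=lambda p: p.created,
            reverse=True,
        )

if __name__ == "__main__":
    net = ToySocialNetwork()
    net.follow("alice", "bob")
    net.post("bob", "Hello, world")
    for p in net.feed("alice"):
        print(p.author, p.text)
```

The hard parts are scale and moderation rather than the feature set, but the basic user-facing behaviour really is that small.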
I see, so emotionally manipulating people without screening them for mental illness would be a big shrug at your university. That doesn’t mean this is okay, it means your university is an unethical piece of trash.
If we use our common sense, instead of our legalese here, we can see that emotionally manipulating people and studying their responses without consent is a hugely unethical act. I don’t care what the Dr. Mengele wannabes at HHS say. Honestly, if we’re going to use the decisions of government to decide what’s ethical and what’s not, we might as well give up on ethics completely. I mean seriously: starting unnecessary wars that kill hundreds of thousands of innocents; CIA mind control, MKUltra, the Tuskegee experiments, the housing bubble and the government response to it, etc. The list of government’s inhumanity to its own citizenry, and to humanity generally, is eons long. It’s time to stop lawyer-balling everything and start honing our own ethical senses.
I do not see how any family member or partner of a Facebook user who harmed or killed themselves during the experiment does not have full grounds to sue the institutions and FB into the ground.
The article makes a good case that this was very unethical and was done with nothing like the legally required informed consent.
Personally I applaud Facebook for taking the time to engage experts in a way that adds quantifiable data to an otherwise murky new area of social interaction.
My other favorite example of this is Riot and their game League of Legends. Rather than just throw shit at the wall and see what sticks, they have been treating every player interaction as a data point, and making slight adjustments to improve the community. It works too.
Perhaps Facebook’s approach is less noble, but any time things are done rigorously and publicly, it’s probably a net positive.
I never said it was ethical. I was responding to the title of the article and the linked post, which suggested it was likely illegal; it is not.
I personally think it was unethical and that a waiver of consent should not have been approved by the IRB.
They didn’t get IRB approval for the data collection (only the analysis) according to this:
But the article itself is rather revealing.
And yet the PNAS paper lists all three authors as having been involved in the design of the study, so it’s curious that there wasn’t approval for the data collection.
I think the negative press over this serves only as a chilling effect for the release of data. Facebook is going to continue to do this sort of thing, because it’s a core part of their business model, but next time they won’t tell anyone and they won’t release any data publicly. How is that better? This is a rare example of a corporation going out on a limb and really giving their users/customers some valuable insight and instead of making the most of it, the public turns around and spits in their face for something that they did 2 years ago. Now they’ll have to keep all of this kind of data heavily under wraps to avoid being sued in the future and we won’t hear about any of it. Good job, internet.
Case 1: a bunch of self-proclaimed “social media experts” decide to make some semi-random changes to Facebook. These results are never made public, but people still complain about the fact that Facebook’s layout keeps changing.
Case 2: Facebook consults with academic experts in their fields to perform a focused and limited experiment and then shares the results with the rest of the world.
Case 1 happens ALL THE TIME and we don’t even know it. Case 2 suddenly pops up and people become unhinged.
Personally, I find Case 2 to be far more ethical, even though Case 1 is well within their legal rights. I think this is a great opportunity to encourage large companies to continue to engage in fairly public research (which is something Facebook has been famous for when it comes to Comp Sci).
There’s a difference between testing layout changes, or adjusting what posts I see based on who I interact with more, etc., versus specifically hiding positive or negative posts to see how it affects the “emotional” content of my posting habits.
And if they are performing unreported tests like the latter all the time, I absolutely think that is also unethical, whether reported or not.
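To make that distinction concrete, here is a minimal sketch of the difference (the word lists and scoring below are invented for illustration; the actual study used established text-analysis software, not anything like this): an ordinary feed shows everything, while the experiment-style feed quietly drops posts of one emotional tone.

```python
# Hypothetical illustration only: a feed filter that silently hides posts
# of one emotional tone. The word lists and scoring are made up.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(text: str) -> int:
    words = {w.strip(",.!?") for w in text.lower().split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def plain_feed(posts: list[str]) -> list[str]:
    # Ordinary behaviour: show everything your friends posted.
    return list(posts)

def manipulated_feed(posts: list[str], suppress: str) -> list[str]:
    # Experiment-style behaviour: silently drop posts of one tone.
    if suppress == "negative":
        return [p for p in posts if sentiment(p) >= 0]
    return [p for p in posts if sentiment(p) <= 0]

posts = ["I love this wonderful day", "What an awful, terrible commute"]
print(plain_feed(posts))
print(manipulated_feed(posts, suppress="negative"))  # the negative post vanishes
```

The user sees a normal-looking feed either way, which is exactly why the second case feels different from an A/B test on layout.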
Psychology as a field has some pretty severe ethics problems (clinical and research both). It’s an important field of study but I always worry about psych researchers who won’t even acknowledge this and act like you are currently acting when such ethics problems are mentioned.
Try to bear in mind that just because some complaints are made in ignorance, it does not follow that all are, or that ethics-based objections to particular lines of research can simply be ignored, or that the complainers can be bullied out of the conversation by saying they “have no clue” or similar.
Genuine question: Facebook have been controlling what appears on our news feeds for a long time, largely in order to make more money (as discussed elsewhere). What makes this any different? Is it that they manipulated us emotionally rather than financially? Didn’t the algorithm that decides what goes in the news feed already do that?