Hi Xeni - here's what I said when the Facebook study was first posted here:
While I understand your (and my) offense at the study, I also agree with @anon50609448. This study comes as little surprise. Before I even read any of the comments here, I did go check out the study. They’re looking for ways to tweak people - only ever so slightly - in an emotional direction. Their goal isn’t to send you into a spiraling depression - you’d never open your wallet. They just want proof that remote emotional tweaking works. (Don’t worry, I still think it’s horrible and manipulative.)
I did understand at the time that both parties were manipulated, but the focus was on manipulating targeted potential clients/purchasers. While not a lot of explanation gets thrown around, this was an effort to better target advertising online. Much advertising takes advantage of emotion, and the more remote the advertiser, the more aggressive the campaign. Right now companies are freaking out because there's no consistent relationship between the money spent on online advertising and the sales that result. They don't know how to spend online advertising dollars, and it's freaking them out.
The reason I'm re-explaining this is that the people running the test really didn't care about the people whose posts they altered. Those people didn't represent possible dollars in the study. They weren't the focus group, but they were a part of the study. That said, the testers' actions didn't make those users complicit, any more than a dropped text would make you responsible for “failing” to notify someone about a changed meeting time. Those users - in good faith - put up their posts thinking they were visible. Without their knowledge, the testers changed the visibility of those posts for only some viewers. Because the senders didn't intentionally block the posts themselves - or even know it was happening - they're not responsible at all. Only the service is.