
Facebook’s Massive Human Experiment on Emotions


“The results show emotional contagion. For people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred. These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting... the first experimental evidence for massive-scale emotional contagion via social networks...”
- Adam Kramer et al., "Experimental evidence of massive-scale emotional contagion through social networks"

Facebook’s human experiment on emotional contagion is morally reprehensible. It is alarming that the company knowingly and surreptitiously manipulated its users’ news feeds to dial their emotions up and down. In conducting this experiment, Facebook treated its users like lab mice. The claim that the experiment produced ground-breaking findings does not justify its unethical methods.

In its defense, Facebook reasoned that users gave their informed consent to participate in the experiment when they clicked the “I Agree” button upon opening a Facebook account. In Facebook’s view, that single click was tantamount to informed consent to this unethical experiment.

It is important to note that Facebook’s data use policy is akin to a click-wrap agreement: users must accept all of its terms if they want to create an account on Facebook, with no room to negotiate any term, draconian or otherwise. It is also common knowledge that users hardly ever read a website’s terms and conditions, and even when they do, few understand what the convoluted, legalistic language actually means. It is therefore quite a stretch to claim that clicking the “I Agree” button to open a Facebook account amounted to informed consent to participate in this unethical research. On the contrary, users had no idea that Facebook had chosen them as unwitting participants in an experiment that manipulated their news feeds to dial their emotions up or down.

I also disagree with Facebook’s rationale that “allowing participants to opt out are best practices in most instances”. Expecting people to opt out, when they do not even know that they have been opted in by default, as was the case in this research, unfairly shifts the risk, burden, and harm onto users. This is not best practice under privacy law. On the contrary, the best practice is to assume that people have opted out of research (or email subscriptions, third-party promotions, and the like) until they have knowingly and intentionally opted in. In this case, Facebook grossly violated its users’ privacy by not only assuming that they had opted in, but also giving them no way to opt out. Tragically, Facebook’s users have become captives of Mark Zuckerberg’s empire for world domination in the Age of Big Data.