Katie Surrence: The Problem With the Facebook Emotional Contagion Study

Tal Yarkoni argues here that ethical concerns about the Facebook emotional contagion study are misplaced, for four reasons: (1) Facebook only removed emotional content; it did not heighten or invent content; (2) the Facebook news feed environment is highly manipulated anyway, so further contrivances hardly violate what we should expect; (3) in the scheme of reasons to experiment with our news feeds, social science research is better than most; and (4) manipulation and influence are themselves a constant part of life. He titles his post “In Defense of Facebook.” I find it unsatisfying as a general defense of waiving informed consent for this study. Two researchers who were not Facebook employees participated in the study; it was approved by an IRB (according to Susan Fiske, quoted here, though of which institution is unclear); and it was published in PNAS. This was human subjects research, and it should be held to human subjects standards, not only to those set forth in the Facebook Terms of Service.

Later, in comments, he links to the HHS website and quotes the rules for when informed consent may be waived:

(d) An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:

(1) The research involves no more than minimal risk to the subjects;

(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;

(3) The research could not practicably be carried out without the waiver or alteration; and

(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

I don’t believe either requirement (3) or (4) is met by this study. It would have been entirely practicable to get real consent; the researchers could have done it via Facebook message. They could have framed the study’s aims vaguely in the consent form to avoid demand characteristics (the phenomenon of research participants behaving the way they suspect they are supposed to, or reacting against what they suspect they are supposed to do). They could have offered a form of compensation, such as a free promoted post or something similar. They could have debriefed participants, thanked them for their participation, shown them the previously excluded content, and explained what the aims of the study were and why they thought it was important, all via Facebook message. I think all of these actions would have made participants feel positive about their participation and encouraged them to take part in research in the future, via Facebook or other means, which is what human subjects researchers should be striving for.
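To make the logistics concrete, here is a minimal sketch of what such a consent-and-debrief flow might look like. Everything in it is hypothetical: `send_message` and `replied_yes` stand in for whatever messaging and reply-collection channel Facebook provides (they are not real Facebook APIs), and the message wording and condition names are mine.

```python
# Hypothetical sketch of a consent-then-debrief protocol run over direct
# messages. send_message() and replied_yes() are stand-ins, not real
# Facebook APIs; the wording and condition names are illustrative.
import random

def send_message(user_id: str, text: str) -> None:
    # Stand-in for a platform messaging call.
    print(f"[to {user_id}] {text}")

CONSENT_TEXT = ("Researchers would like to study how News Feed content "
                "affects what you post. If you agree, your feed may be "
                "filtered for one week. Reply YES to take part; you may "
                "withdraw at any time.")

DEBRIEF_TEXT = ("Thanks for taking part. During the study week, some "
                "emotional posts were held back from your feed; they have "
                "now been restored. The study asked whether emotions "
                "spread through social networks.")

def enroll(candidates, replied_yes):
    """Assign only consenting users to a condition; decliners are untouched."""
    assignments = {}
    for user in candidates:
        send_message(user, CONSENT_TEXT)
        if replied_yes(user):
            assignments[user] = random.choice(
                ["reduced_positive", "reduced_negative", "control"])
    return assignments

def debrief(assignments):
    """After the study, thank participants and explain what was done."""
    for user in assignments:
        send_message(user, DEBRIEF_TEXT)

# Example run with two hypothetical users who both opt in:
participants = enroll(["alice", "bob"], replied_yes=lambda u: True)
debrief(participants)
```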

 

The researchers themselves don’t appear to believe that the research qualified for a waiver of informed consent. Rather, they argue that the ToS was consent: “LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.” But the Facebook ToS don’t contain any of the elements of informed consent as defined by HHS. This would be a terrible precedent to set.
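For readers unfamiliar with LIWC, here is a minimal sketch of the kind of computation the quoted methods describe, written in a map/reduce shape: a mechanical tally of emotion words, produced without any person reading the posts. The tiny word lists are illustrative stand-ins for LIWC’s dictionaries; this is an assumption-laden sketch, not Facebook’s actual pipeline.

```python
# Minimal sketch of LIWC-style scoring in a map/reduce shape. The word
# lists are illustrative stand-ins for LIWC's dictionaries; this is not
# Facebook's actual code.
from collections import Counter

POSITIVE = {"happy", "love", "nice", "great"}
NEGATIVE = {"sad", "hurt", "angry", "awful"}

def mapper(post: str):
    """Emit (category, count) pairs for one post; no human reads the text."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    yield ("total", len(words))
    yield ("positive", sum(w in POSITIVE for w in words))
    yield ("negative", sum(w in NEGATIVE for w in words))

def reducer(pairs):
    """Sum the per-post counts across all of a user's posts."""
    totals = Counter()
    for category, count in pairs:
        totals[category] += count
    return totals

posts = ["What a great day, so happy!", "Feeling sad and hurt today."]
totals = reducer(pair for post in posts for pair in mapper(post))
print(f"{100 * totals['positive'] / totals['total']:.1f}% positive words")
```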

 

Susan Fiske is likely right that some of the heat of the reaction to this research reflects a larger sense of unease and frustration that our interactions with our friends are mediated by a corporate platform that we don’t understand and can’t control. When people say, “If you don’t like it, just don’t use the product,” it’s facile and a little unfair. Facebook attracts enough social energy that in some communities it’s a serious act of self-exclusion not to participate. But whether or not there’s a problem with the role Facebook plays in our lives, it’s in no one’s interest, not social scientists’ and not research participants’, for academic researchers to be perceived as (or to be) the same kind of shadowy force. Facebook exists to make us click ads. Social science research is supposed to help us understand ourselves better, to provide some benefit to humankind. Social scientists shouldn’t adopt corporate ethical standards as their own. (It is likely symptomatic of a problem that we seem not to expect businesses to have the ethical provision of real utility as a primary reason for their existence. This post from Adam Kotsko made me realize how much I’d absorbed the ideology that businesses exist to make money, as opposed to making things people can use.) Even if Tal Yarkoni is right that we are in “reasonable people can disagree” territory, and that an IRB wasn’t obviously wrong to approve the study, the retrospective knowledge that social scientists participated in an experiment with Facebook that made people feel uncomfortable (in however mild a sense, violated and paranoid) should be enough to tell us that something went wrong. Corporate data sets are a rich source of information about human behavior, and it would be a shame if they were off limits to academic study. The best way to avoid a backlash against collaboration between private enterprise and academic social scientists is for researchers to conduct themselves with a high degree of transparency and respect.

Finally, a word on their claims: many others have pointed this out, but the authors’ finding that their manipulation changed the number of emotion words used by participants is very different from finding that it changed participants’ emotional experience. The authors’ findings are interesting as they stand, but they overinterpret them. If they are actually interested in emotional contagion, they should do something to measure mood. One of the advantages of asking participants for consent would have been that the researchers could have asked participants directly about their mood; that is, they could have done a better study.
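To underline that gap, a hedged illustration reusing the stand-in word list from the sketch above: the study’s outcome variable is a word tally like `pct_positive_words` below, which a sarcastic post can score high on, while emotional contagion would require something like the direct mood self-report, which only a consented design could collect. The scale and wording here are hypothetical.

```python
# What the study measured vs. what a consented design could also collect.
POSITIVE = {"happy", "love", "great"}  # illustrative stand-in for LIWC's list

def pct_positive_words(post: str) -> float:
    """The published proxy: the share of 'positive' words in a post."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return 100 * sum(w in POSITIVE for w in words) / max(len(words), 1)

def mood_item(reply: str) -> int:
    """A direct self-report a consenting participant could give (1-7 scale)."""
    return max(1, min(7, int(reply)))

# A post can score as 'positive' while its author feels anything but:
print(pct_positive_words("great, just great, another happy monday"))  # 50.0
```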

Comments
  • DrDick

    As a cultural anthropologist, I find this a bit problematic, though rather harmless, as you say. There is also an exemption to informed consent for the observation of actions in public spaces that might apply in this case.

    • Ann Outhouse

      But this also raises the question of whether you’re in a public space when you’re sitting alone in your room at home reading internet feeds. Physically, you’re not in a public space, and I think when you’re in a private space, you have a reasonable expectation of privacy. Hooking up to the internet should not be interpreted to mean that you’ve revoked that expectation.

      • Johnny Sack

        This is tricky. I read Facebook on my phone in the subway all the time. I’m not sure that where you access Facebook should alone be determinative.

    • NonyNony

      Except that IIRC the exemption to informed consent in public spaces involves observing events that occur in public, not manipulating events occurring in public and seeing what kind of reaction you get.

      Not a social science researcher, but I have had to have research approved via the IRB process in the past for human/computer interaction studies I’ve been a part of. I am utterly appalled at the lack of ethics involved in the running of this study, and I’m really worried that some IRB somewhere actually thought this was a good idea. There is no reason not to have informed the participants that they were taking part in a study of some kind, allowing them to opt out and giving those who granted permission some token payment. That’s, like, the bare minimum of what a decent researcher should be doing. You wouldn’t even have to explain what the study is – the IRB process even allows you to tell white lies to misdirect, if you can make the case that participants’ knowing they’re in a study would change their behavior – but you at least have to let them know that they’re in a study and let them choose not to participate if they want out.

      • elm

        I agree with your interpretation of the public space exemption. As soon as a manipulation is done, you need consent.

        I’ve discovered that IRBs vary wildly in their stringency from university to university. Scholars have told me of projects that received clearance from their home institution’s IRB that my own would never approve. All IRBs are supposed to be judging proposals against the same standards, so I don’t know what explains this variation. Perhaps clearer guidelines are needed, or some sort of auditing system.

        Finally, when you deceive subjects during the study (and this is very common), you almost always have to debrief them afterwards and tell them the truth. This could easily have been done in the facebook study (a notification to all participants afterwards that they had been selected for the study and had had their newsfeed subtly manipulated), so it’s surprising that it wasn’t.

        • DrDick

          This is really the crux of the matter. There are a lot of grey areas, and this may be one of them, where institutions have a wide degree of discretion. My own institution tends to be fairly conservative, which has caused some real headaches for some of my colleagues doing research overseas.

          • elm

            In broad strokes, it seems to me that public university IRBs tend to be stricter than private university IRBs. I won’t speculate as to why since I have only anecdotal evidence to support the assertion.

            A lot of the debate on this research is gray area, as you say. The lack of a debriefing afterwards, in my opinion, is not. Even if one is being generous and saying that the ToS constitute informed consent, that is not the end of an ethical researcher’s obligation.

            • A counter-anecdote: the response of the University of Minnesota to a lawsuit regarding its Seroquel trial:

              However, when the university’s IRB officials were deposed under oath, they refused to admit that protecting subjects was their responsibility. “So it’s not the institutional review board’s purpose to protect clinical trial subjects, is that what you’re saying?” asked Gale Pearson, one of the attorneys representing Mary Weiss. “That’s true,” replied Moira Keane, the director of the IRB. Astonished, Pearson kept returning to the question, to make sure that she understood it correctly. Keane refused to budge.

              http://www.motherjones.com/environment/2010/09/dan-markingson-drug-trial-astrazeneca?page=4

    • DrDick

      I completely agree with the issues raised by others here. That was just a passing thought about what might apply here. I do believe there are situations where you can manipulate the situation in essentially non-intrusive ways that pose no significant harm and do not require consent (as with an installation or performance of some sort). As I said, this is pretty problematic anyway.

    • Even with the manipulation? I would be shocked.

    • justaguy

      Also a cultural anthropologist – it isn’t observation in a public space; they’re making an intervention on their subjects and observing the reaction. I also agree that it’s probably harmless in and of itself. I see it more as a harbinger of more potentially disturbing experiments to come. Data scientists, as I understand it, come from a statistics/programming background, not from sciences based on human subjects research. The size of the platforms they’re using gives them enormous reach to study thousands, or even millions, of people at a time. If they don’t temper that power with a strong sense of ethical obligation to their research subjects, bad things could happen.

      • Lee Rudolph

        I see it more as a harbinger of more potentially disturbing experiments to come

        that are already completed or underway, and that will never be made public because who at FB needs the hassle?

  • jim, some guy in iowa

    “emotional contagion” would make a good title for a play, I think

    • jim, some guy in iowa

      failing that it could be a lost Stones disco song

      “i will be your troll
      of many names
      sharing with you
      my
      emotional con-taaagion”

  • steve

    This is a very thoughtful post. Thanks.

  • elm

    I know that political scientists and other social scientists have been increasingly using Facebook to do experiments and, while I understand the temptation, it makes me nervous. The informed consent issue is important (and I agree that the ToS for facebook serve as rather dubious grounds for informed consent), but there is also the issue of proprietary information vs. scientific replication.

    At one conference presentation of facebook-related research, a prominent political scientist who was doing the research was asked by the audience a number of questions about his methods and sample and whatnot and said he could not answer those questions because of confidentiality agreements with facebook. (I know other researchers who have worked with facebook who have said that they were not bound by such things, so I’m not entirely clear on the details here.) If facebook is putting restrictions on public disclosure of methods and data, then that is a serious problem from a scientific perspective.

    The opportunity exists for facebook, and other corporate web services, to provide great venues for important and interesting research. Hopefully they will develop practices that better match scientific standards of accountability, consent, and transparency than those that currently exist, so that researchers can use these resources without ethical qualms.

    • Johnny Sack

      That’s absolutely laughable. If they’re not going to disclose methods or data then we should completely discount the research.

    • bluefoot

      How can they not disclose methods? If methods, sample, etc. are not available, then the study should not be published. That’s total bullsh*t, from the scientific perspective. And WTF is up with the journals (like PNAS) allowing this sort of crap?

      As someone in clinical research, IMO there absolutely needs to be informed consent, and participants should absolutely have the option to opt out at any time. How hard would it have been to ask for consent via Facebook, explaining that if they agree, for dates X-Y, the contents of their feeds may be altered for research into Subject Z, but they won’t know if they are in the “test” group or the “placebo” group?

      • elm

        As I said, this happened in one case I know of, not all. And it wasn’t “I can’t tell you anything about my methods,” it was more, “I can’t reveal some details of the sampling methodology because it is proprietary.” As I said, it’s problematic, but it probably does not apply to the article in question (you’re right: PNAS would be unlikely to publish opaque sampling methods) and it’s not as egregious as I think I made it sound.

        • And…I think it’s a bit too simplistic to discount such research entirely. *Some* information out of those areas is better than no information. The information has to be suitably discounted and interpreted, of course.

          Obviously, if the methodology is totally opaque, then the utility of the research is extremely constrained. But if you said, “It was a uniform sample over the population of ‘active’ Facebook users, but we can’t reveal exactly how the sample was constructed, or what the precise definition of ‘active’ is, due to confidentiality restrictions,” then, eh, I’d still read that paper. I might wonder about “activity,” and that would be an issue for any confirmatory work. But it certainly doesn’t preclude confirmatory work, or even replication. (Even fairly strong forms of replication…the same manipulations on a different population or sampling method.)

          • elm

            I don’t disagree. Lack of transparency is a serious problem but it doesn’t mean we should discard the research entirely, though we should be even more skeptical of it than usual until it can be confirmed.

            For me, the more important question is how we get facebook and other corporate services to change their practices so that we (academic researchers) can use them ethically.

            From an instrumental point of view, scientific publications refusing to publish studies using resources that do not meet standards of consent and transparency and whatnot might be a way to push some of them to meet these standards. (Probably not facebook, as they probably don’t care much about publishing their studies; smaller outfits that might generate some of their revenue stream from collaboration with academics might change, though. I haven’t thought this through much, though, so I’m sure I’m missing all sorts of issues.)

            • From an instrumental point of view, scientific publications refusing to publish studies using resources that do not meet standards of consent and transparency and whatnot might be a way to push some of them to meet these standards.

              Not even a little bit, in my experience.

              (Probably not facebook, as they probably don’t care much about publishing their studies; smaller outfits that might generate some of their revenue stream from collaboration with academics might change, though. I haven’t thought this through much, though, so I’m sure I’m missing all sorts of issues.)

              Nope. Direct conditions of funding, yes. Anything else, forget about it.

              In the end, there will always be some sort of venue in which you can publish. So if nominal publication is all you need, then fine. Plenty of venues offer industry tracks which are certainly “good enough”.

              I’m really not so bothered by obscuring the sampling technique as long as this is properly noted and properly treated downstream. Given how awful most people are at either of these (I also blame myself!), I’m not so sanguine.

      • How hard would it have been to ask for consent via Facebook, explaining that if they agree, for dates X-Y, the contents of their feeds may be altered for research into Subject Z, but they won’t know if they are in the “test” group or the “placebo” group?

        Bingo. There could have been a trivial popup (that most people would have accepted, given the way people generally behave with popups, especially on Facebook) that could have acted as informed consent and satisfied everyone.

        I’m shocked that the researchers were allowed to get away with this, and I’m even more shocked that the editors of the journal allowed it to be published. Informed consent isn’t a huge issue in the area I work in, but ethical sourcing of samples is a huge problem. I’ve seen papers rejected because samples were collected under suspect circumstances or locations weren’t shown; this is far worse. This paper should have been rejected without review.

        • bluefoot

          Also, how many minors were included in their nearly 700K subjects? If the selection was random, a non-trivial number of minors who have Facebook pages would also have been included in the study. Informed consent for use of minors in research studies is particularly scrutinized.

          • +100, +15, or -27, depending on our algorithm. /s

            This is an enormously important point!

            Without disclosure of the methods and IRB approval, we have no way of knowing if they held to the bare minimum of ethical standards.

            The link that Emily posted below has some really strong points along these lines.

  • Emily68

    Science Based Medicine weighs in on Facebook.

    • This is a really good link, thanks for posting.

      • elm

        Yes, thanks much. I learned a lot over at that link, including that there actually may not have been real IRB clearance for the study, which might put the University-affiliated researchers involved in the study in some hot water.

        Also, PNAS seems to have erred in publishing the study, as the submission did not meet the journal’s own standards for disclosure and ethics.

      • bluefoot

        That is indeed an excellent link. Thank you.

        Besides digging into the issues at hand, I also like the way he points out the differences in culture regarding human subjects.

  • The fact that Kramer had a hand in all stages of the study makes it less than credible, to put it mildly.

    Also, there’s no way to replicate the study without FB’s total cooperation.

  • elm

    Oh, one more note: Katie, please keep posting here at LGM. I didn’t read your theater posts because that isn’t really my thing, but this post was great and I hope the front-pagers let you post on a wide array of topics more frequently.

  • I don’t see this study as any different from intercepting someone’s mail/newspaper/television/radio/email and selectively editing the content while monitoring for a response, which would never be permitted without IRB approval.

  • (Okay, my linked-up comment got stuck in moderation, so here’s a de-linked version)

    According to an update on Monday to another Atlantic article,

    the experiment was conducted before an IRB was consulted. Cornell professor Jeffrey Hancock—an author of the study—began working on the results after Facebook had conducted the experiment. Hancock only had access to results, says the release, so “Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.”

    So, no IRB approval, because the research had already been completed—which is just so handy!

    By the way, have you seen the Monkey Cage post on this in which the authors argue that “IRBs are not guardians of ethics; they are protectors of institutional legal liability”? They may have a point in terms of how IRBs have come to be used (see, for example, the response of the U of Minnesota to outcry regarding its Seroquel trial and the suicide by beheading of one of its subjects), but as someone who works in bioethics and has sat on the Canadian equivalent of the IRB (an REB), I find this attitude not merely troubling, but a corruption of the point of human subjects research protections.

    • INDEED!

      In point of fact, of course, IRBs cannot ensure that research is conducted (or even designed) ethically (just as peer review does not ensure that a paper is of acceptable quality). Just passing an IRB review doesn’t mean that the research is ethically OK.

      However, not being passed by an IRB is, or should be, completely unacceptable. Indeed, I’d argue that it’s a separate ethical failure. (I.e., even if the research would have passed a stringent IRB, the fact of not securing appropriate approval is itself a kind of negligence.)

      • elm

        Agreed entirely.

        The Monkey Cage authors are splitting semantic hairs: yes, from the Universities’ perspectives, the IRBs are limiting legal liability, but they are doing so by ensuring that research is done in ethical ways that do not expose the university to adverse legal action. (I also think there’s variation, both across and within IRBs. Much, if not most, of the decision making on the IRB is made by faculty, who surely have different views about the role of the IRB.)

        • We use legal action as the stick-like carrot: If you don’t have IRB approval, you individually bear the liability (including defense).

          I’m not happy about that. I much preferred the affirmative punishments at Maryland (e.g., not graduating, thesis refused, sanctions, etc.)

  • So FB took an earlier look at contagion, but I assume that, because they merely monitored use patterns, it would not have required an IRB.

  • herr doktor bimler

    That broader cognitive/emotional priming area of research seems to feature frequently at Retractionwatch for irreproducible results.
