
Facebook’s Unethical Research Project

Facebook has apparently conducted a “massive” psychological experiment and published the results in the Proceedings of the National Academy of Sciences (PNAS). The study sought to manipulate the emotional responses of 689,000 Facebook users by controlling the types of content that appeared in their feeds, in order to see whether emotional states can be “transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.”

When a person signs up for a Facebook account, they check a box stating that they have read and agree to Facebook’s Data Use Policy. Buried within those pages of text is a little nugget explaining that one of the purposes for which Facebook will use the data it collects from users is “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

PNAS’s policies require authors to obtain informed consent from study participants. The study’s lead author, a Facebook employee, claims that the Data Use Policy constitutes informed consent, and it seems PNAS accepted that argument. This is a highly disturbing move because, as far as I can tell, the Data Use Policy does not come anywhere close to the kind of informed consent researchers are usually required to obtain when conducting research with human participants.

Since the release of the Belmont Report in 1978, researchers working in the US with human participants have been required to follow certain ethical guidelines. These guidelines were necessary because of horrendous experimental studies like the infamous Tuskegee syphilis experiment. The Belmont Report laid out a set of principles that include respect for persons, beneficence (avoid harm, do good for participants), and justice (fair treatment of participants). In order to follow these principles, the Report requires that researchers obtain informed consent, provide an assessment of risks and benefits, and carefully and ethically select participants. Typically, the ethical standards of a proposed study are evaluated by an Institutional Review Board (IRB), usually a committee of researchers within the institution who weigh the risks and benefits of proposed research.

What’s unethical about this research is that it doesn’t appear that Facebook actually obtained informed consent. The claim in the paper is that the very vague blanket data use policy constitutes informed consent, but if we look at the typical requirements for obtaining informed consent, it becomes very clear that their policy falls way short. The typical requirements for informed consent include:

  • Respect the autonomy of individual research participants
  • Fully explain, in clear, jargon-free language, the purpose of the research that people are agreeing to participate in
  • Explain the expected duration of the study
  • Describe the procedures that will take place during the study
  • Identify any experimental protocols that may be used
  • Describe any potential risks and benefits of participation
  • Describe how confidentiality will be maintained
  • Include a statement acknowledging that participation is completely voluntary, that a participant may withdraw at any time for any or no reason, and that a decision not to continue participating will incur no loss of benefits or other penalty

The very broad Facebook Data Use Policy does not do these things. It is a blanket “data collection and usage” policy that they claim covers any and all research they conduct. But their data policy does not meet the requirements for informed consent laid out by the Belmont Report. Participants were never notified of their participation in this particular study. They were never given the opportunity to withdraw from participation. They were never told about the risks and benefits of participating. They were not told what procedures or experimental protocols would be used. In essence, the Facebook Data Use Policy does not respect persons, and there was no oversight of whether the study was beneficent and just.

Just as an example of why this could be problematic, let’s say this research was conducted on a person with severe depression. If that person does not know that they are being subjected to an experiment about human emotions on Facebook, where their emotional state is being intentionally manipulated in positive and negative ways, they do not have the opportunity to opt out of the research. A person with severe depression could potentially have very different risks for participating in such research than someone who does not experience depression. Without knowing they are being manipulated, a person with severe depression could be harmed by participating in this research because it could worsen their depression. How is this person to know based on Facebook’s Data Use Policy that such an experiment is being conducted?

Or, if we want to look at biomedical research, which is the kind of research that inspired the Belmont Report, Facebook’s policy is analogous to going to a hospital, signing a form that says any data collected about your stay may be used to help improve hospital services, and then unknowingly becoming part of a research project in which psychiatrists intentionally piss off everyone around you to see if you also get pissed off, then publish their findings in a scientific journal rather than using them to improve services. Would you feel that you had been informed about being experimented on by signing that form?

To be clear, I am not against Facebook collecting data about usage of their services in order to improve or try out new services. But that’s not the issue here. They went beyond benignly collecting site usage data to actively experimenting on people, producing scientific knowledge based on the manipulation of people’s emotions without their consent. This kind of thing would never pass IRB ethics approval, and it’s really disturbing that a journal like PNAS would publish these findings with such a lack of ethical oversight.

h/t to biogeo for tipping us off to this story.

Updated June 28, 2014 @ 20:42 Eastern Time: Here’s a link, again from biogeo in the comments, to a criticism of the actual methods and conclusions of the study for those who may be interested.

Will

Will is the admin of Queereka, part of the Skepchick network. They are a cultural/medical anthropologist who works at the intersections of sex/gender, sexuality, health, and education. Their other interests include politics, science studies, popular culture, and public perceptions and understandings of anthropology. Follow them on Twitter at @anthrowill and Facebook at facebook.com/anthrowill.


43 Comments

    1. Interesting. That is a less-than-convincing reply, and I would have preferred if they included the IRB protocol number. And, even if it was approved by an IRB panel, I would still argue that it shouldn’t have been and that informed consent should have been required.

        1. Wow, according to a link in that post, the study was funded with federal money, so it DEFINITELY needed to undergo IRB approval. It’s starting to look more and more like whatever IRB approved this dropped the ball.

          1. Wow, that’s bizarre. The paper says absolutely nothing about funding sources. Is it not standard practice in the social sciences to acknowledge your funding in published work?

          2. Will, I basically agree with your post. But I think the bigger failure was on the part of the federally funded researchers and the journal (PNAS), which have rules forbidding this sort of uninformed consent, or the publishing thereof.

            I mean, Facebook might have done something unethical (and I think they probably did), but they didn’t, so far as I know, break any rules — they don’t hold themselves up as an ethical research institution. It seems to me that the researchers and the journal, however, DID break rules.

          3. Yeah, I really want to know what IRB this passed. It’s Facebook, so it’s not obvious. Most research goes through the IRB of whatever university the researcher works for, in my experience. I get the feeling they went specifically looking for an IRB that would pass this. In my encounters with the IRB, they would never, EVER, pass anything that wasn’t crystal clear on as many details as possible, with a debriefing afterward.

          4. It was Cornell’s IRB. From something I read late last night (and don’t have a link, sorry!), it was proposed as being no different from FB users’ everyday experiences with the site. That, in addition to Facebook’s blanket “research” policy, was apparently good enough for the IRB.

            I’ve also now seen conflicting reports about whether or not the US Army was involved in funding this study. Some reporting has said it was, others have said that the money went to one of the researchers but not to this actual study.

            But, it really shouldn’t be this difficult to find these things out. This is why there should be full disclosure and transparency in these sorts of things.

  1. I guess to me it just seems too much like common knowledge that:

    a) facebook collects data on your behaviour and you explicitly agree to that (and a lot more) by signing up

    b) they use algorithms to determine what appears in your feed, and these algorithms can change for various reasons. Which you also explicitly agree to when signing up.

    c) they use subsets of their userbase to test changes in interface and technology, and this is not usually opt-in unless it’s a major change. Again explicitly agreed to.

    Let’s come up with a parallel situation: let’s say Netflix changes their suggestion algorithm (or something else in the way their interface works) for a subset of users, to see if it affects binge-watching behaviour (and therefore bandwidth use). It turns out it does. Is it unethical for them to publish this data because they didn’t inform the users that they were testing this?

    1. It’s almost as if you didn’t read the post where I specifically outlined what constitutes informed consent before replying…

      First of all, “common knowledge” is not sufficient to constitute informed consent. It doesn’t matter if people “should” or do know that FB is generally collecting data while they use the site. That does not constitute informed consent, and I explicitly explained why in the post.

      Second, they were not testing this for internal use for their interface, it was an experiment conducted to publish in a science journal, which is why it is required to meet a higher threshold of ethical standards.

      In response to your final “parallel situation” (which I do not think is parallel at all), yes it is unethical for them to publish the data because they did not acquire informed consent from participants for that specific experiment. Again, as I outlined in the OP, part of what makes it informed consent is that participants are informed that they are part of a specific experimental protocol.

      1. “Second, they were not testing this for internal use for their interface, it was an experiment conducted to publish in a science journal, which is why it is required to meet a higher threshold of ethical standards.”

        Seconded. A company like Facebook can do experiments to improve its business practices; that’s fine, and obviously it would be silly to object to that. But the ethical standards for academic scientific research are and should be different. In this case, if we are to take the researchers’ hypothesis at face value, they manipulated users’ Facebook feeds specifically to see if it would alter their emotional states. That’s not a business practice question, that’s a psychological research question. Furthermore, they did so without consideration of their subjects’ potential mental health status or ability to tolerate emotional distress. Again, if we are to take their research paradigm at face value (which we probably shouldn’t — I have criticisms about the study itself as well), this means that some individuals were selected to receive almost exclusively (90% I believe was the figure) negative updates from their friends for a full week. Personally, I’m troubled by the idea that some individuals suffering from depression or social isolation may have been subjected to this treatment without even the most minimal approval or monitoring by a mental health professional.

        1. Binge-watching is also not purely a business practice question. I chose it as an example precisely because it could have an effect on users’ emotional states.

      2. Or it could be that I read and understand what you said, but disagree that situations like this call for the same degree of informed consent as participants in other kinds of studies.

        Given that no parallel is absolutely perfect, what precisely makes the situation I described so different from what Facebook did (regardless of the fact we disagree on the matter of informed consent being necessary)?

        1. It is not about a “degree” of informed consent, Dan. There was no informed consent. It’s been explained now by both me and biogeo why that’s a problem for this particular study.

          And biogeo’s reply right above yours explains why your parallel does not work. But I’ll be more specific: it would depend on the goal of the research. If Netflix did such a study and the goal was to adjust recommendations to ease bandwidth usage, then that would be fine. If Netflix did such a study and the goal was to produce scientific knowledge about what sorts of things make people binge watch more shows on Netflix, then informed consent would be required because of the different goal. The goal of this FB research was to publish in a scientific journal, not to improve their services.

          It doesn’t really matter whether you like it or agree with it; there are certain ethical guidelines in place that are meant to prevent research participants from being harmed in the course of producing scientific knowledge. Informed consent is one of the bedrocks of these guidelines. Researchers should always err on the side of not causing harm to participants. Based on what this study describes, it does not appear that the researchers worked according to the established guidelines for human subjects research.

          As someone who has gone through the IRB process, I have been required to get informed consent just for conducting 45-minute interviews with people about being med students. I had to list as a potential harm that informants may experience discomfort talking about their experiences in med school. I wasn’t even experimenting on people and I had to go through more ethics approval than this study did. That’s what’s so freaking disturbing about this.

          1. It is still possible to raise the question of how such guidelines should be handled, or whether there are certain kinds of exceptions to be made in cases like large-scale data mining of opt-in systems. After all, we already have exceptions to informed consent in cases where the knowledge of the experimental protocol would screw up the results (i.e. when subjects think they are being tested on one thing but are instead being tested on something unrelated or unconscious). It could be that norms will eventually shift to allow for limited kinds of experimentation on users of such systems with only blanket consent as part of the agreement. Especially if people start to view things like aggregated social networking behaviour more like census data and less like experimental observation.

          2. After all, we already have exceptions to informed consent in cases where the knowledge of the experimental protocol would screw up the results (i.e. when subjects think they are being tested on one thing but are instead being tested on something unrelated or unconscious).

            In that case, people still know they are participating in experimental studies, even if they don’t know exactly what the study is trying to detect, and they will still be informed of possible harms. When informed consent is waived or altered by an IRB, it requires that there are minimal risks to participants (which in this specific case is certainly debatable) but also requires that participants are debriefed after they participate, which did not happen in this FB study.

            It could be that norms will eventually shift to allow for limited kinds of experimentation on users of such systems with only blanket consent as part of the agreement.

            I hope not, that would not be following the spirit of the Belmont Report to respect people’s autonomy and right to not participate in research they disagree with. What is to stop researchers from claiming blanket consent agreements in other arenas? And, why should we treat FB differently simply because of the size of its sample? How about instead of changing the system that generally works pretty well to protect participants, we not give FB special privileges to skirt the rules?

          3. “It could be that norms will eventually shift to allow for limited kinds of experimentation on users of such systems with only blanket consent as part of the agreement.”

            I suspect this is probably true, and if managed properly this could be done in a manner that would be consonant with our current ethical standards of informed consent. Essentially this would create a broad class of “minimally intrusive experimental methods,” which users of online systems could opt into. If done right, I think this would be a great idea: online social networks are an incredible treasure trove for scientific research. But a number of issues would have to be considered very carefully. A crucial one is that this would have to be a true opt-in system: no requiring it as a condition of using a service, no treating it as a “default” for which consent is obtained through a clause buried in the user agreement. The scope of any potential experimental methods would have to be clearly defined when a user opts in, in plain language, according to the standards for informed consent that Will described, and any potential modifications to the blanket agreement would require re-obtaining consent. In practice, I think this would be a pretty light burden on researchers, and would satisfy the demands of informed consent.

          4. @biogeo: Eh, I would still be uncomfortable with that, but I can see how, like you said, if it was done properly and had the proper oversight, it could probably work out okay. But that’s probably based on knowing how blanket user agreements are treated now.

          5. So my responses to those issues would be:

            What is to stop researchers from claiming blanket consent agreements in other arenas?

            By clearly delineating the kinds of experimentation that are permitted on users of opt-in services with anonymized data aggregation, and continuing to subject them to IRB oversight. Any change to the status quo is not necessarily a slippery slope back to Tuskegee. That’s why the issue would be raised for consideration in the first place.

            How about instead of changing the system that generally works pretty well to protect participants, we not give FB special privileges to skirt the rules?

            These are not quite the same thing. It just happens to be FB in this case, but it could be anyone operating an opt-in service with an up-front agreement to limited user-data experimentation.

            …right to not participate in research they disagree with

            This will depend on how strong that right is determined to be. For the most part that right isn’t protected in the case of census data, and other kinds of things that form part of the public record. It might not be inherently problematic in other circumstances either.

            …participants are debriefed after they participate

            It could be up to a review board to determine if such a debriefing is necessary in light of the specific blanket agreement to which users have consented.

          6. By clearly delineating the kinds of experimentation that are permitted on users of opt-in services with anonymized data aggregation, and continuing to subject them to IRB oversight.

            Why go to all that trouble when the IRB is already set up to handle such studies in a way that minimizes harm to participants? It’s creating more bureaucracy where none needs to be just to make it easier for researchers.

            This will depend on how strong that right is determined to be.

            Are you seriously making the argument that people shouldn’t have an absolute right to opt out of participating in experimental research?

            For the most part that right isn’t protected in the case of census data, and other kinds of things that form part of the public record.

            We are talking about psychological experiments conducted on people without their knowledge, not the collection of demographic data.

            It could be up to a review board to determine if such a debriefing is necessary in light of the specific blanket agreement to which users have consented.

            Debriefing is required in cases where people participated in research without knowing what the research was for or about in order to minimize harm to participants. If informed consent is altered, debriefing is usually required. If informed consent is present, debriefing may not be required. IF a system was set up that people opted into a blanket informed consent form AND it was properly regulated, debriefing would likely not be necessary because they would have obtained informed consent.

            I’m just really curious why you are so willing to go with not informing people when they’re participating in research. Considering the history of such things, I would think you’d be more concerned that people are doing this sort of thing. You seem really cavalier about it. It really is not all that difficult to set up proper informed consent protocols for online research, as biogeo has demonstrated. Why go to all the trouble of making a whole new system with new rules and regulations for informed consent when the current ones work just fine when they’re followed?

          7. And besides, Dan, all that is kind of irrelevant to the situation at hand. Such a system does not exist, so let me bring it back to the point. The people who conducted this study should have obtained informed consent from each participant, not through some vague blanket user agreement that references research twice. The fact that it is so damned difficult to find out how they were funded and which IRB did the review, and that they list no demographics (were minors involved? they are not able to give consent to participate in research without parental approval) is really disturbing. This is not how social science research should be carried out.

          8. sigh

            I’m just really curious why you are so willing to go with not informing people when they’re participating in research.

            I am willing to think about what it means to participate in research in situations where users already understand that their data is collected and analyzed and have consented to such. I am willing to consider where lines can and should be drawn in new kinds of research that rely on large opt-in systems of a kind that did not exist only a few years ago, let alone when the foundations of informed consent were developed. But obviously it’s rhetorically useful for you to ignore the conversation we just had here about that (and in which you hesitantly agreed with biogeo who essentially made the same argument as I did), so fine.

            Why go to all the trouble of making a whole new system with new rules and regulations for informed consent when the current ones work just fine when they’re followed?

            Because this case shows that this area needs attention and clarification? We are looking at an example of something that apparently got past an IRB and got published with the current system in place. So we can either make blanket statements about FB being unethical in doing this study (which is fine so far as it goes, but then what?) or we can think a little more about how the system might have a blind spot that allows for and encourages this kind of behaviour, especially given current shifts in cultural norms surrounding things like user data and systems meant to influence user behaviour. Is it really that awful to use an incident like this as a starting point to imagine ways in which a few small changes could provide better guidelines for IRBs in dealing with a relatively recent phenomenon and thereby help ensure better compliance in future research?

          9. Dan, when you say this: “I am willing to think about what it means to participate in research in situations where users already understand that their data is collected and analyzed and have consented to such,” it sounds to me like you’re talking about broadening access to user data for the purpose of observational study. I’m with you there — there are still important ethical considerations, and I don’t think burying language in the middle of a long user agreement that it’s understood no one will read constitutes adequate consent, but to the extent to which people online are acting in a public space, the hurdle for purely observational research should be low.

            But this isn’t that. Here, Facebook applied a specific experimental treatment to a number of its users, for the purposes of basic science research, with the stated intent to see if it would alter their emotional state. This is an experiment, not just observational research, not just a census or a survey. I’m perhaps more prepared than Will to consider that the standards of informed consent for the kind of experiment these researchers performed might be lower than for other kinds of experiments, but I still firmly believe that informed consent is critical for any experimental intervention, no matter how minor.

          10. I had a whole response written up, but I am really tired of arguing over this, and biogeo said pretty much exactly what I wanted to say in fewer words.

          11. Sure. But there’s a difference between saying “everything Facebook did in this case was totally perfect and immune to any criticism” (which by the way I am not saying, however cavalier one might read me as being) and saying that this case shows that there is actually a real issue here that allows for a certain range of interpretations.

            Clearly all parties involved in the publication of this paper somehow found agreeing to the TOS to be some kind of approximation of informed consent, which may indeed involve a bit of “everybody knows they are doing this stuff.” It’s worth considering, if an IRB were to veto this model, exactly what steps would be necessary and sufficient to fix it.

  2. Great write-up, Will.

    sasha_feather, that’s very interesting that they apparently did pass IRB review. (I am surprised that the paper did not include a statement to that effect. Most papers in my field include a sentence like “All work was conducted according to standards X under the oversight of Y ethics committee.”) I am quite certain it would not have at my institution: my colleagues who do human subjects research have to jump through many more hoops even to do simple survey studies online. I agree with Will: in my opinion this is a failure on the part of the UCSD IRB to uphold the most basic standards of ethical research practice on human subjects. However, I don’t buy the argument from Dr. Fiske (the PNAS editor who approved publication of the study) that PNAS shouldn’t second-guess the local IRB here, and I don’t think PNAS is absolved of responsibility because the researchers’ IRB approved it. The total lack of informed consent is quite blatant in this case, and it should have been obvious that this study did not meet PNAS’s ethical standards, regardless of what UCSD’s ethics board decided.

    (Full disclosure: I am a published author in PNAS, though not of a paper containing original data.)

  3. I also can’t help but point out that ethics issues aside, the results of the study are less than impressive. Evidently there are problems with the tool they used to analyze posts for positive and negative emotional content. I don’t know enough about it to comment, but the writer at that linked article indicates it wasn’t designed for texts as short as the typical Facebook post and may give biased measures.

    What I can say is that the effect sizes they’re claiming are on the order of 0.1% — yes, a tenth of one percent. The problem is, in a study of the size they ran (over 600,000 “participants”), your data set is so overpowered that it becomes highly sensitive to artifacts of the study design. (For Skeptics Guide listeners, Steven Novella talks about this all the time when criticizing studies — and he’s very right to.) In this case, where what they’re measuring is the frequency of certain “emotional words,” I’m frankly rather shocked that the effect size isn’t considerably larger than what they found. I would have expected much larger effects caused by people simply being primed to choose certain words over others when writing posts based purely on what they have recently seen their friends writing, regardless of their own personal emotional state.

    So my own personal take-away from this study is that they provided evidence that the phenomenon of emotional contagion through social networks is extremely weak, if it exists at all. Which, considering my ethical concerns with the lack of informed consent and lack of mental health monitoring, is a good thing: it seems the authors probably didn’t manipulate anyone’s emotions much, after all.
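    To put those numbers in perspective, here is a minimal sketch (using assumed, illustrative figures in the rough ballpark of the study, not its actual data) of how a ~0.1 percentage-point difference in emotional word use comes out as highly statistically significant once each group contains hundreds of thousands of users, even though the standardized effect size is tiny:

    ```python
    # Illustrative only: the group sizes, means, and SD below are assumptions
    # chosen to be roughly the scale of the Facebook study, not its real data.
    from scipy import stats

    n_per_group = 340_000   # roughly half of the ~689,000 users (assumed split)
    mean_control = 5.3      # % of words coded "positive", control feed (assumed)
    mean_treated = 5.2      # % of words coded "positive", manipulated feed (assumed)
    sd = 5.0                # between-user standard deviation (assumed)

    # Two-sample t-test computed from summary statistics
    result = stats.ttest_ind_from_stats(
        mean1=mean_treated, std1=sd, nobs1=n_per_group,
        mean2=mean_control, std2=sd, nobs2=n_per_group,
    )
    cohens_d = abs(mean_treated - mean_control) / sd

    print(f"t = {result.statistic:.1f}, p = {result.pvalue:.1e}")  # p far below 0.05
    print(f"Cohen's d = {cohens_d:.3f}")                           # ~0.02: a negligible effect
    ```

    With samples this large, even effects far too small to matter in practice will clear the conventional significance threshold, which is why the effect size, not the p-value, is the number to look at here.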

    1. Oh there are lots of problems with it, including a kind of dubious premise so far as word choice correlating to actual mental state goes. Also, how do you control for something like people deciding, say, not to post good news (or to understate their excitement) out of consideration for all their friends who seem to be having a bad time of it lately? Though I suppose the low effect probably has much to do with people not specifically caring who their precise audience is going to be when they post a status (normally that’s what someone would do first, and THEN read the feed, unless something caught their eye on the feed itself).

    2. Thanks for the link. Interesting criticisms of the study’s methods and conclusions indeed, which makes the fact that it was published all the more troublesome.

  4. Incidentally, for anyone reading this who wants to see an example of what informed consent in an online study should look like, some linguistics researchers at MIT have a vocab quiz game online. If you click “Next” to initiate the quiz, you’ll first be presented with a fairly standard consent form.

      1. There used to be another one on that site called “Philosophical Zombie Hunter,” where your goal was to read short sentences and decide whether their subjects were philosophical zombies, which seriously cracked me up. Sadly I don’t see it there any more, so I’m guessing the study ran its course.

          1. Yes… I definitely didn’t spend any time on that site while I was supposed to be writing mine…

  5. With geolocation metadata from the Facebook app, we could conduct all kinds of interesting experiments. The number of times a user goes to a liquor store would be an interesting place to start. How about we identify users who stop buying booze and start going to AA? We’ll isolate them from their friends on social media (maybe even post some hostile messages under their name), and then show our sober subject lots of ads and posts in their feed promoting alcohol use. Let’s see how fast we can drive them back to alcoholism. We already have their consent. Let’s see what happens!

  6. I got another one. Let’s post messages from a person’s parent telling them their pet died and see if that increases or decreases the number of pictures they post. Look how much science I’m doing!

  7. This reminds me of an “informed consent” issue that was blogged about over at SBM. (From memory, so details might be off.) It seems some hospital decided to see whether implementing checklists of procedures would reduce the error rate relative to the “standard of care” without checklists, sort of like how airplane pilots use checklists to reduce pilot error. Preliminary results seemed to indicate that it was working great. Those preliminary results were reported in the press, and a government agency in charge of research protocols stepped in and informed the hospital that they had to obtain “informed consent” from each and every “participant” in the “research”, and give them the opportunity to “opt out”.

    This is not exactly the same issue.

    FB uses a variety of algorithms to determine what they put on a specific person’s feed. Part of what determines what goes on that feed is how much advertisers pay to be put onto a feed. FB hasn’t promised anyone that a particular algorithm will be used at any particular time.

    The only reason that anyone outside of FB knows about this “research” is because FB published it. If the research had instead been about modifying individual purchasing decisions due to modification of the user’s feed, FB wouldn’t publish it because that is how FB makes its money, by selling access to users’ feeds to advertisers so as to manipulate the purchasing decisions of users.

    Determining the efficacy of particular algorithms in manipulating users to purchase products advertised on their feed is certainly within the FB terms of service. Measuring how much ads on FB increase sales for those advertising on FB is one of FB’s goals. Likely their ultimate goal is maximizing how much profit ads on FB generate. That profit can be increased through differential pricing. Keeping track of an individual user allows individual pricing for that specific user, so as to minimize money “left on the table.” That is the ultimate goal: price everything for everyone at the highest price they are willing to pay.

  8. This horrifies me. A EULA is almost the opposite of informed consent. The potential for abuse is huge. Not surprising when you smoosh together documents designed for completely different situations. Great discussion folks.

  9. Christ, I had to obtain better informed consent than that for undergraduate research. And the only thing my study involved was Kool-Aid. The only alteration to it was the color. Besides that… normal Kool-Aid. We actually had to list “Boredom” as one of the potential emotional harms that could come about as a result of our study.

    1. Was that boredom of the research subjects, or boredom of the researcher? :-)

      Seriously, I’ve always thought that in a just world, shrink-wrap EULAs would be unenforceable, and I never read them in order to maintain plausible deniability. Reading a typical EULA would definitely cause debilitating boredom in anyone who isn’t a lawyer.

      Actually seriously, I wonder if Facebook got this past the IRB by intentionally misleading it. As Will mentioned above, they claimed it was no different from a user’s ordinary experience. Does that mean they’ve always randomly (or not so randomly) suppressed and up-voted comments and posts? Has Oceania always been at war with Eastasia?

      1. Oceania has always been allied with Eastasia, and always at war with Eurasia. Get it right!
