
Maybe We Should Tell People Before Experimenting on Them

Support more videos like this at patreon.com/rebecca!

Transcript:

Pearson is a company that produces educational publications and software, including a program called MyLab that is used by college students studying computer programming languages like C++. Pearson just did an odd thing for such a company — they presented a study at the American Educational Research Association’s annual conference.

The study involved “growth-mindset” training, which is based on the hypothesis that if you tell people that intelligence isn’t fixed, that it can be improved through study and work, they will actually try harder and accomplish more (even if they don’t necessarily get smarter). It’s a fairly recent idea, developed by Stanford University psychologist Carol S. Dweck as part of her mindset theory, and it could have implications for how to better educate people.

So Pearson decided to see if they could insert growth-mindset language into their program and get more students to answer more questions correctly. For instance, one such message read: “No one is born a great programmer. Success takes hours and hours of practice.” Now, I don’t personally find that very motivating, but hey! It’s science, right? Let’s see what happens.

Pearson decided to do what’s known in marketing circles as A/B testing to see if this worked. That’s where you split your customer base into groups and you give one group the old program and one group the new program, and then you check the results to see if the updates did what you wanted. This may sound familiar: it’s the same idea as a control group in a scientific experiment. It lets companies get a glimpse of what they can expect before they make a drastic change that they push out to all their customers.
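
Here’s a minimal sketch of what that kind of split can look like under the hood. The user IDs, numbers, and function names are all hypothetical and just for illustration; this is not Pearson’s actual system, only the general idea of assigning users to groups and comparing the outcomes.

    # A minimal sketch of an A/B test: assign each user to a group and
    # compare completion rates. All IDs and numbers here are made up.
    import hashlib
    import math

    def assign_group(user_id: str) -> str:
        """Deterministically bucket users: A gets the old program, B gets the new one."""
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
        return "A" if bucket == 0 else "B"

    def two_proportion_z(done_a: int, total_a: int, done_b: int, total_b: int) -> float:
        """Z-score for the difference in completion rates between groups A and B."""
        p_a, p_b = done_a / total_a, done_b / total_b
        p_pool = (done_a + done_b) / (total_a + total_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
        return (p_b - p_a) / se  # roughly, |z| > 1.96 means significant at the 95% level

    print(assign_group("student-42"))              # which version this user sees
    print(two_proportion_z(480, 1000, 520, 1000))  # did group B finish more problems?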

You want to know the difference between A/B testing and a controlled scientific experiment? Ethics. And here’s why I’m talking about this: Pearson doing an A/B test like this would get no press — in fact, no one would even know outside of Pearson. But in this case, Pearson called it research, and for that reason a lot of scientists are pretty alarmed by the fact that they didn’t tell any of the 9,000 students they studied, or any of the 165 colleges and universities involved. In science, you don’t do research on 9,000 people without letting them know you’re researching them.

This isn’t the first time a company has done something like this. Facebook did it back in 2014, when they decided to show some users a disproportionate number of depressing updates on their feed, with some people seeing more happy posts and some people, like me, seeing a seemingly endless parade of dead pets and divorce announcements. Their hypothesis? That Facebook can alter users’ emotional states at will. If you don’t know by now that Mark Zuckerberg is a supervillain, there’s not much more I can do for you.

Because Facebook continues to lead the charge of manipulating users in ways that are increasingly disturbing and that cause us as a society to reconsider our own privacy and what we are comfortable letting huge corporations get away with just so we can have 24/7 access to our aunt’s top Minion memes.

When I was reading about the Pearson “experiment” I found myself sympathizing with them, because having worked in marketing and seen A/B tests, I know it’s just a normal thing that companies do. You don’t want to roll out a new initiative without testing it first, so it’s normal to make very subtle changes to, say, the placement of your Google ads on your website to see which version nets you more income. But I thought, well, Pearson screwed up by publicizing the data, turning it into a research project, and actually stating in their paper that they think this shows that their programs can be applied to more scientific testing on educational hypotheses. And they did! They definitely did screw that up. You can’t tell a conference of scientists that they should start using your program for research, and prove it using research you conducted in flagrant violation of established and accepted scientific ethics.

But the more I thought about it, the more I realized I was wrong — even if Pearson hadn’t published this as a scientific study, they would still be ethically in the wrong. A company like this owes it to their users to let them opt into serious testing like this. And it is serious — they thought they might be able to change people’s actual rate of learning in detectable ways. A little heads up is the least they could do for the people paying to use their service.

For the record, their own tests showed mixed results. Students who got the growth-mindset language were more likely to finish problems they started, and more likely to solve problems that they got wrong at first. However, they were much less likely to attempt as many problems as people who didn’t get the growth-mindset language.

It’s also worth noting that students don’t use MyLab in a vacuum — there are instructors who had no idea that Pearson was conducting this experiment, and whose teaching could unknowingly change or even negate what the program was attempting to do. So this isn’t the final word on growth-mindset.

And it’s definitely not the final word on companies conducting testing on users without notification. I hope there’s a bit of backlash to this so that Pearson and other companies can take note — people actually care about our privacy and what you’re doing with our information. And I seriously don’t want a Facebook feed full of dead dogs again. Christ.

Rebecca Watson

Rebecca is a writer, speaker, YouTube personality, and unrepentant science nerd. In addition to founding and continuing to run Skepchick, she hosts Quiz-o-Tron, a monthly science-themed quiz show and podcast that pits comedians against nerds. There is an asteroid named in her honor. Twitter @rebeccawatson Mastodon mstdn.social/@rebeccawatson Instagram @actuallyrebeccawatson TikTok @actuallyrebeccawatson YouTube @rebeccawatson BlueSky @rebeccawatson.bsky.social


3 Comments

  1. This is a great piece, and I agree that this is ethically wrong. I think it’s worth noting how incredibly widespread this practice really is, though. I work in the free-to-play mobile gaming industry and we run dozens of huge AB tests on our user base every day in every product. Most large scale mobile apps do. You are certainly not playing the exact same version of any game as the person next to you on the bus. The margins in free-to-play are so thin that we must find ways to squeeze tenths of a penny out of each user. AB testing is the primary tool for doing this. Turns out people click on in-game store buttons more often if they are green instead of orange, so make ’em green. That sort of thing. It’s definitely sketchy and manipulative, but it’s also a reality that people will not pay for things anymore. This is the only sustainable business model that many companies can find. That doesn’t make it right, but that’s why they do it. For whatever that’s worth.

  2. If Pearson had straight-up changed something about their software, for 100% of their users, we might criticize some poor design decisions, but it doesn’t really seem like it would be the subject of moral opprobrium. However, if they make the change for only X% of their users, suddenly they need to comply with more stringent ethical standards, despite it impacting strictly fewer people?

    It seems like scientific ethical standards just shouldn’t be applied to A/B testing. It’s like how I don’t apply scientific ethical standards to my daily actions; if I decided to post “growth-mindset” quotes on Facebook, I wouldn’t bother getting friends’ consent or getting IRB approval for my low-level psychological experimentation. Not to say that there shouldn’t be any ethical standards at all; the whole thing where Facebook made news feeds more depressing sounds unethical whether or not it was part of an experiment. But scientific standards do not seem like the correct choice of standards.

    Great post.

  3. I actually don’t mind Facebook as much as Equifax. I didn’t sign up for Equifax, but my credit information is still available to anyone who wants it for nefarious purposes. That makes my vacation in Mexico look mild by comparison. (Still recalling a breach at my old bank in 2008. Note that it’s not my current bank. I’ve got several thousand reasons why.)
