Skepticism

What is the Dunning Kruger Effect & Is It Even REAL?

This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca!

Let’s talk about the Dunning Kruger Effect: you know, that psychological phenomenon in which dumb people are super confident they’re smart? You know? The Dunning Kruger Effect?

AHA, I got you! That is NOT the Dunning Kruger Effect! It’s ACTUALLY the idea that the less skilled someone is in a certain field, the more they overestimate their own ability. Right?

NO, I GOT YOU AGAIN! In fact, the Dunning Kruger Effect doesn’t even exist. It was all just a statistical anomaly, which is ironic because a bunch of scientists thought they were SO SMART but in reality they were DUMB.

At least, that’s the idea you might be left with if you noticed this recent article from The Conversation making its way around Science-Enthusiast-Social-Media last week. You know I try to put the best, most accurate information up here at the top of the video and here it is for this topic: the Dunning Kruger Effect might be real or it might be an anomaly and the whole thing is super confusing and continues to be a hot topic of debate amongst psychologists. As a lay person, I’m not comfortable referring to the Dunning Kruger Effect as if it’s an established scientific theory, nor as if it’s a debunked pseudoscience, and I certainly won’t be using it to denigrate people I don’t like.

The original study was published in 1999 under the title “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” 

It involved 45 subjects, all of whom were Cornell undergrads taking an introductory psychology class. Yes, that’s not a lot of subjects. And yes, they’re all pretty WEIRD – by which I don’t mean “odd” but the acronym psychologists use for the most oversampled group of people in the field: “Western, educated, industrialized, rich and democratic.”

The subjects completed four different studies testing their abilities in humor, logical reasoning, and grammar. Their scores were then compared to their self-reports of how many questions they THOUGHT they got correct and how they estimated they performed compared to the average Cornell student. What they found was that everyone thought of themselves as above average, regardless of how they scored. The bottom quartile had the biggest gap between their performance and their self-assessment, and the top quartile actually thought less of themselves than their test results showed.

Their argument based on the results compared incompetence to anosognosia, a condition in which a patient who is, say, paralyzed is unable to recognize the paralysis – they can’t understand why they cannot, for instance, lift a cup off a table. “(I)ncompetence, like anosognosia,” they write, “not only causes poor performance but also the inability to recognize that one’s performance is poor.” And thus the Dunning Kruger Effect was born, and then twisted by a game of internet telephone to mean “dumb people don’t know they’re dumb and thus are more confident than they should be.”

Over the years, other researchers have questioned this study’s methods, statistical crunching, and conclusions, and so there have been dozens of follow-ups, some of which reproduce the results in different fields and others of which suggest that the effect can be attributed to other causes. Dunning and Kruger actually mention that a number of previously understood cognitive biases might contribute to why many people tend to think of themselves as “above average.” They also specifically address regression to the mean: the idea that if you take a random sample of something and find it to be at one extreme, the next sample is likely to be closer to the “mean,” or “average.” If you have a bunch of people flipping coins repeatedly and pull out the people who only flip “heads,” that’s an extreme result. Have those same people start flipping again and it’s likely that now they will be closer to flipping half heads and half tails – no nefarious ne’er-do-well with a double-headed coin required.
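If you want to see regression to the mean in action, here’s a quick simulation (my own hypothetical sketch, not anything from the papers): thousands of fake people flip five fair coins, we pull out only the ones who got all heads, and then we let that “extreme” group flip again.

```python
import random

random.seed(1)

N_PEOPLE = 10_000
FLIPS = 5

def count_heads(n_flips):
    """Flip a fair coin n_flips times and count the heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

# Round 1: select the "extreme" group that flipped all heads.
all_heads = [p for p in range(N_PEOPLE) if count_heads(FLIPS) == FLIPS]

# Round 2: that same extreme group flips again. With fair coins their
# new average drifts back toward 2.5 heads out of 5 -- regression to
# the mean, no trick coins required.
second_round = [count_heads(FLIPS) for _ in all_heads]
avg = sum(second_round) / len(second_round)
print(f"{len(all_heads)} people flipped 5/5 heads; "
      f"their second-round average is {avg:.2f} heads out of {FLIPS}")
```

The point is that the group only looked special because we selected it for an extreme result; measured a second time, it looks ordinary.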

Dunning and Kruger admit that their study is influenced by this effect: the students who scored extremely badly on the test are likely to perceive themselves as less extreme than they really are. Those students at the bottom were also unable to underestimate their performance (because they already sucked so badly), so the higher self-assessments would drag the average up. That’s why in their final study, the researchers tested the students once, then gave some of them training, and then tested them again to see if their self-assessments would drop. They summarized, “In Study 4…participants scoring in the bottom quartile on a test of logic grossly overestimated their test performance – but became significantly more calibrated after their logical reasoning skills were improved. In contrast, those in the bottom quartile who did not receive this aid continued to hold the mistaken impression that they had performed just fine. Moreover, mediational analyses revealed that it was by means of their improved metacognitive skills that incompetent individuals arrived at their more accurate self-appraisals.”

Dunning and Kruger argued that while regression to the mean and “above average” biases can account for some of the Dunning Kruger Effect, their data still show a significant effect even when those factors are controlled for.

But it’s that nasty little regression to the mean that just keeps popping up. The article in The Conversation is written by Eric C. Gaze, one of a team of mathematicians who found a very interesting result when running their own Dunning Kruger Effect research: they made up 1,154 people, randomly assigned them test scores, and then randomly assigned them self-assessments. The result they got looks exactly like the Dunning Kruger Effect: the lowest 25% of the fake people drastically overestimated their fake abilities, much more than the top 25% of the fake people.
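That pure-noise result is easy to reproduce. Here’s a minimal sketch (my own illustration, not the mathematicians’ actual code, though it uses the same headcount): every fake person gets a random score and a completely unrelated random self-assessment, and the classic Dunning Kruger pattern falls out anyway.

```python
import random

random.seed(0)
N = 1154  # same number of fake people as in the simulation described above

# Test scores and self-assessments are independent random percentiles --
# by construction, nobody here has any real insight into their ability.
scores = [random.uniform(0, 100) for _ in range(N)]
self_assessments = [random.uniform(0, 100) for _ in range(N)]

# Sort people by their actual score and split off the extreme quartiles.
people = sorted(zip(scores, self_assessments))
q = N // 4
bottom_quartile = people[:q]
top_quartile = people[-q:]

def mean_gap(group):
    """Average of (self-assessment minus actual score) for a group."""
    return sum(s - a for a, s in group) / len(group)

# The bottom quartile "overestimates" and the top "underestimates,"
# purely because random self-assessments cluster around 50 while the
# quartiles were selected for extreme scores.
print(f"bottom quartile gap: {mean_gap(bottom_quartile):+.1f} points")
print(f"top quartile gap:    {mean_gap(top_quartile):+.1f} points")
```

The gaps appear not because the fake low scorers are overconfident but because sorting random data by one variable and comparing it to another independent variable guarantees this shape.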

They also gave 1,154 real people – students at every level plus grad students and professors – the basic Dunning Kruger test, and through their own analysis found that “peoples’ self-assessments of competence, in general, reflect a genuine competence that they can demonstrate,” and that “carefully measured self-assessments provide valid, measurable and valuable information about proficiency.” That said, they did find that “(about 5%) (of subjects) merit their being characterized as ‘unskilled and unaware of it.’”

A quick note: the publish date on Gaze’s Conversation article is May 8, 2023, which is yesterday for me, as I’m filming this, which is why it popped up on my feed. But Gaze writes that he and his colleagues published their results “in a recent paper.” The paper he links to is from 2016. I found all this very confusing because one of my writers at Skepchick, Bjørnar, actually wrote about ALL of this back in 2020, and the team even participated in a conversation with him in the comments. I’m not sure why Gaze is pushing this back into headlines now, when it seems like there’s other research he’s been working on since, but I guess Dunning Kruger will always be relevant thanks to its essential meme-iness, so fair enough.

Does this 2016 study “debunk” the Dunning Kruger Effect? No, not in the least. I have to agree with Bjørnar, who wrote in 2020 that the way the Effect has been tested and represented is just really messy, and researchers have and should continue to move on to better ways of evaluating the differences between self-assessments and actual performances in order to better understand how to educate people in that bottom 25%.

The real takeaway from all of this is that all of us (who aren’t researchers actively studying this) should probably just stop referring to the Dunning Kruger Effect in casual conversation. It’s never been an accurate way to refer to individuals you think are stupid – it’s always been a population-level phenomenon, if it exists outside of statistical anomalies. Referencing Dunning Kruger when, say, Donald Trump brags about how smart he is isn’t just inaccurate; it does all of us a big disservice. As one of those mathematicians pointed out in his conversation with Bjørnar, is our focus on mocking the overly confident going to lead to “The Little Engine that Could” turning into “The Little Engine that Shouldn’t Aspire to Much”?

Are we encouraging people to be more accurate in their self-assessments or are we shaming them into assuming they’re in the bottom 25%? Are we providing people with the ability to learn and grow, or are we dunking on the people our educational system has failed? 

None of that is to say there isn’t a portion of the population that is ignorant, incompetent, oblivious to it all, and confidently forcing the rest of us to deal with their inadequacies. But instead of using a mildly controversial, popularly misunderstood psychological term to describe them, let’s just go with our existing tried-and-true vocabulary: we call those people assholes.

Rebecca Watson

Rebecca is a writer, speaker, YouTube personality, and unrepentant science nerd. In addition to founding and continuing to run Skepchick, she hosts Quiz-o-Tron, a monthly science-themed quiz show and podcast that pits comedians against nerds. There is an asteroid named in her honor. Twitter @rebeccawatson Mastodon mstdn.social/@rebeccawatson Instagram @actuallyrebeccawatson TikTok @actuallyrebeccawatson YouTube @rebeccawatson BlueSky @rebeccawatson.bsky.social
