
Sam Harris Doesn’t Understand Bell Curves

It’s Thursday and that means it’s time for another round of Bad Chart Thursday. This week, rather than make fun of a bad chart, I was inspired to write a bit about bell curves and more specifically Sam Harris’ complete lack of understanding as to how bell curves work.

You see, the internet blew up this week after a Washington Post profile of Sam Harris written by Michelle Boorstein included the following quote from Harris explaining why such a large proportion of his fan-base are men:

“I think it may have to do with my person slant as an author, being very critical of bad ideas. This can sound very angry to people..People just don’t like to have their ideas criticized. There’s something about that critical posture that is to some degree instrinsically [sic] male and more attractive to guys than to women,” he said. “The atheist variable just has this – it doesn’t obviously have this nurturing, coherence-building extra estrogen vibe that you would want by default if you wanted to attract as many women as men.”

Many people have pointed out how terribly sexist this statement is. Harris then posted a response to all the criticism where he assures us all that he is totally not a sexist in a piece he calls “I’m Not the Sexist Pig You’re Looking For.” Sure, some of his statements may seem sexist, but according to Sam Harris it’s totally not sexist if it’s based in scientific fact and everyone just knows that science says ladies don’t like Sam Harris because of their estrogen-vibe.

I’m sure that many others will scrutinize each aspect of his defense, but I would like to focus on one small section where he discusses variation between groups versus variation within groups:

My work is often perceived (I believe unfairly) as unpleasantly critical, angry, divisive, etc. The work of other vocal atheists (male and female) has a similar reputation. I believe that in general, men are more attracted to this style of communication than women are. Which is not to say there aren’t millions of acerbic women out there, and many for whom Hitchens at his most cutting was a favorite source of entertainment. But just as we can say that men are generally taller than women, without denying that some women are taller than most men, there are psychological differences between men and women which, considered in the aggregate, might explain why “angry atheism” attracts more of the former. Some of these differences are innate; some are surely the product of culture. Nothing in my remarks was meant to suggest that women can’t think as critically as men or that they are more likely to be taken in by bad ideas. Again, I was talking about a fondness for a perceived style of religion bashing with which I and other vocal atheists are often associated.

Strangely, Harris starts out seemingly understanding how overlapping populations with internal variation work, and yet arrives at exactly the opposite of the conclusion the argument should lead to. To understand what is going on here, let’s learn a bit about bell curves.

Here is a simple bell curve. In this case, we’re measuring the amount of estrogen-vibe, a thing that Sam Harris seems to think is an innate quality that increases empathy, decreases aggressiveness, and causes a person to harbor a dislike for Sam Harris. To start with, let’s look at the population as a whole. A bell curve shows the proportion of a population with a certain level of a characteristic, in this case estrogen-vibe. The tallest point on the graph is at the mean, or average. Most people in the population are somewhere at or near the average. As you move out toward the edges of the graph, the curve converges on 0, as fewer and fewer people have the extremes of the characteristic. So, just as very few people are taller than 6.5ft or shorter than 5ft, there are very few people with super estrogen-vibe who just have to nurture everything they come in contact with, and very few people with almost no estrogen-vibe who purportedly spend all day trolling YouTube for new people to yell rape threats at and enjoy reading things written by Sam Harris. The average person may be somewhere in the middle, but the population as a whole has a lot of variation, with some people at the extremes.
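If you want to see that shape for yourself, a few lines of code will do it. This is a purely illustrative sketch: it draws numbers from a generic normal distribution and has nothing to do with any real trait, estrogen-vibe or otherwise.

```python
import random
import statistics

# Illustrative only: draw a large sample from a normal ("bell curve")
# distribution with mean 0 and standard deviation 1.
random.seed(42)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean = statistics.fmean(sample)
sd = statistics.stdev(sample)

# How much of the population sits close to the average?
within_1sd = sum(abs(x - mean) <= sd for x in sample) / len(sample)
within_2sd = sum(abs(x - mean) <= 2 * sd for x in sample) / len(sample)

print(f"within 1 sd of the mean: {within_1sd:.1%}")  # roughly 68%
print(f"within 2 sd of the mean: {within_2sd:.1%}")  # roughly 95%; the extremes are rare
```

Most of the sample lands near the middle, and almost nobody lands in the far tails, which is exactly the picture the bell curve describes.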

According to Sam Harris, women as a group have more estrogen-vibe than men. I don’t know what the fuck he is talking about with estrogen-vibe, but he flat out states that there are innate psychological differences between men and women. Many researchers have searched for cognitive and psychological differences between the genders and taken as a whole, researchers have found little to no difference in cognitive ability or psychological differences between men and women and when differences are found, it’s unclear whether they are themselves innate or a product of our culture and experiences. If differences exist at all they are quite small and can only be seen in the aggregate.

Sam Harris believes that there are differences in the aggregate between men and women on the estrogen-vibe scale. We can split our bell curve into two bell curves, one for women and one for men.


As you can see comparing the distribution of women to that of men, the women’s distribution as a whole has a bit more estrogen-vibe than the men’s. The average estrogen-vibe of women is slightly higher than that of the men.

Although the average man may have less estrogen-vibe than the average woman, a very large proportion of women have even less estrogen-vibe than the average man. If we got a room full of people, we would probably not be able to see the differences between men and women if those differences are as slight as the literature seems to suggest. We may be able to see these small population differences if we ran a large, controlled study but it would be practically invisible to us on an individual level as we go about our lives and it certainly wouldn’t result in large gender imbalances in terms of people who like Sam Harris.

This is why sample size in studies matters. If the differences you are looking for are small, you will not be able to see them unless you have a proportionally large sample in your study. If psychological differences exist at all, they seem to be quite small, since most studies find nothing or find only tiny effects. Even if these effects are real, you could not see them out in the world, because the sample of men and women you come across as you go about your daily life is just too small to distinguish slight differences. This is why, whenever a new evo-psych study comes out that says something like “women are slightly more cooperative than men” and the response from a couple of your annoying Facebook friends that you haven’t spoken to since high school is “see, I told you women were just [insert sexist stereotype here],” they are wrong even if the science is right. Small differences between populations just cannot be seen in the small sample of people you meet in your life. Plus, you can’t even measure psychological traits very accurately, so you are not going to get measurements precise enough to see those differences even if they were otherwise big enough to see. Your own judgments and biases are going to be larger than any actual differences between the groups, if those differences exist at all.
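You can simulate this invisibility directly. In the sketch below the numbers are invented: suppose, generously, that women really do average 0.1 standard deviations more of some trait than men (a small effect, in line with the sizes the literature tends to report). Two things follow: a huge fraction of women still fall below the average man, and in everyday-sized samples the difference often points the wrong way entirely.

```python
import random
import statistics

random.seed(0)

# Hypothetical numbers for illustration: women's mean is 0.1 sd above men's.
def woman(): return random.gauss(0.1, 1.0)
def man():   return random.gauss(0.0, 1.0)

# (a) In a huge sample, how many women fall below the male mean of 0?
big = [woman() for _ in range(100_000)]
below_male_mean = sum(x < 0.0 for x in big) / len(big)
print(f"women below the average man: {below_male_mean:.0%}")  # roughly 46%

# (b) In many small "everyday" samples of 20 men and 20 women,
# how often do the sample averages point the *wrong* way?
trials = 2_000
wrong_way = 0
for _ in range(trials):
    w = statistics.fmean(woman() for _ in range(20))
    m = statistics.fmean(man() for _ in range(20))
    if w < m:
        wrong_way += 1
print(f"small samples where the men look 'higher': {wrong_way / trials:.0%}")
```

With an effect this small, well over a third of small samples get the direction of the difference backwards, which is why your lived impressions of "men versus women" can't detect it.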

The only way you would be able to see differences between the psychologies of men and women is if those differences were quite large. If the only thing skewing the gender of Sam Harris fans was estrogen-vibe, then the differences between the genders would have to be immense, something which would completely overturn all prior gender research. So, the real question here is whether Sam Harris has discovered the magical x-factor that makes the brains of men versus women very different. This would certainly be a shock to the science of gender differences because no one has ever found psychological differences that pronounced.

Rather than there being some previously-unknown psychological factor that gives women a completely different brain than men, the thing that makes women different is our lived experiences. Women grow up and move through the world in a completely different environment from men. From the time we are born, we are treated differently, under different expectations, and have different roles within our culture. Lived experience does make us women different in ways that are so large that we can see them without the need of a scientific study. We can all see that men tend to like Sam Harris much more than do women. It is far more likely that something about the experiences of men versus women in our culture is the explanation for the differences rather than some psychological difference in our brains. Perhaps instead of estrogen-vibe as an explanation, the following is a bit more explanatory:


Since women have to live in a world where they face both outright and structural sexism on a daily basis, they are probably much more sensitive to sexism than are most men. Perhaps the reason women don’t seem to like Sam Harris isn’t because they have some magical estrogen-vibe but because he actually is a sexist pig.

Jamie Bernstein: Jamie Bernstein is a data, stats, policy and economics nerd who sometimes pretends she is a photographer. She is @uajamie on Twitter and Instagram. If you like my work here at Skepchick & Mad Art Lab, consider sending me a little sumthin' in my TipJar: @uajamie

Comments (80)

  • Sure, Harris is wrong and sexist, that's why the whole concept of targeted audiences is just bogus ::eyeroll::

    • ??? What the hell are you trying to insinuate? It may help to actually flesh out your ideas and opinions, rather than leaving very vague, rather empty comments. YOU SURE SHOWED US!

      • What I'm "insinuating" is that this is much ado about nothing. Why are there films labelled "chick flicks"? Why do men buy Tom Clancy novels? Why do politicians draft policy to "target women voters"? Why do market analysts segment products by demographics?

        Because different things appeal to different groups of people, in aggregate, as Harris pointed out. And BTW, his use of "critical" has been equivocated to mean "critical faculties" whereas he meant it as tenor of debate. Whether you think that's sexist or not, at least get his meaning right to begin with.

        • Yeah, because as everybody knows, marketers and corporations are managed by emotionless robots which are immune to cultural conditioning, right?

          • It doesn't matter what their conditioning is, the fact is that if the targets didn't exist their performance would suffer, the product wouldn't be bought, the movie not watched, the candidate not voted into office. The market, as it were, would have spoken. Now, you're free to say that "the market" operates on motivators that are determined both by nature or nurture. Incidentally, I think that's just what Harris said. Or, you're free to say it's just nature or just nurture. It's your opinion to give. Did someone solve the nature/nurture debate while I was sleeping?

            I just don't think it's right to accuse someone of sexism because, in his opinion (which is what it was, after all) various demographics are attracted to different things and that it's partly based on nature. He may be wrong! That's kind of how opinions work.

  • I had an idea on how to help with all the misogyny, hate and death threats on line, but it will take cooperation from ad providers like Google.

    The idea is that when a site posts a link or a threat or hate speech against a specific person, that the ad provider to that site diverts ad revenues to that person for all views of that hate speech.

    This wouldn't be a big deal for sites that are not using hate speech to generate clicks, but if a site is using hate speech to generate clicks, it would backfire by diverting the click revenue to the victim of their hate speech (or the victim's designated charity).

    It would also give feedback as to how much hate speech there is going on, and from where.

  • You know, Sam Harris has gotten lots of flak for this "estrogen-vibe" comment, as though it were some sort of vague, ill-defined concept with no scientific basis. Hogwash, I say! Sam Harris is a scientist, you guys, he wouldn't do something so disingenuous as to refer to endocrinology to make his argument seem grounded in biology while in fact only using "estrogen" as a metaphorical stand-in for culturally prescribed feminine traits.

    "Estrogen-vibe" has an obvious meaning. The bonds in every molecule, including the estrogens, have natural vibration frequencies. These natural vibration frequencies are important for understanding the biochemical properties of the molecule. Sam Harris is obviously referring to a recent paper which characterized the vibration frequencies of two of the estrogens from quantum mechanical calculations. See, you guys, it's science! Specifically, chemistry.

    I'll admit to some puzzlement as to how the vibration frequencies of the estrogens relate to the rest of what Sam Harris was talking about, though...

  • Sam Harris is going to be on Air Talk with Larry Mantle on KPCC (http://205.144.168.164/programs/airtalk/) in Southern California on Monday. It’s a call-in show that comes on from 11am – 1pm PDT on public radio. Maybe someone in the area who’s more well-spoken than I would like to call in? The number’s 866-893-5722.

  • I found this article funny but I must admit I'm a bit lost by your main point. What exactly does Sam Harris not seem to understand about the statistics of Bell Curves? It seems more like you're saying that his estimates of the distribution of 'estrogen vibe' in the population are inaccurate. Isn't that different from saying he doesn't understand Bell Curves?

  • It strikes me that this article is intended as a polemic. However, on the off-chance that you did want to say something serious about statistics and probability theory, allow me to point out a few misconceptions:

    1. You say "Many researchers have searched for cognitive and psychological differences between the genders and taken as a whole, researchers have found little to no difference in cognitive ability or psychological differences between men and women and when differences are found, it’s unclear whether they are themselves innate or a product of our culture and experiences." The second part of this sentence is indeed true, but the first part is false. (Or, let me say - more cautiously - that it is not obviously true.) As far as I can see, most empirical studies do find that gender is a statistically significant factor in mathematical and language abilities, for example, even after controlling for cross-national variations. Briefly, boys tend to score higher in mathematics, on average, and the variance of their mathematics scores is higher. Moreover, the tails of the distribution of mathematics scores tends to be dominated by boys. The reverse appears to be true, when it comes to language scores. Also, these results are apparently not explained by traditional proxies for gender equality. (Some studies dispute these findings, but they seem to be in the minority.)

    2. You say "The only way you would be able to see differences between the psychologies of men and women is if those differences were quite large." Here you appear to be talking about differences between the means of the two sub-populations. This is not generally true, since it depends on the standard deviations of the two sub-populations. Specifically, the standard error of the estimate for the mean of a normally distributed population is proportional to its standard deviation. Hence, if the means of the two sub-populations are different enough, or if their standard deviations are small enough, you will be able to reject the null hypothesis that their means are equal, using only a modest sample. However, you won't be able to reject the null hypothesis if the means of the two sub-populations are very similar, and their standard deviations are large. I agree that this is likely to be the case for most data where the sub-populations are selected on the basis of gender.

    3. It strikes me that your biggest error is to think that any statement about gender-based cognitive/psychological differences must be a statement about the means of the distributions of some trait. (This is borne out by your illustrations, which depict two distributions with the same variance, but shifted means.) Actually, you are much more likely to see differences between the two distributions when you compare their higher-order moments. You would especially like to compare the tail probabilities for the two distributions. Tests that do so require modest samples, compared with tests that compare sample means. That is to say, if it is the case that males inhabit the tails of the distribution for some cognitive ability/psychological trait, then it is likely that statistical tests with reasonable sample sizes will be able to detect this.

    4. One final point where I think you're mistaken. Let us say, for the sake of argument, that Harris is right, in that men are "more attracted to this style of communication than women are." Let us also assume that the standard deviations of the extent to which men and women are "attracted to this style of communication" are large. That is to say, men are - as Harris claims - generally more likely to prefer his style of communication, but there is a substantial variation in this preference across the populations of both men and women. Then two things are likely to happen: (i) Estimates of the mean preferences of men and women won't reveal any gender-related differences (due to the problem with the standard errors of sample means mentioned earlier); and yet (ii) There will be more men than women attending Harris' talks (due to the fact that the true - but unobservable - means of the two populations are different).

    • Actually... The studies show that "math ability" and the like are heavily influenced by the conditions of the testing. Literally, reminding women that they tend to do worse than guys causes them to do worse, and reminding men that they "do better" causes their scores to rise slightly. Do both, and you end up with a huge gap. Language is an even dumber one. Something like 90% of the "best" linguists on the planet are male, yet, supposedly, women are better at language. This doesn't make logical sense, at all. But, honestly, it might make sense if you recognize that men are more likely to get hired, so may be more promoted as linguists, while men, as a general trend, just from my own observations, tend to think they have "better things to do" than waste time doing a lot of reading, i.e., unless they are linguists they don't actually bloody use their language skills a lot, unlike readers.

      Just those factors, by themselves, are enough to skew the results, and it's not like you can control for them, unless you find tens of thousands of men and women who all happen to read the same amount, and things. Same problem with math - men barely use those skills, and women.... yeah, no inherent bias, which throws a monkey wrench into the works when studying the problem...

      • kagehi wrote "Something like 90% of the 'best' linguists [...] are male, yet, supposedly, women are better at language. This doesn't make logical sense at all."

        No, unfortunately you're labouring under a misapprehension. If the distribution of linguistic abilities for males has a higher variance than the distribution for females, then it is quite possible for men to have poorer language skills on average AND for the most gifted linguists to be men.

        This is in fact a very common statistical phenomenon. For example, active fund managers perform worse than passive fund managers, on average, yet the best performing funds over any period are all active. Why is that? Active funds hold riskier portfolios, which means that the variances of their returns are higher than is the case for passive funds.
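        A toy simulation makes the point concrete (all numbers here are invented for illustration): give group A the higher average and give group B only the higher spread, and B still dominates the top of the pooled distribution.

```python
import random

# Invented numbers, purely illustrative: group A has the *higher* average;
# group B merely has the higher variance (bigger standard deviation).
random.seed(1)
a = [random.gauss(0.1, 1.0) for _ in range(100_000)]  # higher mean, sd 1.0
b = [random.gauss(0.0, 1.5) for _ in range(100_000)]  # lower mean, sd 1.5

# Pool everyone and look at who fills the top 1%.
pooled = sorted([(x, "A") for x in a] + [(x, "B") for x in b], reverse=True)
top = pooled[: len(pooled) // 100]
share_b = sum(group == "B" for _, group in top) / len(top)

print(f"A's average is higher, yet B's share of the top 1% is {share_b:.0%}")
```

        Despite the lower mean, the high-variance group supplies the overwhelming majority of the extreme performers, exactly as in the fund-manager example.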

        Of course, such a statistical explanation need not provide the correct interpretation of the evidence you cited - your explanation in terms of employment bias may well be the correct one. But your evidence does not expose an inherent logical problem with an explanation that invokes inherent gender-based differences in linguistic skills (as loath as we may be to countenance it).

        • No, you're "labouring under a misapprehension," which is that just because 90% of people working in linguistics are men, they are somehow "the most gifted linguists." The point kagehi is making, which you seem loath to acknowledge, is that there are structural barriers in place that set up these differences, and in many cases exaggerate them, and they cannot be attributed to some intrinsic difference between men and women. So it's not that somehow, magically, the most gifted people in linguistics happen to be men and they are filling those jobs; it's that the way academia and education more broadly are structured sets up a situation where men are hired for positions more than women. You are confusing the quantity of people in a particular position with the quality of their talent.

          • Seems to be a semantic argument here. Is this better, Hardy: there is no automatic reason to assume that such a bias is real, other than its prevalence in academia, but there are vast amounts of evidence implying a likely alternative explanation, which is sufficiently pervasive across all disciplines and skills to suggest that it's **unlikely** to be an accurate conclusion.

            Since you object to the suggestion that it's just pure BS, without going into why.

          • You should probably reread my last message. I don't claim that there are no "structural barriers" that create and/or exaggerate gender imbalances; I specifically allow that such an explanation for the existence of gender imbalances may be correct. My point is simply that kagehi's inference is incorrect. In particular, I show that it is possible for average female linguistic ability to exceed average male linguistic ability AND for 90% of the best linguists to be male. So, kagehi's claim that these two observations don't "make logical sense" when taken together is false. This is simply a fact of probability theory.

            Now, your preferred explanation for the phenomenon described by kagehi may be correct. But the data he provided does not establish that. Of course, the data also fails to establish that this particular gender imbalance has a biological cause.

      • Literally, reminding women that they tend to do worse than guys causes them to do worse, and reminding men that they “do better”, causes their scores to rise slightly.

        Not only this, but merely reminding women that they are women (e.g., by first asking questions that encourage them to reflect on their gender) can cause women to do worse on math tests, so strongly entrenched and internalized is the cultural notion that "girls are bad at math". (Honestly I'm loath to type that phrase even in scare quotes, considering how easy a message it is to reinforce. Girls: you are great at math. Or you can be, if you choose it!)

        • Girls: you are great at math. Or you can be, if you choose it!

          That's what I keep telling my daughter but I don't think she's buying it.

          • A female friend of mine in college, who was extremely smart, studying chemical engineering, and in an elite honors program, once told me that she consciously chose to act less intelligent than the men around her, particularly ones she was romantically interested in, so she wouldn't intimidate them and they'd like her better. As a naive male freshman with feminist sensibilities but little real-world experience, it was an eye-opening moment for me to see first-hand how a woman I liked and respected might see her own intelligence as an impediment rather than a strength, at least in some circumstances, in ways that would never be true for me. Of course, in her case she was consciously aware of what she was doing, but clearly this decision process acts subconsciously as well, in people with or without my friend's level of self-awareness.

            Your daughter's lucky to have a parent who lets her know she doesn't have to be bad at math in order to be accepted. You're tackling a lot of cultural inertia there!

  • The quote I start my blog post on Theory of Mind with is:

    In any great organization it is far, far safer to be wrong with the majority than to be right alone. -- John Kenneth Galbraith

    Pretty obvious that if women get death threats for being good at math, it is safer to pretend to suck at math. The easiest way to pretend to suck at math is to actually suck at math.

  • @Hardy Hulley: No, look, I agree with Kagehi - you need to show the data for this. Yes it is theoretically possible to have different shaped distributions as you say, but male & female distributions for the vast majority of attributes (constructs) are similar.

    • @Jack99: If you agree when Kagehi says “Something like 90% of the ‘best’ linguists [...] are male, yet, supposedly, women are better at language. This doesn’t make logical sense at all,” then I'm afraid you're wrong. It is quite possible for women to perform better than men, on average, and yet for the best performers to be men. This is because the higher-order moments of the distributions of language skills for men and women are more important for determining the right-hand tail of the overall distribution than the means of the individual distributions. This is not my opinion, it is a (simple) probability-theoretic fact.

      Note that I have not claimed that gender *is* an important determinant of language ability; I'm simply pointing out that two facts adduced by Kagehi are not mutually inconsistent. In other words, Kagehi has drawn a false inference (which he/she shouldn't feel bad about, because many non-statisticians make the same error).

      Okay, so what does the data say? Well, the latest analysis, based on 10 years of PISA data, finds persistent and statistically significant differences in both the language and mathematics gender gaps. The sample is important because it is international, and therefore accounts for the effects of gender-based policy variations across different countries. Moreover, since it is based on tests of 15-year-olds, which is the highest age of mandatory schooling across all participating nations, the PISA data also accounts for possible biases associated with variations in school participation rates across different countries.

      The main findings are:
      1. In mathematics, boys perform better than girls, on average; however, the left-hand tail (i.e. the worst performers) reveals little difference between boys and girls, while the right-hand tail (i.e. the best performers) is dominated by boys.
      2. In language, girls perform better than boys, on average; however, the left-hand tail (i.e. the worst performers) is dominated by boys, while the right-hand tail (i.e. the best performers) shows little difference between boys and girls.
      3. In the cross-section (i.e. when comparing the results across different countries), the gender gaps for mathematics and language are inversely related (i.e. countries that manage to decrease the mathematics gap increase the language gap, and vice versa). Interestingly, there are no exceptions to this pattern!

      Point 3 is extremely important from a policy point-of-view, since it suggests that differences in performance between boys and girls are not corrected by gender equality and empowerment programmes. In fact, the data suggests that countries with such programmes exhibit higher gender gaps in mathematics.

      You can find a video discussion of the results at https://www.youtube.com/watch?v=m9WxvT-82Xg. There you will also find a link to the published paper (which is available for free). It appeared in the scientific journal Plos One in 2013.

      • 1. In mathematics, boys perform better than girls, on average

        Yeah-huh.

        Meta-analytic findings from 1990 (6, 7) indicated that gender differences in math performance in the general population were trivial, d= –0.05, where the effect size, d, is the mean for males minus the mean for females, divided by the pooled within-gender standard deviation. [...]

        Effect sizes for gender differences, representing the testing of over 7 million students in state assessments, are uniformly <0.10, representing trivial differences ... Of these effect sizes, 21 were positive, indicating better performance by males; 36 were negative, indicating better performance by females; and 9 were exactly 0. From this distribution of effect sizes, we calculate that the weighted mean is 0.0065, consistent with no gender difference .... In contrast to earlier findings, these very current data provide no evidence of a gender difference favoring males emerging in the high school years; effect sizes for gender differences are uniformly <0.10 for grades 10 and 11 .... Effect sizes for the magnitude of gender differences are similarly small across all ethnic groups .... The magnitude of the gender difference does not exceed d= 0.04 for any ethnic group in any state.

        So much for that part.

        the left-hand tail (i.e. the worst performers) reveals little difference between boys and girls, while the right-hand tail (i.e. the best performers) is dominated by boys.

        Survey says:

        Greater male variance is indicated by VR > 1.0. All VRs, by state and grade, are >1.0 [range 1.11 to 1.21 ...]. Thus, our analyses show greater male variability, although the discrepancy in variances is not large [...]

        For whites [in grade 11 in Minnesota], the ratios of boys:girls scoring above the 95th percentile and 99th percentile are 1.45 and 2.06, respectively, and are similar to predictions from theoretical models. For Asian Americans, ratios are 1.09 and 0.91, respectively. Even at the 99th percentile, the gender ratio favoring males is small for whites and is reversed for Asian Americans. If a particular specialty required mathematical skills at the 99th percentile, and the gender ratio is 2.0, we would expect 67% men in the occupation and 33% women. Yet today, for example, Ph.D. programs in engineering average only about 15% women.
        Hyde, Janet S., et al. "Gender similarities characterize math performance." Science 321.5888 (2008): 494-495.

        Ok, so there is some evidence for variance, but it's not on the scale that would explain real-world gender disparities and there might be a strong cultural effect driving it, instead of biology.

        2. In language, girls perform better than boys, on average

        Didn't I mention this earlier? [scrolls up] Oh, no, that meta-analysis glossed over the detail. Time for a fresh one!

        Located 165 studies that reported data on gender differences in verbal ability. The weighted mean effect size was +0.11, indicating a slight female superiority in performance. The difference is so small that we argue that gender differences in verbal ability no longer exist. Analysis of tests requiring different cognitive processes involved in verbal ability yielded no evidence of substantial gender differences in any aspect of processing.
        Hyde, Janet S., and Marcia C. Linn. "Gender differences in verbal ability: A meta-analysis." Psychological Bulletin 104.1 (1988): 53.

        I don't see any mention of variability there, alas, but I will note that while the general idea that men exhibit more variance than women has been around for a hundred years, the magnitude of the variance seems inversely proportional to the sample size, and there are a few external factors that can have the same effect. Boys tend to disproportionately drop out of school, for instance, which will artificially boost variability (as low-scorers will be encouraged to stay, while high-scorers desire to stay).

        So while that might be a nice study, when you gather together all studies you get a different picture.

        • @Hj Hornbeck wrote: "... I will note that while the general idea that men exhibit more variance than women has been around for a hundred years, the magnitude of the variance seems inversely proportional to the sample size..."

          The idea that variance estimates could be inversely proportional to sample sizes doesn't make much sense. After all the standard variance estimator is unbiassed. Of course, the standard errors of estimated variances certainly do depend on sample sizes, which in turn means that the statistical significance of the estimates depend on sample sizes as well. But any perceived sample size bias is bound to be spurious.
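The unbiasedness point is easy to check numerically; a minimal Monte Carlo sketch in Python (the function name, trial count, and sample sizes are mine, purely for illustration):

```python
import random
import statistics

def mean_sample_variance(n, trials=20_000, seed=0):
    """Average the unbiased sample variance (the n - 1 denominator)
    over many samples of size n drawn from a standard normal."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        total += statistics.variance(sample)  # unbiased estimator
    return total / trials

# The average estimate sits near the true variance of 1 for both small
# and large samples; only the *spread* of the estimates shrinks with n.
print(round(mean_sample_variance(5), 2), round(mean_sample_variance(50), 2))
```

Both averages come out near 1, regardless of sample size, which is the sense in which the estimator is unbiased; only the standard error of each individual estimate depends on n.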

          What could happen (but you'd have to provide evidence for this) is that more modern studies are progressively using larger datasets, while *population* variances are simultaneously decreasing (for exogenous reasons).

          You wrote: "Boys tend to disproportionately drop out of school, for instance, which will artificially boost variability (as low-scorers will be encouraged to stay, while high-scorers desire to stay)." That's hard to believe. Drop-outs are bound to affect the left tails of performance distributions disproportionately, as the poorest students opt out of a school education. Since truncating left tails should reduce variances, the effect should be the opposite of what you describe.
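A quick simulation of the truncation point, assuming normally distributed scores (the 500/100 score scale and the 10% dropout rate are made-up illustration values):

```python
import random
import statistics

# Draw normally distributed "test scores", then remove the bottom 10%
# (the dropouts) and compare spreads before and after.
rng = random.Random(42)
scores = [rng.gauss(500, 100) for _ in range(50_000)]
cutoff = sorted(scores)[len(scores) // 10]   # 10th-percentile score
stayers = [s for s in scores if s >= cutoff]

sd_all = statistics.stdev(scores)
sd_stayers = statistics.stdev(stayers)
print(round(sd_all, 1), round(sd_stayers, 1))
```

Cutting the left tail shrinks the spread of what remains (the standard deviation drops from roughly 100 to the mid-80s here), which is the sense in which selective dropout of low scorers should reduce, rather than inflate, measured variability.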

          In any case, survivorship bias is not a big issue with the PISA data, since schooling is mandatory until the age of 15 in all participating countries.

        • @Hj Hornbeck: You quote extensively from Janet S. Hyde, et al., "Gender similarities characterize math performance," Science, 321:494-495, 2008. Unfortunately, you probably haven't chosen the best article upon which to base your case.

          1. First, the data obtained by Hyde et al. (2008) appear to be a snapshot of NAEP scores for 10 US states (presumably for one year - they're not clear about this). By contrast, the PISA data is a panel, where the time-series covers four years (2000, 2003, 2006, and 2009) and the cross-section consists of 75 countries. You will struggle to challenge findings based on such a broad dataset by appealing to the results of a much narrower study.

          2. You quote the following passage from Hyde et al. (2008): "Meta-analytic findings from 1990 (6,7) indicated that gender differences in math performance in the general population were trivial, d=–0.05, where the effect size, d, is the mean for males minus the mean for females, divided by the pooled within-gender standard deviation." Curiously, you omit the very next sentence: "However, measurable differences existed for complex problem-solving beginning in high school years (d=+0.29 favoring males), which might forecast underrepresentation of women in science, technology, engineering, and mathematics (STEM) careers."
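The d-statistic defined in that passage can be computed directly; a minimal sketch (the function name is mine, and the toy scores are invented):

```python
import statistics

def cohens_d(males, females):
    """Cohen's d: (mean of males - mean of females) divided by the
    pooled within-group standard deviation."""
    n_m, n_f = len(males), len(females)
    pooled_var = ((n_m - 1) * statistics.variance(males) +
                  (n_f - 1) * statistics.variance(females)) / (n_m + n_f - 2)
    return (statistics.mean(males) - statistics.mean(females)) / pooled_var ** 0.5

# Toy scores, purely illustrative:
print(round(cohens_d([510, 495, 530, 480], [505, 500, 515, 485]), 3))  # 0.143
```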

          3. You quote the following passage from Hyde et al. (2008): "Effect sizes for gender differences, representing the testing of over 7 million students in state assessments, are uniformly <0.10, representing trivial differences…" Here I'm concerned about how the authors interpret their results. The statistic they're referring to is Cohen's d-statistic (I apologise if I'm telling you something you already know), and they cite Cohen's book, which offers a heuristic to the effect that d-values less than 0.2 are small. Sure, but that only deals with economic significance; what about statistical significance (which is probably more important in this case)? The authors are completely silent on this issue.

          4. You quote the following passage from Hyde et al. (2008): "Of these effect sizes, 21 were positive, indicating better performance by males; 36 were negative, indicating better performance by females; and 9 were exactly 0. From this distribution of effect sizes, we calculate that the weighted mean is 0.0065, consistent with no gender difference…." This looks like poor methodology. First, the authors had data for 10 grades from 10 states, yet they've calculated only 36+21+9=66 d-statistics; you would expect 100 such values. Second, they report that the weighted average of the individual d-statistics is 0.0065. However, averaging d-statistics across completely different distributions is pretty meaningless. To see why, note that girls and boys may well develop at different rates from Grade 2-Grade 11, so that overall test score distributions could vary substantially over time. On top of that, the mathematics tests they write at different ages are completely different. I can't see how to interpret a composite statistic constructed as a weighted average across heterogeneous distributions.

          5. You quote the following passage from Hyde et al. (2008): "In contrast to earlier findings, these very current data provide no evidence of a gender difference favoring males emerging in the high school years; effect sizes for gender differences are uniformly <0.10."

          6. You quote the following passage from Hyde et al. (2008): "All VRs, by state and grade, are >1.0 [range 1.11 to 1.21 ...]. Thus, our analyses show greater male variability, although the discrepancy in variances is not large..." In keeping with their general indifference to statistical significance, the authors provide no p-values to accompany their results (which is very poor form in any empirical study). Coincidentally, the very next issue of Science published the study by Stephen Machin and Tuomas Pekkarinen, "Global sex differences in test score variability," Science, 322:133-134, 2008. It repeated much of the analysis performed by Hyde et al. (2008), using PISA data for 2003. However, the authors also had the good sense to disclose the statistical significance of their results. To summarise, for the U.S. alone, they found d-statistics for reading and mathematics of -0.32 and 0.07, respectively, with p<0.01 in both cases (i.e. both results are significant at the 1% level). They also found variance ratios of 1.17 for reading and 1.19 for mathematics, with p<0.01 in both cases. Finally, the d-statistics (gender gaps) for mathematics in the top 5% and bottom 5% were found to be 0.22 and -0.11, respectively (p<0.01).

          7. Hyde et al. (2008) simply announce that variance ratios between 1.11 and 1.21 are "not large." Really? Suppose boys' and girls' maths scores are normally distributed with the same mean, and a variance ratio of 1.2. Then a quick calculation reveals that boys will outnumber girls by nearly 2:1 in the top percentile of maths performers. Since those children are most likely to enter high-earning professions, how can anyone argue that such values are economically insignificant?
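That quick calculation can be checked; a sketch under stated assumptions (equal means, girls' variance normalized to 1, a 50/50 boy/girl mixture, and a cutoff at the top 1% of the combined pool; `top_share_ratio` is my own name). Under these assumptions a variance ratio of 1.2 yields a ratio just under 2:1:

```python
from statistics import NormalDist

def top_share_ratio(var_ratio, tail=0.01):
    """Boys:girls ratio above the top-`tail` cutoff of the combined pool,
    assuming equal means, girls ~ N(0, 1), boys with `var_ratio` times
    the girls' variance, and a 50/50 mixture of the two groups."""
    girls = NormalDist(0, 1)
    boys = NormalDist(0, var_ratio ** 0.5)

    # Locate the mixture's upper-tail cutoff by bisection.
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        mix_tail = ((1 - girls.cdf(mid)) + (1 - boys.cdf(mid))) / 2
        lo, hi = (mid, hi) if mix_tail > tail else (lo, mid)
    cutoff = (lo + hi) / 2

    return (1 - boys.cdf(cutoff)) / (1 - girls.cdf(cutoff))

print(round(top_share_ratio(1.2), 2))  # ~1.8
```

For comparison, `top_share_ratio(1.0)` returns 1.0; reaching much larger tail ratios requires either a bigger variance gap or a mean difference on top.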

          8. However, variance ratios are not even that interesting - we're really interested in the tails of the distributions. (Recall that my initial summary of the results in Stoet and Geary (2013) said "...the left-hand tail (i.e. the worst performers) reveals little difference between boys and girls, while the right-hand tail (i.e. the best performers) is dominated by boys." I said nothing about variance ratios.) Unfortunately, Hyde et al. (2008) undermine your argument a bit on this score, by reporting that boys outnumber girls by 2:1 in the top percentile of maths performers.

          9. You write: "Ok, so there is some evidence for variance, but it’s not on the scale that would explain real-world gender disparities and there might be a strong cultural effect driving it, instead of biology." Here I agree with you. Even if boys outnumber girls by 2:1 in the top percentile of maths performers, we should still expect to see around 33% of scientists, engineers and mathematicians being women. I also agree that the data is insufficient to infer a biological explanation for the gender gap in maths test scores - there is more likely a constellation of environmental factors at play (Stoet and Geary (2013) argue that it boils down to issues with pedagogy). In any case, irrespective of the causes of the gender gap in maths, there is no doubt that it is a socially harmful phenomenon, and we have every incentive to eliminate it. Where I disagree with you is on the question of whether the gap exists in the first place, and on its economic and statistical significance; I think it is a robust empirical feature - especially in the tails of the distribution - and while denying its existence may be comforting, you can't solve the problem that way.

          10. You write: "So while that might be a nice study, when you gather together all studies you get a different picture." I think you're wrong about this. As far as I can tell, the claim that the distributions of mathematics scores for boys and girls are identical represents a minority view. Very few serious studies make such a claim - not even Hyde et al. (2008), when you consider their evidence on the tails.

          • Unfortunately, you probably haven’t chosen the best article upon which to base your case.

            Fair enough, I'll use yours instead.

            Curiously, you omit the very next sentence: “However, measurable differences existed for complex problem-solving beginning in high school years (d=+0.29 favoring males), which might forecast underrepresentation of women in science, technology, engineering, and mathematics (STEM) careers.”

            I omitted it because it was irrelevant. The corollary of "regression to the mean" is that "a subset or sample can vary significantly from the mean." Finding one subset of mathematical skill that favors men does not contradict the theory that there are no sex differences overall. What it does do is demonstrate the variability in the dataset, which is associated with cultural explanations (as the realist position assumes the difference is universal and thus should not vary), so at best you've just shown cultural factors can cause Cohen's d to vary by about 0.29. This will come in handy later.

            Sure, but that only deals with economic significance; what about statistical significance (which is probably more important in this case)? The authors are completely silent on this issue.

            You don't seem to understand what statistical significance means. Suppose I demonstrate, to a very high statistical significance, that flipping a specific coin will result in heads 51% of the time. Should I stop using it for flipping? Probably not, as most coin flips are one-off events, and the consequences of a slight bias are negligible. Statistical significance only tells us how confident we can be that the results weren't arrived at by chance; it says nothing about how those results came about or what they mean in daily life.
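The coin example can be made concrete; a sketch using a normal approximation to the binomial (the function name and flip counts are mine, purely for illustration):

```python
from statistics import NormalDist

def coin_test(heads, flips, p0=0.5):
    """Two-sided z-test of a coin's heads rate against p0
    (normal approximation to the binomial)."""
    p_hat = heads / flips
    se = (p0 * (1 - p0) / flips) ** 0.5
    z = (p_hat - p0) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# The same 51% heads rate: not significant at 1,000 flips,
# overwhelmingly significant at 100,000 flips.
for flips in (1_000, 100_000):
    z, p = coin_test(round(0.51 * flips), flips)
    print(flips, round(z, 2), round(p, 6))
```

The effect size (a 1% bias) never changes; only the certainty that it is real does, which is exactly the distinction between statistical and practical significance.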

            Also, what I linked to was a two-page summary published in Science, and there wasn't enough room there to cover the subtleties. Hyde has covered your basic argument about variance; in fact, she did so over thirty years ago.

            Assuming that engineering requires a high level of spatial ability, can the gender difference in spatial ability account for the relative absence of women in this profession? The above findings of such a small gender difference would appear to argue that the answer is no. However, the question has now shifted from a discussion of overall mean differences in the population to differences at the upper end of the distribution. And relatively small mean differences can generate rather large differences at the tails of distributions, as the following sample calculation will show. Assume, conservatively, that the gender difference in spatial ability is .40 SD. Using z scores, the mean score for males will be .20 and the mean for females will be -.20. Assume also that being a successful engineer requires spatial ability at least at the 95th percentile for the population. A continuation of the z-score computation shows that about 7.35% of males will be above this cutoff, whereas only 3.22% of females will be. This amounts to about a 2:1 ratio of males to females with sufficient ability for the profession. This could, therefore, generate a rather large difference although certainly not as large a one as the existing one.

            The disparity would become even larger if one considered some occupational feat, such as winning a Nobel prize or a Pulitzer prize, that would require even higher levels of the ability. For example, suppose that spatial ability at the 99.5th percentile is now required. The same z-score calculations indicate that .85563% of males and .27375% of females would be above that cutoff, for an approximate 3:1 ratio of males to females. Once again, though, this is not nearly a large enough difference to account for the small proportions of women winning Nobel prizes.
            Hyde, Janet S. "How large are cognitive gender differences? A meta-analysis using ω² and d." American Psychologist 36.8 (1981): 892.

            This is a finding she repeats nearly thirty years later: "Gender differences in math performance, even among high scorers, are insufficient to explain lopsided gender patterns in participation in some STEM fields."
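Hyde's z-score arithmetic can be reproduced directly; a sketch using Python's standard library (the function name is mine; the z = 1.65 cutoff matches her rounded 95th percentile):

```python
from statistics import NormalDist

def tail_shares(d, cutoff_z):
    """Share of each group above a z-score cutoff, assuming unit-variance
    normal distributions with means +d/2 (males) and -d/2 (females)."""
    z = NormalDist()
    males = 1 - z.cdf(cutoff_z - d / 2)
    females = 1 - z.cdf(cutoff_z + d / 2)
    return males, females

# d = 0.40 with a 95th-percentile cutoff rounded to z = 1.65, as in Hyde (1981)
males, females = tail_shares(0.40, 1.65)
print(f"{males:.2%} vs {females:.2%}")  # 7.35% vs 3.22%, about 2:1
```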

            This brings me to your points 7 through 9. Note that in order to do the above calculations, Hyde assumed those d values were entirely explained by biological factors. As you pointed out above, though, they're within the range of what can be generated by social factors. So the central assumption behind her calculation is not established, and thus we can wave the entire thing away with Hitch's Razor: that which can be asserted without evidence can be dismissed without it. The same applies to your calculations.

            First, the authors had data for 10 grades from 10 states, yet they've calculated only 36+21+9=66 d-statistics; you would expect 100 such values.

            If all ten states returned scores for all ten grades, that is. Look at the N values on the main chart; Grade 4 has 763,155 samples, while Grade 5 has 929,155. Is it more likely there was a massive demographic bump that caused an increase in the birth rate by 160,000 over those ten states, or that some of them didn't return test scores below Grade 5? This guesswork is confirmed in the chart on page two, which breaks down the results by state and grade. Wyoming has six gold squares, indicating that it returned only six grades worth of data.

            However, averaging d-statistics across completely different distributions is pretty meaningless. To see why, note that girls and boys may well develop at different rates from Grade 2-Grade 11, so that overall test score distributions could vary substantially over time.

            Hold up, I thought you were arguing for an overall sex difference? Now you're shifting the goalposts, quietly switching your hypothesis to another one that argues for transitory sex differences during development. It doesn't help that your own dataset is drawn from an even narrower age range (“the exact age for inclusion is 15 years and 3 months to 16 years and 2 months”), so by taking this tack you've also defanged your own citation.

            In keeping with their general indifference to statistical significance, the authors provide no p-values to accompany their results (which is very poor form in any empirical study).

            The p-value measures how surprising the data would be if the null hypothesis were true; in this case, the null hypothesis is that there is no correlation between sex and math scores. Hyde's hypothesis is that there is no correlation between sex and math scores.

            Hyde's hypothesis is the null hypothesis, thus it carries no p-value. And as I hinted at before, p-values are overrated.

            Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude –not just, does a treatment affect people, but how much does it affect them.

            -Gene V. Glass

            The primary product of a research inquiry is one or more measures of effect size, not P values.

            -Jacob Cohen

            These statements about the importance of effect sizes were made by two of the most influential statistician-researchers of the past half-century. Yet many submissions to Journal of Graduate Medical Education omit mention of the effect size in quantitative studies while prominently displaying the P value.
            Sullivan, Gail M., and Richard Feinn. "Using effect size - or why the p value is not enough." Journal of Graduate Medical Education 4.3 (2012): 279-282.

            But back to you:

            As far as I can tell, the claim that the distributions of mathematics scores for boys and girls are identical represents a minority view.

            And yet the very study you cite argues for gender similarities rather than differences. Don't believe me? Ask yourself this: what are the effect sizes that paper found?

            Stoet G, Geary DC (2013) “Sex Differences in Mathematics and Reading Achievement Are Inversely Related: Within- and Across-Nation Assessment of 10 Years of PISA Data.” PLoS ONE 8(3): e57988. doi: 10.1371/journal.pone.0057988

            Don’t see them? That’s because the authors buried those numbers in a misleading chart, and hid the decryption key two paragraphs back:

            For all analyses, we express sex differences in PISA score points. These scores are not “raw” scores, but result from a statistical analysis that normalizes student scores ... such that the average student score of OECD countries is 500 points with a standard deviation of 100 points. The advantage of this is that scores become easily comparable and differences easily to interpret. For example, a 10 point difference between boys and girls reflects approximately 1/10th of a standard deviation.

            There’s no indication of sample sizes or standard deviations for boys and girls separately, so we have to assume they’re roughly equal and approximate Cohen’s d. That makes the calculations easy here; just take the point difference and divide it by 100. So for the gender gap in median math performance, d is constant at roughly 0.1, and for median reading we get values that increase from 0.3 to 0.4, ish.

            OK, so what do those numbers mean? Here’s a handy way to visualize them; adjust the slider to the effect size you want, and watch the distributions and numbers change. The percentage of overlap is the area where both distributions visually overlap, relative to all samples. My favorite number on that page is the “probability of superiority;” invented by KO McGraw and SP Wong in 1992, it asks a simple question: if you picked a random thing from sample A, and a random thing from sample B, what are the odds that A would be “superior” to B? I invented a similar-but-slightly-different metric less than a year ago, “predictability:” if I plucked a random value from the total pool of samples, how accurately could you predict which distribution it came from? My metric tends to be a smidge smaller, but in the same ballpark; calculate it by halving the percentage of overlap and subtracting the result from 100%.

            Sorry, I got distracted. Point is: even a Cohen’s d of 0.3 isn’t much of a difference, and yet the overall gender differences in this paper struggle to hit that mark. For math, there’s a 96% overlap between boys and girls, a 53% probability of superiority, or a “predictability” of 52%. For the 2009 overall reading difference of 0.4, we find an overlap of 84%, a 61% probability of superiority, or a “predictability” of 58%. Not impressive.
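All three summary numbers follow from d under an equal-variance normal model; a sketch (the function name is mine, and the "predictability" line implements the metric described above):

```python
from statistics import NormalDist

def effect_size_summary(d):
    """Overlap (OVL), McGraw & Wong's probability of superiority, and the
    'predictability' metric described above, for two unit-variance normal
    distributions whose means differ by d."""
    z = NormalDist()
    overlap = 2 * z.cdf(-abs(d) / 2)
    superiority = z.cdf(d / 2 ** 0.5)
    predictability = 1 - overlap / 2  # 100% minus half the overlap
    return overlap, superiority, predictability

# The 2009 overall reading gap of d = 0.4:
print(tuple(round(x, 2) for x in effect_size_summary(0.4)))  # (0.84, 0.61, 0.58)
```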

            But it gets worse, because those are overall differences. There’s very little genetic difference between human beings, so any variation we see in the numbers should be indicative of culture, not biology. So to get a true sense of who’s in charge, we need to consider the variation between countries as well. A look at the relevant chart shows a huge amount of variation. Math differences, translated back into Cohen’s d, range between 0.32 and -0.175; reading differences range between 0.7 and 0.05. That’s a spread of about 0.5 and 0.65 respectively, well above the overall mean difference for each category. Culture not only has quite a bit more sway over scores than biology, it looks capable of obliterating any biological gap.

            Except the picture is even worse than that. Which countries were sampled?

            The number of countries contributing to the PISA data sets include both OECD and OECD-partner countries. The number of participating countries/regions (e.g., Hong Kong) has increased to 74 in 2009

            “Organization for Economic Cooperation and Development.” Not only were the sampled countries skewed towards the richest on the planet, they were skewed towards countries that engage in substantial economic trade with one another… and this would tend to homogenize their cultures. But sitting behind the assertion “the central value represents biological tendencies” is the assumption “all our samples were taken from heterogeneous cultures, representative of humanity as a whole.” That plainly isn’t true, and worse still, the paper conveniently provides evidence both of this homogeneity, and that a more heterogeneous sample would show a smaller gender gap.

            It’s all in this chart. I’ll let the paper speak for itself here:

            The OECD countries not only have higher overall scores, their mathematics gap, favoring boys, is more tightly clustered between −5.5 and 17.5 points (M = 10.5, SD = 5.1). The two outliers are Iceland (HDI rank = 2, GGGI rank = 1) and Georgia (HDI rank = 61, GGGI rank = 40). In contrast, there is considerable variability in the non-OECD countries (between −15.0 and 30.0 points, M = 5.4, SD = 10.5), with boys having higher mathematics achievement in some of them (e.g., Costa Rica) and girls having higher mathematics achievement in others (e.g., Albania).

            The included “non-OECD” countries still skew towards the rich and trade-happy, but nonetheless form a more culturally heterogeneous sample. And they demonstrate both a smaller gender gap and greater variation when separated out from the official OECD countries.

            I could go on (students demonstrate a much greater gender gap than the general population, for instance, and I never discussed sample sizes), but this comment is getting a bit long. So, let’s cut it short with a summary of what your own citation demonstrates:

            - It demonstrates the gender differences in math scores are negligible, and small-ish when looking at reading scores.
            - It demonstrates cultural influences have a greater influence than any biological ones, and that they’re capable of wiping out any biological difference.
            - It is drawn from a pool of rich countries that have some degree of cultural homogeneity, and demonstrates the central scores are likely contaminated by cultural overlap, further minimizing the influence of biology.

            The authors were oblivious to all this, too, as they approached their dataset from the assumption of difference, rather than similarity.

      • The problem with statistics, even accurate ones, is that they can lie while telling the truth. For example - how many of those "countries" are female dominated, or neutral on gender, instead of part of the vast majority, which have been, and continue to be, male dominated? This is the critical problem with trying to measure such things. There is literally no way, at all, to create a situation in which to collect such statistical data in which there is not **already** a bias in the data set. It's like trying to study how wet something can get, when rained on, while 50 feet under water. It doesn't matter how you collect your "statistics", they are never going to be accurate.

        To put it simply, to acquire an accurate assessment of the *real* differences, you would have to do the unethical and immoral - remove thousands of people from society, into a lab, from birth, and have them grow up with "only" those skills you are testing for, while removing any possible accidental biases that might arise from literature sources (in the case of language, for example), as well as any and all possible other sources by which they might be prone to bias their own behavior, or have it biased by the expectations of outside influences.

        There is no way to collect statistics, from any kind of meaningful category of nations, without running head first into the **preexisting** cultural biases, which, by existing, alter the baseline, right from moment one, into a framework that would, however unintentionally, alter the perceptions, and expectations, of the very people you want to test.

        Show me how you **ever** successfully correct for that, without the inherent margin of error that it causes, presuming you can even reliably define the margin of error, without a neutral baseline to start with, and.. maybe I will agree with your assessment. But... the actual evidence suggests that the bias persists, all across the board, even as the presumption of just what the statistics **should show** has drifted closer and closer together, narrowing the perceived gap.

        The point being, there may be one, but you first have to show that, in fact, that gap is real, not artificially induced, by the continued perceptions and assumptions, which plague every single one of the nations in the studies. Until you can show that the data is not, in effect, poisoned at the well, you have no grounds to claim that the differences detected are "innate" characteristics, instead of statistical anomalies, arising from uncontrolled, nah.. uncontrollable variables. And, none of these studies are capable of showing this to be the case, and prior "studies", which failed to even acknowledge such bias, have been rendered worthless, because "culture" changed, and made the assumptions change with them.

        Your hubris is in assuming that they won't change again, that you have somehow controlled for the uncontrollable, *and* that time will somehow prove out the differences as real, and not a product of a nearly universal set of cultural assumptions, which are prevalent even among people that insist they have left them long behind, but who, nevertheless, cannot escape the influence of the wider culture, which has not given them up, and the influence of which cannot be avoided entirely (and perhaps not even significantly).

        • @kagehi said "Until you can show that the data is not, in effect, poisoned at the well, you have no grounds to claim that the differences detected are 'innate' characteristics, instead of statistical anomalies,..." Please don't put words in my mouth - I never claimed the results of the study in question were evidence of "innate characteristics." That is your interpretation.

          "The point being, there may be one, but you first have to show that, in fact, that gap is real, not artificially induced, by the continued perceptions and assumptions, which plague every single one of the nations in the studies." You're clutching at straws, my friend. When somebody presents empirical evidence of this sort that you wish to refute, you have the following options: (i) replicate the study, and demonstrate that you don't obtain the same results; (ii) identify a methodological flaw in the analysis; or (iii) explain exactly what sort of bias could bedevil the results, and demonstrate it. Vague nonsense about "...continued perceptions and assumptions..." is just pseudo-science.

          In truth, the results obtained by the study in question make it quite robust to claims of bias. In particular, what type of bias could generate a negative relationship between the gender gaps for language and mathematics? Be specific if you want to tackle that question - no "culture" mumbo-jumbo.

          Unfortunately, the rest of what you wrote only reveals a poor understanding of statistical inference. For example, the stuff about "baselines" is completely irrelevant, and implies that you really don't understand the type of analysis conducted in the study. I suggest you actually read the article, and pay attention to the authors' interpretation of the results, and their resulting policy recommendations. You may find yourself agreeing with them.

          • Just going to say this, since Hornbeck already explains it way better than I can. If you can't determine, or factor for, or control, the variables that confound your statistics, then your statistics are useless when determining what underlies the system.

            Your own challenge to me is, absurdly, "Of course not, the statistics are only showing the current conditions of the system." - i.e. "I never claimed the results of the study in question were evidence of “innate characteristics.”" But we are not talking about whether or not the current state exists, or is real. We are talking about whether or not there are innate reasons, based purely on the gender of the individuals, without any other confounding factors, that result in the gaps seen.

            My argument is, in a nutshell, that it's bloody meaningless to talk about how likely it is that your car will get wet in the rain if your "statistics" are based on a community where 90% of the population walks every place, and all of their cars remain parked in a garage 24/7, something like 200 out of 365 days a year. Of course the statistics are going to say, "Cars don't get wet when it rains.", if those are the conditions under which you are bloody collecting the evidence. Only, what we have here is the opposite situation - **no one** is keeping their car in a garage, so, of course, the statistics are saying, "When it rains, everyone's car gets wet." And you actually seem to think this means anything, when the question is, "What if it was parked in a garage?"

            Or, in other words, the question isn't, "Do these biases exist, period.", which you rightly say they do, but, "Would they still exist, if you changed the bloody variables, so that the whole entire system wasn't stacked against women, from how much they get paid, compared to men, to what they are told, via the mythologies of society, about their math skills, to how they are told to act, to think, to react, etc. If you removed the variables, what would be the result?"

            All you do is keep babbling, "But.... with the variables still there, there is this huge gap!"

            No one cares, because it's not about the state of the system as it exists; it's about the state of the system without the confounding variables. There is clear evidence that, when removed, the "gap" starts disappearing, but we have no way to completely remove the variables, so we have no valid statistical data to say if the gap **is** still there when/if you remove all of the external factors. But there is sufficient evidence to suggest that, with the ones we **can** remove dealt with, the gap all but vanishes anyway. And then you show up again, and start rambling, "Yes, but, when we flat out refuse to adjust for, or remove, those variables at all, there is this massive gap!!!" Argh!!!

  • I am a woman and I generally quite like Sam Harris, but yeah, that was pretty sexist of him. I don't think perceived sexism is the main reason women aren't as keen on him and the whole secularism movement, though. Women are more likely to be religious, even in explicitly misogynistic faiths. Possibly because being brought up as a girl in that environment doesn't equip you to leave as easily as your brother can, since your sense of self-worth is linked to your 'purity', etc. Lots of reasons. I don't really know, though.

    • Hmm. Not sure why. He contributed a lot, yes, at one time. Now... everything he does seems to center on his personal obsessions with Islam, even to the point where he can't write a book on the subject of atheism without rambling about his personal "Red Scare" boogiemen and what we "need" to do about it. When he isn't doing that, he is happily reposting/retweeting things from people that are ***not*** allies to atheism, or progressives, or anyone with morals, but which "echo" his personal sentiments about sexism (or, rather, his absolute deafness and blindness to it, its causes, its cures, or his own personal biases, etc.). In effect, he has no clue that the people "defending" him on these things are, more often than not, the same kind of assholes that defend Trump (for example), when he says the same questionable things.

      It's fine to value the man's contributions. It's not at all reasonable to deny that he has gone off the deep end in recent years, with far too little new contributed to any discussion, and all of it poisoned by his personal paranoias, fears, obsessions, and, to be frank, a huge bloody blind spot when it comes to social issues and sexism - one shared, almost universally, by those who once tried to set themselves apart from the old, less enlightened atheists by tagging "New" onto the moniker. Sadly, the only thing that is "new" about it now is how quickly it started making excuses for failing to actually change any of the things the prior atheists were getting wrong.

      Harris seems convinced - despite the existence of honor killings, which still happen among Christians even in the "civilized" parts of the West, and the rampant presence of witch hunts and other insanities by the same in some parts of the world - that every single person who is Muslim is "secretly out to get us", and is therefore much scarier. The truth: there are probably no more of them in the world than there are crazy Christians, but the latter are too busy trying to rob, and legislate to death, everyone they hate to bother with IEDs. That doesn't stop them from killing their kids (just not stoning them), or bombing clinics instead of markets, and doing so for the same reason - to terrorize people into doing what they want. It's the same sort of idiots, just with slightly different tactics, a different anthem for when they do it, and the same percentage of the population. His answer: kill a lot of people, and they will change. Right... because that is "so" much different from what terrorists think...

      And when it comes to sexism, he is right up there with all the rest of them: denying that it happens; then, if you prove it does, denying that it's as bad as you say; then, if you manage to prove that, denying that he is partly responsible for it; and if you can prove that, denying that the people defending all his denials are, oddly enough, some of the best-known scum on the internet, saying vastly worse things while praising him for, apparently, agreeing with their own ranting.

      He needs to stop obsessing over Islam in general and deal with it like any other problem - with a nuanced understanding. Hating/fearing all Muslims isn't going to make the bad ones go away, or stop the rest from turning, in reaction to such anger, into more bad guys. And with respect to social issues, the entire "New Atheist" movement needs to pull its head out of its ass and realize that getting rid of religion, while not addressing all the problems it helped to create or perpetuate, is like steering away from reefs because you realize they make boats sink, while ***refusing to fix the holes that are already in them***. Removing religion won't do a damn thing - it's no longer the cause of the problems but one of the bloody symptoms. Or, at best, it's symbiotic with the other causes - helping them feed, but long since unnecessary for their survival. Yet the whole bloody lot of them seem obsessed with the idea that their one, single mission is to rid the world of religion - and everything else is a sideline, a "distraction", or will somehow just go away when they win.

      They are, in short, delusional about more than just why a bloody lot of women refuse to have anything to do with them.

      So... yeah... I'm really not sure why, other than some sort of hero worship of his long-past work, you like the guy, or any of the rest of the "leadership" of this particular wreck.