"The first principle [in science] is that you must not fool yourself, and you are the easiest person to fool." - Richard Feynman
It feels nice to start a note with a quote from some famous guy; it just makes the whole thing that much more official. I decided to break from my proud two-note tradition and write about some psychology that doesn't have to do with gay sex, much to the disappointment of myself and my hordes of dedicated fans. Instead, today I'll be examining the standards of evidence by which claims are evaluated.
Were we all surveyed, the majority of us would report that we're - without a doubt - in at least the top half of the population in terms of intelligence, morality, and penis size. We'd also probably report that we're relatively free from bias in our examination of evidence, at least compared to our peers; the unwashed masses of the world, sheep that they are, lack the critical thinking abilities we have. Were we shown research finding that the majority of people surveyed also consider themselves remarkably free of bias relative to the rest of the world - a statistical impossibility - we'd shake our heads at how blind other people can be to their own biases, all the while assuring the researcher that we really are that good.
I think you see where I'm going with this, since most everyone is above average in their reasoning abilities.
In some (most?) cases when evidence isn't present, it's simply assumed to exist. If I asked you whether making birth control pills more available would increase or decrease the happiness of women, on average, I'd guess you would probably have an answer for me that didn't include the phrase "I don't know". How do you suppose you'd respond if you then read about research that contradicted your answer?
In real life, evidence is a tricky thing. Results from almost any source can be tainted by any number of known and unknown factors. Publication bias alone can lead to positive results being published more often than null results, inflating the proportion of published findings that are false positives, not to mention other statistical sleights of hand that won't be dealt with here. The way questions are asked can lead respondents towards giving certain answers. Sometimes the researchers think they're measuring something they aren't. Sometimes they're asking the wrong questions. Sometimes they're only asking certain groups of people who differ in important ways from other people. Sometimes (often) the answers people give to questions don't correspond well to their actual behavior. There are countless possible flaws, uncontrolled variables, and sources of noise that can throw a result off.
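To make the publication bias point concrete, here's a minimal simulation sketch (my own illustration - the base rate, alpha, and power numbers are assumptions, not figures from any study discussed here):

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed, illustrative numbers.
    n_studies = 100_000
    p_true_effect = 0.10  # 10% of tested hypotheses are actually true
    alpha = 0.05          # chance a null effect comes out "significant"
    power = 0.50          # chance a real effect comes out "significant"

    is_true = rng.random(n_studies) < p_true_effect
    significant = np.where(is_true,
                           rng.random(n_studies) < power,   # true effects
                           rng.random(n_studies) < alpha)   # null effects

    # Publication bias: only the "positive" (significant) results see print.
    published = significant
    false_positive = published & ~is_true

    print(f"False positives among all studies run: {false_positive.mean():.1%}")
    print(f"False positives among published studies: "
          f"{false_positive.sum() / published.sum():.1%}")

Under these made-up numbers, false positives are under 5% of all the studies run, but nearly half of everything that makes it into print. No individual study got any worse; filtering on significance did all the work.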
Here's the good news: people are pretty alright at picking out those issues (and I do stress alright; I'm not sure I'd call them good at it). Here's the bad news: people are substantially worse at doing it when the information agrees with what they already think.
Two papers examined this tendency: Lord, Ross, & Lepper (1979) and Koehler (1993). In the first, subjects were surveyed about their views regarding the death penalty and categorized as either strongly in favor of it or strongly opposed. The subjects were then given a hypothetical research project and its results to evaluate; the results either supported or opposed the death penalty's usefulness in reducing crime. Following this, they were given another study that came to the opposite conclusion. So here we have people with very strong views being given mixed evidence. Surely, seeing that the evidence was mixed, people would begin to mellow in their views, perhaps compromising on merely breaking a thief's hands instead of killing him or letting him escape unharmed, right?
Well, the short answer is "no"; the somewhat longer answer is "nooooooo". When subjects rated the research they were presented with, they readily pointed out ways the research opposing their views could have been poorly conducted and why its results weren't valid. However, they found no corresponding problems with the results that supported their views, or at least no problems really worth worrying about. Bear in mind, they read this evidence back to back. Their views on the subject, both pro and con, did not soften; if anything, they became slightly more polarized than they were at the beginning.
Koehler (1993) found a similar result: when graduate students evaluated hypothetical research projects, the projects that found results consistent with the students' beliefs were rated more favorably than those with opposing results. We're not just talking unwashed masses anymore; we're talking about unwashed and opinionated graduate students. There was also an interaction effect: the stronger the preexisting belief, the more favorably agreeing studies were rated. A second study replicated this effect using a population of skeptics and paranormal researchers examining evidence for ESP (if you're curious, the biases of the paranormal researchers seemed somewhat less pronounced. Are you still feeling smug about the biases of others, or are you feeling the results aren't quite right?).
The pattern that emerges is that bias creeps in progressively as investment in a subject increases. We see high-profile examples of it all the time in politics: statistics are cited that are flimsy at best and made up at worst. While we often chalk this up to politicians simply lying outright, the truth is probably that they genuinely believe what they're saying; they may have accepted it without looking into it at all, or looked into it with a somewhat relaxed critical eye.
And before we - with our statistically large penises and massive intellects - get all high and mighty about how all politicians are corrupt liars, we'd do well to remember that the research I just talked about didn't focus on politicians. The real difference between non-politicians and actual politicians is that the decisions of the latter group tend to carry consequences and are often the center of public attention. You're probably no different; you're just not being recorded and watched by millions of people when you do it.
Recently, I had someone cite a statistic at me that the average lifespan of a transsexual was 23 years. As far as I can tell, the source of that statistic is that someone said it once and it got repeated. I'm sure many people have heard statistics about how many prostitutes are actually being coerced into working against their will; you might do well to consider this: http://neuroskeptic.blogspot.com/2009/10/on-sexed-up-statistics.html. Many are probably familiar with the statistic that women earn 75 cents to every dollar a man earns as a result of sexism and discrimination. Some of you will be pleased to know that the discrepancy drops very sharply once you actually start to control for basic things, like number of hours worked, education, field of work, etc. Is some percentage of whatever gap remains due to sexism? Probably, but it's far, far smaller than many would make it out to be; the mere existence of a gap is not direct evidence of sexism.
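If you want to see what "controlling for basic things" does to a raw gap, here's a toy sketch with entirely made-up numbers; it illustrates the statistical logic only and is not a model of real wage data:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    group = rng.integers(0, 2, n)  # two arbitrary groups, 0 and 1
    hours = np.where(group == 0,
                     rng.normal(42, 5, n),   # group 0 averages more hours
                     rng.normal(37, 5, n))
    # Pay is driven almost entirely by hours; the direct group effect is small.
    pay = 20 * hours - 15 * group + rng.normal(0, 40, n)

    raw = pay[group == 1].mean() / pay[group == 0].mean()
    print(f"Raw pay ratio (group 1 / group 0): {raw:.2f}")

    # Crudely "control" for hours: compare only people working 39-41 hours.
    band = (hours > 39) & (hours < 41)
    adj = pay[(group == 1) & band].mean() / pay[(group == 0) & band].mean()
    print(f"Pay ratio among similar-hours workers: {adj:.2f}")

The raw ratio comes out around 0.86 - a seemingly large gap - while comparing like with like puts it around 0.98. A small direct effect is still in there (I put it there), but most of the headline gap was the hours difference. A real analysis would use regression with several covariates rather than crude stratification, but the logic is the same.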
Not only are unreliable statistics like these parroted by people who want to believe (or disbelieve) them for one reason or another, but the interpretations of those statistics are open to the same problem. I'm sure we can all think of times other people made this mistake, but I'll bet most of us would struggle to think of times we did it ourselves, smart and good-looking as we all are.
References:
Koehler, J.J. (1993). The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Organizational Behavior and Human Decision Processes, 56, 28-55.
Lord, C.G., Ross, L., & Lepper, M.R. (1979). Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology, 37, 2098-2109.