Can you even win debates? I’ve never heard someone go, "My opponent makes a ton of sense; I’m out." - Daniel Tosh

In my younger days, I lost a few years of my life to online gaming. Everquest was the culprit. Now, don't get me wrong, those years were perhaps some of the happiest of my life. Having something fun to do at all hours of the day, with thousands of people to do it with, has that effect. Those years just weren't exactly productive. While I was thoroughly entertained, when the gaming was over I didn't have anything to show for it. A few years after my gaming phase, I went through another one: chronic internet debating. Much like online gaming, it was oddly addictive and left me with nothing to show for it when it all ended. While I liked to justify it to myself - that I was learning a lot from the process, refining my thinking and my arguments, and being a good intellectual - I can say with 72% certainty that I had wasted my time again, and this time I wasn't even having as much fun doing it. Barring a few instances of cleaned-up grammar, I'm fairly certain no one ever changed my opinion about a thing, and I changed about as many in return. You'd think that, with all the collective hours my fellow debaters and I had logged, we might have been able to come to an agreement about something. We were all reasonable people seeking the truth, after all.
Just like this reasonable fellow.
Yet, despite that positive and affirming assumption, debate after debate devolved into someone - or everyone - throwing their hands up in frustration, accusing the other side of being intentionally ignorant, too biased, intellectually dishonest, unreasonable, liars, stupid, and otherwise horrible monsters (or, as I like to call it, suggesting your opponent is a human). Those characteristics must have been the reason the other side didn't accept that our side was the right side, because our side was, of course, objectively right. Debates are full of logical fallacies - personal attacks like those, along with appeals to authority, straw men, red herrings, and question begging, to name a few - yet somehow it only ever seems like the other side is committing them. People relentlessly dragged issues into debates that had no bearing on the outcome, and they always seemed to apply their criticisms selectively.
Take a previously-highlighted example from Amanda Marcotte: when discussing the hand-grip literature on resisting sexual assault, she complained that, "most of the studies were conducted on small, homogeneous groups of women, using subjective measurements." Pretty harsh words for a study comprising 232 college women between the ages of 18 and 35. When discussing another study that found results Amanda liked - a negligible difference in average humor ratings between men and women - she raised no concerns about "...small, homogeneous groups of women, using subjective measurements". That she didn't is hypocritical, considering the humor study had only 32 subjects (16 men and 16 women, presumably undergraduates from some college) and used caption writing as the only measure of humor. So what gives: does Amanda care about the number of subjects when assessing results or not?
The answer, I feel, is, "Yes, but only insofar as it's useful to whatever point she's trying to make". The goal in debates - and in communication more generally - is not logical consistency; it's persuasion. If consistency (or being accurate) gets in the way of persuasion, the former can easily be jettisoned for the latter. While being right, in some objective sense, is one way of persuading others, being right will not always make your argument the more persuasive one; the resistance to evolutionary theory has demonstrated as much. Make no mistake, this behavior is not limited to Amanda or to the people you happen to disagree with; research has shown that this is a behavior pretty much everyone takes part in at some point, and that includes you*. A second mistake I'd urge you not to make is to see this inconsistency as some kind of flaw in our reasoning abilities. There are some persuasive reasons to see inconsistency as reasoning working precisely as it was designed to, annoying as it might be to deal with.
Much like my design for an airbag that deploys when you start the car.
As Mercier and Sperber (2011) point out, the question, "Why do humans reason?" is often left unexamined. The answer these authors provide is that our reasoning ability evolved primarily for an argumentative context: producing arguments to persuade others and evaluating the arguments others present. It's uncontroversial that communication between individuals can be massively beneficial. Information that is difficult or time-consuming to acquire at first can be imparted quickly and almost without effort to others. If you discovered how to complete some task successfully - perhaps how to build a tool or catch fish more effectively - through a trial-and-error process, communicating that information to others allows them to avoid undergoing that same process themselves. Accordingly, trading information can be wildly profitable for all parties involved; everyone gets to save time and energy. However, while communication can offer large benefits, we also need to contend with the constant risk of misinformation. If I tell you that your friend is plotting to kill you, I'd have done you a great service if I was telling the truth; if the information I provided was either mistaken or fabricated, you'd have been better off ignoring me. In order to achieve these two major goals - knowing how to persuade others and knowing when to be persuaded yourself - there's a certain trust barrier in communication that needs to be overcome.
This is where Mercier and Sperber say our reasoning ability comes in: by giving others convincing justifications to accept our communications, as well as being able to better detect and avoid the misinformation of others, our reasoning abilities allow for more effective and useful communication. Absent any leviathan to enforce honesty, our reasoning abilities evolved to fill the niche. It is worth comparing this perspective to another: the idea that reasoning evolved as some general ability to improve or refine our knowledge across the board. In this scenario, our reasoning abilities more closely resemble some domain-general truth finders. If this latter perspective is true, we should expect no improvements in performance on reasoning tasks contingent on whether or not they are placed in an argumentative context. That is not what we observe, though. Poor performance on a number of abstracted reasoning problems, such as the Wason Selection Task, is markedly improved when those same problems are placed in an argumentative context.
While truth tends to win in cases like the Wason Selection Task being argued over, let's not get big-headed about it and insist that this implies our reasoning abilities will always push towards truth. It's important to note how divorced from reality situations like that one are: it's not often you find people with a mutual interest in the truth, arguing over a matter in which they have no personal stake, that also has a clearly defined and objective solution. While there's no doubt that reasoning can sometimes lead people to make better choices, it would be a mistake to assume that's the primary function of the ability, as reasoning frequently doesn't seem to lead people towards that destination. To the extent that reasoning tends to push us towards correct, or improved, answers, this is probably because correct answers are easier to justify than incorrect ones.
As the Amanda Marcotte example demonstrated, when assessing an argument, often "[people] are not trying to form an opinion: They already have one. Their goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness...People who have an opinion to defend don't really evaluate the arguments of their interlocutors in search for genuine information but rather consider them from the start as counterarguments to be rebutted." This behavior of assessing information by looking for arguments that support one's own views and rebut the views of others is known as motivated reasoning. If reasoning served some general knowledge-refining function, this would be a strange behavior indeed. It seems people often end up strengthening not their knowledge about the world, but rather their existing opinions, a conclusion that fits nicely with the argumentative theory. While opinions that cannot be sustained eventually tend to get tossed aside, as reality does impose some constraints (Kunda, 1990), on fuzzier matters for which there aren't clear, objective answers - like morality - arguments have been bogged down for millennia.
I'm not hearing any more objections to the proposal that "might makes right". Looks like that debate has been resolved.
Previous conceptualizations of the function of reasoning have missed the mark and, as a result, have been trying to jam a series of square pegs into the same round hole. Left unable to explain vast swaths of human behavior, researchers simply labeled the behaviors that didn't fit as biases, neglects, blind spots, errors, or fallacies, without ever succeeding in figuring out why they existed - why our reasoning abilities often seemed so poorly designed for reasoning. Placed under a proper theoretical lens and context, all these previously anomalous findings suddenly start to make a lot more sense. While the people you find yourself arguing with may still seem like total morons, this theory may at least help you gain some insight into why they're acting so intolerable.
*As a rule, it doesn't apply to me, so if you find yourself disagreeing with me, you're going to want to rethink your position. Sometimes life's just unfair that way.
References:

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.
Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.