Friday, May 4, 2012

New Location

I've decided to move my writings to a more official site. The new home can be found at

Sunday, April 22, 2012

Depressed To Impress

Reflecting on this morning's usual breakfast cake brought to mind a thought that only people like myself think: the sugar in my breakfast is not sweet. Sweetness is not a property of the sugar itself; it's an experience generated by our minds when sugar is present on the tongue. We generally find the presence of sugar to be a pleasant experience, and we often find ourselves seeking out similar experiences in the future. The likely function of this experience is to motivate people to preferentially seek out and consume certain types of foods, typically the high-calorie variety. As dense packages of calories can be very beneficial to an organism's survival, especially when they're rare, the tendency to experience a pleasant sweetness in the presence of sugar was selected for; individuals who were indifferent between eating sand and eating honey were no one's ancestors.

On a related note, there's nothing intrinsically painful about damage done to your body. Pain, like sweetness, is an important signal: pain signals when your body is being damaged, in turn motivating people to stop doing, or get away from, whatever is causing the harm and to avoid making current injuries worse. Pain feels so unpleasant because, if it didn't, it would not provide the proper motivation. However, in order to feel pain, an organism must have evolved that ability; it's not present by default, as evidenced by the rare people born without the ability to feel pain. As one could imagine, those who were indifferent to the idea of having their leg broken rarely ended up reproducing as well as those who found the experience excruciating.

                                                                                                    Walk it off

Sensations like pain or sweetness can be explained neatly and satisfyingly through these functional accounts. With them, we can understand why things that feel pleasant - like gorging myself on breakfast cake - are not always a good thing (when calories are abundant), whereas unpleasant feelings - like the pain of sticking your arm in a wood-chipper - can be vital to our survival. Conversely, lacking these functional accounts can lead to poor outcomes. For instance, treating a fever as a symptom of an infection to be reduced, rather than as the body's adaptive response to help fight the infection, can actually prolong and worsen said infection (Nesse & Williams, 1994). Before trying to treat something as a problem and make it go away just because it feels unpleasant, or declining to treat a problem because it might feel enjoyable, it's important to know what function those feelings might serve and what the costs and benefits of reducing or indulging them might be. This brings us to the omnipresent subject of unpleasant feelings that people want to make go away in psychology: depression.

Depression, I'm told, is rather unpleasant to deal with. Most commonly triggered by a major, negative life event, depression leads to a loss of interest and engagement in almost all activities, low energy levels, and, occasionally, even suicide. Despite these apparent costs, depression continues to be a fairly prevalent complaint the world over, and is far more common among women than men. Given its predictability and prevalence, might there be a function behind this behavior? Any functional account of depression would need both to encompass these known facts and to propose subsequent gains that would tend to outweigh these negative consequences. As reviewed by Hagen (2003), previous models of depression suggested that sadness served as a type of psychic pain: when one is unsuccessful in navigating the social world in some way, it is better to disengage from a failing strategy than to continue to pursue it, as one would be wasting time and energy that could be spent elsewhere. However, such a hypothesis fails to account for major depression, positing instead that major depression is simply a maladaptive byproduct of an otherwise useful system. Certainly, activities like eating shouldn't be forgone because an unrelated social strategy has failed, nor should one engage in otherwise harmful behaviors (potentially suicidal ones) for similar reasons; it's unclear from the psychic pain models why these particular maladaptive byproducts would arise and persist in the first place. For example, touching a hot pan causes one to rapidly withdraw one's hand, but it does not cause people to stop cooking food altogether for weeks on end.

Hagen (2003) puts forth the idea that depression functions primarily as a social bargaining mechanism. Given this function, Hagen suggests the following contexts should tend to provoke depressive episodes: a person should experience a perceived negative life event, the remedy to this event should be difficult or impossible to achieve on their own, and there must be conflict over other people's willingness to provide assistance in achieving a remedy. Conflict is ubiquitous in the social realm of life; that much is uncontested. When confronted with a major negative life event, such as the death of a spouse or the birth of an unwanted child, social support from others can be at its most important. Unfortunately for those in need, other people are not always the most selfless when it comes to providing for those needs, so the needy require methods of eliciting that support. While violence is one way to make others do what you'd like, it is not always the most reliable or safest method, especially if those you're attempting to persuade are stronger than you or outnumber you. Another route to compelling a more powerful other to invest in you is to increase the costs of not investing, and this can be done by simply withholding benefits that you can provide others until things change. Essentially, depression serves as a type of social strike, the goal of which is to credibly signal that one is not sufficiently benefiting from their current state, and is willing to stop providing benefits to others until the terms of their social contract have been renegotiated.

                                           "What do we want? A more satisfying life. When do we want it? I'll just be in bed until that time...whatever"

Counter-intuitive as it may sound, despite depression feeling harmful to the person suffering from it, the function of depression would be to inflict costs on others who have an interest in you being productive and helpful. By inflicting costs on yourself (or, rather, failing to provide benefits to others), you are thereby motivating others to help you so that they can, in turn, help themselves by regaining access to whatever benefits you can provide. Then again, perhaps this isn't as counter-intuitive as it may sound, taking the case of suicide as an example. Suicide definitely represents a cost to other people in one's life, from family members, to spouses, to children, to friends and trade partners. It's much more profitable to have a live friend or spouse capable of providing benefits to you than a dead one. Prior to any attempt being made, suicidal people tend to warn others of their intentions and, when attempts are made, they are frequently enacted in ways that are unreliably lethal. Further still, many people, whether family or clinicians, view suicidal thoughts and attempts as cries for help, rather than as a desire to die per se, suggesting people have some underlying intuitions about the ultimate intentions of such acts. That a suicide is occasionally completed likely represents a maladaptive outcome of an evolutionary arms race between the credibility of the signal and the skepticism with which others view it. Is the talk about suicide just that - cheap talk - or is it actually a serious threat?

There are two social issues that depression needs to deal with that can also be accounted for in this model. The first issue concerns how depressed individuals avoid being punished by others. If an individual is taking benefits from other group members, but not reciprocating those benefits (whether due to depression or selfishness), they are likely to activate the cheater-detection module of the mind. As we all know, people don't take kindly to cheaters and do not tend to offer them more support to help out. Rather, cheaters tend to be punished, having further costs inflicted upon them. If the goal of depression is to gain social support, punishment is the last thing that would help achieve that goal. In order to avoid coming off as a cheater, a depressed individual may need to forgo accepting many benefits that others provide, which would help explain why depressed individuals often give up activities like eating or even getting out of bed. A more-or-less complete shutdown of behavior might be required in order to avoid coming off as a manipulative cheater.

The second issue concerns the benefits that a depressed individual can provide. Let's use the example of a worker going on strike: if this worker is particularly productive, his not showing up to work will be a genuine cost to the employer. However, if this worker is either poorly skilled - thus able to deliver little, if any, benefit to the employer - or easily replaceable, his not showing up to work won't cost the employer any sleep or money. Accordingly, in order for depression to be effective, the depressed individual needs to be socially valuable: the more valuable they are seen as being, and the more of a monopoly they hold over the benefits they provide, the more effective depression can be in achieving its goal. What this suggests is that depression would work better in certain contexts, perhaps when population sizes are smaller and groups more mutually dependent on one another - which would have been the social contexts under which depression evolved. What this might also suggest is that depression may become more prevalent and last longer the more socially replaceable people become due to certain novel features of our social environment; there's little need to give a striking worker a raise if there are ten other capable people already lined up for his position.
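For what it's worth, the logic of the strike analogy can be sketched as a toy payoff comparison. To be clear, every name and number below is my own illustrative assumption, not anything from Hagen's actual model: the employer concedes only when conceding is cheaper than either replacing the striker or simply doing without them.

```python
def strike_succeeds(worker_value, replacement_cost, concession_cost):
    """Return True if the employer's best option is to concede to the strike.

    The employer compares three (made-up, illustrative) payoffs:
    concede and keep the worker, replace the worker, or simply
    forgo the worker's output entirely.
    """
    payoffs = {
        "concede": worker_value - concession_cost,
        "replace": worker_value - replacement_cost,
        "ignore": 0.0,  # lose the worker's output altogether
    }
    return max(payoffs, key=payoffs.get) == "concede"

# A valuable, hard-to-replace worker: the strike has teeth.
print(strike_succeeds(worker_value=10, replacement_cost=8, concession_cost=2))   # True
# Ten capable applicants already lined up: replacement is cheap, the strike fails.
print(strike_succeeds(worker_value=10, replacement_cost=1, concession_cost=2))   # False
```

The same comparison captures the depression case: the "concession" is renewed social support, and the strike only works when the depressed individual's benefits are valuable and hard to get elsewhere.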

                                                           You should have seen the quality of resumes I was getting last time I was single.

That depression is more common among women would suggest, then, that depression is a more profitable strategy for women, relative to men. There are several reasons this might be the case. First, women might be unable to engage in direct physical aggression as effectively as men, restricting their ability to use aggressive strategies to gain the support of others. Another good possibility is that, reproductively, women tend to be a more valuable resource (or rather, a limiting one) relative to men. Whereas almost all women have a valuable resource they could potentially restrict access to, not all men do. If men are more easily replaceable, they hold less bargaining power when threatening to strike. Another way of looking at the matter is that the costs men incur by being depressed and shutting down are substantially greater than those women incur, or that the costs men are capable of imposing on others aren't as great. A depressed man may quickly fall in the status hierarchy, which would ultimately do more harm than the depressive benefits could compensate for. It should also be noted that one of the main ways depression is alleviated is through a positive life change, like entering into a new relationship or getting a new job, which is precisely what the bargaining model would predict, lending it further support.

So given this likely function of depression, is it a mental illness that requires treatment? I would say no to the first part and maybe to the second. While generally an unpleasant experience, depression, in this model, is no more of a mental illness than the experience of physical pain is. Whether or not it should be treated is, of course, up to the person suffering from it. There are very real costs to all parties involved when depression is active, and it's certainly understandable why people would want to make them go away. What this model suggests is that, as with treating a fever, simply making the symptoms of depression go away may have unintended social costs elsewhere, either in the short- or long-term. While keeping employees from striking certainly keeps them working, it also removes some of their ability to bargain for better pay or working conditions. Similarly, simply relieving depression may keep people happier and more productive, but it may also lead them to accept less fulfilling or supportive circumstances in their lives.

References: Hagen, E.H. (2003). The bargaining model of depression. In: Genetic and Cultural Evolution of Cooperation, P. Hammerstein (ed.). MIT Press, 95-123.

Nesse, R.M., & Williams, G.C. (1994). Why We Get Sick: The New Science of Darwinian Medicine. Vintage Books.

Thursday, April 12, 2012

No, Really, Group Selection Doesn't Work

Group selection is kind of like the horror genre of movies: even if the movie was terrible, and even if the main villain gets killed off, you can bet there will still be a dozen sequels. Like the last surviving virgin among a group of teenage campers, it's now on my agenda to kill this idea once again, because it seems the first couple dozen times it was killed, the killing just didn't stick. Now, I have written about this group selection issue before, but only briefly. Since people seem to continue to take the idea seriously, it's time to explicitly go after the fundamental assumptions made by group selection models. Hopefully, this will put the metaphorical stake through the heart of this vampire, saving me time in future discussions, as I can just link people here instead of rehashing the same points over and over again. It probably won't, as some people seem to like group selection for some currently unknown reason, but fingers crossed anyway.

                                        Friday the 13th, part 23: At this point, you might as well just watch the first movie again, because it's the same thing.

Recently, Jon Gotschall wrote an article for Psychology Today about how E.O. Wilson thinks the selfish gene metaphor is a giant mistake. As he didn't explicitly say this idea is nonsense - the proper response - I can only assume he is partially sympathetic to group selection. Et tu, Jon? There's one point from that article I'd like to tackle first, before moving onto other, larger matters. Jon writes the following:
In effect, this defined altruism - real and authentic selflessness - out of existence. On a planet ruled by selfish genes, “altruism” was just masked selfishness.
The first point is that I have no idea what Jon means when he's talking about "real" altruism. His comments there conflate proximate and ultimate explanations, which is a mistake frequently cautioned against in your typical introductory-level evolutionary psychology course. No one is saying that other-regarding feelings don't exist at a proximate level; they clearly do. The goal is to explain what the ultimate function of such feelings is. Parents genuinely tend to feel selfless and act altruistically towards their children. That feeling is quite genuine, and it happens to exist in no small part because that child carries half of that parent's genes. By acting altruistically towards their children, parents are helping their own genes reproduce; genes are benefiting copies of themselves that are found in other bodies. The ultimate explanation is not privileged over the proximate one in terms of which is "real". It makes no more sense to say what Jon did than for me to suggest that my desire to eat chocolate cake is really a reproductive desire because, eventually, that desire had to be the result of an adaptation designed to increase my genetic fitness. Selfish genes really can create altruistic behavior; they just only do so when the benefits of being altruistic tend to outweigh the costs in the long run.

Speaking of benefits outweighing the costs, it might be helpful to take a theoretical step back and consider why an organism would have any interest in joining a group in the first place. Here are two possible answers: (1) an organism can benefit in some way by entering into a coalition with other organisms, achieving goals it otherwise could not, or (2) an organism joins a group in order to benefit that group, with no regard for its own interests. The former option seems rather plausible, representing cases like reciprocal altruism and mutualism, whereas the latter option does not appear very reasonable. Self-interest wins the day over selflessness when it comes to explaining why an organism would bother to join a group in the first place. Glad we've established that. However, to then go on to say that, once it has joined a coalition, an organism converts its selfish interests to selfless ones is, basically, to endorse the second explanation. It doesn't matter to what extent you think an organism is designed to do that, by the way. Any extent is equally problematic.

                                           If you want any hope of being a millionaire, you will need a final answer at some point: selfish, or selfless?

But organisms do sometimes seem to put their own interests aside to benefit members of their group, right? Well, that's going to depend on how you're conceptualizing their interests. Let's say I'm a member of a group that demands a monthly membership fee, and, for the sake of argument, this group totally isn't a pornography website. I would be better off if I could keep that monthly membership fee to myself, so I must be acting selflessly by giving it to the group. There's only one catch: if I opt not to pay that membership fee, there's a good chance I'll lose some or all of the benefits that the group provides, whatever form those benefits come in. Similarly, whether through withdrawal of social support or active punishment, groups can make leaving or not contributing costlier than staying and helping. Lacking some sort of punishment mechanism, cooperation tends to fall apart. The larger point here is that if, by not paying a cost, you end up paying an even larger cost, that's not exactly selfless behavior requiring some special explanation.

Maybe that example isn't fair, though; what about cases like when a soldier jumps on a grenade to save his fellow soldiers? Well, there are a couple of points to make about grenade-like examples. First, grenades are obviously an environmental novelty. Humans just aren't adapted to an environment containing grenades and, I'm told, most of us don't make a habit of jumping into dangerous situations to help others, blind to the probability of injury or death. That said, if you had a population of soldiers, some of whom had a heritable tendency to jump on grenades to save others, while other soldiers had no such tendency, and grenades kept getting thrown at them, you could imagine which type would tend to out-reproduce the other, all else being equal. A second vital point is that every single output of a cognitive adaptation need not be adaptive; so long as whatever module led to such a decision tended to be beneficial overall, it would still spread and be maintained throughout the population, despite occasional maladaptive outcomes. Sometimes a peacock's large tail spells doom for the bird who carries it, as it is unable to escape from a predator, but that doesn't mean that, on the whole, any one bird would be better off not bothering to grow a tail; the tail is vital for attracting a mate, and surviving means nothing absent reproduction.

Now, onto the two major theoretical issues with group selection itself. The first is displayed by Jon in his article here:
Let’s run a quick thought experiment to see how biologists reached this conclusion. Imagine that long before people spread out of Africa there was a tribe called The Selfless People who lived on an isolated island off the African coast. The Selfless People were instinctive altruists, and their world was an Eden. 
The thought experiment is already getting ahead of itself in a big way. In this story, it's simply assumed that a group of people exists with these kinds of altruistic tendencies. Little attention is paid to how the members of this group came to have these tendencies in the first place, which is a rather major detail, especially because, as many note, within groups, selfishness wins. Consider the following: in order to demonstrate group selection, you would need a trait that conferred group-level fitness benefits at individual-level fitness costs. If the trait benefited the individual bearer in any way, then it would spread through standard selection, and there would be no need to invoke group-level selection. So, given that we're, by definition, talking about a trait that actively hinders its own spread in order to benefit others, how does that trait spread throughout the population, resulting in a population of 'selfless people'? How do you manage to get from 1 to 2 by way of subtraction?
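To make that "subtraction" problem concrete, here's a toy simulation of selection within a single group. All the parameters are made up for illustration (this is a sketch of the standard within-group selection argument, not any published model): altruists pay a personal fitness cost to confer a benefit shared by the whole group, so selfish members collect the benefit without paying the cost and out-reproduce the altruists.

```python
import random

def next_generation(group, b=0.5, c=0.3):
    """One generation of within-group selection.

    Each member of `group` is 1 (altruist) or 0 (selfish). Altruists pay
    a personal fitness cost c; everyone, altruist or not, shares in the
    benefit b generated by the group's altruists. Offspring are drawn in
    proportion to fitness, holding group size constant.
    """
    n = len(group)
    shared = b * sum(group) / n  # benefit received by every member
    fitness = [1 + shared - (c if member else 0) for member in group]
    return [random.choices(group, weights=fitness)[0] for _ in range(n)]

random.seed(1)
group = [1] * 50 + [0] * 50  # start at 50% altruists
for _ in range(200):
    group = next_generation(group)
print(sum(group) / len(group))  # altruist frequency, driven toward zero
```

Since selfish members always have the higher relative fitness within the group, the altruistic trait declines regardless of how much the group as a whole benefits; that's the gap a group selection account has to bridge before assuming an island full of Selfless People.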

                                                                              Perhaps it's from all the good Karma you build up?

No model of group selection I've come across yet seems to deal with this very basic problem. Maybe there are accounts out there I haven't read that contain the answer to my question; maybe the accounts I have seen have an answer that I've just failed to understand. Maybe. Then again, maybe none of the accounts I've read have actually provided a satisfying answer because they start with the assumption that the traits they're seeking to explain already exist in some substantial way. That kind of strikes me as cheating. Jon's thought experiment certainly makes that assumption. The frequently cited paper by Boyd and Richerson (1990) seems to make that assumption as well; people who selflessly act in favor of their group just kind of exist. That trait needs an explanation; simply assuming it into existence and figuring out the benefits from that point is not good enough. There's a chance that the trait could spread by drift, but drift has, to the best of my knowledge, never been successfully invoked to explain the existence of any complex adaptation. Further, drift only really works when a trait is, more or less, reproductively neutral. A trait that is actively harmful would have a further hurdle to overcome.

Now, positing an adaptation designed to deliver fitness benefits to others at fitness costs to oneself might seem anathema to natural selection - because it is - but the problems don't stop there. There's still another big issue looming: how we are to define the group itself - you know, the thing that's supposed to be receiving these benefits. Like many other concepts, what counts as a group - or a benefit to a group - can be fuzzy and is often arbitrary. Depending on what context I currently find myself in, I could be said to belong to an almost incalculably large number of potential groups, and throughout the course of my life I will enter and leave many of them, both explicitly and implicitly. Some classic experiments in psychology demonstrate just how readily group memberships can be created and defined. I would imagine that, for group selection to be feasible, at the very least, group membership needs to be relatively stable; people should know who their "real" group is and act altruistically towards it, and not other groups. Accordingly, I'd imagine group membership should be a bit more difficult to just make up on the spot. People shouldn't start classifying themselves into groups on the basis of being told, "you are now in this group", any more than they should start thinking of a random woman as their mother because someone says, "this woman is now your mother" (nor would we expect this designated mother to start investing in this new person over her own child). That group membership is relatively easy to generate demonstrates, in my mind, the reality that group membership is a fuzzy and fluid concept, and, subsequently, not the kind of thing that can be subject to selection.

Now perhaps, as Jon suggested, the selfless people will always win against the selfish people. It's a possible state of affairs, sure, but it's important to realize that it's an assumption being made, not a prediction being demonstrated. Such conditions can be artificially created in the lab, but whether they exist in the world and, if they do, how frequently they appear, is another matter entirely. The more general point here is that group selection can work well in the world of theory, but that's because assumptions are made there that define it as working well. Using slightly tweaked sets of assumptions, selfless groups will always lose. They win when they are defined as winning, and lose when they are defined as losing. Using yet another set of assumptions, groups of people with psychic abilities win against groups without them. The key, then, is to see how these states of affairs hold up in real life. If people don't have psychic abilities, or if psychic abilities are impossible for one reason or another, no number of assumptions will change that reality.

Finally, the results of thought experiments like the foot-bridge dilemma seem to cut against the group selection hypothesis: purposely sacrificing one person's life to save the lives of five others is, in terms of the group, the better choice, yet people consistently reject this course of action (there, B=5, C=1). When someone jumps on a grenade, we praise them for it; when someone throws another person on a grenade, we condemn them, despite this outcome being better from the group perspective (worst case, you've killed a non-altruist who wouldn't have jumped on it anyway; best case, you've helped an altruist act). Those outcomes conflict with group selection predictions, which, I'd think, should tend to favor more utilitarian calculations - the ones that are actually better for a group. I would think it should also predict that Communism would work out better than it tends to, or that people would really love to pay their taxes. Then again, group selection doesn't seem to be plausible in the first place, so perhaps results like these shouldn't be terribly surprising.

References: Boyd, R., & Richerson, P.J. (1990). Group selection among alternative evolutionary stable strategies. Journal of Theoretical Biology, 145, 331-342.

Monday, April 9, 2012

You've Got Some (Base)Balls

Since Easter has rolled around, let's get into the season and consider very briefly part of the story of Jesus. The Sparknotes version of the story involves God symbolically sacrificing his son in order to in some way redeem mankind. There's something very peculiar about that line of reasoning, though: the idea that punishing someone for a different person's misdeed is acceptable. If Bill is driving his car and strikes a pedestrian in a crosswalk, I imagine many of us would find it very odd, if not morally repugnant, to then go and punish Kyle for what happened. Not only did Kyle not directly cause the act to take place, but Kyle didn't even intend for the action to take place - two of the criteria typically used to assess blame - so it makes little sense to punish him. As it turns out, though, people who might very well disagree with punishing Kyle in the previous example can still be quite willing to accept that kind of outcome in other contexts.

                                                    Turns out that the bombing of Pearl Harbor during World War II was one of those contexts.

If Wikipedia is to be believed, following the bombing of Pearl Harbor, a large number of people of Japanese ancestry - most of whom were American citizens - were moved into internment camps. This move was prompted by fears of further possible Japanese attacks on the United States, amidst concerns that the Japanese immigrants might remain loyal to their native country and act against the US on its behalf. The Japanese, due to their perceived group membership, were punished because of acts perpetrated by others viewed as sharing that same group membership - not because they had done anything themselves, but because they might. Some years down the road, the US government issued an apology on behalf of those who committed the act, likely due to some collective sense of guilt about the whole thing. Not only did guilt get spread to the Japanese immigrants because of the actions of other Japanese people, but the blame for the measures taken against the Japanese immigrants was also shared, by association, by those who did not enact them.

Another somewhat similar example concerns the US government's response following the attacks of September 11th, 2001. All the men directly responsible for the hijackings were dead and, as such, beyond further punishment. However, their supporters - the larger group to which they belonged - were very much still alive, and it was on that group (among others) that the military descended. Punishment of group members in this case is known as accomplice punishment: members of a group are seen as contributing to the initial transgression in some way - what is typically known as conspiracy. In this case, people view those being punished as morally responsible for the act in question, so this type of punishment isn't quite analogous to the initial example of Bill and Kyle. Might there be an example that strips the moral responsibility of the person being punished out of the equation? Why yes, it turns out there is at least one: baseball.

In baseball, a batter will occasionally be hit by a ball thrown by a pitcher (known as getting beaned). Sometimes these hits are accidental; sometimes they're intentional. Regardless, these hits can cause serious injury, which isn't shocking considering the speed at which the pitches are thrown, so they're nothing to take lightly. Cushman, Durwin, and Lively (2012) noted that sometimes a pitcher from one team will intentionally bean a player on the opposing team in response to a previous beaning. For instance, if the Yankees are playing the Red Sox, and a Red Sox pitcher hits a Yankee batter, the Yankee pitcher would subsequently hit a Red Sox batter. The researchers sought to examine the moral intuitions of baseball fans concerning these kinds of revenge beanings.

                                                                                           Serves someone else right!

The first question Cushman et al. asked was whether the fans found this practice to be morally acceptable. One hundred forty-five fans outside of Fenway Park and Yankee Stadium were presented with a story in which the pitcher for the Cardinals intentionally hit a player for the Cubs, causing serious injury. In response, the pitcher from the Cubs hits a batter from the Cardinals. Fans were asked to rate the moral acceptability of the second pitcher's actions on a scale from 1 to 7. Those who rated the revenge beaning of an innocent player as at least somewhat morally acceptable accounted for 44% of the sample; 51% found it unacceptable, with 5% being unsure. In other words, about half of the sample saw punishing an innocent player by proxy as acceptable, simply because he was on the same team.

But was the batter hit by the revenge beaning actually viewed as innocent? To address this question, Cushman et al. asked a separate sample of 131 fans from online baseball forums whether or not they viewed the batter who was hit second as being morally responsible for the actions of the pitcher from their team. The answers here were quite interesting. First off, this sample was more in favor of revenge beanings, with 61% indicating the practice was at least somewhat acceptable. The next finding was that roughly 80% of the people surveyed agreed that, yes, the batter being hit was not morally responsible. This was followed by an agreement that it was, in fact, OK to hit that innocent victim because he happened to belong to the same team.

The final finding from this sample was also enlightening. The order in which people were asked about moral responsibility and endorsement of revenge beaning was randomized, so in some cases people were asked whether punishment was OK first, followed by whether the batter was responsible, and in other cases that order was reversed. When people endorsed vicarious punishment first, they subsequently rated the batter as having more moral responsibility; when people rated moral responsibility first, there was no correlation between moral responsibility and punishment endorsement. What makes this finding so interesting is that it suggests people were making rationalizations for why someone should be punished after they had already decided to punish, not before. They had already decided to punish; now they were looking to justify that decision. This, in turn, actually made the batter appear more morally responsible.

                                                    "See? Now that he has those handcuffs on his rock-solid alibi is looking weaker already."

This finding ties in nicely with a previous point I've made about how notions of who's a victim and who's a perpetrator are fuzzy concepts. Indeed, Cushman et al. present another result along those same lines: when it's actually their team doing the revenge beaning, people view the act as more morally acceptable. When the home team was being targeted for revenge beaning, 43% of participants said the beaning was acceptable; when it was the home team actually enacting the revenge, 67% of the subjects now said it was acceptable behavior. Having someone on your side of things get hurt appears to make people feel more justified in punishing someone, whether that someone is guilty or not. Simply being associated with the guilty party in name is enough.

Granted, when people have the option to enact punishment on the actual guilty party, they tend to prefer that. In the National League, pitchers also come up to bat, so the option of direct punishment exists in those cases. When the initial offending pitcher was beaned in the story, 70% of participants found the direct form of revenge morally acceptable. However, if direct punishment is not an option, vicarious punishment of a group member seemed to still be a fairly appealing option. Further, this vicarious punishment should be directed towards the offending team, and not an unrelated team. For example, if a Cubs pitcher hits a Yankee batter, only about 20% of participants would say it's then OK for a Yankee pitcher to hit a Red Sox batter the following night. I suppose you could say the silver lining here is that people tend to favor saner punishment when it's an option.

Whether people are adapted to punish others vicariously (and, if so, in what contexts such behavior is adaptive and why) is a question left untouched by this paper. I could imagine certain contexts where aggressing against the family or allies of one who aggressed against you could be beneficial, but it would depend on a good deal of contingent factors. For instance, by punishing family members of someone who wronged you, you are still inflicting reproductive costs on the offending party, and by punishing the initial offender's allies, you make siding with and investing in said offender costlier. While the punishment might reach its intended target indirectly, it still reaches them. That said, there would be definite risks of strengthening alliances against you - as you are hurting others, which tends to piss people off - as well as possibly calling retaliation down on your own family and allies. Unfortunately, the results of this study are not broken down by gender, so there's no way to tell whether men and women differ in their endorsement of vicarious punishment. It seems these speculations will need to remain, well, speculative for now.

References:  Cushman, F., Durwin, A.J., & Lively, C. (2012). Revenge without responsibility? Judgments about collective punishment in baseball. Journal of Experimental Social Psychology. (In Press)  

Thursday, April 5, 2012

Tucker Max, Hitler, And Moral Contagion

Disgust is triggered off not primarily by the sensory properties of an object, but by ideational concerns about what it is, or where it has been...The first law, contagion, states that "things which have once been in contact with each other continue ever afterwards to act on each other"...When an offensive (or revered) person or animal touches a previously neutral object, some essence or residue is transmitted, even when no material particles are visible. - Haidt et al. (1997, emphasis theirs).
Play time is over; it's time to return to the science and think about what we can learn of human psychology from the Tucker Max and Planned Parenthood incident. I'd like to start with a relevant personal story. A few years ago I was living in England for several months. During my stay, I managed to catch my favorite band play a few times. After one of their shows, I got a taxi back to my hotel, picked up my guitar from my room, and got back to the venue. I waited out back with a few other fans by the tour bus. Eventually, the band made their way out back, and I politely asked if they would mind signing my guitar. They agreed, on the condition that I not put it on eBay (which I didn't, of course), and I was soon the proud owner of several autographs. I haven't played the guitar since for fear of damaging it.

                                   This is my guitar; there are many like it, but this one is mine....and also some kind of famous people wrote on it once.

My behavior, and other similar behavior, is immediately and intuitively understandable by almost all people, especially anyone who enjoys the show Pawnstars, yet very few people take the time to reflect on just how strange it is. By getting the signatures on the guitar, I did little more than show it had been touched very briefly by people I hold in high esteem. Nothing I did fundamentally altered the guitar in any way, and yet somehow it was different; it was distinguished in some invisible way from the thousands of others just like it, and no doubt more valuable in the eyes of other fans. This example is fairly benign; what happened with Planned Parenthood and Tucker Max was not. In that case, the result of such intuitive thinking was that a helpful organization was out $500,000 and many men and women lost access to their services locally. Better understanding what's going on in both cases will hopefully help people not make mistakes like that again. It probably won't, but wouldn't it be nice if it did?

The first order of business in understanding what happened is to take a step back and consider the universal phenomenon of disgust. One function of our disgust psychology is to deal with the constant threat of microbial and parasitic organisms. By avoiding ingesting or contacting potentially contaminated materials, the chances of contracting costly infections or harmful parasites are lowered. Further, if by sheer force of will or accident a disgusting object is actually ingested, it's not uncommon for a vomiting reaction to be triggered, serving to expel as much of the contaminant as possible. While a good portion of our most visceral disgust reactions focus on food, animals, or bodily products, not all of them do; the reaction extends into the realm of behavior, such as deviant sexual behavior, and perceived physical abnormalities, like birth defects or open wounds. Many of the behaviors that trigger some form of disgust put us in no danger of infection or toxic exposure, so there must be more to the story than just avoiding parasites and toxins.

One way Haidt et al. (1997) attempt to explain the latter part of this disgust reaction is by referencing concerns about humans being reminded of their animal nature, or thinking of their body as a temple, which are, frankly, not explanations at all. All such an "explanation" does is push the question back a step to, "why would being reminded of our animal nature or profaning a temple cause disgust?" I feel there are two facts that stand out concerning our disgust reaction that help to shed a lot of light on the matter: (1) disgust reactions seem to require social interaction to develop, meaning what causes disgust varies to some degree from culture to culture, as well as within cultures, and (2) disgust reactions concerning behavior or physical traits tend to focus heavily on behaviors or traits that are locally abnormal in some way. So, the better question to ask is: "If the function of disgust is primarily related to avoidance behaviors, what are the costs and benefits to people being disgusted by whatever they are, and how can we explain the variance?" This brings us nicely to the topic of Hitler.

                                                                                     Now I hate V-neck shirts even more.

As Haidt et al. (1997) note, people tend to be somewhat reluctant to wear used clothing, even if that clothing had since been washed; it's why used clothing, even if undamaged, is always substantially cheaper than a new, identical article. If the used clothing in question belonged to a particularly awful person - in this case, Hitler - people are even less interested in wearing it. However, this tendency is reversed for items owned by well-liked figures, just as my initial example concerning my guitar demonstrated. I certainly wouldn't let a stranger draw on my guitar, and I'd be even less willing to let someone I personally disliked give it a signature. I could imagine myself even being averse to playing an instrument privately that's been signed by someone I disliked. So why this reluctance? What purpose could it possibly serve?

One very plausible answer is that the core issue here is signaling, as it was in the Tucker Max example. People are morally disgusted by, and subsequently try and avoid, objects or behaviors that could be construed as sending the wrong kind of signal. Inappropriate or offensive behavior can lead to social ostracism, the fitness consequences of which can be every bit as extreme as those from parasites. Likewise, behavior that signals inappropriate group membership can be socially devastating, so you need to be cautious about what signal you're sending. One big issue that people need to contend with is that signals themselves can be interpreted many different ways. Let's say you go over to a friend's house, and find a Nazi flag hanging in the corner of a room; how should you interpret what you're seeing? Perhaps he's a history buff, specifically interested in World War II; maybe a relative fought in that war and brought the flag home as a trophy; he might be a Nazi sympathizer; it might even be the case that he doesn't know what the flag represents and just liked the design. It's up to you to fill in the blanks, and such a signal comes with a large risk factor: not only could an interpretation of the signal hurt your friend, it could hurt you as well for being seen as complicit in his misdeed. 

Accordingly, if that signaling model is correct, then I would predict that signal strength and sign should tend to outweigh the contagion concerns, especially if that signal can be interpreted negatively by whoever you're hoping to impress. Let's return to the Hitler example: the signaling model would predict that people should prefer to publicly wear Hitler's actual black V-neck shirt (as it doesn't send any obvious signals) over wearing a brand new shirt that read "I Heart Hitler". This parallels the Tucker Max example: people were OK with the idea of him donating money so long as he did so in a manner that kept his name off the clinic. Tucker's money wasn't tainted because of the source as much as it was tainted because his conditions made sure the source was unambiguous. Since people didn't like the source and wanted to reject the perceived association, their only option was to reject the money.  

This signaling explanation also sheds light on why the things that cause disgust are generally seen as, in some way, abnormal or deviant. Those who physically look abnormal may carry genes that are less suited for the current environment, or be physically compromised in such a way as it's better to avoid them than invest in them. Those who behave in a deviant, inappropriate, or unacceptable manner could be signaling something important about their usefulness, friendliness, or their status as a cooperative individual, depending on the behavior. Disgust of deviants, in this case, helps people pick which conspecifics they'd be most profitably served by, and, more generally, helps people fit into their group. You want to avoid those who won't bring you much reward for your investment, and avoid doing things that get on other people's bad side. Moral disgust would seem to serve both functions well.

                                                   Which is why I now try and make new friends over mutual hatreds instead of mutual interests.
Now returning one final time to the Planned Parenthood issue, you might not like the idea of Tucker Max having his name on a clinic because you don't like him. I understand that concern, as I wouldn't like to play a guitar that was signed by members of the Westboro Baptist Church. On that level, by criticizing those who don't like the idea of a Tucker Max Planned Parenthood clinic, I might seem like a hypocrite; I would be just as uncomfortable in a similar situation. There is a major difference between the two positions though, as a quick example will demonstrate.

Let's say there's a group of starving people in a city somewhere that you happen to be in charge of. You make all the calls concerning who gets to bring anything into your city, so anyone who wants to help needs to go through you. In response to the hunger problem, the Westboro Baptist Church offers to donate a truckload of food to those in need, but they have one condition: the truck that delivers the food will bear a sign reading "This food supplied courtesy of the Westboro Baptist Church". If you dislike the Church, as many people do, you have something of a dilemma: allow an association with them in order to help people out, or turn the food away on principle.

For what it's worth, I would rather see people eat than starve, even if it means that the food comes from a source I don't like. If your desire to help the starving people eat is trumped by your desire to avoid associating with the Church, don't tell the starving people you're really doing it for their own good, because you wouldn't be; you'd be doing it for your own reasons at their expense, and that's why you'd be an asshole.

References: Haidt, J., Rozin, P., McCauley, C., & Imada, S. (1997). Body, psyche, and culture: The relationship between disgust and morality. Psychology and Developing Societies, 9, 107-131.

Wednesday, April 4, 2012

Tucker Max V. Planned Parenthood

My name is Tucker Max, and I am an asshole. I get excessively drunk at inappropriate times, disregard social norms, indulge every whim, ignore the consequences of my actions, mock idiots and posers, sleep with more women than is safe or reasonable, and just generally act like a raging dickhead. -Tucker Max
It should come as no surprise that there are more than a few people in this world who don't hold Tucker Max in high esteem. He makes no pretenses of being what most would consider a nice person, and makes no apologies for his behavior; behavior which is apparently rewarded with tons of sex and money. Recently, however, this reputation prevented him from making a $500,000 donation to Planned Parenthood. Naturally, this generated something of a debate, full of plenty of moral outrage and inconsistent arguments. Since I've been thinking and writing about reasoning and arguing lately, I decided to treat myself and indulge in a little bit. I'll do my best to make this educational as well as personal, but I make no promises; this is predominantly intellectual play for me.

                                  Sometimes you just have to kick back and treat yourself in a way that avoids going outside and enjoying the nice weather.
So here's the background, as it's been recounted: Tucker finds himself with a tax burden that can be written off to some extent if he donates money charitably. Enterprising guy that he is, he also wants to donate the money in such a way that it can help generate publicity for his new book. After some deliberation, he settles on a donation of $500,000 to Planned Parenthood, as he describes himself as always having been pro-choice, having been helped by Planned Parenthood throughout his life, and, perhaps, finding the prospect funny. His condition for the donation is that he wants his name on a clinic, which apparently is something Planned Parenthood will consider if you donate enough money. A meeting is scheduled to hammer out the details, but is cancelled a few hours before it is set to take place - as Tucker is driving to it - because Planned Parenthood suddenly becomes concerned about Tucker's reputation and backs out of the meeting without offering any alternative options.

I'll start by stating my opinion: Planned Parenthood made a bad call, and those who are arguing that Planned Parenthood made the correct call don't have a leg to stand on.

Here's what wasn't under debate: whether Planned Parenthood needed money. Their funding was apparently cut dramatically in Texas, where the donation was set to take place, and the money was badly needed. So if Planned Parenthood needed money and turned down such a large sum of it, one can only imagine they had some reasons to do so. One could also hope those reasons were good. From the various articles and comments on the articles that I've read defending Planned Parenthood's actions, there are two sets of reasons why they feel this decision was the right one. The first set I'll call the explicit arguments - what people say - and the second I'll call the implicit motivations - what I infer (or people occasionally say) the motivations behind the explicit arguments are.

                                                ...but didn't have access to any reproductive care, as the only Planned Parenthood near me closed.

The explicit arguments contain two main points. The first thrust of the attack is that Tucker's donation is selfish; his major goal is writing off his taxes and generating publicity, and this taints his action. That much is true, but from there this argument flounders. No one is demanding that Planned Parenthood only accept truly selfless donations. Planned Parenthood itself did not suggest that Tucker's self-interest had anything at all to do with why they rejected the offer. This explicit argument serves only one real purpose, and that's character assassination by way of framing Tucker's donation in the worst possible light. One big issue with this is that I find it rather silly to try and malign Tucker's character, as he does a fine job of that himself; his self-regarding personality is responsible for a good deal of why he's famous. Another big issue is that Tucker could have donated that money to any non-profit he wanted, and I doubt Planned Parenthood was the only way he could have achieved his main goals. Just because caring for Planned Parenthood might not have been his primary motive with the donation, it does not mean it played no part in motivating the decision. Similarly, just because someone's primary motivation for working at their job is money, it does not mean money is the only reason they chose the job they did, out of all the possible jobs they could have picked.

The second explicit argument is the more substantial half. Since Tucker Max is a notable asshole, many people voiced concerns that putting his name on a clinic would do Planned Parenthood a good deal of reputational damage, causing other people to withdraw or withhold their financial or political support. Ultimately, the costs of this reputational damage would end up outweighing Tucker's donation, so really, it was a smart economic (and political, and moral) move. In fact, one author goes so far as to suggest that taking Tucker's donation could have put the future of Planned Parenthood as a whole in jeopardy. This argument, at its core, suggests that Planned Parenthood lost the battle (Tucker's donation) to win the war (securing future funding).

There are two big problems with this second argument. Most importantly, the negative outcome of accepting Tucker's donation is purely imagined. It might have happened, it might not have happened, and there's absolutely zero way of confirming whether it would have. That does not stop people from assuming that the worst would have happened, as making that assumption gives those defending Planned Parenthood an unverifiable potential victim. As I've mentioned before, having a victim on your side of the debate is crucial for engaging the moral psychology of others, and when people are making moral pronouncements they do actively search for victims. The other big problem with this second argument is that it's staggeringly inconsistent with the first. Remember, people were very critical of Tucker's motivations for the donation. One of the most frequently trotted out lines was, "If Tucker really cared about Planned Parenthood, he would have made the donation anonymously anyway. Then, he could have helped the women out and avoided the reputational harm he would have done to Planned Parenthood. Since he didn't donate anonymously (or at least, I think he didn't; that's kind of the rub with anonymous donations), he's just a total asshole".

                             "I was going to go refill my birth control prescription here, but if Tucker Max helped keep this clinic open, maybe I'll just get pregnant instead"

The inconsistency is as follows: people assume that other donors would avoid or politically attack Planned Parenthood if Tucker Max was associated with it. Perhaps some women would even avoid the clinic itself, because it would make them feel upset. Again, maybe that would happen, maybe it wouldn't. Assuming that it would, one could make the case that if those other supporters really cared about Planned Parenthood, then they shouldn't let something like an association of a single clinic with Tucker Max dissuade them. The only reason that someone who previously supported Planned Parenthood would be put off would be for personal, self-interested reasons. The very same kind of motivation they criticized Tucker for initially. Instead of bloggers and commenters writing well-reasoned posts about how people shouldn't stop supporting Planned Parenthood just because Tucker Max has his name on one, they instead praise excluding his sizable donation. One would think anyone who truly supported Planned Parenthood would err on the side of making arguments concerning why people should continue to support it, not why it would be justifiable for people to pull their support in fear of association with someone they don't like.

Which brings us very nicely to the implicit motivations. The core issue here can be best summed up by Tucker himself:
Most charities are not run to help people, they are run because they are ways for people to signal status about themselves to other people...I wasn’t the “right type” of person to take money from so they’d rather close clinics. It’s the worst kind of elitism, the kind that cloaks itself in altruism. They care more about the perception of themselves and their organization than they care about its effectiveness at actually serving the reproductive needs of women.

People object to Tucker Max's donation on two main fronts: (1) they don't want to do anything that benefits Tucker in any way, and (2) they don't personally want to be associated with Tucker Max in any way. Those two motivations are implicitly followed by a, "...and that's more important to me than ensuring Planned Parenthood can continue to serve the women and men of their communities". It looks a lot like a costly display on the part of those who supported the decision. They're demonstrating their loyalty to their group, or to their ideals, and they're willing to endure a very large, very real cost to do so. At least, they're willing to let other people suffer that cost, as I don't assume all, or even most, of the bloggers and commenters will be directly impacted by this decision.

Whatever ideal it is that they're committed to, whatever group they're displaying for, it is not Planned Parenthood. Perhaps they feel they're fighting to end what they perceive as sexism, or misogyny, or a personal slight because Tucker wrote something about fat girls they found insulting. What they're fighting for specifically is irrelevant. What is relevant is that they're willing to see Planned Parenthood clinics close and men and women lose access to their services before they're willing to compromise whatever it is they're primarily fighting for. They might dress their objections up to make it look like they aren't self-interested or fighting some personal battle, but the disguise is thin indeed. One could make the case that such behavior, co-opting the suffering of another group to bolster your own cause, is rather selfish; the kind of thing a real asshole would do.

Tuesday, March 27, 2012

Communication As Persuasion

Can you even win debates? I’ve never heard someone go, "My opponent makes a ton of sense; I’m out." -Daniel Tosh
In my younger days, I lost a few years of my life to online gaming. Everquest was the culprit. Now, don't get me wrong, those years were perhaps some of the happiest in my life. Having something fun to do at all hours of the day with thousands of people to do it with has that effect. Those years just weren't exactly productive. While I was thoroughly entertained, when the gaming was over I didn't have anything to show for it. A few years after my gaming phase, I went through another one: chronic internet debating. Much like online gaming, it was oddly addictive and left me with nothing to show for it when it all ended. While I liked to try and justify it to myself - that I was learning a lot from the process, refining my thought process and arguments, and being a good intellectual - I can say with 72% certainty that I had wasted my time again, and this time I wasn't even having as much fun doing it. Barring a few instances of cleaning up grammar, I'm fairly certain no one changed my opinion about a thing and I changed about as many in return. You'd think with all the collective hours my fellow debaters and I had logged in that we might have been able to come to an agreement about something. We were all reasonable people seeking the truth, after all.

                                                                                        Just like this reasonable fellow.

Yet, despite that positive and affirming assumption, debate after debate devolved into someone - or everyone - throwing their hands up in frustration, accusing the other side of being intentionally ignorant, too biased, intellectually dishonest, unreasonable, liars, stupid, and otherwise horrible monsters (or, as I like to call it, suggesting your opponent is a human). Those characteristics must have been the reason the other side of the debate didn't accept that our side was the right side, because our side was, of course, objectively right. Debates are full of logical fallacies like those personal attacks, such as: appeals to authority, straw men, red herrings, and question begging, to name a few, yet somehow it only seems like the other side was doing it. People relentlessly dragged issues into debates that didn't have any bearing on the outcome, and they always seemed to apply their criticisms selectively.

Take a previously-highlighted example from Amanda Marcotte: when discussing the hand-grip literature on resisting sexual assault, she complained that, "most of the studies were conducted on small, homogeneous groups of women, using subjective measurements." Pretty harsh words for a study comprising 232 college women between the ages of 18 and 35. When discussing another study that found results Amanda liked - a negligible difference in average humor ratings between men and women - she raised no concerns about "...small, homogeneous groups of women, using subjective measurements". That she didn't is hypocritical, considering the humor study had only 32 subjects (16 men and 16 women, presumably undergraduates from some college) and used caption writing as the only measure of humor. So what gives: does Amanda care about the number of subjects when assessing the results or not?

The answer, I feel, is, "Yes, but only insofar as it's useful to whatever point she's trying to make". The goal in debates - and communication more generally - is not logical consistency; it's persuasion. If consistency (or being accurate) gets in the way of persuasion, the former can easily be jettisoned for the latter. While being right, in some objective sense, is one way of persuading others, being right will not always make your argument the more persuasive one; the resistance to evolutionary theory has demonstrated as much. Make no mistake, this behavior is not limited to Amanda or the people that you happen to disagree with; research has shown that this is a behavior pretty much everyone takes part in at some point, and that includes you*. A second mistake I'd urge you not to make is to see this inconsistency as some kind of flaw in our reasoning abilities. There are some persuasive reasons to see inconsistency as reasoning working precisely how it was designed to, annoying as it might be to deal with.

                                                                 Much like my design for an airbag that deploys when you start the car.
As Mercier and Sperber (2011) point out, the question, "Why do humans reason?" is often left unexamined. The answer these authors provide is that our reasoning ability evolved primarily for an argumentative context: producing arguments to persuade others and evaluating the arguments others present. It's uncontroversial that communication between individuals can be massively beneficial. Information which can be difficult or time-consuming to acquire at first can be imparted quickly and almost without effort to others. If you discovered how to complete some task successfully - perhaps how to build a tool or catch fish more effectively - through a trial-and-error process, communicating that information to others allows them to avoid the need to undergo that same process themselves. Accordingly, trading information can be wildly profitable for all parties involved; everyone gets to save time and energy. However, while communication can offer large benefits, we also need to contend with the constant risk of misinformation. If I tell you that your friend is plotting to kill you, I'd have done you a great service if I was telling the truth; if the information I provided was either mistaken or fabricated, you'd have been better off ignoring me. In order to achieve these two major goals - knowing how to persuade others and when to be persuaded yourself - there's a certain trust barrier in communication that needs to be overcome.

This is where Mercier and Sperber say our reasoning ability comes in: by giving others convincing justifications to accept our communications, as well as being able to better detect and avoid the misinformation of others, our reasoning abilities allow for more effective and useful communication. Absent any leviathan to enforce honesty, our reasoning abilities evolved to fill the niche. It is worth comparing this perspective to another: the idea that reasoning evolved as some general ability to improve or refine our knowledge across the board. In this scenario, our reasoning abilities more closely resemble some domain-general truth finders. If this latter perspective is true, we should expect no improvements in performance on reasoning tasks contingent on whether or not they are placed in an argumentative context. That is not what we observe, though. Poor performance on a number of abstracted reasoning problems, such as the Wason Selection Task, is markedly improved when those same problems are placed in an argumentative context.
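For readers unfamiliar with it, the Wason Selection Task is easy to state concretely. In its classic abstract form, four cards show "A", "K", "4", and "7", and participants must test the rule "if a card has a vowel on one side, it has an even number on the other." The logic - a card needs flipping only if its visible face could hide a violation - can be sketched in a few lines of code (the card values and helper function below are my own illustration, not anything from the paper):

```python
# A sketch of the abstract Wason Selection Task. The rule under test:
# "if a card has a vowel on one side, it has an even number on the other."
# A card must be flipped only if its visible face could conceal a
# violation (a vowel paired with an odd number).

def must_flip(face: str) -> bool:
    """Return True if the visible face could hide a rule violation."""
    if face.isalpha():
        # A vowel might have an odd number on its hidden side.
        return face.lower() in "aeiou"
    # An odd number might have a vowel on its hidden side.
    return int(face) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # prints ['A', '7']
```

Most people correctly choose "A" but fail to choose "7" (instead picking "4", which can't falsify the rule no matter what's on its other side), which is the poor baseline performance that argumentative contexts improve.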

While truth tends to win in cases like groups arguing over the Wason Selection Task, let's not get a big head about it and insist that our reasoning abilities will always push towards truth. It's important to note how divorced from reality such situations are: it's not often you find people with a mutual interest in the truth, arguing over a matter in which they have no personal stake, that also has a clearly defined and objective solution. While there's no doubt that reasoning can sometimes lead people to make better choices, it would be a mistake to assume that's the primary function of the ability, as reasoning frequently doesn't seem to lead people towards that destination. To the extent that reasoning tends to push us towards correct, or improved, answers, this is probably because correct answers are easier to justify than incorrect ones.

As the Amanda Marcotte example demonstrated, when assessing an argument, often "[people] are not trying to form an opinion: They already have one. Their goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness...People who have an opinion to defend don't really evaluate the arguments of their interlocutors in search of genuine information but rather consider them from the start as counterarguments to be rebutted." This behavior of assessing information by looking for arguments that support one's own views and rebut the views of others is known as motivated reasoning. If reasoning served some general knowledge-refining ability, this would be a strange behavior indeed. It seems people often end up strengthening not their knowledge about the world, but rather their existing opinions, a conclusion that fits nicely within the argumentative theory. While opinions that cannot be sustained eventually tend to get tossed aside, as reality does impose some constraints (Kunda, 1990), on fuzzier matters for which there aren't clear, objective answers - like morality - arguments have been bogged down for millennia.

                                       I'm not hearing any more objections to the proposal that "might makes right". Looks like that debate has been resolved.

Further still, the argumentative theory can explain a number of findings that economists tend to find odd. If you have a choice between two products that are equally desirable, adding a third, universally less desirable option should not have any effect on your choice. For instance, let's say you have a choice between $5 today and $6 tomorrow; adding an additional option of $5 tomorrow to the mix shouldn't have any effect, according to standard economic rationality, because it's worse than both existing options. Like many assumptions of economics, this one turns out not to hold up. If you add that additional option, you'll find people start picking the $5-today option more than they previously did. Why? Because it gives them a clear justification for their decision, as if they were anticipating having to defend it. While $5 today and $6 tomorrow might be equally attractive, $5 today is certainly more attractive than $5 tomorrow, making the $5-today decision more justifiable. Our reasoning abilities will frequently point us towards decisions that are more justifiable, even if those decisions end up not making us more satisfied.

Previous conceptualizations of the function of reasoning have missed the mark, and, as a result, have been trying to jam a series of square pegs into the same round hole. They have been left unable to explain vast swaths of human behavior, so researchers simply labeled the behaviors that didn't fit as biases, neglects, blind spots, errors, or fallacies, without ever succeeding in figuring out why they existed - why our reasoning abilities often seemed so poorly designed for reasoning. By placing all these previously anomalous findings under a proper theoretical lens and context, they suddenly start to make a lot more sense. While the people you find yourself arguing with may still seem like total morons, this theory may at least help you gain some insight into why they're being so intolerable.

*As a rule, it doesn't apply to me, so if you find yourself disagreeing with me, you're going to want to rethink your position. Sometimes life's just unfair that way.

References: Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.

Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.