Wednesday, January 25, 2012

The Lacking Standards Of Philosophical Proof

Recently, I've reached that point in life that I know lots of us have struggled with: one day, you just wake up and say to yourself, "I know my multimillion dollar bank account might seem impressive, but I think I want more out of life than just a few million dollars. What I'd like would be more money. Much more". Unfortunately, the only way to get more money is to do this thing called "work" at a place called a "job", and these jobs aren't always the easiest thing to find - especially the cushy ones - in what I'm told is a down economy. Currently, my backup plan is to become a professor, in case that lucrative career as a rockstar doesn't pan out the way I keep hoping it will.

                                           Unfortunately, working outside of the home means I'll have less time to spend entertaining my piles of cash.

I've been doing well and feeling at home in various schools for almost my entire life, so I've not seen much point in leaving the warmth of the academic womb. However, I've recently been assured that my odds of securing a long-term academic position are probably somewhere between "not going to happen" and "never going to happen". So, glass-half-full kind of guy that I am, I've decided to busy myself with pointing out why many people who already have these positions don't deserve them. With any luck, some universities may take notice and clear up some room in their budgets. Today, I'll again be turning my eye on the philosophy department. Michael Austin recently wrote this horrible piece over at Psychology Today about why we should reject moral relativism in favor of moral realism - the idea that there are objective moral truths out there to be discovered, like physical constants. Before taking his arguments apart, I'd like to stress that this man actually has a paid position at a university, and I feel the odds are good he makes more money than you. Now that that's out of the way, onto the fun part.
First, consider that one powerful argument in favor of moral realism involves pointing out certain objective moral truths. For example, "Cruelty for its own sake is wrong," "Torturing people for fun is wrong (as is rape, genocide, and racism)," "Compassion is a virtue," and "Parents ought to care for their children." A bit of thought here, and one can produce quite a list. If you are really a moral relativist, then you have to reject all of the above claims. And this is an undesirable position to occupy, both philosophically and personally.
Translation: it's socially unacceptable to disagree with my views. It's a proof via threat of ostracism. What Austin attempts to slip by there is the premise that you cannot think something is morally unacceptable to you without also thinking it's objectively morally unacceptable. Rephrasing the example in the context of language allows us to see the flaw quickly: "You cannot think that the word "sex" refers to that thing you're really bad at without also thinking that the pattern of sounds that make up the word has some objective meaning which could never mean anything else". I'm perfectly capable of affirming the first proposition while denying the second. The word "sex" could have easily meant any number of things, or nothing at all; it just happens to refer to a certain thing for certain people. On the same note, I can say "I find torturing kittens unacceptable" while realizing my statement is perfectly subjective. His argument is not what I would call a "powerful" one, though Austin seems to think it is.

                                          It wasn't the first time that philosophy rolled off my unsatisfied body and promptly fell asleep, pleased with itself.

Moving on:
 Second, consider a flaw in one of the arguments given on behalf of moral relativism. Some argue that given the extent of disagreement about moral issues, it follows that there are no objective moral truths...But there is a fact of the matter, even if we don't know what it is, or fail to agree about it. Similarly for morality, or any other subject. Mere disagreement, however widespread, does not entail that there is no truth about that subject.   
It is a bad argument to say that just because there is disagreement there is no fact of the matter. However, that gives us no reason to either accept moral realism or reject moral relativism; it just gives us grounds to reject that particular argument. Similarly, Austin's suggestion that there is definitely a fact of the matter in any subject - or morality specifically - isn't a good argument. In fact, it's not even an argument; it's an assertion. Personal tastes - such as what music sounds good, what food is delicious, and what deviant sexual acts are fun - are often the subject of disagreement and need not have an objective fact of the matter.

If Austin thinks disagreement isn't an argument against moral realism, he should probably not think that agreement is an argument for moral realism. Unfortunately for us, he does:
There are some moral values that societies share, because they are necessary for any society to continue to exist. We need to value human life and truth-telling, for example. Without these values, without prohibitions on murder and lying, a given society will ultimately crumble. I would add that there is another reason why we often get the impression that there is more moral disagreement than is in fact the case. The attention of the media is directed at the controversial moral issues, rather than those that are more settled. Debates about abortion, same-sex marriage, and the like get airtime, but there is no reason to have a debate about whether or not parents should care for the basic needs of their children, whether it is right for pharmacists to dilute medications in order to make more profit, or whether courage is a virtue.    
If most people agreed that the Sun went around Earth, that would in no way imply it was true. It's almost amazing how he can point out that an argument is bad, then turn around and use an identical argument in the next sentence thinking it's a killer point. Granted, if people were constantly stealing from and killing each other - that is, more than they do now - society probably wouldn't fare too well. What the existence of society has to do with whether or not morality is objective, I can't tell you. From these three points, Austin gives himself a congratulatory pat on the back, feeling confident that we can reject moral relativism and accept moral realism. With standards of proof that loose, philosophy could probably give birth and not even notice.

                                                                                   Congratulations! It's a really bad idea.

I'd be curious to see how Austin would deal with the species questions: are humans the only species with morality; do all animals have a sense of morality, social or otherwise; and if they don't, and morality is objective, why not? Again, the question seems silly if you apply the underlying logic to certain other domains, like food preferences: is human waste a good source of nutrients? The answer to that question depends on what species you're talking about. There's no objective quality of our waste products that makes them inherently nutritious or non-nutritious.

Did I mention Austin is a professor? It's worth bearing in mind that someone who makes arguments that bad is actually being paid to work in a department dedicated to making and assessing arguments - in a down economy, no less. Even Psychology Today is paying him for his blogging services, I'm assuming. Certainly makes you wonder about the quality of candidates who didn't get hired.

Friday, January 20, 2012

Free Will Doesn't Matter Morally, But We Think It Does

The study of the world around us is dubbed science, and in order to pursue it, you first need to purchase several large, expensive doohickeys in order to conduct experiments, hire scientists, and the like. The study of the theoretical world is dubbed mathematics, and you need only paper, a pencil, and a trashcan within reasonable distance. In comparison, the study of nothing in particular may be dubbed "philosophy", and all you have to do is keep talking. It may be noted that working on philosophical matters is generally very cheap to do. That may very well be because it isn't worth a penny to anyone anyway - Uncyclopedia
Academic xenophobe that I am, I don't care much for philosophers. One reason is that they, as a group, have a habit of getting stuck in arguments that remain unresolved - or are even unresolvable - for centuries about topics of dubious importance. One of those topics that tends to evade clear thinking and relevance is free will. The definition of the term itself often evades being nailed down, meaning many of these arguments probably aren't even about the same topic.

There's been a lot of hand-wringing over whether determinism precludes moral responsibility. Today, I'm going to briefly step foot into the world of philosophy to demonstrate why this debate has a simple answer, and hopefully, when we reach that point, we can start making some actual progress in understanding human moral psychology.   

                                                       An artist's depiction of a philosopher; notice how it does nothing important and goes nowhere.

Let's take a completely deterministic universe in which the movement and action of every single bit of matter and energy is perfectly predictable. Living organisms would be no exception here; you could predict every single behavior of an organism from before the moment of its conception till the moment of death. Every thought, every feeling, every movement of every part of its cellular machinery. People seem to worry that in this universe we would be unable to justifiably condemn people for their actions, as they are not seen as having a "choice" in the matter (choice is another one of those very blurry concepts, but we'll forget about what it's supposed to mean here; just use your best guess). What most people fail to realize about this example is that it in no way precludes making moral judgments ("he ought not to have done that") or holding people responsible for their actions. "But how can we justify holding someone responsible for a predetermined action?" I already hear someone missing the point objecting. The answer here is simple: you wouldn't need to justify those moral judgments or punishment in some objective sense any more than a killer would need to justify why they killed.

If the killer was predetermined to kill, others were also predetermined to feel moral outrage at that killing; nothing about determinism precludes feelings, no matter their content. Additionally, those who feel that moral outrage are determined to attempt to convince others of the content of that outrage, which they may be successful at doing. From there, people are likewise determined to attempt to punish the person who committed the crime, and so on. Suffice it to say, a deterministic world would look no different than the world we currently inhabit, and determinism and moral responsibility get to live hand-in-hand. However, I already feel dirty enough playing philosopher that I don't feel a need to continue on with this example.

                                           I feel even dirtier than I did last Christmas. The reaction of those children - and that jury - was priceless though...

After successfully resolving centuries of philosophical debate in a matter of minutes (you're welcome), it's time to think about what this example can teach us about our moral psychology. Refreshingly, we will be stepping out of the realm of philosophy into that of science for this part. What I think is the most important lesson to take away from this example concerns the popular intuition that if we can fully explain a behavior, we must also condone it (or, at the very least, not condemn others for it). Evolutionary psychology tends to get a fair share of scorn directed its way for even proposing that certain traits - typically politically unpalatable ones, such as sex differences or violence - are adaptations, and that ire typically comes in the form of, "well you're just trying to justify [spousal abuse/rape/sexism/etc] by explaining it". It's also worth noting that those claims will be tossed at evolutionary psychologists even if those same psychologists say, "We aren't trying to justify anything".

I cited a figure a while back about how 86% of people viewed determinism as incompatible with moral responsibility, so this sentiment appears to be a rather popular one. Two papers that have recently come across my desk expand on this point a little further. The first comes from Miller, Gordon, and Buddie (1999), who basically demonstrated the effect I mentioned above. Subjects were presented with a vignette involving a perpetrator causing some harm and were asked either to explain that behavior first and then react to it, or to react to it first and then explain it. The results showed that those who explained the behavior first took a significantly more forgiving and condoning stance towards the perpetrator. Additionally, when other observers read these explanations, they rated the explainers' attitudes as even more condoning of the harm than the explainers themselves had predicted. So while the explainers were slightly more condoning of the perpetrator's behavior, observers who read those explanations thought they were more condoning still. Sounds like the perfect mix for moral outrage.

                                       "We'd like to respectfully disagree with your well-articulated position, and, if that fails, burn you and your books."

Miller et al. (1999) went on to examine how different types of explanations might affect the explaining-condoning link. The authors suggest that explanations that portray the perpetrator as low in personal responsibility (it was the situation that made him do it) would be viewed as more condoning than those referencing the perpetrator's disposition (he acted that way because he's a cruel son-of-a-bitch). Towards this end, they presented subjects with the results of two hypothetical experiments: in one, the presence of a mirror dramatically affected the rates of cheating (5% cheating in the mirror condition, 90% cheating in the no-mirror condition); in the other, it had no effect (50% cheating in both conditions). The first experiment served to emphasize the effect of the situation as the important explanatory factor; the second served to de-emphasize it.

The results here indicated that those who read the results stating that the situation held a lot of influence were more condoning of the cheating behavior when compared to those who read the dispositional explanations. What was more interesting, however, is that these same participants also rated their own judgments of the cheater's behavior as significantly more negative than what they thought the hypothetical researchers' judgments were. The subjects seemed to think the researchers were giving the perpetrators a pass.

The second experiment was conducted by Greene and Cahill (2011). Here, the researchers tested, basically, the suggestion that neuroscience imaging might overwhelm participants' judgment with flashy pictures and leave them unable to consider the evidence of the case. In this experiment, participants were given the facts of a criminal case (either a low-severity or a high-severity case) and were presented with one of three conditions: (1) the defendant was labeled as psychotic by an expert; (2) in addition, results of neurological tests were presented that found deficiencies consistent with damage to the frontal area of the defendant's brain; and (3) in addition to that, colorful brain scans were presented documenting that damage.

The results of this study demonstrated that participants were about as likely to sentence the defendant to death across all three conditions when the defendant was deemed to be low in future dangerousness. However, when the defendant was deemed high in future dangerousness, he was overwhelmingly more likely to be sentenced to death, but only in condition (1). In conditions (2) and (3), he was far, far less likely to be sentenced to death (a drop from about 65% down to a low of near 15%, no different from the low-dangerousness group). Further, in conditions (2) and (3), the mock jurors rated the defendant as more remorseful and less in control of his behavior.

                         Unable to control his behavior and highly likely to be violent again? Sounds like the kind of guy we'd want to keep hanging around.

These two papers provide a complementary set of results, demonstrating some of the effects that explanations can have both on our sense of moral responsibility and on our perception of the explainer. What those two papers don't do, however, is explain those effects in any satisfying manner. I feel there are several interesting predictions to be made here, but placing these results into their proper theoretical context will be a job for another day. In the meantime, I'm going to go shower until that sullied feeling that philosophy brings on goes away.

(One thought to consider is that perhaps terms like "free will" and "choice" are (sort of) intentionally nebulous, for to define them concretely and explain how they work would - like Kryptonite to Superman - sap them of their ability to imbue moral responsibility)

References: Greene, E. & Cahill, B.S. (2011). Effects of neuroimaging evidence on mock juror decision making. Behavioral Sciences & the Law, DOI: 10.1002/bsl.1993

Miller, A.G., Gordon, A.K., & Buddie, A.M. (1999). Accounting for evil and cruelty: Is to explain to condone? Personality and Social Psychology Review, 3, 254-268.

Wednesday, January 18, 2012

Was Freud Right? Are You Sexually Attracted To Your Parents?

No, you probably are not.

Well, that was easy. Given that sexual reproduction evolved specifically to introduce some genetic diversity to future generations in order to remain ahead of the more quickly evolving parasites (Ridley, 1993), the suggestion that humans would also have some adaptations that predisposed them to breed with their immediate relatives seems misguided. Freud - I'm told - had suggested that children really did want to have sex with their parents, and it was only through imposition of a cultural taboo against incest that such drives were thwarted. It's just one of the many things he was wrong about.

                                            "I don't always talk about your mother, but when I do....wait, never mind; I do always talk about your mother"

Might there have been something to that notion of Freud's though? No. Go read the introduction again if you're still confused on that point. However, there is at least one recent research paper in which the authors suggest that there may in fact be some forces at work that generate sexual attraction to closely related family members that a societal taboo is needed to stand in the way of. In a series of three experiments, Fraley & Marks (2010) attempt to demonstrate that possibility.

In the first experiment, subjects were either primed with a picture of their opposite-sex parent, or were controls who were unrelated to the parent in the photo. Subjects were then asked to rate a few pictures of opposite-sex strangers for their sexual attractiveness. The results showed a slight tendency for those who saw a picture of their parent to rate others as more attractive (a difference of about 0.2 on a scale of 1 to 7). The second study went a bit deeper. This time, participants had their own face morphed from 0 to 40% with those of opposite-sex strangers and rated the new photos for attractiveness; the control group rated the same pictures, but were not the person being morphed into the photos. The results showed a similar pattern: there was a slight tendency for people whose faces had been morphed into the photos to rate them as more attractive (a difference of about 0.4 on the same scale), relative to the controls. Finally, in the third experiment, the researchers lied to the participants about how much of their face had been morphed into the photos and mentioned that the study was examining incestuous tendencies. This time, the effect reversed; participants rated the pictures with self-morphs as slightly less attractive, relative to controls.

So where does that leave us?

                                                                                Hiding in our closet, aka "The Shame Cave"?

Are we to admit Freud was onto something? No, and stop asking that silly question. Since I'm a big fan of theory, naturally my first question was: what was the theory guiding this research? According to Fraley & Marks, the following findings need an explanation: (1) people tend to enter into relationships with others who are physically similar on a variety of traits, (2) people tend to enter into relationships with those who live around them and are familiar, and (3) people find those they are exposed to more frequently more attractive than those they're exposed to less frequently. However, those three findings do not a theory make; they need a theory to explain them, preferably one that doesn't cut against incest avoidance. Here's a simple and probable one that accounts for at least part of the picture: sexual selection.

Take any species; since I like peacocks, I'll use them. When mating season rolls around, the peacocks flaunt for the peahens, they have steamy bird sex, and soon after a new generation of birds is hatched into this world. The peacocks will inherit their father's sexy tail, and the peahens will inherit something else: their mother's preferences for those sexy tails. If those sexual preferences weren't inherited, mating in the next generation would be random with respect to the tails. Since it isn't, we can safely assume that, to at least some extent, the preferences are hereditary.

                                                               Just like the preference for hunting equipment is. I'm a shotgun man myself. 

So let's return to the facts in need of an explanation. Picture your mother and father having sex to conceive you - make Freud proud. Whatever physical traits your parents had will be passed onto you. Additionally, whatever preferences your parents had for those traits that attracted them to each other will be passed on as well. That would seem to be able to explain (1) and the results of the photo manipulation study fairly well. By morphing in your own traits to the picture to some degree, you're morphing in those same traits that you're going to tend to have a preference towards. The result? You find those pictures slightly more attractive.

How about the first experiment that primed pictures of the parents? It seems at least plausible that if one truly found their opposite sex parent attractive, ratings of strangers would go down by comparison, not up. Concluding that one found strangers more sexually appealing because of that sexual aversion to their parents would be just as consistent with the data; at the very least, it can't be ruled out by the results found here. As for the third experiment, admitting a sexual attraction to one's own family can be quite socially damning, so it hardly seems surprising that people would avoid doing so.  

                                                       "You look just like my sister and that is so hot! Would you mind wearing her clothes?"

Now I want to look at how the authors explain their results. Fraley & Marks (2010) suggest the following:
...the mechanisms that promote familiarity, bonding, and attraction are most likely to operate on inputs experienced in the early family environment. For example, if sexual imprinting really takes place in humans, then one’s early interactions with primary attachment figures can play an influential role in shaping the “ideal” for what kinds of people one will find attractive...
A tempting suggestion for some, no doubt, until one asks some perfectly relevant questions, like: why would the sexual imprinting take place during early interactions in childhood? Why would the stimulus that the imprinting responds to be the caregivers in the house (especially them, given the costs of inbreeding), as opposed to people encountered outside the family? Combining the two questions gives us the following: why would anyone suppose evolution had designed our psychology to become sexually attracted (in the long term) to the physical traits of our close genetic relatives at a time when we are pre-reproductive? Frankly, I can't think of a reason we would expect that to happen, and one isn't suggested in the paper.

By the same token, Fraley & Marks (2010) go on to suggest that the aversion to incest is simply a matter of habituation - as opposed to the Westermarck effect - but again offer no reason as to why habituation would have this particular effect. At the same time, habituation would also seem to make people more attractive the more familiar they were, according to the authors' interpretation of their work, and while Fraley & Marks (2010) note this contradiction, they don't do a good job of explaining it away. They try to draw on some kind of distinction between conscious and unconscious recognition of the familiar, but I don't think they make a case for it.

On the whole, that is a very unsatisfying explanation, especially compared to other models of incest aversion. Point: Westermarck. Freud is still wrong.
  
References: Fraley, R.C. & Marks, M.J. (2010). Westermarck, Freud, and the incest taboo: does familial resemblance activate sexual attraction? Personality and Social Psychology Bulletin, 36, 1202-1212

Ridley, M. (1993). The Red Queen: Sex and the Evolution of Human Nature. Harper: New York.

Sunday, January 15, 2012

What Are We To Make Of The Term "Race"?

In the language of biology, race has no hard definition. The most basic taxonomic classification that we as humans get without resorting to "eyeballin' it" is species, the most frequently cited definition being: a group of organisms capable of interbreeding and producing fertile offspring. All races of humans definitely fall into the same species category (if they didn't, we'd hardly be calling ourselves humans). Additionally, in terms of our cognitive functioning, it's unlikely that people were ever selected to encode the races of other people, given that they were not likely to travel far enough to ever really encounter someone of a different race (Kurzban, Tooby, & Cosmides, 2001), not to mention that the question of what selective advantage encoding race would bring remains unanswered.

So surely that means race is simply an arbitrary social construct with no real underlying differences between groups, right? Well...

                                                                            "Have fun with that, buddy. I'm going to sit this one out."

The answer is both a "yes" and a "no", but we'll get to that in a minute. Let's return to that definition of a species first. Imagine a hypothetical population of mice (A1), all from the same area of the world. Half of that population is randomly selected and moved to a new area (A2), so the two groups are reproductively isolated. It's unlikely the two groups of mice will evolve in the same direction, as each group will have to deal with different selection pressures and drift. Let's further say that each generation, you took a sample of mice from A1 and A2 and attempted to breed them, to see if they produced viable offspring. Turns out they do, leaving you quite unsurprised and able to publish your results in "Who Cares?" Monthly. If you continued this experiment long enough, eventually you'd find that some percentage of the mice from the two groups would probably fail to successfully produce viable offspring.

In the span of a single generation, then, two groups that used to be the same species would no longer (all) be the same species. That wouldn't happen because of any one sudden change, but because of genetic differences that had been accumulating over time. This suggests that while there is less agreement over what counts as a race, relative to a species, the concept itself need not be discarded despite its fuzziness; it may actually refer to something worth considering, as evolution doesn't share our penchant for neat and tidy categorization. The example also demonstrates that the term species is not without ambiguity itself, despite its clear definition. Consider that all the mice in A1 could successfully reproduce with all the other A1 mice (A1 = A1), all the A2 mice with other A2 mice (A2 = A2), but only a certain percentage of A1s could reproduce with A2s (some A1 = A2 and some other A1 =/= A2). This means that some of the A1 mice could be considered an identical and/or separate species from the A2 mice, depending on your frame of reference. (Another way of putting this would be that the difference between statistically significant and not statistically significant is itself not significant.)
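The frame-of-reference problem above can be sketched in a few lines of code (the mice and the breeding outcomes here are invented purely for illustration): if we define "same species" as "can produce viable offspring together", that relation turns out not to be transitive, so it can't carve the mice into neat species categories.

```python
# Hypothetical breeding results for four mice, two from population A1
# and two from population A2. These outcomes are invented to illustrate
# the thought experiment, not drawn from any real data.
can_interbreed = {
    ("a1_mouse_1", "a1_mouse_2"): True,   # all A1 mice can interbreed
    ("a2_mouse_1", "a2_mouse_2"): True,   # all A2 mice can interbreed
    ("a1_mouse_1", "a2_mouse_1"): True,   # some A1-A2 pairs still can...
    ("a1_mouse_2", "a2_mouse_1"): False,  # ...while others cannot
    ("a1_mouse_1", "a2_mouse_2"): False,
    ("a1_mouse_2", "a2_mouse_2"): False,
}

def same_species(a, b):
    """Naive test: call two mice 'the same species' iff they can
    produce viable offspring together (order of the pair ignored)."""
    return can_interbreed.get((a, b), can_interbreed.get((b, a), False))

# Not transitive: a1_mouse_2 is 'the same species' as a1_mouse_1, and
# a1_mouse_1 is 'the same species' as a2_mouse_1, yet a1_mouse_2 is
# not 'the same species' as a2_mouse_1.
print(same_species("a1_mouse_2", "a1_mouse_1"))  # True
print(same_species("a1_mouse_1", "a2_mouse_1"))  # True
print(same_species("a1_mouse_2", "a2_mouse_1"))  # False
```

A relation that fails transitivity can't define equivalence classes, which is just a formal way of restating the point: whether a1_mouse_1 and a2_mouse_1 count as one species or two depends on which mice you compare them through.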

                                                   Try to organize these by color, then tell me the exact moment one color transitions to another. 

Now, obviously, it hasn't even come close to that point when it comes to race in people. All humans are still very much the same species, and the degree of genetic diversity between individuals is rather small compared to chimps (Cosmides, Tooby, and Kurzban, 2003). The more general point is that just because that is true, and just because the definition of race generally amounts to an "I know it when I see it", it doesn't mean there are no genetic differences between races worth considering (contingent, of course, upon how one defines race, in all its fuzziness).  

It is also important to keep in mind that percentage of genetic difference per se does not determine the effect those differences will have. For instance, Cosmides, Tooby, and Kurzban (2003) note:
Within population genetic variance was found to be [approximately] 10 times greater than between-race genetic variance (i.e. two neighbors of the same ‘race’ differ many times more, genetically speaking, than a mathematically average member of one ‘race’ differs from an average member of another).
By the same token, the difference in height between the average man and the average woman (a few inches) is smaller than the range of heights found within each gender (a few feet). I don't find such statements terribly useful. Sure, the statements may be matters of fact and they may tell us we share a lot more than we don't, but they in no way speak to the differences that do exist. Men are taller than women overall, and that needs an explanation. Put another way, humans share more - much more - of their DNA with chimpanzees, relative to the amount they do not share. However, the amount they don't share does not cease to be relevant because of that fact.
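To make the statistical point concrete with a toy calculation (the height numbers below are invented for illustration, not real data), Python's statistics module is enough to show that within-group variance can exceed between-group variance while a perfectly real mean difference remains:

```python
from statistics import mean, pvariance

# Hypothetical heights (cm) for two overlapping groups.
group_a = [160, 165, 170, 175, 180, 185]
group_b = [150, 155, 160, 165, 170, 175]

# Average variance within each group vs. variance between group means.
within_var = mean([pvariance(group_a), pvariance(group_b)])
between_var = pvariance([mean(group_a), mean(group_b)])

mean_difference = mean(group_a) - mean(group_b)

# Within-group variance exceeds between-group variance here, yet the
# 10 cm mean difference is still a real pattern needing an explanation.
print(within_var > between_var)  # True
print(mean_difference)           # 10.0
```

The two facts simply answer different questions: the variance ratio tells you how much individuals overlap, while the mean difference tells you whether the groups differ at all, and neither makes the other go away.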

                                                             *This product has been tested on animals we only share 93% of our DNA with.

The real question is in what domains do different groups tend to differ from each other, and what is the extent of those differences? Are those differences in terms of mean values or variances of a trait? Are they confined to non-psychological factors, like skin and hair color? I will admit near complete ignorance of what the answers to those questions would look like, nor do I feel they'd be particularly easy to obtain in many cases. Some examples could include issues of lactose intolerance among certain populations, sickle cell anemia in others, and the odd fact that while rates of identical twinning tend to be constant across races, the rates of dizygotic twinning can range from as low as 1/330 in Asian populations to 1/63 among African populations (Segal, 2000). However, the point of this post was not to answer those questions; rather, the point was to demonstrate that such questions need not be immediately shunned because of the definitional issues (of which, to restate, there are plenty) and political implications that come with the term race.

So while race may be a term that gets an arbitrary or subjective definition across different contexts and people, and while individuals differ more than races do, that does not imply the term is useless in all situations. People may disagree on precisely which colors should be considered blue, red, or purple, but that doesn't mean we should stop thinking about different colors altogether in favor of one single color. It should go without saying that just because differences might exist between groups of people, in whatever form, that's no justification for treating any person as a representative member of their group rather than as an individual - but it's probably something that should be said more often anyway. So there it is.

References: Cosmides, L., Tooby, J., & Kurzban, R. (2003). Perceptions of race. Trends in Cognitive Sciences, 7, 173-179.

Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. Proceedings of the National Academy of Sciences, 98, 15387-15392.

Segal, N.L. (2000). Entwined Lives: Twins, and What They Tell Us About Human Behavior. Plume.

Thursday, January 12, 2012

I Meme You No Harm

...[T]he evidence strongly suggests that war is not a primordial instinct that we share with chimpanzees but a cultural innovation, a virulent meme that began spreading around the world about 10,000 years ago and still infects us. - John Horgan
What a hopeful thought: humans have no innate predisposition for coalitional violence - the large scale version of which we would call war. No. Violence, you see, is a meme; it's an infection; part of this mysterious "culture" thing, which is not to be conflated in any way with biology. Apparently, it's also a meme that humans were capable of spreading to chimps, via the introduction of bananas to make naturalistic observations easier. Who knew that fruit came with, basically, a meme of the plot to 28 Days Later?

                                                                                    Bananas: the ultimate catalyst of war?

While this notion of "violence as a meme/infection, not anything innate" may sound hopeful to those who wish to see an end to violence, the babies that they are, it's also an incredibly dim view. For starters, you know those big canine teeth chimps have? They don't have them for eating. Rather than being utensils, they're the biological equivalent of four mouth-daggers, used mainly to, you guessed it, seriously injure or kill conspecifics (Alba et al., 2001). Given that, for the vast majority of chimpanzee evolution, there were no humans consistently handing out bananas - and thus no banana-prompted memes for fighting to drive the evolution of large canine teeth - we can rightly conclude that the origins of coalitional violence go back a bit further than Horgan's hypothesis would predict.

However, perhaps handing out concentrated resources, in the form of bananas, did actually increase violence in some chimp groups (as opposed to simply allowing researchers to observe more of it). This brings us to a question that gets at part of the reason memetics runs into serious problems explaining anything, and why Horgan's view of innateness seems lacking: why would handing out food increase violence in chimps over any other behavior, such as cooperation or masturbation? Once researchers provided additional food, there were more resources available to be shared, or more leisure time available - leading idle hands to drift toward the genitals. So to rephrase the question in terms of memes: why would we expect additional resources to successfully further the reproduction of (or even create) memes for violence specifically, when they could have had any number of other effects?

                                                                  Bananas, free time, genitals; do you see the picture I'm painting here?

Before going any further, it would be helpful to clarify what is meant by the term "meme". I'll defer to Atran's (2002) use of the term: "Memes are hypothetical cultural units, an idea or practice, passed on by imitation. Although nonbiological, they undergo Darwinian selection, like genes. Cultures and religions are supposedly coalitions of memes seeking to maximize their own fitness, regardless of the fitness costs for their human hosts". As a thought experiment for understanding how evolution could work in a non-biological setting, the term works alright; when the idea runs up against reality, there are a lot of issues. I'd like to focus on what I feel is one of the biggest: the inability of meme theory to differentiate between the structure of the mind and the structure of the meme.

Memes aren't supposed to reproduce and spread randomly. For starters, they're generally species-specific: if you put a songbird in the same room as a cat, provided the bird doesn't end up dead, the "meme" of birdsong will never transfer to the cat, no matter how much singing the bird does. You can show chimpanzees pictures of LOLcats their entire life, and I don't think you'll ever get so much as a chuckle from the apes, much less any imitation. Even within species, the spread of memes is not random. Let's say I read something profoundly stupid about evolutionary psychology and, out of frustration, slam my head onto the keyboard to momentarily distract myself from the pain. The head-slam will generate a string of text, but that text won't inspire people to replicate it and pass it along. What makes that bit of text less likely to be passed around than a phrase like, "Tonight. You"?

                                                                           Sometimes, bananas get tired of waiting for idle hands.

An obvious candidate answer would be that one phrase appeals to our particular psychology in some way, whereas the other doesn't. This tells us that, both within and between species, what information gets passed on is going to be highly dependent on the existing structure of the mind; specifically, what kind of information the existing modules are already sensitive to. To explain why a meme for violence - specifically violence - spreads throughout a population, you'd need to reference an organism already prepared for violence. Memes don't create violence in a mind not already prepared for violence in certain situations; some degree of violence would need to be innate. Similarly, viruses don't create the ability of host cells to reproduce them; they use the preexisting machinery for that job. In the same fashion, you'd need to reference an organism already prepared for birdsong to explain why such a meme would catch on in birds, but not cats or chimps.

I'm reminded of a story that's generally used to argue against the notion of the universe, or our planet, being "fine-tuned" for life, but I think it works well to torpedo Horgan's suggestion further. It goes something like this:
One day, a puddle awoke after a rainstorm. The puddle thought to itself, "Well, isn't this interesting? The hole I find myself lying in seems remarkably well-suited to me; in fact, the hole seems to fit my shape rather perfectly. It seems incredibly improbable that I would end up in a hole that just happens to fit me, of all the possible places I could have ended up. Therefore, I can only conclude the hole was designed to have me in it".
The shape of the water, obviously, is determined by the shape of its container - the hole. Likewise, the shape that information takes in a mind is determined by the shape of that mind - its modules, which all perceive, process, manipulate, and create information in their own fashion, rather than simply reproducing a high-fidelity copy (Atran, 2002). If you take away the container (the mind), you'll quickly discover that the water (the memes) has no shape of its own, and that a random string of words is as good a meme as any.

                                                            A good example of both a meme and the depth of thought displayed by puddles.
  
Further, I don't see the concept of a meme adding anything above and beyond what predictions can already be drawn from the concept of a modular mind, nor do I think you can derive already existing states of affairs from meme theory. If the human mind has evolved to respond violently towards certain situations, contingent on context, we're in a stronger position to predict when and why violence will occur than if we just say, "there's a meme for violence". As far as I can tell, the latter proposition makes few to no specific predictions, harking back to the illusion of explanatory depth. ("Norms, I'd like you to meet Memes. No one can seem to figure out much about either of you, so I'm sure you two can bond over that.")

Though I have yet to hear any novel or useful predictions drawn from meme theory, I have heard plenty of smug comments along the lines of, "religion is just a harmful meme, parasitizing your weak mind (and mine is strong enough to resist)", or the initial quote. Until I hear something useful coming from the field of memetics, it's probably best to pull back on the non-explanations passed off as worthwhile ones.

References: Alba, D.M., Moya-Sola, S., & Kohler, M. (2001). Canine reduction in the Miocene hominid Oreopithecus bambolii: Behavioral and evolutionary implications. Journal of Human Evolution, 40, 1-16.
 
Atran, S. (2002). In gods we trust: The evolutionary landscape of religion. New York: Oxford University Press.

Sunday, January 8, 2012

So Drunk You're Seeing Double (Standards)

Jack and Jill (both played by Adam Sandler, for our purposes here) have been hanging out all night. While both have been drinking, Jill is substantially drunker than Jack. While Jill is in the bushes puking, she remembers that Jack had mentioned earlier he was out of cigarettes and wanted to get more. Once she emerges from the now-soiled side of the lawn she was on, Jill offers to drive Jack to the store so he can buy the cigarettes he wants. Along the way, they are pulled over for drunk driving. Jill wakes up the next day and doesn't even recall getting into the car, but she regrets doing it.

Was Jill responsible for making the decision to get behind the wheel?

                        More importantly, should the cop have let them off in the hope that the drunk driving would have stopped this movie from ever being made?

Jack and Jill (again, both played by Adam Sandler, for our purposes here) have been hanging out all night. While both have been drinking, Jill is substantially drunker than Jack. While Jill is in the bushes puking, she remembers that Jack had mentioned earlier he thought she was attractive and wanted to have sex. Once she emerges from the now-soiled side of the lawn she was on, Jill offers to have sex with Jack, and they go inside and get into bed together. The next morning, Jill wakes up, not remembering getting into bed with Jack, but she regrets doing it.

Was Jill responsible for making the decision to have sex?

                              More importantly, was what happened sex, incest, or masturbation? Either way, if Adam Sandler was doing it, it's definitely gross.

According to my completely unscientific digging around discussions regarding the issue online, I can conclusively state that opinions are definitely mixed on the second question, though not so much on the first. In both cases, the underlying logic is the same: person X makes decision Y willingly while under the influence of alcohol, and later does not remember and regrets Y. As seen previously, slight changes in phrasing can make all the difference when it comes to people's moral judgments, even if the underlying proposition is, essentially, the same. 

To explore these intuitions in one other context, let's turn down the dimmer, light some candles, pour some expensive wine (just not too much, to avoid impairing your judgment), and get a little more personal with them: You have been dating your partner - let's just say they're Adam Sandler, gendered to your preferences - who decided one night to go hang out with some friends. You keep in contact with your partner throughout the night, but as it gets later, the responses stop coming. The next day, you get a phone call; it's your partner. Their tone of voice is noticeably shaken. They tell you that after they had been drinking for a while, someone else at the bar had started buying them drinks. Their memory is very scattered, but they recall enough to let you know that they had cheated on you, and, that at the time, they had offered to have sex with the person they met at the bar. They go on to tell you they regret doing it.     

Would you blame your partner for what they did, or would you see them as faultless? How would you feel about them going out drinking alone the next weekend?

                                                      If you assumed the Asian man was the Asian woman's husband, you're a racist asshole.

Our perceptions of the situation and the responsibilities of the involved parties are going to be colored by self-interested factors (Kearns & Fincham, 2005). If you engage in a behavior that can do you or your reputation harm - like infidelity - you're more likely to try and justify that behavior in ways that remove as much personal responsibility as possible (such as: "I was drunk" or "They were really hot"). On the other hand, if you've been wronged, you're more likely to lump as much blame as possible on the party that wronged you, discounting environmental factors. Both perpetrators and victims bias their views of the situation; they just tend to do so in opposite directions.

What you can bet on, despite my not having available data on the matter, is that people won't take kindly to having their status as either "innocent of (most) wrongdoing" or "victim" questioned. There is often too much at stake, in one form or another, to let consistency get in the way. After all, being a justified victim can easily put one in a strong social position, just as being known as someone who slanders others in an unjustified fashion can drop you down the social ladder like a stone.

                                                                      There's only one way to solve these problems once and for all.

References: Kearns, J.N. & Fincham, F.D. (2005). Victim and perpetrator accounts of interpersonal transgressions: Self-serving or relationship-serving biases? Personality and Social Psychology Bulletin, 31, 321-333.