Saturday, December 31, 2011

Performance Enhancing Surgery

In the sporting world - which I occasionally visit via a muted TV on at a bar - I'm told that steroid use is something of a hot topic. Many people don't seem to take too kindly to athletes who use these performance-enhancing drugs, as they are seen as being dangerous and giving athletes an unfair advantage. As I've written previously, when concerns for "fairness" start getting raised, you can bet there's more than just a hint of inconsistency lurking right around the corner.

                                               It starts with banning steroids, then, before you know it, I won't be able to use my car in the Tour de France.

On the one hand, steroids certainly allow people to surpass the level of physical prowess they could achieve without them; I get that. How that makes them unfair isn't exactly obvious, though. Surely, other athletes are just as capable of using steroids, which would level the playing field. "But what about those athletes who don't want to use steroids?" I already hear you objecting. Well, what about those athletes who don't want to exercise? Exercise and exercise equipment also allow people to surpass the level of physical prowess they could achieve without them, but I don't see anyone lining up to ban gym use.

Maybe the gym and steroids differ in some important, unspecified way. Sure, people who work out more may have an advantage over those who eschew the gym, but those advantages do not stem from the same underlying source as the ones that come with steroid use. How about glasses or contacts? Now, to the best of my provincial knowledge of the sporting world, no one has proposed we ban athletes from correcting their vision. As contact lenses allow one to artificially improve one's natural vision, that could be a huge leg up, especially for any sports that involve visual acuity (almost all of them). A similar tool that allowed an athlete to run a little faster, throw a little faster, or hit a little harder, to make up for some pre-existing biological deficit in strength, would probably be ruled out of consideration from the outset.
                                                                              "Just try and tackle me now, you juiced up clowns!"
I don't think this intuition is limited to sports; we may also see it in the animosity directed towards plastic surgery. Given that most people in the world haven't been born with my exceptional level of charm and attractiveness, it's understandable that many turn to plastic surgery. A few hundred examples of people's thoughts surrounding plastic surgery can be found here. If you're not bored enough to scroll through them, here's a quick rundown of the opinions you'll find: I would definitely get it; I would never get it; I would only get it if I was disfigured by some accident - doing it for mere vanity is wrong.

Given that the surgery generally makes people more attractive (Dayan, Clark, & Ho, 2004), the most interesting question is why people wouldn't want it, barring a fear of looking better. The opposition towards plastic surgery - and those who get it - probably has a lot to do with the sending and receiving of honest signals. In order for a signal to be honest, it needs to be correlated with some underlying biological trait. Artificially improving facial attractiveness by normalizing traits somewhat, or improving symmetry, may make the bearer more physically attractive, but those attractive traits would not be passed on to their future offspring. It's the biological equivalent of paying for a purchase with counterfeit bills.

                                                              "I couldn't afford plastic surgery, so these discount face tattoos will have to do"

Similar opposition can sometimes be seen even towards people who choose to wear makeup. Any attempts to artificially increase one's attractiveness have a habit of drawing their fair share of detractors. As for why there seems to be a difference between compensating for a natural disadvantage (in the case of contacts) in some cases, but not surpassing natural limits (in the case of steroids or plastic surgery) in others, I can't definitively say. Improving vision is somehow more legitimate than improving one's appearance, strength, or speed (in ways that don't involve lifting weights and training, anyway).

Perhaps it has something to do with people viewing attractiveness, strength, and speed as traits capable of being improved through "natural" methods - there's no machine at the gym for improving your vision, no matter how many New Year's resolutions you've made to start seeing better. Of course, there's also no machine at the gym for improving your facial symmetry, but facial symmetry plays a much greater role in determining your physical attractiveness relative to visual acuity, so surgery could be viewed as a form of cheating, in the biological sense, to a far greater extent than contacts.

References: Dayan, S., Clark, K., & Ho, A.A. (2004). Altering first impressions after plastic surgery. Aesthetic Plastic Surgery, 28, 301-306. 

Saturday, December 17, 2011

Rushing To Get Your Results Out There? Try A Men's Magazine.

I have something of an issue with the rush some researchers feel to publicize their findings before the research is available to be read. While I completely understand the desire for self-aggrandizement and for doing science-via-headlines, it puts me in a bind: I would enjoy picking apart a study in more depth, but I'm unable to adequately assess the quality of the work at the time when everyone feels the urge to basically copy and paste a snippet of the study into their column and talk about how the results offend or delight them.

Today I'm going to go out on a limb and attempt to critique a study I haven't read. My sole sources of information will be the abstract and the media coverage. It's been getting a lot of press from people who also haven't read it - and probably never will, even after it becomes available - so I think it's about time there was a critical evaluation of the issue at hand: are men's magazines normalizing and legitimizing hostile sexism?

                                                    "50 new ways for men to help keep women down? You have my undivided attention, magazine"

So let's start off with what has been said about the study: a number of quotes from "lad's mags" (the English versions of Maxim, as far as I can tell) and from convicted rapists were collected; forty men and women were not able to reliably group them into their respective categories. When the quotes were presented as coming from rapists, men tended to identify with them less, relative to when they were presented as coming from a men's magazine. The conclusion, apparently, is that these magazines are normalizing and legitimizing sexism. Just toss in some moralizing about protecting children and you have yourself a top-shelf popular psychology article.

The first big question the limited information does not address is: how and why were these specific quotes selected? (Examples of the quotes can be found here.) I'm going to go out on another limb that seems fairly stable and say the selection process was not random; a good deal of personal judgment probably went into selecting these quotes for one reason or another. If the selection process was not random, it casts into doubt whether these quotes are representative of the views of the magazine/rapists on the whole regarding women and sex.

                                                                                       Their research staff, hard at work.

Perhaps the views on the whole don't matter; simply that the magazines contained any passages that might be confused for something a rapist might say is enough to make the point for some people. There is another issue looming, however: though no information is given, the quotes look to have been edited to some degree - potentially, a very large one. Ellipses are present in 12 of the 16 quotes, with an average of one-and-a-half per quote. At the very least, even if the editing wasn't used selectively, none of the quotes are in context.

Now, I have no idea how much editing took place, nor what contexts the quotes were originally in (perhaps all the contexts were horrific), but that's kind of the point. There's no way to assess the methods used in selecting their sample of magazine and rapist quotes and presenting them until the actual paper comes out - assuming the paper explains why these particular quotes were selected and how they were edited, of course - at which point it will be old news that no one will care about anymore.

How about the results? That men were quicker to identify with quotes they thought weren't those of rapists doesn't tell us a whole lot more than men seem to have some crazy aversion towards wanting to identify with rapists. I honestly can't imagine why that might be the case.

                                                Go ahead and tell her you sometimes agree with things rapists say. There's no way that could go badly. 

Assuming that the results of the quote-labeling part of this study are taken at face-value, what would they tell us? If they merely serve to demonstrate that people aren't good at attributing some quotes about sex to rapists or non-rapists, fine; perhaps rapists don't use language that immediately distinguishes them from non-rapists, or people just aren't that good at telling the two apart. The content of a quote does not change contingent on the speaker, much like the essence of a person doesn't live on through objects they touched. That sweater you bought at that Nazi's garage sale is not a Nazi-sweater, just a boring old sweater-sweater.

It seems that the authors want to go beyond that conclusion towards one that says something about the effects these magazines may have on 'normalizing' or 'legitimizing' a behavior, or language, or sexism, or something. I feel about as inclined to discuss that idea as the authors felt to attempt to demonstrate it, which is to say not at all, from what I've seen so far.

I will, however, say this: I'm sure that if you gave me the same sources used for this study - the men's magazines and the book of rapist interviews - and allowed me to pick out my own set of quotes, I could find very different results where people can easily distinguish between quotes from rapists and men's magazines. That would then conclusively demonstrate these magazines are not normalizing or legitimizing sexism, right?

Wednesday, December 14, 2011

Hardcore Porn-etry

Sex and sexuality are hot-button topics. Not surprisingly, they are also topics that draw a lot of powerful sentiments out of people - sentiments that have something of a long-distance relationship with reality. Opinions from the opposition can range from "porn is degrading for daring to depict women enjoying sex, casual or otherwise, and is a tool of men for oppressing women" to "the idea of porn isn't inherently offensive, but [porn needs to do more to depict love and caring/it's harmful for children, who need to be protected from its grasp/has some unfavorable side-effects that need to be dealt with/is too commonly used, and I'm running out of clean socks]".

                                     "Oh yeah; that's it. You like cuddling, you whore. You love your satisfying and loving relationship, don't you, you dirty girl?"

Rather than continue on with my normal style of critique, I've decided to give it another go in limerick form. I will return to this issue in time, once a certain article comes out from behind a journal's six-month embargo wall.

Porn, it would seem,
has a reputation unclean.
It seems innocuous at first,
but soon gets much worse,
making our sexuality mean.

"Those who watch porn",
the activists shout with scorn,
"will soon turn to rape
because of that tape,
and women will be left to mourn"

"While it might hurt your wrists,"
the research insists,
"There's no connection
between a porn-based erection,
and sex in which someone resists".

Still feeling the alarm
because porn just must do harm,
"Then the effects are more subtle,"
is the proffered rebuttal
"and leaves men with no sexual charm.

All those men will think,
when they catch some girl's wink,
that she likes it rough,
and she'll do all that nasty stuff,
without so much as a blink"

The research is taken aback
by this newly formed attack.
It seems a lot like the last,
as if formulated too fast
and delivered by a similar quack

"It sounds like you're reaching,
in the service of preaching.
Before you celebrate,
you'd be well-served to demonstrate
that it's actually the porn doing the teaching

It seems reasonable,
or, at the very least, conceivable,
that porn's not to blame
and is far more tame
than you feel is believable.

If people can tell the difference between
reality and the porn on a screen,
they are capable of inquiring
about what their partner is desiring,
instead of relying on a fantasy sex scene"

Perhaps there's some underlying reason
for this open porn-hunting season:
If decided by a cognitive system,
that "porn harms" is the dictum
Other ideas may as well be treason.

Friday, December 9, 2011

Is Working Together Cooperation?

"[P]rogress is often hindered by poor communication between scientists, with different people using the same term to mean different things, or different terms to mean the same thing...In the extreme, this can lead to debates or disputes when in fact there is no disagreement, or the illusion of agreement when there is disagreement" - West et al. (2007)
I assume most of you are a little confused by the question, "Is working together cooperation?" Working together is indeed the very first definition of cooperation, so it would seem the answer should be a transparent "yes". However, according to a paper by West et al. (2007), there's some confusion that needs to be cleared up here. So buckle up for a little safari into the untamed jungles of academic semantic disagreement.

                                                                        An apt metaphor for what clearing up confusion looks like.

West et al. (2007) seek to define cooperation as such:
Cooperation: a behavior which provides a benefit to another individual (recipient), and which is selected for because of its beneficial effect on the recipient. [emphasis, mine]
In this definition, benefits are defined in terms of ultimate (reproductive) fitness benefits. There is a certain usefulness to this definition, I admit. It can help differentiate between behaviors that are selected to deliver benefits and behaviors that deliver benefits as a byproduct. The example West et al. use is an elephant producing dung. The dung an elephant produces can be useful to other organisms, such as a dung beetle, but the function of dung production in the elephant is not to provide a benefit to the beetle; it just happens to do so as a byproduct. On the other hand, if a plant produces nectar to attract pollinators, this is cooperation: the nectar benefits the pollinators in the form of a meal, and the function of the nectar is to do just that, in order to assist the plant's reproduction by attracting pollinators.

However, this definition has some major drawbacks. First, it defines cooperative behavior in terms of actual function, not in terms of proper function. An example will make this distinction a touch clearer: let's say two teams are competing for a prize in a winner-take-all game. All the members of each team work together in an attempt to achieve the prize, but only one team gets it. By the definition West et al. use, only the winning team's behavior can be labeled "cooperation". Since the losers failed to deliver any benefit, their behavior would not be cooperation, even if their behavior was, more or less, identical. While most people would call teamwork cooperation - as the intended goal of the teamwork was to achieve a mutual goal - the West et al. definition leaves no room for this consideration.

                                                              I'll let you know which team was actually cooperating once the game is over.

West et al. (2007) also seem to have a problem with the term "reciprocal altruism", which is basically summed up by the phrase, "you scratch my back (now) and I'll scratch yours (at some point in the future)".  The authors have a problem with the term reciprocal altruism because this mutual delivery of benefits is not altruistic, which they define as such:
Altruism: a behavior which is costly to the actor and beneficial to the recipient; in this case and below, costs and benefits are defined on the basis of the lifetime direct fitness consequences of a behavior.
Since reciprocal altruism is eventually beneficial to the individual paying the initial cost, West et al. (2007) feel it should be classed as "reciprocal cooperation". Except there's an issue here. Consider another case: organism X pays a cost (c) to deliver a benefit (b) to another organism, Y, at some time (T1). At some later time (T2), organism Y pays a cost (c) to deliver a benefit (b) back to organism X. So long as (c) < (b), they feel we should call the interaction between X and Y cooperation, not reciprocal altruism.

Here's the problem: the future is always uncertain. Let's say there's a parallel case to the one above, except at some point after (T1) and before (T2), organism X dies. Now, organism X would be defined as acting altruistically (paid a cost to deliver a benefit), and organism Y would be defined as acting selfishly (took a benefit without repaying). What this example tells us is that a behavior can be classed as being altruistic, mutually beneficial, cooperative, or selfish, depending on a temporal factor. In terms of "clearing up confusion" about how to properly use a term or classify a behavior, the definitions provided by West et al. (2007) are not terribly helpful. They note as much, when they write, "we end with the caveat that: (viii) classifying behaviors will not always be the easiest or most useful thing to do" (p.416), which, to me, seems to defeat the entire purpose of this paper.
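The temporal problem can be made concrete with a toy payoff calculation. This is my own illustrative sketch, not anything from the paper; the specific numbers and the function name are made up, and the labels follow the lifetime-fitness definitions quoted above:

```python
# Toy illustration: how lifetime-fitness definitions classify the same
# initial act differently depending on whether repayment ever occurs.

def classify(actor_net, recipient_net):
    """Label a behavior by its net lifetime fitness consequences
    for the actor and the recipient."""
    if actor_net < 0 and recipient_net > 0:
        return "altruistic"
    if actor_net > 0 and recipient_net > 0:
        return "mutually beneficial (cooperation)"
    if actor_net > 0 and recipient_net < 0:
        return "selfish"
    return "spiteful"

c, b = 2, 5  # cost paid by the giver, benefit to the receiver; (c) < (b)

# Case 1: Y repays X at T2. Each organism nets (b - c); both are positive.
print(classify(b - c, b - c))   # "mutually beneficial (cooperation)"

# Case 2: X dies after T1 but before T2. X nets -c; Y keeps +b unrepaid.
print(classify(-c, b))          # "altruistic" - the same act, reclassified
```

The same initial act at T1 gets two different labels, with the difference decided entirely by an event (X's death) that happens after the behavior itself.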

                               "We've successfully cleared up the commuting issue, though using our roads might not be the easiest or most useful thing to do..."

One final point of contention is that West et al. (2007) feel "...behaviors should be classified according to their impact on total lifetime reproductive success" (emphasis, mine). I understand what they hope to achieve with that, but they make no case whatsoever for why we should stop considering the ultimate effects of a behavior at the end of an organism's individual lifetime. If an individual behaves in a way that ensures he leaves behind ten additional offspring by the time he dies, but, after he is dead, the fallout from those behaviors further ensures that none of those offspring reproduce, how is that behavior to be labeled?

It seems to me there are many different ways to think about an organism's behavior, and no one perspective needs to be monolithic across all disciplines. While such a unified approach no doubt has its uses, it's not always going to clear up confusion.  

References: West, S.A., Griffin, A.S., & Gardner, A. (2007). Social semantics: Altruism, cooperation, mutualism, strong reciprocity and group selection. Journal of Evolutionary Biology, 20, 415-432

Tuesday, December 6, 2011

Discount Engagement Rings

One day, a man is out shopping for an engagement ring in preparation to pop the question to his girlfriend. After a browse through a local jewelry store, he finds what he thinks is the perfect ring: it costs $3000, and even though he's a man of modest means, he figures he can just afford it. As he prepares to make his purchase, another customer walks up and informs the man that the jewelry store down the block is having a going-out-of-business sale and selling an identical ring for only $300.

What's the boyfriend to do? Clearly, the ring at a 90% discount is the better deal, but something about buying a discount engagement ring just might not sit right with some people. While I don't have any data on the matter, I could imagine that if the girlfriend in question found out that her (previously) stunning engagement ring was bought at a steep discount, she probably wouldn't be pleased with her boyfriend's financial responsibility, and that young bachelor who moved in down the hall might seem just a little more tall, dark, and handsome.

                            One of these rings will lead to a lifelong marriage and the other to not having a girlfriend; neither one leads to sex with that woman. 

Pictured above are a $3000 diamond ring and a $300 cubic zirconia ring; try and tell the difference just by looking (good luck). The reason that a cubic zirconia ring, as opposed to a traditional diamond one, would probably not sit well with many women is not because of any noticeable aesthetic quality of the ring itself. There are, apparently, a number of ways to test whether you have a diamond or not, but the fact that these tests exist (and can often prove inconclusive) demonstrates that untrained people without special tools or knowledge have a hard time telling the two apart (which the comments confirm; many suggest the best way to tell them apart is to ask an expert). In the specific case I gave initially, the two rings would, in fact, be identical in design and material, so the only difference would be the cost.

In the case of engagement rings, however, cost is the point. The high cost of an engagement ring functions as an honest signal; not honest in the sense that it ensures fidelity or a lasting relationship, but honest in the sense that the signal is hard to fake. A poorer man could not afford both a more expensive ring and his rent and his drinking problem. Dave Chappelle summed up this principle nicely when he said, "If a man could fuck a woman in a cardboard box, he wouldn't buy a house".

                                                      Not only does he still get laid all the damn time, he didn't have to give up drinking either.

This is precisely the reason people care about whether there's a diamond or a cubic zirconia in a piece of jewelry: while both are sparkly, only one represents an honest signal of a willingness and ability to invest; the other is a fake signal that does not reliably distinguish between those who can afford to invest and those who cannot.

Examples of signaling abound in the biological world, and for good reason: when the sex that does the most investing in offspring - typically the female - is seeking out a mate, they need to assess the quality of the many potential mates. Since the investing one will be stuck with the consequences, good or bad, for a long time, it's in their best interests to be more selective to get the best package of genes and/or investment. Males displaying costly ornaments - like peacocks - or behaviors - like bowerbirds - are able to demonstrate they can afford to shoulder the hard to fake costs involved in growing/maintaining them and still survive and flourish; they have been "tested" and they passed, guaranteeing their fitness to the choosy opposite sex (Zahavi, 1975).

We've all dealt with the inconvenience of pants that are too long at some point: you occasionally step on them, they shred as you walk along the street, they get dirty as they drag, water soaks up the back of them and feels awful on your legs - and that's only for pants that are slightly too long. Imagine having a pair of jeans that happen to be a few feet too long, that you make yourself, that you can never take off, and all your prospective partners will judge you by their quality. Also, lions are trying to eat you.

                                                                            Seriously, this thing is practically begging to be killed.

Only those who are able to find the required materials and invest the time and skill in building, cleaning, and maintaining the pants would be able to keep them in viewable shape. Further, those pants would be a serious inconvenience when it comes to doing just about anything, so only the particularly able could maintain garments like them and still function. Lazy, unskilled, careless, and/or clumsy people would see those unfavorable qualities reflected in the state of their pants. One could take off the pants to avoid all the wasteful insanity, but in doing so they'd be all but committing themselves to a lifetime of celibacy, as those still wearing the pants would attract the partners.

If you're a good observer - and I know you are - you'll probably have noticed that costly signaling can take many forms: from engagement rings, to bodily ornaments, to behavior. Costly signaling is relatively context independent: the important factor is merely that the behavior is hard to fake and expensive, in terms of time, money, energy, foregone opportunities, risk, etc. It can be used for a variety of goals, such as courting mates, impressing potential allies, or intimidating rivals. We all engage in it, to varying degrees, in different ways, for several purposes, likely without realizing most of it (Miller, 2009). It's something fun to think about next time you slip into an expensive designer shirt or rail against the evils of branded products.

References: Miller, G. (2009). Spent: Sex, Evolution, and Consumer Behavior. New York, NY: Viking.

Zahavi, A. (1975). Mate selection - a selection for a handicap. Journal of Theoretical Biology, 53, 205-214 

Thursday, December 1, 2011

Is Description Explanation?

[Social Psychology] has been self-handicapped with a relentless insistence on theoretical shallowness: on endless demonstrations that People are Really Bad at X, which are then "explained" by an ever-lengthening list of Biases, Fallacies, Illusions, Neglects, Blindnesses, and Fundamental Errors, each of which restates the finding that people are really bad at X. Wilson, for example, defines "self-affirmation theory" as "the idea that when we feel a threat to our self-esteem that's difficult to deal with, sometimes the best thing we can do is to affirm ourselves in some completely different domain." Most scientists would not call this a "theory." It's a redescription of a phenomenon, which needs a theory to explain it. - Steven Pinker
If you've sat through (almost) any psychology course at any point, you can probably understand Pinker's point quite well (the full discussion can be found here). The theoretical shallowness that Steven references was the very dissatisfaction that drew me towards evolutionary theory so strongly. My first exposure to evolutionary psychology as an undergraduate immediately had me asking the sorely missing "why?" questions so often that I could have probably been mistaken for an annoying child (as if being an undergraduate didn't already do enough on that front).

                                     In keeping with the annoying child theme I also started beating up other young children, because I love consistency.

That same theoretical shallowness has returned to me lately in the form of what are known as "norms". As Fehr and Fischbacher (2004) note, "[I]t is impossible to understand human societies without an adequate understanding of social norms", and "It is, therefore, not surprising that social scientists...invoke no other concept more frequently...". Did you read that? It's impossible to understand human behavior without norms, so don't even try. Of course, in the same paragraph they also note, "...we still know very little about how they are formed, the forces determining their content, how and why they change, their cognitive and emotional underpinnings, how they relate to values, how they shape our perceptions of justice and its violations, and how they are shaped by and shape our neuropsychological architecture". So, just to recap: it's apparently vital to understand norms in order to understand human behavior, and, despite social scientists knowing pretty much nothing about them, they're referenced everywhere. Using my amazing powers of deduction, I can only conclude that most social scientists think it's vital they maintain a commitment to not understanding human behavior.

By adding the concept of "norms", Fehr and Fischbacher (2004) didn't actually add anything to what they were trying to explain (which was why some uninvolved bystanders will sometimes pay a generally small amount to punish a perceived misdeed that didn't directly affect them, if you were curious), but instead seemed to grant an illusion of explanatory depth (Rozenblit & Keil, 2002). It would seem neuroscience is capable of generating that same illusion.

                                               This thing cost more money than most people see in a lifetime; it damn sure better have some answers.

Can simply adding irrelevant neuroscience information to an otherwise bad explanation suddenly make it sound good? Apparently, the answer is a resounding "yes", at least for most people who aren't neuroscience graduate students or above. Weisberg et al. (2008) gave adults, students in a neuroscience class, and experts in the neuroscience field a brief description of a psychological phenomenon, and then offered either a 'good' or a 'bad' explanation of the phenomenon in question. In keeping with the theme of this post, the 'bad' explanations were simply circular redescriptions of the phenomenon (or, as many social psychologists would call it, a theory). Additionally, those good and bad explanations came either without any neuroscience, or with a brief and irrelevant neuroscience tidbit tacked on that described where some activity occurs in a brain scan.

Across all groups, unsurprisingly, good explanations were rated as being more satisfying than bad explanations. However, the adults and the students rated bad explanations with the irrelevant neuroscience information as actually being on the satisfying side of things, and among the students, good explanations with neuroscience sounded better as well. Only those in the expert group did not find the irrelevant neuroscience information more satisfying; if anything, they found it less so - making good explanations less satisfying, as compared to the same explanation without the neuroscience - as they understood that the neuroscience was superfluous and used awkwardly.

This cognitive illusion is quite fascinating: descriptions appear to be capable of playing the role of explanations in some cases, despite being woefully ill-suited for the task. This could mean that descriptions may also be capable of playing the role of justifications, by way of explanations; just try not to convince yourself that I've explained why they function this way.

References: Fehr, E. & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25, 63-87.

Rozenblit, L. & Keil, F. (2002). The misunderstood limits of folk science: an illusion of explanatory depth. Cognitive Science, 26, 521-562

Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., & Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20, 470-477.

Sunday, November 20, 2011

A Sense of Entitlement (Part 2)

One of the major issues that has divided people throughout recorded human history is precisely the matter of division - more specifically, how scarce resources ought to be divided. A number of different principles have been proposed, including: everyone should get precisely the same share, people should receive a share according to their needs, and people should receive a share according to how much effort they put in. None of those principles tend to be universally satisfying. The first two open the door wide for free-riders who are happy to take the benefits of others' work while contributing none of their own; the third option helps to curb the cheaters, but also leaves those who simply encounter bad luck on their own. Which principles people will tend to use to justify their stance on a matter will no doubt vary across contexts.

                                         If you really wanted toys - or a bed - perhaps you should have done a little more to earn them. Damn entitled kids...

The latter two options also open the door for active deception. If I can convince you that I worked particularly hard - perhaps a bit harder than I actually worked - then the amount I deserve goes up. The tendency to make oneself appear more valuable than one actually is turns out to be widespread; one good example is that about 90% of college professors rate themselves above average in teaching ability. When I was collecting data on women's perceptions of how attractive they thought they were, from 0 to 9, I don't think I got a single rating below a 6 from over 40 people, though I did get many 8s and 9s. It's flattering to think my research attracts so many beauties, and it certainly bodes well for my future hook-up prospects (once all these women realize how good looking and talented I keep telling myself I am, at any rate).

An alternative, though not mutually exclusive, path to getting more would be to convince others that your need is particularly great. If the marginal benefits of resources flowing to me are greater than the benefits of those same resources going to you, I have a better case for deserving them. Giving a millionaire another hundred dollars probably won't have much of an effect on their bottom line, but that hundred dollars could mean the difference between eating or not for some people. Understanding this strategy allows one to also understand why people working to change society in some way never use the motto, "Things are pretty good right now, but maybe they could be better".

                                                          Their views on societal issues are basically the opposite of their views on pizza

This brings us to the science (Xiao & Bicchieri, 2010). Today, we'll be playing a trust game. In this game, player A is given an option: he can end the game, and both players will walk away with 40 points, or he can trust and give 10 points to player B; those points are then tripled, meaning player B now has 70 points. At this time, player B is given the option of transferring some of their points back to player A. In order for player A to break even, B needs to send back 10 points; any more than 10 and player A profits. This is a classic dilemma faced by anyone extending a favor to a friend: you suffer an initial cost by helping someone out, and you need to trust that your friend will pay you back by being kind to you later, hopefully with some interest.

Slightly more than half of the time (55%), player B gave 10 points or more back to player A, which also means that about half the time player B took the reward and ran, leaving player A slightly poorer and a little more bitter. Now comes the manipulation: a second group played the same game, but this time the payoffs were different. In this group, if player A didn't trust player B and ended the game, A walked away with 80 points and B with 40. If player A did trust, both players ended up with 70 points; it also meant that if player B transferred any points back to player A, B would be putting himself at a relative disadvantage in terms of points.
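The point values in the two conditions can be made concrete with a small sketch. This is just an illustration of the payoffs as described above; the function name and structure are my own, not anything from the study:

```python
# Sketch of the two trust-game payoff structures described above.
# All point values come from the post; the function is my own framing.

def trust_game(condition, a_trusts, b_returns=0):
    """Return (A's points, B's points) for one play of the game.

    Condition 1: both players start with 40 points.
    Condition 2: A starts with 80, B with 40.
    If A trusts, A sends 10 points, which are tripled on arrival;
    B may then send back `b_returns` points (not multiplied).
    """
    a_start = 40 if condition == 1 else 80
    b_start = 40
    if not a_trusts:
        return a_start, b_start
    a = a_start - 10 + b_returns
    b = b_start + 3 * 10 - b_returns
    return a, b

# Condition 1: B must return 10 for A to break even.
assert trust_game(1, a_trusts=True, b_returns=10) == (40, 60)

# Condition 2: if A trusts and B returns nothing, both sit at 70 points;
# any repayment now drops B below A.
assert trust_game(2, a_trusts=True, b_returns=0) == (70, 70)
assert trust_game(2, a_trusts=True, b_returns=10) == (80, 60)
```

The asymmetry is easy to see this way: in the second condition, the very act of reciprocating is what creates the inequality B ends up resenting.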

In this second group, player B ended up giving back 10 or more points only 26% of the time. Apparently, repaying a favor isn't that important when the person you're repaying it to would be richer than you because of it. It would seem that fat cats don't get too much credit tossed their way, even if they behave in an identical fashion towards someone else. Interestingly, however, many player As understood this would happen; in fact, 61% of them expected to get nothing back in this condition (compared to 23% expecting nothing back in the first condition).

                                                  "Thanks for all the money. I would pay you back, it's just that I kind of deserved it in the first place."

That inequality seemed to do two things: the first is that it appeared to create a sense of entitlement on the part of the receiver that negated most of the desire for reciprocity. The second is that the mindset of the people handing over the money changed; they fully expected to get nothing back, meaning many of these donations appeared to look more like charity than a favor.

Varying different aspects of these games allows researchers to tap different areas of human psychology, and it's important to keep that in mind when interpreting the results of these studies. In your classic dictator game, when receivers are allowed to write messages to the dictators to express their feelings about the split, grossly uneven splits are met with negative messages about 44% of the time (Xiao & Houser, 2009). However, in these conditions, receivers are passive, so what they get looks more like charity. When receivers have some negotiating power, like in an ultimatum game, they respond to unfair offers quite differently, with uneven splits being met by negative messages 79% of the time (Xiao & Houser, 2005). It would seem that giving someone some power also spikes their sense of entitlement; they're bargaining now, not getting a handout, and when they're bargaining they're likely to over-emphasize their need and their value to get more.         

References: Xiao, E. & Houser, D. (2005). Emotion expression in human punishment behavior. Proceedings of the National Academy of Sciences of the United States of America, 102, 7398-7401.

Xiao, E. & Houser, D. (2009). Avoiding the sharp tongue: Anticipated written messages promote fair economic exchange. Journal of Economic Psychology, 30, 393-404.

Xiao, E. & Bicchieri, C. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470.

Saturday, November 19, 2011

A Sense of Entitlement (Part 1)

There's a lot to be said for studying behavior in the laboratory or some other artificially-generated context: it helps a researcher control a lot of the environment, making the data less noisy. Most researchers abhor noise almost as much as they do having to agree with someone (when agreement isn't in the service of disagreeing with some mutual source of contention, anyway), so we generally like to keep things nice and neat. One of the downsides of keeping things so tidy is that the world often isn't, and it can be difficult to distinguish between "noise" and "variable of interest" at times. The flip-side of this issue is that the laboratory environment can also create certain conditions - some not seen in the real world - without the researcher realizing it. Needless to say, this can have an effect on the interpretation of results.

                   "Few of our female subjects achieved orgasm while under observation by five strangers. Therefore, reports of female orgasm must largely be a myth"

Here's a for instance: say that you're fortunate enough to find some money just lying around - perhaps on the street, inexplicably in a jar in front of a disheveled-looking man. Naturally, you would collect the discarded cash, put it in your wallet, and be on your way. What you probably wouldn't do is give some of that cash to a stranger anonymously, much less half of it, yet this is precisely the behavior we see from many people in a dictator game. So why do they do it in the lab, but not in real life?

Part of the reason seems to lie in the dictator's perceptions of the expectations of the receivers. The rules of the game set up a sense of entitlement on the part of the receivers - complete with a sense of obligation on the part of the dictator - with the implicit suggestion being that dictators are supposed to share the pot, perhaps even fairly. But suppose there was a way to get around some of those expectations - how might that affect the behavior of dictators?

                               "Oh, you can split the pot however you want. You can even keep it all if you don't care about anyone but yourself. Just saying..."

This possibility was examined by Dana, Cain, and Dawes (2006), who ran the standard dictator game at first, telling dictators to decide how to divide $10 between themselves and another participant. After the subjects entered their first offer, they were then given a new option: the receivers didn't yet know the game was being played, and, if the dictators wanted, they could take $9 and leave, meaning the receivers would get nothing, but would also never know they could have gotten something. Bear in mind, the dictators could have kept $9 and given a free dollar to someone else, or kept the extra dollar with the receiver still getting nothing, so taking the exit option destroys overall welfare purely to keep the receiver ignorant. When given this option, about a third of the dictators opted out, taking the $9 welfare-destroying option and leaving to make sure the receiver never knew.
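The welfare arithmetic here is worth spelling out. A minimal sketch, using only the dollar figures given in the description of the study (the option labels are mine):

```python
# Total welfare under three options a dictator faces in the
# Dana, Cain, & Dawes (2006) setup, as described above:
# (dictator's take, receiver's take)
options = {
    "keep_all":      (10, 0),  # keep the full $10; receiver learns the game happened
    "keep_9_give_1": (9, 1),   # keep $9, give a free dollar away
    "quiet_exit":    (9, 0),   # take $9 and leave; receiver never knows
}

for name, (dictator, receiver) in options.items():
    print(f"{name}: dictator ${dictator}, receiver ${receiver}, total ${dictator + receiver}")

# The quiet exit is strictly worse for total welfare: the dictator earns the
# same $9 as in keep_9_give_1 and a dollar less than in keep_all, so the only
# thing the exit buys is the receiver's ignorance.
assert sum(options["quiet_exit"]) < min(sum(options["keep_all"]),
                                        sum(options["keep_9_give_1"]))
```

Framed this way, the third of dictators who exited were paying a real dollar (relative to keeping everything) for nothing but appearances.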

These dictators exited the game when there was no threat of being punished for dividing unfairly or having their identity disclosed to the receiver, implying that this behavior was largely self-imposed. This tells us many dictators aren't making positive offers - fair or otherwise - because they find the idea of giving away money particularly thrilling. They seem to want to appear fair and generous, but would rather avoid the costs of doing so - or, rather, the costs of not doing so and appearing selfish. There were, however, still a substantial number of dictators who offered more than nothing. There are a number of reasons this may be the case, most notably because the manipulation only allowed dictators to effectively bypass the receiver's sense of entitlement, not their own sense of obligation that the game itself helped to create.

Another thing this tells us is that people are not going to respond to free money with universal gratitude - a response many dictators apparently anticipated. Finding a free dollar on the street is a much different experience than being given a dollar by someone who is deciding how to divide ten. One triggers a sense of entitlement - clearly, you deserve more than a free dollar, right? - while the other does not, meaning one will tend to be viewed as a loss, despite the fact that it isn't.

How these perceptions of entitlement change under certain conditions will be the subject of the next post.

References: Dana, J., Cain, D.M., & Dawes, R.M. (2006). What you don't know won't hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes, 100, 193-201.

Tuesday, November 15, 2011

Somebody Else's Problem

Let's say you're a recent high-school graduate at the bank, trying to take out a loan to attend a fancy college so you can enjoy the privileges of complaining about how reading is, like, work, and living the rest of life in debt from student loans. A lone gunman rushes into the bank, intent on robbing the place. You notice the gunman is holding a revolver, meaning he only has six bullets. This is good news, as there happen to be 20 people in the bank; if you all rush him at the same time, he wouldn't be able to kill more than six people, max; realistically, he'd only be able to get off three or four shots before he was taken down, and there's no guarantee those shots will even kill the people they hit. The only practical solution here should be to work together to stop the robbery, right? 

                                                        Look on the bright side: if you pull through, this will look great on your admissions essay

The idea that evolutionary pressures would have selected for such self-sacrificing tendencies is known as "group selection", and is rightly considered nonsense by most people who understand evolutionary theory. Why doesn't it work? Here's one reason: let's go back to the bank. The benefits of stopping the robbery will be shared by everyone at the abstract level of the society, but the costs of stopping the robbery will be disproportionately shouldered by those who intervene. While everyone is charging the robber, if you decide that you're quite comfortable hiding in the back, thank you very much, your chances of getting shot decline dramatically and you still get the benefit; just let it be somebody else's problem. Of course, most other people should realize this as well, leaving everyone rather disinclined to try to stop the robbery. Indeed, there are good reasons to suspect that free-riding is the best strategy (Dreber et al., 2008).

There are, unfortunately, some people who think group selection works and actually selects for tendencies to incur costs at no benefit. Fehr, Fischbacher, and Gachter (2002) called their bad idea "strong reciprocity":
"A person is a strong reciprocator if she is willing to sacrifice resources (a) to be kind to those who are being kind... and (b) to punish those who are being unkind...even if this is costly and provides neither present nor future material rewards for the reciprocator" (p.3, emphasis theirs)
So the gist of the idea would seem to be (to use an economic example) that if you give away your money to people you think are nice - and burn your money to ensure that mean people's money also gets burned - with complete disregard for your own interests, you're going to somehow end up with more money. Got it? Me neither...

                                                               "I'm telling you, this giving away cash thing is going to catch on big time."

So what would drive Fehr, Fischbacher, and Gachter (2002) to put forth such a silly idea? They don't seem to think existing theories - like reciprocal altruism, kin selection, costly signaling theory, etc. - can account for the way people behave in laboratory settings. That, and existing theories are based around selfishness, which isn't nice, and the world should be a nicer place. The authors seem to believe that those previous theories lead to predictions like: people should "...always defect in a sequential, one-shot [prisoner's dilemma]" when playing anonymously. That one sentence contains two major mistakes: the first mistake is that those theories most definitely do not say that. The second mistake is part of the first: they assume that people's proximate psychological functioning will automatically fall in line with the conditions they attempt to create in the lab, which it does not (as I've mentioned recently). While it might be adaptive, in those conditions, to always defect at the ultimate level, that does not mean the proximate level will behave that way. For instance, it's a popular theory that sex evolved for the purposes of reproduction. That people have sex while using birth control does not mean the reproduction theory is unable to account for that behavior.

As it turns out, people's psychology did not evolve for life in a laboratory setting, nor is the functioning of our psychology going to be adaptive in each and every context we find ourselves in. Were this the case, returning to our birth control example, simply telling someone that having sex on the pill removes the possibility of pregnancy would lead people to immediately lose all interest in the act (either having sex or using the pill). Likewise, oral sex, anal sex, hand-jobs, gay sex, condom use, and masturbation should all disappear too, as none are particularly helpful in terms of reproduction.

                         Little known fact: this parade is actually a celebration of a firm understanding of the proximate/ultimate distinction. A very firm understanding.

Nevertheless, people do cooperate in experimental settings, even when cooperating is costly, the game is one-shot, there's no possibility of being punished, and everyone's ostensibly anonymous. This poses another problem for Fehr and his colleagues: their own theory predicts this shouldn't happen either. Let's consider an anonymous one-shot prisoner's dilemma with a strong reciprocator as one of the players. If they're playing against another strong reciprocator, they'll want to cooperate; if they're playing against a selfish individual, they'll want to defect. However, they don't know ahead of time who they're playing against, and once they make their decision it can't be adjusted. In this case, they run the risk of defecting on a strong reciprocator or benefiting a selfish individual while hurting themselves. The same goes for a dictator game; if they don't know the character of the person they're giving money to, how much should they give?

The implications of this extend even further: in a dictator game where the dictator decides to keep the entire pot, third-party strong reciprocators should not really be inclined to punish. Why? Because they don't know a thing about who the receiver is. Both the receiver and dictator could be selfish, in which case punishing wouldn't make much sense. The dictator could be a strong reciprocator and the receiver could be selfish, in which case punishment would make even less sense. Both could be strong reciprocators, unsure of the other's intentions. Punishment would only make sense if the dictator was selfish and the receiver was a strong reciprocator, but a third party has no way of knowing whether that's the case. (It also means that if strong reciprocators and selfish individuals are about equal in the population, punishment in these cases would be a waste three-fourths of the time - maybe half at best, if they want to punish selfish people no matter who those people are playing against - meaning strong-reciprocator third parties should never punish.)
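The fractions in that parenthetical can be checked directly. A quick sketch, assuming (as the parenthetical does) that strong reciprocators and selfish types are equally common and that the two players' types are independent:

```python
from itertools import product

# Each of (dictator, receiver) is independently a strong reciprocator ("SR")
# or selfish ("S") with probability 1/2, giving four equally likely pairings.
cases = list(product(["SR", "S"], repeat=2))  # (dictator_type, receiver_type)

# By the logic above, punishment only "makes sense" when a selfish dictator
# has shortchanged a strong-reciprocator receiver.
useful = [(d, r) for d, r in cases if d == "S" and r == "SR"]
assert len(useful) / len(cases) == 0.25   # i.e., wasted three-fourths of the time

# If the punisher instead targets any selfish dictator, regardless of who
# the receiver is:
useful_loose = [(d, r) for d, r in cases if d == "S"]
assert len(useful_loose) / len(cases) == 0.5  # still wasted half the time
```

Either way, a third-party strong reciprocator acting under uncertainty is paying costs that miss their target at least half the time, which is the problem for the theory.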

                                                                     There was some chance he might have been being a dick... I think.

The main question for Fehr and his colleagues would then not be, "why do people reciprocate cooperation in the lab" - as reciprocal altruism and the proximate/ultimate distinction can already explain that without resorting to group selection - but rather, "why is there any cooperation in the first place?" The simplest answer to the question might seem to be that some people are prone to give the opposing player the benefit of the doubt and cooperate on the first move, and then adjust their behavior accordingly (even if they are not going to be playing sequential rounds). The problem here is that this is what a tit-for-tat player already does, and it doesn't require group selection.     

It also doesn't look good for the theory of social preferences invoked by Fehr et al. (2002) when the vast majority of people don't seem to display preferences for fairness and honesty when they don't have to, as evidenced by 31 of 33 people strategically using an unequal distribution of information to their advantage in ultimatum games (Pillutla & Murnighan, 1995). In every case Fehr et al. (2002) look at, outcomes have concrete values that everyone knows about and can observe. What happens when intentions can be obscured, or values misrepresented, as they often can be in real life? Behavior changes, and being a strong reciprocator becomes even harder. What might happen when the cost/punishment ratio changes from a universal static value, as it often does in real life (not everyone can punish others at the same rate)? Behavior will probably change again.

Simply assuming these behaviors are the result of group selection isn't enough. The odds are better that the results are only confusing when their interpreter has an incorrect sense of how things should have turned out.

References: Dreber, A., Rand, D.G., Fudenberg, D., & Nowak, M.A. (2008). Winners don't punish. Nature, 452, 348-351.

Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1-25.

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.

Friday, November 11, 2011

The "I" In The Eye Of The Beholder

When "doing the right thing" is easy, people tend to not give others much credit for it. Chris Rock made light of this particular fact briefly in one of his more popular routines - so popular, in fact, that it has its own Wikipedia page. In this routine, Chris Rock says "Niggas always want some credit for some shit they supposed to do...a Nigga will brag about some shit a normal man just does". At this point, it's probably worth pointing out that Chris Rock has an issue with using the word "Nigga" because he felt it gave racist people the feeling they had the license to use it. However, he apparently has no issue at all using the word "Faggot". The hypocrisy of the human mind is always fun.

So here's a question: which opinion represents Chris Rock's real opinion? Does Chris believe in not using a word - even comically - that could be considered offensive because it might give some people with ill intentions license to use it, or does he not? If you understand the concept of modularity properly, you should also understand that the question itself is ill-phrased; implicit in the question is an assumption of a single true self somewhere in Chris Rock's brain, but that idea is no more than a (generally) useful fiction.

                                                                  On second thought, maybe I don't really want to see your true colors...

Examples of this line of thought abound, however, despite the notion being faulty. For instance, Daniel Kahneman, when working as a psychologist in the military (a psychologist who apparently didn't appreciate modularity), felt he was observing people's true nature under conditions of stress. There's also something called the Implicit Association Test - IAT for short. The basic principle behind the IAT is that people will respond more quickly and accurately when matching terms that are more strongly mentally associated compared to ones that aren't, and the speed of your responses supposedly demonstrates what's really in your mind. So let's say you have a series of faces and words that pop up in the middle of the screen; your task is to hit one button if the face is white or the word is positive, and a different button if the face is black or the word is negative (and vice versa; i.e., one button for a black face or positive word, and another button for a white face or negative word). The administrators and supporters of this test often claim things like, "It is well known that people don't always 'speak their minds', and it is suspected that people don't always 'know their minds'", though the interpretation of the results of such a test is, well, open to interpretation.

                                                      "You're IAT results came back positive for hating pretty much everything about this guy"

Enter Nichols and Knobe, who conducted a little test to see if people were compatibilists or incompatibilists (that is, whether people feel the concept of moral responsibility is compatible with determinism or not). It turns out that how you phrase the question matters: when people were asked to assume the universe was completely deterministic and given a concrete case of immoral behavior (in this case, a man killing his wife and three kids to run off with his secretary), 72% of people said he was fully morally responsible for his actions. Following this, they asked other people the abstract question ("in a completely deterministic universe, are people fully morally responsible for their actions?"), and, lo and behold, the answers do a complete flip: now, 86% of people endorsed an incompatibilist stance, saying people aren't morally responsible.

That's a pretty neat finding, to be sure. What caught my eye was what followed, when the authors write: "In the abstract condition, people's underlying theory is revealed for what it is - incompatibilist" (p.16, emphasis mine). To continue to beat the point to death, the problem here is that the brain is not a single organ; it's composed of different, functionally specific, information-processing modules, and the output of those modules is going to depend on specific contexts. Thus, asking what the underlying theory is makes the question ill-phrased from the start. So let's jet over to the Occupy Wall Street movement that Jay-Z has recently decided to attempt to make some money on (money which he will not be using to support the movement, by the way):

                                                                                 For every 99 dollars he makes, you get none

When people demand that the 1% "pay their fair share" of the taxes, what is the word "fair" supposed to refer to? Currently, the federal tax code is progressive (that is, if you make more money, the proportion of that money that is taxed goes up), and if fairness is truly the goal, you'd suspect these people should be lobbying for a flat tax, demanding that everyone - no matter how rich or poor - pay the same proportion of what they make. It goes without saying that many people seem to oppose this idea, which raises some red flags about their use of the word "fair". Indeed, Pillutla and Murnighan (2003) make an excellent case for just how easy it is for people to manipulate the meaning of the concept to suit their own purposes in a given situation. I'll let them explain:
"Arguments that an action, an outcome, or decisions are not fair, when uttered by a recipient, most often reflect a strategic use of fairness, even if this is neither acknowledged nor even perceived by the claimant...The logical extension of these arguments is that claims of fairness are really cheap talk, i.e., unverifiable, costless information, possibly representing ulterior motives" (p.258)
The concept of fairness, in that sense, is a lot like the concept of a true, single self; it's a useful fiction that tends to be deployed strategically. It makes good sense to be a hypocrite when being a hypocrite is going to pay. Being logically consistent is not useful to you if it only ensures you give up benefits you could otherwise have and/or forces you to suffer losses you could avoid. The real trick to hypocrisy, then, according to Pillutla and Murnighan, is to appear consistent to others. If your cover gets blown, the resulting loss of face is potentially costly, depending, of course, on the specific set of circumstances.

References: Pillutla, M.M. & Murnighan, J.K. (2003). Fairness in bargaining. Social Justice Research, 16, 241-262.

Sunday, November 6, 2011

Proximately - Not Ultimately - Anonymous

As part of my recent reading for an upcoming research project, I've been poking around some of the literature on cooperation and punishment, specifically second- vs. third-party punishment. Let's say you have three people: A, B, and X. Person A and B are in a classic Prisoner's Dilemma; they can each opt to either cooperate or defect and receive payments according to their decisions. In the case of second-party punishment, person A or B can give up some of their payment to reduce the other player's payment after the choices have been made. For instance, once the game was run, person A could then give up points, with each point they give up reducing the payment of B by 3 points. This is akin to someone flirting with your boyfriend or girlfriend and you then blowing up the offender's car; sure, it cost you a little cash for the gas, bottle, rag, and lighter, but the losses suffered by the other party are far greater.
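The punishment technology in these games is just arithmetic, and a tiny sketch makes the asymmetry explicit. The 1-to-3 ratio comes from the example above; the function itself is my own framing:

```python
def punish(punisher_points, target_points, spent):
    """Second-party punishment as described above: each point the punisher
    gives up removes three points from the target's payment."""
    return punisher_points - spent, target_points - 3 * spent

# Spending 5 points inflicts a 15-point loss on the other player,
# so the punisher ends up ahead in relative (if not absolute) terms.
assert punish(40, 40, 5) == (35, 25)
```

The relative-standing point matters: punishment is always an absolute loss for the punisher, which is exactly why its persistence in one-shot games needs explaining.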

                                                           Not only does it serve them right, but it's also a more romantic gesture than flowers.

Third-party punishment involves another person, X, who observes the interaction between A and B. While X is unaffected by the outcome of the interaction itself, they are then given the option to give up some payment of their own to reduce the payment of A or B. Essentially, person X would be Batman swinging in to deliver some street justice, even if X's parents may not have been murdered in front of their eyes.

Classic economic rationality would predict that no one should ever give up any of their payment to punish another player if the game is a one-shot deal. Paying to punish other players would only ensure that the punisher walks away with less money than they would otherwise have. Of course, we do see punishment in these games from both second- and third-parties when the option is available (though second-parties punish far more than third-parties). The reasons second-party punishment evolved don't appear terribly mysterious: games like these are rarely one-shot deals in real life, and punishment sends a clear signal that one is not to be shortchanged, encouraging future cooperation and avoiding future losses. The benefits to this in the long-term can overcome the short-term cost of the punishment, for if person A knows person B is unable or unwilling to punish transgressions, person A would be able to continuously take advantage of B. If I know that you won't waste your time pursuing me for burning your car down - since it won't bring your car back - there's nothing to dissuade me from burning it a second or tenth time.

Third-party punishment poses a bit more of a puzzle, which brings us to a paper by Fehr and Fischbacher (2004), who appear to be arguing in favor of group selection (at the very least, they don't seem to find the idea implausible, despite it being just that). Since third parties aren't directly affected by the behavior of the others, there's less of a reason to get involved. Being Batman might seem glamorous, but I doubt many people would be willing to invest that much time and money - while incurring huge risks to their own life - to anonymously deliver a benefit to a stranger. One of the possible ways third-party punishment could have benefited the punisher, as the authors note, is through reputational benefits: person X punishes person A for behaving unfairly, signaling to others that X is a cooperator and a friend - who also shouldn't be trifled with - and that kindness would be reciprocated in turn. In an attempt to control for these factors, Fehr and Fischbacher ran some one-shot economic games where all players were anonymous and there was no possibility of reciprocation. The authors seemed to imply that any punishment in these anonymous cases is ultimately driven by something other than reputational self-interest.

                                                                       "We just had everyone wear one of these. Problem solved".

The real question is whether playing these games in an anonymous, one-shot fashion actually controls for these factors or removes them from consideration. I doubt that it fully does, and here's an example why: Alexander and Fisher (2003) surveyed men and women about their sexual history in anonymous and (potentially) non-anonymous conditions. Men reported an average of 3.7 partners in the non-anonymous condition and 4.2 in the anonymous one; women reported averages of 2.6 and 3.4, respectively. So there's some evidence that the anonymous conditions do help.

However, there was also a third condition where the participants were hooked up to a fake lie-detector machine - though 'real' lie detector machines don't actually detect lies - and here the numbers changed again: 4 for men, 4.4 for women. While men's answers weren't particularly different across the three conditions, women's reported number of sexual partners rose from 2.6 to 3.4 to 4.4. This difference may not have reached statistical significance, but the pattern is unmistakable.

                                  On paper, she assured us that she found him sexy, and said her decision had nothing to do with his money. Good enough for me.

What I'm getting at is that it should not just be taken for granted that telling someone they're in an anonymous condition automatically makes people's psychology behave as if no one is watching, nor does it suggest that moral sentiments could have arisen via group selection (it's my intuition that truly anonymous one-shot conditions in our evolutionary history were probably rarely encountered, especially as far as punishment was concerned). Consider a few other examples: people don't enjoy eating fudge in the shape of dog shit, drinking juice that has been in contact with a sterilized cockroach, holding rubber vomit in their mouth, eating soup from a never-used bedpan, or using sugar from a glass labeled "cyanide", even if they labeled it themselves (Rozin, Millman, & Nemeroff 1986). Even though these people "know" that there's no real reason to be disgusted by rubber, metal, fudge, or a label, their psychology still (partly) functions as if there was one.

I'll leave you with one final example of how explicitly "knowing" something (e.g. this survey is anonymous; the sugar really isn't cyanide) can alter the functioning of your psychology in some cases, to some degree, but not in all cases.

If I tell you you're supposed to see a dalmatian in the left-hand picture, you'll quickly see it and never be able to look at that picture again without automatically seeing the dog. If I tell you that the squares labeled A and B in the right-hand picture are actually the same color, you'll probably not believe me at first. Then, when you cover up all of that picture except A and B and find out that they actually are the same color, you'll realize why people mistake me for Criss Angel from time to time. Even so, when you look at the whole picture again, you'll never be able to see A and B as the same color, because that explicit knowledge doesn't always filter down into other perceptual systems.

References: Alexander, M.G. & Fisher, T.D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. The Journal of Sex Research, 40, 27-35.

Fehr, E. & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25, 63-87.

Rozin, P., Millman, L., & Nemeroff, C. (1986). Operation of the laws of sympathetic magic in disgust and other domains. Journal of Personality and Social Psychology, 50, 703-712.

Saturday, October 29, 2011

Excuses, Excuses, Excuses

I recently finished the latest book by Robert Trivers (2011), The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life, which is an odd title considering how little of the book is devoted to the logic of the intended topic. A better title would probably have been Things Robert Trivers Finds Interesting. After straining to stay awake through most of the 337 tedious pages of the book, I can't say I came away with any new insights or information on the subject of deception, though I did get the sense Trivers enjoys flirting with undergrads.

                                                              And who wouldn't? It's just one of the many, many benefits of getting tenure

As Matt Ridley notes, Robert Kurzban (2010) also released a book not too long ago called Why Everyone (else) is a Hypocrite - which I can't recommend highly enough - that made a solid case for why "self"-based research is problematic in the first place. The mind isn't a singular entity, but rather a collection of different mental organs, each a functionally specific information-processing mechanism. Any real mention of modularity is absent from Trivers' book, much less any active appreciation of it. I'd hesitate to say Trivers takes any idea further (as Matt does); if anything, Trivers stalls and rolls slightly backwards. Another impression I got from reading the book is that I can expect an angry phone call from Trivers if he ever reads this.

                                                         I'd like to discuss the merits of your recent review of my book in a calm, academic fashion

How might this false conception of a self affect thinking in other domains? One good example comes from the domain of morality. In this area, I get the sense that the concept of the self has been tied heavily to moral culpability, where consciousness is king. Influences seen as originating outside the realm of conscious awareness are often offered up in attempts to exculpate various behaviors.

As an example, I'd offer up a paper by Sumithran et al. (2011) examining how overweight people on diets often relapse and regain weight after initial success at dropping some pounds. The authors measured levels of various hormones in subjects' bodies that are known to influence hunger and related behaviors, like energy expenditure and food intake, finding that dieting leads to changes in these circulating hormone levels. This, they argue, could be why many dieters don't show long-term maintenance of weight loss. Fine. However, the authors lose me when they write this:
"...[A]n important finding of this study is that many of these alterations persist for 12 months after weight loss, even after the onset of weight regain, suggesting that the high rate of relapse among obese people who have lost weight has a strong physiological basis and is not simply the result of the voluntary resumption of old habits." (p. 1602, emphasis mine) 
Apparently, the authors find it interesting that they found a physiological basis for people not keeping the weight off, contrasting it with "voluntary" actions. My question would be, "What else would you even expect to find - a non-physiological basis?" After all, we are physical beings, so any changes in our thoughts or behaviors need to be the result of other physical changes. The implication seems to be that truly voluntary actions are supposed to be uninfluenced by physiology, while somehow still having an influence on the behavior of the physical body.

                                                                             "It's not my choice, as I happen to have hormones"

This doesn't seem to be a terribly uncommon thought process: while some people actively deny any influence of biology on behavior out of fear of justifying it, or claim (correctly) that a biological basis doesn't justify behavior, those same people can very quickly accept a behavior as biologically based in the hopes of making it acceptable by saying "it's not a choice". That's some interesting hypocrisy there. Did I mention there's a very interesting - and a not so interesting - book that deals with that topic?

References: Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.

Sumithran, P., Prendergast, L.A., Delbridge, E., Purcell, K., Shulkes, A., Kriketos, A.K., & Proietto, J. (2011). Long-term persistence of hormonal adaptations to weight loss. The New England Journal of Medicine, 365, 1597-1604.

Trivers, R. (2011). The folly of fools: The logic of deceit and self-deception in human life. New York, NY: Basic Books.