Sunday, November 20, 2011

A Sense of Entitlement (Part 2)

One of the major issues that has divided people throughout recorded human history is precisely the matter of division - more specifically, how scarce resources ought to be divided. A number of different principles have been proposed, including: everyone should get precisely the same share, people should receive a share according to their needs, and people should receive a share according to how much effort they put in. None of those principles tends to be universally satisfying. The first two open the door wide for free-riders who are happy to take the benefits of others' work while contributing none of their own; the third helps to curb the cheaters, but also leaves those who simply encounter bad luck on their own. Which principles people use to justify their stance on a matter will no doubt vary across contexts.

                                         If you really wanted toys - or a bed - perhaps you should have done a little more to earn them. Damn entitled kids...

The latter two options also open the door for active deception. If I can convince you that I worked particularly hard - perhaps a bit harder than I actually did - then the amount I deserve goes up. This tendency to make oneself appear more valuable than one actually is happens to be widespread; one good example is that about 90% of college professors rate themselves as above average in teaching ability. When I was collecting data on women's perceptions of their own attractiveness, on a scale from 0 to 9, I don't think I got a single rating below a 6 from over 40 people, though I did get many 8s and 9s. It's flattering to think my research attracts so many beauties, and it certainly bodes well for my future hook-up prospects (once all these women realize how good looking and talented I keep telling myself I am, at any rate).

An alternative, though not mutually exclusive, path to getting more would be to convince others that your need is particularly great. If the marginal benefits of resources flowing to me are greater than the benefits of those same resources going to you, I have a better case for deserving them. Giving a millionaire another hundred dollars probably won't have much of an effect on their bottom line, but that hundred dollars could mean the difference between eating or not for some people. Understanding this strategy allows one to also understand why people working to change society in some way never use the motto, "Things are pretty good right now, but maybe they could be better".

                                                          Their views on societal issues are basically the opposite of their views on pizza

This brings us to the science (Xiao & Bicchieri, 2010). Today, we'll be playing a trust game. In this game, player A is given an option: he can end the game and both players will walk away with 40 points, or he can trust and give 10 points to player B, which then gets multiplied by three, meaning player B now has 70 points. At this point, player B is given the option of transferring some of their points back to player A. In order for player A to break even, B needs to send back 10 points; any more than 10 and player A profits. This is a classic dilemma faced by anyone extending a favor to a friend: you suffer an initial cost by helping someone out, and you need to trust your friend will pay you back by being kind to you later, hopefully with some interest.

Slightly more than half of the time (55%), player B gave 10 points or more back to player A, which also means that nearly half the time player B took the reward and ran, leaving player A slightly poorer and a little more bitter. Now, here comes the manipulation: a second group played the same game, but this time the payoffs were different. In this group, if player A didn't trust player B and ended the game, A walked away with 80 points and B with 40. If player A did trust, both players ended up with 70 points; it also meant that if player B transferred any points back to player A, he would be putting himself at a relative disadvantage in terms of points.
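For anyone who wants the payoff bookkeeping spelled out, here's a minimal sketch of both versions of the game (the point values come from the description above; the function and its defaults are my own illustration, not anything from the paper):

```python
def trust_game(a_trusts, b_returns, a_start=40, b_start=40, transfer=10, multiplier=3):
    """Compute final payoffs for the trust game described above.

    a_start / b_start: what each player begins with (40/40 in the first
    condition, 80/40 in the second). If A doesn't trust, the game ends and
    both keep their starting points. If A trusts, A sends `transfer` points,
    which get tripled on B's end, and B then sends back `b_returns` points.
    """
    if not a_trusts:
        return a_start, b_start
    a = a_start - transfer                       # A pays the cost of trusting
    b = b_start + transfer * multiplier          # B receives the tripled transfer
    return a + b_returns, b - b_returns          # B's back-transfer settles the accounts

# Condition 1: equal starting points. B must return 10 for A to break even.
print(trust_game(a_trusts=True, b_returns=10))              # (40, 60)

# Condition 2: A starts ahead. Any back-transfer leaves B at a relative disadvantage.
print(trust_game(a_trusts=True, b_returns=10, a_start=80))  # (80, 60)
```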

In this second group, player B ended up giving back 10 or more points only 26% of the time. Apparently, repaying a favor isn't that important when the person you're repaying would end up richer than you because of it. It would seem that fat cats don't get much credit tossed their way, even when they behave in an identical fashion towards someone else. Interestingly, however, many player As understood this would happen; in fact, 61% of them expected to get nothing back in this condition (compared to 23% expecting nothing back in the first condition).

                                                  "Thanks for all the money. I would pay you back, it's just that I kind of deserved it in the first place."

That inequality seemed to do two things: first, it appeared to create a sense of entitlement on the part of the receiver that negated most of the desire for reciprocity. Second, the mindset of the people handing over the money changed; they fully expected to get nothing back, meaning many of these donations looked more like charity than a favor.

Varying aspects of these games allows researchers to tap different areas of human psychology, and it's important to keep that in mind when interpreting the results of these studies. In your classic dictator game, when receivers are allowed to write messages to the dictators to express their feelings about the split, grossly uneven splits are met with negative messages about 44% of the time (Xiao & Houser, 2009). However, in these conditions receivers are passive, so what they get looks more like charity. When receivers have some negotiating power, as in an ultimatum game, they respond to unfair offers quite differently, with uneven splits being met by negative messages 79% of the time (Xiao & Houser, 2005). It would seem that giving someone some power also spikes their sense of entitlement; they're bargaining now, not getting a handout, and when they're bargaining they're likely to over-emphasize their need and their value in order to get more.

References: Xiao, E. & Houser, D. (2005). Emotion expression in human punishment behavior. Proceedings of the National Academy of Sciences of the United States of America, 102, 7398-7401

Xiao, E. & Houser, D. (2009). Avoiding the sharp tongue: Anticipated written messages promote fair economic exchange. Journal of Economic Psychology, 30, 393-404

Xiao, E. & Bicchieri, C. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470

Saturday, November 19, 2011

A Sense of Entitlement (Part 1)

There's a lot to be said for studying behavior in the laboratory or some other artificially-generated context: it helps a researcher control much of the environment, making the data less noisy. Most researchers abhor noise almost as much as they do having to agree with someone (when agreement isn't in the service of disagreeing with some mutual source of contention, anyway), so we generally like to keep things nice and neat. One of the downsides of keeping things so tidy is that the world often isn't, and it can be difficult at times to distinguish between "noise" and "variable of interest". The flip-side of this issue is that the laboratory environment can also create certain conditions - some not seen in the real world - without the researcher realizing it. Needless to say, this can have an effect on the interpretation of results.

                   "Few of our female subjects achieved orgasm while under observation by five strangers. Therefore, reports of female orgasm must largely be a myth"

Here's a for instance: say that you're fortunate enough to find some money just lying around, perhaps on the street, inexplicably in a jar in front of a disheveled-looking man. Naturally, you would collect the discarded cash, put it into your wallet, and be on your way. What you probably wouldn't do is anonymously give some of that cash to a stranger, much less half of it, yet this is precisely the behavior we see from many people in a dictator game. So why do they do it in the lab, but not in real life?

Part of the reason seems to lie in the dictators' perceptions of the receivers' expectations. The rules of the game set up a sense of entitlement on the part of the receivers - complete with a sense of obligation on the part of the dictators - with the implicit suggestion being that dictators are supposed to share the pot, perhaps even fairly. But suppose there was a way to get around some of those expectations - how might that affect the behavior of dictators?

                               "Oh, you can split the pot however you want. You can even keep it all if you don't care about anyone but yourself. Just saying..."

This possibility was examined by Dana, Cain, and Dawes (2006), who first ran a standard dictator game, telling dictators to decide how to divide $10 between themselves and another participant. After the subjects entered their initial offer, they were given a new option: the receivers didn't yet know the game was being played, and, if the dictators wanted, they could take $9 and leave, meaning the receivers would get nothing but would also never know they could have gotten something. Bear in mind, the dictators could have kept $9 and given a free dollar to someone else, or kept an additional dollar themselves with the receiver still getting nothing, so taking the exit option destroys overall welfare purely in exchange for the receiver's ignorance. When given this option, about a third of the dictators opted out, taking the $9, welfare-destroying exit to make sure the receiver never knew.
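To lay the dictator's options out side by side, here's a quick sketch (the dollar figures are from the study as described above; the bookkeeping and labels are mine):

```python
# The dictator's broad options in Dana, Cain, & Dawes (2006), given a $10 pot.
# Each entry is (dictator, receiver, total welfare). The quiet exit is the only
# option that shrinks the pie, and the receiver never learns a game was played.

options = {
    "keep everything":    (10, 0, 10),
    "keep $9, give $1":   (9, 1, 10),
    "split evenly":       (5, 5, 10),
    "quiet exit with $9": (9, 0, 9),   # $1 of welfare destroyed to buy ignorance
}

for label, (dictator, receiver, total) in options.items():
    print(f"{label:>20}: dictator ${dictator}, receiver ${receiver}, total ${total}")
```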

These dictators exited the game even though there was no threat of being punished for dividing unfairly or of having their identity disclosed to the receiver, implying that this behavior was largely self-imposed. This tells us that many dictators aren't making positive offers - fair or otherwise - because they find the idea of giving away money particularly thrilling. They seem to want to appear fair and generous while avoiding the cost of actually being so - or, rather, the cost of not being so and appearing selfish. There were, however, still a substantial number of dictators who offered more than nothing. There are a number of reasons this may be the case, most notably that the manipulation only allowed dictators to bypass the receiver's sense of entitlement, not their own sense of obligation that the game itself helped to create.

Another thing this tells us is that people will not respond with universal gratitude to free money - a response many dictators apparently anticipated. Finding a free dollar on the street is a very different experience from being handed a dollar by someone who is deciding how to divide ten. One triggers a sense of entitlement - clearly, you deserve more than a free dollar, right? - while the other does not, meaning one will tend to be viewed as a loss, despite the fact that it isn't.

How these perceptions of entitlement change under certain conditions will be the subject of the next post.

References: Dana, J., Cain, D.M., & Dawes, R.M. (2006). What you don't know won't hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes, 100, 193-201.

Tuesday, November 15, 2011

Somebody Else's Problem

Let's say you're a recent high-school graduate at the bank, trying to take out a loan to attend a fancy college so you can enjoy the privileges of complaining about how reading is, like, work, and living the rest of life in debt from student loans. A lone gunman rushes into the bank, intent on robbing the place. You notice the gunman is holding a revolver, meaning he only has six bullets. This is good news, as there happen to be 20 people in the bank; if you all rush him at the same time, he wouldn't be able to kill more than six people, max; realistically, he'd only be able to get off three or four shots before he was taken down, and there's no guarantee those shots will even kill the people they hit. The only practical solution here should be to work together to stop the robbery, right? 

                                                        Look on the bright side: if you pull through, this will look great on your admissions essay

The idea that evolutionary pressures would have selected for such self-sacrificing tendencies is known as "group selection", and it is rightly considered nonsense by most people who understand evolutionary theory. Why doesn't it work? Here's one reason: let's go back to the bank. The benefits of stopping the robbery will be shared by everyone, at the abstract level of society, but the costs of stopping it will be disproportionately shouldered by those who intervene. While everyone else is charging the robber, if you decide that you're quite comfortable hiding in the back, thank you very much, your chances of getting shot decline dramatically and you still get the benefit; just let it be somebody else's problem. Of course, most other people should realize this as well, leaving everyone pretty disinclined to try to stop the robbery. Indeed, there are good reasons to suspect that free-riding is the best strategy (Dreber et al., 2008).
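A quick back-of-the-envelope sketch makes the asymmetry obvious (every number below is invented purely for illustration; nothing here comes from an actual model):

```python
# Toy illustration of the free-rider logic: the benefit of stopping the robbery is
# shared by all 20 hostages, but the risk of getting shot falls only on the chargers.

def risk_of_dying(n_chargers, shots_fired=4, p_kill_per_hit=0.5):
    """Rough chance any one charger is shot and killed if the group rushes the gunman."""
    p_hit = min(shots_fired / n_chargers, 1.0)   # shots spread across whoever charges
    return p_hit * p_kill_per_hit

print(risk_of_dying(n_chargers=20))   # ~0.10 risk per person if everyone charges
print(risk_of_dying(n_chargers=19))   # ~0.105 risk for those who still charge...
print(0.0)                            # ...and ~0 for the one hiding in the back
```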

There are, unfortunately, some people who think group selection works and actually selects for tendencies to incur costs at no benefit. Fehr, Fischbacher, and Gachter (2002) called their bad idea "strong reciprocity":
"A person is a strong reciprocator if she is willing to sacrifice resources (a) to be kind to those who are being kind... and (b) to punish those who are being unkind...even if this is costly and provides neither present nor future material rewards for the reciprocator" (p.3, emphasis theirs)
So the gist of the idea would seem to be (to use an economic example) that if you give away your money to people you think are nice - and burn your money to ensure that mean people's money also gets burned - with complete disregard for your own interests, you're going to somehow end up with more money. Got it? Me neither...

                                                               "I'm telling you, this giving away cash thing is going to catch on big time."

So what would drive Fehr, Fischbacher, and Gachter (2002) to put forth such a silly idea? They don't seem to think existing theories - reciprocal altruism, kin selection, costly signaling theory, etc. - can account for the way people behave in laboratory settings. That, and existing theories are based around selfishness, which isn't nice, and the world should be a nicer place. The authors seem to believe that those previous theories lead to predictions like: people should "...always defect in a sequential, one-shot [prisoner's dilemma]" when playing anonymously. That one sentence contains two major mistakes. The first mistake is that those theories most definitely do not say that. The second mistake is part of the first: the authors assume that people's proximate psychological functioning will automatically fall in line with the conditions they attempt to create in the lab, which it does not (as I've mentioned recently). While it might be adaptive, in those conditions, to always defect at the ultimate level, that does not mean the proximate level will behave that way. For instance, it's a popular theory that sex evolved for the purpose of reproduction. That people have sex while using birth control does not mean the reproduction theory is unable to account for that behavior.

As it turns out, people's psychology did not evolve for life in a laboratory setting, nor is the functioning of our psychology going to be adaptive in each and every context we find ourselves in. Were this the case - returning to our birth control example - simply telling someone that having sex while on the pill removes the possibility of pregnancy would lead people to immediately lose all interest in the act (either in having sex or in using the pill). Likewise, oral sex, anal sex, hand-jobs, gay sex, condom use, and masturbation should all disappear too, as none are particularly helpful in terms of reproduction.

                         Little known fact: this parade is actually a celebration of a firm understanding of the proximate/ultimate distinction. A very firm understanding.
                                                             
Nevertheless, people do cooperate in experimental settings, even when cooperating is costly, the game is one-shot, there's no possibility of being punished, and everyone's ostensibly anonymous. This poses another problem for Fehr and his colleagues: their own theory predicts this shouldn't happen either. Let's consider an anonymous one-shot prisoner's dilemma with a strong reciprocator as one of the players. If they're playing against another strong reciprocator, they'll want to cooperate; if they're playing against a selfish individual, they'll want to defect. However, they don't know ahead of time who they're playing against, and once they make their decision it can't be adjusted. In this case, they run the risk of defecting on a strong reciprocator or benefiting a selfish individual while hurting themselves. The same goes for a dictator game; if they don't know the character of the person they're giving money to, how much should they give?

The implications of this extend even further: in a dictator game where the dictator decides to keep the entire pot, third-party strong reciprocators should not really be inclined to punish. Why? Because they don't know a thing about who the receiver is. Both the receiver and the dictator could be selfish, so punishing wouldn't make much sense. The dictator could be a strong reciprocator and the receiver could be selfish, in which case punishment would make even less sense. Both could be strong reciprocators, unsure of the other's intentions. Punishment would only make sense if the dictator was selfish and the receiver was a strong reciprocator, but a third party has no way of knowing whether or not that's the case. (It also means that, if strong reciprocators and selfish individuals are about equal in the population, punishment in these cases would be a waste three-fourths of the time - maybe half at best, if they want to punish selfish people no matter who they're playing against - meaning strong reciprocator third-parties should never punish.)
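To spell out the arithmetic behind that last parenthetical, here's a small sketch (assuming, as above, that the two types are equally common and that punishment only "pays" when a selfish dictator has stiffed a strong reciprocator):

```python
from itertools import product

# Assume strong reciprocators (SR) and selfish types (S) are equally common,
# and that the third party can't observe either player's type.
types = ["SR", "S"]
pairings = list(product(types, repeat=2))   # (dictator type, receiver type)

# Punishment only "makes sense" when a selfish dictator keeps the pot from a strong reciprocator.
useful = sum(1 for dictator, receiver in pairings if dictator == "S" and receiver == "SR")
print(useful / len(pairings))   # 0.25 -> punishment is wasted 3/4 of the time

# Even punishing any selfish dictator, regardless of the receiver, only helps half the time.
useful_lenient = sum(1 for dictator, _ in pairings if dictator == "S")
print(useful_lenient / len(pairings))   # 0.5
```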

                                                                     There was some chance he might have been being a dick... I think.

The main question for Fehr and his colleagues would then not be, "why do people reciprocate cooperation in the lab" - as reciprocal altruism and the proximate/ultimate distinction can already explain that without resorting to group selection - but rather, "why is there any cooperation in the first place?" The simplest answer to the question might seem to be that some people are prone to give the opposing player the benefit of the doubt and cooperate on the first move, and then adjust their behavior accordingly (even if they are not going to be playing sequential rounds). The problem here is that this is what a tit-for-tat player already does, and it doesn't require group selection.     

It also doesn't look good for the theory of social preferences invoked by Fehr et al. (2002) that the vast majority of people don't seem to act on preferences for fairness and honesty when they don't have to, as evidenced by 31 of 33 people strategically using an unequal distribution of information to their advantage in ultimatum games (Pillutla and Murnighan, 1995). In every case Fehr et al. (2002) look at, outcomes have concrete values that everyone knows about and can observe. What happens when intentions can be obscured, or values misrepresented, as they often can be in real life? Behavior changes, and being a strong reciprocator becomes even harder. What might happen when the cost/punishment ratio isn't a universal static value, as it often isn't in real life (not everyone can punish others at the same rate)? Behavior will probably change again.

Simply assuming these behaviors are the result of group selection isn't enough. The odds are better that the results are only confusing when their interpreter has an incorrect sense of how things should have turned out.

References: Dreber, A., Rand, D.G., Fudenberg, D., & Nowak, M.A. (2008). Winners don't punish. Nature, 452, 348-351

Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1-25.

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.

Friday, November 11, 2011

The "I" In The Eye Of The Beholder

When "doing the right thing" is easy, people tend to not give others much credit for it. Chris Rock made light of this particular fact briefly in one of his more popular routines - so popular, in fact, that it has its own Wikipedia page. In this routine, Chris Rock says "Niggas always want some credit for some shit they supposed to do...a Nigga will brag about some shit a normal man just does". At this point, it's probably worth pointing out that Chris Rock has an issue with using the word "Nigga" because he felt it gave racist people the feeling they had the license to use it. However, he apparently has no issue at all using the word "Faggot". The hypocrisy of the human mind is always fun.

So here's a question: which opinion represents Chris Rock's real opinion? Does Chris believe in not using a word - even comically - that could be considered offensive because it might give some people with ill intentions license to use it, or does he not? If you understand the concept of modularity properly, you should also understand that the question itself is ill-phrased; implicit in the question is the assumption of a single true self somewhere in Chris Rock's brain, but that idea is no more than a (generally) useful fiction.

                                                                  On second thought, maybe I don't really want to see your true colors...

Examples of this line of thought abound, however, despite the notion being faulty. For instance, Daniel Kahneman, while working as a psychologist in the military (and apparently not appreciating modularity), felt he was observing people's true nature under conditions of stress. There's also something called the Implicit Association Test - IAT for short. The basic principle behind the IAT is that people will respond more quickly and accurately when matching terms that are more strongly mentally associated, compared to ones that aren't, and the speed of your responses supposedly demonstrates what's really in your mind. So let's say you have a series of faces and words that pop up in the middle of a screen; your task is to hit one button if the face is white or the word is positive, and a different button if the face is black or the word is negative (and vice versa; i.e. one button for a black face or a positive word, and another button for a white face or a negative word). The administrators and supporters of this test often claim things like: it is well known that people don't always 'speak their minds', and it is suspected that people don't always 'know their minds'. The interpretation of the results of such a test is, well, open to interpretation.
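The underlying logic boils down to a reaction-time comparison along these lines (a bare-bones sketch with made-up numbers; the actual test uses a more involved scoring procedure):

```python
# Bare-bones version of the IAT logic: if "congruent" pairings (associations the
# subject already holds) are answered faster than "incongruent" ones, the gap is
# read as evidence of an implicit association. All reaction times here are invented.

congruent_rts   = [612, 580, 645, 597, 630]   # ms per trial, hypothetical block A
incongruent_rts = [742, 701, 770, 715, 698]   # ms per trial, hypothetical block B

mean = lambda xs: sum(xs) / len(xs)
gap = mean(incongruent_rts) - mean(congruent_rts)
print(f"Mean slowdown on incongruent trials: {gap:.0f} ms")
```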

                                                      "You're IAT results came back positive for hating pretty much everything about this guy"

Enter Nichols and Knobe, who conducted a little test to see whether people were compatibilists or incompatibilists (that is, whether people feel the concept of moral responsibility is compatible with determinism or not). It turns out how you phrase the question matters: when people were asked to assume the universe was completely deterministic and were given a concrete case of immoral behavior (in this case, a man killing his wife and three kids to run off with his secretary), 72% of people said he was fully morally responsible for his actions. Following this, the researchers asked other people the abstract question ("in a completely deterministic universe, are people completely morally responsible for their actions?"), and, lo and behold, the answers did a complete flip; now 86% of people endorsed an incompatibilist stance, saying people aren't morally responsible.

That's a pretty neat finding, to be sure. What caught my eye was what followed, when the authors write: "In the abstract condition, people's underlying theory is revealed for what it is ─ incompatibilist" (p.16, emphasis mine). To continue to beat the point to death, the problem here is that the brain is not a single organ; it's composed of different, functionally specific, information-processing modules, and the output of these modules is going to depend on specific contexts. Thus, asking what the underlying theory is makes the question ill-phrased from the start. So let's jet over to the Occupy Wall Street movement that Jay-Z has recently decided to attempt to make some money on (money which he will not be using to support the movement, by the way):

                                                                                 For every 99 dollars he makes, you get none

When people demand that the 1% "pay their fair share" of the taxes, what is the word "fair" supposed to refer to? Currently, the federal tax code is progressive (that is, if you make more money, the proportion of it you pay in taxes goes up), and if fairness were truly the goal, you'd expect these people to be lobbying for a flat tax, demanding that everyone - no matter how rich or poor - pay the same proportion of what they make (a toy comparison of the two schemes appears below). It goes without saying that many people seem to oppose this idea, which raises some red flags about their use of the word "fair". Indeed, Pillutla and Murnighan (2003) make an excellent case for just how easy it is for people to manipulate the meaning of the concept to suit their own purposes in a given situation. I'll let them explain:
"Arguments that an action, an outcome, or decisions are not fair, when uttered by a recipient, most often reflect a strategic use of fairness, even if this is neither acknowledged nor even perceived by the claimant...The logical extension of these arguments is that claims of fairness are really cheap talk, i.e., unverifiable, costless information, possibly representing ulterior motives" (p.258)
The concept of fairness, in that sense, is a lot like the concept of a true, single self; it's a useful fiction that tends to be deployed strategically. It makes good sense to be a hypocrite when being a hypocrite is going to pay. Being logically consistent is not useful to you if it only ensures you give up benefits you could otherwise have and/or forces you to suffer losses you could avoid. The real trick to hypocrisy, then, according to Pillutla and Murnighan, is to appear consistent to others. If your cover gets blown, the resulting loss of face is potentially costly, depending, of course, on the specific set of circumstances.
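Circling back to the tax question for a moment, here's the toy comparison mentioned above, showing how "same proportion" and "progressive" come apart (the brackets and rates are invented; they bear no relation to the actual federal schedule):

```python
# Toy comparison of a progressive schedule vs. a flat tax. Brackets and rates
# are made up for illustration only.

def progressive_tax(income, brackets=((30_000, 0.10), (100_000, 0.25), (float("inf"), 0.35))):
    """Tax each slice of income at its bracket's marginal rate."""
    tax, lower = 0.0, 0
    for upper, rate in brackets:
        taxed_slice = max(0, min(income, upper) - lower)
        tax += taxed_slice * rate
        lower = upper
    return tax

def flat_tax(income, rate=0.20):
    """Everyone pays the same proportion, rich or poor."""
    return income * rate

for income in (25_000, 250_000):
    p, f = progressive_tax(income), flat_tax(income)
    print(f"${income:>7,}: progressive {p / income:.0%} effective, flat {f / income:.0%}")
```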

References: Pillutla, M.M. & Murnighan, J.K. (2003). Fairness in bargaining. Social Justice Research, 16, 241-262

Sunday, November 6, 2011

Proximately - Not Ultimately - Anonymous

As part of my recent reading for an upcoming research project, I've been poking around some of the literature on cooperation and punishment, specifically second- vs. third-party punishment. Let's say you have three people: A, B, and X. Persons A and B are in a classic Prisoner's Dilemma; they can each opt to either cooperate or defect, and they receive payments according to their decisions. In the case of second-party punishment, person A or B can give up some of their payment to reduce the other player's payment after the choices have been made. For instance, once the game has been run, person A could give up points, with each point they give up reducing the payment of B by 3 points. This is akin to someone flirting with your boyfriend or girlfriend and you then blowing up the offender's car; sure, it cost you a little cash for the gas, bottle, rag, and lighter, but the losses suffered by the other party are far greater.
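In payoff terms, that punishment technology looks something like this (a minimal sketch; the 1:3 ratio is the one from the description above, while the starting payoffs are placeholders):

```python
def punish(punisher_payoff, target_payoff, points_spent, fine_ratio=3):
    """Second-party punishment: each point the punisher gives up removes
    `fine_ratio` points from the target's payoff."""
    return punisher_payoff - points_spent, target_payoff - points_spent * fine_ratio

# Hypothetical post-game payoffs: A cooperated, B defected and came out ahead.
a_payoff, b_payoff = 10, 30
print(punish(a_payoff, b_payoff, points_spent=5))   # (5, 15): costly for A, far worse for B
```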

                                                           Not only does it serve them right, but it's also a more romantic gesture than flowers.

 Third-party punishment involves another person, X, who observes the interaction between A and B. While X is unaffected by the outcome of the interaction itself, they are then given the option to give up some payment of their own to reduce the payment of A or B. Essentially, person X would be Batman swinging in to deliver some street justice, even if X's parents may not have been murdered in front of their eyes.

Classic economic rationality would predict that no one should ever give up any of their payment to punish another player if the game is a one-shot deal. Paying to punish other players would only ensure that the punisher walks away with less money than they would otherwise have. Of course, we do see punishment in these games from both second- and third-parties when the option is available (though second-parties punish far more than third-parties). The reasons second-party punishment evolved don't appear terribly mysterious: games like these are rarely one-shot deals in real life, and punishment sends a clear signal that one is not to be shortchanged, encouraging future cooperation and avoiding future losses. The benefits to this in the long-term can overcome the short-term cost of the punishment, for if person A knows person B is unable or unwilling to punish transgressions, person A would be able to continuously take advantage of B. If I know that you won't waste your time pursuing me for burning your car down - since it won't bring your car back - there's nothing to dissuade me from burning it a second or tenth time.

Third-party punishment poses a bit more of a puzzle, which brings us to a paper by Fehr and Fischbacher (2004), who appear to be arguing in favor of group selection (at the very least, they don't seem to find the idea implausible, despite it being just that). Since third parties aren't directly affected by the behavior of the others, there's less of a reason to get involved. Being Batman might seem glamorous, but I doubt many people would be willing to invest that much time and money - while incurring huge risks to their own life - to anonymously deliver a benefit to a stranger. One of the possible ways third-party punishment could have benefited the punisher, as the authors note, is through reputational benefits: person X punishes person A for behaving unfairly, signaling to others that X is a cooperator and a friend - who also shouldn't be trifled with - and that kindness toward X would be reciprocated in turn. In an attempt to control for these factors, Fehr and Fischbacher ran some one-shot economic games where all players were anonymous and there was no possibility of reciprocation. The authors seemed to imply that any punishment in these anonymous cases is ultimately driven by something other than reputational self-interest.

                                                                       "We just had everyone wear one of these. Problem solved".

The real question is whether playing these games in an anonymous, one-shot fashion actually controls for these factors or removes them from consideration. I doubt that it fully does, and here's an example of why: Alexander and Fisher (2003) surveyed men and women about their sexual history in anonymous and (potentially) non-anonymous conditions. Men reported an average of 3.7 partners in the non-anonymous condition and 4.2 in the anonymous one; women reported averages of 2.6 and 3.4, respectively. So there's some evidence that the anonymous conditions do help.

However, there was also a third condition in which the participants were hooked up to a fake lie detector machine - though 'real' lie detector machines don't actually detect lies either - and here the numbers changed again: 4.0 for men and 4.4 for women. While men's answers weren't particularly different across the three conditions, women's reported number of sexual partners rose from 2.6 to 3.4 to 4.4. This difference may not have reached statistical significance, but the pattern is unmistakable.

                                  On paper, she assured us that she found him sexy, and said her decision had nothing to do with his money. Good enough for me.

What I'm getting at is that it should not just be taken for granted that telling someone they're in an anonymous condition automatically makes people's psychology behave as if no one is watching, nor does it suggest that moral sentiments could have arisen via group selection (it's my intuition that truly anonymous one-shot conditions in our evolutionary history were probably rarely encountered, especially as far as punishment was concerned). Consider a few other examples: people don't enjoy eating fudge in the shape of dog shit, drinking juice that has been in contact with a sterilized cockroach, holding rubber vomit in their mouth, eating soup from a never-used bedpan, or using sugar from a glass labeled "cyanide", even if they labeled it themselves (Rozin, Millman, & Nemeroff 1986). Even though these people "know" that there's no real reason to be disgusted by rubber, metal, fudge, or a label, their psychology still (partly) functions as if there was one.

I'll leave you with one final example of how explicitly "knowing" something (i.e. this survey is anonymous; the sugar really isn't cyanide) can alter the functioning of your psychology in some cases, to some degree, but not in all cases.

                    
If I tell you you're supposed to see a dalmatian in the left-hand picture, you'll quickly see it and never be able to look at that picture again without automatically seeing the dog. If I told you that the squares labeled A and B in the right-hand picture are actually the same color, you'd probably not believe me at first. Then, when you cover up all of that picture except A and B and find out that they actually are the same color, you'll realize why people mistake me for Criss Angel from time to time. Also, when you are looking at the whole picture, you'll never be able to see A and B as the same color, because that explicit knowledge doesn't always filter down into other perceptual systems.

References: Alexander, M.G. & Fisher, T.D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. The Journal of Sex Research, 40, 27-35

Fehr, E. & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25, 63-87

Rozin, P., Millman, L., & Nemeroff, C. (1986). Operation of the laws of sympathetic magic in disgust and other domains. Journal of Personality and Social Psychology, 50, 703-712