Tuesday, March 27, 2012

Communication As Persuasion

Can you even win debates? I’ve never heard someone go, "My opponent makes a ton of sense; I’m out." -Daniel Tosh
In my younger days, I lost a few years of my life to online gaming. EverQuest was the culprit. Now, don't get me wrong, those years were perhaps some of the happiest in my life. Having something fun to do at all hours of the day, with thousands of people to do it with, has that effect. Those years just weren't exactly productive. While I was thoroughly entertained, when the gaming was over I didn't have anything to show for it. A few years after my gaming phase, I went through another one: chronic internet debating. Much like online gaming, it was oddly addictive and left me with nothing to show for it when it all ended. While I liked to try to justify it to myself - that I was learning a lot from the process, refining my thought process and arguments, and being a good intellectual - I can say with 72% certainty that I had wasted my time again, and this time I wasn't even having as much fun doing it. Barring a few instances of cleaning up grammar, I'm fairly certain no one changed my opinion about a single thing, and I changed about as many in return. You'd think, with all the collective hours my fellow debaters and I had logged, that we might have been able to come to an agreement about something. We were all reasonable people seeking the truth, after all.

                                                                                        Just like this reasonable fellow.

Yet, despite that positive and affirming assumption, debate after debate devolved into someone - or everyone - throwing their hands up in frustration, accusing the other side of being intentionally ignorant, too biased, intellectually dishonest, unreasonable, liars, stupid, and otherwise horrible monsters (or, as I like to call it, suggesting your opponent is a human). Those characteristics must have been the reason the other side of the debate didn't accept that our side was the right side, because our side was, of course, objectively right. Debates are full of logical fallacies beyond those personal attacks - appeals to authority, straw men, red herrings, and question begging, to name a few - yet somehow it only ever seemed like the other side was committing them. People relentlessly dragged issues into debates that had no bearing on the outcome, and they always seemed to apply their criticisms selectively.

Take a previously-highlighted example from Amanda Marcotte: when discussing the hand-grip literature on resisting sexual assault, she complained that, "most of the studies were conducted on small, homogeneous groups of women, using subjective measurements." Pretty harsh words for a study of 232 college women between the ages of 18 and 35. When discussing another study that found results Amanda liked - a negligible difference in average humor ratings between men and women - she raised no concerns about "...small, homogeneous groups of women, using subjective measurements." That she didn't is hypocritical, considering the humor study had only 32 subjects (16 men and 16 women, presumably undergraduates from some college) and used caption writing as the only measure of humor. So what gives: does Amanda care about the number of subjects when assessing the results or not?

The answer, I feel, is, "Yes, but only insofar as it's useful to whatever point she's trying to make." The goal in debates - and in communication more generally - is not logical consistency; it's persuasion. If consistency (or being accurate) gets in the way of persuasion, the former can easily be jettisoned for the latter. While being right, in some objective sense, is one way of persuading others, being right will not always make your argument the more persuasive one; the resistance to evolutionary theory has demonstrated as much. Make no mistake, this behavior is not limited to Amanda or the people you happen to disagree with; research has shown that this is a behavior pretty much everyone takes part in at some point, and that includes you*. A second mistake I'd urge you not to make is to see this inconsistency as some kind of flaw in our reasoning abilities. There are some persuasive reasons to see inconsistency as reasoning working precisely the way it was designed to, annoying as it might be to deal with.

                                                                 Much like my design for an airbag that deploys when you start the car.
   
As Mercier and Sperber (2011) point out, the question, "Why do humans reason?" is often left unexamined. The answer these authors provide is that our reasoning ability evolved primarily for an argumentative context: producing arguments to persuade others and evaluating the arguments others present. It's uncontroversial that communication between individuals can be massively beneficial. Information which can be difficult or time-consuming to acquire firsthand can be imparted quickly and almost without effort to others. If you discovered how to complete some task successfully - perhaps how to build a tool or catch fish more effectively - through a trial-and-error process, communicating that information to others allows them to avoid the need to undergo that same process themselves. Accordingly, trading information can be wildly profitable for all parties involved; everyone gets to save time and energy. However, while communication can offer large benefits, we also need to contend with the constant risk of misinformation. If I tell you that your friend is plotting to kill you, I'd have done you a great service if I was telling the truth; if the information I provided was either mistaken or fabricated, you'd have been better off ignoring me. In order to achieve these two major goals - knowing how to persuade others and when to be persuaded yourself - there's a certain trust barrier in communication that needs to be overcome.

This is where Mercier and Sperber say our reasoning ability comes in: by letting us give others convincing justifications to accept our communications, and by helping us better detect and avoid the misinformation of others, our reasoning abilities allow for more effective and useful communication. Absent any leviathan to enforce honesty, our reasoning abilities evolved to fill that niche. It is worth comparing this perspective to another: the idea that reasoning evolved as some general ability to improve or refine our knowledge across the board. In this scenario, our reasoning abilities more closely resemble a domain-general truth finder. If this latter perspective were true, we should expect no improvement in performance on reasoning tasks contingent on whether or not they are placed in an argumentative context. That is not what we observe, though. Poor performance on a number of abstract reasoning problems, such as the Wason Selection Task, is markedly improved when those same problems are placed in an argumentative context.
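For readers who haven't run into it, the Wason Selection Task asks which cards you must flip over to test a conditional rule, and most people tested alone get it wrong. Below is a minimal sketch of the standard version and its logically correct answer; this is my own illustration for clarity, not something taken from Mercier and Sperber:

```python
# Rule to test: "If a card shows a vowel on one side, it has an even number on the other."
# Four cards are visible: 'E', 'K', '4', '7'. Only cards whose hidden side could
# falsify the rule need to be flipped: the vowel ('E') and the odd number ('7').

def must_flip(visible):
    if visible.isalpha():
        return visible in 'AEIOU'          # a vowel might hide an odd number
    return int(visible) % 2 != 0           # an odd number might hide a vowel

cards = ['E', 'K', '4', '7']
print([c for c in cards if must_flip(c)])  # ['E', '7']
```

The typical solo mistake is flipping the '4' (which can't falsify the rule) while ignoring the '7' (which can); when people argue the problem out in groups, the correct answer tends to spread.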

While truth tends to win when people argue over problems like the Wason Selection Task, let's not get big-headed about it and insist that this implies our reasoning abilities will always push towards truth. It's important to note how divorced from reality situations like that one are: it's not often you find people with a mutual interest in truth arguing over a matter they have no personal stake in, one that also has a clearly defined and objective solution. While there's no doubt that reasoning can sometimes lead people to make better choices, it would be a mistake to assume that's the primary function of the ability, as reasoning frequently doesn't seem to lead people towards that destination. To the extent that reasoning tends to push us towards correct, or improved, answers, this is probably due to correct answers being easier to justify than incorrect ones.

As the Amanda Marcotte example demonstrated, when assessing an argument, often "[people] are not trying to form an opinion: They already have one. Their goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness...People who have an opinion to defend don't really evaluate the arguments of their interlocutors in search for genuine information but rather consider them from the start as counterarguments to be rebutted." This behavior of assessing information by looking for arguments that support one's own views and rebut the views of others is known as motivated reasoning. If reasoning served some general knowledge-refining function, this would be a strange behavior indeed. It seems people often end up strengthening not their knowledge about the world, but rather their existing opinions, a conclusion that fits nicely with the argumentative theory. While opinions that cannot be sustained eventually tend to get tossed aside, as reality does impose some constraints (Kunda, 1990), on fuzzier matters for which there aren't clear, objective answers - like morality - arguments have gotten bogged down for millennia.

                                       I'm not hearing any more objections to the proposal that "might makes right". Looks like that debate has been resolved.

Further still, the argumentative theory can explain a number of findings that economists tend to find odd. If you have a choice between two products that are equally desirable, adding a third and universally less-desirable option should not have any effect on your choice. For instance, let's say you have a choice between $5 today and $6 tomorrow; adding an additional option of $5 tomorrow to the mix shouldn't have any effect, according to standard economic rationality, because it's worse than either existing option. Like many assumptions of economics, this one turns out not to hold up. If you add that additional option, you'll find people start picking the $5-today option more than they previously did. Why? Because it gives them a clear justification for their decision, as if they were anticipating having to defend it. While $5 today and $6 tomorrow might be equally attractive, $5 today is certainly more attractive than $5 tomorrow, making the $5-today decision more justifiable. Our reasoning abilities will frequently point us towards decisions that are more justifiable, even if those decisions end up not making us more satisfied.
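To make the "worse than either option" claim concrete, here is a minimal sketch of the dominance check behind the standard-rationality prediction. It's my own toy illustration, using only the options and numbers from the example above; it captures why the added option should be irrelevant, not the observed shift towards $5 today:

```python
# Options are (dollars, days until payment); more money and less delay are both better.

def dominates(a, b):
    """True if a is at least as good as b on both dimensions and strictly better on at least one."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

five_today, six_tomorrow, five_tomorrow = (5, 0), (6, 1), (5, 1)

print(dominates(five_today, six_tomorrow))     # False: neither original option dominates the other
print(dominates(six_tomorrow, five_today))     # False
print(dominates(five_today, five_tomorrow))    # True: the new option is worse than $5 today...
print(dominates(six_tomorrow, five_tomorrow))  # True: ...and worse than $6 tomorrow
```

Since the new option loses to both of the originals, a strictly rational chooser should ignore it; the interesting finding is that real choosers don't.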

Previous conceptualizations of the function of reasoning have missed the mark and, as a result, have been trying to jam a series of square pegs into the same round hole. They have been left unable to explain vast swaths of human behavior, so researchers simply labeled those behaviors that didn't fit as biases, neglects, blind spots, errors, or fallacies, without ever succeeding in figuring out why they existed - why our reasoning abilities often seemed so poorly designed for reasoning. By placing all these previously anomalous findings under a proper theoretical lens, they suddenly start to make a lot more sense. While the people you find yourself arguing with may still seem like total morons, this theory may at least help you gain some insight into why they're acting so intolerably.

*As a rule, it doesn't apply to me, so if you find yourself disagreeing with me, you're going to want to rethink your position. Sometimes life's just unfair that way.

References: Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.

Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.

Friday, March 23, 2012

I Do Not Bite My Thumb At You, Sir, But I Do Bite My Thumb

I'm not a parent, but I can only imagine being one is a largely horrible affair for all parties involved. While I am currently a fantastically successful and good-looking man, I also represent a multiple-decade-old ball of need and demands that has more than likely ruined many years of life for my parents. So, my bad there, I suppose; that one's on me. Out of the many reasons I've been able to gather as to why being a parent is generally a pain in the ass, one is that children are notoriously finicky eaters. Frustrated with their children's lack of desire to eat this or that, one line many parents will resort to is, in one form or another, "Finish what's on your plate. You're lucky to even have food; there are people starving in the world who wish they were you right now." I'm sure those hungry kids of the world can take some solace in knowing that the food wasn't wasted; it was forced on an unwilling recipient. Much better.

Any child worth their salt, when faced with such an argument from their parents, would respond along the following lines: "It doesn't matter whether I eat the food or throw it away; either way, it won't have any impact on the starving children". They'd be right. 

                                                               "Well, so long as you made them eat all that extra food they didn't want..."

I've been speculating lately about what examples like this one can tell us about the functioning of our moral psychology. Here's another: Tom Cruise and Katie Holmes were reported to have spent about $130,000 on their daughter's Christmas presents. The comments section of the article reveals that many people seem to find such behavior downright morally disgusting, if not outright evil, complete with lots of pictures of thoroughly malnourished children. Not only do Cruise and Holmes get painted as awful people for not using that money for other purposes, but there is also rampant speculation as to how their child is going to turn out in the future because of it. Few of the predictions appear optimistic; most speculate that she's going to turn into an awful person. Why does the story take on that tone, rather than one of, say, parental affection, or of the value of personal freedom to spend money however one sees fit? You know, individual rights, and all that.

These examples make a very important point: if you're trying to convince someone to do something - anything, it doesn't matter what - it helps to have a victim on your side of the debate; someone who is being harmed by the action in question. Having one or more victims allows you to appeal to the moral psychology of others; it allows you to gain the support of latent coalitions in your social environment that can help achieve your ends. However, the term "victim" - much like the terms "fairness" or "race" - has a great deal of ambiguity to it. Victimhood is not an "out-there" type of variable capable of being easily measured or observed, like height or eye color. Victimhood is something that needs to be inferred. What cues people pick up on or make use of when assessing and generating such claims has gone largely unexamined.

Consider the victims people outlined in the Tom and Katie example: first, they are hurting their daughter directly by spoiling her, as she will be unhappy in some way later in life because of it. Perhaps she won't value what she has very much because it comes easily. By extension, they're also indirectly hurting the people their daughter will come into contact with later in life, as she will turn out to be a nasty, entitled ass because of the treatment she received from her parents. Finally, Tom and Katie are hurting the starving children of the world by choosing to spend some of their money in ways that aren't immediately alleviating their plight; they are hurting these children by not helping them. The issue comes to be viewed as their responsibility in some way because of their status and wealth, as if they are expected to do something about it. By acting as they are, they are shirking some perceived social debt they owe to others, like not repaying a loan.

                          Maybe he saved some lives by stopping that horrible new virus in MI-2, but he could have saved more lives by fighting hunger in Africa.

As the initial example of the finicky child demonstrates, victimhood connections are also open to being questioned and dismissed. By spending money on their daughter, Tom and Katie do not intend to do any harm; quite the opposite. Perhaps their daughter will even grow up to love her own children just as deeply. After all, they're trying to make their daughter happy, which most people wouldn't class as a particularly heinous act. Any argument that applies to their lavish spending would apply with equal force to any non-vital spending. Almost all people in first-world countries are capable of lowering their own standard of living and comfort to save at least one starving child from death. The Christmas spending itself also benefits others: the businesses being patronized, the employees of those businesses, the families of those employees, the government collecting taxes on it, and so on. Further, Tom and Katie are not, to the best of my knowledge, the cause of world hunger or the force maintaining it. I imagine you would be offended were you approached by a homeless man who insisted his being homeless was your responsibility and that you owed him help to make up for your lavish lifestyle.

What all this demonstrates is that, in the service of promoting their views, people appear highly motivated to find victims. These victims might come from across the globe or live next door; they might live in the present, the future, or even the past; they might be victims in ways that in no way relate to the current situation; they might be victims without a face, like "society". In fact, the victims may be the very people someone is trying to help. In the service of denying accusations of immorality, people are also highly motivated to deny victims. The shooting of Trayvon Martin was recently blamed, in part, on how he was dressed - because he was wearing a hoodie. A similar phenomenon is seen when people suggest women who get raped were partly responsible for the act due to a provocative style of dress. Those who are seen as causing their own misfortunes are rarely given much sympathy.

This back-and-forth between naming victims, assessing victimhood, and denying it opens the way for what I feel are some sophisticated strategies on the parts of agents, patients, and third parties. There's a very valuable resource under contention - the latent coalitions of the social world, and their capacity for punishment - and successfully harnessing that resource depends on manipulating fuzzy perceptions of harm and responsibility. It should go without saying that a victim also needs a victimizer, and you can bet people are just as motivated to perceive victimizers in similar fashion. Was that man biting his thumb at you, sir, or was he just biting his thumb? Will he bite his thumb at you in the future if you don't act to stop him now?

                                                                                  The 16th century equivalent of "suck it". 

As the social world we live in is a dynamic place, people must be prepared to assess these claims when made by others, defend against the claims when leveled against themselves, and generate and level these claims against others. As contexts change, we should be able to observe certain biases in information processing becoming more active or dormant within subjects. The same person who claims that Tom Cruise is a horrible person for spoiling his daughter will happily justify buying their own child a new iPad for Christmas. The child who asks for a new iPad but doesn't get it will complain vocally about how they're being mistreated, while third parties judge these victimhood claims as lacking before going off to complain about how they're unappreciated at work by their asshole boss.

What makes a victim a better victim? What about a perpetrator? What are the costs and benefits to being seen as one or the other, and how does that interact with other factors, such as gender? How do these expectations of who ought to do what get formed? How do relationships and group affiliations play into the generation and assessment of these perceptions? There are many such questions currently lacking an answer.

Like water to fish or air to humans, our abilities in this realm often go unnoticed or unappreciated, despite our being constantly surrounded by them. While we may notice the inconsistencies and hypocrisies of others as their place in the social world changes, we rarely, if ever, notice them in ourselves. Noticing these habits in yourself would do you few favors if your goal is to persuade others. Besides, biases are those things that other people have. They lack your awesome powers of insight and understanding. Who are they to question your perceptions of the social world?

Friday, March 16, 2012

Why Domain General Does Not Equal Plasticity

"It is because of, and not despite, this specificity of inherent structure that the output of computational systems is so sensitively contingent on environmental inputs. It is just this sensitive contingency to subtitles of environmental variation that make a narrow intractability of outcomes unlikely" - Tooby and Cosmides 
In my last post, I mentioned that Stanton Peele directed the criticism of genetic determinism at evolutionary psychology. For those of you who didn't read the last entry, the reason he did this is because he's stupid and seems to have issues engaging with source material. This mistake - conflating evolutionary psychology with genetic determinism - is unnervingly common among critics who also seem to have issues engaging with source material. The mistake itself tends to take the form of pointing out that some behavior is variable, either across time, context, or people, and then saying, "therefore, genes (or biology) can't be a causal factor in determining it". For example, if people are nice sometimes and mean at others, it can't be the genes; genes can only make people nice or mean at all times, not contingently. This means there must be something in the environment - like the culture - that makes people differ in their behavior the way they do, and the cognitive mechanisms that generate this behavior must be general-purpose. In other words, rather than resembling a Swiss Army knife - a series of tools with specified functions - the mind more closely resembles an unformed lump of clay, ready to adapt to whatever it encounters.

                                                    Unformed clay is known for being excellent at "solving problems" by "doing useful things".

There are two claims found in this misguided criticism of evolutionary psychology. The first is that environments matter when it comes to development, behavior, or anything really, which they clearly do. This is something that's been noted clearly and repeatedly by every professional evolutionary psychologist I've come across. The second claim is that to call a trait "genetic", or to note that our genes play a role in determining behavior, implies inflexibility across environmental contexts. This second claim is, of course, nonsense. The opposite of "genetic" is not "environmental" or "flexible" for a simple reason: organisms need to be adapted to do anything, flexibly or otherwise. (Note: that does not mean an organism was adapted to do everything it does; the two propositions are quite different.)

A quick example should make this point clear: consider my experiments with cats. Not many people know this about me, but I'm a big fan of the field of aviation. While up in the air, I've been known to throw cats out of the airplane. You know, for things like science and grant money. My tests have shown the following pattern of results: cats suck at flying. No matter how many times I've run the experiment - and believe me, I've run it many, many times, just to be sure - the results are always the same. How should I interpret the fact that I'm quickly running out of cats?
 
                                                                       Discussion: The previous results were replicated purrrrfectly.

One way would be to suggest that cats would be able to fly were they not constrained against flight by their genes; in other words, the cat's behavior would be more "domain general" - even capable of flight - if genetics played less of a role in determining how it acted and developed. Another, more sane, route would be to suggest that cats were never adapted for flight in the first place. They can't fly because their genes contain no programs that allow for it. Maybe that example sounds silly, but it does well to demonstrate a valuable point: adaptations do not make an organism's behavior less flexible; they make it more flexible. In fact, adaptations are what allow an organism to behave at all in the first place; organisms that are not adapted to behave in certain ways won't behave at all.

So what about domain-general abilities, like learning? For the same reasons, simply chalking some behavior up to "learning" or "culture" is often an inadequate explanation by itself. Learning is not something that just happens, in the same way that flight doesn't just happen; the ability to learn is itself an adaptation. It should come as no surprise, then, that some organisms are relatively prone to learning some things and relatively resistant to learning others. As Dawkins once noted, there are many more ways of being dead than being alive. On a similar note, there are many more ways of learning being useless or harmful than there are of learning being helpful. If an organism learns about the wrong subjects, it wastes time and energy; if an organism learns the wrong thing about the right subject, or if the organism fails to learn the right thing quickly enough, the results would often be deadly.

Cook and Mineka (1989) ran a series of experiments looking at how Rhesus monkeys acquire their fear responses. Lab-raised monkeys with no prior exposure to snakes or crocodiles showed no fear response to toy models of the two potential threats. The researchers then attempted to condition fear into these animals vicariously by showing them a video of another monkey reacting fearfully to either a snake or crocodile model. As expected, after watching the fearful reaction of another monkey, the lab-raised monkeys themselves developed a fear response to the toys. They quickly learned to be afraid when observing that fear reaction in another individual. What was particularly interesting about these studies is that the researchers tried the same thing, but substituted either a brightly-colored flower or a rabbit in place of the snake or crocodile. In these trials, the monkeys did not acquire a fear response to flowers or rabbits. In other words, the monkeys were biologically prepared to quickly learn fear towards some objects (historically deadly ones), but not others.

                                 Just remember, they're more afraid of you than you are of them. Also, remember fear can make one irritable and defensive.  

The results of this study make two very important points. The first is that, as I just mentioned, learning is not a completely open-ended process. We're prepared to learn some things (like certain fears, taste aversions, or language) relatively automatically, given the proper environmental stimulation. I can't stress the word "proper" there enough. For instance, there are also some learning associations that organisms are unable to make: rats will readily learn to avoid a taste paired with nausea, but not a light or sound paired with nausea, though they will readily associate lights and sounds with shocks.

The second point is that these results (should) put to bed the mistaken notion that biology and environment are two competing sources of explanation; they are not. Genes do not make an organism less flexible, and environments do not make it more flexible. Learning is not something to be contrasted with biology; learning is biology. This is a point that is repeatedly stressed in introductory-level classes on evolutionary psychology, along with every major work within the field. Anyone who is still making this error in their criticisms is demonstrating a profound lack of expertise, and should be avoided.

References: Cook, M. & Mineka, S. (1989). Observational conditioning of fear to fear-relevant versus fear-irrelevant stimuli in Rhesus monkeys. Journal of Abnormal Psychology, 98, 448-459.
 

Tuesday, March 6, 2012

Is It A Paradox, Or Are You Stupid?

Let's say you're an intellectual type, like I am. As an intellectual type, you'd probably enjoy spending a good deal of time researching questions of limited scope and even more limited importance. There's a high probability that your work will be largely ignored, flawed in some major way, or that your results will be interpreted incorrectly by yourself or others. While you may not be the under-appreciated genius that you think you are, you may still be lucky enough to have been paid to do your poor work. Speaking of poor work that someone is getting paid for, here's a recent piece by Stanton Peele, over at Psychology Today.

                                                      I'm fairly certain he wrote his dissertation about his own smug sense of self-satisfaction.

The article itself is your fairly standard piece of moral outrage about how rich and/or powerful people are cheaters who should not be trusted. According to Stanton, research suggests that people who perceive themselves to be high in power are likely to be "deficient in empathy". This claim strikes me as a little fishy, especially coming from someone who feels so self-important that he links to other pieces he's written on five separate occasions in an article no longer than a few sentences. Such a display of ego suggests that Stanton thinks he's particularly high in power, and thus calls into question his empathy and honesty. It also suggests to me he's a sock-sniffer.

The title of his piece, "Cheaters Always Win - The Paradox of Getting Ahead in America", along with Stanton's idea that powerful people are "deficient in empathy", both work well to display the bias in his thinking. In the case of his title, there's only a paradox if one assumes that people who cheat would not win. We might not like it when someone wins because they aren't playing by the rules, but I don't see any reason to think a (successful) cheater wouldn't win; they cheat because it tends to put them at an advantage. In the case of his empathy suggestion, Stanton seems to assume there is some level of empathy that people high in power lack but should otherwise have. However, one could just as easily phrase the suggestion in the opposite fashion: people low in power have too much empathy. How that correlation gets framed says more, I feel, about the preconceptions of the person framing it than about the correlation itself.

That brings us to the matter of whether the claim is true. Are powerful people universally deficient in empathy in some major way? ("Across-the-board", as Peele puts it.) According to one of the papers Peele mentions, people who rated themselves high in their sense of power were less compassionate and experienced less distress when listening to a highly distressed speaker, relative to those who ranked themselves low in power (van Kleef et al., 2008). See? The powerful people really are "turning a blind eye to the suffering of others" (which is, in fact, the subtitle of the paper).

                                           The next model will also cover the ears in order to block out the sound from all the people begging for their lives.

It would seem that van Kleef et al. (2008) share Peele's affection for hyperbole. The difference they found in self-reports of compassion and empathic distress between those highest and lowest in power was about 0.65 on a scale of 1 to 7, or about 9%. We're not talking about some radical difference in kind, just one of mild degree. However, that difference only existed in the condition where the speaker was highly distressed; when the speaker was low in distress, the effect was reversed, with the higher-power subjects reporting more compassion and distress in response to the speaker's story, to the tune of about 0.5 on the same scale. What conclusion one wants to draw from that study about the compassion and distress of high- and low-power individuals depends on which part of the data one is looking at. If you're looking at a highly distressed speaker, those who feel higher in power are less compassionate and empathic; if you're looking at a speaker lower in distress, those who feel they have more power are more compassionate and empathic. That would imply Peele is either giving the data a selective reading, or that he never even bothered to read it.

A second paper Peele mentions, by Piff et al. (2012), found that self-reported social class was correlated with cheating behavior; the higher one's social class, the more likely one was to cheat or otherwise behave like an asshole across a few different scenarios. However, this effect of class disappeared when the researchers controlled for attitudes towards greed. As it turns out, people who think greed is just dandy tend to cheat a bit more, whether they're low or high status. Further, asking those low in social status to write about three benefits of greed also eliminated the effect; those from the lower social classes now behaved identically to those from the upper social class. It's almost as if these low-status individuals experienced sudden-onset empathy-deficiency syndrome.

I'm skimming over most of the details of these papers because there's another, more pressing, matter I'd like to deal with. The papers Peele uses are notably devoid of anything that could be considered a theory. They present a series of findings, but no framework to understand them in: Why might people who have some degree of social power be more or less prone to doing something? What costs and benefits accompany these actions for each party, and how might they change? Are the actions of those in the upper and lower classes deployed strategically? How might these strategies change as context does? This sounds like just the kind of research that could really be guided and assisted by embracing an evolutionary perspective.

                                                                 Sadly, some people don't take too kindly to our theoretical framework.

Unfortunately, because Peele is stupid, he has some harsh criticisms of genetic determinism that he directs at evolutionary psychologists:
"They also seem inconsistent with evolutionary psychologists who have been arguing lately (following "The Selfish Gene") that altruism is a species-inherited genetic destiny [emphasis, mine]....So, which is it? Do humans progress by being kinder to others and understanding the plights of the downtrodden, or do they do better to ignore these depressing stories?  Do societies advance by displaying empathy towards others outside of their borders and with different customs from their own?"
Such questions are about on the level of asking whether people are better off eating every waking moment or never eating again, followed by a self-congratulatory high-five. There are trade-offs to be made, and people aren't always going to be better served by doing one, and only one, thing at all times. This should not be a difficult point to understand, but, on the other hand, understanding things is clearly not Peele's strong suit; sock-sniffing is. I don't mind if, as he finishes writing his ramblings, Peele leans in to get a good whiff of his own odor after a long day battling positions held by legions of imaginary evolutionary psychologists. I just don't understand why Psychology Today feels the need to give his nonsense a platform.

References: Piff, P.K., Stancato, D.M., Cote, S., Mendoza-Denton, R., & Keltner, D. (2012). Higher social class predicts increased unethical behavior. Proceedings of the National Academy of Sciences.

van Kleef, G.A., Oveis, C., van der Lowe, H., LuoKogan, A., Goetz, J., & Keltner, D. (2008). Power, distress and compassion: Turning a blind eye to the suffering of others. Psychological Science, 19, 1315-1322.