The study of the world around us is dubbed science, and to pursue it you first need to purchase several large, expensive doohickeys for conducting experiments, hire scientists, and the like. The study of the theoretical world is dubbed mathematics, and you need only paper, a pencil, and a trashcan within reasonable distance. In comparison, the study of nothing in particular may be dubbed "philosophy", and all you have to do is keep talking. It may be noted that working on philosophical matters is generally very cheap to do. That may very well be because it isn't worth a penny to anyone anyway. Academic xenophobe that I am, I don't care much for philosophers. One reason I don't much care for them is that they, as a group, have a habit of getting stuck in arguments that remain unresolved - or are even unresolvable - for centuries, over topics of dubious importance. One of those topics that tends to evade both clear thinking and relevance is free will. The definition of the term itself rarely even gets nailed down, meaning most of these arguments are probably not even being had about the same topic.
There's been a lot of hand-wringing over whether determinism precludes moral responsibility. Today, I'm going to briefly set foot in the world of philosophy to demonstrate why this debate has a simple answer, and hopefully, once we reach that point, we can start making some actual progress in understanding human moral psychology.
An artist's depiction of a philosopher; notice how it does nothing important and goes nowhere.
Let's take a completely deterministic universe in which the movement and action of every single bit of matter and energy is perfectly predictable. Living organisms would be no exception here; you could predict every single behavior of an organism from before the moment of its conception until the moment of its death. Every thought, every feeling, every movement of every part of its cellular machinery. People seem to worry that in this universe we would be unable to justifiably condemn people for their actions, as they would not be seen as having a "choice" in the matter (choice is another one of those very blurry concepts, but we'll forget about what it's supposed to mean here; just use your best guess). What most people fail to realize about this example is that it in no way precludes making moral judgments ("he ought not to have done that") or holding people responsible for their actions. "But how can we justify holding someone responsible for a predetermined action?" I already hear someone missing the point objecting. The answer here is simple: you wouldn't need to justify those moral judgments or that punishment in some objective sense any more than a killer would need to justify why they killed.
If the killer was predetermined to kill, others were also predetermined to feel moral outrage at that killing; nothing about determinism precludes feelings, no matter their content. Additionally, those who feel that moral outrage are determined to try to convince others of the content of that outrage, which they may be successful at doing. From there, people are likewise determined to attempt to punish the person who committed the crime, and so on. Suffice it to say, a deterministic world would look no different from the world we currently inhabit, and determinism and moral responsibility get to live hand-in-hand. However, I already feel dirty enough playing philosopher that I don't feel the need to continue with this example.
I feel even dirtier than I did last Christmas. The reaction of those children - and that jury - was priceless though...
After successfully resolving centuries of philosophical debate in a matter of minutes (you're welcome), it's time to think about what this example can teach us about our moral psychology. Refreshingly, we will be stepping out of the realm of philosophy and into that of science for this part. What I think is the most important lesson to take away from this example concerns the widespread intuition that if we can fully explain a behavior, we must also condone it (or, at the very least, not condemn others for it). Evolutionary psychology tends to get its fair share of scorn for even proposing that certain traits - typically politically unpalatable ones, such as sex differences or violence - are adaptations, and that ire typically comes in the form of, "well, you're just trying to justify [spousal abuse/rape/sexism/etc.] by explaining it". It's also worth noting that those claims will be tossed at evolutionary psychologists even if those same psychologists say, "We aren't trying to justify anything".
I cited a figure a while back about how 86% of people viewed determinism as incompatible with moral responsibility, so this sentiment appears to be a rather popular one. Two papers that have recently come across my desk expand on this point a little further. The first comes from Miller, Gordon, and Buddie (1999), who demonstrated essentially the effect I mentioned above. Subjects were presented with a vignette involving a perpetrator causing some harm and were asked either to explain that behavior first and then react to it, or to react to it first and then explain it. Those who explained the behavior first took a significantly more forgiving and condoning stance toward the perpetrator. Additionally, when other observers read these explanations, the observers rated the attitudes of those doing the explaining as even more condoning of the harm than the explainers themselves had predicted. So while the explainers were slightly more condoning of the perpetrator's behavior, observers who read those explanations thought they were more condoning still. Sounds like the perfect mix for moral outrage.
"We'd like to respectfully disagree with your well-articulated position, and, if that fails, burn you and your books."
Miller et al. (1999) went on to examine how different types of explanations might affect the explaining-condoning link. The authors suggest that explanations that portray the perpetrator as low in personal responsibility (it was the situation that made him do it) would be viewed as more condoning than those referencing the perpetrator's disposition (he acted that way because he's a cruel son-of-a-bitch). Toward this end, they presented subjects with the results of two hypothetical experiments: in one, the presence of a mirror dramatically affected the rate of cheating (5% cheating in the mirror condition, 90% cheating in the no-mirror condition); in the other, it had no effect (50% cheating in both conditions). The first experiment served to emphasize the situation as the important explanatory factor; the second, to de-emphasize it.
The results here indicated that those who read the results stating that the situation held a lot of influence were more condoning of the cheating behavior than those who read the dispositional results. What was more interesting, however, is that these same participants also rated their own judgments of the cheater's behavior as significantly more negative than what they thought the hypothetical researchers' judgments were. The subjects seemed to think the researchers were giving the perpetrators a pass.
The second paper comes from Greene and Cahill (2011). Here, the researchers tested, essentially, the suggestion that neuroimaging evidence might overwhelm participants' judgment with flashy pictures and leave them unable to consider the evidence of the case. In this experiment, participants were given the facts of a criminal case (either a low-severity or a high-severity case) under one of three conditions: (1) the defendant was labeled as psychotic by an expert; (2) in addition, results of neurological tests that found deficiencies consistent with damage to the frontal area of the defendant's brain were presented; and (3) in addition to that, colorful brain scans documenting that damage were presented.
The results of this study demonstrated that participants were about equally likely to sentence the defendant to death across all three conditions when the defendant was deemed low in future dangerousness. When the defendant was deemed high in future dangerousness, however, he was overwhelmingly more likely to be sentenced to death, but only in condition (1). In conditions (2) and (3), he was far, far less likely to be sentenced to death (a drop from roughly a 65% chance of a death sentence down to a low of near 15%, no different from the low-dangerousness group). Further, in conditions (2) and (3), the mock jurors rated the defendant as more remorseful and less in control of his behavior.
Unable to control his behavior and highly likely to be violent again? Sounds like the kind of guy we'd want to keep hanging around.
These two papers provide a complementary set of results, demonstrating some of the effects that explanations can have on both our sense of moral responsibility and our perception of the explainer. What those two papers don't do, however, is explain those effects in any satisfying manner. I feel there are several interesting predictions to be made here, but placing these results into their proper theoretical context will be a job for another day. In the meantime, I'm going to go shower until that sullied feeling philosophy brings on goes away.
(One thought to consider is that perhaps terms like "free will" and "choice" are (sort of) intentionally nebulous, for to define them concretely and explain how they work would - like Kryptonite to Superman - sap them of their ability to imbue moral responsibility.)
References: Greene, E., & Cahill, B.S. (2011). Effects of neuroimaging evidence on mock juror decision making. Behavioral Sciences & the Law. DOI: 10.1002/bsl.1993
Miller, A.G., Gordon, A.K., & Buddie, A.M. (1999). Accounting for evil and cruelty: Is to explain to condone? Personality and Social Psychology Review, 3, 254-268.