Monday, June 6, 2016

Follow Up on John Mikhail and Universal Moral Grammar

Last month I discussed an article by John Mikhail who, borrowing from work done in linguistics, is investigating the universal moral grammar that appears to be at work when we make moral judgments. A fruitful source of examples is the judgments we make when confronted by Trolley Problems. Mikhail was kind enough to respond and point out that he draws a distinction between this innate universal moral grammar and the actual moral principles we apply in situations that call for moral judgment. The UMG helps us to form our moral principles, he says.

Mikhail explained all this in a longer, technical article in February 2007. A 20-minute interview with Mikhail on this topic can be found at Philosophy Bites (2011).

Evidence that Aspects of Our Moral Intuition Are Innate

Here is some of the evidence Mikhail points to for the proposition that humans possess an innate moral faculty that is analogous to our language faculty: 
  1. Children seem to come equipped to make moral judgments: (a) 3-4 year old children use intent or purpose to make moral distinctions; e.g. they distinguish between Johnny pushing Sally off the play structure and Johnny accidentally bumping into Sally so that she falls off the play structure. They also appreciate that the prohibition against pushing each other off the play structure is more serious than a prohibition against wearing pajamas to school; (b) 4-5 year olds have a sense that principals to wrongdoing should be punished more severely than accessories to wrongdoing; (c) 5-6 year olds use false factual belief, but not false moral belief, to exculpate; e.g. if Johnny walks off with Sally's violin because he thinks it's his violin, that is o.k.; if he walks off with Sally's violin because he doesn't think stealing is wrong, that is not o.k.
  2. All languages contain deontic concepts like "obligatory," "permissible," and "forbidden." 
  3. Key moral concepts are universal. For example, prohibitions against murder, rape, and other aggression appear in nearly all societies, as do legal distinctions based on causation, intention, and voluntariness of behavior. Mikhail says that just a few UMG distinctions can capture the distinctions drawn by criminal laws across different legal systems.
  4. There are identifiable brain regions involved in moral cognition (the same regions in all of us), and their activity can be observed with brain-scan technologies.

The Trolley Problems

Mikhail's 2007 article contains a good discussion of the trolley problems. These are famous philosophical thought experiments designed to test our moral intuitions. I've previously written about this HERE. People have thought up and tested many variations of these problems. Here are three:

Trolley: An out-of-control trolley is hurtling down the track. The train engineer recognizes with horror that he is about to run over five people standing on the track in front of him. But by pushing a button he can switch the tracks so the trolley will move onto a side spur. If he pushes the button, the trolley will miss the five people but kill one person standing on the spur. Confronted with this problem, 94% of respondents say it would be morally permissible for the engineer to push the button--thereby avoiding killing the five, but killing the one.

Bystander: Same scenario, but the engineer has no button. Instead, you are the switchman, and you happen to be standing at the switch as you observe the out-of-control trolley. Now the percentage of respondents who say it would be morally permissible to throw the switch drops to 90%.

Footbridge: Same scenario, but there is no button and no switchman. Instead you are standing on a footbridge next to a fat man with his back turned to you. You realize that if you push the fat man off the bridge, his body will slow the trolley enough to save the five people. Unfortunately, the fat man will die. In this scenario respondents recoil: only 10% say it would be o.k. to push the fat man.

Mikhail observes that these judgments are stable across demographically diverse populations, including children. But respondents have a hard time explaining their different responses to these scenarios. He suggests that two basic moral rules can explain the different judgments: a) the prohibition against intentional battery, and b) the principle of double effect.
The prohibition of intentional battery forbids purposefully or knowingly causing harmful or offensive contact with another individual ... without his or her consent. The principle of double effect ... holds that an otherwise prohibited action, such as battery, that has both good and bad effects may be permissible if the prohibited act itself is not directly intended, the good but not the bad effects are directly intended, the good effects outweigh the bad effects, and no morally preferable alternative is available.... The key distinction that explains the standard cases in the literature is that the agent commits one or more distinct batteries prior to and as a means of achieving his good end in the impermissible conditions (... Footbridge), whereas these violations are subsequent side effects in the permissible conditions (Trolley and Bystander). [Citations omitted, emphasis added]
That seems plausible: we have a strong inhibition against physically pushing someone off the footbridge, never mind that it will save five people; we have less inhibition about pushing a button or pulling a lever to switch the tracks (even though this has the effect of killing the one person on the spur a short while later). In the Footbridge example the act of pushing is itself a battery committed as a means, and it comes prior to the good result of saving five people; in the Trolley and Bystander examples, pushing a button or pulling a lever is in itself a neutral act--it just has the unintended side effect of killing the one person on the spur, and that side effect comes after the button is pushed or the lever pulled and the five people are saved.
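
To see the shape of that rule, here is a rough sketch in Python--my own toy illustration, not Mikhail's formalism--in which an action plan is just an ordered sequence of events, and a plan is flagged impermissible when a battery is committed as a means before the good effect arrives. The event labels and the permissible() test are invented for the example.

from dataclasses import dataclass

@dataclass
class Event:
    description: str
    is_battery: bool = False   # harmful contact with a person, without consent
    is_means: bool = False     # performed in order to bring about the good end
    good_effect: bool = False  # e.g. "five people are saved"

def permissible(plan: list[Event]) -> bool:
    """Impermissible if any battery occurs as a means prior to the good effect."""
    for i, event in enumerate(plan):
        if event.is_battery and event.is_means:
            # The good effect arrives only later in the sequence.
            if any(later.good_effect for later in plan[i + 1:]):
                return False
    return True

footbridge = [
    Event("push the man off the bridge", is_battery=True, is_means=True),
    Event("the man's body slows the trolley"),
    Event("five people are saved", good_effect=True),
]

bystander = [
    Event("throw the switch", is_means=True),  # a neutral act in itself
    Event("five people are saved", good_effect=True),
    Event("trolley strikes the one person on the spur", is_battery=True),  # side effect
]

print(permissible(footbridge))  # False -- battery as a means, prior to the good end
print(permissible(bystander))   # True  -- the harm is a subsequent side effect

The temporal ordering does all the work here: move the battery after the good effect in the sequence and the same test passes, which is exactly the Footbridge/Bystander contrast Mikhail describes.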

The moral grammar hypothesis holds that when people encounter trolley problems, they subconsciously compute structural descriptions by applying rules that pay attention to the sequence of events; our moral judgment will depend on how the problem is structured. Mikhail contends that structuring the Trolley problem in different ways can elicit different judgments. For example, subjects presented with description (A), "The driver killed the man by turning the train," tend to find the action justifiable, because in the underlying structure the turning of the train precedes the killing; on the other hand, with description (B), "The driver turned the train by killing the man," most people find the action unjustifiable, because in the underlying structure the killing precedes the turning of the train.

Now that is just random and weird--much like the 4% difference between the Trolley example (the engineer pushing the button) and the Bystander example (the switchman throwing the switch), it seems arbitrary. Of course, we already knew that our intuitions can be arbitrary. As I stated last summer, we feel great revulsion at "You can help win the war by lining up 60,000 women and children in front of a ditch in the woods and shooting them..." and relatively much less revulsion at "You can help win the war by pushing a button and dropping an atomic bomb on 60,000 women and children...." Our moral intuitions are not always the best guide to morality.

The Conversion Rules of UMG

Our moral snap judgments, or moral intuitions, are governed by rules that allow us to convert what we perceive into structural descriptions of moral actions, says Mikhail. The application of such conversion rules incorporates properties like ends, means, side effects, and prima facie wrongs, such as battery. And we do this, Mikhail conjectures, "even when the structural description of the action in question contains no direct evidence for these properties."
[Even in the absence of full data, we can engage in such calculations similar, for example,] to how we manage to recover a three-dimensional representation from a two-dimensional stimulus in the theory of vision.... Our structural operations include (i) identifying the various actions, (ii) placing them in an appropriate temporal order, (iii) decomposing them into their underlying causative and semantic structures, (iv) applying certain moral and logical principles to these underlying structures to generate representations of good and bad effects, (v) computing the intentional structure of the relevant acts and omissions by inferring (in the absence of conflicting evidence) that agents intend good effects and avoid bad ones, and (vi) deriving representations of morally salient acts like battery and situating them in the correct location of one’s act tree. Although each of these operations is relatively simple in its own right, the overall length, complexity and abstract nature of these computations, along with their rapid, intuitive and at least partially inaccessible character, lends support to the hypothesis that they depend on innate, domain-specific algorithms. [Citations omitted]
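
Operations (iv) through (vi) are the ones doing the interesting inferential work: deciding which effects the agent intended when the story never says. Here is a rough sketch, again my own invention with hypothetical names rather than anything from Mikhail's article, of that presumption: a bad effect that sits on the causal path to the good end gets treated as an intended means, while a bad effect from which nothing good flows gets treated as a merely foreseen side effect.

from dataclasses import dataclass, field

@dataclass
class Effect:
    description: str
    good: bool
    produces: list["Effect"] = field(default_factory=list)  # effects downstream of this one

def leads_to_good(effect: Effect) -> bool:
    """True if this effect, or anything downstream of it, is a good effect."""
    return effect.good or any(leads_to_good(e) for e in effect.produces)

def classify(effect: Effect) -> str:
    """Label an effect, absent any direct evidence of the agent's intent."""
    if effect.good:
        return "intended end"
    if leads_to_good(effect):
        return "intended means"       # bad, but on the causal path to the good end
    return "foreseen side effect"     # bad, and nothing good flows from it

five_saved = Effect("five people on the main track are saved", good=True)

# Footbridge: the harm to the man is what produces the saving.
man_struck = Effect("the man from the footbridge is struck by the trolley",
                    good=False, produces=[five_saved])

# Bystander: the harm to the one person is downstream of the diversion,
# and nothing good flows from it.
one_struck = Effect("the one person on the spur is struck", good=False)

print(classify(man_struck))  # intended means -> triggers the battery prohibition
print(classify(one_struck))  # foreseen side effect -> eligible for the double-effect test

Plugged back into the earlier sketch, these inferred labels are what would set the is_means flag--which is the sense in which the two rules work together to generate the Footbridge/Bystander asymmetry.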

Our Moral Intuitions Are Not the Be-All and End-All

That much of our morality is rooted in innate intuitions handed down to us through the gene pool, and that we may be doing subconscious calculations when confronted with moral questions, seems very plausible. Studying our moral intuitions--through psychological experiments like the trolley problems, through observing our brain activity as we make moral judgments, and by trying to tease out the underlying calculations that might be in play--seems like good and useful work.

Mikhail suggests that future research might pay attention to how these innate moral mechanisms bear on the development of legal systems, including contract rules, tort law, and criminal rules.

Mikhail also makes reference to researchers (Greene and colleagues) who argue that "moral intuitions result from the complex interplay of at least two distinct processes: domain-specific, social–emotional responses that are inherited from our primate ancestors, and a uniquely human capacity for ‘sophisticated abstract reasoning that can be applied to any subject matter.’" When it comes to morality, we need to be able to reason our way through problems even when our innate moral tools lead us astray or leave us in the lurch. Where do we look for such tools? In our traditions and in our values. Our genetics go back at least 200,000 years. Let's hope we have made some progress along the way, and that we can continue to make progress.

Models of Neanderthal and Homo sapiens
by Alfons and Adrie Kennis

