
A Problem with the Leading Theory

‘Moral dilemmas engender conflicts between two traditions: consequentialism [...] and deontology’

(Crockett, 2013, abstract)

Full quote: ‘Moral dilemmas engender conflicts between two traditions: consequentialism, which evaluates actions based on their outcomes, and deontology, which evaluates actions themselves.’

Greene et al’s theory

‘controlled cognition’ (slow)

produces ‘utilitarian ... moral judgment aimed at promoting the “greater good” (Mill, 1861/1998)’

‘automatic emotional responses’ (fast)

produce ‘competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’

(Greene, 2015, p. 203)

So the challenge was to characterize the processes in our dual-process model.
This is an excellent response to that challenge: it provides two kinds of information. Greene’s answer tells us both about differences in how the processes are implemented (one involves emotion, the other reasoning), and about differences in what the processes compute (one computes utility, the other is based on rules of behaviour).
Even better, the view generates readily testable predictions ...

prediction: ⇧time-pressure → ⇧deontological

This generates many predictions. One is this: if you give people a dilemma that involves killing one person to save several, their deontological and utilitarian responses will conflict. Accordingly, increasing time pressure or cognitive load should increase the relative influence of the automatic process and thereby make deontological responses to the dilemma more frequent.
The key bit of evidence concerns the effects of time pressure, which apparently make people less inclined to give utilitarian responses where there is a conflicting deontological response ...
High vs low conflict: ‘We distinguished personal dilemmas according to whether harm caused was depicted as a side-effect (low-conflict) or as intended as the means to an end (high-conflict).’
personal vs impersonal: ‘Personal dilemmas—for instance, picture an overcrowded lifeboat in which a crew member throws someone out to keep it afloat—tend to engage emotional processing to a greater extent than dilemmas necessitating less personal agency (e.g., the crew member only needs to hit a switch to remove the passenger).’
‘In the time-pressure condition, participants had no more than 8 s to read the question, consider it, and enter their judgment.’

Suter & Hertwig (2011), Figure 1

caption: ‘Fig. 1. Average proportion of deontological responses separately for conditions and type of moral dilemma (high- versus low-conflict personal and impersonal dilemmas) with data combined across the fast (i.e., time-pressure and self-paced-intuition) and slow conditions (no-time-pressure and self-paced-deliberation) in Experiments 1 and 2, respectively. Error bars represent standard errors. Only responses to high-conflict dilemmas differed significantly between the conditions’

‘participants in the time-pressure condition, relative to the no-time-pressure condition, were more likely to give “no” responses in high-conflict dilemmas’

(Suter & Hertwig, 2011, p. 456).

prediction: ⇧time-pressure → ⇧deontological ??

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

Rosas & Aguilar-Pardo, 2020 : converse

(Suter & Hertwig, 2011; Rosas & Aguilar-Pardo, 2020)
Btw, neither study observed the effects of manipulating outcomes.
Cf. Gawronski & Beer (2017, p. 365): ‘a given judgment cannot be categorized as utilitarian without confirming its property of being sensitive to consequences, which requires a comparison of judgments across dilemmas with different consequences. Similarly, a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms, which requires a comparison of judgments across dilemmas with different moral norms’

Rosas & Aguilar-Pardo (2020)

This is just one example: they tested various time intervals and also re-ran the analysis just for participants who passed all comprehension questions.

prediction: ⇧time-pressure → ⇧deontological ??

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

Rosas & Aguilar-Pardo, 2020 : converse

These findings are pairwise inconsistent.
Perfect for me as a philosopher: I make my living by thinking about apparently conflicting scientific findings.
Turning to theoretical considerations ...

fast is deontological

... because being utilitarian requires computing utility—the value of possible consequences—which demands effort.
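
To make ‘computing utility’ concrete, here is a purely illustrative calculation for a one-versus-five sacrificial dilemma (the symbol $v$ and the numbers are my illustration, not Greene’s):

\[
U(\text{sacrifice one}) = 5v - v = 4v
\qquad \text{vs.} \qquad
U(\text{do nothing}) = 0
\]

Reaching the utilitarian verdict requires representing the possible outcomes, assigning them values and comparing the aggregates; reaching the deontological verdict requires only applying the rule ‘do not kill’.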

fast is utilitarian

... because fast is ancient, and evolutionary pressures favour utilitarian outcomes

(at least for kin, perhaps also for in-group members)

Hrdy (1979)

Kurzban, DeScioli, & Fein (2012)

One source of support for the ‘fast is utilitarian’ idea comes from research on non-nutritional filial infanticide, which appears to be widespread in animals, including humans (see Hrdy (1979, p. 16ff) on ‘Parental Manipulation’).
The idea is that parents are often willing to sacrifice an infant if doing so will disproportionately benefit existing or future siblings.
Further inspiration for this might be taken from Kurzban et al.’s finding that people are quite willing to sacrifice one Friend or Brother to save five Friends or Brothers.
This is inspired by Kurzban et al. (2012) rather than something those authors endorse.
Kurzban et al. (2012) argue that the greater-willingness-to-kill-a-family-member-to-save-five-family-members effect is due to people showing greater altruism towards kin (and friends) than strangers: they are more willing to risk condemnation by others[^attack] in order to gain aggregate benefits. If this is right, the difference between responses to dilemmas with all kin vs with all strangers might work as a (probably group-level) measure of tribalism.
[^attack]: people refrain from breaking moral rules in order to avoid ‘a coordinated moral attack against themselves’ (p. 333)
More carefully, Kurzban et al. (2012) say the effect arises because: i. there are at least two ‘decision processes’ involved, one that is sensitive to rules like ‘do not kill’ and one that just seeks ‘to maximize utility’ with no prohibitions; ii. it is costly for people to break the moral rules---doing so risks ‘a coordinated moral attack against themselves’ (p. 333); iii. it is altruistic to ignore the rules to obtain the best aggregate outcome; iv. people are more altruistic towards kin than strangers.
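One way to picture this proposal is with a rough formalisation (mine, not Kurzban et al.’s): the rule-free, utility-maximizing process recommends sacrificing one to save five whenever the altruism-weighted net benefit exceeds the expected cost of condemnation,

\[
a \cdot (5v - v) > c ,
\]

where $a$ is the agent’s altruism weight towards those involved (higher for kin and friends than for strangers), $v$ the value of one life saved, and $c$ the expected cost of ‘a coordinated moral attack’. On this sketch, raising $a$ while holding $c$ fixed is what makes people more willing to sacrifice one Brother to save five Brothers than one stranger to save five strangers.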
[An interesting case to think through as a potential contrast to ethical decoupling: because the different questions (would you ...? vs is it wrong to ...?) bring out different answers, there appear to be multiple, conflicting attitudes to pushing a Brother in the footbridge case. The ‘Is it wrong?’ question might be prompting people to think about laws, about what others will say, or about their own feelings of guilt.]
image source: ai

caveat: rich body of neuropsychological evidence

(Greene, 2014)

Imagine you are buying a used car from Donald Trump. Now you are in the right mindset to hear a philosopher—including me—talk about science.
Philosophers are just very untrustworthy when it comes to science.
But nevertheless I think it is time to consider alternatives to Greene et al’s theory ...

Greene et al’s theory

‘controlled cognition’ (slow)

produces ‘utilitarian ... moral judgment aimed at promoting the “greater good” (Mill, 1861/1998)’

‘automatic emotional responses’ (fast)

produce ‘competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’

(Greene, 2015, p. 203)

prediction: ⇧time-pressure → ⇧deontological

??

So my aim is to think about how we might start in constructing an alternative to Greene et al’s theory ...