Minimal Virtues in Ethical Cognition

My third area is ethical cognition. This is a completely new area to me: I find it gripping but am still thinking about whether to even pursue this.
Let me start with the leading theory ...

III

minimal models in ethical cognition

I wasn’t going to share my research on this—it’s too early—but I saw that people here have been doing research on ethics and I couldn’t resist ...

Greene et al’s theory

‘controlled cognition’

produces ‘utilitarian ... moral judgment aimed at promoting the “greater good” (Mill, 1861/1998)’

‘automatic emotional responses’

produce ‘competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’

(Greene, 2015, p. 203)

prediction: ⇧time-pressure → ⇧deontological

This generates many predictions. One is this: if you give people a dilemma that involves killing one person to save several, their deontological and utilitarian responses will conflict. Accordingly, increasing time pressure or cognitive load should increase the relative influence of the automatic process and thereby make deontological responses to the dilemma more frequent.
The key bit of evidence concerns the effects of time pressure, which, apparently, makes people less inclined to give utilitarian responses where there is a conflicting deontological response ...
High vs low conflict: ‘We distinguished personal dilemmas according to whether harm caused was depicted as a side-effect (low-conflict) or as intended as the means to an end (high-conflict).’
personal vs impersonal: ‘Personal dilemmas—for instance, picture an overcrowded lifeboat in which a crew member throws someone out to keep it afloat—tend to engage emotional processing to a greater extent than dilemmas necessitating less personal agency (e.g., the crew member only needs to hit a switch to remove the passenger).’
‘In the time-pressure condition, participants had no more than 8 s to read the question, consider it, and enter their judgment.’

Suter & Hertwig, 2011 figure 1

caption: ‘Fig. 1. Average proportion of deontological responses separately for conditions and type of moral dilemma (high- versus low-conflict personal and impersonal dilemmas) with data combined across the fast (i.e., time-pressure and self-paced-intuition) and slow conditions (no-time-pressure and self-paced-deliberation) in Experiments 1 and 2, respectively. Error bars represent standard errors. Only responses to high-conflict dilemmas differed significantly between the conditions’

‘participants in the time-pressure condition, relative to the no-time-pressure condition, were more likely to give ‘‘no’’ responses in high-conflict dilemmas’

(Suter & Hertwig, 2011, p. 456).

fast ~ deontological; slow ~ utilitarian ?

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

Rosas & Aguilar-Pardo, 2020 : converse

(Suter & Hertwig, 2011; Rosas & Aguilar-Pardo, 2020)
Btw, neither study examined the effects of manipulating outcomes.
Cf Gawronski & Beer (2017, p. 365): ‘a given judgment cannot be categorized as utilitarian without confirming its property of being sensitive to consequences, which requires a comparison of judgments across dilemmas with different consequences. Similarly, a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms, which requires a comparison of judgments across dilemmas with different moral norms’

Rosas & Aguilar-Pardo (2020)

This is just one example: they tried various time intervals and also re-ran the analysis just for participants who passed all comprehension questions.

fast ~ deontological; slow ~ utilitarian ?

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

Rosas & Aguilar-Pardo, 2020 : converse

Turning to theoretical considerations ...

fast is deontological

... because being utilitarian requires computing utility—the value of possible consequences—which demands effort.

fast is utilitarian

... because fast is ancient, and evolutionary pressures favour utilitarian outcomes

(at least for kin, perhaps also for in-group members)

Kurzban, DeScioli, & Fein (2012)

caveat: rich body of neuropsychological evidence

(Greene, 2014)

I have cut Crockett from this slide (for flow), but it’s relevant that not only Greene but others hold this view.

further problem: Which factors influence responses to dilemmas?

This problem is methodological.
Important that this is really about what drives responses to the dilemmas, rather than the dual-process idea specifically.
Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors which have been found to influence responses, including:

- whether an agent is part of the danger or a bystander;

- whether an action involves forceful contact with a victim;

- whether an action targets an object or the victim;

- how the victim is described (Waldmann et al., 2012);

- whether there are irrelevant alternatives (Wiegmann, Horvath, & Meyer, 2020);

- order of presentation (Schwitzgebel & Cushman, 2015).

Wiegmann et al. (2020) show that responses are subject to irrelevant additional options: like lay people, philosophers will more readily endorse killing one person to save nine when given five alternatives than when given six alternatives. (These authors also demonstrate order-of-presentation effects.) This is not mentioned in Waldmann et al. (2012, p. 288).

‘various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’

Given this, it is hard to make predictions about how time pressure might skew judgements.

‘almost all these confounding factors influence judgments, along with a number of others [...] The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’

(Waldmann et al., 2012, pp. 288–90).

an alternative: construct a minimal model of the ethical

Greene et al’s theory

‘controlled cognition’

produces ‘utilitarian ... moral judgment aimed at promoting the “greater good” (Mill, 1861/1998)’

‘automatic emotional responses’

produce ‘competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’

(Greene, 2015, p. 203)

prediction: ⇧time-pressure → ⇧deontological

??

What would a minimal model be a model of?

To work out what the minimal model should target, we first need some assumptions about what ethics is.
Probably best if we do not take these assumptions from our own intuitions. Maybe use a survey of anthropological research.
This is important because it tells us whether a candidate model is a model of the ethical or of some other domain.
[This is still about what ethics is—what is the minimal model attempting to re-create?]
Survey of anthropological research
Do not need to buy the whole moral foundations theory
Background: ‘intuitive ethics’ (Haidt & Joseph, 2004; Haidt & Graham, 2007) claims that there are five evolutionarily ancient, psychologically basic abilities linked to:

harm/care

fairness (including reciprocity)

in-group loyalty

respect for authority

purity, sanctity

(Haidt & Graham, 2007)

What processes might implement a minimal model?

instrumental conditioning

is this (Crockett, 2013; Cushman, 2013)?

situation -> action -> outcomes -> rewards

(Dickinson, 1994, p. 48)

You get rewarded based on the outcomes of your action. In selecting an action to perform you therefore need to work out how likely various outcomes are and how rewarding those would be.
This (the calculation of expected utility) turns out to be computationally costly.
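In standard decision-theoretic notation (my gloss, not a formula from Dickinson), the costly computation is:

```latex
EU(a \mid s) = \sum_{o} P(o \mid s, a)\, U(o),
\qquad
a^{*} = \arg\max_{a} EU(a \mid s)
```

Selecting an action this way means representing each possible outcome o, estimating its probability given the situation s and the action a, and weighting it by its reward U(o).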
The situation -> action -> outcomes -> rewards schema is another way of stating Thorndike’s Law of Effect:
‘The presentation of an effective [=rewarding] outcome following an action [...] reinforces a connection between the stimuli present when the action is performed and the action itself so that subsequent presentations of these stimuli elicit the [...] action as a response’ (Dickinson, 1994, p. 48).
When the action is performed in the presence of the stimulus, the connection is strengthened (or ‘reinforced’) if the action is rewarded.
If the connection is strong enough, the presence of the stimulus will cause the action to occur.
This is the minimal model involved in action-selection: just the situation and the action. All the complexity concerning possible outcomes and their rewards is cut off.
And what is breathtakingly beautiful about this: the minimal model does a decent job of approximating maximization of expected utility as long as the outcomes of actions and their rewards are reasonably stable.
Btw, neuroscientists call this ‘reinforcement learning’, but it has its origins in the animal learning literature and specifically in Thorndike’s Law of Effect.
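To make this concrete, here is a minimal sketch of such a learner (my own toy illustration, not code from Dickinson or any cited paper; the ‘lever_room’ world and all parameter values are made-up assumptions). It caches only stimulus–action strengths, yet in a stable environment its choices come to approximate expected-utility maximisation; the last lines illustrate the signature limit named on the next slide.

```python
# Toy sketch of habit learning via Thorndike's Law of Effect.
# My own illustration; world and parameters are hypothetical.
import random

ALPHA = 0.2      # learning rate
strength = {}    # (situation, action) -> connection strength

def choose(situation, actions, epsilon=0.1):
    """Mostly pick the action with the strongest learned connection."""
    if random.random() < epsilon:
        return random.choice(actions)          # occasional exploration
    return max(actions, key=lambda a: strength.get((situation, a), 0.0))

def reinforce(situation, action, reward):
    """Law of Effect: a rewarding outcome strengthens the connection."""
    key = (situation, action)
    strength[key] = strength.get(key, 0.0) + ALPHA * (reward - strength.get(key, 0.0))

# Stable contingencies: because outcomes and rewards do not change, the
# cached connection strengths come to approximate expected utility.
outcome_of = {('lever_room', 'press'): 'food', ('lever_room', 'wait'): 'nothing'}
reward_of = {'food': 1.0, 'nothing': 0.0}

for _ in range(500):
    action = choose('lever_room', ['press', 'wait'])
    reward = reward_of[outcome_of[('lever_room', action)]]
    reinforce('lever_room', action, reward)

print(choose('lever_room', ['press', 'wait'], epsilon=0))   # -> 'press'

# Signature limit (next slide): devalue the outcome. The habit stores
# only situation-action strengths, not outcomes, so pressing persists
# until extinction slowly relearns the connection.
reward_of['food'] = 0.0
print(choose('lever_room', ['press', 'wait'], epsilon=0))   # still 'press'
```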

signature limit: persistence in extinction following devaluation

How to add an ethical twist?

Alt formulation: can we give this process an ethical twist?
So this system approximates the computation of expected utility.
Now the question is, how could we adapt the habitual processes to approximate the computation not of expected utility but of maximum utility—that is, the thing the utilitarians think should be maximised?
We have to mess with the rewards.
And there are two ways to do this, one exemplified by vicarious rewards, the other by bitterness.

first idea: vicarious rewards — from expected utility to maximum utility

Key questions for this idea will be (i) whether there really are vicarious rewards and punishments (we know that others’ being rewarded is in some ways rewarding; and there is a body of research on mirroring disgust); and (ii) whether the vicarious rewards and punishments really can have an effect on instrumental learning (Contreras-Huerta et al. (2023) is not a million miles away, but is about prosocial action rather than instrumental learning). This is already a big research topic; all I’m suggesting is that we might usefully connect it to theories of ethical cognition.
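As a toy sketch of how this first idea might work (my own illustration; the weight and function names are hypothetical, not anyone’s published model): if others’ rewards and punishments enter the learner’s reward signal, the same Law-of-Effect mechanism is pushed from approximating the agent’s expected utility towards approximating something like total utility.

```python
# Toy sketch of vicarious rewards (hypothetical, not a published model):
# others' rewards enter the learner's reward signal with some weight.

VICARIOUS_WEIGHT = 0.8   # hypothetical; 1.0 would track total utility,
                         # lower values fit kin/in-group weighting

def effective_reward(own_reward, others_rewards):
    """Own reward plus weighted vicarious rewards and punishments."""
    return own_reward + VICARIOUS_WEIGHT * sum(others_rewards)

# An action that costs me a little but benefits two others a lot is
# reinforced, because the effective reward is positive:
print(effective_reward(-0.2, [1.0, 1.0]))    # -> 1.4
```

Fed into a Law-of-Effect update like the reinforce() in the earlier sketch, this effective reward would make the habit system approximate maximisation of (weighted) total utility rather than the agent’s own expected utility.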

second idea: bitterness modulates rewards

poison

marker: bitterness (Garcia & Hankins, 1975)

‘Hodgson (1970) [personal communication] related that he presented amines in gelatin capsules to sea anemones. They soon habituated to this novel way of feeding and ingested the capsules. He then filled a capsule with quinine and fed it to his subjects. A short time later they rejected the dissolving capsule, everting their gastrovascular cavities. After a single bitter experience the sea anemones refused to ingest another capsule.’ (Garcia & Hankins, 1975, p. 42)

implementation: aversion to bitterness

simple, old: sea anemones vs quinine one-shot learning (Garcia & Hankins, 1975, p. 42)

adaptive: higher sensitivity (Li & Zhang, 2014) + higher threshold (Ji, 1994) in herbivores

herbivores: more toxins (so need greater sensitivity = higher number of Tas2r genes) but also more bitter things in their diet, so they would miss too much if they had a low threshold; whereas carnivores rarely encounter bitter things and can afford to skip all of them.
Chapman et al. establish three things:
1. responses to bitterness are marked by activation of the levator labii muscle, ‘which raises the upper lip and wrinkles the nose’;
2. bitter responses are made not just to bitter tastes but also to ‘photographs of uncleanliness and contamination-related disgust stimuli, including feces, injuries, insects, etc.’;
3. in a dictator game, participants showed ‘objective (facial motor) signs of disgust that were proportional to the degree of unfairness they experienced.’
Chapman et al. (2009), figs. 1 & 3 (part)

If a limited but useful range of moral violations can produce bitter sensations, general-purpose learning mechanisms can produce aversion to actions that generate these moral violations.
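A matching toy sketch for the second idea (again my own illustration; the punishment magnitude is a made-up assumption): if a detected gross, immediate violation triggers a bitter punishment signal, a single trial suffices to disfavour the action, on the model of the sea anemones and quinine.

```python
# Toy sketch of bitterness modulating rewards (hypothetical): a detected
# gross, immediate violation adds a large bitter punishment to the
# reward signal of a general-purpose learner.

BITTER_PUNISHMENT = -5.0   # hypothetical magnitude
ALPHA = 0.2                # same learning rate as the earlier sketch

def moral_reward(own_reward, violation_detected):
    """Material payoff plus a bitter punishment for detected violations."""
    return own_reward + (BITTER_PUNISHMENT if violation_detected else 0.0)

# One unfair-but-profitable trial, starting from a neutral connection:
strength = 0.0
strength += ALPHA * (moral_reward(1.0, True) - strength)
print(strength)   # -0.8: the action is disfavoured after a single trial
```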

Isn’t unfairness too abstract for a minimal theory?

12–14 month olds: Geraci & Surian (2011); Surian, Ueno, Itakura, & Meristo (2018)

Isn’t unfairness too abstract for a minimal theory?
On the face of it, unfairness seems to be too abstract for a minimal theory. But infants’ abilities indicate that there is a way of detecting unfairness which does not require great cognitive sophistication ... (no: unhelpful rather than unfair!!!)

How does unfairness trigger bitterness?

This is a key question because we understand quite well how toxins trigger bitterness (taste receptors etc.; we even have a genetic story).

What processes might implement a minimal model?

instrumental conditioning

is this (Crockett, 2013; Cushman, 2013)?

situation -> action -> outcomes -> rewards

(Dickinson, 1994, p. 48)

signature limit: persistence in extinction following devaluation

How to add an ethical twist?

first idea: vicarious rewards — from expected utility to maximum utility

second idea: bitterness modulates rewards

At this point I am far from having a minimal model of the ethical domain, but I have some pointers. The findings on vicarious rewards, and on the bitterness of moral violations, tell us something about what a minimal model of the ethical would likely include.
But we can perhaps see how it might begin if the focus on instrumental conditioning is correct ...

What would a minimal model be a model of?

harm/care

fairness (including reciprocity)

in-group loyalty

respect for authority

purity, sanctity

(Haidt & Graham, 2007)

In the minimal model, harm should be operationalised in relation to instrumental conditioning.
On this way of operationalising harm, the only harms that feature in a minimal model are those that would also count as rewards or punishments in the context of instrumental conditioning.
On the minimal model, we should be able to make nothing of hidden harms, nor of indirect harms.
in-group loyalty might not be part of the minimal model at all but just a side-effect of the fact that vicarious responses (including mirroring) tend to be stronger for in-group than out-group members. [many sources on this including the GROOP effect, but also a feature I think implied by the resemblance in Contreras-Huerta et al. (2023) [*but check!]]
Similarly with fairness: we can see that this might be implemented partly by vicarious rewards (maximum happiness for us -> max for me) and partly through mechanisms like bitterness.
And of course we should expect the minimal model to be concerned with only quite gross and immediate violations of fairness.
So causing unfairness indirectly or as a distant consequence of some action should not count as a fairness violation at all on the minimal model.
So fairness will not be a unified category. But we can say, in general, that it will be limited to readily observable consequences of actions, and even then only to a small subset of them.
[no idea about respect for authority]
The idea is to develop a minimal model that would be an alternative to the very dominant idea of a deontological/utilitarian split.