Introduction: Core Systems and Knowledge

Inferential Isolation Is a Barrier to Understanding Cognitive Development

This is a terrible title, but my aim is to point to what I think will be one of the next big issues, if not the next big issue, in cognitive development.
The last three decades have been dominated by two amazing ideas: core knowledge (in the first year of life) and shared intentionality (from the second year of life). (Btw, one of the fun things about mindreading as a topic is that it includes both of these ideas.)
I think there is currently a broad consensus about three things.
First, from the first months of life, infants manifest abilities to track objects’ movements and their causal interactions, numerical and geometric features, and social attributes including some mental states.
Second, these abilities do not involve knowledge states. We know this because infants’ abilities are limited, and they would not be limited in these ways if they involved knowledge states. This is why Spelke and others use the label ‘core knowledge’.
Third, the abilities infants manifest in the first months of life support the acquisition of knowledge later in their development.
This thesis is neatly articulated and defended by Spelke in her 2022 book called What Babies Know (Spelke, 2022).
To illustrate broad consensus, note that these three claims are completely neutral on whether core systems enable belief tracking or are more limited. (For today I am going to steer well clear of any replication crisis.)

core system = module = fast, automatic process ...

No room for fine-grained distinctions between theories.
Also to ensure broad consensus I am not committing to any detailed claims about the core systems. In particular, whereas Spelke and Carey both insist that these involve concepts, I do not think we need to make any such commitment.

‘the diverse signatures of infants’ representations of objects depend on an interconnected set of abstract concepts’

(Spelke, 2022, p. 63)

Could use the reaching-vs-anticipatory looking dissociation to suggest a lack of inferential integration?
Also the details of what the core systems can do will not matter for present purposes.

Perhaps ‘[t]he replicable implicit ToM tasks tap early-developing more basic type I processes, whereas more complex and sophisticated, truly meta-representational type II processes reveal themselves only later in development in explicit tasks.’

Rakoczy (2022, p. 10)

How do core systems support learning?

‘Core knowledge systems [...] their outputs give rise to the concepts and beliefs that populate our thoughts.’

Spelke (2022, p. 199)

This quote is not easy to understand (‘give rise’), and it may be that the details are coming in Spelke’s next book.
But for now I want to start with the simplest idea: inferential integration.

Problem: inferential isolation

So this is Spelke’s picture, assuming that ‘give rise to’ involves inferential integration.

‘[...] core systems [...] are modular, in all the respects described by [...] Fodor in his Modularity of Mind.’

(Spelke, 2022, p. xx)

Spelke says things which I initially thought amounted to agreement: ‘These ancient core systems have a further property: They are modular, in all the respects described by the philosopher and pioneering cognitive scientist Jerry Fodor in his Modularity of Mind.’ (Spelke, 2022, p. xx)

core systems ‘give rise to mental representations that are deeply inaccessible to our human, conscious minds’

(Spelke, 2022, p. xx)

What looks compelling at this point: the internal representations of core systems are inferentially isolated but the outputs are not.
(This is probably how Fodorians think about the relation between perception and thought. I do think that is more puzzling than they realise.)
The examples are supposed to show that this simple idea does not work: the outputs of core systems are not available in thought.