Nancy A Salay: A Condition of Representational Intentionality
In this presentation, I outline an approach to the problem of intentionality that has the twin virtues of 1) dividing the problem into (more) manageable subproblems and 2) clarifying an aspect of the Hard Problem. The two subproblems of intentionality are the Problem of Directed Intentionality — how objects become meaningful to agents — and the Problem of Representational Intentionality — how agents learn to interact with objects as representations of other objects.

The standard practice in mainstream cognitive science and philosophy conflates these two questions by taking internal states, whether subpersonal or personal, to be the representations that intentional agents use. This conflation, I argue, yields both the grounding and the explanatory gap problems, and makes the Hard Problem harder. By keeping the two questions separate, however, we gain clarity and uncover some interesting dependencies: 1) directed intentionality (DI) is an aspect of basic sentience; 2) directed intentionality is a necessary condition of representational intentionality (RI).

While a full account of directed intentionality requires a solution to the Hard Problem, headway can be made on representational intentionality without it. First, DI alone cannot be a sufficient condition of RI: while many animals exhibit DI — that is, they perceive the objects that are meaningful to them — very few have developed sophisticated representational systems such as language. I argue that RI depends upon two further factors, one external — the existence of a linguistic cognitive niche — and one internal — a capacity for system-level expectation. In this presentation I speak only to the latter; elsewhere I detail the externalist account of how an agent can become a representation user without already being a representor.
Associative learning analyses provide an excellent tool for investigating system-level expectation, since subsystem-level expectation — neural priming — is already tracked using this approach. But some terminological changes are needed. On classical models, the system-level responses that are measured, e.g., rate of salivation, are only one of a bundle of responses. An animal responding to some meaningful stimulus isn't just salivating, for example; it is, at the same time, continually sensing the stimulus and, if it is capable, experiencing episodic flashes of other US situations. Though tracking this larger bundle of responses poses a serious operationalisation challenge, we can make headway by clearly representing the differences.
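The classical associative-learning models mentioned above can be made concrete with the standard Rescorla–Wagner update rule, in which associative strength grows in proportion to the prediction error on each CS–US pairing. This is a minimal textbook sketch for orientation only; the model, parameter names, and values are standard assumptions from the associative-learning literature, not material presented in the talk:

```python
# Illustrative sketch: the Rescorla-Wagner model of associative learning.
# Parameters alpha, beta, lam are standard textbook names, chosen here
# as assumptions, not values from the talk.

def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Return associative strength V after each CS-US pairing.

    alpha: salience of the conditioned stimulus (CS)
    beta:  learning-rate parameter tied to the unconditioned stimulus (US)
    lam:   asymptote of learning the US can support
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # prediction-error update
        history.append(v)
    return history

strengths = rescorla_wagner(10)
# Associative strength rises monotonically toward the asymptote lam.
assert all(later > earlier for earlier, later in zip(strengths, strengths[1:]))
```

On this kind of model, the measured system-level response (e.g., salivation rate) is read off a single strength value; the point in the passage above is that a richer operationalisation would track the whole bundle of concurrent responses rather than one scalar.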
Edited by Emilio Manzotti