Why some intelligent agents are conscious by Hakwan Lau

【Speaker】Hakwan Lau, Team Leader, RIKEN Institute
【Book】
"In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience"
【Long abstract】
In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. Such an agent needs to develop the capacity to self-monitor the effectiveness of its various perceptual processes, which changes over time as the agent learns new perceptual categories. To perform these metacognitive functions well, the relevant representations need to demonstrate properties such as smoothness and sparsity in coding. With such a sensory coding scheme, learning is effective and generalizes well to novel stimuli.

One additional consequence is that the right kind of metarepresentations of these sensory codes can inform the agent regarding exactly how similar a sensory signal is with respect to all other possible sensory signals. This rich set of similarity information is subjective in the sense that it concerns how the agent itself can discriminate between the relevant stimuli, rather than how physically similar the stimuli are.

It is further assumed that the agent will need to develop the capacity for predictive coding, in which a generative model projects its output to the very same sensory mechanisms responsible for bottom-up processing. This creates the need for reality monitoring: that is, the agent needs to know whether a certain sensory activity is triggered by an external stimulus, and thereby reflects the state of the world right now, or whether it is generated endogenously, such as for the functions of planning and working memory. This reality monitoring mechanism can in turn facilitate metacognition. That's because this mechanism contains rich statistical information about the sensory representations, so it is effective in distinguishing between meaningful sensory signals and noise. With these mechanisms in place, the agent may develop the capacity for general intelligence, that is, the ability to make rational decisions at a symbolic level, using rule-based syntactic operations.
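The idea that subjective similarity derives from the agent's own sparse sensory code, rather than from physical stimulus distance, can be illustrated with a minimal toy sketch. Everything here is an illustrative assumption, not material from the talk: a random linear map with top-k sparsification stands in for a smooth, sparse sensory encoder, and cosine similarity between codes stands in for the agent's similarity metarepresentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory encoder": a random linear map followed by top-K
# sparsification, standing in for a smooth, sparse sensory code.
D_IN, D_CODE, K = 8, 64, 8          # stimulus dim, code dim, active units
W = rng.normal(size=(D_CODE, D_IN)) / np.sqrt(D_IN)

def encode(stimulus):
    """Sparse code: keep only the K strongest units, zero the rest."""
    h = W @ stimulus
    weakest = np.argsort(np.abs(h))[:-K]   # indices of all but the top K
    h[weakest] = 0.0
    return h

def subjective_similarity(a, b):
    """Similarity as the agent's code 'sees' it: cosine of sparse codes."""
    ca, cb = encode(a), encode(b)
    return float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb) + 1e-12))

s1 = rng.normal(size=D_IN)
s2 = s1 + 0.1 * rng.normal(size=D_IN)    # a physically similar stimulus
s3 = rng.normal(size=D_IN)               # an unrelated stimulus

print(subjective_similarity(s1, s2), subjective_similarity(s1, s3))
```

In this sketch, stimuli the encoder maps to overlapping sparse codes come out as subjectively similar, so similarity is defined by the agent's discriminative capacity, in the spirit of the abstract's claim.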
This is made possible because the metacognitive mechanisms can filter out noisy perceptual decisions, and select only the more reliable ones for this kind of highly noise-sensitive processing. As such, when a percept is selected for this purpose, it acquires what we can call an 'assertoric force'. From the general reasoning system's perspective, such a percept presents itself as a given, as truthfully representing the current state of the world. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring an assertoric status within one's belief system. When this happens, without effort, the agent knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this overall architecture, for which consciousness might have arisen as a byproduct.
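The filtering step described above can be sketched in a toy signal-detection setup. All names, the confidence measure, and the threshold are illustrative assumptions, not details from the talk: confidence is simply the distance of noisy evidence from the decision bound, and only high-confidence percepts acquire "assertoric" status and enter the belief store used for symbolic reasoning.

```python
import numpy as np

rng = np.random.default_rng(1)

def perceptual_decision(signal_strength):
    """A noisy binary percept plus a metacognitive confidence estimate."""
    evidence = signal_strength + rng.normal()
    decision = evidence > 0
    confidence = abs(evidence)        # distance from the decision bound
    return decision, confidence

def assert_into_beliefs(beliefs, percept, confidence, threshold=1.5):
    """Only high-confidence percepts acquire assertoric status: they are
    appended to the belief store as givens for downstream reasoning."""
    if confidence >= threshold:
        beliefs.append(percept)
    return beliefs

beliefs = []
for strength in [0.1, 2.5, -0.2, 3.0]:    # weak and strong signals
    decision, confidence = perceptual_decision(strength)
    assert_into_beliefs(beliefs, decision, confidence)

print(beliefs)
```

The design point is that the noise-sensitive symbolic system never sees raw perceptual decisions; the metacognitive gate decides which percepts are presented to it as true.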
【Other Information】
Please send an email to subscribe to our mailing list.
Please contact Hiroaki Hamada, Autonomous Agent Team, Araya Inc. if you have any questions.
Twitter: @HiroTaiyoHamada