Why some intelligent agents are conscious by Hakwan Lau

【Speaker】Hakwan Lau, Team Leader, RIKEN Institute

【Book】
"In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience"

【Long abstract】
In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. Such an agent needs to develop the capacity to self-monitor the effectiveness of its various perceptual processes, which changes over time as the agent learns new perceptual categories. To perform these metacognitive functions well, the relevant representations need to demonstrate properties such as smoothness and sparsity in coding. With such a sensory coding scheme, learning is effective and generalizes well to novel stimuli. One additional consequence is that the right kind of metarepresentations of these sensory codes can inform the agent regarding exactly how similar a sensory signal is, with respect to all other possible sensory signals. This rich set of similarity information is subjective in the sense that it concerns how the agent itself can discriminate between the relevant stimuli, rather than how physically similar the stimuli are. It is further assumed that the agent will need to develop the capacity for predictive coding, in which a generative model projects its output to the very same sensory mechanisms responsible for bottom-up processing. This creates the need for reality monitoring: that is, the agent needs to know whether a certain sensory activity is triggered by an external stimulus, and thereby reflects the state of the world right now, or whether it is generated endogenously, such as for the functions of planning and working memory. This reality monitoring mechanism can in turn facilitate metacognition. That's because this mechanism contains rich statistical information about the sensory representations, so it is effective in distinguishing between meaningful sensory signals and noise. With these mechanisms in place, the agent may develop the capacity for general intelligence, that is, the ability to make rational decisions at a symbolic level, using rule-based syntactic operations.
This is made possible because the metacognitive mechanisms can filter out noisy perceptual decisions, and select only the more reliable ones for this kind of highly noise-sensitive processing. As such, when a percept is selected for this purpose, it acquires what we can call an 'assertoric force'. From the general reasoning system's perspective, such a percept presents itself as a given, as truthfully representing the current state of the world. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring an assertoric status within one's belief system. When this happens, without effort, the agent knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this overall architecture, for which consciousness might have arisen as a byproduct.
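The abstract's claim that a smooth, sparse sensory code carries rich subjective-similarity information can be illustrated with a toy sketch (this is an illustration under assumed simplifications, not the mechanism proposed in the talk): encode each stimulus as a sparse vector of feature activations, then compare stimuli by the cosine similarity of their codes, so that "similarity" depends on the agent's own encoding rather than on the raw stimuli.

```python
import numpy as np

def sparse_code(stimulus, dictionary, k=3):
    """Toy sparse encoder: keep only the k dictionary features that
    respond most strongly to the stimulus, zeroing the rest.
    (Hypothetical illustration; names and method are assumptions.)"""
    activations = dictionary @ stimulus            # feature responses
    top_k = np.argsort(np.abs(activations))[-k:]   # k strongest features
    code = np.zeros_like(activations)
    code[top_k] = activations[top_k]
    return code

def subjective_similarity(code_a, code_b):
    """Cosine similarity between two sensory codes: how alike two
    stimuli are *for the agent*, given its own coding scheme."""
    denom = np.linalg.norm(code_a) * np.linalg.norm(code_b)
    return float(code_a @ code_b / denom) if denom else 0.0

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))          # 16 features over 8 input dims
s1 = rng.standard_normal(8)
s2 = s1 + 0.1 * rng.standard_normal(8)    # a physically similar stimulus
print(subjective_similarity(sparse_code(s1, D), sparse_code(s2, D)))
```

Note that two stimuli that are physically close may still receive quite different codes if they straddle a feature boundary — which is exactly the sense in which the similarity structure is the agent's own, not the world's.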

【Other Information】

Please send an email to subscribe to our mailing list.

Please contact Hiroaki Hamada, Autonomous Agent Team, Araya Inc. if you have any questions.
Twitter: @HiroTaiyoHamada

[35:24] I think the current theories will push us basically off the deep end: we will end up saying that logic gates are conscious, that fetuses are conscious, for not very scientific reasons. I think we would do well to do something like what in genetics people call "just-so stories", which is a somewhat unflattering term for something that is like a pre-theory theory. It's not a real theory that I'm giving you; I'm trying to give you a story, or some intuitions, that would actually make sense and fit into what we know about brains and machines, evolution, and computational modeling, and hopefully that would give you some sort of insight into what a specific theory of consciousness would look like and how we can get there. So it's not a real theory; I call it a just-so story. Also, there will be no mathematics, just so as not to disappoint you. I think having equations is very important if you want to have an eventual solid theory, but I think we are very far from being there. So my personal approach is to try not to make it so technical. Making it technical is great, because people tend to like that kind of work when they think it looks sophisticated, so it's easier for you to promote your ideas. But I think you also narrow your peer-review space, because a lot of people within the field are not so technical. And if you push a lot of equations, in fact, it's easy for you to hide some really poor philosophical ideas behind walls of equations.
