Integrated Information Theory Lecture 1

Alexander Nedergaard presents his favorite theory of consciousness. This lecture was given at the Institute of Neuroinformatics in Zurich, Switzerland, as part of the course Consciousness: From Philosophy to Neuroscience.
Comments

Very interesting! Is there a Lecture 2 to come?

Dopamine-officiel

The best way to describe the tricky concept of exclusion is that consciousness has borders: things are either in or out. My current experience may be the experience of a red apple. Exclusion says there is not also an experience happening which is exactly like my current one but without the red. Further, there is not also an experience happening which is just like my current one but which additionally includes consciousness of my blood pressure; that information is excluded from my current experience, even if there is a (necessarily smaller) integrated system in my body that DOES include it. Only the experience that exists the most, the one with maximal integrated information, is the experience that actually exists.

This is extremely important for the theory, because lots of sub-systems within an integrated system have non-zero phi values but no separate existence; they are "subsumed" into one maximally integrated experience. Conversely, there may be a system in my brain, like my neo-cortex plus my cerebellum, which has a non-zero phi value but, because of exclusion, doesn't give rise to an experience, since only the maximally real causal whole exists. If exclusion didn't hold, every experience we have would be just the tip of the iceberg: a whole cascade of smaller experiences would be happening too, and that would be not just weird but causally over-efficacious, since every element in a system would potentially be contributing to the construction of not one causal whole but many.
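To make the selection rule concrete, here is a minimal Python sketch of exclusion as described above: among overlapping candidate systems, only the one with maximal phi survives; disjoint candidates can each be their own complex. The element names and phi values are made up for illustration, and this is only the selection step, not a real phi calculation.

```python
# Toy sketch of IIT's exclusion postulate: among overlapping candidate
# systems, only the one with maximal phi is taken to exist as an
# experience. Phi values here are hypothetical, not computed.

def apply_exclusion(candidates):
    """candidates: dict mapping a frozenset of elements to a phi value.
    Returns the candidates that survive exclusion: within any group of
    overlapping candidates, only the maximal-phi one remains."""
    # Consider candidates in order of decreasing phi.
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    winners = []
    for elements, phi in ranked:
        if phi <= 0:
            continue  # no integration, no experience
        # Excluded if it overlaps a higher-phi winner ("subsumed").
        if all(elements.isdisjoint(w) for w, _ in winners):
            winners.append((elements, phi))
    return winners

# Hypothetical values: the cortex-like system wins; its sub-system and
# the lower-phi cortex+cerebellum super-system are excluded because
# they overlap it. The disjoint cerebellum forms its own small complex.
candidates = {
    frozenset({"c1", "c2", "c3"}): 10.0,       # "neo-cortex"
    frozenset({"c1", "c2"}): 4.0,               # subsumed sub-system
    frozenset({"c1", "c2", "c3", "cb"}): 2.5,   # cortex + cerebellum
    frozenset({"cb"}): 0.1,                     # cerebellum alone
}
for elements, phi in apply_exclusion(candidates):
    print(sorted(elements), phi)
```

Running this prints only the maximal cortex-like complex and the disjoint cerebellum; the overlapping sub- and super-systems have non-zero phi yet no separate existence, which is exactly the point about subsumption above.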

mattsigl

Also, I think the lecturer is wrong (if I understood him right, which maybe I didn't) that a system MUST constrain both the future and the past, with a non-zero value on both sides, to generate consciousness. If it's possible to have a system whose current state reduces no uncertainty about the past state but some, or a lot of, uncertainty about the future state (or vice versa), that system would still be generating integrated information and consciousness. He's also flat wrong about the lightbulb (photodiode) example producing a phi value of zero. The theory is explicit that a photodiode generates the smallest possible amount of integrated information, but definitely greater than zero.
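On the photodiode point, here is a toy Python illustration, not the official IIT calculus: it models the photodiode as a single binary unit that deterministically copies its input, and uses plain mutual information between past and present states as a crude stand-in for how much a state constrains the past. The modelling choices (uniform prior, deterministic copy) are assumptions for the sketch.

```python
# Toy stand-in for "how much the present state constrains the past":
# mutual information between past and present, in bits. This is a
# simplification, not IIT's actual phi measure.
import math

def mutual_information(joint):
    """joint: dict mapping (past, present) pairs to probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Photodiode: the present state perfectly specifies the past input
# (uniform prior over light/dark), so it constrains the past by
# exactly one bit: small, but not zero.
photodiode = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(photodiode))  # 1.0

# A unit whose present state is independent of its past constrains
# nothing, which is what a genuinely zero value would look like.
noise = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(noise))  # 0.0
```

Even under this crude measure the photodiode comes out at one bit rather than zero, which matches the comment's claim that the theory assigns it a minimal but strictly positive value.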

mattsigl