Is There a Mathematical Model of the Mind? (Panel Discussion)

A Google TechTalk, presented by Bin Yu, Geoffrey Hinton, Jack Gallant, Lenore Blum, Percy Liang, Rina Panigrahy, 2021/05/17

The panelists:
Bin Yu - UC Berkeley
Geoffrey Hinton - University of Toronto and Google
Jack Gallant - UC Berkeley
Lenore Blum - CMU/UC Berkeley
Percy Liang - Stanford University
Rina Panigrahy - Google (moderator)
Comments

Fantastic panel! Lots of good ideas here. I especially loved the points made by Geoffrey Hinton.

Here are my personal takes. Yes, there is a possible mathematical model of the mind. I think the mind models a copy of the world around it in order to draw inferences and keep surviving, so you have to do the same: build a mathematical model of the world first. Once you have that, build an agent that survives in your modeled world.

As for what's missing in today's Deep Learning algorithms, I agree that a lot is missing. I would even go so far as to say that the core input-output framing holds back all of deep learning, and it will take a lot of work to make those sorts of algorithms resemble any kind of general intelligence. I believe the eventual algorithm of the mind will work from the notion of knowledge retention and inference, the idea of growth. Its goal won't be to produce or output anything, but to live and exist as an entity that seeks to know more about the world and our universe as a whole.

The next question, how we remember things, is a difficult one. Neuroscientists don't know exactly, but as Jack Gallant said, we have theories. I don't know either. But if I have to answer, I think our memory works as a kind of large, dynamic, multi-dimensional inference graph. What I mean is that nothing our brains learn is ever completely static; change is an inherent part of our universe. Any fact we know is simply an extended assumption built from a few core feelings we learned through our senses. Sleep is an important part of memory: during sleep we reshuffle our inferences and solidify certain assertions so they can be recalled faster in the future. Time is also deeply ingrained in our system of memory, since sleep also removes ideas we had in the past. Forgetting is a large part of memory management. What we don't think about often gets removed, because our brains decide those abstractions aren't necessary for survival. Lastly, as for how we remember visual objects, I think some brain system generates something like a series of non-uniform rational B-splines (NURBS), and each detail we can remember about an object is an additional NURBS surface. I have no answer yet for how the brain generates inferences from its sensory data, but I'm working on it.
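As a purely illustrative aside, here is a minimal Python sketch of that "dynamic inference graph with forgetting" idea, under my own simplifying assumptions: ideas are nodes, inferences are edges, each recall refreshes a timestamp, and a "sleep" pass prunes anything not recalled within a window. The names (InferenceGraph, recall, infer, sleep, forget_after) are hypothetical and not anything the panel proposed.

import time

class InferenceGraph:
    # Toy model: ideas are nodes, inferences are directed edges, and any
    # idea not recalled within `forget_after` seconds is pruned during a
    # "sleep" pass. All names here are hypothetical illustration.
    def __init__(self, forget_after=7 * 24 * 3600):
        self.last_recall = {}    # idea -> timestamp of last recall
        self.edges = set()       # (premise, conclusion) inference links
        self.forget_after = forget_after

    def recall(self, idea):
        # Touching an idea refreshes it, making it more likely to survive.
        self.last_recall[idea] = time.time()

    def infer(self, premise, conclusion):
        # Adding an inference counts as recalling both ideas involved.
        self.recall(premise)
        self.recall(conclusion)
        self.edges.add((premise, conclusion))

    def sleep(self):
        # Consolidation pass: drop ideas (and their edges) that have not
        # been recalled within the forgetting window.
        now = time.time()
        stale = {i for i, t in self.last_recall.items()
                 if now - t > self.forget_after}
        for i in stale:
            del self.last_recall[i]
        self.edges = {(a, b) for (a, b) in self.edges
                      if a not in stale and b not in stale}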

For the question of whether humans have some series of modules or stacks for training their skill sets, or some kind of functions that call each other in there, my answer is no. Our graph stores information about what we did and what we did wrong, and using those two things it tells our muscles what to do. As the brain gets more information about the consequences of its actions, it updates the graph, and that is how we improve. In essence there are certain memory regions you could prod and query, but it's all so interconnected that you can't really point your finger at a spot and say, 'this is where we know how to do x'.
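To make that "record what we did and what went wrong, then adjust" loop concrete, here is a tiny Python sketch, again under my own assumptions: each (situation, action) pair keeps a running score, outcomes nudge the score up or down, and the best-scoring action is what gets sent to the "muscles" next time. The names (SkillMemory, record_outcome, choose) and the learning rate are hypothetical, not a claim about how the brain actually does it.

from collections import defaultdict

class SkillMemory:
    # Toy model: remember how well each action worked in each situation
    # and pick the best-remembered action next time. Hypothetical names.
    def __init__(self, learning_rate=0.1):
        self.scores = defaultdict(float)   # (situation, action) -> running score
        self.lr = learning_rate

    def record_outcome(self, situation, action, outcome):
        # outcome > 0 for success, < 0 for a mistake; nudge the score toward it.
        key = (situation, action)
        self.scores[key] += self.lr * (outcome - self.scores[key])

    def choose(self, situation, actions):
        # Send the best-scoring known action to the "muscles".
        return max(actions, key=lambda a: self.scores[(situation, a)])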

Why is recognizing a chair difficult for a Deep Learning system? Because of the ideas I mentioned above. It has no self, no reason for identifying chairs. You could make a model that tries to recognize a seat or certain kinds of chairs in a picture, but it cannot generalize the idea that a chair is something for a human to sit on. Anything could be a chair, given the right circumstances, which is why chairs are hard for DL algorithms to identify. The same goes for food, walls, liquids and gases, toilets, homes, boats: they're too general, because humans use any combination of materials found in the world to help them survive. You have to identify these things by their use, and DL algorithms are still too 'dumb' to infer that from a single photo.

For the question of consciousness in programs, my answer is this: when you create an algorithm that uses a specialized graph for forming memories, reads sensory data from attached peripherals, and generates ideas for its own survival by modeling the world around it, thereby developing a sense of self, that is when you have a conscious entity. It doesn't even need to have feelings specifically. It would be emotionally painful to turn off or disable a program running with those parameters, because then you are killing a potential.

Programming languages and natural languages are very different. Programming languages have rigid rules that allow source code to be compiled into machine-readable code. Natural languages are so fluid and changeable that they're impossible for a rigid machine to understand at face value. You'd need a natural-language processing algorithm, built with a programming language, to let a machine understand what you're saying, and we're still a ways from that.

IgnatRemizov

How do you mathematically model free will... unless you don't believe free will exists?

TrollMeister_

I thought Geoffrey Hinton was George Soros... and I was wondering what s**mbag George Soros was doing in an AI discussion?

TrollMeister_