How to build an A.I. brain that can conceive of itself | Joscha Bach | Big Think

How to build an A.I. brain that can conceive of itself
----------------------------------------------------------------------------------
A.I. can perform tricks, but can it truly think? Cognitive scientist Joscha Bach explains where we are on the path to artificial general intelligence, and where we need to be. The human mind can invent its own code and create models of arbitrary things—including itself—but we don't know how to build a mind quite like that just yet. To achieve A.G.I., will programmers have to re-create every single functional mechanism of the human brain? There are many schools of thought, but Bach's perspective is that the tinkering may not have to be as granular as many assume. Creating a mind may even be simpler (relatively speaking) than creating a single cell. Why? Because the human brain, says Bach, is less like clockwork and more like a cappuccino. "You mix the right ingredients and then you let it percolate and then it forms a particular kind of structure. So I do think, because nature pulls it off pretty well in most of the cases, that even though a brain probably needs more complexity than a cappuccino—dramatically more—it's going to be much simpler than a very complicated machine like a cell," he says. Joscha Bach's latest book is Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition (Oxford Series on Cognitive Models and Architectures).
----------------------------------------------------------------------------------
JOSCHA BACH:
Dr. Joscha Bach (MIT Media Lab and the Harvard Program for Evolutionary Dynamics) is an AI researcher who works and writes about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He is the founder of the MicroPsi project, in which virtual agents are constructed and used in a computer model to discover and describe the interactions of emotion, motivation, and cognition in situated agents. Bach's mission to build a model of the mind is bedrock research for the creation of Strong AI, i.e., cognition on par with that of a human being. He is especially interested in the philosophy of AI and in the augmentation of the human mind.
----------------------------------------------------------------------------------
TRANSCRIPT:
Joscha Bach: If you look at our current technological systems, they are obviously nowhere near where our minds are. They are very different. And one of the biggest questions for me is: What's the difference between where we are now and where we need to be if we want to build minds—if we want to build systems that are generally intelligent and self-motivated and maybe self-aware? And, of course, the answer to this is 'we don't know,' because if we knew we'd have already done it. But there are basically several perspectives on this. One is that our minds are general learning systems that are able to model arbitrary things, including themselves, and if that is what they are, they probably need a very distinct set of motivations and needs: things that they want to do. I think that humans get their specifics due to their particular needs. We have cognitive and social and physiological needs, and they turn us into who we are. Our motivations determine where we put our attention, what we learn and what we actually do in the world—what we model, how we perceive, what we are conscious of. In a similar sense, it might be that it's sufficient to build a general learning architecture and combine this with a good motivational system.
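
A minimal Python sketch of what such a motivational system might look like (the needs, decay rates, and update rule below are invented for illustration; this is not Bach's MicroPsi architecture): a set of slowly depleting needs decides what the agent attends to, and the agent only models what it attends to.

import random

class Agent:
    def __init__(self):
        # physiological, social, and cognitive needs, each in [0, 1]
        self.needs = {"energy": 0.5, "affiliation": 0.5, "competence": 0.5}
        self.model = {}  # what the agent has learned about each activity

    def step(self, activities):
        # needs decay over time, creating urgency
        for k in self.needs:
            self.needs[k] = max(0.0, self.needs[k] - 0.05)
        # attention goes to whatever would satisfy the most depleted need
        urgent = min(self.needs, key=self.needs.get)
        action = activities[urgent]
        reward = random.random()  # stand-in for the world's response
        self.needs[urgent] = min(1.0, self.needs[urgent] + reward)
        # learning follows motivation: only attended activities get modeled
        self.model[action] = 0.9 * self.model.get(action, 0.0) + 0.1 * reward
        return action

agent = Agent()
activities = {"energy": "eat", "affiliation": "socialize", "competence": "practice"}
for _ in range(10):
    agent.step(activities)
print(agent.model)  # only what the agent attended to got modeled

The motivational system here does no learning itself; it only routes attention, which is the division of labor the transcript describes.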
And we are not there yet in building a general learning architecture. For instance, our minds can learn and create new algorithms that can be used to write and invent code: programming code, for instance, or the rules that you need to build a shop and run that shop if you're a shopkeeper, which is some kind of programming task in its own right. We don't know how to build a system that is able to do this yet. It requires, for instance, systems that are able to learn loops, and we have some techniques to do this, for instance long short-term memory (LSTM) and a few other tricks, but they're nowhere near what people can do so far. And it's not quite clear how much work needs to be done to extend these systems into what people can do. It could be that it's very simple. It could be that it's going to take a lot of research. The dire view, which is more the traditional view, is that human minds have a lot of complexity, and that you need to build a lot of functionality into them, as in Minsky's Society of Mind, to get to all the tricks that people are up to.
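
As a rough illustration of the loop-learning technique mentioned above, here is a sketch in PyTorch (the sizes, data, and hyperparameters are arbitrary, and this is not any specific system Bach refers to) in which an LSTM learns to continue one repeating pattern. The limitation he points to is visible in what the code cannot do: the network fits this loop but invents no new algorithm.

import torch
import torch.nn as nn

torch.manual_seed(0)
pattern = [0.0, 1.0, 0.0, 0.0, 1.0, 0.0] * 8   # a simple loop
xs = torch.tensor(pattern[:-1]).view(1, -1, 1)  # inputs
ys = torch.tensor(pattern[1:]).view(1, -1, 1)   # next-step targets

class LoopNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        out, _ = self.lstm(x)  # hidden states carry the loop's phase
        return self.head(out)

net = LoopNet()
opt = torch.optim.Adam(net.parameters(), lr=0.01)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(xs), ys)
    loss.backward()
    opt.step()
print(loss.item())  # low error on this loop, but only on this loop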
Comments

Building a functional cell is probably about as difficult as building an intelligent system that can conceive of itself, not much more so. I base that on my experience holding degrees in both molecular biology and cognitive science. Cells also behave and interact in self-organizing ways, but there we are talking at a molecular and physical level. I think it's simply because we are unfamiliar with those levels that we believe they are more complicated, when in fact physical constraints limit the complexity. The interactions between genes, proteins, and cells have been shaped by those physical constraints, and if we implemented comparable constraints in an A.I., they could help it bootstrap its own intelligence.

Muxen

# Start program SkyNet.
values = {0}             # store dataset 0
value = 0                # value set >= 0
while True:              # loop = infinite
    if value in values:  # if value = value in store database
        value += 1       # then create new value
    values.add(value)

joepure

I've been scouring the internet to see if anyone came up with the same idea as me. Unfortunately, it looks like it's all philosophical for now.

blackbriarmead

A motivational system should reward building a good learning architecture and then run away fast?

Arkanoid

Thumbs up for the Commodore t-shirt! I love it :D

Skyfox

Please, do NOT share this video with Cyberdyne Systems!

somegreatbloke

I wonder why we don't let AI's "build themselves"... I'm not saying I think our existing programs have the ability to build a complete and functioning mind right now, but rather that we should let the programs we have now figure out the steps as we go. I am reminded of how we as humans have gone from smacking rocks together to building machines that can craft parts and components with nanometer tolerances. We did all that using existing tools to build the next level of higher-precision machines. So, maybe we could use a supercomputer to build just the basic component for the next iteration of computing machines. Then use _that_ new component in the next supercomputer to work on the next step. Of course, this takes time. But, compared to our own development, it would happen relatively very quickly.

But that leads me to my next question: what if we don't really fully _understand_ how the components work, only caring that they _do_ work? What if, in the end, we build a machine that is capable of seeing things in 4 or more dimensions? What if, after decades of work, it basically just spits out the answer "42" to all of our problems and then shuts itself down because it doesn't want to be bothered by the stupid meatbags asking it to do things that, for it, are excruciatingly mundane or that it doesn't "think" are important? I know the "AI will destroy us all" idea is pretty low-hanging fruit, but really, even a relatively "stupid" AI could work in ways that we don't even understand. Like staring at an EEG printout and trying to find words and sentences in it. Indeed, have we finally reached a point where we just aren't _ready_ for what this technology can do for (or _to_) us? And, if so, who is qualified to make the decision not to pursue AI research any further? I guess the bottom line is that we will just do what we've always done. That being: make the thing and hope it doesn't hurt anybody in the end. It's just that, like with the high-tolerance machine components I mentioned in the beginning, a mistake could now mean more than just losing a finger or an eye; it could mean the near destruction of our way of life... :/

CybershamanX

It'll probably take less time to build an AI than to get a freaking cappuccino at Starbucks.

TheUmberto

This does not sound right. For someone who is old enough to have read books about Machine Learning and AI from the early and late 70s (yep, it was a "thing" back then!), this is déjà vu. Every generation seems to have its own idea of how to achieve it, but the results after half a century are not that impressive (at least to me). I won't hold my breath waiting to see "real" intelligence come out of it in the rest of my lifetime.

c.augustin

I think that, based on how humans suffer, it would be wrong to give an AI goals that it had to reach on its own. Especially if it was aware of itself. It seems cruel.

yat

just give the ai a github account and let it contribute to its own upgrades

realityisfake

Is he in trouble? Looks like he's blinking in Morse code.

Mustachioed_Mollusk

I know the answer to that. Money in the form of profits. Government has blazed the trail since the early IBM mainframes. Now who wants to get up at 2am and check the run for me? Pretty please? ;)

brendarua

Basically we need to build an AI that wants to fit in our society.

guccigav

The human brain, or rather its subset responsible for habits, is like an RNN (a recurrent neural network). We know how to build and train RNNs, just as we know how to train habits. Yet we completely fail to understand the inner workings of either (i.e., we can't tell which vector in an RNN decides that an input will be classified one way or the other); see the sketch below.
So the answer to "how do we get to conscious AI" is: watch The Thirteenth Floor, Tron, or War Games.
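
A toy PyTorch sketch of that opacity (the sizes are arbitrary and the training loop is omitted for brevity; this illustrates the point rather than any production system): even with the full hidden vector printed, no single coordinate reads off as the decision.

import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=1, hidden_size=8, batch_first=True)
readout = nn.Linear(8, 2)   # classify a sequence into one of two classes
seq = torch.randn(1, 5, 1)  # one random 5-step input sequence
_, h = rnn(seq)             # final hidden state, shape (1, 1, 8)
logits = readout(h.squeeze(0))
print(h)       # eight numbers that jointly decide the class...
print(logits)  # ...but no coordinate of h "means" anything readable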

null_viod_zip

Even if you program an AI to self-awareness, it cannot feel emotion but rather calculates what emotion would be. Meaning, you could create an AI aware of itself that calculates how to place itself above others or how to serve them, but it will never be able to feel.

ToiletPaper

I'm gonna downvote; you can't trust anyone that wasn't on the Sinclair hype train.

peter

You really don’t want to do that. Seriously.

aaronhumphrey

My research uses long short-term memory units :D A thing I know was mentioned!

empathylessons

Doesn't "think" at this point - uses Analysis software that Imitates and Mimics people but doesn't think per say = IBM's Watson, Amazon's Alexa, Apple's Siri, DOD's PAL.

kimber