How to build an A.I. brain that can surpass human intelligence | Ben Goertzel

----------------------------------------------------------------------------------
Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially. It's all well and good to be superintelligent, he argues, but if you don't have rationality and empathy to match, the results will be wasted and we could just end up with an incredible number-cruncher. In this illuminating chat, he makes the case for thinking bigger. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
----------------------------------------------------------------------------------
BEN GOERTZEL:

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics, chairman of AI software company Novamente LLC, and chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.
----------------------------------------------------------------------------------
TRANSCRIPT:

Ben Goertzel: If you think much about physics and cognition and intelligence, it's pretty obvious the human mind is not the smartest possible general intelligence, any more than humans are the highest jumpers or the fastest runners. We're not going to be the smartest thinkers.

If you are going to work toward AGI rather than focusing on some narrow application, there are a number of different approaches you might take. I've spent some time surveying the AGI field as a whole and organizing an annual conference on AGI, and then I've spent a bunch more time on a specific AGI approach based on OpenCog, an open-source software platform.

In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. This is the approach I see Google DeepMind taking, for example. They've taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain. Now, in recent work such as the DNC, the differentiable neural computer, they're taking these deep networks that model visual or auditory processing and coupling them with a memory matrix that models some aspect of what the hippocampus does, which is the part of the brain that deals with working memory and short-term memory, among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating further parts of the human brain, and you try to get them all to work together, not necessarily doing computational neuroscience, but trying to emulate the way different parts of the brain do their processing and the way they talk to each other.
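The core trick such a memory matrix enables, content-based addressing, can be sketched in a few lines. This is a toy illustration of the addressing idea only, not DeepMind's actual DNC; the memory contents, sizes, and names below are invented:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=1.0):
    """Content-based addressing: cosine-match a key against each
    memory row, soften the scores into weights, return the
    weighted read-out plus the weights themselves."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    scores = memory @ key / norms        # cosine similarity per row
    weights = softmax(beta * scores)     # sharper focus as beta grows
    return weights @ memory, weights

# Tiny 4-slot, 3-wide memory; query with a key close to row 1.
M = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 0.]])
key = np.array([0., 0.9, 0.1])
readout, w = content_read(M, key, beta=5.0)
```

The read is differentiable, which is the whole point: gradients flow through the soft weights, so a controller network can learn what to store and retrieve.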

A totally different approach is being taken by a guy named Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhumanly, infinitely intelligent thinking machine in something like 50 lines of code. The problem is that it would take more computing power than there is in the entire universe to run. So it's not practically useful, but researchers are then trying to scale down from this theoretical AGI to find something that will really work.
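Hutter's construction can be caricatured in a finite toy: weight every candidate environment by a simplicity prior (two to the minus program length) and pick the action with the highest prior-weighted expected reward. The environments and "program lengths" below are invented for illustration; the real construction sums over all computable environments, which is exactly why it cannot be run:

```python
# A finite caricature of universal AI. Each hypothetical "environment"
# maps an action to a reward, and carries a made-up program length
# that sets its simplicity-prior weight 2**-length.
ENVIRONMENTS = [
    (2, lambda a: 1.0 if a == "left" else 0.0),   # short program, high prior
    (5, lambda a: 1.0 if a == "right" else 0.0),  # longer program, low prior
]
ACTIONS = ["left", "right"]

def universal_like_action(envs, actions):
    """Pick the action maximizing prior-weighted expected reward."""
    def expected_reward(a):
        return sum(2.0 ** -length * env(a) for length, env in envs)
    return max(actions, key=expected_reward)

choice = universal_like_action(ENVIRONMENTS, ACTIONS)
```

Here the shorter hypothesis dominates, so the agent acts as if the simpler world is more likely, which is the Occam's-razor flavor of the full theory.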

Now, the approach we're taking in the OpenCog project is different from either of those. We're attempting to emulate, at a very high level, the way the human mind seems to work as an embodied, social, generally intelligent agent, one that comes to grips with hard problems in the context of coming to grips with itself and its life in the world. We're not trying to model the way the brain works at the level of neurons or neural networks; we're looking at the human mind from a high-level cognitive point of view. What kinds of memory are there? Well, there's semantic memory of abstract knowledge and concrete facts. There's episodic memory of our autobiographical history. There's sensorimotor memory. There's associative memory of things that have been related in our lives. There's procedural memory of how to do things.
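The memory taxonomy described above can be sketched as a toy data structure with one store per kind of memory. This is an illustration of the taxonomy only, not OpenCog's actual AtomSpace design; all names here are invented:

```python
# Toy agent memory with one store per memory kind from the transcript.
class AgentMemory:
    def __init__(self):
        self.semantic = {}      # facts: concept -> description
        self.episodic = []      # autobiographical events, in order
        self.procedural = {}    # skills: name -> callable
        self.associative = {}   # concept -> set of linked concepts

    def associate(self, a, b):
        """Record a bidirectional association between two concepts."""
        self.associative.setdefault(a, set()).add(b)
        self.associative.setdefault(b, set()).add(a)

mem = AgentMemory()
mem.semantic["cat"] = "small domesticated feline"
mem.episodic.append("saw a cat on Tuesday")
mem.procedural["greet"] = lambda name: f"hello, {name}"
mem.associate("cat", "pet")
```

The point of the taxonomy is that each store has different access patterns (lookup by meaning, replay in order, execution, spreading activation), so a general intelligence needs mechanisms for each and for their interaction.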

Comments

Think of all the gifs this video could spawn.

MoovySoundtrax

I like how positive this guy is and how much he loves AI

erionmema

In 5 years he might actually pull it off.

Let it be in the records that I fully support our future overlords.

abz

As a CS major, this is like crack to me.

MusixProu

The most important topic that no one here is talking about is how blockchain will allow AI scientists to accelerate progress toward general intelligence exponentially. OpenCog has been a stepping stone to SingularityNET, Ben's main project. SingularityNET is a decentralized marketplace for AI services that will make AI much more accessible to the entire planet. It will also let AI devs easily work together and allow their AIs to communicate with and learn from each other. This is a step in AI that no previous computer scientists could ever have imagined. SingularityNET is the wild card that will cause things to progress faster than anyone would have predicted, like the internet...

Tyler-kmmi

I'm glad he threw empathy in there at the very end. I was getting concerned.

sanders

This is incredibly interesting. By creating a multi-part AGI you get to decide the importance of each kind of mental process (as simulated by some algorithm, or deep neural networks), and thus you can, in theory, attain a human-level or superhuman AGI that is value-aligned with us.

The paperclip maximizer is a good analogy here. Basically if you make a superhuman AI and tell it to maximize the production of paperclips, it could logically start taking over the world in order to maximize availability of material and subsequently work toward turning the entire universe into paperclips.
What's needed to prevent a bad outcome such as this is some form of common sense, or any other mechanism that would prevent what we would call an irrational action.
It can be simulated morality, a set of rules to follow, etc.
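The paperclip point can be made concrete with a toy optimizer: told only to maximize output, it spends the entire resource budget, while a penalty term standing in for "common sense" reins it in. The budget, value, and penalty shape below are all invented for illustration:

```python
# Toy paperclip maximizer: choose how much of a resource budget to
# convert into paperclips, with or without a side constraint.
def plan(budget, value_per_unit, constraint=None):
    """Return the spend level with the highest (penalized) value."""
    best_spend, best_value = 0, float("-inf")
    for spend in range(budget + 1):
        value = spend * value_per_unit
        if constraint is not None:
            value -= constraint(spend)   # penalty for harmful over-use
        if value > best_value:
            best_spend, best_value = spend, value
    return best_spend

reckless = plan(100, 1.0)  # no constraint: consumes everything
careful = plan(100, 1.0, constraint=lambda s: 10.0 * max(0, s - 20))
```

The unconstrained plan spends all 100 units; the penalized one stops at 20. The hard research problem, of course, is specifying that penalty so it actually captures what we care about.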

Finding a way to create beneficial superhuman AGI is probably the most important challenge we can choose to pursue.
What's endlessly exciting to me, is the possibility that we create a better thing than ourselves.
Human beings are pretty great, but we have innate and immense flaws. Flaws that an engineered intelligence probably wouldn't have unless we build them in. (Think cognitive biases, faulty logic, heuristic shortcuts, etc.)

Time for an intelligence revolution. We definitely need it.

Kavriel

Ben Goertzel has been working on AI for something like 40 years. He is getting scarily close to general AI, and he will probably be the one to achieve it, faster than anyone thought possible.

joeysipos

I was wondering what happened to Brendan Fraser

chadwick

When I see Ben Goertzel appear, I press the like button. He gives me hope for a better world.

JB

I just wrote my first neural network with one hidden layer from scratch, and it actually works. No library used except numpy.
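For readers curious what such an exercise looks like, here is a minimal sketch of a one-hidden-layer network trained on XOR with plain numpy. This is a generic from-scratch example, not the commenter's actual code:

```python
import numpy as np

# One-hidden-layer network, trained on XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(((out - y) ** 2).mean())

loss_before = mse()
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop through MSE + sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)
loss_after = mse()
```

With this seed and enough iterations the loss drops substantially and the network usually learns XOR, though sigmoid-plus-MSE training can occasionally stall in a poor local region.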

abcdxx

See that shit, kids? That's the future right there!

YoSomePerson

Why create something that we know would later be out of our control 😞?
Really?

elishap

I'm amazed that traditional concepts in A.I. are brought back to the forefront and are made to sound new.

c-j-p

What will the A.I. do without hand gestures?

Lupocide

I don't know why, but he reminds me of Slash from Guns N' Roses.

musicmakesyoustrong

Congrats, Dr. Ben, father of AI. Please give us your update as soon as you can. Our world desperately needs AGI and ASI. Best regards.

Amerikan.kartali.turk.yilani.

Hopefully they'll have the A.I. teach these humans that can't seem to understand my clock.

MikeClohset

Hey! I didn't know Mitch Hedberg had a brother!

burritocandy

Ben Goertzel,
What operational functions should an AGI have besides empathy and self-doubt/humility (anti-hubris)?

I recall a fictional portrayal of AGI running in thousands of parallel simulations. Sooner or later, all resulted in the deaths of the simulated humans. By recursively combining and mutating the best outcomes, the final AGI had a melancholic, fatalistic, condescending, yet reluctantly cooperative nature. At least it never killed the Sims.

Reminded me of a certain SitCom butler.
