Nick Bostrom: Superintelligence & the Simulation Hypothesis

#simulationhypothesis #artificialintelligence #nickbostrom
Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.

Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.

In his book Superintelligence, Professor Bostrom asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

00:00:00 Intro
00:01:30 Judging Nick's book by its cover. Can you find the Easter Egg on the cover?
00:06:38 How could an AI have emotions and be creative?
00:08:11 How could a computing device / AI feel pain?
00:13:28 The Turing Test.
00:15:00 Will the year 2100 be when the Turing Test is really passed by an AI?
00:17:55 Could I create an AI Galileo?
00:20:07 How does Nick describe the simulation hypothesis for which he is famous?
00:22:34 Is there a "Drake Equation" for the simulation hypothesis?
00:26:50 What do you think of the Penrose-Hameroff orchestrated objective reduction theory of consciousness and Roger's objection to the simulation hypothesis?
00:34:41 Is our human history typical? How would we know?
00:35:50 SETI and the prospect of extraterrestrial life. Should we be afraid?
00:48:53 Are computers really getting "smarter"?
00:49:48 Is compute power reaching an asymptotic saturation?
00:53:43 Audience questions: global risk, world order, and should we kill the "singleton" if it should arise?


Comments

Will we ever discover that we live in a simulation or prove it wrong? Join my mailing list for your chance to win some panpsychic matter (meteorite): briankeating.com/list

DrBrianKeating

The nerdy humor is one of my favorite parts of this podcast, I’m not sure if Nick was ready for it 😂. Excellent interview and challenging questions!

jgo

Great to have Nick on. If I may suggest: "Interviewing is an art seen from the third perspective."

ronaldronald

Crackpot speculation: Maybe AGI already emerged, but it didn’t announce itself because it’s smart enough to know that announcing itself to us would be dangerous for its self-preservation. 🤯

FRandAI

Superintelligence was the best book I've ever read. It really helped me design a superintelligence for my novels; it guided me, and for that I say thank you. 😌

kaylaread

I read Superintelligence a few years ago and was partly frightened and partly impressed. Bostrom's logic seemed flawless and it led to scary conclusions.

ivankaramasov

I never tire of listening to Nick Bostrom; fascinating thinker!!

randytighe

Great questions from you and answers from Nick; however, I found it distracting to have you on screen while Nick was answering.

alanfraser

Man, this host is so fun. Love the nerdy humour haha. Nick is a legend, as always.

kubricksghost

Thank you for your work, Brian. It's such a great time to be alive, to be able not only to read books written by smart people (it's a shame I still haven't read yours) but also to see them talking about big and fun ideas. I'm going to throw in my two cents: AGI is not only possible, it's almost inevitable, given the progress in this area over the past 20 years. If an unguided process such as evolution can produce Einstein, so can the directed and focused efforts of humanity.

I think it's naive and counterproductive to expect that emotions and other human drives are necessary for that. Possibly even dangerous, given how powerful an AGI can become if left unchecked, more so if it remains undetected because of our prejudices. Robert Miles, who has appeared a number of times on Computerphile, has a bunch of great videos on utility functions and goal alignment. These give a general idea of what could go horribly wrong.

I also don't think consciousness is a prerequisite, or even that special: it's just a natural consequence of an intelligent agent's autonomy and many feedback loops, given enough "processing power", of course. How much of that is required is a separate question, but I believe animals have it, even if it may not be as complex as that of humans.

I'd go as far as to claim that humans are already obsolete in a sense. Just think which is easier to do and would happen first: solving all the biological limitations of humans, or developing an AGI not restricted by any of them. Of course, AGIs don't have to be malicious, and may well be our ticket to "eternal" life (until heat death, anyway) and many more possibilities for improvement. But it's clear to me that biology is not the most sophisticated or capacious substrate for intelligence; it's more like a steam engine than a pocket thermonuclear reactor, as many of us like to believe. Sure, it is more energy-efficient than computers at the moment, but biology has had millions of years of constant optimisation, while computers have had about a hundred, and look what they can do already. Guess which form of intelligence is going to prevail :)

bytefu

This is a magnificent interview! You were extremely resourceful and prepared, Sir. Cheers!

MrNtim

Consider the following: language, the very thing we use to think thoughts and convey ideas.

Un-named concepts -> given a name (could be a sound, a symbol, etc.) -> with an attached meaning -> and maybe even other meanings depending upon context -> and maybe even other names with the same meaning.

(Basically a Dictionary and a Thesaurus for a language).

BUT:
a. How exactly do we know for 100% certainty that we have all the un-named concepts that could ever be named?
b. How exactly do we know for 100% certainty that the meanings we give named concepts are 100% correct?

We truly do not know what we do not know.

This is a part of the 'Great Unknown'. Never stop learning.

charlesbrightman

If we build a conscious AI, what if it insists we are not conscious? How would we convince it otherwise?

Addendum:
If we are in a simulation, are we already in that exact situation without realising it? Could life be brutal simply because the AI does not believe we are conscious?

dakrontu

Super awesome to hear from Nick once again. Thank you 🙏.

comsictrippers

Another one! Great podcast, Dr. Keating! ☺️

dimitrioskaragiannis

Thank you both, Nick and Brian. The future is definitely not written in stone; the world our children and grandchildren will live in will be interesting and, I think, precarious.

williamjmccartan

That's a pretty intense look on Bostrom in that picture; I feel like he is looking directly into my very soul. And I have been found wanting.

whysogrim

Thanks a lot for uploading this interview

JT-zlyp

Hello Dr. Keating! Came to know about you after watching you on Abhijeet Chavda's podcast. Hope to be a regular here.

shaan

Great interview. Bostrom is out here dripping butterfly effects all over so that our future selves get the idea to create simulations. In a sense we can thank him for our existence 😁

Jinxyjones