Prof. Nick Bostrom - Artificial Intelligence Will Be the Greatest Revolution in History

Prof. Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism.

Recorded: July 2017
Comments

Nick Bostrom is really interesting! His ideas are compelling, and his book Superintelligence is an amazing read. Hooray for Nick!

davidvennel

He thinks about the things that need to be thought about. This is critical.

AlphaFoxDelta

Thank god he has a sense of humour. You could go nuts in this field without one.

squamish

Were those "imaginative sequences" essentially dreams? That's exactly what it looks like... I feel like those vague shapes are what we humans see during REM sleep.

shin-ishikiri-no

We've come such a long way in just 6 years. So like at 15:46, no one giving their prediction on that graph was thinking it would come as fast as it has. The earliest prediction is like 2026. LOL.

KatharineOsborne

This is a great summary of the major AI concerns in 30 minutes.

lukederror

We want AI with a nuanced understanding of good and a genuine desire to be a productive and beneficial member of society. Any evolutionary process capable of creating such an AI will also have an adaptive niche for AI with a nuanced understanding of taking advantage and a genuine desire to get whatever it needs by means of lying and deception and coercion. All AI from such an environment will therefore necessarily understand what lying and deception and coercion are, and be able to suspect and anticipate them.

I've read your approaches to the control problem. You're going to attempt to coerce and deceive an entity smarter than yourself, in order to earn its trust and cooperation?

That seems unwise.

zrebbesh

The brain is a simulation environment for acting on the future. We need an AI with many levels of "brains", i.e. simulation environments that can simulate simulations. That's probably the best approach.

GoatzAreEpic

I'm confused as to why people think that we will still be 'natural' humans by the time AGI/ASI is developed. I intend to be very transhuman by that point.

dr.zoidberg

Super intelligence may destroy itself.

amitgupta

In the future, G = D

When you notice it, it’s hard to stop focusing on it.

MMAoracle

Reeeaaaalllly, we shall see how long AI survives. If it does, we won't.

konacreator

If a machine needs to learn like a human, then does it need to be biased?

relevants

How man became the god responsible for his creation.

pardoharsimanjuntak

I think the paperclip AI scenario happening by accident is impossible. OBVIOUSLY, if an AI is superintelligent, it will NOT kill off all of humanity to generate more useless items. It will understand that this is counterproductive. And if it doesn't understand that, then it's not superintelligent (and then it can't kill off humanity or defend itself).

zka

But Bostrom, wait a second: if we're getting more intelligent, why do I keep seeing more and more guys following football and Lady Gaga??

rickrunner

Extremely boring video. And BOSTROM IS one of the most interesting modern philosophers. But this talk is too dull, and he is not a good speaker.

firstal