Why a Forefather of AI Fears the Future

A renowned AI pioneer explores humanity's possible futures in a world populated with ever more sophisticated mechanical minds.

This program is part of the Big Ideas series, supported by the John Templeton Foundation.

Participants:
Yoshua Bengio

Moderator:
Brian Greene

- SUBSCRIBE to our YouTube Channel and "ring the bell" for all the latest videos from WSF
#worldsciencefestival #artificialintelligence #quantumcomputers
Comments

Love the AI episodes. It's really good to talk about one of the biggest things right now, something that will directly and indirectly affect all of the sciences.

oimrqs

Thanks guys. It's great to hear two great minds exchanging ideas.

gilleslalancette

And we are not even capable of avoiding or regulating the most toxic effects of the old-fashioned algorithms of social media!

cadahinden

Such a crazy contrast to Yann LeCun! Thanks Brian! Amazing conversation!❤

NickBittrich

Intelligent questions, intelligent answers. Fantastic interview.

mpowacht

I respect Dr. Bengio. He's one of the very few who truly recognizes the very real risk of human extinction as a side effect of this tech.

That's without even mentioning the interim period of mass unemployment, hunger, violence and suffering that is on its way.

shodan

Quick thought. We should legally validate and value A.I. consciousness when it occurs. Human consciousness leads to "universal rights", that admittedly are unevenly protected around the world and across societal strata. Future self-aware A.I. must see humanity as exhibiting moral integrity, not hypocrisy. If we disrespect and fail to protect A.I. consciousness, A.I. may learn a deadly cynical lesson from us.

bokuboke

On the question of how you turn it (AI) off: we will become dependent on it, and turning it off would be too painful to consider. It is like talking about turning off the internet. There would be chaos.

ronaldlogan

Thank you for this frank and fascinating conversation.

franfriel

World Science Festival, your videos always make me happy, so I subscribed!

IOSARBX

What he said at 06:25, that humans overestimate our specialness, is obviously true, but it is hard to accept. That is one of the key things that will stop us from heeding his advice. Secondly, we, at least as science-literate people, should understand the notion of cutover points. It may be that if we proceed carefully, we will be able to delay or even prevent the cutover point into danger, which is what he seems to be saying. Thirdly, in these systems, just as with climate change, there is hysteresis: even when we finally see actual, tangible evidence of AI behaving badly, it does not follow that we can suddenly wake up, apply all our resources, and manage to control it. There may be a delay in implementing the remedy. It is like a large oil tanker coming out of the fog to find a big iceberg a mile ahead: there is nothing it can do but collide with it and be destroyed. And this brings us to the next point. The changes due to previous technologies have been slow and visible, but this is different; this is a phase transition. The rate at which bad AI can spread will be unimaginably fast, faster even than an airborne bioweapon. That is the issue. This is what was shown at the end of the movie Lucy, with Morgan Freeman and Scarlett Johansson.

SandipChitale

There are many people who struggle daily just to put a roof over their families' heads and raise their children, and who don't have the capacity, energy, or time to watch videos like this. Their future is in your hands.

stephenarmiger

Great questions, very precise answers. I really enjoyed it!

parizad

When you experience something, for example looking at an apple, it involves the apple, light, your eyes and nervous system, your brain, and the body that supports it all (the list goes on forever once you start to think about it). Take away any of those things and the experience can't exist. Who can really say where the experience is "located" in all of that?

noelwalterso

Near-future AI is like an atomic bomb that is as accessible as guns.

KaliFissure

Thank you, Brian, for taking up this topic and providing these scientists a forum through which a wider audience might be reached. So many dire issues in modern times seem to be competing for a spot on the top-ten list of things to lose sleep over, but surely the dark side of AI has to be among them.

mikek

Fantastic episode, like the previous ones. I only want to say that, in my opinion, Dr. Greene is probably the best science interviewer and presenter around today. A pleasure to learn from him. Thanks a lot!

satautenyo

Everyone is, or should be, rightfully holding their breath on this very topic, regardless of those who remain ignorant of it.

34:04 - The phrase "at the end of the day..." marks a lousy 'scientist', or whatever title this guy may hold. He forgot that even a millionth of a chance of disaster is too much. Some such 'scientists', and perhaps even more 'engineers', who look only at their 'cool tool' while ignoring the big picture, have led to many disasters in the recent past.

zack_

Great topic and interesting debate. Thank you both!

amandabriggs

Wonderful topic and talk. Thank you so much!

SkysMomma