How Not To Destroy the World With AI - Stuart Russell

Stuart Russell, Professor of Computer Science, UC Berkeley

About Talk:

It is reasonable to expect that artificial intelligence (AI) capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Alan Turing and others have suggested? Will we lose control over our future? Or will AI complement and augment human intelligence in beneficial ways? It turns out that both views are correct, but they are talking about completely different forms of AI. To achieve the positive outcome, a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. Russell will argue that this is possible as well as necessary. The new approach to AI opens up many avenues for research and brings into sharp focus several questions at the foundations of moral philosophy.

About Speaker:

Stuart Russell, OBE, is a professor of computer science at the University of California, Berkeley, and an honorary fellow of Wadham College at the University of Oxford. He is a leading researcher in artificial intelligence and the author, with Peter Norvig, of “Artificial Intelligence: A Modern Approach,” the standard text in the field. He has been active in arms control for nuclear and autonomous weapons. His latest book, “Human Compatible,” addresses the long-term impact of AI on humanity.

About the Series:

The CITRIS Research Exchange and Berkeley Artificial Intelligence Research Lab (BAIR) present a distinguished speaker series exploring the recent breakthroughs of AI, its broader societal implications and its future potential. Each seminar takes place on Wednesdays from noon to 1:00 p.m. in the Banatao Auditorium at Sutardja Dai Hall on the UC Berkeley campus and will be livestreamed on YouTube. All talks are free and open to the public.

Support CITRIS as we develop technology solutions for challenges around the world: wildfires, the health of an aging population, the future of a workforce augmented by artificial intelligence, and more. In all we do, we prioritize diversity, equity, and inclusion across each of our research initiatives.
Comments

This was fantastic. I'm so grateful Stuart is part of this conversation -- we really need him to stay actively involved to help us find our way through his informed, sensible, well-grounded, intelligent, and knowledgeable approach. I'm so glad he signed the "open letter" petition. Thank you for posting this lecture.

waakdfms

This is one of the deepest yet well-rounded discussions I have run across while trying to understand the technology and where it's leading us. Thanks for both the technical content and the multiple levels at which you look at this problem - and for offering a hopeful path forward.

MegawattKS

The summary and questions at 56:00 were perhaps the best part.

erikals

The elephant in the room is control of AI. Which country, company, or individual will have the most powerful one? Will they compete with each other, and to what purpose? There will be no holding back of AI. In fact, there is a race on right now to develop it faster than any competitor, be it country, company, or person.

tellitasitis

I think this is the single most insightful and inspiring talk I've seen on the subject. Really gets to the heart of the problem we face and has 'sparks of a solution' to the cliff-like problem we're rapidly driving toward.

jsbarretto

I liked this conference very much. Host more of these and the world will be a much better place.

Slaci-vlio

Common and indexical goals need not be exclusive. Everyone at the coffee shop has an indexical goal of getting coffee, and yet a common goal can emerge by which people wait in line to get their coffee when it is their turn. Of course, we could all storm the shop, but that would at best work once, and then the whole infrastructure would be destroyed. Just as we can layer goals across different time frames with different degrees of precision, we can also layer indexical and common goals. Can machines do the same?

gregniemeyer

Question: so does this mean that the 60 Minutes episodes with Google saying their AI is scary good and they can't even explain why it's so smart (learning a new language other than English)... does all that have no weight, and is it all hype to self-promote their platform?

sagesingh

Great talk. If the AI remains unsure of what our preferences are, surely the shortest path is to simply influence our preferences until it can predict them perfectly? I’m not superhuman so I assume the AI will come up with a better plan than this.

RichardWatson

Great insight on the responsible development of AI! 😊👍 We need more people like Stuart Russell! #grateful #AIethics

IngvildCasanas-frwd

This was brilliant, thanks for the upload!

chillingFriend

12:59 I thought it was predicted to take 10-20 years; if 100 is accurate, the prediction must have been made earlier. I rather doubt a number that high from within the last 10 years. Anyone know?

Lovin_It