AI Safety…Ok Doomer: with Anca Dragan

Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

Want to share feedback? Have a suggestion for a guest we should have on next? Why not leave a review on YouTube? And stay tuned for future episodes.

Timecodes:
00:00 Introduction to Anca Dragan
02:16 Short and long term risks
04:35 Designing a safe bridge
05:36 Robotics
06:56 Human and AI interaction
12:33 The objective of alignment
14:30 Value alignment and recommendation systems
17:57 Ways to approach alignment with competing objectives
19:54 Deliberative alignment
22:24 Scalable oversight
23:33 Example of scalable oversight
26:14 What comes next?
27:20 Gemini
30:14 Long term risk and frontier safety framework
35:09 Importance of AI safety
38:02 Conclusion

Further reading:

Thanks to everyone who made this possible, including but not limited to:

Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw

Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Comments:

We need more resources put towards alignment and safety.

justinparnell

Eh... "Ok, Doomer"? Wouldn't that better describe someone who is over-exaggerating the dangers of AGI? As in: "doom(er) and gloom(er)". Or did I miss something?

henrikbergman

Really fascinating to imagine Anca is going through thought processes that are almost unimaginable. Re the driverless car scenario and creating the ability to consider everything in relation to the supposed human interaction, something popped into my head which I'm quite certain isn't original: would these AI systems only ever interact with humans in the ultimate way if all of us were fitted with chips?
As said, the concept is ancient, and I for one would fight to the end before allowing a chip inside of me! But it's easy to see how management of AI systems coupled with chipped humans would work. Killing two birds with one stone, as they say.

ajadrew

I've been following AI for years and I still can't find the exact point in time where pulling the plug was no longer an option.

swagger

Fantastic discussion. It seems to me that the missing piece in the safety efforts is a serious and capable public contribution. How can we have smart oversight from smart policymakers and lawmakers? How fast can we get that?

sthompson

What a pleasant surprise to find you here, doctor.

jorgerangel

Let's go! Podcasts are great, keep going.

EnGmA

Always really enjoy these interviews, thank you. (Also, loving the robot behind Hannah!)

aiforculture

Huh? Why isn't Timnit Gebru today's guest on Hannah Fry's podcast? Did I miss something?? What's going on???

✌️

WillyB-sk

Computers will become actually intelligent when pigs fly.

Reach