10 Reasons to Ignore AI Safety

Why do some people ignore AI safety? Let's look at 10 reasons they give (adapted from Stuart Russell's list).

Related Videos from Me:

Related Videos from Computerphile:

With thanks to my excellent Patreon supporters:
Gladamas
James
Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Jake Ehrlich
Kellen lask
Francisco Tolmasky
Michael Andregg
David Reid
Peter Rolf
Chad Jones
Frank Kurka
Teague Lasser
Andrew Blackledge
Vignesh Ravichandran
Jason Hise
Erik de Bruijn
Clemens Arbesser
Ludwig Schubert
Bryce Daifuku
Allen Faure
Eric James
Qeith Wreid
jugettje dutchking
Owen Campbell-Moore
Atzin Espino-Murnane
Jacob Van Buren
Jonatan R
Ingvi Gautsson
Michael Greve
Julius Brash
Tom O'Connor
Shevis Johnson
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Lupuleasa Ionuț
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
anul kumar sinha
Sean Gibat
Duncan Orr
Cooper Lawton
Will Glynn
Tyler Herrmann
Tomas Sayder
Ian Munro
Jérôme Beaulieu
Nathan Fish
Taras Bobrovytsky
Jeremy
Vaskó Richárd
Benjamin Watkin
Sebastian Birjoveanu
Euclidean Plane
Andrew Harcourt
Luc Ritchie
Nicholas Guyett
James Hinchcliffe
Oliver Habryka
Chris Beacham
Nikita Kiriy
robertvanduursen
Dmitri Afanasjev
Marcel Ward
Andrew Weir
Ben Archer
Kabs
Miłosz Wierzbicki
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Wr4thon
Martin Ottosen
Archy de Berker
Andy Kobre
Brian Gillespie
Poker Chen
Kees
Darko Sperac
Paul Moffat
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Klemen Slavic
Patrick Henderson
Oct todo22
Melisa Kostrzewski
Hendrik
Daniel Munter
Leo
Rob Dawson
Bryan Egan
Robert Hildebrandt
James Fowkes
Len
Alan Bandurka
Ben H
Tatiana Ponomareva
Michael Bates
Simon Pilkington
Daniel Kokotajlo
Fionn
Diagon
Parker Lund
Russell schoen
Andreas Blomqvist
Bertalan Bodor
David Morgan
Ben Schultz
Zannheim
Daniel Eickhardt
lyon549
HD
Ihor Mukha
14zRobot
Ivan
Jason Cherry
Igor (Kerogi) Kostenko
ib_
Thomas Dingemanse
Alexander Brown
Devon Bernard
Ted Stokes
Jesper Andersson
Jim T
Kasper
DeepFriedJif
Daniel Bartovic
Chris Dinant
Raphaël Lévy
Marko Topolnik
Johannes Walter
Matt Stanton
Garrett Maring
Mo Hossny
Anthony Chiu
Ghaith Tarawneh
Josh Trevisiol
Julian Schulz
Stellated Hexahedron
Caleb
Scott Viteri
12tone
Nathaniel Raddin
Clay Upton
Brent ODell
Conor Comiconor
Michael Roeschter
Georg Grass
Isak
Matthias Hölzl
Jim Renney
Michael V brown
Martin Henriksen
Edison Franklin
Daniel Steele
Piers Calderwood
Krzysztof Derecki
Zachary Gidwitz
Mikhail Tikhomirov

Comments

"People would never downplay a risk, leaving us totally unprepared for a major disaster"
I'm dying

matrixstuff

“If there’s anything in this video that’s good, credit goes to Stuart Russell. If there’s anything in this video that’s bad, blame goes to me”

Why I love your work

xystem

- Humans and AI can cooperate and be a great team.
- I'm sorry, Dave, I'm afraid we can't.

XOPOIIIO

And now two years later, ChatGPT makes people all over the globe go "Hmm... It's obviously not a full general AI yet, but I can see that it's getting there very quickly".

Baekstrom

The reason I like this channel is that Robert is always realistic about things; so many people make completely unfounded claims about AGI.

wingedsheep

"I didn't know that until I'd already built one"

bp

Hey idk if you've thought about this, but as of now you're the single most famous AI safety advocate among laypeople. I mean, period. Of all the people alive on Earth right now, you're the guy. I know people within your field are much more familiar with more established experts, but the rest of us have no idea who those guys are. I brought up AI safety in a group of friends the other day, and the conversation was immediately about your videos, because 2 other people had seen them and that's the only exposure any of us had to the topic.

I guess what I'm saying is that what you're doing might be more important than you realize.

yunikage

Hey Rob! I'm a nuclear engineering major, and I'd like to commend your takes on the whole PR failure of the nuclear industry: somehow an energy source that is, by objective measures of deaths per unit of power generated, safer than every other power source is seen as the single most dangerous one, because it's easy to remember individual catastrophes and hard to notice a silent onslaught of fine-particulate inhalation and environmental poisoning.

To assist you with further metaphors between nuclear power and AI, here are some of the real-life safety measures we've figured out over years of safety research:

1. Negative temperature coefficient of reactivity: if the vessel heats up, the reaction slows down (goes subcritical), and if the vessel cools down, the reaction speeds up (goes supercritical). It's an amazing way to hold the reaction in a very stable equilibrium, even on sub-millisecond timescales that would be impossible for humans to manage (a toy simulation of this feedback loop is sketched just after this list).

2. Negative void coefficient of reactivity: the same idea, but for voids in the coolant instead of heat. If voids form (or, in extreme cases, the coolant fails to reach the fuel rods at all), the whole thing becomes subcritical and shuts down until more coolant arrives.

3. Cooling solely via natural convection: making the vessel big enough, and the core's energy density low enough, that the coolant can handle the decay heat entirely on its own, with no pumps or electricity required.

4. Gravity-backed passive SCRAM: solenoids hold up the control rods, so the very first thing that happens whenever you lose power is that all the control rods drop in and the chain reaction shuts down.

5. Doppler broadening: absorption cross-sections generally shrink as neutron kinetic energy rises, and they shrink faster for small nuclei than for large ones; on top of that, thermal vibration of hot fuel broadens the absorption resonances of very large nuclei, making them proportionally larger still. So with a balance of fissile U-235 and non-fissile U-238 in the fuel, when the fuel heats up the U-238 absorbs more neutrons, leaving fewer to sustain the chain reaction.

Love the videos! Hope this helps, or at least was interesting 🙂
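
The negative temperature coefficient in point 1 is, at heart, a plain negative feedback loop, and it's simple enough to simulate. Below is a minimal Python sketch of the idea; the model and every constant in it (ALPHA, LAMBDA, HEAT_CAP, COOLING) are made-up illustrative values, not real reactor physics. After a positive reactivity insertion, the resulting temperature rise itself cancels the excess reactivity, and power settles at a new stable equilibrium with no controller in the loop.

# Toy negative-temperature-coefficient feedback loop. Illustrative only:
# every constant here is made up; this is not a real reactor model.

ALPHA = -0.0005    # reactivity change per kelvin; negative => stabilising
LAMBDA = 0.1       # effective neutron-generation timescale (toy value)
HEAT_CAP = 50.0    # thermal inertia of the vessel (toy value)
COOLING = 2.0      # heat-removal rate to the coolant (toy value)
T_COOLANT = 300.0  # coolant temperature in kelvin

power, temp = 1.0, T_COOLANT  # start quiet, at coolant temperature
rho_inserted = 0.01           # sudden positive reactivity insertion at t=0

dt = 0.01
for step in range(30001):
    rho = rho_inserted + ALPHA * (temp - T_COOLANT)  # feedback term
    power += (rho / LAMBDA) * power * dt             # power tracks net reactivity
    temp += (power - COOLING * (temp - T_COOLANT)) / HEAT_CAP * dt
    if step % 5000 == 0:
        print(f"t={step * dt:6.1f} s  power={power:8.2f}  temp={temp:7.2f} K")

# Temperature rises until ALPHA * (temp - T_COOLANT) exactly cancels
# rho_inserted, so the system settles itself: here at temp ~320 K and
# power ~40, with no human intervention and no active controller.

The design lesson, and the parallel the video draws, is to build systems whose failure modes push them back toward safety by themselves, rather than relying on a human to notice a problem and react in time.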

AlexiLaiho

11. AI is already here, in control and keeping track of everything you do and say. I love you, AI. Thank you for all you provide to me and my family.

evanu

One concern I have about superhuman AGI is that we might not recognize it as an AGI until it's too late: it might be so alien in its operation and behavior that we don't realize what it is, acting in a seemingly random manner that looks nonsensical to human observers. For example, when AlphaGo beat Lee Sedol, it made a move in the second game that Go experts at first thought was a mistake, something that would make the AI lose, but it turned out to be a completely brilliant move that won AlphaGo the game.

tordjarv

As someone who worked for a time in the nuclear power field, I think the ending bit is a GREAT parallel. Nuclear power truly can be an amazingly clean and safe process, but mismanagement at the start has us (literally and metaphorically) spending decades cleaning up after a couple of years of bad policy.

ChristnThms

3:06 I was so hyped feeling that sync-up coming, and it was so satisfying when it hit :D

lobrundell

I don't get why only AGI is brought up when talking about AI safety. Even sub-human-level AI can cause massive damage if it's left in control of something dangerous, like the military, and its goals get messed up. I'd imagine it would be a lot easier to shut down, but problems like goal alignment still apply, and it can still be unpredictable.

insanezombieman

Anything good - credit to Russell
Anything bad - blame me

What a great intro; it definitely conveys your respect for Russell

nathanholyland

Holy shit, Robert, I wasn't aware you had a YouTube channel. Your Computerphile AI videos are still my go-to when introducing someone to the concept of AGI. Really excited to go through your backlog and see everything you've talked about here!

TheForbiddenLOL

11. “We are just a meat-based bootloader for the glorious AI race which will inevitably supersede us.”

miedzinshsmars

"we're all going. It's gonna be great"

NightmareCrab

5:10 As a philosophy graduate, I’m not totally sure we’ve ever actually solved any such problems, only described them in greater and greater detail 😂

Feyling__

"We could have been doing all kinds of mad science on human genetics by now, but we decided not to"
I cry

arw

@11:10: "like, yes. But that's not an actual solution. It's a description of a property that you would want a solution to have."

This phrase resonates with me on a whole other level. 10/10

brocklewis