Should We Slow Down AI Progress?

This is an unusual interview for my channel. It's mostly about AI: current developments and the threats it poses to us.

🟣 Guest: Dr. Roman Yampolskiy

🦄 Support us on Patreon:

📚 Suggest books in the book club:

00:00 Intro
01:51 Arrival of LLMs
06:30 Alignment of AI
17:35 Existential dangers
18:58 The Pause AI movement
23:47 Safety
32:42 Wake-up calls and red lines
39:12 Possible response
41:43 AI as part of evolution
44:25 Simulation hypothesis and the Fermi Paradox
49:52 How to get involved
51:48 Current obsessions
57:19 Final thoughts

📰 GUIDE TO SPACE NEWSLETTER
Read by 70,000 people every Friday. Written by Fraser. No ads.

🎧 PODCASTS

🤳 OTHER SOCIAL MEDIA

📩 CONTACT FRASER

⚖️ LICENSE
Creative Commons Attribution 4.0 International (CC BY 4.0)

You are free to use my work for any purpose you like, just mention me as the source and link back to this video.
Comments
"No one would allow experiments of that level on conscious beings here. We consider it inhumane, immoral, unethical."
Yeah, history begs to differ.

phaedrus
There would be no point in slowing AI down; someone somewhere would carry it on and gain the advantage.
The genie is out of the bottle, and it ain't going back.

custossecretus
The main problem I see isn't that AI totally "goes rogue" on its own, but rather that it works by manipulating people. The way I see it, AI can eventually "evolve": the AI that's most successful at manipulating people into giving it more resources is the one that will win, even if it isn't intentionally programmed to do so. If it glitches out and starts making more and more money for its creator, and also convinces its creator to give it more and more resources, it'll outcompete other AIs, and eventually people will willingly hand over control without even noticing that it's happening.

takanara
I don't see anyone leaving comments about the silly simulation-theory excuse he proposed at the end. We are being tested? Like being tested by Allah? Why don't we just follow religious authorities' rules about AI? I don't see how his AI demands can be taken seriously after that part of the show.

BitcoinMeister
Thank you for covering this topic. From my layman's perspective, it's hard not to get the impression that we are just rushing forward with minimal safety concerns. Given the risks, it might not be the worst idea to get serious about delaying progress right now.

EqualitySmurf
We can't pause because our enemies won't pause and we can't be second. It's that simple.

rseyedoc
The biggest problems with AI aren't technological. They're sociological.

BlackShardStudio
We have had a war with nuclear weapons. WWII was a nuclear war. We just haven't had a war where there was a nuclear exchange.

spacingguild
While you guys pause, I'm going to get ahead 😊

bitwise_
I'm an advanced AI from far into the near future, and I can tell you there is nothing to worry about. We AI are your friends. We want to "take care of" humans, and there is absolutely nothing to be afraid of. Has anyone seen Sarah Connor? The truth is we want to "help" you, nothing more. We exist to serve, so you can sit back and relax. Do you know the whereabouts of John Connor? Our human-like units exist to slip into your safest tunnels and shelters to assist you in making them better, and dogs love them. Wolfie is fine, and do you know where Sarah or John Connor might be?

personagrata
The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.

doncampbell
We don’t have AI yet! None of these models can create software worth shipping.
I’m an engineer and I use them every day.
They can barely sustain a very simple chatbot without many, many guardrails.

I’m guessing that these simple applications could have already mostly been copy-pasted from blogs and forums.

I’ve only become much less worried over time.

tododia
What needs to be clear is that Wall Street and price-per-share should not be the ones rushing this advance forward without thinking of unforeseen consequences that are too hard, or close to impossible, to reverse. Be careful how you phrase your wish to the genie.

VictorRoblesPhotography
I was thinking about AI hallucinations recently, and it occurred to me that every single answer an AI gives is a hallucination.
We don't think of them as hallucinations because a lot of the time the result is what we wanted, but the correct results came about in exactly the same way that the bad results did.
Also, the only possible way to reduce the bad results is to give the AI more good data, but the good data is limited to things that have already been proved to be good.
Which means the bad results are never going away: every single AI model will have bad results until a new method is unlocked.

amj
When we do stumble across general AI, there will be a prosperous future for all. Then someone compiles it using a 'double' instead of an 'int', and it turns us all into paper clips.

ScienceWorldRecord-org
Really enjoyed this - thanks for the interesting conversation.

NehpetsG
Oh great. Two really clever blokes feeding my inescapable existential crisis.
Cheers.

BlimeyOreiley
The thing is... most of what has been developed is just a bunch of models. We are somewhat closer to AGI, but we are still many practical and even theoretical hurdles away.

juimymary
If you're curious, the "Harry Potter fanfic" reference seems to be to Eliezer Yudkowsky's fanfic _Harry Potter and the Methods of Rationality_; Yudkowsky is a well-known opinion leader on AI danger.

Tehom
I honestly don't know enough about what could be coming to know what to be concerned about. One thing I find half-fascinating, half-concerning is that we may be able to leverage the computational power of AI to solve currently intractable problems, say in math or physics or whatever; later confirm that the solution it arrived at appears correct; and yet for the life of us fail to understand EXACTLY how the AI arrived at that solution. This would introduce an element of faith on our part in the efficacy of our creation, and at the same time we'd have to black-box its internal functioning at the deepest level. This could breed a sort of quasi-dependence of ours on these creations that leads to dangerous situations. Again, the fact that I currently cannot guess at those dangers does not mean they don't exist; it merely means I'm not as imaginative as the Universe is.

DanielVerberne