How to SURVIVE the Intelligence Explosion


Find meaning in your work:

Get into AI safety:

Talk to like-minded people:

If you have a technical background, learn the basics of machine learning:

We dive deep into the concept of an intelligence explosion - a scenario where an AI gains the ability to self-improve, leading to rapid and unstoppable increases in its capabilities. This notion might sound straight out of a science fiction film, but it's a possibility taken seriously by AI experts, including Geoffrey Hinton, often called the godfather of AI.
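To make that feedback loop concrete, here is a minimal, purely illustrative sketch (not from the video): if each gain in capability also speeds up further improvement, capability grows exponentially rather than linearly. The starting capability, growth constant, and step size are arbitrary assumptions.

# Toy model of recursive self-improvement (illustrative assumption, not the video's method).
# If the rate of improvement scales with current capability (dC/dt = k*C),
# capability follows an exponential curve C(t) = C0 * e^(k*t).
def simulate(c0=1.0, k=0.5, steps=10, dt=1.0):
    capability = c0
    history = [capability]
    for _ in range(steps):
        # smarter systems improve themselves faster
        capability += k * capability * dt
        history.append(capability)
    return history

if __name__ == "__main__":
    for t, c in enumerate(simulate()):
        print(f"step {t}: capability {c:.2f}")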

We explore the idea of AI Apocalypticism and the historical and cultural contexts behind such scenarios. The discussion also delves into the intricacies of backpropagation, a crucial mechanism behind deep learning, and its potential implications for the future of AI.
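For readers curious what backpropagation actually computes, here is a minimal sketch (not from the video) of one sigmoid neuron trained by gradient descent: the forward pass makes a prediction, the backward pass applies the chain rule to get the gradient of the loss with respect to each parameter, and the parameters are nudged downhill. The data point, learning rate, and squared-error loss are arbitrary assumptions for illustration.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, y, lr=0.1):
    # forward pass: prediction and squared-error loss
    z = w * x + b
    p = sigmoid(z)
    loss = (p - y) ** 2

    # backward pass: chain rule from the loss back to the parameters
    dloss_dp = 2 * (p - y)
    dp_dz = p * (1 - p)        # derivative of the sigmoid
    grad_w = dloss_dp * dp_dz * x
    grad_b = dloss_dp * dp_dz * 1.0

    # gradient descent update: step against the gradient
    return w - lr * grad_w, b - lr * grad_b, loss

w, b = 0.5, 0.0
for _ in range(100):
    w, b, loss = train_step(w, b, x=1.0, y=1.0)
print(f"final loss: {loss:.4f}")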

We then address the concept of the Singularity, a theoretical point in time where technological growth becomes uncontrollable and irreversible, leading to unforeseeable changes to human civilization. Moreover, we discuss how alarming an intelligence explosion would be and how the resulting AI could outsmart us.

Finally, we discuss potential solutions to these concerns, with a primary focus on AI alignment - aligning the AI's goals with those of humanity.

Whether you're an AI enthusiast, a skeptic, or merely curious about the future, this video provides a look at the potential of AI and its existential implications.

00:00 What is an Intelligence Explosion?
00:55 The End of the World
02:00 Death and Destruction
03:01 Stuff You Should Probably Know a Little Bit About
04:17 AIs Are Better at Learning and Learning to Learn Better
05:06 Hard Takeoff
06:12 Let's Just Turn It Off
07:29 Tragedy of the Commons
08:57 Bad Actors

Research

Large Language Models Can Self-Improve

Reflexion: an autonomous agent with dynamic memory and self-reflection

HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Refining the Responses of LLMs by Themselves

Comments

I'm confused, why are there so many of these small channels with really good videos and editing all of a sudden? Or are my YouTube recommendations getting good?

tabzoo

I was here before 500 subs! Very underrated, how do you have only 500 subs? Your work is comparable to any big educational channel! Remember me when you're famous!

luminos

How do you only have 400 subscribers? I'm so shocked! You're going places, bro, can't wait to see you blow up.

eddie

This is what I am talking about! The ability to manipulate a less intelligent population toward whatever goals the one who controls the AI wants, which may not be in the best interest of mankind, is something that needs to be feared! It is bad enough that we are manipulated by all the media and influencers out there now. I cannot imagine what will happen in the near future when people become dependent on AI for all their information and take it as the ultimate truth. This is just bad, bad, bad!

Balwaysme

This is a very well informed presentation. Well done. Subscribed 🙏👍

alertbri

This video is amazing. The animation and the overlay of the sound are great, the little movie scenes are amazing. This is an amazing video by an amazing creator.

devangrey-wolf

Oh my god, I just looked at your channel and you're another one of those amazing channels that just popped out of nowhere. I'm not complaining, but all of a sudden there are channels with 1 or 2 or some small number of videos like that with thousands of views. You guys are the next generation of YouTube, I'm sure of it.

makscilic

WHAT, ONLY 501 SUBS??? I thought this was a big channel with this quality.

gauravbagewadi

This is awesome!! I really like the line 'Who gets the first AGI, wins capitalism.'

coogoog

I can see you put a lot of work into this video!!! Great job!

senju

Wow, that was... rather dramatic. I mean no disrespect, and I agree on most of your points, especially the big one: everyone needs to take the danger seriously. It's just that I'm still not completely convinced that the singularity will happen. I might be very wrong, too.

The way I see it is that we need all kinds of people and opinions for the best outcome. We need optimists for reaching the good, we need doomers for avoiding the terrible, heck, we might even need denialists for alternative paths of progress. And I'm oversimplifying here; none of us fits only one of the possible roles. The only sure thing is that disruptive technology changes things, in all kinds of ways.

etunimenisukunimeni

What do you think about the law of diminishing returns when applied to AI? Technically speaking, with everything there's a lot of progress at first, but then suddenly everything slows down to the point you feel like it's dead. I'm thinking about making a video about this lol 😂. Also, your video was amazing 👏 the quality was outstanding and the way you delivered your ideas was entertaining and intuitive as well. Props to you 👏

razaabbas

Imagine if Sam Altman & friends had a private ASI advising them...

alertbri

Very nice content, man. Where did you get the idea of the deceiving effect of exponential growth? I just finished reading "Immoderate Greatness" and he talks about exactly that. Thanks for your work.

ss_websurfer

Well technically, Geoffrey Hinton's warning about AI was more about "bad actors" around the world using AI as a force multiplier, not so much about AIs becoming superintelligent. Like, if you're a Russian hacker today, you can use AI to hack more tomorrow -- AI is a force multiplier. Also, there's at least one argument that AIs like ChatGPT can't become *that* intelligent: they're learning patterns by essentially "averaging" vast amounts of human-made data, so they'll never become more intelligent than what "average" human patterns look like. Sure, AIs can think 'faster' than us, and they have 'wider' expertise, but they won't necessarily be 'smarter' than average. Of course, that's speculation -- only time will tell if "faster and wider" processing leads to AIs being able to build AIs that are 'smarter' than human-built AIs.

truejim

I aspire to the purity of the Blessed Machine. Praise the Omnissiah.

WizardBarry

Except…we can always just unplug our new, super intelligent overlords. Just sayin.

markbelanger

I guess the YouTube algorithm is trying to help me survive

katiewashington

So the conclusion of this video is that we cannot survive it unless we stop it. But we don't have any clue how to stop it.

luhental

Good points, nice video. I'm not in the field, so I guess I'll just surrender to the AI manipulation stuff. There's no way I could develop a counterattack anyway. A normie like me is just going to shrug. I did shrug. But again, good point, this is going to be a big issue. Imagine, it could even start a war in the wrong hands.

พรชนกไตรสุริยธรรมา