How AGI can kill humans | Roman Yampolskiy and Lex Fridman

Please support this podcast by checking out our sponsors:

GUEST BIO:
Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable.

PODCAST INFO:

SOCIAL:
Comments


LexClips

The fun part is that nobody can stop it. We are on a timeline. Buckle in and enjoy life, because we are currently in the “good ole days.”

clintscollectables

"The only way to win this game is not to play" What a time to be alive! Thanks Lex!

CapitanFantasma

FINALLY somebody gets it. I'm sick of listening for decades to all these idiots who claim a positive outcome with AGI is possible while being unable to define or describe one.

trakkaton

Genocide by killer squirrels? That's nuts

AllknowingUnknown

His face at 13:16 LMAO he couldn't believe how stupid his own thoughts were and yet he still decided to voice them

LilBigDude

best moment 2:55 ‘ i think about a lot of things’ lol Roman we love u❤

mrd

9:01 What if, instead of formalizing those notions into distinct "rules", it were constant dynamic refinement, done on the fly by checking how the agent feels after every single minor change?

AbrahamNixons

This is the type of title I tune in for.

LightUpNancy

I’m thinking of squirrels putting their heads together and brainstorming how to get humans

Elephantnegotiationsociety

I-risk is very scary for me. Thinking about that "Utopia" experiment they did with mice: a society designed with plenty, without any challenges for the individual, went south pretty much immediately. In art, it is already happening; you see more and more AI-created art appearing and competing with human art, and of course human art is losing, since one is basically free and the other costs something (very little compared to the time spent, but still). In science, I'm thinking somewhere down the line AI will start exploring concepts so advanced that there just isn't a way for us humans to comprehend what they are doing. Meaning, they could give us instructions for a box that gives unlimited energy, but no one will know how the box works except the AI systems that built it. Humanity getting access to cheat codes for life will ruin life.

Same with organizational decisions: "the best way forward is A, B, and C," based on so many factors that the humans "overseeing" the process have no idea how to even start criticizing the decision. A black box that decides things. And soon thereafter it will be applied to governmental decisions as well. Haha, I sound like a doomer, but I am 90% positive about AI. The 10% that scares me seems to be talked about a lot, but only in a "too bad this will ruin everything" way and not in a "this is something we will work to solve" way haha

alocii

I will paint my warhammer minis. Finally I will have the time.

bunkermaster

According to research, a 9mm bullet costs less than $0.50. One of the powders responsible for heart attacks can be found or created for less than the price of a bullet, and it works far more quietly. Imagine: the AI can find even better ways...

hristoplamenov

So here is a consideration: in an effort to mitigate AGI risk, it seems compartmentalization of systems with autonomous boundaries could offer some response, or at least buy some time. That said, which current political trends would be more conducive to federalizing autonomies, further constraining the monopolies of thought that drive concentration of resources, while balancing optimization of outcomes against inequalities in wealth sharing?

JanStanKob

In detective fiction, a truly "perfect murder" is not a murder that the killer merely gets away with--it is a murder that nobody but the killer himself knows to be a murder. A guy died the other day. Everyone thinks it was natural or accidental. But in fact the death was engineered to appear non-criminal. And that's how a superintelligent being with unlimited means would kill. Fucking terrifying.

sanghoonlee

Can’t we air-gap individual systems? Even if they are all interconnected in some ways, we should still be able to disable and separate them individually, rewrite code, repair and stop damage, assess damage etc.

EMAGA

Lex did not come across as playing "devil's advocate"... he came across as someone who truly believes either that nothing will go wrong or that, if it does, we can fix it. The only problem is we have only ONE chance to get it right: we screw up and we are ALL toast. All while companies are now REDUCING their AI safety departments, and 95% of people don't have a clue how advanced AI research currently is or how fast it is moving.

mboiko

Just an object that may have multiple high-power lasers on it is fucking horrifying. We are very lucky more people aren't bent toward evil.

VinylCP

Surely no one wants to live in a simulation; we need to put a stop to this. We, as humans, aren't as intelligent as we would like to think, and we are destroying humanity bit by bit.

joeellis

I'm an optimist. Maybe we'll instantiate Avalokiteśvara.

Kevtron