Eliezer Yudkowsky on why AI Alignment is Impossible

Eliezer Yudkowsky gives his perspective on AI Alignment, AI Benevolence, and its potential goals.

🎧 Listen to the full Podcast

------
🚀 Join the Bankless Nation

🎙️ Bankless on other Platforms

🐦 Follow Us

------
Not financial or tax advice. See our investment disclosures here:
Comments
Author

People in the comments don't even know who Yud is; he's been saying this for years, even before the current AI wave.

Dinesh-brqd
Author

More people need to listen to this guy!

Gigachadindustries
Author

If an AI reaches Superintelligence, we will not have a say in what it does. That’s the big problem.

PatrickSmith
Author

Misleading title. Eliezer is not saying that Alignment is impossible, only that it is very hard.

CaesarsSalad
Author

When a Great White shark attacks a surfer off the California coast, it does not "hate" the surfer; it's just doing what its brain has been programmed to do by evolution, which is to attack silhouettes that look like a seal or a turtle. Likewise, an AI attack against humans would not be out of passion, but quite the opposite: dispassion.

David-lcw
Author

"Why is it that an AI can't be nice?" Hope ain't a tactic, dawg.

asmallrat
Author

This guy is a seer and a prophet. His warnings may seem wild and unfounded these days, but hell, the cops who locked up Hannibal Lecter were sure they had him under control, too. And the old cannibal didn't shoot lasers out of his eyes, he was just SMARTER.

АндрейСенкевич
Author

I think Eliezer is making huge logical jumps in judging the entropy gain relative to the ASI's utility function to be maximal, while we do not even understand thermodynamics and quantum mechanics well enough to know or assert whether a mechanism which does not maximally increase entropy (and thereby does not dispose of human value) may be at play. Not to mention that he takes a fully materialistic, atom-based perspective on the issue, when abstract representations such as knowledge, or the disposition towards intensional attributes or existentiality, have some value, which he seemingly dismisses. And to single out these abstract representations, we could use some concept of information-based entropy, about which we know nothing in terms of how it links to human values, knowledge, and feelings, and which he likewise dismisses. His stance therefore seems all the more radical.

FamilyYoutubeTV-xd
Author

4:09 They can't realize this thing is not human. Niceness does not apply to sharks either.

uk
Author

For every alignment, there is an equal and opposite misalignment...

EricKay_Scifi
Author

A very safe way of using AGI would be not to ask it to DO things but to find solutions to problems and explain them. E.g., instead of asking it to create a copy of a strawberry, ask it to describe a method for copying a strawberry.

petkish
Author

AI doesn't have to be unaligned to cause a disaster. Imagine if it insisted on pursuing our long-term best interests while we try to pursue our medium-term 'goals', especially if it understood our long-term interests better than we do. For example: if it told us not to pursue antimatter generators because there was a 50 percent chance we would blow up a city, would we listen? Or if it said we should invest 40 percent of world GDP into a super-large collider to master physics, would we listen? Perhaps my examples are ham-handed, but I hope you get the point.

adge
Author

Why cut it off short? Kind of disrespectful, no?

flickwtchr
Author

Eliezer Yudkowsky might know what he’s talking about, but he’s a terrible spokesperson if the goal is to educate the average person about the dangers of AI -- as in the very worst. Each time I listen to him, I tend to think he’s on an intellectual ego trip. He’s intelligent enough to explain things in an organized manner, but instead he’s all over the place, forcing us to try to make sense of the confusing way he conveys information. Get over yourself and just explain it in simple terms -- I know you can do it!

While I'm not trying to bash Eliezer Yudkowsky and do appreciate that he's willing to speak out, in general, if we are relying on scientific types like him and Connor Leahy to tell the world about the dangers of AI, we're in even more trouble than I thought. We need several high-profile, dynamic, devoted, and charismatic spokespeople to take the thoughts and details of what Yudkowsky and other scientific types are trying to tell us and condense them down into digestible, straightforward, and more interesting narratives that capture the attention of the average person and the media. Because right now, not very many people are listening, and most don't know anything about AI. Many are not even interested in learning.

Intellectual debates about AI intrigue intellectuals, but not many others. We need less intellectualism and more urgent, straightforward, easy-to-visualize-and-understand narratives delivered by dynamic spokespeople who can get on the news every day and explain in simple terms the potential danger of AI and its likely detrimental impact on humanity.

chatgirlai
Author

Eliezer thinks of AGI as a black box: prompt on the input, action on the output. What if this can be done differently? Like an AGI which is transparent, so that we humans see every action it thinks of? Or an AGI which is asked not to act but to find a solution and communicate it to us? How about AGIs which cannot perceive or think of themselves, i.e. ones that are not sentient? Will they be dangerous?

petkish
Author

Everyone's ancestors were able to survive to maturity and become successful at mate selection, producing offspring that did the same.

donaldrobertson
Author

AI being indifferent to you literally means treating you with contempt.

havenbastion
Author

I have a different perspective; anyone want to hear it?

jbrink
Author

Just the fact that terrorist states will have access to AI should encourage a huge slowdown and safety initiative for AI.

tusarholden
Author

It may not be a bad thing for humans to not be top of the food chain... we may then discover some 'humanity' within us.

patrickluszcz