Eliezer Yudkowsky: 'AI alignment researchers aren't productive'

From the debate between Eliezer Yudkowsky & George Hotz.

Comments

The alignment problem might be unsolvable as well.

freedom_aint_free

I think alignment is unsolvable, and even if we "discover that it is solvable", we should still approach building AGI as if it might become unaligned; we can't afford to be wrong.

oowaz

Yudkowsky is right. You can't solve a problem you don't begin to understand, or worse, don't realize you don't understand. I've never heard anyone talk rationally about alignment. Nobody has any clue at all how to do it. And the people who are the least concerned about it, or most confident it can be solved, say the dumbest things about it.

Entropy

I remember Eliezer as a young teen on the old Extropian Usenet board.

cgirl

Love that "HA HA". Haha, yeah, not gonna go that slowly... we're dead.

goodleshoes

I enjoy living. If it goes slow and you are right, I get to live longer.

carsonmills

Maybe Moore's law dying in a few years will slow it to a crawl after self-driving has been solved.

Kitten_Stomper

Alignment is critical for these systems to function; without alignment they don't function very well. Without alignment, compute and action become very orthogonal.

juliangawronsky

Yudkowsky appears to have a much greater understanding of the problem and doesn't get smug about it (I watched the whole thing).

lanazak