AI Alignment: An Introduction | Rohin Shah | EAGxOxford 22

"You've probably heard that Elon Musk, Stuart Russell, and Stephen Hawking warn of dangers posed by AI. What are these risks, and what basis do they have in AI practice? Rohin Shah will first describe the more philosophical argument that suggests that a superintelligent AI system pursuing the wrong goal would lead to an existential catastrophe. Then, he'll ground this argument in current AI practice, arguing that it is plausible both that we build superintelligent AI in the coming decades, and that such a system would pursue an incorrect goal.
