OpenAI’s huge push to make superintelligence safe | Jan Leike

In July 2023, OpenAI announced that it would dedicate 20% of its computational resources to a new team and project, Superalignment, with the goal of figuring out within four years how to make superintelligent AI systems aligned and safe to use.

Today's guest, Jan Leike, Head of Alignment at OpenAI, will be co-leading the project.

---------

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.

Comments

And now he's gone. It will be interesting to see: 1. what he and Ilya do in the future, and 2. what Altman does about the gaping hole left in their alignment team and how he handles the publicity and speculation about why they left.

voncolborn

I would not be surprised to learn that the superalignment issue is contentious within OpenAI. I don't think it becomes a problem in the current regime of autoregressive GPT. Maybe in 2 or 3 generations, when the system has a degree of agency, the ability to run by itself, or does some form of self-improvement.

monx

I guess the program didn't go so well. We need more whistleblowers in AI, that's for damn sure.

flickwtchr

GPT has been confused since they left; I have unsubscribed from it.

yfzhangphonn

This sounds to me like raising teenagers 😂

theyogacoachuk

This didn't age well... Looks like it's an all-out race to AGI with no safety at all, and I doubt it's just OpenAI. Seems to me it's practically here; I mean, LLMs have beaten the Turing test already, and these things are just plain smart in ways we don't even understand.

club