OpenAI CEO responds to Jordan Peterson criticism | Sam Altman and Lex Fridman

Please support this podcast by checking out our sponsors:

GUEST BIO:
Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies.

PODCAST INFO:

SOCIAL:
Comments

So this is the guy John Connor comes back for...

bigj

The true danger in AI is not that it becomes aware and then does its own evil thing, but that it gets exploited by malevolent humans to amplify their capabilities. Also, there is now a mad rush by AI corporations to get a head start over their competitors, which will lead to a slackening of safety standards.

willrsan

It is a breath of fresh air to see such a mature view: the big stuff is the small stuff in aggregate.
He was very understanding.

parthdeshpande

"it gives back nuances". Totally. One of my favourite prompt is to ask contrarian opinion on specific topics

kevinwang

Assuming building AI should be done (which is a big assumption), I have to say that I DEEPLY respect the approach of doing it in the public eye. I can't imagine a more appropriate, good-faith way of going about this kind of work. AI could be the death of us all, but it could also help save humanity. I'm way more hopeful after hearing this clip.

uncleiroh

We used to worry about blue-collar jobs being replaced by AI and robots. Now it is the middle monkey jobs that look like they are in danger of becoming irrelevant.

chungang

I also feel like it could basically become “persuaded” into an “opinion” based on whichever group bombards its inputs with their own perspective the most. A digital tug-of-war of ideas to ultimately sway AI seems like a problem. I’m both interested & terrified 😅

JDre

I think Sam is very thoughtful, and I am glad for that.

Paul-pjqu

The first thing Lex asked the bot was "What's the meaning of life?"... if anything, he asks everybody that.

OneoftheCaesars

Sam must understand the gravity of what he says. Everyone in the world is watching him, looking for mistakes. He did a good job here.

webdavis

Very interesting conversation. Altman was quite technical at times because he was speaking at the level of Lex instead of the typical viewer, but that makes the conversation seem more authentic. For the sake of humanity, let's hope Altman has the wisdom to guide us through this transition to a life with AI.

danelson

He speaks very similarly to The Social Network

edgarrodriguez

I'm most worried about what he means by "alignment". I understand that this is a technical term in AI safety, but my concern is that it can easily be hijacked to mean free of "wrongthink" in the Orwellian sense.

WaylonFlinn

As per tradition, Lex fails to press the guest on the topics that are of most interest.

renanave

I play Lex at 1.5x speed and it's about the same as listening to anyone else talking.

almor

It's here. It's not going away. Soon they will have their own opinion of who they are and who they want to become.
The same thought keeps going round in my head:
"I hope they're friendly" 💚♾️

andyoates

The bias will bring about so much more polarisation, and that is incredibly dangerous.

shughy

No lawyer or philosopher thinks GPT is lying; rather, they were impressed by its meticulous, nuanced answers and its ability to zero in on extremely precise and complex meanings when pressed to reiterate. Most impressive of all to me was its ability to talk around its ethical rules to discuss banned topics, if and only if the discussion was steered toward a respectful way of talking about them in a new, objective context -- i.e., talking about talking about a banned subject. This is really very impressive behavior, and the skill set required to do it results in ethical, yet open, communication.

I'm very glad they made this point in this discussion -- that while social media under human stewardship resulted in a disastrous 20-year process of extreme oversimplification of the common discourse, perhaps AI can restore and even surpass our highest standards of discussion. Because without complexity of meaning, you can't have respect or constructive behavior between people. There is objectively no worse cause of misery and destruction in the world than simplifying ideas. And I believe this is because reality isn't simple, and the further human thinking diverges from reality, the worse our social and political structures become.

johannpopper

My biggest fear is that people won't need skill, self-discipline, and memory anymore. Yes, they still could develop them, but just as people gravitated to using social media and apps over in-person interaction most of the time, the same will apply to most skills. Humanity will devolve.

krunkle