Physicist on limits of GPT-4 | Max Tegmark and Lex Fridman

Please support this podcast by checking out our sponsors:

GUEST BIO:
Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

PODCAST INFO:

SOCIAL:
Comments

Lex is very generous to answer 'I don't know' to various rhetorical questions I'm sure he could answer, especially in his field of expertise. This allows the guest to continue their explanation unhindered, but does not necessarily maximise the casual viewer's perception of Lex's knowledge. Bravo.

shanks

I use GPT-3 but I'm unsure of GPT-4's features. The GPT-3 bot couldn't tell me anything specific about them.

SaidThoughts

Intelligence goes far beyond text generation. Yes, these models can simulate(!) human-like reasoning, but they do not actually think.

miraculixxs

Moloch was an unexpected twist in the conversation. But I have no doubt it's excited about this awesome gift.

Dannymiles

I would be interested in your take on the Auto-GPT project. From my understanding, it aims to add an autonomous element to GPT-4 by allowing it to provide feedback to itself in real time and to work toward a designated goal rather than from a single prompt. It also has live access to the internet, which makes it increasingly reminiscent of Skynet. I'm not an existentialist, but I am concerned about some of the ethics of an AI without guardrails.

papa-pete
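[Editor's note: the feedback loop described in the Auto-GPT comment above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Auto-GPT's actual code or API; `query_model` is a stub standing in for a real LLM call, and the internet-access part is omitted.]

```python
def query_model(prompt: str) -> str:
    """Stub for the LLM call; a real agent would query a model API here."""
    return "DONE: placeholder result"  # hypothetical placeholder response

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Work toward a goal rather than a single prompt: plan a step,
    act, then feed the result back to the model as context."""
    history: list[str] = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nProgress so far: {history}\nNext step?"
        result = query_model(prompt)
        history.append(result)  # the model sees its own prior output next turn
        if result.startswith("DONE"):  # model signals the goal is reached
            break
    return history
```

The loop terminates either when the model declares the goal reached or after a step budget, which is the kind of guardrail the commenter is asking about.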

Yes, the output only puts its best foot forward and all of that... But when you teach it to code, you now allow all kinds of Black Swan surprises.

herokillerinc

“Whenever its name has been anything but a jest, philosophy has been haunted by a subterranean question: What if knowledge were a means to deepen unknowing?”

― Nick Land

sfacets

AGI will require other, as-yet-undiscovered techniques. It's still worth being super careful with the techniques we already have.

suppertime-qjnt

The way he explains the current GPT-4, it sounds like people who are savants: they have a highly specialized brain that can accomplish some incredible feats, but at the same time a lot of them need daily assistance to navigate life because some normal tasks are too much. 2:14

ModestMang

The difference is: if one researcher clones a human, even if it's forbidden, humanity will not go extinct because of that; it is just irrelevant. But if one superintelligence is developed, then humanity might go extinct just because of that. Meaning one misstep is all it takes.

weirdwordcombo

My mom used to say: everyone wants to go to heaven, but no one wants to die.

patcaza

Thank you, Lex and Max, very respectfully, for the detailed and forward-leaning guidance.

eddiejennings

“It can’t reason as well on some tasks.”

This guy is confused. ChatGPT 4 can’t reason at all on any task. It only appears to reason when you ask it something that exists in its training data.

terjeoseberg

I think we have already lost control. People are running LLMs on their laptops.

bellsTheorem

I told GPT to call me Kratos and trolled it by calling it "boy", and it called me Atreus.

khalifanzuri

No one is slowing down; it's always full speed ahead. Embrace it.

bluelvn

If all of the smartest people in the world are telling us to be afraid of a real AGI, we probably should be.

aceup

Hmm, I think this clip just changed my opinion completely. I had thought it was a pipe dream to expect that the world could agree to slow down, but I didn't think about the example of human cloning.

exstasis

A reference to both Allen Ginsberg and Jefferson Airplane... this guy speaks my language.

joeshoe