Sam Altman on GPT-5 | Lex Fridman Podcast

GUEST BIO:
Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.

Comments

I asked GPT-4 to model a baseball dropped from outer space at a particular orbit and tell me the terminal velocity and how long it would take to reach sea level. It couldn't figure it out at all, but if I coached it along each step, so to speak, giving it prompts on which methods to use and which assumptions to make, it was able to get impressively close to a symbolic solution. I believe this is what Altman is referring to about its limitation in figuring out the many steps necessary to reach a particular solution. It has the knowledge, but the model is not really able to put it all together when it's not a well-established process.

TB-niur
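
For reference, the sea-level terminal velocity in the comment above has a simple closed form once you assume quadratic drag. A minimal sketch in Python; the baseball parameters and drag coefficient are rough assumptions, and the result lands in the commonly quoted 33-42 m/s range:

import math

# Rough sea-level terminal velocity for the baseball scenario above.
# Constant air density and the drag coefficient are simplifying assumptions;
# a real drop from orbital altitude would need an atmospheric density model
# and numerical integration of the whole fall.
m = 0.145    # baseball mass, kg
r = 0.0366   # baseball radius, m
Cd = 0.35    # drag coefficient for a rough sphere (assumed)
rho = 1.225  # sea-level air density, kg/m^3
g = 9.81     # gravitational acceleration, m/s^2

A = math.pi * r ** 2                         # cross-sectional area
v_t = math.sqrt(2 * m * g / (rho * Cd * A))  # speed where drag balances weight
print(f"terminal velocity ~ {v_t:.0f} m/s")  # prints roughly 40 m/s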

I remember having an argument in 2001 with my friends about whether computers would ever have a terabyte.

Idlewyld

Would people agree that the title is misleading? They didn't talk about GPT-5; they only talked about GPT-4.

scubagrant

00:02:07 Use GPT as a brainstorming partner for creative problem-solving.
00:02:47 Explore how GPT can assist in breaking down tasks into multiple steps and collaborating iteratively with a human.
00:05:09 Reflect on the exponential curve of technology advancement to anticipate future improvements beyond GPT-4.
00:08:00 Consider integrating GPT into your workflow as a starting point for various knowledge work tasks.
00:09:10 Implement fact-checking procedures after using GPT to ensure accuracy and avoid misinformation.

ReflectionOcean

As a newly graduated physician, I can say GPT-4 is invaluable in the studying process. Almost no friction or wasted time researching why certain things are correct or what the underlying pathophysiology is.

It has revolutionized how easy it is to consume medical knowledge.

ChrisCapelloMD

ChatGPT sucks, from my experience with it. It needs a factual-probability percentage with each answer to make it useful, because it gives a lot of wrong information and states it like it's 100% correct. It should show a percentage for how confident it is in each answer.

DaveSimkus
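
The closest thing the API exposes to the per-answer percentage this comment asks for is token log-probabilities. A minimal sketch, assuming the OpenAI Python SDK (v1+) and a model that supports the logprobs option; note that mean token probability reflects the model's confidence in its wording, not calibrated factual accuracy, and models can be confidently wrong:

import math
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What year was OpenAI founded?"}],
    logprobs=True,  # ask the API to return per-token log-probabilities
)
tokens = resp.choices[0].logprobs.content
# Geometric-mean token probability: a fluency signal, not a factuality score.
avg_prob = math.exp(sum(t.logprob for t in tokens) / len(tokens))
print(resp.choices[0].message.content)
print(f"mean token probability: {avg_prob:.0%}")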

This Sam guy definitely has something bothering him throughout this whole interview

Bofum

I like questioning GPT on coding topics. It gives me better answers than Wikipedia.

My go-to thing is to ask for real-world examples of X. Then if it can quote them, I assume the events are real (every time I check, they are).

As I understand it, a lot of the errors happen in cases where the answer requires thinking and not just memorising.

But most of what I want to read about requires compiling and memorising sources, which makes it REALLY good at that.

nevokrien

Altman is like, always disagreeing but telling you you’re close to being right, really close! Still wrong though, sorry!

cgcrosby

He knows something that we don't know. He speaks very carefully but confidently about the near future.

chadkndr

I'd never heard of GPT-3 or 3.5. GPT-4 was a pivotal moment for me and always will be.

HalfLapJoint

The best way I can describe AI right now is that it's like compressing an hour-plus Google research session (plus maybe reading some books), where you read numerous links and synthesize what you think is the truth, into 15 seconds. This transfer of information is incredibly beneficial. Now, this holds for topics that have data to learn from. So if you ask it something about undergraduate programming that it has learned from 150 formal textbooks and thousands of online discussions, it will summarize anything you ask quite well (except when it hallucinates or has learned to reproduce a common misconception, which is especially likely for narrower, more technical topics, or for topics loaded with political notions of right and wrong, among other situations I'm sure). If you ask it something novel that has one research paper about it, it basically has a 0% chance of getting it correct.

In my opinion, this limitation will not be surpassed. Hopefully the transfer of known information will continue to improve, but ultimately the information has to be in the training set in the first place. Is it disproven that it will start to think like a human and produce novel ideas on complex topics, like doing scientific research or developing a corporate system end to end, coding included? No. It might happen. I don't think it will, though. Personal prediction.

The four issues I see going forward are: hallucinations; dependency on training-set data to know an answer; training sets becoming corrupted with AI-generated content; and scaling the input/output size, computation time, and energy used in computing. That last category I think most people don't think about at all, even though it was discussed some here. The most typical use, as he admits, is a person putting in a few paragraphs maximum and needing a few paragraphs out maximum. It's a stretch to go from that to programming a corporate system that consists of 500,000 lines of code, all bound to further complexities like what hardware it will run on, how it will be deployed, what monitoring will run as the code executes (like the number of requests per minute to a web server), what alarms will fire based on those metrics, and what will be logged for debugging investigations. Oh, and it would need to translate a list of needs described somehow (apparently in plain English) into a complicated program without much representation anywhere in its training data. We aren't talking about asking it to solve an algorithms interview question, which has thousands of examples in the training set, or to build a web scraper, which also has thousands of examples in the training set.

AG-ldrv

Shouldn’t it be able to fact-check itself? Or couldn’t you tell it not to make information up if it knows it isn’t 100% accurate?

digitalmc
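
You can approximate the self-fact-check the comment above asks about with a second pass in which the model critiques its own output. A minimal sketch, again assuming the OpenAI Python SDK; the caveat is that the verifier shares the answerer's blind spots, so this catches some errors but guarantees nothing:

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Single-turn helper around the chat completions endpoint.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

answer = ask("When was the first transatlantic telegraph cable completed?")
# Second pass: the model critiques its own first answer.
review = ask(
    "Fact-check the following answer. List any claims that are wrong or "
    f"uncertain, then give a corrected answer:\n\n{answer}"
)
print(review)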

Save yourself the time: he doesn’t talk about GPT-5.

darrelmuonekwu

I thought it worked better when it was first released. They’ve blocked it from business building, IMO.

biggy_fishy

Here we see a classic demonstration of how a strict parent sees their child even if they are impressing the entire world: 0:43

lattice.d

How do you get GPT-4 to read a book?

gianttwinkie
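
One common answer to the question above: a whole book exceeds the model's context window, so you feed it in chunks and carry a running summary forward. A minimal sketch, assuming the OpenAI Python SDK; the chunk size, prompt wording, and file name are illustrative only:

from openai import OpenAI

client = OpenAI()
CHUNK_CHARS = 12_000  # rough character-count stand-in for a token budget

def summarize_book(text: str) -> str:
    # Fold a running summary through the book, one context-sized chunk at a time.
    summary = ""
    for i in range(0, len(text), CHUNK_CHARS):
        chunk = text[i : i + CHUNK_CHARS]
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"Summary so far:\n{summary}\n\n"
                           f"Update the summary to include this next passage:\n{chunk}",
            }],
        )
        summary = resp.choices[0].message.content
    return summary

# Usage (hypothetical file): print(summarize_book(open("book.txt").read()))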

Can we get a body language expert to break this down?

motionsick

Great interview as usual. The idea of fact-checking concerns me, as we increasingly need to engage with later Wittgenstein and some Nietzsche... no facts, only interpretations.

tcpip