OpenAI Co-Founder Ilya Sutskever: What's Next for Large Language Models (LLMs)



Bio: Ilya Sutskever is the Co-founder and Chief Scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models.

Comments

Ilya's examples, and the way he puts these key concepts together, explain ML/AI (and particularly LLMs) to me in a way I'd never remotely understood before. Amazing work they're doing here!

csmith

Ilya is the greatest of all time.

LethukuthulaMathe

From Ilya's speech I just realized how real calm and inner harmony come with intellect and knowledge.

glorytoukraine

Ilya is a brilliant man! Such a pleasure to listen to this guy.

ok

11:15 Key idea behind LLMs: if you can make a really good guess about what's coming next, you need to have a meaningful degree of understanding.

akshaykamathb

Ilya has that great ability to tailor his speech to his audience: expressing complex concepts in simple words. And Alexandr knows exactly the right questions to ask. It was delightful and informative to listen to you!

Free_Ya_Mind

What an amazing individual. OpenAI is the future!

EarleHolder

The amazing thing is that everything can be improved: the data, the algorithms, the size of the model, the hardware.
This field is obviously going to get a lot better.

JazevoAudiosurf

Thank you so much for making this! Please have Ilya on again at some point!

Nova-Rift

Really great interview. You asked great questions. I think Ilya is a great thinker in the AI field. He drives the field forward with great diligence and commitment.

orhanguengoer

I wonder what would happen if you had DALL-E and CLIP feed into each other in a loop, with and without a random factor thrown in. Would they stabilize on a single description and image without the randomization? Would they wander far away from the original image with the randomization factor?

tomcraver

Just think back to how things looked in the 90s and how they look now... Mind-blowing stuff!
It's going to be very interesting to see how humanity deals with the newest wave of automation that we're about to witness.

ebateru

You just know this man is the brains behind ChatGPT.

dominokid

Love how this is barely 2 years old and already feels like ancient history in the AI world.

DonG-

Always pleasant to see smart people talk! A nice respite from today's TikToks of the world :D

Ruslan-S

I agree with the guess about human neurons vs. artificial neurons. The latest brain research shows there is memory there, and the microtubules are far more complex than we thought 10 years ago.

kurtdobson

2 to 3 months later they had an internal version of ChatGPT ready (Sam Altman said they had it 10 months before release).

HenkPoley

To allow us to move forward we need more multimodal data. The problem for current AI models is that they only have text and image data; we need other modes of data like smell, sound, and touch.

MikeKleinsteuber

So could that kind of thinking talk to an AI and get an answer? How?

hsiaowanglin

The explanation about generalization is so clear!

vernonzhou