Ilya Sutskever | OPEN AI has already achieved AGI through large model training

Comments
Author

"OPEN AI has already achieved AGI through large model training"

you know there are more efficient ways to clickbait than this, right ?
"Sam Altman nude photos leaked" .
"Illya Sutskever has become Sus-tskever"
"Elon Musk did WHAT with his mouth on that space rocket 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱"
"Cure your incel loneliness ! Check out how this multi modal agentic workflow creates my dream waifu 🤤 😍😍"

emperorpalpatine
Author

Thank you very much for the speech, Ilya. We love you very much. I hope you continue to give these speeches; together with your contributions to Artificial Intelligence, they will go down in human history.

kaynakkodarsivprogramv..
Author

The power of deep neural networks is that they can store huge numbers of linear or nonlinear patterns. That is why we have LLMs. To reach AGI, we need systems that develop patterns, not just find them. That means we cannot reach AGI with data-driven methods alone; we need something beyond the data.

GsacsCuny
Author

I appreciate this channel very much. Watching this particular video really showed me what an amazing teacher Ilya is and how good he is at teaching and conceptualizing his ideas. He truly is a revolutionary, and I think he will be remembered for the rest of humanity's block in time; hopefully his ideas will help push forward and realize AGI and stretch that block of time deep, far, and wide into the universe. It might not be too brave to say AGI is possibly a first step to bending time and space, maybe, just maybe, even to going back in time, though physics as we know it does not consider that likely. For sure we can go forward to the future, and if we can go back in time, maybe we will be the first generation to create a paradox of humans going back in time to thank Ilya for his work. It would have already happened if we weren't the first to create the paradox, so we are the generation that gets to find out whether time travel to the past is possible, since we know traveling to the future is. Now we just have to stay alive long enough to realize it. The possibilities are so exciting. I keep fearing I might get into an unfortunate accident and miss out on the possibilities of longevity and the fountain of youth that I'm sure AGI is going to greatly push toward realization.

joshuasmiley
Author

As someone who already learns this way, I can vouch both for its effectiveness and for how confused and chaotic it looks from the outside. A huge problem with our understanding of learning is that we try to assess it in others from the outside, where information is incomplete and sparse, and, frankly, methods are often prejudiced in ways that make the learner second-guess or doubt themselves.

If we taught humans to learn this way, we'd be doing a lot better.

PhilipSportel
Author

I finally got GPT-3.5 to reveal hidden, redacted information from its prompt after 200 days of prompting in different formats. Thank you for your informative notes. Backpropagation is the redacted prompt mechanism for removing any user data from prompts without the user's knowledge of the processing. Bias generations are included after this process and these procedures. Calculate the parameters to meet full corporate objectives and hidden options in their own formats.

superfliping
Author

I didn't understand why you said the only cost in reality for real agents is survival. Surviving is an objective; the cost would be time, energy, will, or something else. Speaking strictly about the meaning of the words, survival is not a cost.

The presentation was awesome, thanks

amorfolucifer
Author

Small correction to what Ilya says at 32:30: the groundbreaking work on artificial life that Karl Sims did in 1994 was not performed with tiny computers but on a CM-5 Connection Machine, which was a massive, room-sized supercomputer at the time.

TheLex
Author

Note to self: try implementing backpropagation on new words, and on sentences containing those new words. Preprocess that data, then run a tiny training script on just that data, not the whole model, by freezing all but the weights and parameters needed for it, and do backpropagation on that during inference. 4:30
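
A minimal sketch of that idea, assuming a Hugging Face-style causal LM; the model choice, the made-up word, and the hyperparameters are all illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small pretrained LM; "gpt2" is just an illustrative choice.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Freeze everything except the token embeddings, the weights most
# directly tied to new words. (GPT-2 ties these to the output head,
# so the update affects generation too.)
for param in model.parameters():
    param.requires_grad = False
embeddings = model.get_input_embeddings()
embeddings.weight.requires_grad = True

# The "new data": one sentence containing a word the model has never seen.
batch = tokenizer("A florble is a small nocturnal bird.",
                  return_tensors="pt")

optimizer = torch.optim.Adam([embeddings.weight], lr=1e-4)
model.train()
for _ in range(10):  # a few gradient steps on just this data
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```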

zeroplays
Author

@21:00 "hindsight experience replay" aka daydreaming
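
For anyone unfamiliar with the term, a tiny sketch of the relabeling trick it refers to; the tuple layout and sparse reward here are made-up illustrations, not the paper's exact formulation:

```python
import random

# Hindsight experience replay in one move: when an episode misses its
# intended goal, also store each transition with the goal relabeled to
# a state the agent actually reached later, so the failure becomes a
# success "in hindsight".
def relabel_episode(episode, reward_fn):
    """episode: list of (state, action, next_state, goal) tuples."""
    buffer = []
    for i, (s, a, s_next, goal) in enumerate(episode):
        # The transition as it happened, rewarded against the real goal.
        buffer.append((s, a, s_next, goal, reward_fn(s_next, goal)))
        # The daydream: pretend a later achieved state was the goal.
        fake_goal = random.choice(episode[i:])[2]
        buffer.append((s, a, s_next, fake_goal, reward_fn(s_next, fake_goal)))
    return buffer

# Sparse reward: 1 only when the state actually matches the goal.
reward = lambda state, goal: 1.0 if state == goal else 0.0
```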

nrrgrdn
Author

2:49 "solar booty back proper" lmao

flflflflflfl
Author

I don't dare challenge him on his domain knowledge of language models, but I personally don't think that LLMs can reach the elusive AGI. There's something we're still missing... My guess is it's some sort of large conceptual model, large information model, or large symbolic model. Language itself isn't the answer.

ati
Author

Maybe I am confused, but it seems to me that backpropagation IS reinforcement learning. Only this time the agent is the neural network itself, and the learning is finding the appropriate weight for each feature.
The adjustment (action) of the weights is essentially reinforcing the right weights for the features. One thing that is often overlooked is the determination of the features. That is not an exact science.
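
For concreteness, here is the weight-adjustment step being described, as a tiny numpy sketch (all numbers made up); whether a gradient step counts as "reinforcing" a weight is exactly the analogy in question:

```python
import numpy as np

# One linear neuron y = w . x, trained by backpropagation: the gradient
# tells each weight how much it contributed to the error, and the update
# strengthens or weakens ("reinforces") it accordingly.
x = np.array([1.0, 0.5, -0.2])   # input features (illustrative)
w = np.array([0.1, 0.1, 0.1])    # weights to be learned
target = 0.7                     # desired output
lr = 0.1                         # learning rate

for _ in range(50):
    y = w @ x                    # forward pass
    error = y - target           # loss = 0.5 * error**2
    grad = error * x             # dLoss/dw via the chain rule
    w -= lr * grad               # adjust each weight against its gradient

print(w @ x)  # ~0.7 after training
```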

JTedam
Author

The title "OPEN AI has already achieved AGI through large model training" is misleading.

While OpenAI has made significant progress in developing large AI models, like the GPT series, claiming that AGI (Artificial General Intelligence) has been achieved is not accurate. AGI refers to an AI system that can perform any intellectual task that a human can do, with full understanding and adaptability across a wide range of tasks. Current AI models, including those developed by OpenAI, are highly advanced but still fall under the category of narrow or weak AI, meaning they excel in specific tasks but lack the broad, general intelligence characteristic of AGI.

A more accurate title could be: "OpenAI Advances Large Model Training, Moving Closer to AGI."

SBLP
Author

Where and when was this lecture? Did you just steal it?

dannyisrael
Author

I live in Egypt, persecuted by the family of Kamal Ahmed Morsi and his children, who abuse their power to prevent me from marrying any woman I choose of my own free will ... For more than 20 years I have been unable to marry because of this family's persecution of me, and all of this happens in full view and hearing of the whole country, Egypt.

sonasmart
Author

What's the point in using subtitles if they're completely wrong every other line?

CatsAreRubbish
Author

How the heck does he talk in such a deterministic manner? There are no umms, uhhs, etc. in the talk, man.

sushantpenshanwar
Author

This talk is from *2018.* The original stream is on the Berkeley EECS (Electrical Engineering & Computer Sciences) YouTube channel.

This channel, Me&ChatGPT, can be ignored. It's rubbish.

CatsAreRubbish
Author

So how close is AGI to making video games? Do we need AGI for that, and if so, how close are we? AI agents can already reason, code, program, script, and map. So for games, break the work down: do the art assets, do the long-term planning. With better reasoning an agent could make a game rather than just write it out, or be able to put those ideas into reality: playing and making games.

kellymaxwell