Ilya Sutskever Finally Reveals What's Next In AI... (Superintelligence)



Links From Today's Video:

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

Music Used

LEMMiNO - Cipher
CC BY-SA 4.0
LEMMiNO - Encounters

#LLM #LargeLanguageModel #ChatGPT
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Comments

They're not worried AI will destroy humanity - they're terrified it'll set us free 😏

passiveftp

The smarter AI gets, the less hair Ilya has.

djpuplex

Please do not add background music. I know you are trying new things, but it really doesn't help.

WiseMan_

Come on Sutskever…
We need ASI!!!
And you are the man who can deliver it to us!

SuperStargazer

The secret sauce to AI always had Ilya's hair in it. The smarter it gets, the more hair it needs 🤫

apo

For pity's sake, lose the background music; we have our own music if we want music. Two types of music playing at once is damaging, and your choice of music is not needed.

yub-

The missing link causing the slowdown in AI models' intelligence is the lack of training sets originating in human environments. The lack of stereo vision and scale causes glitches in the recreation of visual artifacts (six-fingered humans, facial morphing of the same character...). The models also lack physical interaction and true daily intellectual communication in a physical context. This will only be improved through the introduction of robots into human environments or through training with advanced synthetic data. Another challenge lies in the question-answer format: the model forms an answer based solely on the question. Few models ask for clarification about missing axioms or ambiguity in the question.

PierreH

Not a fan of the music in the background, kinda distracting for me.

cdyanand

AI needs to be able to think independently, without relying on user prompts.

dfastcf

I find the background music very distracting.

brandon

It's crazy to see how much we as humans have progressed: 200-300 years ago people were dying before turning 35, and now we are basically on the verge of creating something similar to a god. That's one hell of a speedrun, ngl.

Barrel_Of_Lube

"If you're wondering who Reuters are"... buddy, were you just born? Reuters, AP, AFP, do any of those ring a bell? Reuters is one of the biggest news agencies in the world, feeding information to (literally) thousands of journals and media companies. It's not some random website; they have ~25k employees and are the *original source* for a huge number of the "news" reports you hear or read about.

desmond-hawkins

Why the crap music droning on in the background? Also, it isn't going to be as simple as you make out, particularly because the o1 series still uses the same old LLM structure. And biological intelligence is not based on LLMs. We need a total paradigm shift in our models.

MikeKleinsteuber

Can't wait to see how Aliagents evolves; this project has serious potential in the AI space.

EricCooleric

Aliagents is leading the way with their unique approach to tokenized AI systems

MillerGraph

It is time to learn the Law of Correspondence. This law is used to understand phenomena beyond our comprehension by drawing analogies with what we already know.

Fascination is more important than science fiction.
Step 1: Temporal precedence
Step 2: Non-spuriousness

Cory-vw

It’s just gonna get smarter and smarter and smarter and smarter!

aixtechofficial

The progress Aliagents is making in AI is worth paying attention to; the future is bright.

MarkWether

I have a suggestion for the music: compress the dynamics and equalize the tone.
- Compress: use TDR Kotelnikov (free). Set the threshold to the lowest level of the music, then set the ratio to 7:1 and bring the makeup gain up to 10 dB below the voice. No more distracting volume changes.
- Equalize: use an EQ plugin (VST or one built into your video editor) to lower the frequencies where your voice sits, roughly 1-3 kHz. You can cut the low and high end as well to make the music more subtle.
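The compression half of the suggestion above can be sketched in a few lines. This is a minimal, hypothetical static compressor (threshold, 7:1 ratio, makeup gain), not the algorithm TDR Kotelnikov actually uses; real plugins also smooth the gain with attack/release envelopes, which this sketch omits. The function name and defaults are illustrative assumptions.

```python
import numpy as np

def compress(signal, threshold_db=-40.0, ratio=7.0, makeup_db=0.0):
    """Static (sample-wise) downward compressor sketch.

    Levels above `threshold_db` are pulled toward the threshold by
    `ratio`; `makeup_db` of gain is applied afterwards.
    """
    eps = 1e-12  # avoid log10(0) on silent samples
    level_db = 20.0 * np.log10(np.abs(signal) + eps)
    # Decibels of overshoot above the threshold (0 when below it).
    over_db = np.maximum(level_db - threshold_db, 0.0)
    # A 7:1 ratio keeps 1/7 of the overshoot, removing the rest.
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return signal * 10.0 ** (gain_db / 20.0)
```

With these defaults, a full-scale sample (0 dBFS) is 40 dB over the threshold and gets roughly 34 dB of gain reduction, while quiet samples below -40 dBFS pass through untouched, which is why the loudness swings stop being distracting.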

johnford

I'm not buying this "we are hitting limitations" BS. We aren't even close to maxing out current models. It takes models as large as ChatGPT just to get the majority of facts right and not hallucinate; we need even bigger models. Then we need to let these models synthesize new data. For instance, ask one to generate a list of five random pairs of unrelated things, then have it list their similarities. It will do so, and elegantly, if using GPT or Claude. That is new, usable data, not garbage. These models need to be told to do that, and a new model should be built to include all of it. We have barely begun to try to max them out. Unimaginably more compute and storage are needed. Current tech and models could be used to build AGI today. I predict we may have something we call AGI once the Blackwell architecture comes online, possibly early next year.

jamiethomas