OpenAI And Microsoft Just Made A Game-Changing AI Breakthrough For 2025



Links From Today's Video:

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

Music Used

LEMMiNO - Cipher
CC BY-SA 4.0
LEMMiNO - Encounters

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Comments

Just like my wife: infinite memory… never forgets what I've done!

adg

One more step on the path to AI waifus.

velteau

The problem is not the amount of memory, but the weighting and ordering of all this information.

nilsmach

OpenAI already gave the AI the ability to reflect; if they now add long-term memory, they will be just a few inches away from AGI.

ImpChadChan

That's so interesting. With 3.5, I worked with it to create a coding system at the end of each output, summarizing the conversation to create a form of continuity, extend that (then tiny) context window, avoid hallucinations, etc. With 4o, I've been able to upload the entire history of all my past conversations as a single collective doc, which 4o can reference, which has been amazing. So I'm looking forward to an "infinite" context window. Hallucinations are so 2022-23.

YogonKalisto
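
The rolling-summary trick this comment describes can be sketched roughly as follows. This is a minimal illustration, not anyone's actual implementation; `call_llm` is a hypothetical stand-in for whatever chat API is used:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM API call."""
    raise NotImplementedError

def chat_with_rolling_summary(user_turns: list[str]) -> list[str]:
    summary = ""              # carried forward instead of the full transcript
    replies = []
    for turn in user_turns:
        prompt = (
            f"Conversation summary so far:\n{summary}\n\n"
            f"User: {turn}\nAssistant:"
        )
        reply = call_llm(prompt)
        replies.append(reply)
        # Fold the newest exchange back into the summary (lossy on purpose),
        # so continuity survives even a small context window.
        summary = call_llm(
            "Update this summary with the new exchange, in under 200 words.\n"
            f"Summary: {summary}\nUser: {turn}\nAssistant: {reply}"
        )
    return replies
```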

10:05 I can tell you exactly what this means. It means that the models weren't made with any self-reflection capabilities. You can prompt your way to better results, but these are baseline prompt>>action with nothing in between. Everyone is going to be flabbergasted by how well o1 performs in this area.

Ricolaaaaaaaaaaaaaaaaa

Recursive self-improvement seems pretty obvious as the next step. It's basically a hive mind, like the Borg in Star Trek.
Think about it: if there is infinite memory capable of summarizing stuff, that AI can tell newer AI agents what happened before, and they can take it from there.
E.g., no one remembers what their first spoken words were, but your mother or father probably remembers and could tell you 20 or 30 years later.

vincend
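
A toy sketch of the hand-off imagined above, where an outgoing agent condenses what it learned into a briefing for its successor instead of passing the raw transcript. The `Agent` class and `hand_off` helper are purely illustrative, not any real framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.memory.append(event)

    def briefing(self) -> str:
        # In practice this would be an LLM summarization call; joining the
        # log keeps the sketch self-contained.
        return " | ".join(self.memory)

def hand_off(old: Agent, new_name: str) -> Agent:
    # The successor starts from the predecessor's condensed memory.
    successor = Agent(name=new_name)
    successor.remember(f"Briefing from {old.name}: {old.briefing()}")
    return successor

gen1 = Agent("gen1")
gen1.remember("user prefers concise answers")
gen1.remember("project uses Python 3.12")
gen2 = hand_off(gen1, "gen2")
print(gen2.memory)  # ['Briefing from gen1: user prefers concise answers | project uses Python 3.12']
```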

I think they are overselling it. A summary notepad? They act like AI doesn't mess up summaries or confuse context, and actually pays attention to the info it has. The current GPT will ignore what it "knows" in a prompt if it feels like the context doesn't match the question you ask.

private_citizen

I came up with a concept of layered context entities and this is essentially that.

Rolyataylor

Something I suggested a few years ago when building AI was exactly this: AI doesn't need to remember every single word verbatim. It needs to take the information block, use the same LLM to summarize it, then set that summary aside as a context block. Of course, this wouldn't be perfect because it's lossy; however, it's reversible on demand, which is to say: take the summary and expand on it with the LLM for details as needed.

WillBurns
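
The summarize-then-expand memory described above might look roughly like this. In this sketch the original blocks are kept alongside their summaries so expansion is exact, whereas the comment proposes re-expanding with the LLM itself; `summarize` is a placeholder, not a real API:

```python
def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call (assumption)."""
    return text[:80] + ("..." if len(text) > 80 else "")

class SummaryMemory:
    def __init__(self) -> None:
        self._originals: dict[int, str] = {}
        self._summaries: dict[int, str] = {}

    def store(self, text: str) -> int:
        key = len(self._originals)
        self._originals[key] = text              # kept so expansion is exact
        self._summaries[key] = summarize(text)   # what actually enters the prompt
        return key

    def context(self) -> str:
        # Cheap context: summaries only.
        return "\n".join(self._summaries.values())

    def expand(self, key: int) -> str:
        # Reverse the lossy step on demand; the comment above would instead
        # ask the LLM to re-expand the summary into details.
        return self._originals[key]
```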

I think the solution is self-assessment. If AI could self-assess its output, we wouldn't need to scale (i.e., make slower and more expensive models). Self-assessment would require test-time scaling; or rather, the more you scale a self-assessing system at test time, the smarter it will get.
Self-assessment should be solvable with memory and CoT-type solutions.

actorjohanmatsfredkarlsson
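
A rough sketch of the self-assessment loop proposed above: the model drafts an answer, grades its own draft, and retries until the grade clears a bar or the test-time budget runs out. `generate` and `self_assess` are hypothetical LLM calls, not a real SDK:

```python
def generate(task: str, feedback: str = "") -> str:
    """Hypothetical LLM call that drafts an answer, optionally using a critique."""
    raise NotImplementedError

def self_assess(task: str, draft: str) -> tuple[float, str]:
    """Hypothetical LLM call returning (score in [0, 1], critique)."""
    raise NotImplementedError

def answer_with_self_assessment(task: str, budget: int = 4, bar: float = 0.8) -> str:
    feedback = ""
    best_draft, best_score = "", 0.0
    for _ in range(budget):          # a larger budget means more test-time scaling
        draft = generate(task, feedback)
        score, critique = self_assess(task, draft)
        if score > best_score:
            best_draft, best_score = draft, score
        if score >= bar:
            break
        feedback = critique          # feed the critique into the next attempt
    return best_draft
```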

We also need AI to be proactive rather than simply reactive...

MikeKleinsteuber

I know the models are progressing fast, but I am still very impatient.

Whenever a new model comes out, I feel excited for a day. But then after a day, I feel as if it's Stone Age technology.

I want superintelligence now.

Atheist-Libertarian

There are so many questions about security. If I am passing my whole code repo in the context, how do I make sure my IP is protected? They do have the capability to log my code base and, if it's unique, train the model on it, and then someone else will be able to use it. Also, how do I manage security when I am transferring all my data via HTTPS over the internet each time I send a request with rich context? It's a nightmare.

domenicorutigliano

The idea of *near-infinite* memory sounds interesting. However, how useful it will actually be in practice depends on how well it handles highly specific, dense information and uncertainty limits, especially in complex tasks like coding. Without more details, it remains vague. Introductory videos like this one say little about how it will truly perform. So, OK, fair, it's a promise, but not one I pin all my hopes on. More likely a handy improvement in real practice than a wonder trait! I found the last part of the video more interesting, for sure!

blijebij

It can improve exponentially? But there have to be limits to what it can do/know?

lis

Some models or AI tools already implement similar approaches by summarizing conversation history to reduce the context window. This works well in many cases, except in use cases like coding, where the content is extensive and should not be condensed or reduced, as doing so could break the code or remove functionality. Sometimes, I have to constantly remind the models to stop trying to reduce my $%#^! I believe it’s important to continue increasing the context window size rather than relying solely on workarounds for its limitations.

bladestarX
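
A minimal sketch of the selective summarization the comment above asks for: condense the prose in the history but pass code blocks through verbatim, so nothing load-bearing gets "helpfully" shortened. The `summarize` helper and the naive fence-based code detection are illustrative assumptions only:

```python
import re

FENCE = "`" * 3                      # a Markdown code fence, built this way only
                                     # to avoid embedding one literally in the sketch
CODE_BLOCK = re.compile(FENCE + r".*?" + FENCE, re.DOTALL)

def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call (assumption)."""
    return " ".join(text.split()[:40])

def compact_history(history: str) -> str:
    parts, last = [], 0
    for match in CODE_BLOCK.finditer(history):
        parts.append(summarize(history[last:match.start()]))  # condense prose
        parts.append(match.group(0))                          # keep code verbatim
        last = match.end()
    parts.append(summarize(history[last:]))                   # trailing prose
    return "\n".join(p for p in parts if p)
```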

While there's always some loss of information once you go past a certain compression threshold, if there is enough room to keep the compression light enough, in practice it might be close to indistinguishable from infinite. It's not going to be "photographic memory", though. And of course, it does depend on the compression/summarization not making any significant mistakes (which is not even the case with humans).

tiagotiagot

I realize it's just hyperbole, but anyone who says "near infinite" doesn't understand the concept of infinity. We generally expect more from technical people, as we should.

carlhoward

Our current power grid can't handle the amount of AI activity coming.

birsay