Meta's STUNNING New LLM Architecture is a GAME-CHANGER!

The latest AI news. Learn about LLMs and Gen AI, and get ready for the rollout of AGI. Wes Roth covers the latest happenings in the world of OpenAI, Google, Anthropic, NVIDIA and Open Source AI.

My Links 🔗

#ai #openai #llm
Comments

Closer and closer to how we work. When we think of an argument, we don't think of 100 different words. We think of one point (concept) and start articulating around that.

BigSources

This is more than meets the eye. Concepts are also the building blocks of beliefs. An LCM with a "reasoning method" can compress concepts into beliefs (which can be re-evaluated when more concepts or better reasoning become available), which is just one step away from awareness. Even more, beliefs are great for alignment, because you can inspect the core "principles" that result from concepts and reasoning. Also, concepts can be expanded back down into words. This is revolutionary!

calinciobanu

"Do you have a plan to produce a sota AI model?"

"I have concepts of a plan" -Meta

BigSources

The way LLMs work, they already have to come up with concepts, but they were too "zoomed in." This method lets the models do what they already did, but with a bird's-eye view, by not forcing the model to think in tokens but a step above that. Imagine trying to understand the essence of a long sentence as a collection of fractured words instead of as one concept as a whole. It's a pretty beautiful idea, and it just makes sense; everything is obvious in retrospect.
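
[Editor's note: a minimal sketch of the token-level vs. concept-level contrast this comment describes. It assumes the third-party sentence-transformers package and the all-MiniLM-L6-v2 model as a stand-in sentence encoder; the Meta work uses its own purpose-built encoder, so this is purely illustrative.]

    # Token-level view vs. concept (sentence) level view of the same text.
    from sentence_transformers import SentenceTransformer

    text = ("The cat sat on the mat. It was warm in the sun. "
            "Later it wandered off to find food.")

    # Token-level view: the model must reason over many small fragments.
    tokens = text.split()  # crude whitespace "tokenization" for illustration
    print(len(tokens), "tokens:", tokens[:8], "...")

    # Concept-level view: one embedding vector per sentence, i.e. one "concept" each.
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    concept_vectors = encoder.encode(sentences)  # shape: (num_sentences, 384)
    print(len(sentences), "concepts, each a", concept_vectors.shape[1], "dim vector")

An LCM-style model would then predict the next vector in the concept sequence rather than the next token, and decode that vector back into text afterwards.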

anemoiaf

Great content, Wes. I really enjoy watching your videos as a way for me to keep up with all the recent advancements in the field. Keep up the good work.

trabpukcip

The LCM represents, I think, more compression in the neural network.

patrickmchargue

Can you please share links to the papers you show in the video?

vulon_

To me, this is one more building block on the road to "consciousness". I don't see it replacing the LLM approach; I see it as additive. Like the lobes of the human brain, imagine that you have simultaneous processing approaches to data and inputs that feed into an "overseer" or interpreter that then reasons out and summarizes the findings/output. (An LLM could be one "lobe", an LCM another, future models yet another, etc.) I think human consciousness emerges from this type of collaboration, and I believe consciousness in AI will similarly emerge from the whole being greater than the sum of many parts.

NorthernKitty

Yeah, I always thought that the main difference between humans and LLMs is that our words come from reasoning, while LLMs' reasoning comes from words.

algorithmblessedboy

Roman Outline Format is what I have always used when giving lectures. I think it is a good example for understanding an LCM.

This type of outline structure organizes points hierarchically using Roman numerals, capital letters, Arabic numerals, and lowercase letters. It’s commonly used in academic papers, formal reports, and legal documents. Here’s an example of how it’s structured:

Example:

I. Main Topic
   A. Subtopic
      1. Detail
         a. Sub-detail
         b. Sub-detail
      2. Another Detail
   B. Another Subtopic
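
[Editor's note: a rough, purely hypothetical sketch of the outline above as a nested data structure, to make the analogy concrete. Each entry stands for one "concept"; the words inside an entry are the token-level detail a plain LLM would have to work through one by one. This is illustrative only, not from the video or the paper.]

    # The outline above as nested data: headings are "concepts", leaves are details.
    outline = {
        "I. Main Topic": {
            "A. Subtopic": {
                "1. Detail": ["a. Sub-detail", "b. Sub-detail"],
                "2. Another Detail": [],
            },
            "B. Another Subtopic": {},
        }
    }

    def concepts(node, depth=0):
        """Walk the outline and print one line per concept, indented by depth."""
        if isinstance(node, dict):
            for heading, children in node.items():
                print("  " * depth + heading)
                concepts(children, depth + 1)
        else:  # a list of leaf entries
            for leaf in node:
                print("  " * depth + leaf)

    concepts(outline)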

jpoole

Keep doing what you are doing! This format (specifically, the highlighting of text as you speak) is an excellent way to learn in a field that is moving and changing faster than thought.

npecom

LOVE THIS APPROACH!! We can use atomic structure as an analogy, with letters being particles and tokens being atoms. The LCM approach appears to work at the molecule level.

RonBarrett

This, paired with a dynamic neural network (training on the fly while processing a prompt, as demonstrated in a recent research model), has huge potential.

djayjp

Now we need you to do jazz hands on something.

travisporco

This is the one. This is the next step, which will also change how we 'talk' to LLMs (or LCMs in this case), because, be honest, this whole prompting business is positively archaic.

BennySalto

This concept of training AI on concepts instead of just words is mind-blowing! It's like teaching it to think in ideas, not just vocabulary. The example of the researcher giving a talk and how the core concepts remain the same regardless of language is a perfect illustration. Great explanation!

genai-level-up

Here is a crazy idea: why stop at concepts? Let the network synthesize its own "tokens" and "layers of abstraction".

keffbarn

As a consultant thriving on the idea and practice of concepts, I’m loving this approach 👌🏻

jsivonenVR

Hmm, how does this differ from contextual embeddings after they go through the attention heads? A traditional LLM's embeddings already get enriched with contextual / semantic / conceptual meaning at that point, so I'm not seeing how this is different.

brennan

A different topic for today... 05:24 If you have ever had a discussion in which a person without an inner monologue tries to explain "thinking in concepts, not words," you will have encountered the mutual bamboozlement and near disbelief that follows. I suspect that this development will just be what those of us without an internal monologue have been expecting, and will be surprising to people who do "think in words".

Juttutin