OpenAI's Q-STAR Has More SHOCKING LEAKED Details! (OpenAI Q*)


Links From Today's Video:

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

#LLM #LargeLanguageModel #ChatGPT
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
Comments

🎯 Key Takeaways for quick navigation:

00:55 *🧠 Q-STAR aims to enhance dialogue systems through energy-based models, shifting focus from token-prediction methods to mimicking human thought processes during complex problem-solving.*
02:05 *⚙️ Q-STAR's core operates by assessing response compatibility through energy levels, allowing holistic evaluation beyond token predictions.*
03:13 *🔄 Q-STAR's training involves minimizing energy for compatible response pairs while ensuring higher energy for incompatible pairs, promising more efficient dialogue generation (a toy sketch follows this comment).*
04:08 *🚀 Q-STAR's approach represents a departure from traditional language modeling, offering a potentially more powerful method for generating dialogue responses and human-like reasoning.*
08:35 *💡 Energy-based models measure how suitable an answer is for a prompt, aiding in generating optimal responses by minimizing energy through an optimization process.*
11:08 *❓ While the Q-STAR leak's authenticity is questionable, it suggests OpenAI is actively pursuing breakthroughs in dialogue systems, aligning with past statements and research on energy-based models.*
13:30 *📈 Meta's involvement in energy-based models, as confirmed by Yann LeCun, hints at ongoing advancements in dialogue systems beyond OpenAI's efforts, promising further insights into this technology.*

Made with HARPA AI
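
To make the 03:13 takeaway concrete, here is a minimal, hypothetical sketch of contrastive energy-based training in PyTorch. None of this is OpenAI's actual code: the tiny `EnergyModel`, the margin value, and the random stand-in embeddings are all assumptions made for illustration.

```python
# Hypothetical sketch of contrastive EBM training -- illustrative only.
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """Scores a (prompt, response) pair; lower energy = more compatible."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, 1),
        )

    def forward(self, prompt: torch.Tensor, response: torch.Tensor) -> torch.Tensor:
        # Concatenate the two embeddings and map them to a scalar energy.
        return self.net(torch.cat([prompt, response], dim=-1)).squeeze(-1)

def contrastive_loss(model, prompt, good, bad, margin: float = 1.0):
    """Push compatible pairs toward low energy, incompatible pairs toward high."""
    e_good = model(prompt, good)  # should end up small
    e_bad = model(prompt, bad)    # should end up large
    # Hinge: penalize whenever the good/bad energy gap is below the margin.
    return torch.relu(margin + e_good - e_bad).mean()

# Toy training step with random vectors standing in for real embeddings.
model = EnergyModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
prompt, good, bad = (torch.randn(8, 512) for _ in range(3))
opt.zero_grad()
loss = contrastive_loss(model, prompt, good, bad)
loss.backward()
opt.step()
```

Note that only the ranking matters here: compatible pairs just have to score lower than incompatible ones, which is exactly what the hinge term enforces.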

-Evil-Genius-

I've done NLP for years, and I did an Energy Systems concentration in Mechanical Engineering. This makes a lot of sense given the recently released paper. I've even thought of approaches like this.

xenophobe

It seems to me that looking at the whole picture and iterating over it must surely be a fundamentally better approach than predicting the next piece of the puzzle, at least when it comes to quality of output. It would probably take far more compute to reach the same speed, though.

ishi...

This EBM is exactly what Yann LeCun said would be a better alternative to LLMs in his podcast with Lex Fridman. I believe he said that it would be very inefficient to train, so perhaps OpenAI has found a feasible way to do it. EDIT: I saw you mentioned it later in the video.

torarinvik

...what happened to Q* being able to do graduate-level math??? So nobody knows... whatever...

GBuckne

"we're not ready to talk about this" Damn right you're not, running inference probably costs me more than my rent LMAO

Roboss_Is_Alive

I don't watch a video unless it's shocking.

gregmatthews

Sounds like a good application for a quantum computer, considering how they work.

Doubter

Just like neurons in our brain, Q-Star considers multiple possible answers in parallel within its abstract representation space. This is akin to our brains activating and evaluating different neural pathways simultaneously when faced with a situation or prompt.

The energy scoring system that Q-Star uses to assess the compatibility of each potential answer is reminiscent of how our brains reinforce and optimize neural pathways through long-term potentiation (LTP) and long-term depression (LTD). Answers with lower energy scores in Q-Star are more "compatible" and preferred, just like our brains tend to favor and strengthen neural pathways that lead to successful outcomes or align with our goals and experiences.

Furthermore, the autoregressive decoder that Q-Star employs to translate the optimal abstract representation into a coherent textual response is analogous to how our brains ultimately settle on a specific thought, behavior, or response by allowing the neural signals to propagate along the most reinforced and least resistant pathway.

So, while the specifics of the technology differ, the overarching principles of Q-Star's energy-based model do seem to parallel the way our brains process information, consider multiple possibilities, reinforce successful pathways, and ultimately arrive at a response that is most compatible or optimal given the circumstances.

Just as our brains continuously learn, adapt, and refine their neural pathways through experience and feedback from the environment, Q-Star's energy-based model and its ability to evaluate and choose the most compatible response could potentially be further optimized and refined through training on more diverse dialogue data.

In essence, both systems leverage the principles of reinforcement, optimization, and the selection of the "path of least resistance" or highest compatibility to arrive at their respective outputs, whether that's a textual response or a thought, behavior, or action.
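
As a toy illustration of the "evaluate many candidates in parallel, keep the most compatible" idea in the comment above (not Q-Star's actual mechanism; the cosine-based energy function and the vector sizes are invented for the example):

```python
# Illustrative only: pick the lowest-energy candidate among several.
import torch
import torch.nn.functional as F

def energy(prompt: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    # Stand-in scorer: negative cosine similarity, so better aligned = lower energy.
    return -F.cosine_similarity(prompt.unsqueeze(0), candidates, dim=-1)

def best_response(prompt: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    # Score all candidates at once and keep the one with minimum energy,
    # mirroring "activate many pathways, reinforce the most compatible one".
    return candidates[energy(prompt, candidates).argmin()]

# Toy usage: random vectors stand in for abstract candidate representations;
# a real system would then decode the winner into text autoregressively.
prompt = torch.randn(512)
candidates = torch.randn(16, 512)  # 16 hypothetical candidate "thoughts"
winner = best_response(prompt, candidates)
```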

mylesanthony

"...it's just been floating around on the internet..." *rolls eyes*

zalzalahbuttsaab

Maybe Q* is a manifestation of how the human brain works. If we're talking about minimum energy levels in Q*, this might be an analogue of the minimum-energy landscape in quantum computing, where the solution to a problem is described as the minimum-energy state of a set of entangled qubits. Perhaps the human brain uses microtubules to run quantum computations in its own learning algorithm. This may also be why the human brain is so efficient at learning.

dazhoiden

Robot at my doorstep: "You are needed at the protein plant."
Me: "Oh, yeah, totally not shocking. Can I bring my toothbrush?"
"You won't be needing your toothbrush.
But you will be stunned."

gammaraygem

EBMs are shocking. LLMs were yesterday's shocker. All stunning!

dennis

Insane explanation. Thank you so much for giving us easy access to knowledge about how Q-Star works.

dimakasenka

Why aren't we drawing the seemingly obvious connection between Q* and Quiet-STaR? Seems shockingly obvious.

TeamLorie

Just remember: they actually showed us what Sora can do, so whatever they're not showing us is more impressive than Sora.

homewardboundphotos

He shut down the Q* question so quickly, haha.

DiceDecides

This describes context designs in OpenAI's cognitive AI roadmap, used in building superintelligence. Same reasoning approach as human cognition. It extends induction and attention circuits beyond simple token prediction. Dig deeper. The key is in the design of relevance.

exacognitionai

Reasoning as an optimization problem. Interesting.

szymskiPL

Dig this stuff! Consider slapping a compressor on your voice, since the volume is all over the place.

the_jawker