GPT5 unlocks LLM System 2 Thinking?

Humans think fast & slow, but what about LLMs? How would GPT5 resolve this?
A 101 guide on how to unlock your LLM's System 2 thinking to tackle bigger problems

🔗 Links

⏱️ Timestamps
0:00 Intro
1:00 System 1 vs System 2
2:48 How do humans do System 2 thinking
3:33 GPT5 System 2 thinking
4:47 Tactics to enforce System 2 thinking
5:08 Prompt strategy
8:27 Communicative agents
11:03 Example: setting up communicative agents

👋🏻 About Me

#gpt5 #autogen #gpt4 #autogpt #ai #artificialintelligence #tutorial #stepbystep #openai #llm #chatgpt #largelanguagemodels #largelanguagemodel #bestaiagent #agentgpt #agent #babyagi
Comments

I appreciate your speaking speed and succinctness. This is the first video I haven't had to watch on 2x in ages. Thanks.

rickevans

Thanks for all your hard work producing these videos! I appreciate the speed; you are very concise, with no filler. I'd really like to see a more in-depth video about how to create new skills for these AutoGen agents. I feel like I'm missing something obvious about how to make them work well.

treadaar

As of now, the same behavior can be accomplished with CrewAI. Simply create a new task, set the process as sequential, and have the task be "Puzzle solver; when given a puzzle, pass it first to the Puzzle Solver, then to the Puzzle Reviewer." One thing I would also highlight about CrewAI is the ability to specify a backstory, where you can use emotion to improve performance, as shown by Li et al. in "Large Language Models Understand and Can be Enhanced by Emotional Stimuli". Great content!
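
A minimal sketch of that sequential solver-then-reviewer setup, assuming the CrewAI Python API (the roles, goals, backstories, and puzzle text are my own illustrations, and constructor arguments such as expected_output differ across CrewAI versions):

from crewai import Agent, Task, Crew, Process

# Two agents: one proposes a solution, one reviews it.
# CrewAI defaults to an OpenAI model via the OPENAI_API_KEY env variable.
solver = Agent(
    role="Puzzle Solver",
    goal="Solve the given puzzle step by step, showing all reasoning.",
    # Emotional stimulus in the backstory, per Li et al.
    backstory="You are a meticulous mathematician. Getting this right "
              "is vitally important to your career.",
)
reviewer = Agent(
    role="Puzzle Reviewer",
    goal="Verify the proposed solution by substituting it back into the puzzle.",
    backstory="You are a sceptical examiner who catches subtle errors.",
)

solve = Task(
    description="A bat and a ball cost $1.10 together; the bat costs $1.00 "
                "more than the ball. What does the ball cost?",
    expected_output="The ball's price, with worked reasoning.",
    agent=solver,
)
review = Task(
    description="Check the solver's answer; flag and correct any mistake.",
    expected_output="A verdict (correct/incorrect) with justification.",
    agent=reviewer,
)

# Process.sequential runs the tasks in order (solve first, then review),
# passing each task's output along as context for the next.
crew = Crew(agents=[solver, reviewer], tasks=[solve, review], process=Process.sequential)
print(crew.kickoff())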

alexandernanda

I always come back to your channel.. full of gems, you're a life saver 🙏🏽 thank you seriously

therdworlder

I really enjoyed this video, keep up the good work Jason!

AndresTan-uzql

Great content. Can't believe I'm just discovering this channel. I really believe this year or next we will get a localized approximation of AGI. System 2 thinking is core. Maybe extend Meta's System 2 Attention with gated differentiable memory? I envision this along with extremely robust in-context learning, linearized knowledge graphs, and extremely long context. We can achieve what most people believe digital AGI will look like, performance-wise. With graph-based prompting methods, we can exploit extended test-time compute and brute-force our way to solving most tasks, even expert-level tasks. That's why I believe smaller models are primed to be the majority winners, honestly; 99% of tasks don't require superhuman intelligence. A 10B model with the TinyLlama scaling law, trained on a phi-2-style dataset, would give you GPT-4.

Exciting times.

zandrrlife

Excellent information as always; I'll be testing this out this afternoon.

Andy-Fairweather

Very impressive, structured presentation and explanation of every concept and idea.

somewhere

Great video, love the practical demo!

MarkSze

Didn't realise there were so many similarities between how the human brain & LLMs work; great video!

jasonfinance

This is brilliant!
Also, small typo at 0:55: the RHS bat line should read 'x+1.0' :)
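
For context, the line at 0:55 appears to be the classic bat-and-ball puzzle from Thinking, Fast and Slow (an assumption on my part about what is on screen). With the ball priced at x and the bat costing $1.00 more (the corrected x + 1.0 line), the deliberate System 2 working is:

\begin{align*}
x + (x + 1.00) &= 1.10 \\
2x &= 0.10 \\
x &= 0.05
\end{align*}

System 1's instinctive answer is $0.10; working through the equation gives the System 2 answer of $0.05.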

ojsh_

Mamba (S6) models are the future imo. System 2 thinking requires knowing where you are in relation to where you have been and where you are going. Latent space is crucial!

Slappydafrog_

The multi-agent setup feels like the inner dialog that happens in your brain as you work through a problem. It is interesting that the same model can get the solution wrong, yet the self-check can prove a solution is good or bad. I wonder if it could figure out that there are 2 solutions as well.
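
A minimal sketch of that solve-then-self-check pairing, assuming the pyautogen API (the agent names, system messages, and puzzle text are my own; arguments such as max_turns vary across AutoGen versions):

import autogen

# Assumes OPENAI_API_KEY is set in the environment.
llm_config = {"config_list": [{"model": "gpt-4"}]}

solver = autogen.AssistantAgent(
    name="solver",
    system_message="You solve puzzles step by step, showing your reasoning.",
    llm_config=llm_config,
)

checker = autogen.AssistantAgent(
    name="checker",
    system_message=(
        "You verify proposed solutions by substituting them back into the "
        "original problem. If a solution fails, explain why and ask for a "
        "retry. Also check whether more than one solution exists."
    ),
    llm_config=llm_config,
)

# The checker opens the conversation with the puzzle; each turn it receives
# the solver's answer and either approves it or pushes back, mimicking an
# inner dialog between a fast guess and a deliberate review.
checker.initiate_chat(
    solver,
    message="A bat and a ball cost $1.10 together; the bat costs $1.00 "
            "more than the ball. What does the ball cost?",
    max_turns=4,
)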

GrindThisGame

I think I understand the problem (somewhat, at least): LLMs try to give an immediate answer to what you ask without thinking about the long term. For example, the other day I tried to create an app in Python using only ChatGPT for everything: creating and designing the folder architecture, going through each line of code, configuring the database, and even trying to deploy it all from bash. I clearly explained the conditions of the project and my resources to ChatGPT, but what it did was desperately try to program many files that it later forgot to name, or renamed, etc. I guess ChatGPT tried to do whatever it took to create the app, but it didn't think through the steps: it didn't create a centralized architecture or think about file names or how to connect them properly before starting to program them. It's like what happened with the students in the Veritasium video; they simply said the first thing they thought of, in an almost automatic or instinctive way. Right now I can be there to guide ChatGPT, but if they manage to overcome that barrier, programming will definitely take a back seat, at least non-super-specialized programming.

yeysoncano

Very interesting, thanks for your beautiful work!

xraymao

I am reading the book at the moment and had the same thoughts on it. And I think AutoGen 2 is the way to go at the moment. SUBSCRIBED!

iHERZmyiPhone

Great video, seems like this will be the next big thing!!

SHASHWATHPAIML--

I’ve been wanting to try figuring out how to get multiple GPTs to cooperate to achieve an objective or output for a while now. But just an hour ago I decided I was going to look further into how it could be done. Now I come across this video in my recommendations even _BEFORE_ I made any attempt to find a guide. Sometimes it really feels like the universe is trying to send me a message.

nemonomen

Excellent work, great dissemination - thank you

ShaneHolloman

This is very helpful. I was looking for an example of how to set up more deliberative thinking, with a walkthrough to hear the thought process.

ManiSaintVictor