OpenAI o1 explained | Thinking Fast and Slow

In this video, we explain what OpenAI o1 is and what makes it different from GPT-4o.

We also look at some exciting applications of OpenAI o1 and try to understand how o1 works.

=================================================
Vizuara philosophy:

As we work through AI/ML/DL material, we will share thoughts on what is actually useful in industry and what has become irrelevant. We will also point out which subjects contain open areas of research, so interested students can start their research journey there.

If you are a student who is confused or stuck in your ML journey, perhaps courses and offline videos are not inspiring enough. What might inspire you is watching someone else learn and implement machine learning from scratch.

No cost. No hidden charges. Pure old school teaching and learning.

=================================================

🌟 Meet Our Team: 🌟

🎓 Dr. Raj Dandekar (MIT PhD, IIT Madras department topper)

🎓 Dr. Rajat Dandekar (Purdue PhD, IIT Madras department gold medalist)

🎓 Dr. Sreedath Panat (MIT PhD, IIT Madras department gold medalist)

🎓 Sahil Pocker (Machine Learning Engineer at Vizuara)

🎓 Abhijeet Singh (Software Developer at Vizuara, GSOC 24, SOB 23)

🎓 Sourav Jana (Software Developer at Vizuara)
Comments

Thank you, sir, for giving your time and knowledge to us.

damakoushik

AI is amazingly scary! At this speed, it could prepare a thesis representing five years of PhD work in less time than my 3-minute thesis presentation!

parthaojha

How much compute power would they need to incorporate RLHF and CoT in every request? I wonder how much multi-agent work is happening here. Thank you for your awesome video. Nice to see an actual breakdown instead of just hype talk.

helrod

Firstly, thank you for sharing your thoughts on the OpenAI o1 preview with a detailed explanation. As we know, LLMs rely on human feedback to retrain the model and produce better output. What if many humans intentionally tell the LLM that a wrong answer is the correct one? How can we solve that problem?

umamaheswarareddymusirika

Thank you, sir, for these videos! It would help a lot if you could make one on Small Language Models (SLMs) and actually build one from scratch.

atharvaarya

Please share your thoughts on the Llama 3.2 model and compare it with the OpenAI o1 preview and Google Gemini 1.5.

umamaheswarareddymusirika