Groq LPU™ Inference Engine Better Than OpenAI ChatGPT And NVIDIA

Groq is on a mission to set the standard for GenAI inference speed, helping real-time AI applications come to life today.
An LPU Inference Engine, with LPU standing for Language Processing Unit™, is a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component to them, such as AI language applications (LLMs).
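For readers who want to try it: Groq exposes an OpenAI-compatible REST API, so a chat completion can be requested with plain HTTP. The sketch below is a minimal example, assuming the `https://api.groq.com/openai/v1/chat/completions` endpoint and a `GROQ_API_KEY` environment variable; the model name is illustrative, so check Groq's console for currently available models.

```python
# Minimal sketch of calling Groq's OpenAI-compatible chat completions
# endpoint. Endpoint URL and model name are assumptions; verify them
# against Groq's own documentation before use.
import json
import os
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"


def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str, model: str = "mixtral-8x7b-32768") -> str:
    """Send the prompt to Groq and return the first choice's text."""
    payload = build_chat_payload(model, prompt)
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__" and "GROQ_API_KEY" in os.environ:
    print(chat("Explain LPU inference in one sentence."))
```

Because the API mirrors OpenAI's request and response shapes, the same payload works with the official `openai` Python client by pointing its `base_url` at Groq.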
--------------------------------------------------------------------------------------------
Support me by joining the membership so that I can upload these kinds of videos
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End To End RAG LLM App Using LlamaIndex And OpenAI - Indexing And Querying Multiple PDFs

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments
Author

Probably the most important video for anyone who is preparing for Generative AI :)

krishnaik
Author

I wish I could give you 1000 likes.
Thanks for making these videos!

utkarshkapil
Author

Krish! Your videos are amazing, and I learned the most about LangChain from your videos. Thank you!

shalabhchaturvedi
Author

Thank you, Krish sir! Your videos are amazing...

dhanashrikolekar-je
Author

There are no words to describe your content ✨️ 😌 it's amazing

vishnuannavarapu
Author

We are waiting for that video.. as beginners, we need that video a lot..

BharathPARUL
Author

One like from me for choosing such amazing topics.. eagerly waiting for your next video on this, with code for me to try 👍

bikrampani
Author

Amazing, Krish. Thank you for sharing about Groq. I came to know the importance of inference. Hope you will be uploading projects using the Groq inference engine soon 😊.

badrinarayanans
Author

Hi @krishnaik sir, this is personal feedback. I have been following your videos and your amazing way of explaining things.
Based on that, I enrolled in one of the courses (Generative AI) at iNeuron. Though the topics covered are great, you definitely need to have a look at the articulation skills of the instructor. The quality is shockingly low, and the same is visible in the freeCodeCamp video as well. They just throw around words while explaining key concepts, and not very clearly at that.

You are such a great teacher with extraordinary knowledge, and we all believe in your brand, but you should look into this, as this was not a great experience.

ankitpersie
Author

NSA is already ahead of the race. They are invisible and invincible. 😊

AravindBadiger
Author

Amazing!! Eagerly waiting for the project

yatinkundra
Author

Thank you sir for sharing this video ❤❤

HariKrishna-ylyi
Author

Sir, I feel Ollama was trying to achieve the same thing in terms of inference time. It would be amazing to watch a real-time inference comparison between Ollama and the Groq LPU on top of open LLM model(s)

seemanshushukla
Author

Hi Krish, thank you for your work. Could you make a video on fine-tuning a multimodal model, for example for the medical domain?

robinchriqui
Author

I gave you a like, sir. Thank you so much for such amazing videos.

CodeWithHimanshu
Author

Hello Sir, thanks for this video ❤❤
Kindly also teach us LlamaParse, Qdrant, Nomic embeddings, etc. for RAG systems

SantK
Author

Many companies prefer not to utilize third-party APIs due to concerns about potential data leakage. I tried using the Replicate API to use Llama 2, but they rejected the proposal for this reason.

Arun-znvd
Author

Sir, you said the next video would be an AI engineer roadmap with resources...

BharathPARUL
Author

How can high school students land an internship or job in generative AI, or in AI as a whole?

Ishaheennabi
Author

Excellent video, Krish, I'd love to have access to the project that you mentioned. Please let me know if it's already available somewhere.

mahtabsoin