Llama 3 Groq vs MetaAI

Side-by-side demo of the new Llama 3 70B model from @AIatMeta running on the Groq LPU™ Inference Engine. Check out the speed: it is already benchmarking as the industry leader for performance.
Comments

Most of the time, Groq actually takes longer to reply because of this: "You are in the queue." If there is no queue, then Groq is the KING.

ChikadorangFrog

Honestly Groq is a revolutionary technology 🤯

yusufersayyem

I hope Groq implements Read Aloud, just like Copilot/Bing AI and Gemini.

ChikadorangFrog

Thank you Groq for always providing the world access to the best open source models at groundbreaking speeds.

zerobot_tech

Can I get one chip or two... or five. For free? Pretty please. :3

BooleanDisorder

When is streaming support for function calling coming?

ajaychandrasekaran

When will you start charging for this? It's free now, but I assume eventually you will start charging.

adamfilipowicz

A new Llama release doesn't mean faster result fetching... not a good comparison.

amjads

Groq is amazing, but we need fine-tuning so badly T_T

hocinehope

You need to run such a test with temperature zero so the results will be the same length.

sguploads
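The point above is sound: at temperature zero, sampling collapses to greedy argmax decoding, so repeated runs produce identical outputs and response-length differences stop being noise in a speed comparison. A minimal sketch of temperature sampling (the function name and logits are illustrative, not from either API):

```python
import math
import random

def sample_token(logits, temperature, seed=0):
    """Pick a token index from raw logits at a given temperature."""
    # Temperature 0 degenerates to greedy decoding: always take the
    # highest-logit token, so every run yields the same sequence.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax over temperature-scaled logits, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [1.0, 3.0, 2.0]
print(sample_token(logits, 0))  # -> 1 (the argmax) on every run
```

With any temperature above zero, the same logits can yield different tokens across runs, which is why two side-by-side generations at default settings can differ in length even on identical hardware.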

It is a few seconds faster; so what? How many real use cases care about this level of difference? Not to mention cost, throughput, and stability.

oooooo