LLaMA 405b is here! Open-source is now FRONTIER!

Here's a breakdown of LLaMA 3.1 release, including 405b and 8b's HUGE improvement.

(North America only)

Join My Newsletter for Regular AI Updates 👇🏼

My Links 🔗

Need AI Consulting? 📈

Media/Sponsorship Inquiries ✅

Links:
Comments

I already tested 405b with my LLM Rubric, how do you think it did? 😉

matthew_berman

Economists have been sounding off on just how bad they think the next downturn might be. I need ideas and advice on what investments to make to set myself up for retirement.

Tetsu-pg

Meta is the last company I would have imagined doing this.

piemasta

looks like Zuckerberg's ethernet cables are leaking light... and giving him a tan

ErnestZDodson

Zucc is looking more and more like a surfer bro

austinpatteson

Facebook, who stole our data, are now giving it back, so I'd say we're even.

PrincessBeeRelink

💥 A giant leap for the Open Source community. Many good products will come from it. 🎉❤❤❤

DihelsonMendonca

Zuc definitely hopped on the psychedelics or ketamine with Elon or something. He's evolving

rct

Zuck's AI implant is making him seem more human lately

PeterKato

Meta is democratizing the use of AI. Amazing. Greetings from Argentina

lucascostantini

Just please adjust your questions; LLMs are now trained to answer prompts like "code a snake game in python". You need to give harder questions, like "code a chess game in python" or "code a Go game in python".

gileneusz

Never thought I'd be rooting for the Zucc. This is awesome. Can't wait to try it out.

jeremybristol

I'm favoured, $27K every week! I can now give back to the locals in my community and also support God's work and the church. God bless America.

annellemiano

The synthetic data can be used to train a small model to be a specialist at a specific set of related tasks. Imagine your agent using a very small fine-tuned model for the task the agent is instructing it to perform. You could get better-than-frontier-model performance and better speed on a small set of tasks by having 100 3b models, each fine-tuned on a small set of tasks, paired with an agent architecture that matches problems with agent/model pairs.

JamesRogersProgrammer
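The "fleet of small specialists" idea above can be sketched as a simple router. Everything here is illustrative, not a real API: the model names, the `SPECIALISTS` registry, and the keyword-based `classify()` heuristic are all hypothetical stand-ins for whatever a real agent system would use.

```python
# Hypothetical sketch: an agent routes each task to a small fine-tuned
# specialist model instead of one giant frontier model.
# Model names and the classify() heuristic are made up for illustration.

from typing import Dict

# Registry mapping a task category to a (hypothetical) 3b specialist.
SPECIALISTS: Dict[str, str] = {
    "sql": "sql-specialist-3b",
    "summarize": "summarizer-3b",
    "code": "coder-3b",
}

def classify(task: str) -> str:
    """Toy keyword router; a real system might use a small classifier model."""
    lowered = task.lower()
    for category in SPECIALISTS:
        if category in lowered:
            return category
    return "code"  # fallback specialist

def route(task: str) -> str:
    """Return the specialist model chosen for this task."""
    return SPECIALISTS[classify(task)]

print(route("summarize this meeting transcript"))  # -> summarizer-3b
```

The point of the design is that each request only pays the inference cost of a tiny model, while the router (or agent) carries the job of matching problems to the right specialist.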

Zucc became a legend🙌 totally changed my mind about him 😊

andrew.derevo

I'm curious how much computational power is needed to support this model. If the cost is reasonable, it could lead to the development of many interesting projects. Meta has truly become the ambassadors of open-source AI, unlike OpenAI.

devdavkup
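For a rough sense of the computational cost being asked about: a back-of-envelope memory estimate, assuming weights dominate (it ignores KV cache and activations) and assuming 80 GB per H100, shows why 405b is hard to self-host.

```python
# Back-of-envelope memory estimate for serving a dense LLM.
# Simplifying assumption: weights dominate; KV cache and activations ignored.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory (GB) needed just to hold the weights."""
    return n_params * bytes_per_param / 1e9

# 405B parameters at fp16/bf16 (2 bytes) vs. 8-bit and 4-bit quantization.
for label, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = weight_memory_gb(405e9, nbytes)
    print(f"{label}: ~{gb:.0f} GB -> ~{gb / 80:.1f}x 80GB H100s for weights alone")
```

At fp16 that is roughly 810 GB of weights, i.e. a multi-GPU node before you even account for the KV cache, which is why most people will reach for quantized variants or hosted APIs.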

Same, I never would have thought Zuck would play such a fair game, but so far he has, and I'm happy to change my mind. Also, merci Yann LeCun!

vincentthomas

When I heard "16,000 H100 GPUs" I started freaking out like Doc Brown going, "1.21 GIGAWATTS!?"

JohnSmithAB

Zucc's move here is intelligent. The biggest limitation in models today is not hardware or the transformer software, but training data, which is either synthetic or costs a lot of money to curate. By creating a giant, performant model that is free to use, Meta is getting you and me to create curated examples and use cases of what's most valuable to train on.

NOTNOTJON

Am I starting to like Meta!? Thank you Zuck, and thank you Matt! ❤

MoDs_