All publications

Meet Llama 3.1

How can changes to the Llama 3 tokenizer help drive down inference costs? #llama3

What does a larger vocabulary enable in Llama 3? #llama3

Understanding the Meta Llama 3 Tokenizer | Llama for Developers

Run Llama 3 on Windows | Build with Meta Llama

More ways to run Llama 3 | Build with Meta Llama

Run Llama 3 on Mac | Build with Meta Llama

Run Llama 3 on Linux | Build with Meta Llama

10 Years of Advancing the State-of-the-Art through Open Science at FAIR | AI at Meta

What does Meta Scalable Video Processor enable?

Architecture of Meta's First-Generation AI Inference Accelerator

Meta's Research SuperCluster enables new, breakthrough research

Next-Generation Datacenter Designs

Three challenges addressed by MTIA

MSVP Improves Quality and Compression Efficiency at Meta's Scale

Research SuperCluster is one of the fastest AI supercomputers in the world

Reimagining Meta's Infrastructure for the AI Age | AI at Meta

AI Infra @Scale | AI at Meta

MSVP - Meta's First In-House Silicon for Video Processing | AI at Meta

MTIA - Meta's First-Generation AI Inference Accelerator | AI at Meta

Accelerating Research with Meta's Research SuperCluster | AI at Meta

Segment Anything Model - A Promptable Segmentation System #Shorts

Origin Stories | AI at Meta

Inside the Lab: Building for the metaverse with AI (2022) | AI at Meta