Amazing Milestone! Million Experts Model

A top researcher at Google DeepMind just released an important paper, “Mixture of A Million Experts.” As the title announces, it describes an approach that produced the first known Transformer model with more than a million experts.

For context, smaller models today typically use between 4 and 32 experts, while most of the bigger ones range up to 128.

This video reviews the Mixture-of-Experts method, including why and where it’s used and the computational challenges that come with it. Next, it summarizes the findings of another important paper from earlier this year, which introduced a new scaling law for Mixture-of-Experts models. That sets us up to review the “Million Experts” paper by Xu He.
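
For readers who want a concrete picture of the routing discussed in the video, here is a minimal sketch of a standard top-k Mixture-of-Experts layer in PyTorch. It is not the paper’s method; the class name, layer sizes, and the simple linear router are illustrative assumptions.

```python
# Minimal top-k Mixture-of-Experts sketch (illustrative, not the paper's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts)   # gating network scores every expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, num_experts)
        top_vals, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)    # mix only the k chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):               # each token runs k experts, not all of them
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(TopKMoE()(x).shape)   # torch.Size([10, 64])
```

The point of the sparsity is that compute per token stays roughly constant as the expert count grows, which is exactly what makes very large expert pools attractive.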

The video then describes two key strategies that enabled scaling to over a million experts by making each expert only a single neuron in size. Next, it shares a process map for the new approach and concludes with ideas about where this might be most relevant, including applications that involve continuous data streams.
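
To make the single-neuron-expert idea more concrete, here is a rough sketch of a layer that retrieves a few tiny experts, each just one hidden neuron (one down-projection vector and one up-projection vector), from a large pool. The flat key lookup shown here is the naive version that scores every key; the paper’s product-key retrieval exists precisely to avoid that cost. All names and sizes are assumptions for illustration.

```python
# Rough sketch of single-neuron experts retrieved from a large pool
# (illustrative; the retrieval is simplified relative to the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyExpertPool(nn.Module):
    def __init__(self, d_model=64, num_experts=10_000, k=16):
        super().__init__()
        self.k = k
        self.query = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(num_experts, d_model) * 0.02)
        # expert i is a one-hidden-neuron MLP: down[i], a nonlinearity, then up[i]
        self.down = nn.Embedding(num_experts, d_model)
        self.up = nn.Embedding(num_experts, d_model)

    def forward(self, x):                              # x: (tokens, d_model)
        q = self.query(x)
        scores = q @ self.keys.T                       # naive O(num_experts) scoring
        top_vals, top_idx = scores.topk(self.k, dim=-1)
        gate = F.softmax(top_vals, dim=-1)             # (tokens, k)
        w_down = self.down(top_idx)                    # (tokens, k, d_model)
        w_up = self.up(top_idx)                        # (tokens, k, d_model)
        h = F.relu((w_down * x[:, None, :]).sum(-1))   # (tokens, k) hidden activations
        return ((gate * h)[:, :, None] * w_up).sum(1)  # (tokens, d_model)

x = torch.randn(4, 64)
print(TinyExpertPool()(x).shape)   # torch.Size([4, 64])
```

Because each expert is so small, the pool can grow enormously while the per-token work stays limited to the handful of experts that are actually retrieved.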
Comments

Great video! Thank you for the clear explanation. The calm tone made it easier to understand.

hausenmusic

Here’s a link to the paper I featured in this video: “Mixture of A Million Experts,” by Xu He at Google DeepMind

AIMasterGroup