Flamingo: Visual Language Model for Few-Shot Learning

Flamingo is a family of Visual Language Models. It includes key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to this flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endowing them with in-context few-shot learning capabilities. Flamingo models are evaluated on open-ended tasks such as visual question answering, where the model is prompted with a question that it must answer; captioning tasks, which evaluate the ability to describe a scene or an event; and closed-ended tasks such as multiple-choice visual question answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
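
To make the few-shot prompting concrete, below is a minimal Python sketch of how an interleaved image/text prompt for visual question answering might be assembled. The `Image` placeholder, the `FlamingoModel` loader, and its `generate` method are hypothetical stand-ins, not the paper's released API; only the general prompt format of k support examples followed by an open-ended query reflects how the paper adapts the model to new tasks.

```python
# Hedged sketch: assembling a Flamingo-style few-shot VQA prompt.
# Everything model-related here is hypothetical; only the interleaved
# image/text prompt structure follows the paper's description.

from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Image:
    """Placeholder for a decoded image (e.g., an HxWx3 array)."""
    path: str

# A prompt is an arbitrary interleaving of images and text, which is the
# input format Flamingo is trained to handle.
PromptItem = Union[Image, str]

def build_few_shot_vqa_prompt(
    examples: List[Tuple[Image, str, str]],  # (image, question, answer) support set
    query_image: Image,
    query_question: str,
) -> List[PromptItem]:
    """Interleave k support examples with the query, mirroring how a single
    Flamingo model is adapted to a new task purely via prompting."""
    prompt: List[PromptItem] = []
    for image, question, answer in examples:
        prompt += [image, f"Question: {question} Answer: {answer}"]
    # The query ends with an open answer slot for the model to complete.
    prompt += [query_image, f"Question: {query_question} Answer:"]
    return prompt

# Hypothetical usage: two-shot visual question answering.
support = [
    (Image("cat.jpg"), "What animal is this?", "a cat"),
    (Image("beach.jpg"), "Where was this taken?", "at a beach"),
]
prompt = build_few_shot_vqa_prompt(support, Image("query.jpg"), "What is shown?")
# model = FlamingoModel.load("flamingo-9b")  # hypothetical loader
# answer = model.generate(prompt)            # completes the final "Answer:"
```

The design point is that no gradient updates are needed: the same frozen model handles open-ended and closed-ended tasks, with the support examples alone telling it what task to perform.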

In this video, I will talk about the following: What tasks can Flamingo models do? What is the architecture of Flamingo models? How do Flamingo models perform?

Alayrac, Jean-Baptiste, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc et al. "Flamingo: a visual language model for few-shot learning." Advances in Neural Information Processing Systems 35 (2022): 23716-23736.