OpenAI CLIP: Connecting Text and Images (Paper Explained)

#ai #openai #technology

Paper Title: Learning Transferable Visual Models From Natural Language Supervision
CLIP trains on 400 million (image, text) pairs scraped from the web to learn a model that connects the two modalities. The core idea is a contrastive objective combined with a large batch size. The resulting model can be turned into arbitrary zero-shot classifiers for new image & text tasks.
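
As a rough sketch (illustrative, not the paper's exact implementation), the contrastive objective amounts to a symmetric cross-entropy over an image-text similarity matrix. The batch size, embedding dimension, and fixed temperature below are placeholder assumptions, and random tensors stand in for the encoder outputs:

```python
import torch
import torch.nn.functional as F

batch_size, embed_dim = 8, 512                         # placeholder sizes, not the paper's config
image_features = torch.randn(batch_size, embed_dim)    # stand-in for image encoder output
text_features = torch.randn(batch_size, embed_dim)     # stand-in for text encoder output

# L2-normalize so the dot product is a cosine similarity
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)

# Similarity matrix scaled by a temperature (fixed here; learned in CLIP)
temperature = 0.07
logits = image_features @ text_features.t() / temperature   # shape [batch, batch]

# Matching pairs sit on the diagonal: classify row-wise (image -> text)
# and column-wise (text -> image), then average the two losses
labels = torch.arange(batch_size)
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
print(loss.item())
```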

OUTLINE:
0:00 - Introduction
3:15 - Overview
4:40 - Connecting Images & Text
9:00 - Building Zero-Shot Classifiers
14:40 - CLIP Contrastive Training Objective
22:25 - Encoder Choices
25:00 - Zero-Shot CLIP vs Linear ResNet-50
31:50 - Zero-Shot vs Few-Shot
35:35 - Scaling Properties
36:35 - Comparison on different tasks
37:40 - Robustness to Data Shift
44:20 - Broader Impact Section
47:00 - Conclusion & Comments

Abstract:
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on.
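
To make the zero-shot construction concrete, here is a hedged sketch using the open-source openai/CLIP package (github.com/openai/CLIP). The label set, the "a photo of a ..." prompt template, and the image path "example.jpg" are illustrative assumptions, not the paper's tuned prompt ensemble:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["dog", "cat", "banana"]                       # assumed label set
prompts = [f"a photo of a {name}" for name in class_names]   # simple prompt template
text_tokens = clip.tokenize(prompts).to(device)

# "example.jpg" is a hypothetical image path
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text_tokens)
    # Normalize so dot products are cosine similarities
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    # Scaled similarities -> probabilities over the class names
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```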

Authors: Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever

Links:

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Comments

This channel is insanely good. Deserves even more recognition. Great work! Subscribed <3

hmate

This is a really important paper; if you are short on time, I suggest paying particular attention to Yannic's "robustness to data shift" section. I hope we can get the authors on to discuss this!

MachineLearningStreetTalk

The idea is so simple that it's hard to believe it is this effective! Okay, I see: NLP is really useful in vision now.

ghostlv

Thank you so much for this, especially for not keeping the promise to cut the video short!

jonatani

Thank you so much for this video! It really helped me understand CLIP!
Best regards from Vienna!

alteshaus

Man, you have a talent for explaining hard things! And your English is awesome!!

yxnhtmn

Such a good video. I understand CLIP after your explanation.

charrylee

Interestingly, similar training methods have been explored in the field of information retrieval for finding documents relevant to a given query. So a good application of CLIP could probably be searching for a desired photo on the internet using a text query.

shengyaozhuang
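
(A minimal sketch of that retrieval idea, under the assumption that image and text embeddings come from CLIP's encoders: embed the gallery once, embed the text query, and rank by cosine similarity. Random tensors stand in for the actual embeddings, and the gallery size is an arbitrary assumption.)

```python
import torch
import torch.nn.functional as F

num_images, embed_dim = 1000, 512                                   # assumed gallery size / dimension
gallery = F.normalize(torch.randn(num_images, embed_dim), dim=-1)   # precomputed image embeddings
query = F.normalize(torch.randn(1, embed_dim), dim=-1)              # embedding of the text query

scores = (query @ gallery.t()).squeeze(0)   # cosine similarity of the query to every image
top_scores, top_idx = scores.topk(5)        # the 5 best-matching gallery images
print(top_idx.tolist(), top_scores.tolist())
```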

Absolutely loved the Alec meme, thanks!

MeatFingerSteam

Thanks a lot for this awesome video! The explanations are very digestible even for a beginner.

aminasadi

Dude, you are doing a great job. You're perfect for this work.

growthmpsfunnels

Trained on "the internet" - so technically speaking, it is a porn classifier, right? Unless it used a separate algorithm for "adult image filtering". Fascinating! (And funny!)

vsiegel

I can't thank you enough for making such useful videos.

jenishah

Imagine this but with more sensory data - audio, video, text, hell any string of bytes even. Wild...

jeshweedleon

In a 8 of 20 examples presented in this paper review is really measured by different compilers of models, but not only this same in 20, 45, 60 bites for a 1mm³ pixel outer the third output layer.

GGilbertProduction

New video from Yannic!!! Saved my day :D

florianhonicke

Just imagine a version of CLIP trained on random YouTube video frames + titles or subtitles.

Xaelum

Great video, thanks for sharing! Just one question of mine: why are we 100% sure that all these old, well-known datasets are not just subsets of the images CLIP was trained on?

ophir

Hey @YannicKilcher / all, it seems like OpenAI is only referring to performance on the banana class at 39:05 (figure 13), not claiming that zero-shot CLIP outperforms ResNet in general on ImageNet. Earlier in the paper (8:15) they achieve 40% accuracy on ImageNet. Is 39:05 (figure 13) showing 72% accuracy on bananas or overall?

keythacity

I'm missing your critique points a bit here! But thanks, a good intro to CLIP.

uniquedve