VirTex: Learning Visual Representations from Textual Annotations (Paper Explained)
Pre-training a CNN backbone for visual transfer learning has recently seen a big push in the direction of incorporating more data, at the cost of less supervision. This paper investigates the opposite: visual transfer learning by pre-training on very few, but very high-quality, samples via an image captioning task.
OUTLINE:
0:00 - Intro & Overview
1:00 - Pre-Training for Visual Tasks
3:40 - Quality-Quantity Tradeoff
5:50 - Image Captioning
8:35 - VirTex Method
14:30 - Linear Classification
20:30 - Ablations
22:05 - Fine-Tuning
25:45 - Attention Visualization
27:30 - Conclusion & Remarks
Abstract:
The de-facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.
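The transfer protocol the abstract refers to can be illustrated with the standard linear-evaluation setup: freeze the pretrained backbone, extract features, and train only a linear classifier on top. The sketch below is a minimal, hedged illustration (not the authors' code); synthetic Gaussian clusters stand in for features from a frozen backbone, and `linear_probe` is a hypothetical helper implementing plain softmax regression with gradient descent:

```python
import numpy as np

def linear_probe(features, labels, num_classes, lr=0.1, epochs=200):
    """Train a linear (softmax) classifier on frozen features,
    as in the linear-evaluation transfer protocol."""
    n, d = features.shape
    W = np.zeros((d, num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # softmax cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Synthetic stand-in for features produced by a frozen pretrained backbone:
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 8))
labels = rng.integers(0, 3, size=300)
features = centers[labels] + 0.1 * rng.normal(size=(300, 8))

W, b = linear_probe(features, labels, num_classes=3)
acc = (np.argmax(features @ W + b, axis=1) == labels).mean()
```

The key design point is that the backbone's weights never change during this evaluation, so downstream accuracy directly measures the quality of the pretrained representation rather than the classifier's capacity.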
Authors: Karan Desai, Justin Johnson
Links: