How DINO learns to see the world - Paper Explained

Paper Explained: Emerging Properties in Self-Supervised Vision Transformers

The DINO paper(s) were a breakthrough in representation learning using self-supervised learning! Let's see how they achieved this state-of-the-art performance and what properties emerge from using Vision Transformers.

⬇️ Follow me on my other socials and feel free to DM questions! ⬇️

#ai #research #paper
Comments

Loved your series on self-supervised learning. Are you also planning to cover DINOv2? I am particularly curious about the emergence property of the model -- how it is able to regress semantically consistent features for different parts of objects (and not just simple foreground-background separation as in DINOv1)!

akshaymundra

Very good content. Congrats 👍. Reading papers can be tough for many people, and such videos make it a lot easier to keep up with these state-of-the-art advancements. As a fellow researcher, do you think investing time in self-supervised learning research is worth it right now? Considering that my team and I do not have access to computational resources like Meta's and Google's, I am not sure we can keep up.

nasosgerontopoulos

When training DINO, I get the same loss every time, 10.09030, even though I lowered the teacher temperature hyperparameter below the 0.06 written in the paper. Can anyone suggest something? From what I have seen online, everyone runs into this same exact loss value of 10.09030. Please write down any general solution!

pankajmaheshwari
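A constant loss like the one described above is the classic symptom of collapse in DINO-style training: if the student's output distribution stops depending on the input, the cross-entropy against the teacher freezes at log(K), where K is the output dimension. Here is a minimal NumPy sketch of the DINO loss (not the authors' implementation; the function name `dino_loss`, the temperatures, and the batch shapes are illustrative) that demonstrates this:

```python
import math
import numpy as np

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between a sharpened, centered teacher distribution
    and the student distribution, as in the DINO objective (sketch)."""
    # Teacher: center the logits, then softmax at a low temperature
    t_logits = (teacher_out - center) / tau_t
    t = np.exp(t_logits - t_logits.max(axis=-1, keepdims=True))
    t /= t.sum(axis=-1, keepdims=True)
    # Student: numerically stable log-softmax at a higher temperature
    s_logits = student_out / tau_s
    log_s = s_logits - s_logits.max(axis=-1, keepdims=True)
    log_s -= np.log(np.exp(log_s).sum(axis=-1, keepdims=True))
    return float(-(t * log_s).sum(axis=-1).mean())

K = 65536  # DINO's default projection-head output dimension
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, K))

# A collapsed student (same output for every input, here all zeros)
# yields a uniform log-softmax of -log(K), so the loss is exactly
# log(K) no matter what the teacher produces:
loss = dino_loss(np.zeros((4, K)), teacher, center=teacher.mean(axis=0))
print(loss, math.log(K))  # both ≈ 11.09 for K = 65536
```

Under this sketch, a frozen loss value suggests checking the anti-collapse mechanisms (teacher centering, the student/teacher temperature gap, and the teacher's EMA momentum) rather than only lowering the teacher temperature.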

It does have the projection head, though.

menkiguo