Contrastive Learning in PyTorch - Part 2: CL on Point Clouds

▬▬ Papers/Sources ▬▬▬▬▬▬▬

▬▬ Used Icons ▬▬▬▬▬▬▬▬▬▬

▬▬ Used Music ▬▬▬▬▬▬▬▬▬▬▬
Music from Uppbeat (free for Creators!):
License code: KJ7PFP0HB9BWHJOF

▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction
00:22 Errors from last video
01:41 Notebook Setup [CODE]
02:42 Dataset Intro [CODE]
05:07 Augmentations and Bias
06:26 Augmentations [CODE]
09:12 Machine Learning on Point Clouds
11:48 PointNet
13:30 PointNet++
14:32 EdgeConv
15:53 Other Methods
16:09 Model Architecture
17:25 Model Implementation [CODE]
20:11 Training [CODE]
21:05 Batch sizes in CL
22:00 Training cont [CODE]
22:40 Batching in CL
23:15 Training cont [CODE]
24:08 Embedding evaluation
27:00 Outro

▬▬ Support me if you like 🌟

▬▬ My equipment 💻
Comments

Correcting the mistakes from your last video earns a lot of trust. Kudos.

mkamp

Great work! Combined with part 1, this presents a very complete picture of contrastive learning. Thanks, it helps a lot.

yuanluo

Very cool videos, thank you man!

Your channel is truly underrated, keep it going!

evgenii.v

Great tutorial! In my opinion, to enhance the embedding evaluation part, it's crucial to establish a solid baseline. One effective approach would involve applying TSNE directly to the (high-dimensional) point cloud data (e.g., after applying a simple permutation-invariant operation). By comparing these TSNE plots with the ones generated from the learned embeddings, we can effectively gauge the impact of the contrastive deep learning framework on the separation performance. This would provide a fair assessment of how well the framework has improved the data representation.

amirrezafarnoosh
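The baseline proposed in the comment above could be sketched roughly like this: pool each raw point cloud with a permutation-invariant operation, then run t-SNE on the pooled vectors and compare the plot with the one from the learned embeddings. All shapes, the pooling choice, and the t-SNE settings here are illustrative assumptions, not from the video:

```python
# Hypothetical baseline: t-SNE on permutation-invariantly pooled raw point clouds.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
clouds = rng.normal(size=(100, 1024, 3))  # 100 clouds, 1024 points each, xyz coords

# Max- and mean-pool over the point axis: the result is invariant to point order.
pooled = np.concatenate([clouds.max(axis=1), clouds.mean(axis=1)], axis=1)  # (100, 6)

# 2D t-SNE of the pooled features; compare this plot against t-SNE of the
# contrastively learned embeddings to gauge what the training actually added.
baseline_2d = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(pooled)
print(baseline_2d.shape)
```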

Great followup video. I really like the way you speak simply about whatever concept you are explaining. I think it has good pedagogical value. Keep it up 👍

thegimel

You're awesome! Thanks for making these videos and sharing your knowledge. I hope you keep creating this kind of content, I'll keep watching!!!

hmind

Excellent tutorials! Thank you very much for your generous help to beginners like me.

elviska

Very useful and informative video, especially the PointNet and batch size parts. Special thanks for using the point cloud domain!

divelix

Keep up the good work. You are amazing, and you have a talent for teaching! It would be fantastic if you could implement something like DINO!

juikaiwang

The quality of your videos and explanations is amazing! Actually, I am thinking about requesting your expertise: is any work planned on contrastive learning for GNNs? I am very interested in the idea of extracting invariant features from GNN embeddings. Keep up the good work! ^_^

mohammadnafie

Good video, but one thing: I do not think a voxel grid is a sparse representation. Sparsity happens when points have distance between them, as in a low-resolution point cloud. Meshes and voxels do not have this effect!

MrXboyx

Thank you so much. Please keep posting.

lores

What's the similarity between NTXentLoss (or InfoNCE) and SimCLR loss?

GayalKuruppu
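For reference on the question above: they are essentially the same thing. SimCLR's training objective is NT-Xent, which is a form of InfoNCE that uses cosine similarity between L2-normalized embeddings and a temperature. A minimal plain-PyTorch sketch (the function name, shapes, and temperature are illustrative assumptions):

```python
# NT-Xent (the SimCLR loss) as a sketch: InfoNCE over cosine similarities.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    n = z1.size(0)
    # The positive for view i is the other view of the same sample: (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```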

Great explanation! In the case of semantic segmentation, how can we calculate the contrastive loss for pixel embeddings?

nikosspyrou

Thanks for the video. Will the next one be Part 3, with an InfoMax (contrastive GNN) tutorial? ;)

eranjitkumar

In the "install torch" section, what is the link?

Shayeste_kf

Useful video. Question: why are no augmentations applied at test time?

NouhaShab

Very nice video, thank you! But it is hard to update the code to run in 2025. :(

mechanicsmechatronicsandro

Good explanation, but the point cloud example is difficult to follow. I wish you had picked 2D images instead.

hussainshaik

Can it be applied to text data as well? If so, can you make a video on it?

loveislulu