Not enough data for deep learning? Try this with your #Python code #shorts

Data augmentation can be a quick way to generate new data for deep learning.
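As a minimal sketch of what this might look like (assuming a Keras image pipeline; the specific layers and factors are illustrative, not taken from the video):

```python
# Sketch: on-the-fly image augmentation with Keras preprocessing layers.
# The transforms and their factors are illustrative choices.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # mirror left/right
    tf.keras.layers.RandomRotation(0.1),       # rotate up to ±10% of a turn
    tf.keras.layers.RandomZoom(0.1),           # zoom in/out up to 10%
    tf.keras.layers.RandomContrast(0.2),       # vary contrast up to 20%
])

images = tf.random.uniform((8, 224, 224, 3))   # stand-in batch of images
augmented = augment(images, training=True)     # fresh random transforms
```

Because the transforms are sampled randomly on every call, each epoch sees a slightly different variant of each image, which is what lets a small dataset stretch further.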

Oh, and don't forget to connect with me!

Happy coding!
Nick

P.S. Let me know how you go, and drop a comment if you need a hand!
Comments

I’ve watched at least 100 AI and deep learning videos in the last week, but, as a complete beginner, this one was by far the most valuable for me. It presents a clear real-world use case, a common problem, and a way to solve it, all in less than a minute!

Tenly

Thank you sir. That was very helpful.
My goal is to someday become as capable as you are.

samvrittiwari

No guys, data augmentation is not the solution to the biggest problem in AI research. It does not enrich the semantics of your dataset and does not provide new labels; it just perturbs your model a little so that it eventually becomes more robust. The lack of suitable datasets is still a massive problem for AI researchers. A better alternative is to simulate images with diffusion models conditioned on a ground truth, for instance. But even this is still not enough.

marcod
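For the diffusion-model route suggested above, a hedged sketch with the Hugging Face diffusers library might look like this (the model ID and prompt are placeholders for illustration, not something from the video):

```python
# Sketch: generating synthetic training images with a text-to-image
# diffusion model. Model ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Each generated image is labelled by construction via its prompt.
for i in range(16):
    image = pipe("a photo of a potato with visible defects").images[0]
    image.save(f"synthetic_defective_potato_{i}.png")
```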

This was actually useful for my face recognition project.

siddhantgulia

We have all had the problem of not having enough image data. This solution is revolutionary. Love all your videos.

baskarpalani

Transfer learning plus fine-tuning, self-supervised learning on relevant scraped online images, or generative modelling such as GANs for augmentation might be good alternatives too.

prof_shixo

I love how you describe the business guy as the suit 😂😂

malumbosinkamba

Really great content. You are tremendously precise in what you do.
I have a suggestion: can you do shorts about PyTorch? I think it is gaining popularity faster than TensorFlow. It is also more Pythonic, which makes it easier to understand what is really happening.
Thank you 😊

brainbooom

Your channel is gold. I enjoy your vids a lot. Keep doing what you are doing; you really inspire me.

khaledalwithinani

I use Albumentations, as it's flexible for segmentation and bounding box datasets too.

mercy
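A small sketch of the kind of pipeline this comment describes, where Albumentations transforms an image and its bounding boxes together (the specific transforms and box format are illustrative):

```python
# Sketch: joint image + bounding-box augmentation with Albumentations.
import albumentations as A
import numpy as np

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.2),
        A.Rotate(limit=15, p=0.5),
    ],
    # Boxes are given as (xmin, ymin, xmax, ymax) in pixels.
    bbox_params=A.BboxParams(format="pascal_voc",
                             label_fields=["class_labels"]),
)

image = np.zeros((256, 256, 3), dtype=np.uint8)   # stand-in image
result = transform(image=image,
                   bboxes=[(30, 40, 120, 150)],
                   class_labels=["potato"])
aug_image, aug_boxes = result["image"], result["bboxes"]
```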

Thank you very much, sir. I have watched all your videos and found how intelligent you are!

Thanks for uploading these videos…

darshitgoyani

We can also use a pretrained model and fine-tune it according to the use case.

shivangkhandelwal
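A minimal sketch of that pretrained-plus-fine-tune recipe in Keras (the backbone, head, and class count are illustrative assumptions):

```python
# Sketch: transfer learning with a frozen ImageNet backbone.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,         # drop the ImageNet classifier head
    weights="imagenet",
)
base.trainable = False         # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. good vs. defective
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# With the backbone frozen, even a small dataset can train the new head.
```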

Dude, that is incredibly helpful! These guys at TensorFlow think of everything :D
Thanks a lot for sharing that tip with us! :)

And I hope no potato was hurt in the making of this video, defective or not :D

NoMercy

Thank you, sir.
Your videos are really helpful.
Could you please make a video on a video summarizer project?

amithasm

Dude you have me laughing in the middle of the night looking at that potato 😅😅😅😅😅😅

hannav

You could also use the Albumentations library! Still, is it reliable if there are only 5 images? I have never tried any models with a dataset that small :)

PUBUDUCG

I don't understand. So from that one potato, you duplicate the image but vary the color property? Isn't that called data redundancy?

danielniels

Well, what about zero-shot and few-shot models like ViT? Though we still need to augment to increase accuracy, even with zero-shot and few-shot models.

techtam
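For reference, a hedged sketch of zero-shot classification with a ViT-based CLIP model via Hugging Face transformers (the model ID, file path, and labels are placeholders):

```python
# Sketch: zero-shot image classification with CLIP (ViT image encoder).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("potato.jpg")  # placeholder path
labels = ["a photo of a good potato", "a photo of a defective potato"]

inputs = processor(text=labels, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)   # one row per image
print(dict(zip(labels, probs[0].tolist())))
```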

The suit clearly has no idea.
Wiser words have never before been said.

jamesabhilash

Augmenting is best, but here's my problem. My teacher wants us to build a model that detects diseases on leaves, like potato leaves. The data is only around 100-200 photos, and each photo shows a single leaf. What if an unseen image at prediction time contains multiple leaves at once? Do you have any dataset or solution?

sudhitpanchal