Tensors for Deep Learning - Broadcasting and Element-wise Operations with PyTorch

💡Enroll to gain access to the full course:

Learn about tensor broadcasting for artificial neural network programming and element-wise operations using Python, PyTorch, and NumPy.

🕒🦎 VIDEO SECTIONS 🦎🕒

00:30 Help deeplizard add video timestamps - See example in the description
12:34 Collective Intelligence and the DEEPLIZARD HIVEMIND

💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥

👋 Hey, we're Chris and Mandy, the creators of deeplizard!

👉 Check out the website for more learning material:

💻 ENROLL TO GET DOWNLOAD ACCESS TO CODE FILES

🧠 Support collective intelligence, join the deeplizard hivemind:

🧠 Use code DEEPLIZARD at checkout to receive 15% off your first Neurohacker order
👉 Use your receipt from Neurohacker to get a discount on deeplizard courses

👀 CHECK OUT OUR VLOG:

❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
Tammy
Mano Prime
Ling Li

🚀 Boost collective intelligence by sharing this video on social media!

👀 Follow deeplizard:

🎓 Deep Learning with deeplizard:

🎓 Other Courses:

🛒 Check out products deeplizard recommends on Amazon:

🎵 deeplizard uses music by Kevin MacLeod

❤️ Please use the knowledge gained from deeplizard content for good, not evil.
Comments

I think it's very humble of you to include clips from other streamers/speakers/YouTubers in your videos. Rather than ripping off their exact explanation and delivery, you recognize that some other person X has explained something best, and you put X's explanation in your video directly instead of copying it.

ravihammond

This series is so underrated. I don't know why it has so few views and likes. It should be at the top.

ShahxadAkram

Dude!! This is probably the best learning channel for anything Deep Learning with Python. The explanations with the visuals make things SO much easier to understand.

sytekdd

I see what's special here! The way you combine the math with how we write the code is what a comprehensive workflow looks like. Thank you so much for your efforts!

aiahmed

I love this channel! Keep up the good work. It would be great if you could continue with more advanced architectures after the deep NN and convolutional NN material, maybe LSTMs, GANs, and other interesting and useful tools.

rubencg

For someone who studied computer science, these are basic matrix transformations (scalar multiplications), but the video explains them really intuitively for people without any linear algebra background.

BenjaminGolding

Thank you for this exhaustive explanation of the critical concept of broadcasting. This really helps.

Brahma

I feel like you're a god... this channel literally saved me... I desperately needed someone who could explain PyTorch functionality to me, and this channel is the best of the best...
Thank you so much, please post more videos...

ravitejavarma

Great work! I love your PyTorch videos.

mdafjalhossain

Please add videos on GANs, autoencoders, etc. The videos are really good and the explanations are perfect.

adarshkedia

Great work!! Nice, detailed explanation.

李祥泰

What is that minus zero from t.neg() at 10:48? :)

biplobdas
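
The minus zero at 10:48 comes from IEEE 754 floating-point arithmetic: negating 0. produces -0., a distinct bit pattern that still compares equal to 0., so it behaves like zero in comparisons and arithmetic. A minimal sketch:

import torch

t = torch.tensor([0., 1., 2.])
print(t.neg())                    # tensor([-0., -1., -2.])
print(torch.tensor(-0.) == 0.)    # tensor(True): signed zeros compare equal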

Isn't the typing sound the smoothest and most awesome thing you've ever heard in your life?

toremama

Hi, thank you for your videos, they're really useful and I love them.


Question though: is there a way to write our own element-wise function and ask PyTorch to apply it for us, like at 10:45 where the methods "t.neg()" and "t.sqrt()" are applied element-wise?


Something like this:

t.func(lambda x: x*x) # Output would be the same as t * t


Or even including other tensors like so:
t1 = torch.tensor([1, 1, 1])
t2 = torch.tensor([2, 2, 2])
add = lambda a, b: a + b
t1.func(add, t2) # Output would be the same as t1 + t2


Thanks.

minimatamou
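
A sketch of the options here (t.func in the question is hypothetical; the real method for arbitrary callables is Tensor.apply_): any function composed of tensor operations is already applied element-wise thanks to operator overloading, and arbitrary Python functions can go through Tensor.apply_, which is CPU-only, in-place, and slow since it loops in Python.

import torch

t1 = torch.tensor([1., 1., 1.])
t2 = torch.tensor([2., 2., 2.])

# Functions built from tensor ops are element-wise automatically:
add = lambda a, b: a + b
print(add(t1, t2))               # tensor([3., 3., 3.]), same as t1 + t2

# Arbitrary Python functions can be applied with Tensor.apply_,
# but it is CPU-only, in-place, and slow (a Python-level loop):
t3 = torch.tensor([1., 4., 9.])
t3.apply_(lambda x: x ** 0.5)
print(t3)                        # tensor([1., 2., 3.])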

7:35 I decided to try another case with t3 = torch.tensor([[2], [4]], dtype=torch.float32). I expected t1 + t3 to equal tensor([[3., 3.], [5., 5.]]), but instead it returned the same as t1 + t2.

felipealco
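
For what it's worth, a shape (2,) tensor and a shape (2, 1) tensor broadcast differently against a 2x2 tensor, so t1 + t2 and t1 + t3 should not match. A sketch, assuming t1 is the 2x2 tensor of ones from the video:

import torch

t1 = torch.ones(2, 2)
t2 = torch.tensor([2., 4.])        # shape (2,): stretched across rows
t3 = torch.tensor([[2.], [4.]])    # shape (2, 1): stretched across columns

print(t1 + t2)   # tensor([[3., 5.], [3., 5.]])
print(t1 + t3)   # tensor([[3., 3.], [5., 5.]])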

Hello, I have a question. I have a 100 x 768 matrix of test data and a 100 x 768 matrix of train data. I'm doing KNN, so I need to compute the Euclidean distance between the test and train data and map it into a 100 x 100 matrix. The trick is, I can't use any loops here, so I have to do it completely through broadcasting. Any ideas how I might go about it?

hassaanahmad
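
One loop-free approach: insert singleton dimensions so the subtraction broadcasts to a (100, 100, 768) difference tensor, then reduce over the feature axis. A sketch with random stand-in data:

import torch

test = torch.randn(100, 768)
train = torch.randn(100, 768)

# (100, 1, 768) - (1, 100, 768) broadcasts to (100, 100, 768):
diff = test[:, None, :] - train[None, :, :]
dist = diff.pow(2).sum(dim=-1).sqrt()           # pairwise Euclidean, shape (100, 100)

# torch.cdist computes the same distances without the big intermediate:
dist2 = torch.cdist(test, train)
print(torch.allclose(dist, dist2, atol=1e-4))   # True

The broadcasted version materializes a 100 x 100 x 768 intermediate, so torch.cdist is preferable as the matrices grow.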

Thank you for introducing tensors; it's a topic many shy away from explaining, but it now seems very simple. A topic I still don't quite get is merge layers such as dot, more specifically the axes argument in Keras (not sure what the PyTorch equivalent is). Is it similar to the .cat function? Perhaps I should start using PyTorch; it seems more practical. Thanks again.

stacksonchain
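
.cat only concatenates tensors; it never multiplies anything. The closer PyTorch analogues to Keras' Dot layer and its axes argument are torch.tensordot and torch.matmul, which contract (sum products over) chosen dimensions. A rough sketch:

import torch

a = torch.randn(3, 4)
b = torch.randn(3, 4)

# Contract along axis 1 of both tensors, like Dot(axes=1) in spirit:
c = torch.tensordot(a, b, dims=([1], [1]))   # shape (3, 3)

# torch.cat just stacks tensors along an existing dimension:
d = torch.cat([a, b], dim=0)                 # shape (6, 4)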

Great tutorial. How can I check whether every element in a tensor is True (not just truthy)? I already tried any(t.reshape(1, -1).numpy().squeeze()), but any() also returns True when the elements are merely nonzero (truthy).

antonmarshall
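
One way, assuming "True (not truthy)" means a bool-dtype tensor in which every element is True: use .all() for the every-element check and inspect the dtype to rule out merely-truthy numeric values. The helper name below is made up for illustration:

import torch

def all_strictly_true(t):   # hypothetical helper, not a PyTorch API
    return t.dtype == torch.bool and bool(t.all())

print(all_strictly_true(torch.tensor([True, True])))   # True
print(all_strictly_true(torch.tensor([True, False])))  # False
print(all_strictly_true(torch.tensor([1, 2, 3])))      # False: truthy, but not bool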

Kinda random, but can you link the audio file used during the coding segments? The intense vacuum noise, lol.

grombly

When I was trying out the element-wise comparison operations in a Jupyter notebook, it showed me True/False instead of 1/0 as the output. I wrote exactly the same code shown here. Can anyone please explain why that happened?

rizvanahmedrafsan
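
Nothing went wrong: starting with PyTorch 1.2, the element-wise comparison operators return tensors of dtype torch.bool, which print as True/False; earlier releases returned torch.uint8 tensors that printed as 1/0. Casting recovers the old look:

import torch

t = torch.tensor([1., 2., 3.])
print(t.gt(2))          # tensor([False, False,  True]) on PyTorch >= 1.2
print(t.gt(2).int())    # tensor([0, 0, 1], dtype=torch.int32)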