Broadcasting Explained - Tensors for Deep Learning and Neural Networks

Tensors are the data structures of deep learning, and broadcasting is one of the most important operations for streamlining tensor code in neural network programming.

Over the last couple of videos, we've immersed ourselves in tensors, and hopefully by now we have a good understanding of how to work with, transform, and operate on them. If you recall, a couple of videos back, I mentioned the term "broadcasting" and said that we would later make use of it to vastly simplify our VGG16 preprocessing code.

That's exactly what we'll be doing in this video!
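To make the idea concrete before diving in: broadcasting lets a small tensor act as if it were repeated to match a larger one. A minimal NumPy sketch of VGG16-style preprocessing (the video itself uses TensorFlow.js; the channel mean values below are the commonly cited ImageNet means and the array shapes are illustrative assumptions):

```python
import numpy as np

# A batch of 2 images, each 4x4 pixels with 3 color channels: shape (N, H, W, C).
images = np.full((2, 4, 4, 3), 128.0, dtype=np.float32)

# Per-channel dataset means (VGG16 preprocessing subtracts the mean
# of each color channel from every pixel).
channel_means = np.array([123.68, 116.78, 103.94], dtype=np.float32)

# Broadcasting: shape (3,) is treated as (1, 1, 1, 3) and "stretched"
# across the batch, height, and width dimensions -- no loops needed.
centered = images - channel_means
print(centered.shape)  # (2, 4, 4, 3)
```

The subtraction behaves as if `channel_means` had been copied to every pixel of every image, but no copies are actually made.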

Observable notebook:

🕒🦎 VIDEO SECTIONS 🦎🕒

00:30 Help deeplizard add video timestamps - See example in the description
10:50 Collective Intelligence and the DEEPLIZARD HIVEMIND

💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥

👋 Hey, we're Chris and Mandy, the creators of deeplizard!

👉 Check out the website for more learning material:

💻 ENROLL TO GET DOWNLOAD ACCESS TO CODE FILES

🧠 Support collective intelligence, join the deeplizard hivemind:

🧠 Use code DEEPLIZARD at checkout to receive 15% off your first Neurohacker order
👉 Use your receipt from Neurohacker to get a discount on deeplizard courses

👀 CHECK OUT OUR VLOG:

❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
Tammy
Mano Prime
Ling Li

🚀 Boost collective intelligence by sharing this video on social media!

👀 Follow deeplizard:

🎓 Deep Learning with deeplizard:

🎓 Other Courses:

🛒 Check out products deeplizard recommends on Amazon:

🎵 deeplizard uses music by Kevin MacLeod

❤️ Please use the knowledge gained from deeplizard content for good, not evil.
Comments

👉 Check out the blog post and other resources for this video:

deeplizard

The video is very valuable even for those who don't know JS. Thanks a lot!

egorgladin

Broadcasting is a truly wonderful gift. I just saw it explained in your PyTorch videos, and now in JavaScript with TensorFlow.js -- I'm certain it's also there in R. The day has been filled with inspired learning. Thanks again!

sphynxusa

Two tensors can be broadcast together if, when their shapes are aligned from the right, each pair of dimensions is either equal or one of them is 1. (The "mirror symmetry" in the working examples below is a coincidence of all the shapes being permutations of 1, 2, 3 -- it is not the general rule.)
For example:
a = np.arange(6).reshape((1, 2, 3))
b = np.arange(6).reshape((3, 2, 1))
c = np.arange(6).reshape((2, 3, 1))
d = np.arange(6).reshape((2, 1, 3))
a+b - works: (1, 2, 3) and (3, 2, 1) pair up as (1, 3), (2, 2), (3, 1)
a+c - error: middle dimensions 2 and 3 differ and neither is 1
a+d - works: (1, 2, 3) and (2, 1, 3)
b+c - error: middle dimensions 2 and 3 differ and neither is 1
b+d - error: leading dimensions 3 and 2 differ and neither is 1
c+d - works: (2, 3, 1) and (2, 1, 3)

Carbon-XII
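The comment above can be checked without materializing any arrays: NumPy exposes the alignment rule directly via `np.broadcast_shapes` (available since NumPy 1.20), which returns the result shape or raises `ValueError` for incompatible shapes. A quick sketch:

```python
import numpy as np

# Compatible pairs from the comment above: each right-aligned pair of
# dimensions is equal or contains a 1.
print(np.broadcast_shapes((1, 2, 3), (3, 2, 1)))  # (3, 2, 3)
print(np.broadcast_shapes((2, 3, 1), (2, 1, 3)))  # (2, 3, 3)

# An incompatible pair: the middle dimensions 2 and 3 clash.
try:
    np.broadcast_shapes((1, 2, 3), (2, 3, 1))
except ValueError:
    print("shapes (1, 2, 3) and (2, 3, 1) cannot be broadcast")
```

Note that the result shape takes the larger size along each axis, so broadcasting two 6-element tensors here produces an 18-element result.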

Thanks for the great videos so far!!
I have one question: what if we want to sum two tensors of different shapes, say 1x2x3 and 2x3? It is only said that we "substitute a one in for the missing dimensions", but how do we know which dimension of the 2nd tensor is the missing one -- should it be treated as 1x2x3, 2x1x3, or 2x3x1?

hungnguyenquoc
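For reference, the broadcasting rule resolves the ambiguity in the question above: missing dimensions are always prepended on the left, because shapes are aligned from the right. So a shape of (2, 3) is treated as (1, 2, 3), never (2, 1, 3) or (2, 3, 1). A small NumPy illustration:

```python
import numpy as np

a = np.arange(6).reshape((1, 2, 3))
b = np.arange(6).reshape((2, 3))

# Shapes align from the right, so (2, 3) is padded on the LEFT with a 1
# and treated as (1, 2, 3). The sum therefore has shape (1, 2, 3).
c = a + b
print(c.shape)  # (1, 2, 3)
```

If you need the 1 inserted anywhere other than the leftmost position, you must reshape explicitly (e.g. `b.reshape((2, 1, 3))`); broadcasting alone will never do that for you.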