TensorFlow Tutorial 4 - Convolutional Neural Networks with Sequential and Functional API

In this video we will learn how to build a convolutional neural network (CNN) in TensorFlow 2.0 using the Keras Sequential and Functional APIs. We also take a look at max pooling layers, batch normalization, etc.
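For reference, here is a minimal sketch of building one small CNN with each of the two APIs. The layer sizes and input shape are illustrative assumptions, not necessarily the exact ones used in the video.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sequential API: a linear stack of layers.
sequential_model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10),  # raw logits; pair with from_logits=True in the loss
])

# Functional API: layers are called on tensors, which also allows
# non-linear topologies (multiple inputs/outputs, skip connections).
inputs = tf.keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(32, 3, activation='relu')(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
outputs = layers.Dense(10)(x)
functional_model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

For a straight-line model like this the two APIs produce equivalent models; the Functional API only becomes necessary when the architecture branches.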

Resources or prerequisite videos:
1. Deep learning specialization course 4 (no need to watch all of it; for example, nothing after C4W2L02 is needed):

2. Batch Normalization video:

❤️ Support the channel ❤️

Paid Courses I recommend for learning (affiliate links, no extra cost for you):

✨ Free Resources that are great:

💻 My Deep Learning Setup and Recording Setup:

GitHub Repository:

✅ One-Time Donations:

▶️ You Can Connect with me on:
Comments

* Corrections to the video:

When re-watching the video, there were two things I felt weren't clear or that I didn't cover but should have:



2. I showed how to do a simple conv net using the Sequential and the Functional API, and showed how we can use batch normalization with the Functional API. This can of course also be done using the Sequential API; I just did it to mix things up! So there wasn't really any reason to use the Functional API here, it was just to illustrate how the implementation would differ between the two APIs.

Also, I was inspired and learned the basics of TensorFlow after completing the TensorFlow specialization on Coursera. Personally I think the videos I created give a similar understanding, but if you want to check it out you can. Below you'll find both affiliate and non-affiliate links; the pricing for you is the same, but a small commission goes back to the channel if you buy through the affiliate link, which helps me create more videos.

AladdinPersson

Dude, I know how long it takes to make a video. But you need to upload more videos per day. These are just so awesome. I see this channel growing.

universe

dude there is a reason why there are no dislikes on this video and that's because you are amazing
keep doing your work

navalsurange

Your videos are awesome! Some of the best teaching material I've seen! Keep going! Good luck with your channel's development! Thanks for the NLP series!

dktdklc

The explanation is smooth and crisp. Thanks for the tutorial; I would like to see more of these. However, I was looking for building the CNN with only TensorFlow, not with Keras. Keras makes it much easier but less flexible.

sridharaddagatla

I liked the way you move through the concepts. I wanted to refresh my concepts, and your videos are spot on. Thanks for the videos.

srinivasadineshparupalli

One of the best explanations. Thanks.

underlecht

Thank you, sir, for uploading such a great tutorial; please keep it up. Your teaching methodology is superb.

AltafHussain-gkxe

Correct me if I am wrong. I think layer normalization has a flaw!

Consider two features: House area and number of rooms.


House 1: Area = 1 m², Rooms = 1. After normalization, this would be [0, 0]
House 2: Area = 100 m², Rooms = 100. This would also result in [0, 0] after normalization.

How can two distinct sets of features become the same after normalization?
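The arithmetic in this comment does check out for the raw normalization step: when all of a sample's features are equal, per-sample normalization maps them to zero regardless of scale. A quick NumPy sketch (this models only the normalization itself; actual layer normalization also applies a learned scale and shift, which is part of why this isn't a flaw in practice):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize one sample across its feature dimension, as layer
    # normalization does before the learned scale/shift (gamma/beta).
    mean = x.mean()
    var = x.var()
    return (x - mean) / np.sqrt(var + eps)

house1 = np.array([1.0, 1.0])      # area = 1 m^2, rooms = 1
house2 = np.array([100.0, 100.0])  # area = 100 m^2, rooms = 100

print(layer_norm(house1))  # -> [0. 0.]
print(layer_norm(house2))  # -> [0. 0.]
```

Both samples collapse to the same vector because each has zero variance across its features; with more than two features of differing values, the normalized vectors stay distinct.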

temanangka

Hi, I love your TensorFlow teaching. Can you show how to do hyperparameter tuning for the models, please?

rimeethreddy

hey Aladdin,
awesome content mate... easy to understand and to the point.
Could you please help me with some resources about the different aspects of TensorFlow like flatten, batch sizes, batchnorm, etc.? I want to understand when and why they are required and how they change the model flow. Any recommendations (blogs, videos, books... anything)?

puneetbhasin

How come you have 10 outputs in the last layer without softmax? The loss function you are using works on integer labels that are not one-hot encoded, and you haven't one-hot encoded your y_train labels. Can you explain this please?

*EDIT:*
Is it because of *from_logits=True*?
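For what it's worth, `from_logits=True` is indeed the mechanism in question: `SparseCategoricalCrossentropy` (the standard Keras loss for integer, non-one-hot labels) then applies the softmax internally to the raw logits. A small sketch, assuming that loss is being used, verifying it against a hand computation:

```python
import numpy as np
import tensorflow as tf

# Raw logits for one sample (no softmax layer in the model) and an
# integer class label (not one-hot encoded).
logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([0])

# from_logits=True tells the loss to apply softmax internally.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = float(loss_fn(labels, logits))

# The same value computed by hand: softmax, then negative log-likelihood.
z = np.array([2.0, 1.0, 0.1])
probs = np.exp(z) / np.exp(z).sum()
manual = -np.log(probs[0])
```

Passing logits straight to the loss is also more numerically stable than adding a softmax layer and using `from_logits=False`.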

googlable

Why do we not specify a softmax activation for the output? Do we not do this with convnets?

prod.kashkari

x = layers.Conv2D(64, 5, padding='same')(x)

Why change the kernel size from 3 to 5? And why change the padding from 'valid' to 'same'?
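The effect of those two parameters on the spatial output size can be worked out with simple arithmetic (stride 1 assumed; the specific sizes below are illustrative, not taken from the video):

```python
def conv_out_size(n, k, padding):
    # Spatial output size of a stride-1 convolution on an n x n input
    # with a k x k kernel.
    if padding == 'valid':
        return n - k + 1   # no padding: every kernel position must fit fully
    if padding == 'same':
        return n           # zero-padded so output size matches input size
    raise ValueError(f"unknown padding: {padding}")

# 32x32 input:
print(conv_out_size(32, 3, 'valid'))  # 30
print(conv_out_size(32, 5, 'valid'))  # 28 -- larger kernels shrink the map more
print(conv_out_size(32, 5, 'same'))   # 32 -- 'same' preserves spatial size
```

A larger kernel sees a wider receptive field per position, and 'same' padding lets you stack layers without the feature map shrinking away.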

orjihvy

While the original paper talks about applying batch norm just before the activation function, it has been found in practice that applying batch norm after the activation yields better results.
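Both orderings the comment mentions are easy to express in Keras; which works better is an empirical question, so here is a hedged sketch of the two variants side by side (layer sizes are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Batch norm BEFORE the activation, as in the original paper:
bn_before = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, padding='same'),   # no activation here
    layers.BatchNormalization(),
    layers.Activation('relu'),
])

# Batch norm AFTER the activation, the variant the comment describes:
bn_after = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.BatchNormalization(),
])
```

Note that in the first variant the activation must be split out into its own `Activation` layer so that batch norm can sit between the convolution and the nonlinearity.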

MohammadJahidIbnaBasher-besm

Hi,
First of all, it's an amazing tutorial. I have a doubt.
What's the reason for adding two Dense layers one after the other?
Is there any difference in accuracy if I directly convert to 10 units (bypassing one of the Dense layers)?

Thanks

bhaveshgoyal

12:50 Why increase the number of filters from 32 to 64 to 128 across the layers? I don't quite get it. Thanks

gavin

Shouldn't we use batch normalization to normalize the output of the activation function (A)? At 11:31 you normalize Z.
That's what I have seen in CNNs.

donfeto

In the function my_model(), on the second line, "x = layers.Conv2D(32, 3)(inputs)", I am getting the error
" Dimension value must be integer or None or have an __index__ method, got value 'TensorShape([None, 32, 32, 3])' with type '<class "
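That error message suggests a `TensorShape` ended up where an integer was expected. One possible cause (an assumption; the exact cause depends on the surrounding code) is calling the layer on the input's shape instead of on the input tensor itself:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 3))

# A likely way to trigger the reported error (hypothetical reconstruction):
#   x = layers.Conv2D(32, 3)(inputs.shape)   # wrong: passes a TensorShape
# Calling the layer on the symbolic tensor itself works:
x = layers.Conv2D(32, 3)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=x)
```

Another variant of the same mistake is `tf.keras.Input(shape=some_tensor.shape)`; `shape` should be a plain tuple of integers excluding the batch dimension.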

jatinlakhani

Can you do a video on transformer models?

grahamhenry