How to Build Your First Neural Network in Python and Keras

Let's start by implementing your first neural network using Python and Keras in a Jupyter Notebook. In this section, we will build the network and train it on the data we prepared in the previous lesson.
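
A minimal sketch of what such a model can look like, assuming the flattened 28×28 MNIST digits prepared in the previous lesson; the exact layer sizes, activations, and training settings in the video may differ:

```python
from tensorflow import keras

# Load and prepare MNIST-style data (assumption: the previous lesson used a similar split).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# A simple fully connected network: two hidden layers, softmax output over the 10 digits.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.1)
```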

Comments

I wish more women in tech spoke like this: straightforward, detail-oriented, and clearly spoken.

EL-gzsh

Thank you for a great explanation! Helped my understanding a lot.

elinsofiemaria

Great video! Please add a "Thanks" button to videos in this playlist.

GregThatcher

Thank you for your videos. I like them because you compare the math with the TensorFlow code.
I don't understand why you used 2 hidden layers, with 300 neurons in the first and 100 in the second. Can you explain that choice, compare it with fewer and more hidden layers, and show what happens if you change the number of neurons?
Sometimes I see examples where people use Conv2D for the input layer on a similar dataset. Which is best?

loicmartin
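
On the Conv2D part of that question: Dense layers consume flattened 784-element vectors, while convolutional layers keep the 2-D image structure. As a hedged sketch (illustrative only, not the model from the video), a convolutional alternative for the same 28×28 digits could look like this:

```python
from tensorflow import keras

# Convolutional alternative (sketch): expects images with an explicit channel axis,
# e.g. x_train.reshape(-1, 28, 28, 1), rather than flattened 784-vectors.
cnn = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```

There is no single "best": layer counts and sizes like 300/100 are design choices tuned by experiment, and convolutional layers tend to do well on image data mainly because they exploit the spatial structure.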

Are those 32 samples in each batch chosen randomly, or does the model start from the beginning of the training data set? If they are chosen randomly, how can we be sure that some samples are not repeated many times while others never show up in the training process at all?

alisaghi
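
For context, Keras' default `model.fit(..., shuffle=True)` reshuffles the training arrays at the start of each epoch and then walks through them batch by batch, so every sample is seen exactly once per epoch rather than drawn with replacement. A rough tf.data equivalent, reusing the `x_train`, `y_train`, and `model` names from the sketch under the description:

```python
import tensorflow as tf

# Sketch: reshuffle the whole training set each epoch, then slice it into batches of 32.
# Every sample appears exactly once per epoch; nothing is drawn with replacement.
dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=len(x_train), reshuffle_each_iteration=True)
    .batch(32)
)

model.fit(dataset, epochs=5)
```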

Ma'am, you are doing a very good job, please keep it up 🙂

mohsintufail

I have a question about batch size. Using MNIST as an example, each photo forms one input to the network. A batch size of 32 would mean 32 photos of digits are passed forward through the NN, the accuracy of the entire batch of 32 is calculated, and the average error of the 32 is backpropagated through the network, adjusting the weights. Then the process repeats. Do I have that much correct?

As I understand it, the optimal batch size is a function of the amount of main RAM and GPU RAM (if Nvidia). The larger the batch (that still fits within main memory and GPU memory), the faster it will loop through the training data set; smaller batches will require more loops through the dataset. So smaller batch sizes are slower? Am I correct? If so, how do you choose a batch size? For example, let's assume 32 GB of RAM and 11 GB of GPU RAM. Again using MNIST, is the size of the input equal to the size of the photos times the batch size? How much RAM does each batch of 32 photos require? Am I on the right track here?

I've asked this of other instructors and presenters and I've never gotten a clear answer... I think it's an important question.

Again, great videos. I'm learning new things from you and you are reinforcing what I already know. Both are important!

lakeguy
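
That description of the loop matches standard minibatch training: one forward pass over 32 samples, the loss averaged over those 32, and one weight update before the next batch. On the memory part, a rough back-of-the-envelope sketch for MNIST (it counts only the raw input batch; activations, gradients, and optimizer state add more, so treat it as a lower bound):

```python
# Rough memory estimate for one MNIST input batch (assumption: float32 values, 28x28 images).
batch_size = 32
pixels_per_image = 28 * 28    # 784 values per image
bytes_per_value = 4           # float32

batch_bytes = batch_size * pixels_per_image * bytes_per_value
print(f"Input batch: {batch_bytes} bytes (~{batch_bytes / 1024:.0f} KiB)")
# -> 100352 bytes, roughly 98 KiB: tiny next to 11 GB of GPU RAM.
# In practice, batch size is chosen as much for training dynamics (gradient noise,
# generalization) as for memory; 32 is a common default, and larger batches mainly
# pay off when per-batch overhead or GPU utilization is the bottleneck.
```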

Hello,
Thank you very much for these videos, but I wrote the same code and I got this error:
Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 784), found shape=(None, 28, 28)
So I reshaped the data, and then this error appeared:
Creating variables on a non-first call to a function decorated with tf.function.
I need help please, and thank you again.

esdidff
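
For anyone hitting the same thing: the first error usually means the raw 28×28 images were fed to a model whose first layer expects flattened 784-element vectors, and the second one often appears in notebooks when cells that build or call the model are re-run out of order; rebuilding the model in a fresh cell (or restarting the kernel) usually clears it. A hedged sketch of two common fixes for the shape mismatch:

```python
from tensorflow import keras

# Assumption: start from the raw 28x28 MNIST arrays.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Option 1: flatten the data so it matches a model whose first layer expects shape (784,).
x_train_flat = x_train.reshape(-1, 784)
x_test_flat = x_test.reshape(-1, 784)

# Option 2: keep the 28x28 images and let a Flatten layer do the reshaping inside the model.
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
```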