Recurrent Neural Networks (RNN) - Deep Learning w/ Python, TensorFlow & Keras p.7

In this part we're going to be covering recurrent neural networks. The idea behind a recurrent neural network is that sequences and order matter. For many tasks, they definitely do.

Comments

Didn't know Edward Snowden teaches machine learning on YouTube lol

mrweeed

Note: since TensorFlow 2.0 it will automatically use the CuDNN version if you don't specify an activation function (i.e., keep the default)

gabriellugmayr

Awesome video series! Thanks for the inspiration, Harrison. I'm about to get my first job thanks to you. Thanks a lot man, keep it up.

baltac

Brilliant as always. No bullshit, just the real deal

prateeknayak

It would be great if you could draw out the architecture of the MNIST example in terms of inputs and blocks. I have a little trouble visualizing how a 28x28 array feeds into a layer with 128 LSTM blocks. Otherwise, terrific tutorial!

neddolphin
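
On the shape question above: a minimal sketch (assuming the same tf.keras MNIST setup as the video; the layer sizes here are illustrative) that prints the layer output shapes. Each 28x28 image is read by the LSTM as a sequence of 28 time steps with 28 features (one row of pixels) per step:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# A 28x28 MNIST image is fed to the LSTM as 28 time steps,
# each step being one row of 28 pixel values.
model = Sequential([
    LSTM(128, input_shape=(28, 28)),  # -> (batch, 128): final hidden state of the 128 units
    Dense(10, activation='softmax'),  # -> (batch, 10): one probability per digit class
])
model.summary()  # shows (None, 128) after the LSTM and (None, 10) at the output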

Can't believe this video is out here available for free. Thank you. Very informative.

pemessh

I watch your guides in parallel with my machine learning course for my Master of Finance, very helpful

_alex_

Thank you for keeping this video basic.
I am very new to TensorFlow and Keras in general; I only started learning them two weeks ago.
Thank you.

destinyjames

This is amazing, looking forward to more RNN tutorials

mohamedeffat

This is really helpful. I was looking for a simple intro to RNNs and LSTMs, but couldn't find one anywhere for TensorFlow 2.1. This one is simple and up to date. Many thanks.

MadhuranandaPahar

Happiness is 18:06. Love the video btw

nabeelkhaan

One thing I noticed is that a lot of the time TensorFlow spends per epoch is actually wasted printing the progress bar on screen, so if you silence it by passing model.fit(..., verbose=0, ...) it runs WAY faster!

VascoCC
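
A minimal sketch of the point above, reusing the model / x_train / y_train / x_test / y_test names from the video's MNIST example (an assumption on my part); the only change is the verbose argument:

# verbose=0 suppresses the per-batch progress bar; rendering the bar has
# real overhead, so silent runs of small, fast models finish sooner.
model.fit(x_train, y_train,
          epochs=3,
          validation_data=(x_test, y_test),
          verbose=0)  # 0 = silent, 1 = progress bar (default), 2 = one line per epoch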

The training accuracy is lower than the validation accuracy because dropout is active during training and some nodes are switched off. However, at test time dropout doesn't switch off any nodes, so all nodes are involved in computing the validation accuracy.

shivamchandhok
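
To make the point above concrete, a small self-contained sketch (hypothetical rate and input, not the exact model from the video) showing that a Keras Dropout layer only drops units when it is called in training mode:

import numpy as np
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = np.ones((1, 10), dtype="float32")

print(drop(x, training=True))   # roughly half the values zeroed, the rest scaled up by 1/(1-0.5)
print(drop(x, training=False))  # inference mode: the input passes through unchanged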

You are actually a God-send sir. This has been an incredible tutorial. Thank you so much!

Triumphant

In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. But there are conditions - you must use the default activation function, 'tanh'.

jasonproconsult
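
A minimal sketch of the condition mentioned above (standard tf.keras layer arguments): keep the defaults and TensorFlow 2.x can dispatch the layer to the fused CuDNN kernel on a GPU; override the activation and it falls back to the generic, slower implementation:

from tensorflow.keras.layers import LSTM

# Default arguments (activation='tanh', recurrent_activation='sigmoid',
# recurrent_dropout=0, unroll=False, use_bias=True) keep the layer
# eligible for the fast CuDNN kernel when a GPU is available.
cudnn_eligible = LSTM(128, return_sequences=True)

# Overriding the activation (e.g. 'relu') disqualifies the layer from the
# CuDNN path, so it runs on the generic implementation instead.
generic_only = LSTM(128, activation='relu', return_sequences=True)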

Thank you very much for your efforts. Wonderful. I have one suggestion for this video.
Please print the model's predicted result for a data point and compare it with the actual output. Print the images for comparison - the viewer can appreciate it more.

subrahmanyamkesani
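
Along the lines of the suggestion above, a quick sketch (assuming a trained model plus the usual x_test / y_test arrays from the video's MNIST example, and that matplotlib is installed) for eyeballing one prediction against the true label:

import numpy as np
import matplotlib.pyplot as plt

i = 0  # index of the test image to inspect
probs = model.predict(x_test[i:i + 1])  # shape (1, 10): one probability per digit
print("predicted:", np.argmax(probs), "actual:", y_test[i])

plt.imshow(x_test[i], cmap="gray")  # show the digit that was classified
plt.title(f"predicted {np.argmax(probs)} / actual {y_test[i]}")
plt.show()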

We'll talk later about how good all the tutorials are and how he makes TensorFlow and Keras easy to understand - but has anyone else noticed the cups? In the 1st video I saw a shark cup, I skipped to this one and I'm seeing an octopus... I can't be the only one noticing these nice cups, right? :)

Thanks for the very simple tutorials man, making deep learning fun...

musawenkosisangweni

Thanks a lot for the video series!!
For M1 Mac users: I could speed up execution by using tensorflow-cpu and get speeds similar to CuDNN

ashrielsamy

Please do some videos on Transfer learning

eswarsaikrishna

Most people probably know this, but when he normalizes the data and grabs "255.0" out of the sky, it's because the MNIST dataset gives each digit as a 28x28 grayscale array, with every pixel a shade from 0-255; 0 is black space and 255 is white. If you print(x_train[1]) you can tell it is a '0', and prove it by printing y_train[1]. Dividing all pixels by 255 scales all the image data to between 0 and 1.

swooshonln
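
A small sketch of the explanation above (using the standard tf.keras MNIST loader), checking the pixel range before and after dividing by 255.0:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

print(x_train.shape, x_train.min(), x_train.max())  # (60000, 28, 28) 0 255
print(y_train[1])                                    # label for x_train[1]

# Grayscale pixels run from 0 to 255, so dividing by 255.0 rescales every image to the 0-1 range.
x_train, x_test = x_train / 255.0, x_test / 255.0
print(x_train.min(), x_train.max())                  # 0.0 1.0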