How to Manage GPU Resource Utilization in TensorFlow and Keras

I'll show you how to keep TensorFlow and Keras from hogging all your VRAM, so that you can run multiple models on the same GPU in parallel.
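
For reference, here is a minimal sketch of one common TF 1.x approach to capping VRAM usage (the exact code in the video may differ): limit the fraction of memory the process may claim, let allocation grow on demand, and hand the configured session to Keras.

import tensorflow as tf

# Claim at most ~30% of the card's VRAM for this process, and only allocate
# memory as it is actually needed instead of grabbing everything up front.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3, allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

# Make Keras run its models inside this session so they respect the same limits.
tf.keras.backend.set_session(sess)

With two processes each capped this way, two models can train on the same GPU side by side.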

#Tensorflow #Keras #Deeplearning

Learn how to turn deep reinforcement learning papers into code:

Get instant access to all my courses, including the new Prioritized Experience Replay course, with my subscription service. $29 a month gives you instant access to 42 hours of instructional content plus access to future updates, added monthly.

Or, pick up my Udemy courses here:

Deep Q Learning:

Actor Critic Methods:

Curiosity Driven Deep Reinforcement Learning:

Natural Language Processing from First Principles:
Reinforcement Learning Fundamentals:

Here are some books / courses I recommend (affiliate links):

Come hang out on Discord here:

Comments

This content is sponsored by my Udemy courses. Level up your skills by learning to turn papers into code. See the links in the description.

MachineLearningwithPhil

Great video, Phil. However, I wish you would use TensorFlow 2.0 for the videos; TF 1.x will be legacy soon.

Ehsan_

I'm so glad I found your channel; it's exciting to see something I'm interested in. Perhaps a topic for the next video: freelance ML engineer vs. working at a corporation? Pros and cons, benefits, lifestyle, etc.

esanchez

Hey Phil, I was wondering: what are some practical applications you've worked on with reinforcement learning?

paulgarcia

These tutorials are helping me a lot. Could you please make a tutorial on how to increase the training speed of an RL algorithm using threading and multiprocessing on the GPU?

pranavagarwal

Great video. I was wondering what the CPU RAM usage was in your case, since you loaded two models. In my case, whenever the CUDA libraries are loaded, CPU memory usage spikes to around 70%. Was that the same for you?

rajasdeshpande

How about a large batch size with this setup? Say, 128.

MohdAkmalZakiIO

Assigning the environment variable takes effect only if it is done before Keras is imported/loaded. Unfortunately, on a two-GPU system I tried to assign one GPU to the models of one thread and the other GPU to the models of another thread; although the value of the environment variable changed, the GPU allocated to the second thread was still GPU 0 (see the sketch below).

aladjov
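
Assuming the variable in question is CUDA_VISIBLE_DEVICES, here is a minimal sketch of the constraint described above: the value is read when TensorFlow/Keras initializes the CUDA runtime, so it must be set before the import, and changing it later (e.g. per thread) has no effect.

import os

# Must be set before TensorFlow/Keras is imported; once the CUDA runtime is
# initialized, changing this variable no longer affects device selection.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only the second physical GPU

import tensorflow as tf  # imported after the variable is set
print(tf.test.gpu_device_name())  # the visible card is reported as GPU 0 inside the process

For per-thread placement within a single process, wrapping model construction in tf.device('/gpu:0') or tf.device('/gpu:1') is the usual route rather than the environment variable.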

My motherboard supports up to 8 GPUs, but 6 of the PCIe slots are x1. Will this affect TensorFlow's ability to run a single model spread in parallel across all 8 GPUs?

justinberken

Hello, I keep getting this error, even after setting the os.environ variables like you specified:
OOM when allocating tensor with shape [33162368, 64] and type float on by allocator GPU_0_bfc [Op:RandomUniform]
My GPU is an NVIDIA GTX 1650 with 3,911 MiB.
I'm working with the Melanoma Kaggle images, which are 1024x1024, but I created new folders with only 600 images so it wouldn't need too much memory. Any idea what might be causing this or how I could solve it? (See the note below.)

self-made-datascientist
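
A hedged note on the OOM above: the shape [33162368, 64] points at the weight matrix of a single layer (roughly 33 M inputs x 64 units, about 8.5 GB in float32), likely a Dense layer fed the flattened full-resolution images. That will not fit in ~4 GB no matter how few images are in the folders, so the usual remedy is pooling or downsampling before the first Dense layer. Enabling memory growth (TF 2.x shown below) also keeps TensorFlow from pre-reserving the whole card, but it cannot shrink a model that is simply too large.

import tensorflow as tf

# Allocate VRAM on demand instead of reserving nearly all of it at startup.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)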

Do you have a Discord, @Machine Learning with Phil?

theshortcut

from tensorflow.python.util import deprecation

pranavagarwal
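
For context: that import is usually the first half of a common workaround for silencing TF 1.x deprecation warnings. A hedged guess at the intended snippet (it relies on a private TensorFlow attribute, so it may break between versions):

from tensorflow.python.util import deprecation

# Private flag checked by TF 1.x deprecation decorators; False suppresses the
# "... is deprecated and will be removed in a future version" log spam.
deprecation._PRINT_DEPRECATION_WARNINGS = False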