Up to 150x GPU PANDAS Speedup with No Code Changes

In this video I introduce the latest version of cuDF, which offers up to 150x speedups for Pandas, with no code changes, using a GPU.
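As a rough illustration of the "no code changes" idea: the snippet below is ordinary pandas. Assuming a CUDA-capable GPU and the cudf package installed, the same script can be accelerated unmodified by running it as `python -m cudf.pandas script.py`, or by loading `%load_ext cudf.pandas` at the top of a notebook. The data here is made up for illustration.

```python
import pandas as pd  # unchanged import; cudf.pandas intercepts it when enabled

# Build a small example frame (illustrative data only).
df = pd.DataFrame({
    "key": ["a", "b", "a", "b", "a"],
    "val": [1, 2, 3, 4, 5],
})

# A typical groupby-aggregate, the kind of operation that benefits most on a GPU.
totals = df.groupby("key")["val"].sum()
print(totals)
```

Operations that cuDF does not yet support fall back to CPU pandas automatically, which is why no source changes are required.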

#pandas #cudf #cuda #nvidia #rapids #machinelearning
Comments

Just the technique I've been looking for! My CPU is saved 😂😂😂

yl

Quick note: the final bullet point at 7:15 can be ignored. Everything is installed directly from conda/pip.
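For reference, the RAPIDS project publishes both pip and conda install commands. The lines below are a sketch; the exact package name depends on your CUDA version (`cudf-cu12` assumes CUDA 12), so check the RAPIDS install selector for the command matching your setup.

```shell
# pip: CUDA 12 wheels are hosted on NVIDIA's package index
pip install cudf-cu12 --extra-index-url=https://pypi.nvidia.com

# conda: install from the rapidsai channel
conda install -c rapidsai -c conda-forge -c nvidia cudf
```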

HeatonResearch

Just wanted to chime in, I really enjoy your work on this channel! Purely out of curiosity, any thoughts on eventually making videos about RAG or LLM fine-tuning, or is that not of interest?

tisisonlytemporary

It would be nice to see how this works during network training. I mean, when the dataset is too big to load and preprocess in memory (and saving a pre-processed copy of the dataset is also out of the question), we need to process it part by part (in batches). But during training the GPU is busy with the model... so how would that GPU access work, I wonder?

Malins

I recently started learning cuDF. A friend gifted me a Tesla P4 (8 GB) for my learning lab. Will the P4 do the job? I'm new to GPUs.

kevindunlap

This is extremely hard to install and get working on a standard pip Windows machine... too many errors.

bigdhav

Nvidia has poor drivers for Linux, yet writes code that needs to run on Linux. "I know, let's run it on Linux inside Windows." What a joke.

zyxwvutsrqponmlkh