#117: Accelerating ML with RAPIDS by Nvidia AI

Data science demands the interactive exploration of large volumes of data, combined with computationally intensive algorithms and analytics. Today, the computational limits of CPUs are being realized, and a new approach is needed.
In this talk, we will discuss how GPUs enable data scientists to perform feature engineering and train machine learning models at scale using RAPIDS.
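As a point of orientation for the talk: RAPIDS' cuDF mirrors the pandas API, so GPU feature engineering looks almost identical to CPU code. The sketch below uses pandas so it runs anywhere; the assumption (per RAPIDS' stated design, not verified here) is that on a machine with RAPIDS installed, swapping the import for `import cudf as pd` runs the same code on the GPU.

```python
# Minimal sketch of the pandas-style workflow that cuDF mirrors.
# On a RAPIDS install, `import cudf as pd` is the intended GPU swap.
import pandas as pd

# Toy feature engineering: derive a ratio feature, then one-hot
# encode a categorical column.
df = pd.DataFrame({
    "price": [10.0, 20.0, 30.0, 40.0],
    "qty":   [1,    2,    3,    4],
    "color": ["r",  "g",  "r",  "g"],
})
df["price_per_unit"] = df["price"] / df["qty"]
features = pd.get_dummies(df, columns=["color"])
print(features.columns.tolist())
```

`get_dummies` exists in both pandas and cuDF with the same signature shown here, which is what makes the CPU-to-GPU port largely an import change.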

Q&A:
Q: Are there any prerequisite libraries/languages we need to know about?
Q: At each split point, do you consider every single available feature, or pick a random sample of features and find the best split using that sub-sample? Thank you.
Q: I'm a big fan of NVIDIA products. I was in the process of adopting RAPIDS but ran into technical issues getting it installed. Who in the community can we get in touch with for support in the future?
Q: Can I install it on both Kaggle and Google Colab? There can be discrepancies between their environments at times; I'd love to hear about that, and about resources to use when things don't work.
Q: Can I use these GPU-enabled libraries across environments with and without GPU access?
Q: What in the Python/data science world has not yet been converted into GPU-enabled libraries?
Q: How can we use other GPU-enabled libraries, e.g. TF/Keras/Transformers, together with your libraries seamlessly?
Q: What about NLP libraries? Are there going to be GPU-based ports of those as well?
Q: Getting these speedups is very nice, but what would be the average increase in cost?
Q: Is RAPIDS available on other cloud providers, e.g. Oracle Cloud, and on private clouds?
Q: Is there a generic, cloud-agnostic script I can use to do the hyperparameter optimization you mentioned with RAPIDS? Something like an Ansible script or a shell script?
Q: How is RAPIDS's integration with Spark (mainly the Java/Scala APIs)?
Q: Please tell me more about Numba plus RAPIDS. Does it work out of the box, or is it the same interface Numba provides? Numba already offers GPU support.
Q: Can we also use a platform-independent language like Java?
Q: Can I use it in AutoML processes, or on AutoML platforms like AWS SageMaker? And also with Keras or other deep learning frameworks?
Q: I'm aware of the RAPIDS Kaggle resource you shared, but when I had an issue on Kaggle recently there was no response, and the errors are sometimes quite obscure.
Q: (Follow-up) Can I use these GPU-enabled libraries across environments with and without GPU access? I mean that if there is no GPU, it switches to CPU mode, which would help quite a bit with the coding side of things.
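On the fallback idea in this question: RAPIDS itself does not auto-switch to CPU mode, but because cuDF mirrors the pandas API, the pattern is cheap to write yourself. The sketch below is an illustrative convention, not an official RAPIDS feature; the alias name `xdf` and the `ON_GPU` flag are assumptions for the example.

```python
# Try the GPU DataFrame library first; fall back to pandas when no
# GPU/RAPIDS install is present. Downstream code uses `xdf` either way.
try:
    import cudf as xdf   # GPU path (requires RAPIDS + an NVIDIA GPU)
    ON_GPU = True
except ImportError:
    import pandas as xdf # CPU path
    ON_GPU = False

df = xdf.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})
totals = df.groupby("group")["value"].sum()
print(ON_GPU, dict(totals.to_pandas() if ON_GPU else totals))
```

The caveat is that only the API subset shared by both libraries is safe to use this way; code paths touching pandas-only features still need a GPU-aware branch.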
Q: How about using RAPIDS on a Jetson Nano? How does it compare with a Volta or another high-end GPU?
Q: Is there an intention to extend RAPIDS to cover neural network algorithms?
Q: Is there support for Keras?
Q: Are these libraries supported only on Intel x86 hardware, or are other systems with NVIDIA GPUs supported too (e.g., IBM POWER)?

AICamp