Intro to JAX: Accelerating Machine Learning research

JAX is a Python package that combines a NumPy-like API with a set of powerful composable transformations for automatic differentiation, vectorization, parallelization, and JIT compilation. Your code can run on CPU, GPU or TPU. This talk will get you started accelerating your ML with JAX!
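
A minimal sketch (not from the talk; the function and data are made up for illustration) of how those transformations compose on a NumPy-like function:

```python
import jax
import jax.numpy as jnp

# A NumPy-like function written against jax.numpy
def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.jit(jax.grad(loss))               # autodiff, then JIT-compiled via XLA
batched_loss = jax.vmap(loss, in_axes=(None, 0))  # vectorized over a batch of x

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)
print(grad_loss(w, xs[0]))   # gradient of loss w.r.t. w for one example
print(batched_loss(w, xs))   # loss for each of the 4 examples
```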

Resources:

Speaker:
Jake VanderPlas (Software Engineer)

#MLCommunityDay

Comments

Active: Jax enters Evasion, a defensive stance, for up to 2 seconds, causing all basic attacks against him to miss.

HibeePin

This guy is so epic. He looks like he's enjoying every second of life.

domenicovalles

This video maximizes dInsights/dtime, is well written and easy to understand! I want to see more videos from Jake!

EnricoRos

I burst out laughing at the EspressoMaker that overloads the + operator.

pablo_brianese

Looks great! I tend to default to NumPy when I want to do something that isn't fully supported in Keras or PyTorch, so if I can get GPU parallelization this easily from this, that's perfect!
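
That drop-in usage is essentially the pitch: jax.numpy mirrors the NumPy API and dispatches to whatever accelerator is available. A tiny sketch (made-up computation), assuming a machine where JAX sees a GPU; it falls back to CPU otherwise:

```python
import jax.numpy as jnp  # near drop-in replacement for numpy

x = jnp.linspace(0.0, 1.0, 1_000_000)
y = jnp.sin(x) ** 2 + jnp.cos(x) ** 2  # runs on GPU/TPU if available
print(float(y.sum()))  # ~1e6
```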

emiljanQ

Thank you for this good intro to JAX. Very easy to follow and understand, Jake. Definitely going to add this to my toolkit. 👍🙏

OtRatsaphong

I have a question: what's the purpose of making so many frameworks? Time? Efficiency? Because I don't see it.

lacasadeacero

JAX seems to be more similar to PyTorch, i.e., a dynamic graph instead of a static graph as in TensorFlow.
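
One nuance: JAX is eager by default (like PyTorch), but jax.jit traces a function once into a static XLA computation. A minimal sketch of that trace-once behavior:

```python
import jax

@jax.jit
def double(x):
    print("tracing")  # side effect runs only while tracing, not on later calls
    return x * 2

print(double(1.0))  # prints "tracing", compiles, then prints 2.0
print(double(3.0))  # reuses the compiled computation; prints only 6.0
```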

subipan

This sounds very good, especially the grad and vmap functionality. I think more libraries would have to be released for it to compete with PyTorch.
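
grad and vmap also compose, which is how JAX computes per-example gradients; a minimal sketch with a made-up scalar loss:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.tanh(w * x)

# grad differentiates w.r.t. w for one example; vmap maps it over the batch
per_example_grads = jax.vmap(jax.grad(loss), in_axes=(None, 0))

xs = jnp.array([1.0, 2.0, 3.0])
print(per_example_grads(0.5, xs))  # one gradient per example
```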

karansarkar

How can you compare Torch to TF/JAX when they run on different GPUs? There is no way to argue the two GPUs are comparable; they will be faster or slower at different types of computation regardless of the software used. They should have compared all three on a common GPU if for some reason Torch couldn't be run on the TPU v3.

joshuasmith

Seeing JAX on the TensorFlow channel, now I'm scared they'll mess up this codebase too. Please don't, k thx.

kuretaxyz

What is the difference between numerical and automatic differentiation?
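
In short, numerical differentiation approximates a derivative with finite differences, while automatic differentiation applies the chain rule to the operations of the program itself. A small comparison sketch (f and eps are arbitrary choices for illustration):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x ** 2

def numerical_grad(f, x, eps=1e-4):
    # central finite difference: approximate, and sensitive to the choice of eps
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

auto_grad = jax.grad(f)  # exact up to floating-point rounding

x = 1.5
print(numerical_grad(f, x))
print(auto_grad(x))
```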

brandomiranda

1:14 lol they compared TPU runtimes with GPU runtimes

sashanktalakola

Something's wrong with the audio. His voice gets so soft it's hard to hear at the end of some sentences.

RH-mkrp

Ok, this is seriously cool. Is this brand new? Haven't seen it before.

Also, in the first code sample did you mean to import vmap and pmap instead of map, or is that some kind of namespace black magic I don't understand?
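
For reference, the import line on that slide was presumably meant to be the standard JAX imports; the map there does look like a typo:

```python
import jax.numpy as np
from jax import grad, jit, vmap, pmap  # presumably what the slide intended
```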

TohaBgood

2:14 Why is inputs reassigned in the predict function but never used? Shouldn't it be outputs = np.tanh(outputs)?
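
For context, a sketch of the classic JAX MLP predict function this comment appears to reference (assuming the standard example). The reassignment of inputs is used: it feeds the next loop iteration; only the assignment in the final iteration goes unused, since the last layer's outputs are returned without the tanh:

```python
import jax.numpy as np

def predict(params, inputs):
    # params: list of (W, b) pairs, one per layer
    for W, b in params:
        outputs = np.dot(inputs, W) + b  # affine layer
        inputs = np.tanh(outputs)        # becomes the input to the next layer
    return outputs                       # final layer output, no activation
```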

CharlesMacKay

Thanks! This helps me a lot! Being a C/C++/Python developer, I had somehow left behind such an important framework/library.

valshaev

I thought JAX was running by default in TensorFlow; am I missing something here?

AlphaMoury

Google's Bard sent me here. Anyone know why?

RoyRogersMusicShop

Does this support Apple's GPUs in the M1 Max?

brandomiranda