Adjoint Sensitivities over Nonlinear Equations with JAX Automatic Differentiation


Taking derivatives by hand is tedious, especially when the underlying function is complicated and deeply nested. We can avoid this error-prone process by employing Algorithmic/Automatic Differentiation (of which backpropagation is the reverse-mode special case), a programming technique that computes derivatives of computer programs to machine precision.
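As a minimal sketch (the nested function f below is an arbitrary illustration, not the code from the video), JAX can differentiate such a function in one line:

import jax
import jax.numpy as jnp

# An arbitrary nested function whose derivative would be tedious to derive by hand
def f(x):
    return jnp.sin(jnp.exp(x) * jnp.tanh(x ** 2))

df = jax.grad(f)  # reverse-mode automatic differentiation

print(df(0.7))  # derivative at x = 0.7, exact to machine precision (no finite differences)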

Every modern deep learning framework, such as TensorFlow, PyTorch, or JAX, is essentially a combination of high-performance array computing (on accelerators like GPUs) and a tool for Automatic Differentiation. We can use this not only to evaluate and train deep Neural Networks, but also to obtain the additional derivative quantities required by forward or adjoint sensitivity methods.
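To make this concrete, here is a hedged sketch of an adjoint sensitivity computation in JAX; the residual, the objective, and all function names are illustrative assumptions, not the code from the video. For a nonlinear system f(u, theta) = 0 that implicitly defines u(theta) and a scalar objective J(u, theta), the adjoint approach computes dJ/dtheta = (partial dJ/dtheta) - lambda^T df/dtheta, where lambda solves the linear adjoint system (df/du)^T lambda = (dJ/du)^T, and JAX supplies all the partial derivatives automatically:

import jax
import jax.numpy as jnp

# JAX defaults to single precision; enable float64 as discussed in the video
jax.config.update("jax_enable_x64", True)

# Illustrative nonlinear residual f(u, theta) = 0 that implicitly defines u(theta)
def residual(u, theta):
    return u ** 3 + theta * u - 2.0

# Illustrative scalar objective J(u, theta)
def objective(u, theta):
    return jnp.sum(u ** 2)

def adjoint_gradient(u_star, theta):
    # Partial derivatives via JAX automatic differentiation
    dfdu = jax.jacobian(residual, argnums=0)(u_star, theta)      # df/du, shape (n, n)
    dfdtheta = jax.jacobian(residual, argnums=1)(u_star, theta)  # df/dtheta, shape (n, p)
    dJdu = jax.grad(objective, argnums=0)(u_star, theta)         # dJ/du, shape (n,)
    dJdtheta = jax.grad(objective, argnums=1)(u_star, theta)     # partial dJ/dtheta, shape (p,)

    # Adjoint (linear) system: (df/du)^T lambda = (dJ/du)^T
    lam = jnp.linalg.solve(dfdu.T, dJdu)

    # Total derivative: dJ/dtheta = partial dJ/dtheta - lambda^T df/dtheta
    return dJdtheta - lam @ dfdtheta

Given a root u_star of residual(., theta) obtained from any nonlinear solver, adjoint_gradient(u_star, theta) returns the full gradient dJ/dtheta without any hand-derived Jacobians.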

-------

Timestamps:
00:00 Intro
00:35 Recap on sensitivities for Nonlinear Equations
00:59 Additional derivative information
01:43 Status Quo
02:05 Change to JAX NumPy
03:00 Use JAX Automatic Differentiation
05:24 Double precision floating points in JAX
06:25 Outro
Comments

I really enjoy the format of your video! Thank you very much

julienblanchon

Thank you for the video. Can we calculate the gradient of a loss function that is constrained by ODEs?

jahongirxayrullaev