Neural ODEs (NODEs) [Physics Informed Machine Learning]

This video describes Neural ODEs, a powerful machine-learning approach for learning ODEs from data.

This video was produced at the University of Washington, and we acknowledge funding support from the Boeing Company.

%%% CHAPTERS %%%
00:00 Intro
02:09 Background: ResNet
05:05 From ResNet to ODE
07:59 ODE Essential Insight / Why ODE Outperforms ResNet
09:05 ODE Essential Insight Rephrase 1
09:54 ODE Essential Insight Rephrase 2
11:11 ODE Performance vs ResNet Performance
12:52 ODE extension: HNNs
14:03 ODE extension: LNNs
14:45 ODE algorithm overview / ODEs and Adjoint Calculation
22:24 Outro
Comments

I have been playing with NODEs for a few weeks now. The video is really helpful and intuitive. Probably it is the clearest explanation I have heard so far. Thank you, Professor.

smustavee

Thanks, Dr. Brunton, for making a video on Neural ODEs. I came across this paper as soon as it came out back in 2018. It still goes over my head, particularly the introduction of the second differential equation / adjoint sensitivity method. I would really appreciate it if you explained it in detail.

mohammadxahid

Thank you for taking me back to my control systems engineering class.

tshepisosoetsane

Love your content! I went through the entire complex analysis series, and now I'm going to go through this one as well!

astledsa

So basically this raises awareness that there are better approximations to "residual" integration. Thanks for the reminder.
From my course on numerical computation: using better integrators actually beats taking smaller time steps, raising the achievable accuracy given a limited number of bits for your floating-point numbers.

kepler_b

Great video, I learned a lot! It piqued my interest and inspired me to do a deep dive into all the topics mentioned.

stefm.w.

So if I understand correctly, ODE networks fit a vector field as a function of x by optimizing the entire trajectory along that field simultaneously, whereas the residual network optimizes one step of the trajectory at a time?

hyperplano
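The distinction in the comment above can be made concrete with a small numerical sketch. This is an illustrative example only (the vector field `f`, the weights `W`, and the step counts are all assumptions, not anything from the video): a ResNet's residual blocks are Euler steps of a learned vector field, and as the number of blocks grows, the ResNet's output approaches the trajectory of the corresponding ODE.

```python
import numpy as np

# Hypothetical "learned" vector field f(x) = tanh(W x); W stands in for
# trained weights and is just a random matrix here.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))

def f(x):
    return np.tanh(W @ x)

def resnet_forward(x, n_layers):
    # ResNet view: n residual blocks = n forward-Euler steps of size 1/n.
    h = 1.0 / n_layers
    for _ in range(n_layers):
        x = x + h * f(x)
    return x

def node_forward(x, n_steps=100):
    # Neural ODE view: integrate dx/dt = f(x) from t=0 to t=1 with RK4.
    h = 1.0 / n_steps
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = rng.standard_normal(4)
# As ResNet depth grows, its output converges to the ODE solution.
for n in (1, 10, 100):
    print(n, np.linalg.norm(resnet_forward(x0, n) - node_forward(x0)))
```

The training difference is then exactly as the comment says: the ODE formulation fits the field `f` so that the whole continuous trajectory matches the data, rather than fitting one discrete step per layer.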

This is great --- I think about this stuff all the time, but didn't know others did :/

OnionKnight

Awesome video and very helpful. Thanks!

anthonymiller

Cool summary and intro for liquid NNs.

lucynowacki

Very interesting course; I love such great videos...

daniellu

Can you please teach latent neural ODEs in detail?

SohamShaw-bxfq

Awesome video. One question I'm asking myself: why isn't everybody using NODEs instead of ResNets if they are so much better?

-mwolf

@Eigensteve Isn't the nth-order Runge-Kutta integrator just what a U-Net is, after it has been properly trained? The structure appears the same, and the coefficients would be learned.

merrickcloete

Fantastic video! Do you have any references for the mathematics behind the continuous adjoint method?

osianshelley

I have studied neural ODEs for quite a long time and found they are good for initial value problems; however, for problems with external inputs, they are really hard to train.

HD-qqbn

I would vote for more details on the adjoint part. It is not very clear to me how to use AD for df/dx(t) now that x changes continuously (or do we select a clever integrator during training?).

etiennetiennetienne
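Since several comments ask about the adjoint calculation, here is a minimal numerical sketch on a toy problem where the gradient is known in closed form. Everything here is an assumption for illustration: the scalar ODE dx/dt = θx, the loss L = x(T)², and plain Euler stepping. The point is that df/dx and df/dθ are evaluated along the stored (or re-integrated) forward trajectory x(t) while the adjoint a(t) = dL/dx(t) is integrated backward via da/dt = -a·df/dx, which is how the continuous case sidesteps differentiating through the solver step by step.

```python
import numpy as np

# Toy problem (assumed): dx/dt = theta * x, loss L = x(T)^2.
# Then df/dx = theta, df/dtheta = x(t), and the adjoint ODE is
# da/dt = -theta * a with terminal condition a(T) = 2 x(T).
theta, x0, T, n = 0.7, 1.3, 1.0, 20000
h = T / n

# Forward pass: store the trajectory. (The Neural ODE paper instead
# re-integrates x backward alongside a to save memory.)
xs = [x0]
for _ in range(n):
    xs.append(xs[-1] + h * theta * xs[-1])

# Backward pass: integrate the adjoint and accumulate
# dL/dtheta = integral over [0, T] of a(t) * df/dtheta dt.
a = 2.0 * xs[-1]
grad = 0.0
for k in range(n, 0, -1):
    grad += h * a * xs[k]   # df/dtheta = x(t), from the stored trajectory
    a += h * theta * a      # Euler step of da/dt = -theta*a, backward in t

# Closed-form check: x(T) = x0*exp(theta*T), so dL/dtheta = 2*T*x0^2*exp(2*theta*T).
analytic = 2.0 * T * x0**2 * np.exp(2.0 * theta * T)
print(grad, analytic)  # should agree to within the Euler discretization error
```

For a vector state the scalar products become Jacobian-vector products aᵀ(∂f/∂x) and aᵀ(∂f/∂θ), which is where reverse-mode AD enters: it evaluates those products at each quadrature point without ever forming the full Jacobian.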

I couldn't understand what problem the NODE solves. What are the source data, and what is the goal? Perhaps you are trying to approximate a dynamical system (its right-hand-side function) with a NN (i.e., you approximate the RHS as a composition of activation and linear functions), such that trajectories of the synthetic system look like the source data. Is this correct?
Is it like an alternative to an HMM?

maksim-surov

Why is it implicit that x(k+1) = x(k) + f(x(k)) is Euler integration? It can be any integrator depending on how you build f(x); for fourth-order Runge-Kutta, for example,
f(x) = (h/6)·(k1 + 2·k2 + 2·k3 + k4).

marcelotoledo
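The comment above is right that the residual form fixes no particular integrator, and a short sketch makes it concrete. The test problem dx/dt = -x and the step counts are assumptions for illustration: the same update x ← x + F(x) is forward Euler when F(x) = h·f(x), and classical RK4 when F bundles the four staged evaluations, with a dramatically smaller error at the same number of steps.

```python
import numpy as np

# Test problem (assumed for illustration): dx/dt = -x, exact solution x0*exp(-t).
def f(x):
    return -x

def euler_residual(x, h):
    # F(x) = h*f(x): the residual update becomes forward Euler, error O(h).
    return h * f(x)

def rk4_residual(x, h):
    # F(x) = (h/6)(k1 + 2k2 + 2k3 + k4): same x + F(x) form, but now
    # classical fourth-order Runge-Kutta, error O(h^4).
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(residual, x, h, n):
    # Generic residual iteration x <- x + F(x), whatever F encodes.
    for _ in range(n):
        x = x + residual(x, h)
    return x

x0, T, n = 1.0, 1.0, 10
h = T / n
exact = x0 * np.exp(-T)
print(abs(integrate(euler_residual, x0, h, n) - exact))  # Euler error
print(abs(integrate(rk4_residual, x0, h, n) - exact))    # RK4 error, far smaller
```

This is also the numerical-analysis point raised earlier in the thread: at a fixed step budget, a higher-order residual buys far more accuracy than shrinking h.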

Nice video, but I really miss the connection point between the NNs and the math part. I have a PhD in physics, and I've worked a lot with the math you're talking about. Also, I've worked a few years as a data scientist, and I kind of understand how neural networks work.
But I really miss the point of how you make these two work together. Sorry if I sound dumb here.

digriz