Neural Networks for Dynamical Systems

This lecture shows how neural networks can be trained for use with dynamical systems, providing an efficient tool for time-stepping and forecasting.
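
As a minimal sketch of the idea, in the spirit of the lecture's MATLAB code (the Lorenz system and the layer sizes are the standard choices there, reconstructed here as assumptions):

    % simulate many Lorenz trajectories to build (x_k, x_{k+1}) training pairs
    sig = 10; b = 8/3; r = 28; dt = 0.01;
    rhs = @(t,x)([sig*(x(2)-x(1)); r*x(1)-x(2)-x(1)*x(3); x(1)*x(2)-b*x(3)]);
    input = []; output = [];
    for j = 1:100
        x0 = 30*(rand(3,1) - 0.5);              % random initial condition
        [~, X] = ode45(rhs, 0:dt:8, x0);
        input  = [input  X(1:end-1,:).'];       % states at t_k
        output = [output X(2:end,:).'];         % states at t_{k+1}
    end
    % train a feedforward network to act as the time-stepper
    net = feedforwardnet([10 10 10]);
    net = train(net, input, output);
    % forecast by iterating the learned map from a new initial condition
    x = 30*(rand(3,1) - 0.5);
    traj = zeros(3, 800);
    for k = 1:800
        x = net(x);
        traj(:,k) = x;
    end
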
Comments

One of the best professors in system analysis.

AliRadmehrir

Thank you for posting this great video, Professor!

dr.alikhudhair

The best way of demonstrating I have ever seen.

人類之信仰現代の精神

Excited to see how this can be used for IK in robotics, thanks for your time.

georgekerwood

Thank you for posting this great video, Professor! This technique seems quite simple, yet extremely powerful. I am curious what types of otherwise intractable systems might become tractable if one uses this technique with more powerful modern NNs.

mattkafker

Hi Professor Kutz,

I was wondering what the intuition was behind choosing your three activation functions (lines 25-27). I've seen logsig and tanh functions used a lot for the hidden layers of regression networks but I'm not sure what the radial basis function brings to the table and why it is placed in the middle.

In addition, I thought the output layer of MATLAB's regression networks is 'purelin' by default, so is the third hidden layer a bit redundant? Or is there a reason you chose to have the last hidden layer use a purelin activation function?
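
For reference, the lines in question look roughly like this (reconstructed from the video, so the layer sizes are my assumption):

    net = feedforwardnet([10 10 10]);        % three hidden layers
    net.layers{1}.transferFcn = 'logsig';    % sigmoid hidden layer
    net.layers{2}.transferFcn = 'radbas';    % radial-basis hidden layer
    net.layers{3}.transferFcn = 'purelin';   % linear hidden layer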

chrisprasanna

Great video, Nathan. I recently got your other book, "Data-Driven Modeling & Scientific Computation," and I'm enjoying it.

Anorve

Hi, thanks for the interesting video. Is there a way to include "b; sig; r" as input parameters for the neural network?
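
Something like the following might work as a sketch (my own guess, not from the lecture): augment each training pair with the parameter values used to generate it, so the network learns a parameter-dependent time-stepper.

    sig = 10; b = 8/3; r = 28; dt = 0.01;       % one parameter setting
    rhs = @(t,x)([sig*(x(2)-x(1)); r*x(1)-x(2)-x(1)*x(3); x(1)*x(2)-b*x(3)]);
    [~, X] = ode45(rhs, 0:dt:8, [1; 1; 1]);
    n = size(X,1) - 1;
    in  = [X(1:end-1,:).'; repmat([b; sig; r], 1, n)];   % state plus parameters
    out = X(2:end,:).';
    % repeat over a grid of (b, sig, r) values, concatenating columns,
    % before training, so the network can interpolate across parameters
    net = feedforwardnet([10 10 10]);
    net = train(net, in, out);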

suningok

I was searching around for a neural network architecture that takes in a multi-modal "image" at time T and predicts a multi-modal output at time T+1. But how would you represent actuators in this scheme? With the position of a motor as an input and a predicted output (y_a1 - x_a1) ... would that diff trigger a motor to actuate the world? Since LLMs are all based on next-frame prediction, it seems you could make a generic box and hook some inputs up to pixels and other inputs up to the positions of actuators; but it's mysterious how the output would control the actuators. I.e., if a neural network is trying to minimize surprise, then it can minimize surprise by actuating the world on its own.
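
A loose sketch of the wiring I have in mind (every name and size here is a hypothetical placeholder, not anything from the lecture):

    N = 200;
    X = rand(65, N);                 % 64 toy "pixels" + 1 actuator position at time T
    Y = rand(65, N);                 % toy targets: the same channels at time T+1
    net = feedforwardnet(20);
    net = train(net, X, Y);          % real training would use recorded sensor/motor data
    out = net(X(:,1));               % predicted next frame for one sample
    cmd = out(end) - X(end,1);       % predicted-minus-current actuator diff as a motor command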

rrrbb

What might be the advantage of training a NN to solve an IVP? It seems that to train a NN to solve the IVP, one must already have a way to generate trajectories.
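
(I suppose the payoff would be that the expensive solves happen only once, offline; a sketch assuming a network net already trained on trajectories as in the lecture:)

    x = [8; 7; 27];                  % a new initial condition, not in the training set
    traj = zeros(3, 800);
    for k = 1:800
        x = net(x);                  % one cheap forward pass replaces an adaptive ODE step
        traj(:,k) = x;
    end
    plot3(traj(1,:), traj(2,:), traj(3,:))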

TheSugarDealers

Does someone know how this approach changes when you also consider an input, u, in your system?
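
One guess (not from the lecture) would be to append u_k to the input so the network learns the map (x_k, u_k) -> x_{k+1}; a sketch with toy placeholder data:

    N = 500;
    X     = rand(3, N);              % states x_k (placeholder for real trajectory data)
    U     = rand(1, N);              % control inputs u_k (placeholder)
    Xnext = rand(3, N);              % states x_{k+1} (placeholder)
    net = feedforwardnet([10 10]);
    net = train(net, [X; U], Xnext);
    xpred = net([X(:,1); U(:,1)]);   % predicted next state given x_1 and u_1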

optimizacioneningenieria

Most complicated example I’ve seen in my entire life. And I’m very old.

dosomething