A Hands-on Introduction to Physics-informed Machine Learning

2021.05.26 Ilias Bilionis, Atharva Hans, Purdue University
Table of Contents below.

Can you make a neural network satisfy a physical law? There are two main types of such laws: symmetries and ordinary/partial differential equations. I will focus on differential equations in this short presentation. The simplest way to bake information about a differential equation into a neural network is to add a regularization term to the loss function used in training. I will explain the mathematics of this idea. I will also talk about applying physics-informed neural networks to a plethora of applications, ranging from solving differential equations for all possible parameters in one sweep (e.g., solving for all boundary conditions), to calibrating differential equations from data, to design optimization. Then we will work through a hands-on activity that shows you how to implement these ideas in PyTorch. I assume some familiarity with how conventional neural networks are trained (stochastic gradient descent).
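To make the regularization idea concrete, here is a minimal PyTorch sketch of fitting a network to an ODE by penalizing the squared residual of the equation at random collocation points. This is not the presenters' notebook: the specific ODE (chosen to match the formula quoted in the comments below), the network architecture, the domain [0, 2], and the initial-condition penalty are assumptions made purely for illustration.

import torch

torch.manual_seed(0)

# Psi_hat(x): a small fully connected network approximating the ODE solution.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Fresh mini-batch of collocation points in the (assumed) domain [0, 2].
    x = 2.0 * torch.rand(128, 1)
    x.requires_grad_(True)

    psi = net(x)
    # dPsi/dx via automatic differentiation.
    dpsi_dx = torch.autograd.grad(
        psi, x, grad_outputs=torch.ones_like(psi), create_graph=True
    )[0]

    # ODE residual: dPsi/dx - (exp(-x/5) cos(x) - Psi/5) should vanish.
    residual = dpsi_dx - (torch.exp(-x / 5.0) * torch.cos(x) - psi / 5.0)

    # Physics-informed loss: mean squared residual plus a penalty enforcing Psi(0) = 0.
    loss = (residual ** 2).mean() + net(torch.zeros(1, 1)).pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

After enough steps, net(x) approximates the ODE solution on the chosen domain; this is the same residual-as-regularizer construction described above, stripped to its essentials.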

Table of Contents:
00:00 A Hands-on Introduction to Physics-informed Machine Learning
01:57 Objective
02:08 Reminder - What are neural networks?
03:09 Reminder - How do we train neural networks?
04:28 Reminder - How do we train neural networks?
06:28 Illustrative Example 1: Solving an ODE
07:15 From ODE to a loss function
09:35 Solving the problem with stochastic gradient descent
10:59 Results (Part of Hands-on activity)
11:32 Illustrative Example 2: Solving an elliptic PDE
11:40 From PDEs to a loss function - Integrated squared approach
12:57 From PDEs to a loss function - Energy approach
14:36 I can already solve ODEs/PDEs. Why is this useful?
15:14 Illustrative Example 3: Solving PDEs for all possible parameterizations
16:31 Representing the solution of the PDE with a DNN
17:05 From PDEs to a loss function - Energy approach
18:02 One network for all kinds of random fields
18:19 One network for all kinds of random fields
19:03 What are the applications of this?
22:11 What is the catch?
24:04 Hands-on activity led by Atharva Hans
24:09 Demonstration
41:37 Q&A
Comments

Excellent presentation. Thanks for sharing it. The only issue is the bad audio quality.

vahidnikoofard

Thanks a lot. This is an exciting and promising direction for the evolution of NNs. Maybe I'm wrong, but the formula for the Dirichlet principle should contain the squared gradient of u(x, ...) (it can be obtained by multiplying the differential equation by u(x, ...) and integrating by parts).
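For reference, the textbook statement of the Dirichlet principle for the Poisson problem does contain the squared gradient; the following is the standard result, not a transcription of the slide in the video. For -\Delta u = f in \Omega with u = 0 on \partial\Omega, the solution minimizes

\[
  J[u] \;=\; \int_{\Omega} \Bigl( \tfrac{1}{2}\,\lvert\nabla u\rvert^{2} \;-\; f\,u \Bigr)\,\mathrm{d}x ,
\]

and multiplying the PDE by a test function \(v\) (vanishing on the boundary) and integrating by parts gives the weak form \(\int_{\Omega}\nabla u\cdot\nabla v\,\mathrm{d}x = \int_{\Omega} f\,v\,\mathrm{d}x\), which is the first-order optimality condition of \(J\).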

petroromanets

Nice and clear presentation. The Fourier features in the last network class worked excellently for my work. Can I somehow apply the same technique to images?
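For context, a random Fourier feature embedding is typically built like the sketch below; the class name, feature count, and scale here are illustrative, and the network class used in the demonstration may differ in its details.

import math
import torch

class FourierFeatures(torch.nn.Module):
    # Maps x to [cos(2*pi*x B), sin(2*pi*x B)] with a fixed random matrix B.
    def __init__(self, in_dim, num_features, scale=10.0):
        super().__init__()
        # B is sampled once and kept fixed (registered as a buffer, not trained).
        self.register_buffer("B", scale * torch.randn(in_dim, num_features))

    def forward(self, x):
        proj = 2.0 * math.pi * x @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

Regarding images: one common way to reuse the idea is to treat the normalized pixel coordinates (i, j) as the 2-D input of such an embedding and train a small MLP on top of it to predict the pixel values (coordinate-based image regression), rather than applying the embedding to the pixel intensities themselves.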

solomon

What an effective presentation. Is it possible to download the Jupyter notebook?

juliosdutra

I can't find the notebook; I only see the video and the presentation.

joseavalos

Are there publicly available codes for these examples?

TURALOWEN

How did you get the formula at 11:17, exp(-x / 5.0) * cos(x) - Psi / 5.0? Thank you for your help.
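For anyone else wondering: that expression is presumably the right-hand side of the ODE in Illustrative Example 1. Assuming the equation is dPsi/dx = exp(-x/5) cos(x) - Psi/5 with Psi(0) = 0 (an assumption based on the quoted formula, not a transcription of the slide), the closed-form solution can be checked directly:

\[
  \Psi(x) = e^{-x/5}\sin x
  \quad\Longrightarrow\quad
  \frac{d\Psi}{dx} = -\tfrac{1}{5}\,e^{-x/5}\sin x + e^{-x/5}\cos x
  = e^{-x/5}\cos x - \frac{\Psi(x)}{5},
  \qquad \Psi(0) = 0 .
\]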

facesquare

Sir, you should use a good mic for clearer audio. At the very least, subtitles should be provided. Thank you.

backbenchrs

Can you specify the reference [Raissi 2019]?

fehminajar

Could someone suggest more material (a book, maybe a course) to dive deeper into the field?

nikoluk

UPDATE: it's working now. Need to put energy_tensor[j, 0] = 0.5 * (torch.sum(F ** 2) - 2.0) - torch.log(torch.det(F)) + 50.0 * torch.log(torch.det(F)) ** 2

I'm getting "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn" for Example 2 in this notebook.
Is anyone else getting this?
It seems like energy_tensor.requires_grad is False, so I can't actually call l.backward().
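A generic illustration of that RuntimeError (not the notebook's exact code): backward() only works if the loss was produced by differentiable torch operations on tensors connected to the trainable parameters; routing a value through .item(), .detach(), or NumPy drops the graph, leaving requires_grad == False and no grad_fn.

import torch

w = torch.nn.Parameter(torch.tensor(1.0))
F = w * torch.eye(2)          # depends on w, so it carries a grad_fn

# Works: the scalar is built from differentiable torch ops on F.
energy = 0.5 * (torch.sum(F ** 2) - 2.0) - torch.log(torch.det(F))
energy.backward()             # gradients flow back to w

# Fails: .item() breaks the graph, so the stored value has no grad_fn.
bad = torch.zeros(1)
bad[0] = energy.item()
# bad.backward()  # RuntimeError: element 0 of tensors does not require grad ...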

chinamatt