DeepONet: Learning nonlinear operators based on the universal approximation theorem of operators

George Karniadakis, Brown University
Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions; a less known but powerful result is that a NN with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators suggests the potential of NNs for learning any continuous operator or complex system from scattered data. To realize this theorem, we design a new NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. We study, in particular, different formulations of the input function space and their effect on the generalization error.
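The branch/trunk construction described in the abstract can be illustrated with a minimal forward-pass sketch in plain NumPy (untrained random weights, purely for illustration): the branch net encodes the input function u sampled at m sensor points, the trunk net encodes an output location y, and the operator value G(u)(y) is approximated by the dot product of the two p-dimensional encodings. All layer sizes and names here are hypothetical choices, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random weights/biases for a small fully connected net (illustration only).
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Forward pass: tanh on hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 100, 20                      # m sensor points, p latent dimensions
branch = mlp_params([m, 40, p])     # encodes the input function u at m sensors
trunk = mlp_params([1, 40, p])      # encodes the output location y

def deeponet(u_sensors, y):
    # G(u)(y) ~ <branch(u), trunk(y)>: dot product of the two encodings.
    return np.sum(mlp(branch, u_sensors) * mlp(trunk, y), axis=-1)

# Example: evaluate the (untrained) surrogate operator
# for u(x) = sin(pi x) sampled on [0, 1], at the location y = 0.5.
xs = np.linspace(0.0, 1.0, m)
out = deeponet(np.sin(np.pi * xs), np.array([0.5]))
```

Training would fit both nets jointly on pairs of sampled input functions and operator values; the dot-product output structure is what makes the learned representation an operator rather than a single function.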
Seminar | From PINNs To DeepOnets... - George Em Karniadakis
EI 2023 Plenary 1: Neural Operators for Solving PDEs
DeepONet Tutorial in JAX
George Karniadakis - From PINNs to DeepOnets
George Karniadakis - Approximating functions, functionals and operators using DNNs
Learning operators using deep neural networks for multiphysics, multiscale, & multifidelity prob...
Comparative Study of Bubble Growth Dynamics with DeepONet
DDPS | Deep neural operators with reliable extrapolation for multiphysics & multiscale problems
Learning Physics Informed Machine Learning Part 3- Physics Informed DeepONets
HOW it Works: Deep Neural Operators (DeepONets)
Somdatta Goswami - Transfer Learning in Physics-Based Applications with Deep Neural Operators
Fractional physics informed neural networks, by Prof. George Karniadakis & Mr. Liu Yang
DDPS | Approximating functions, functionals, and operators using deep neural networks
Deep Operator Networks (DeepONet) [Physics Informed Machine Learning]
Prof. Yue Yu | Learning Neural Operators for Biological Tissue Modeling
Simulation By Data ONLY: Fourier Neural Operator (FNO)
Continuous-in-Depth Neural Networks through Interpretation of Learned Dynamics
Fourier Neural Operator on Navier Stokes Equation with viscosity=1e-4. Super-resolution.
Learning a PDE and its Solution + Underconstrained for AIChE
MOX Colloquia - George Em Karniadakis - 26/06/2020
TransferLab Seminar: Learning Function Operators with Neural Networks - Samuel Burbulla
Deep Learning: Known Operator Learning - Part 1