DDPS | Big Data Inverse Problems — Promoting Sparsity and Learning to Regularize by Matthias Chung

Abstract: Emerging fields such as data analytics, machine learning, and uncertainty quantification rely heavily on efficient computational methods for solving inverse problems. With growing model complexity and ever-increasing data volumes, state-of-the-art inference methods have exceeded their limits of applicability, and novel methods are urgently needed.
In this talk, we present novel methods for the broad spectrum of inverse problems in which the aim is to reconstruct quantities with a sparse representation in some vector space. The associated optimization problems with L1 regularization have received significant attention due to their wide applicability in compressed sensing, dictionary learning, and imaging problems, to name a few. We present a new method based on variable projection and describe a new approach that uses deep neural networks (DNNs) to obtain regularization parameters for solving inverse problems.
The aim of this talk is to engage students and faculty alike and initiate a discussion about future directions of computational mathematics in the world of big data and machine learning.
Bio: Matthias (Tia) Chung is an Associate Professor in the Department of Mathematics at Emory University, which he joined in 2022. Previously, he held faculty positions at Virginia Tech and Texas State University and was a Postdoctoral Fellow at Emory University. He holds a Dipl. math. (Master of Science equivalent) from the University of Hamburg, Germany, and a Dr. rer. nat. (Ph.D. equivalent) in Computational Mathematics from the University of Luebeck, Germany. Tia Chung is a Humboldt Fellow and an active member of the Society for Industrial and Applied Mathematics (SIAM).
Tia Chung's research concerns various forms of cross-disciplinary inverse problems, including scientific machine learning and iterative methods. Challenges such as ill-posedness, large scale, and uncertainty estimation are addressed by utilizing tools from, and developing methods for, regularization, randomized methods, Bayesian inversion, and scientific machine learning. Driven by applications, he and his lab develop and analyze efficient numerical methods with applications in -- but not limited to -- image processing, dynamical systems, and big data problems.
LLNL-VIDEO-847728
#LLNL #ArtificialIntelligence #DataDrivenPhysicalSimulations