Structure-preserving Approximate Bayesian Computation (ABC) for stochastic neuronal models
The presentation by Massimiliano Tamborrino, from the Department of Statistics at the University of Warwick, is part of the Pathways to the 2023 IHP thematic project "Random Processes in the Brain".
In this seminar, Tamborrino explains how ABC has become one of the major tools for parameter inference in complex mathematical models over the last decade. The method derives an approximate posterior density targeting the true (unavailable) posterior: massive numbers of simulations from the model, run under different parameter values, replace the intractable likelihood, and the parameters whose simulations best match the observed data are retained.

When applying ABC to stochastic models, deriving effective summary statistics and suitable distances is particularly challenging, since simulations from the model under the same parameter configuration produce different outputs. Moreover, since exact simulation from complex stochastic models is rarely possible, reliable numerical methods are needed.

The talk shows how to exploit the underlying structural properties of the model to construct ABC summaries that are less sensitive to the model's intrinsic stochasticity, and why reliable, property-preserving numerical (splitting) schemes matter for generating the synthetic data: the commonly used Euler-Maruyama scheme may fail drastically even with very small step sizes. The proposed approach is illustrated first on the stochastic FitzHugh-Nagumo model, and then on the broad class of partially observed Hamiltonian stochastic differential equations, in particular the stochastic Jansen-and-Rit neural mass model, with both simulated and real electroencephalography (EEG) data, for a single neural population and for a network of neural populations (ongoing work).
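As a rough illustration of the simulate-and-match idea described above, here is a minimal rejection-ABC sketch in Python. The simulator, summary statistic, distance, and prior are generic placeholders rather than the talk's specific choices, and the toy Gaussian example at the end is purely illustrative.

```python
import numpy as np

def rejection_abc(observed, simulate, summary, distance,
                  prior_sampler, n_draws=20_000, eps=0.1):
    """Basic rejection ABC: draw parameters from the prior, simulate
    synthetic data, and keep draws whose summaries land within eps
    of the observed summaries."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()          # candidate parameter
        x = simulate(theta)              # synthetic data set
        if distance(summary(x), s_obs) < eps:
            accepted.append(theta)       # approximate posterior sample
    return np.array(accepted)

# Toy usage: infer a Gaussian mean, using the sample mean as summary.
rng = np.random.default_rng(0)
obs = rng.normal(2.0, 1.0, size=200)
post = rejection_abc(
    observed=obs,
    simulate=lambda th: rng.normal(th, 1.0, size=200),
    summary=np.mean,
    distance=lambda a, b: abs(a - b),
    prior_sampler=lambda: rng.uniform(-5.0, 5.0),
)
print(post.mean(), post.size)            # posterior mean near 2.0
```

With a tolerance eps that shrinks as the simulation budget grows, the accepted draws approximate the posterior conditioned on the summary statistic; how good that approximation is depends precisely on the choice of summaries and distances, which is the focus of the talk.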
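To make the point about property-preserving splitting schemes concrete, the sketch below contrasts Euler-Maruyama with a Lie-Trotter splitting step for a hypoelliptic stochastic FitzHugh-Nagumo model. The parameterisation, the linear/nonlinear decomposition, and all numerical values are assumptions chosen for illustration and follow one common formulation of the model; they are not necessarily the exact scheme of the talk.

```python
import numpy as np
from scipy.linalg import expm

# Hypoelliptic stochastic FitzHugh-Nagumo (one common parameterisation;
# all values are illustrative):
#   dV = (V - V^3 - U) / eps dt
#   dU = (gamma * V - U + beta) dt + sigma dW
eps, gamma, beta, sigma = 0.1, 1.5, 0.8, 0.3

A = np.array([[1.0 / eps, -1.0 / eps],
              [gamma,     -1.0      ]])  # linear drift part
b = np.array([0.0, beta])                # constant drift part
Q = np.diag([0.0, sigma ** 2])           # Sigma Sigma^T (noise on U only)

def ou_step_factory(h):
    """Exact one-step map of the linear SDE dz = (A z + b) dt + Sigma dW,
    with the increment covariance from Van Loan's block-matrix trick."""
    E = expm(A * h)
    P = expm(np.block([[-A, Q], [np.zeros((2, 2)), A.T]]) * h)
    C = P[2:, 2:].T @ P[:2, 2:]          # int_0^h e^{As} Q e^{A's} ds
    C = 0.5 * (C + C.T) + 1e-14 * np.eye(2)
    L = np.linalg.cholesky(C)
    Ainv_b = np.linalg.solve(A, b)       # A is invertible here (gamma != 1)
    def step(z, rng):
        return E @ (z + Ainv_b) - Ainv_b + L @ rng.standard_normal(2)
    return step

def nonlinear_flow(z, h):
    """Exact flow of the remaining ODE dV = -V^3 / eps dt, dU = 0."""
    v, u = z
    return np.array([v / np.sqrt(1.0 + 2.0 * h * v * v / eps), u])

def splitting_path(z0, h, n, rng):
    """Lie-Trotter splitting: exact nonlinear flow, then exact OU step."""
    ou = ou_step_factory(h)
    z, out = np.asarray(z0, float), [np.asarray(z0, float)]
    for _ in range(n):
        z = ou(nonlinear_flow(z, h), rng)
        out.append(z)
    return np.array(out)

def euler_maruyama_path(z0, h, n, rng):
    """Euler-Maruyama on the full drift, for comparison."""
    z, out = np.asarray(z0, float), [np.asarray(z0, float)]
    for _ in range(n):
        drift = A @ z + b + np.array([-z[0] ** 3 / eps, 0.0])
        z = z + drift * h + np.array([0.0, sigma * np.sqrt(h)]) * rng.standard_normal()
        out.append(z)
    return np.array(out)

rng = np.random.default_rng(1)
print(splitting_path([0.0, 0.0], h=0.01, n=1000, rng=rng)[-1])
print(euler_maruyama_path([0.0, 0.0], h=0.01, n=1000, rng=rng)[-1])
```

The splitting step solves the linear stochastic part and the cubic deterministic part exactly and composes the two flows, so it respects the model's structure at every step size; Euler-Maruyama instead linearises the full drift over the step, which is the kind of scheme the abstract warns can fail even with very small step sizes.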