Flexible statistical inference for mechanistic models of neural dynamics
Abstract: One of the central goals of computational neuroscience is to understand the dynamics of single neurons and neural ensembles. However, linking mechanistic models of neural dynamics to empirical observations of neural activity has been challenging. Statistical inference is tractable for only a few models of neural dynamics (e.g. GLMs), and no generally applicable, effective inference algorithms are available. As a consequence, comparisons between models and data are either qualitative, or rely on manual parameter tweaking, heuristic parameter-fitting, or brute-force search (Druckmann et al. 2007). Furthermore, parameter-fitting approaches typically return a single best-fitting estimate, but do not characterize the entire space of models that would be consistent with the data.
We overcome this limitation by presenting a general method for Bayesian inference on mechanistic models of neural dynamics. Our approach can be applied in a 'black box' manner to a wide range of neural models without requiring model-specific modifications. In particular, it extends to models without explicit likelihoods (e.g. most spiking networks). We achieve this goal by building on recent advances in likelihood-free Bayesian inference (Papamakarios and Murray 2016, Moreno et al. 2016): the key idea is to simulate multiple data-sets from different parameters, and then to train a probabilistic neural network that approximates the mapping from data to posterior distribution.
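The recipe above can be illustrated with a minimal numpy sketch. Everything here is a hypothetical stand-in: the `simulate` function replaces a real neural dynamics model with a trivial noisy-mean simulator, and the "probabilistic neural network" is reduced to its simplest possible surrogate, a linear-Gaussian conditional density fit by least squares. The structure of the loop (sample parameters from the prior, simulate, fit a conditional density, condition on observed data) is the point, not the specific estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator standing in for a mechanistic neural model:
# maps a parameter theta to a data summary x (here, a noisy sample mean).
def simulate(theta, n_obs=20):
    return rng.normal(theta, 1.0, size=n_obs).mean()

# Step 1: draw parameters from the prior and simulate a data-set for each.
n_sims = 5000
thetas = rng.uniform(-5.0, 5.0, size=n_sims)        # prior over theta
xs = np.array([simulate(t) for t in thetas])

# Step 2: train a conditional density q(theta | x) on the simulated pairs.
# The talk uses a probabilistic neural network; this sketch uses a
# linear-Gaussian model fit by least squares as the simplest surrogate.
A = np.column_stack([xs, np.ones_like(xs)])
coef, *_ = np.linalg.lstsq(A, thetas, rcond=None)
sigma = (thetas - A @ coef).std()                   # posterior std estimate

# Step 3: condition on the observed data to read off an approximate posterior.
x_obs = simulate(1.5)                               # "observed" data-set
post_mean = coef[0] * x_obs + coef[1]
print(f"approx. posterior: N({post_mean:.2f}, {sigma:.2f}^2)")
```

Because training only requires the ability to simulate, nothing in this loop needs the model's likelihood, which is what lets the approach extend to spiking networks.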
We illustrate this approach using single- and multi-compartment models of single neurons and models of spiking networks. On simulated data, estimated posterior distributions recover ground-truth parameters, and reveal the manifold of parameters for which the model exhibits the same behaviour. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, and voltage traces simulated from the inferred parameters accurately match the empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neural dynamics models without having to design model-specific algorithms, closing the gap between biophysical and statistical approaches to neural dynamics.
Joint work with Jan-Matthis Lueckmann, Giacomo Bassetto, Kaan Oecal, Marcel Nonnenmacher and Jakob H. Macke.
About the speaker: Pedro has been a postdoc in Jakob Macke's lab since January 2016. He is broadly interested in building biologically constrained theoretical models (combining methods from dynamical systems, statistical physics and machine learning) to guide new experiments and, ultimately, in refining those models to further our understanding of neural systems.
Before joining Jakob Macke's lab, Pedro was a postdoctoral research fellow working with Maneesh Sahani at the Gatsby Computational Neuroscience Unit, UCL. His PhD was supervised by Christian Machens at the École normale supérieure in Paris (2012).