Laplace’s Demon: A Seminar Series about Bayesian Machine Learning at Scale

Machine learning is changing the world we live in at a breakneck pace. From image recognition and generation to the deployment of recommender systems, it seems to break new ground constantly, influencing almost every aspect of our lives. In this seminar series we ask distinguished speakers to comment on what role Bayesian statistics and Bayesian machine learning have in this rapidly changing landscape. Do we need to optimally process information or borrow strength in the big data era? Are philosophical concepts such as coherence and the likelihood principle relevant when you are running a large-scale recommender system? Are variational approximations, MCMC or EP appropriate in a production environment? Can I use the propensity score and call myself a Bayesian? How can I elicit a prior over a massive dataset? Is Bayes a reasonable theory of how to be perfect but a hopeless theory of how to be good? Do we need Bayes when we can just A/B test? What combinations of pragmatism and idealism can be used to deploy Bayesian machine learning in a large-scale live system? We ask Bayesian believers, Bayesian pragmatists and Bayesian sceptics to comment on all of these subjects and more.

The audience is machine learning practitioners and statisticians from academia and industry.

To stay informed, follow us on Twitter. We also have a Google Group for general announcements and discussions related to the seminar series; join the group here.

Full schedule

We have great speakers throughout the year, so please check out the full schedule below.

The registration link will allow you to see the time of the event in your time zone.

If you are interested in the seminar series as a whole, please join our list.
Note that you should register individually for each seminar.


Please note there is a limit of 500 registrations per event, so please only register if you plan to attend.

Past talks are listed below.

Date | Time (UTC) | Time (Paris) | Time (New York) | Speaker | Title | Recording
12 April 2022 | 15.00 | 17.00 | 11.00 | Eric Moulines | NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform |
19 April 2022 | 15.00 | 17.00 | 11.00 | Arnaud Doucet | Diffusion Schrödinger Bridges – From Generative Modeling to Posterior Simulation | Video
17 May 2022 | 15.00 | 17.00 | 11.00 | Darren Wilkinson | Compositional approaches to scalable Bayesian computation | Video
23 May 2022 | 15.00 | 17.00 | 11.00 | Alexandre Gilotte | Learning From Aggregated Data | Video
12 July 2022 | 15.00 | 17.00 | 11.00 | Mingyuan Zhou | Adaptive Diffusion-based Deep Generative Models | Video

Organisers

Scientific Committee

Nicolas Chopin (ENSAE)

Mike Gartrell (Criteo)

Alberto Lumbreras (Criteo)

David Rohde (Criteo)

Otmane Sakhi (Criteo)

Maxime Vono (Criteo)

Organising Committee

Tatiana Podgorbunschih (Criteo)

Previous Editions

July 21st, 2021

Yixin Wang

Representation Learning: A Causal Perspective
June 9th, 2021

Patrick Gallinari

Dynamical state-space models for videos: stochastic prediction and spatio-temporal disentanglement >> Replay
May 19th, 2021

Simon Barthelmé

Kernel matrices in the flat limit >> Replay
April 21st, 2021

Frank van der Meulen

Automatic Backward Filtering Forward Guiding for Markov processes and graphical models >> Replay
April 7th, 2021

Rémi Bardenet

Monte Carlo integration with repulsive point processes >> Replay
March 24th, 2021

Anthony Lee

A general perspective on the Metropolis–Hastings kernel – Part 2 >> Replay
March 17th, 2021

Christophe Andrieu

A general perspective on the Metropolis–Hastings kernel – Part 1 >> Replay
February 24th, 2021

Florence Forbes

Approximate Bayesian computation with surrogate posteriors >> Replay
February 10th, 2021

Art Owen

Backfitting for large scale crossed random effects regressions >> Replay
January 27th, 2021

Omiros Papaspiliopoulos

Scalable computation for Bayesian hierarchical models >> Replay
December 16th, 2020

Sara Wade & Karla Monterrubio-Gómez

On MCMC for variationally sparse Gaussian process: A pseudo-marginal approach >> Replay
December 2nd, 2020

Nicolas Chopin

The Surprisingly Overlooked Efficiency of Sequential Monte Carlo (and how to make it even more efficient) >> Replay
November 18th, 2020

Sarah Filippi

Interpreting Bayesian Deep Neural Networks Through Variable Importance
November 13th, 2020

Dawen Liang

Variational Autoencoders for Recommender Systems: A Critical Retrospective and a (Hopefully) Optimistic Prospective >> Replay
October 21st, 2020

Chris Oates and Takuo Matsubara

A Covariance Function Approach to Prior Specification for Bayesian Neural Networks >> Replay
September 23rd, 2020

Stephan Mandt

Compressing Variational Bayes >> Replay
September 16th, 2020

François Caron

Statistical models with double power-law behavior
September 9th, 2020

Maxime Vono

Efficient and parallel MCMC sampling using ADMM-type splitting >> Talk page >> Replay
September 4th, 2020

Pierre Latouche

Unsupervised Bayesian variable selection >> Replay
August 26th, 2020

Andrew Gelman

Bayesian Workflow >> Replay
July 29th, 2020

Cheng Zhang

Efficient element-wise information acquisition with Bayesian experimental design >> Talk page >> Replay
July 8th, 2020

Victor Elvira

Importance sampling as a mindset >> Talk page >> Replay
July 1st, 2020

John Ormerod

Cake priors for Bayesian hypothesis testing and extensions via variational Bayes >> Talk page >> Replay
June 24th, 2020

Aki Vehtari

Use of reference models in variable selection >> Talk page >> Replay
June 17th, 2020

Jake Hofman

How visualizing inferential uncertainty can mislead readers about treatment effects in scientific results >> Talk page >> Replay
May 13th, 2020

Christian Robert

Component-wise approximate Bayesian computation via Gibbs-like steps >> Talk page >> Replay