Laplace’s Demon: A Seminar Series about Bayesian Machine Learning at Scale

Machine learning is changing the world we live in at a breakneck pace. From image recognition and generation to the deployment of recommender systems, it seems to be breaking new ground constantly and influencing almost every aspect of our lives. In this seminar series we ask distinguished speakers to comment on what role Bayesian statistics and Bayesian machine learning have in this rapidly changing landscape. Do we need to optimally process information or borrow strength in the big data era? Are philosophical concepts such as coherence and the likelihood principle relevant when you are running a large scale recommender system? Are variational approximations, MCMC or EP appropriate in a production environment? Can I use the propensity score and call myself a Bayesian? How can I elicit a prior over a massive dataset? Is Bayes a reasonable theory of how to be perfect but a hopeless theory of how to be good? Do we need Bayes when we can just A/B test? What combinations of pragmatism and idealism can be used to deploy Bayesian machine learning in a large scale live system? We ask Bayesian believers, Bayesian pragmatists and Bayesian sceptics to comment on all of these subjects and more.

The audience is machine learning practitioners and statisticians from academia and industry.

To stay informed, follow us on Twitter, or join our Google Group for general announcements and discussions related to the seminar series.

Full schedule

We have great speakers over the whole year, so please check out the full schedule below.

The registration link will allow you to see the time of the event in your timezone.

If you are interested in the seminar series as a whole then please join our list.
You should register individually for each seminar.

Date      | Time UTC | Time Paris | Time New York | Speaker                 | Title                                                 | Registration Link
27 Jan 21 | 15.00    | 17.00      | 11.00         | Omiros Papaspiliopoulos | Scalable computation for Bayesian hierarchical models | Open
10 Feb 21 | 15.00    | 17.00      | 11.00         | Art Owen                | TBD                                                   | TBD
24 Feb 21 | 15.00    | 17.00      | 11.00         | Florence Forbes         | TBD                                                   | TBD

Please note there is a limit of 500 registrations per event, so please only register if you plan to attend.


Organisers

Scientific Committee

Nicolas Chopin (ENSAE)

Mike Gartrell (Criteo)

Alberto Lumbreras (Criteo)

David Rohde (Criteo)

Otmane Sakhi (Criteo)

Organising Committee

Alexandra Hayere

Carole Nouet

Previous Editions

December 16th, 2020

Sara Wade & Karla Monterrubio-Gómez

On MCMC for variationally sparse Gaussian process: A pseudo-marginal approach >> Replay
December 2nd, 2020

Nicolas Chopin

The Surprisingly Overlooked Efficiency of Sequential Monte Carlo (and how to make it even more efficient) >> Replay
November 18th, 2020

Sarah Filippi

Interpreting Bayesian Deep Neural Networks Through Variable Importance
November 13th, 2020

Dawen Liang

Variational Autoencoders for Recommender Systems: A Critical Retrospective and a (Hopefully) Optimistic Prospective >> Replay
November 4th, 2020

Pierre Latouche

Unsupervised Bayesian variable selection >> Replay
October 21st, 2020

Chris Oates and Takuo Matsubara

A Covariance Function Approach to Prior Specification for Bayesian Neural Networks >> Replay
September 23rd, 2020

Stephan Mandt

Compressing Variational Bayes >> Replay
September 16th, 2020

François Caron

Statistical models with double power-law behavior
September 9th, 2020

Maxime Vono

Efficient and parallel MCMC sampling using ADMM-type splitting >> Talk page
>> Replay
August 26th, 2020

Andrew Gelman

Bayesian Workflow >> Replay
July 29th, 2020

Cheng Zhang

Efficient element-wise information acquisition with Bayesian experimental design >> Talk page
>> Replay
July 8th, 2020

Victor Elvira

Importance sampling as a mindset >> Talk page
>> Replay
July 1st, 2020

John Ormerod

Cake priors for Bayesian hypothesis testing and extensions via variational Bayes >> Talk page
>> Replay
June 24th, 2020

Aki Vehtari

Use of reference models in variable selection >> Talk page
>> Replay
June 17th, 2020

Jake Hofman

How visualizing inferential uncertainty can mislead readers about treatment effects in scientific results >> Talk page
>> Replay
May 13th, 2020

Christian Robert

Component-wise approximate Bayesian computation via Gibbs-like steps >> Talk page
>> Replay