Abstracts
Iain Murray
(University of Edinburgh)
Learning priors, likelihoods, or posteriors
Abstract.
As the description of the workshop states: variational and
Monte Carlo methods are currently the mainstream techniques
for approximate Bayesian inference. However, we can also
apply machine learning models to solve inference problems in
several ways. Firstly, there's no point doing careful
Bayesian inference if the model is silly. We can represent
good models, often with hard-to-specify priors or expensive
likelihoods, with surrogates learned from data. Secondly, we
can learn how to do inference from experience or simulated
data. However, this is a workshop, so we can have a friendly
conversation... There's a huge choice of what to do here; frankly, it's often not clear what the best approach is, and there are many open theoretical questions. I'll give
some thoughts, but may raise more questions than answers.
Yingzhen Li
(University of Cambridge)
Gradient Estimators for Implicit Models
Abstract.
This talk is organised in two parts. First, I will revisit fundamental tractability issues in Bayesian computation and argue that density evaluation of the approximate posterior is mostly unnecessary. Then I will present recent work of ours on an algorithm for fitting implicit posterior distributions. In a nutshell, we propose a gradient estimation method that allows variational inference to be applied to approximate distributions that lack a tractable density.
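As a rough illustration of what such a gradient estimator can look like, the sketch below estimates the score (the gradient of the log density) of an implicit distribution purely from its samples, using an RBF kernel and a ridge-regularised solve of the Monte Carlo form of Stein's identity. The kernel choice, bandwidth sigma, and regulariser eta are assumptions for the example, not necessarily the estimator presented in the talk.

    import numpy as np

    def kernel_score_estimator(X, sigma=1.0, eta=1e-3):
        """Estimate grad_x log q(x) at the samples X (shape M x D) drawn from
        an implicit distribution q, using an RBF kernel and a ridge-regularised
        solve of the Monte Carlo version of Stein's identity."""
        M = X.shape[0]
        diff = X[:, None, :] - X[None, :, :]                         # (M, M, D): x_i - x_j
        K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))   # (M, M)
        # grad_K[j, d] = sum_i d k(x_i, x_j) / d x_i[d]
        grad_K = np.sum(-diff / sigma ** 2 * K[:, :, None], axis=0)
        # Solve (K + eta * I) G = -grad_K; row i of G approximates the score at x_i.
        return -np.linalg.solve(K + eta * np.eye(M), grad_K)

    # Quick check on a standard Gaussian, whose true score is -x:
    samples = np.random.default_rng(0).standard_normal((200, 2))
    print(kernel_score_estimator(samples)[:3])
    print(-samples[:3])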
Variational Autoencoders for Recommendation
Abstract.
In this talk, I will present how we extend variational
autoencoders (VAEs) to collaborative filtering for implicit
feedback. We introduce a different regularization parameter
for the learning objective, which proves to be crucial for
achieving competitive performance. The resulting model and learning algorithm have information-theoretic connections to maximum entropy discrimination and the information bottleneck principle, as well as to much recent work on understanding the trade-offs in learning latent variable models with VAEs. Empirically, we show that the proposed
approach significantly outperforms state-of-the-art
baselines on several real-world datasets. Finally, we
identify the pros and cons of employing a principled
Bayesian inference approach and characterize settings where
it provides the most significant improvements.
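The "different regularization parameter" mentioned above amounts to re-weighting the KL term in the VAE objective. A minimal sketch, assuming a diagonal Gaussian approximate posterior; the weight beta and the function name are illustrative, not the exact objective used in the work.

    import numpy as np

    def negative_beta_elbo(log_likelihood, mu, logvar, beta):
        """Negative ELBO with the KL regulariser scaled by beta.
        log_likelihood: per-user reconstruction term, shape (N,)
        mu, logvar:     parameters of a diagonal Gaussian q(z|x), shape (N, K)
        beta:           weight on KL(q(z|x) || N(0, I)); beta = 1 recovers the
                        standard ELBO, beta < 1 weakens the regulariser."""
        kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)
        return np.mean(-(log_likelihood - beta * kl))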
Variational inference in deep Gaussian processes
Abstract.
Combining deep nets with probabilistic reasoning is
challenging, because uncertainty needs to be propagated
across the neural network during inference. This comes in
addition to the (easier) propagation of gradients. In this talk I will discuss a family of variational approximation methods developed to tackle this computational issue in deep Gaussian processes, which can be seen as
non-parametric Bayesian neural networks.
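To illustrate the general idea of propagating uncertainty, rather than point estimates, through layers, here is a toy Monte Carlo sketch in which simple Gaussian-weight layers stand in for GP layers; it shows samples being pushed through the network and summarised at the output, and is not the specific deep GP approximation discussed in the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_layer(F_in, W_mean, W_logvar, noise_var):
        """One toy stochastic layer: sample weights from a Gaussian variational
        posterior, apply a nonlinearity, and add output noise, so uncertainty
        from earlier layers is carried forward in the samples."""
        W = W_mean + np.exp(0.5 * W_logvar) * rng.standard_normal(W_mean.shape)
        F = np.tanh(F_in @ W)
        return F + np.sqrt(noise_var) * rng.standard_normal(F.shape)

    def propagate(x, layers, n_samples=50):
        """Push n_samples stochastic forward passes of x through the layers and
        return the Monte Carlo mean and standard deviation of the output."""
        outs = []
        for _ in range(n_samples):
            F = x
            for W_mean, W_logvar, noise_var in layers:
                F = stochastic_layer(F, W_mean, W_logvar, noise_var)
            outs.append(F)
        outs = np.stack(outs)
        return outs.mean(axis=0), outs.std(axis=0)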
Differential privacy and Bayesian learning
Abstract.
Differential privacy allows deriving strong privacy
guarantees for algorithms using private data. In my talk I
will introduce and review approaches to differentially private Bayesian learning that build upon different forms of exact and approximate inference.
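One standard building block behind several such approaches is to perturb a data-dependent quantity (for example a sufficient statistic or a gradient used in a posterior update) with calibrated noise. Below is a minimal sketch of the Gaussian mechanism, using the classical (epsilon, delta) noise bound (valid for epsilon <= 1); all names and values are illustrative, not the specific methods surveyed in the talk.

    import numpy as np

    def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
        """Release `value` with (epsilon, delta)-differential privacy by adding
        Gaussian noise calibrated to its L2 sensitivity (classical bound,
        valid for epsilon <= 1)."""
        rng = rng or np.random.default_rng()
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return np.asarray(value) + rng.normal(0.0, sigma, size=np.shape(value))

    # Example: privatise a clipped sum of per-user statistics (L2 sensitivity = 1).
    noisy_sum = gaussian_mechanism(value=np.array([12.3, 4.7]), sensitivity=1.0,
                                   epsilon=0.5, delta=1e-5)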