The ever-increasing size of data sets has spurred an immense effort in Bayesian statistics to develop more expressive probabilistic models. Inference in these models remains a challenge, however, and limits their use in large-scale scientific and industrial applications. We must therefore resort to approximate inference, which must be computationally efficient on massive and streaming data without compromising on the complexity of these models. This workshop aims to bring together researchers and practitioners to discuss recent advances in approximate inference, as well as the methodological and foundational issues in such techniques, with a view toward future improvements.
The resurgence of interest in approximate inference has spurred development of many techniques: for example, scalability, black-box techniques, and richer posterior dependencies in variational inference; divide-and-conquer approaches to expectation propagation; dimensionality reduction using random projections; and stochastic variants of Laplace-approximation-based methods. Despite this interest, there remain significant trade-offs in speed, accuracy, generalizability, and learned model complexity. In this workshop, we will discuss how to rigorously characterize these trade-offs, as well as how they might be made more favourable. We will also address the adoption of these methods in scientific communities, which could benefit from practical guidance on their usage and from the development of relevant software packages.
This workshop is a continuation of workshops held in past years:
Invited speakers
Panel: Tricks of the Trade
Panel: On the Foundations and Future of Approximate Inference