
Gareth Roberts
Principled subsampling and super-efficiency for Bayesian inference
This talk will discuss the problem of Bayesian computation for posterior
densities which are expensive to compute, typically due to the size of
the data set under consideration. While subsampling is used effectively
for optimisation with large data sets, the problem of fully Bayesian
posterior exploration is harder and invariably leads to systematic
biases in estimation. Two potential solutions to this problem will be
presented. Although both use subsampling, they are examples of
so-called “exact-approximate” algorithms with no systematic bias. The
first is the ScaLE algorithm, which works in a framework combining MCMC
and SMC to realise an evanescent Markov process whose quasi-stationary
distribution is the target distribution. The second is an example of a
Piecewise Deterministic Markov Process, the so-called Zig-Zag algorithm,
which utilises a continuous-time non-reversible Markov process whose
stationary distribution is the required target.
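As an illustration of the kind of dynamics described above (a minimal sketch, not the authors' implementation), the one-dimensional Zig-Zag process for a standard Gaussian target moves at constant velocity ±1 and flips direction at events of rate max(0, θ·U′(x)), where U(x) = x²/2; for this target the event times can be drawn exactly by inverting the integrated rate. The subsampling ("super-efficient") version in the referenced paper replaces the exact rate with an unbiased estimate; that step is omitted here.

```python
import math
import random

def zigzag_gaussian(n_events=10000, seed=1):
    """Minimal 1-D Zig-Zag sampler for a standard Gaussian target.

    Switching rate: lambda(x, theta) = max(0, theta * x), the positive
    part of theta * U'(x) with U(x) = x^2 / 2.  Between events the
    position moves linearly; at an event the velocity flips sign.
    """
    rng = random.Random(seed)
    x, theta = 0.0, 1.0
    t = 0.0
    path = [(0.0, x, theta)]          # (time, position, velocity) at each event
    for _ in range(n_events):
        a = theta * x                 # along the ray, rate(s) = max(0, a + s)
        e = -math.log(rng.random())   # Exp(1) variate
        if a >= 0.0:
            # solve a*tau + tau^2/2 = e for the next event time
            tau = math.sqrt(a * a + 2.0 * e) - a
        else:
            # rate is zero until s = -a, then grows linearly from zero
            tau = -a + math.sqrt(2.0 * e)
        t += tau
        x += theta * tau              # deterministic linear motion
        theta = -theta                # velocity flip at the event
        path.append((t, x, theta))
    return path
```

Time averages along the piecewise-linear path converge to expectations under N(0, 1); for instance, the time-averaged second moment of the position approaches 1.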
References
Bierkens, J., Fearnhead, P., and Roberts, G. (2019). The Zig-Zag process and super-efficient sampling for Bayesian analysis of big data. The Annals of Statistics, 47(3), 1288-1320.
Pollock, M., Fearnhead, P., Johansen, A. M., and Roberts, G. O. (2017). The scalable Langevin exact algorithm: Bayesian inference for big data. arXiv preprint arXiv:1609.03436.

Holger Dette
Functional data analysis in the Banach space of continuous functions
Functional data analysis is typically conducted within the L^2 Hilbert space framework. There is by now a fully developed statistical toolbox allowing for the principled application of the functional data machinery to real-world problems, often based on dimension reduction techniques such as functional principal component analysis. At the same time, there have recently been a number of publications that sidestep dimension reduction steps and focus on a fully functional L^2 methodology. This paper goes one step further and develops data analysis methodology for functional time series in the space of all continuous functions. The work is motivated by the fact that objects with rather different shapes may still have a small L^2 distance and are therefore identified as similar when using an L^2 metric. However, in applications it is often desirable to use metrics reflecting the visualization of the curves in the statistical analysis. The methodological contributions are focused on developing two-sample and change-point tests as well as confidence bands, as these procedures appear to be conducive to the proposed setting. Particular interest is put on relevant differences; that is, on not trying to test for exact equality, but rather for pre-specified deviations under the null hypothesis.
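The motivating phenomenon, curves whose shapes differ markedly while their L^2 distance is negligible, is easy to reproduce numerically. The toy example below (grid size and spike width are arbitrary illustrative choices, not from the paper) compares a flat curve with one carrying a narrow spike: the sup-norm distance equals the full spike height, while the L^2 distance is close to zero.

```python
import numpy as np

# Two curves on [0, 1]: f is identically zero, g has a narrow triangular
# spike of height 1 at t = 0.5 (spike half-width eps is an arbitrary choice).
t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
eps = 0.001
f = np.zeros_like(t)
g = np.maximum(0.0, 1.0 - np.abs(t - 0.5) / eps)

# L^2 distance via a Riemann sum: sqrt(integral of (f - g)^2) ~ 0.026
l2_dist = np.sqrt(np.sum((f - g) ** 2) * dt)
# Sup-norm distance: the spike is fully visible, distance = 1
sup_dist = np.max(np.abs(f - g))
```

Under the L^2 metric the two curves are nearly identical, yet visually (and in sup norm) they are far apart, which is the situation the continuous-function framework is designed to handle.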
Dette, H., Kokot, K., and Aue, A. (2019). Functional data analysis in the Banach space of continuous functions. Annals of Statistics, to appear; arXiv preprint arXiv:1710.07781v2.
