# Van Dantzig Seminar

#### nationwide series of lectures in statistics


## Van Dantzig Seminar: 3 June 2013

#### Programme (titles and abstracts below)

| Time | Programme |
| --- | --- |
| 14:00 – 14:05 | Opening |
| 14:05 – 15:05 | Richard Samworth (University of Cambridge) |
| 15:05 – 15:25 | Break |
| 15:25 – 16:25 | Subhashis Ghosal (North Carolina State University) |
| 16:25 – 16:30 | Organizational discussion |
| 16:30 – 17:30 | Drinks |

Location: Amsterdam Science Park, UvA Science Building, Room C1.112

## Titles and abstracts

• Richard Samworth

Log-concave density estimation with applications

Log-concave densities on $\mathbb{R}^d$ form an attractive infinite-dimensional class that includes many standard parametric families and has several useful properties. For instance, in the context of density estimation, the log-concave maximum likelihood estimator is a fully automatic nonparametric estimator, with no smoothing parameters to choose. More generally, I will discuss ideas of log-concave projection and its relevance for density estimation, regression, testing and Independent Component Analysis problems.
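As a minimal numerical illustration of the defining property (concavity of the log-density, not the maximum likelihood estimator itself), the sketch below checks second differences of a log-density on a grid; the grid bounds and tolerance are arbitrary choices for this example. The Gaussian belongs to the log-concave class, while the heavy-tailed Cauchy does not:

```python
import math

def log_gauss(x):
    # log-density of N(0, 1), up to the normalizing constant
    return -0.5 * x * x

def log_cauchy(x):
    # log-density of the standard Cauchy, up to the normalizing constant
    return -math.log(1.0 + x * x)

def is_log_concave_on_grid(log_f, lo=-10.0, hi=10.0, n=2001):
    """Check concavity of log_f via second differences on an equispaced grid:
    a concave function has log_f(x+h) - 2*log_f(x) + log_f(x-h) <= 0."""
    h = (hi - lo) / (n - 1)
    xs = [lo + i * h for i in range(n)]
    return all(
        log_f(xs[i + 1]) - 2 * log_f(xs[i]) + log_f(xs[i - 1]) <= 1e-12
        for i in range(1, n - 1)
    )
```

Here `is_log_concave_on_grid(log_gauss)` holds, whereas the Cauchy fails the check for |x| > 1, where its log-density is convex.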

• Subhashis Ghosal

Bayesian methods for high dimensional models: Convergence issues and computational challenges

In modern statistical applications, it is very common to encounter observations that can only be represented as objects of very high dimension, typically far exceeding the available sample size. In spite of the very high complexity of the data, a key feature that allows valid statistical analysis is sparsity, which essentially means that many components of the observations are irrelevant. Classical statistical methods, typically based on penalization techniques, have been developed and their convergence properties have been studied. More recently, Bayesian methods for high-dimensional observations have been considered. A particularly satisfying feature of a Bayesian method is the assessment of uncertainty for each conceivable submodel arising out of all possible sparsity structures. Sparsity is easily induced in a Bayesian framework by placing a mixture of a point mass and a continuous distribution as a prior on the underlying parameters. However, in most situations, the resulting posterior computation involves Markov chain Monte Carlo sampling, which is required to move over an enormous space of models and hence is not practically feasible. We discuss some examples, such as additive regression, covariance estimation using graphical models, covariance estimation using the Bayesian graphical lasso, and nonparametric density regression using a finite random series prior, where posterior computation does not require Markov chain Monte Carlo methods but can be carried out using conjugacy, Laplace approximation, or direct sampling. We investigate convergence rates and oracle properties for these problems.
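As a toy illustration of how conjugacy can replace Markov chain Monte Carlo in the simplest case (a single observation; all parameter values here are hypothetical, not from the talk), take y | θ ~ N(θ, σ²) with a spike-and-slab prior θ ~ (1 − w)·δ₀ + w·N(0, τ²). Marginally, y ~ N(0, σ²) under the spike and y ~ N(0, σ² + τ²) under the slab, so the posterior probability that θ ≠ 0 follows from Bayes' rule in closed form:

```python
import math

def normal_pdf(y, var):
    # density of N(0, var) evaluated at y
    return math.exp(-0.5 * y * y / var) / math.sqrt(2 * math.pi * var)

def posterior_inclusion_prob(y, w=0.5, sigma2=1.0, tau2=9.0):
    """Posterior probability that theta != 0 under the spike-and-slab prior
    theta ~ (1 - w) * delta_0 + w * N(0, tau2), with y | theta ~ N(theta, sigma2).
    Computed exactly from the two marginal likelihoods -- no sampling needed."""
    slab = w * normal_pdf(y, sigma2 + tau2)       # marginal under the slab
    spike = (1 - w) * normal_pdf(y, sigma2)       # marginal under the spike
    return slab / (slab + spike)
```

An observation far from zero pushes the inclusion probability toward 1, while y near zero favours the point mass; with many parameters, the combinatorial model space is what makes sampling-based versions of this computation expensive.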