The seminar will be online. Zoom link: TBA
Meeting ID: TBA
Passcode: TBA
Titles and abstracts
|
-
Ryan Martin
Imprecise probability and valid statistical inference
Statistical inference aims to quantify uncertainty about unknowns based on data. To formalize this, an inferential model (IM) is a mapping from data, etc., to a capacity on the parameter space that assigns data-driven degrees of belief to assertions about the unknowns; this generalizes Bayes, fiducial, and other distribution-based inference approaches. Important questions include: what statistical properties should an IM satisfy, and what do these properties imply about the mathematical structure of its capacity? In this talk, I'll start by defining a "validity" property and describing its consequences. Then I'll summarize some recent results saying that (a) an IM whose capacity is a precise/additive probability can't be valid, and (b) achieving validity, and frequentist error rate control more generally, is very closely tied to IMs whose capacities are nice imprecise/non-additive probabilities. Illustrations and practical implications of these results will be presented. I'll conclude with some additional details about the interpretation and possible generalizations of the validity property, and some open questions.
This talk is largely based on work presented in https://researchers.one/articles/21.01.00002
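For orientation, one common way to state the validity property in this line of work (paraphrased here in generic notation; the precise definition is in the linked article) is in terms of the IM's data-dependent plausibility, i.e. its upper probability \overline{\Pi}_X: validity requires

  \sup_{\theta \in A} \mathsf{P}_{X \mid \theta}\{ \overline{\Pi}_X(A) \le \alpha \} \le \alpha, \quad \text{for all assertions } A \subseteq \Theta \text{ and all } \alpha \in [0,1],

so that a true assertion is rarely assigned small plausibility by the data. Result (a) above says that an IM whose capacity is precise/additive cannot meet this requirement in general.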
-
Aaron Smith
Free Lunches and Approximate Markov chain Monte Carlo
It is widely known that the performance of MCMC algorithms can degrade quite quickly when targeting computationally expensive posterior distributions, including the posteriors associated with any large dataset. This has motivated the search for MCMC variants that scale well for large datasets. One general approach, taken by several research groups, has been to look at only a subsample of the data at every step. In this talk, we'll discuss some basic "no-free-lunch" results that sometimes provide limits on the performance of many such algorithms. We'll then apply these generic results to realistic statistical problems and proposed algorithms. Finally, I'll discuss some of the many examples that can avoid our generic results; some of these seem to provide a free (or at least cheap) lunch, while others are (to my knowledge) open problems.
(Based primarily on work with James Johndrow and Natesh Pillai, as well as Patrick Conrad, Andrew Davis, Youssef Marzouk, Tanya Schmah, Pengfei Wang and Aimeric Zoungrana.)
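As a rough illustration of the "look at only a subsample of the data at every step" idea described above (a generic sketch, not the specific algorithms analyzed in the talk; the Gaussian model, variable names, and tuning constants are invented for the example), the following Python snippet runs random-walk Metropolis steps whose acceptance ratio uses a minibatch estimate of the log-likelihood instead of the full data:

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=10_000)   # stand-in for a "large" dataset
n, batch_size = len(data), 100

def subsampled_loglik(theta):
    """Minibatch estimate of the full-data Gaussian log-likelihood (up to a constant)."""
    batch = rng.choice(data, size=batch_size, replace=False)
    return (n / batch_size) * np.sum(-0.5 * (batch - theta) ** 2)

def mh_step(theta, step_size=0.05):
    """One random-walk Metropolis step driven by the noisy subsampled log-likelihood."""
    proposal = theta + step_size * rng.normal()
    log_ratio = subsampled_loglik(proposal) - subsampled_loglik(theta)
    return proposal if np.log(rng.uniform()) < log_ratio else theta

theta, draws = 0.0, []
for _ in range(2_000):
    theta = mh_step(theta)
    draws.append(theta)
print(np.mean(draws[500:]))   # drifts toward the data mean (~1.0), but the noisy
                              # acceptance ratio perturbs the stationary distribution

Because the acceptance ratio is computed from a noisy estimate, the chain no longer targets the exact posterior; how much such approximations can help or hurt is the kind of trade-off the no-free-lunch results above speak to.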
|
The Van Dantzig seminar is a nationwide series of lectures in statistics,
which features renowned international and local speakers from the full breadth of
the statistical sciences. The name honours David van Dantzig (1900-1959), who
was the first modern statistician in the Netherlands, and professor in the "Theory
of Collective Phenomena" (i.e. statistics) in Amsterdam. The seminar will convene
4 to 6 times a year at varying locations, and is supported financially by, among others,
the STAR cluster and the Section Mathematical Statistics of the VVS-OR.
|
Supported by