Aaditya Ramdas – Sequential uncertainty quantification

A large fraction of published research in top journals in applied sciences such as medicine and psychology has been claimed to be irreproducible. In light of this 'replicability crisis', traditional methods for hypothesis testing, most notably those based on p-values, have come under intense scrutiny. One central problem is the following: if our test result is promising but inconclusive (say, p = 0.07), we cannot simply decide to gather a few more data points. While this practice is ubiquitous in science, it invalidates p-values and their error guarantees, and it makes the results of standard meta-analyses very hard to interpret. This issue is not unique to p-values: other approaches, such as replacing testing by estimation with confidence intervals, suffer from similar 'optional continuation' problems. Over the last few years, several distinct but closely related solutions have been proposed, such as anytime-valid confidence sequences and p-values, and safe tests. Remarkably, all these approaches can be understood in terms of (sequential) gambling. One formulates a gambling strategy under which one would not expect to gain any money if the null hypothesis were true. If, for the given data, one would have won a large amount of money in this game, this provides evidence against the null hypothesis. The test statistic of traditional statistics is replaced by the gambling strategy; the p-value is replaced by the (virtual) amount of money gained. In more mathematical terms, evidence against the null and confidence sets are derived in terms of nonnegative supermartingales. While this idea in essence goes back to Wald's sequential testing of the 1940s and its extensions by Robbins and collaborators in the early 1960s and Lai in the 1970s, it never really caught on, because it used to be applicable only to very simple statistical models and testing scenarios.
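The gambling idea can be made concrete with a minimal sketch (my own illustration, not code from the sources above): to test the null hypothesis that a coin is fair, repeatedly bet a fixed fraction of current wealth on heads. Under the null, wealth is a nonnegative martingale, so by Ville's inequality the probability that it ever reaches 1/alpha is at most alpha, and rejecting when it does gives a valid sequential test. The betting fraction `lam` and the simulated bias 0.9 are arbitrary choices for this example.

```python
import random

def betting_test(xs, lam=0.3, alpha=0.05):
    """Sequential test of H0: P(X = 1) = 0.5 by betting.

    Wealth W_n = prod_i (1 + lam * (2*x_i - 1)) is a nonnegative
    martingale under H0 (each factor has expectation 1), so by
    Ville's inequality P(sup_n W_n >= 1/alpha) <= alpha.
    """
    wealth = 1.0
    for n, x in enumerate(xs, 1):
        wealth *= 1.0 + lam * (2 * x - 1)  # win lam per unit on heads, lose on tails
        if wealth >= 1.0 / alpha:
            return n, wealth  # reject H0 at time n, validly
    return None, wealth  # never enough evidence to reject

# A heavily biased coin (P(heads) = 0.9) is rejected quickly:
random.seed(0)
biased = [1 if random.random() < 0.9 else 0 for _ in range(200)]
stop, w = betting_test(biased)
```

Note that the test can stop the moment the wealth crosses the threshold, and gathering more data after a non-rejection remains valid: this is exactly the optional-continuation property that classical p-values lack.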
However, recent work shows that this idea is essentially universally applicable: one can design supermartingales for large classes of nonparametric tests and many estimation problems, and one can analyze them using novel tools such as nonasymptotic versions of the law of the iterated logarithm. These directions also go some way towards uniting Bayesian and frequentist ways of thinking: prior knowledge can be incorporated explicitly, while correct frequentist inference often employs Bayesian techniques.

Anytime-valid, safe confidence intervals and p-values (package) (tutorial)
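To illustrate what an anytime-valid confidence sequence looks like, here is a minimal sketch (my own illustration, assuming 1-sub-Gaussian observations; not code from the package referenced above) of Robbins' normal-mixture confidence sequence. Mixing the exponential supermartingale exp(lambda*S_t - lambda^2*t/2) over a Gaussian prior on lambda with precision `rho` yields a boundary valid simultaneously at all sample sizes; `rho` is an arbitrary tuning choice here.

```python
import math
import random

def mixture_cs(xs, alpha=0.05, rho=1.0):
    """Robbins' normal-mixture confidence sequence for the mean of
    1-sub-Gaussian observations.

    The mixture supermartingale sqrt(rho/(t+rho)) * exp(S_t^2 / (2*(t+rho)))
    exceeds 1/alpha only with probability <= alpha (Ville's inequality),
    giving intervals that cover the true mean at EVERY time t at once.
    """
    s, intervals = 0.0, []
    for t, x in enumerate(xs, 1):
        s += x
        width = math.sqrt(
            (t + rho) * (2 * math.log(1 / alpha) + math.log((t + rho) / rho))
        ) / t
        intervals.append((s / t - width, s / t + width))
    return intervals

# Gaussian data with true mean 0.2; the running intervals shrink
# roughly like sqrt(log(t)/t) rather than the fixed-n rate sqrt(1/t).
random.seed(1)
data = [random.gauss(0.2, 1.0) for _ in range(1000)]
intervals = mixture_cs(data)
lo, hi = intervals[-1]
```

Because coverage holds uniformly over time, one may peek at the interval after every observation and stop whenever it is tight enough, without invalidating the guarantee.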
Multi-armed bandits

All of the aforementioned techniques come in handy when designing new algorithms for multi-armed bandit problems, as well as for understanding, in considerable generality, what existing algorithms are doing.
