Aaditya Ramdas – Distribution-free uncertainty quantification

It is of great practical interest to quantify the uncertainty of predictions from regression and classification algorithms used in machine learning. One framework that produces calibrated prediction sets with no distributional assumptions (beyond exchangeability of the data) is conformal prediction, which has been studied by Vladimir Vovk and colleagues since around 2000. At a high level, the idea is to train the regression algorithm one or more times and use either in-sample residuals on the training set or out-of-sample residuals on a holdout set to quantify and calibrate the uncertainty of the prediction at a new point. Remarkably, the validity of the method does not depend on which ML algorithm was used, black box or not, nor on any other model assumptions; the length or size of the prediction set will naturally depend on these factors, but its coverage guarantee does not. Another interesting problem is classifier calibration, which can also be achieved without distributional assumptions. I am actively working on theoretical and practical problems in this area. You can also find some software packages linked below.
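
To make the holdout-residual idea concrete, here is a minimal sketch of split conformal prediction for regression. It is illustrative only: the synthetic data, the choice of scikit-learn's LinearRegression, and the 50/50 split are assumptions for the example, not details of the papers below; any black-box regressor could be swapped in without affecting the coverage guarantee.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=1.0, size=500)

# Split into a proper training set and a calibration (holdout) set.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Fit any black-box regressor on the training half.
model = LinearRegression().fit(X_train, y_train)

# Out-of-sample absolute residuals on the calibration half.
residuals = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile: the ceil((n+1)(1-alpha))/n empirical quantile
# of the calibration residuals, for miscoverage level alpha.
alpha = 0.1
n = len(residuals)
q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
q_hat = np.quantile(residuals, q_level, method="higher")

# Prediction interval at a new point: [f(x) - q_hat, f(x) + q_hat].
x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q_hat:.2f}, {pred + q_hat:.2f}]")
```

Under exchangeability of the calibration and test points, intervals of this form cover the true response with probability at least 1 - alpha, regardless of how well the fitted model approximates the truth; a poor model simply yields wider intervals.
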


Distribution-free uncertainty quantification (conformal, calibration) (package 1) (package 2) (tutorial)


  • Top-label calibration
    C. Gupta, A. Ramdas       arxiv  

  • Distribution-free calibration guarantees for histogram binning without sample splitting
    C. Gupta, A. Ramdas       ICML, 2021   arxiv   proc

  • Distribution-free uncertainty quantification for classification under label shift
    A. Podkopaev, A. Ramdas       UAI, 2021   arxiv  

  • Distribution-free binary classification: prediction sets, confidence intervals and calibration
    C. Gupta, A. Podkopaev, A. Ramdas       NeurIPS, 2020   arxiv   proc   talk

  • Nested conformal prediction and quantile out-of-bag ensemble methods
    C. Gupta, A. Kuchibhotla, A. Ramdas       Pattern Recognition, Special Issue on Conformal Prediction   arxiv   code   talk

  • Predictive inference with the jackknife+
    R. Barber, E. Candes, A. Ramdas, R. Tibshirani       Annals of Statistics, 2020   arxiv   code   proc

  • The limits of distribution-free conditional predictive inference
    R. Barber, E. Candes, A. Ramdas, R. Tibshirani       Information and Inference, 2020   arxiv   proc

  • Conformal prediction under covariate shift
    R. Tibshirani, R. Barber, E. Candes, A. Ramdas       NeurIPS, 2019   arxiv   proc

  • Distribution-free prediction sets with random effects
    R. Dunn, L. Wasserman, A. Ramdas       (JASA, revision)   arxiv