Aaditya Ramdas

 

Aaditya Ramdas (PhD, 2015)
Assistant Professor
Department of Statistics and Data Science (75%)
Machine Learning Department (25%)
Carnegie Mellon University

Visiting academic, Amazon (20%).

132H Baker Hall
aramdas AT {empty or stat or cs} DOT cmu FULLSTOP edu
[http://www.stat.cmu.edu/~aramdas]


I work on “practical theory”, meaning that the vast majority of my papers are about designing theoretically principled algorithms that directly solve practical problems, usually based on simple, aesthetically elegant (in my opinion) ideas. A theoretician's goal is not to prove theorems, just as a writer's goal is not to write sentences. My goals are to improve my own (and eventually the field's) understanding of important problems, to design creative algorithms for unsolved questions and figure out when and why they work (or don't), and often simply to ask an intriguing question that has not yet been asked.

I'm co-editing a special issue on Conformal Prediction, Probabilistic Calibration and Distribution-Free Uncertainty Quantification, with a submission deadline of Jan 7, 2024. Please consider submitting some of your novel work on the topic.

I'm co-editing (with P. Grunwald) a special issue on Game-theoretic statistics and safe, anytime-valid inference; the first round of reviews is complete, and the issue should finally appear in early 2024.

Group

Courses, Workshops, Tutorials, Software, Talks, etc.

Biography

Aaditya Ramdas (PhD, 2015) is an assistant professor at Carnegie Mellon University, in the Departments of Statistics and Machine Learning. He was a postdoc at UC Berkeley (2015–2018) mentored by Michael Jordan and Martin Wainwright, and obtained his PhD at CMU (2010–2015) under Aarti Singh and Larry Wasserman, receiving the Umesh K. Gavaskar Memorial Thesis Award. His undergraduate degree was in Computer Science from IIT Bombay (2005–2009).

Aaditya received the 2024 Sloan Fellowship in mathematics and the IMS Peter Gavin Hall Early Career Prize (2023), and was an inaugural recipient of the COPSS Emerging Leader Award (2021) and a recipient of the Bernoulli New Researcher Award (2021). His work is supported by an NSF CAREER Award (2020), an Adobe Faculty Research Award (2019), and a Google Research Scholar Award (2022). He was a CUSO lecturer in 2022, a Lunteren lecturer in 2023, and a keynote speaker at AISTATS 2024.

Aaditya's research is in mathematical statistics and learning, with an eye towards designing algorithms that have strong theoretical guarantees and also work well in practice. His main research interests include selective and simultaneous inference (interactive, structured, online, post-hoc control of false decision rates, etc.), game-theoretic statistics (sequential uncertainty quantification, confidence sequences, always-valid p-values, safe anytime-valid inference, e-processes, supermartingales, etc.), and distribution-free black-box predictive inference (conformal prediction, post-hoc calibration, etc.). His areas of applied interest include privacy, neuroscience, genetics, and auditing (elections, real estate, financial, fairness), and his group's work has received multiple best paper awards.

He is one of the organizers of the amazing and diverse StatML Group at CMU. Outside of work, some easy topics for conversation include travel/outdoors (hiking, scuba, etc.), trash-free living, completing the Ironman triathlon and long-distance bicycle rides.

Curriculum Vitae

Preprints (under review or revision)

  1. Combining evidence across filtrations (with Y.J. Choe).       arXiv | TLDR

  2. Distribution-uniform strong laws of large numbers (with I. Waudby-Smith, M. Larsson).       arXiv | TLDR

  3. Positive semidefinite supermartingales and randomized matrix concentration inequalities (with H. Wang).       arXiv | TLDR

  4. Merging uncertainty sets via majority vote (with M. Gasparin).       arXiv | TLDR

  5. Sequential Monte-Carlo testing by betting (with L. Fischer).       arXiv | TLDR

  6. Time-uniform confidence spheres for means of random vectors (with B. Chugg, H. Wang).       arXiv | TLDR

  7. Distribution-uniform anytime-valid inference (with I. Waudby-Smith).       arXiv | TLDR

  8. Time-uniform self-normalized concentration for vector-valued processes (with J. Whitehouse, S. Wu).       arXiv | TLDR

  9. Anytime-valid t-tests and confidence sequences for Gaussian means with unknown variance (with H. Wang).       arXiv | TLDR

  10. On the near-optimality of betting confidence sets for bounded means (with S. Shekhar).       arXiv | TLDR

  11. Scalable causal structure learning via amortized conditional independence testing (with J. Leiner, B. Manzo, W. Tansey).       arXiv | code | TLDR

  12. Reducing sequential change detection to sequential estimation (with S. Shekhar).       arXiv | TLDR

  13. Total variation floodgate for variable importance inference in classification (with W. Wang, L. Janson, L. Lei).       arXiv | TLDR

  14. More powerful multiple testing under dependence via randomization (with Z. Xu).       arXiv | TLDR

  15. Randomized and exchangeable improvements of Markov's, Chebyshev's and Chernoff's inequalities (with T. Manole).       arXiv

  16. The extended Ville's inequality for nonintegrable nonnegative supermartingales (with H. Wang).       arXiv | TLDR

  17. A sequential test for log-concavity (with A. Gangrade, A. Rinaldo).       arXiv

  18. Admissible anytime-valid sequential inference must rely on nonnegative martingales (with J. Ruf, M. Larsson, W. Koolen).       arXiv

  19. Time-uniform central limit theory and asymptotic confidence sequences (with I. Waudby-Smith, D. Arbour, R. Sinha, E. H. Kennedy).       arXiv | code

  20. Post-selection inference for e-value based confidence intervals (with Z. Xu, R. Wang).       arXiv | talk | slides | TLDR

  21. Interactive identification of individuals with positive treatment effect while controlling false discoveries (with B. Duan, L. Wasserman).       arXiv

  22. Multiple testing under negative dependence (with Z. Chi, R. Wang).       arXiv

  23. Universal inference meets random projections: a scalable test for log-concavity (with R. Dunn, A. Gangrade, L. Wasserman).       arXiv | code | TLDR

  24. On the existence of powerful p-values and e-values for composite hypotheses (with Z. Zhang, R. Wang).       arXiv

Published (or accepted) papers

About half of the papers below are journal papers; the other half are full-length peer-reviewed papers with proceedings in top-tier AI/ML venues, where conference publications are the norm.

  1. De Finetti's Theorem and related results for infinite weighted exchangeable sequences (with R. Barber, E. Candes, R. Tibshirani), Bernoulli, 2024       arXiv

  2. Semiparametric efficient inference in adaptive experiments (with T. Cook, A. Mishler), Conference on Causal Learning and Reasoning (CLeaR), 2024.       arXiv | TLDR

  3. Anytime-valid off-policy inference for contextual bandits (with I. Waudby-Smith, L. Wu, N. Karampatziakis, P. Mineiro), ACM/IMS J of Data Science, 2024.       arXiv | proc

  4. Testing exchangeability by pairwise betting (with A. Saha), Intl. Conf. on AI and Statistics (AISTATS), 2024 (oral talk).       arXiv | TLDR

  5. Graph fission and cross-validation (with J. Leiner), Intl. Conf. on AI and Statistics (AISTATS), 2024       arXiv | TLDR

  6. Online multiple testing with e-values (with Z. Xu), Intl. Conf. on AI and Statistics (AISTATS), 2024.       arXiv | TLDR

  7. Deep anytime-valid hypothesis testing (with T. Pandeva, P. Forré, S. Shekhar), Intl. Conf. on AI and Statistics (AISTATS), 2024.       arXiv

  8. Differentially private conditional independence testing (with I. Kalemaj, S. Kasiviswanathan), Intl. Conf. on AI and Statistics (AISTATS), 2024.       arXiv | TLDR

  9. E-detectors: a nonparametric framework for online changepoint detection (with J. Shin, A. Rinaldo), New England J of Stat. and Data Science, 2023.       arXiv | proc

  10. A unified recipe for deriving (time-uniform) PAC-Bayes bounds (with B. Chugg, H. Wang), J of ML Research, 2023.       arXiv | proc

  11. A permutation-free kernel independence test (with S. Shekhar, I. Kim), J of ML Research, 2023.       arXiv | code | proc | TLDR

  12. Data fission: splitting a single data point (with J. Leiner, B. Duan, L. Wasserman), J of American Stat Assoc, 2023 arXiv | proc | poster | slides | code | talk | TLDR

  13. A composite generalization of Ville's martingale theorem using e-processes (with J. Ruf, M. Larsson, W. Koolen), Elec. J. of Prob., 2023 arXiv | proc | TLDR

  14. Online multiple hypothesis testing (with D. Robertson, J. Wason), Statistical Science, 2023 arXiv | proc

  15. Nonparametric two-sample testing by betting (with S. Shekhar), IEEE Trans. on Info. Theory, 2023       arXiv | proc | code | slides | TLDR

  16. E-values as unnormalized weights in multiple testing (with N. Ignatiadis, R. Wang), Biometrika, 2023 arXiv | proc

  17. Comparing sequential forecasters (with Y.J. Choe), Operations Research, 2023 arXiv | proc | code | talk | poster | slides (Citadel, Research Showcase Runner-up)

  18. Game-theoretic statistics and safe anytime-valid inference (with P. Grunwald, V. Vovk, G. Shafer), Statistical Science, 2023 arXiv | proc

  19. Adaptive privacy composition for accuracy-first mechanisms (with R. Rogers, G. Samorodnitsky, S. Wu), Conf. on Neural Information Processing Systems (NeurIPS), 2023 arXiv | TLDR

  20. Sequential predictive two-sample and independence testing (with A. Podkopaev), Conf. on Neural Information Processing Systems (NeurIPS), 2023 arXiv

  21. Auditing fairness by betting (with B. Chugg, S. Cortes-Gomez, B. Wilder), Conf. on Neural Information Processing Systems (NeurIPS), 2023 arXiv | code

  22. Counterfactually comparing abstaining classifiers (with Y. J. Choe, A. Gangrade), Conf. on Neural Information Processing Systems (NeurIPS), 2023 arXiv

  23. An efficient doubly-robust test for the kernel treatment effect (with D. Martinez-Taboada, E. Kennedy), Conf. on Neural Information Processing Systems (NeurIPS), 2023 arXiv

  24. On the sublinear regret of GP-UCB (with J. Whitehouse, S. Wu), Conf. on Neural Information Processing Systems (NeurIPS), 2023 arXiv | TLDR

  25. Martingale methods for sequential estimation of convex functionals and divergences (with T. Manole), IEEE Trans. on Information Theory, 2023 arXiv | article | talk (Student Research Award, Stat Soc Canada) | TLDR

  26. Estimating means of bounded random variables by betting (with I. Waudby-Smith), J. of the Royal Statistical Society, Series B, 2023 arXiv (Discussion paper) | proc | code

  27. Sequential change detection via backward confidence sequences (with S. Shekhar). Intl. Conf. on Machine Learning (ICML), 2023   arXiv | code | slides | TLDR

  28. Fully adaptive composition in differential privacy (with J. Whitehouse, R. Rogers, Z. S. Wu), Intl. Conf. on Machine Learning (ICML), 2023 arXiv

  29. Online Platt scaling with calibeating (with C. Gupta), Intl. Conf. on Machine Learning (ICML), 2023 arXiv

  30. A nonparametric extension of randomized response for locally private confidence sets (with I. Waudby-Smith, Z. S. Wu), Intl. Conf. on Machine Learning (ICML), 2023 arXiv | code (oral talk)

  31. Sequential kernelized independence testing (with A. Podkopaev, P. Bloebaum, S. Kasiviswanathan), Intl. Conf. on Machine Learning (ICML), 2023 arXiv

  32. Risk-limiting financial audits via weighted sampling without replacement (with S. Shekhar, Z. Xu, Z. Lipton, P. Liang), Intl. Conf. Uncertainty in AI (UAI), 2023 arXiv | TLDR

  33. Huber-robust confidence sequences (with H. Wang), Intl. Conf. on AI and Statistics (AISTATS), 2023, arXiv (full oral talk) | TLDR

  34. Catoni-style confidence sequences for heavy-tailed mean estimation (with H. Wang), Stochastic Processes and Applications, 2023 arXiv | article | code | TLDR

  35. Anytime-valid confidence sequences in an enterprise A/B testing platform (with A. Maharaj, R. Sinha, D. Arbour, I. Waudby-Smith, S. Liu, M. Sinha, R. Addanki, M. Garg, V. Swaminathan), ACM Web Conference (WWW), 2023 arXiv

  36. Dimension-agnostic inference using cross U-statistics (with I. Kim), Bernoulli, 2023 arXiv | proc | TLDR

  37. On the power of conditional independence testing under model-X (with E. Katsevich), Electronic J. Stat, 2023 arXiv | article

  38. Permutation tests using arbitrary permutation distributions (with R. Barber, E. Candes, R. Tibshirani), Sankhya A, 2023 arXiv | article

  39. Conformal prediction beyond exchangeability (with R. Barber, E. Candes, R. Tibshirani), Annals of Stat., 2023 arXiv | article

  40. Faster online calibration without randomization: interval forecasts and the power of two choices (with C. Gupta), Conf. on Learning Theory (COLT), 2022 arXiv | article

  41. Top-label calibration and multiclass-to-binary reductions (with C. Gupta), Intl. Conf. on Learning Representations, 2022 arXiv | article

  42. Gaussian universal likelihood ratio testing (with R. Dunn, S. Balakrishnan, L. Wasserman), Biometrika, 2022 arXiv | article | TLDR

  43. A permutation-free kernel two sample test (with S. Shekhar, I. Kim), Conf. on Neural Information Processing Systems (NeurIPS), 2022 arXiv | article | code | (oral talk) | TLDR

  44. Testing exchangeability: fork-convexity, supermartingales, and e-processes (with J. Ruf, M. Larsson, W. Koolen). Intl J. of Approximate Reasoning, 2022 arXiv | article

  45. Tracking the risk of a deployed model and detecting harmful distribution shifts (with A. Podkopaev). Intl. Conf. on Learning Representations (ICLR), 2022 arXiv | article

  46. Brownian noise reduction: maximizing privacy subject to accuracy constraints (with J. Whitehouse, Z.S. Wu, R. Rogers), Conf. on Neural Information Processing Systems (NeurIPS), 2022 arXiv | article

  47. Sequential estimation of quantiles with applications to A/B-testing and best-arm identification (with S. Howard), Bernoulli, 2022 arXiv | article | code

  48. Brainprints: identifying individuals from magnetoencephalograms (with S. Wu, L. Wehbe), Nature Communications Biology, 2022 bioRxiv | article

  49. Interactive rank testing by betting (with B. Duan, L. Wasserman), Conf. on Causal Learning and Reasoning (CLeaR), 2022 arXiv | article (oral talk)

  50. Large-scale simultaneous inference under dependence (with J. Tian, X. Chen, E. Katsevich, J. Goeman), Scandinavian J of Stat., 2022 arXiv | article

  51. False discovery rate control with e-values (with R. Wang), J. of the Royal Stat. Soc., Series B, 2022 arXiv | article

  52. Nested conformal prediction and quantile out-of-bag ensemble methods (with C. Gupta, A. Kuchibhotla), Pattern Recognition, 2022 arXiv | article | code

  53. Distribution-free prediction sets for two-layer hierarchical models (with R. Dunn, L. Wasserman), J of American Stat. Assoc., 2022 arXiv | article | code | TLDR

  54. Fast and powerful conditional randomization testing via distillation (with M. Liu, E. Katsevich, L. Janson), Biometrika, 2021 arXiv | article | code

  55. Uncertainty quantification using martingales for misspecified Gaussian processes (with W. Neiswanger), Algorithmic Learning Theory (ALT), 2021 arXiv | article | code | talk

  56. RiLACS: Risk-limiting audits via confidence sequences (with I. Waudby-Smith, P. Stark), Intl. Conf. for Electronic Voting (EVoteID), 2021 arXiv | article | code (Best Paper award)

  57. Predictive inference with the jackknife+ (with R. Barber, E. Candes, R. Tibshirani), Annals of Stat., 2021 arXiv | article | code

  58. Path length bounds for gradient descent and flow (with C. Gupta, S. Balakrishnan), J. of Machine Learning Research, 2021 arXiv | article | blog

  59. Nonparametric iterated-logarithm extensions of the sequential generalized likelihood ratio test (with J. Shin, A. Rinaldo), IEEE J. on Selected Areas in Info. Theory, 2021 arXiv | article

  60. Time-uniform, nonparametric, nonasymptotic confidence sequences (with S. Howard, J. Sekhon, J. McAuliffe), The Annals of Stat., 2021 arXiv | article | code | tutorial

  61. Off-policy confidence sequences (with N. Karampatziakis, P. Mineiro), Intl. Conf. on Machine Learning (ICML), 2021 arXiv | article

  62. Best arm identification under additive transfer bandits (with O. Neopane, A. Singh), Asilomar Conf. on Signals, Systems and Computers, 2021 arXiv | article (Best Student Paper award)

  63. On the bias, risk and consistency of sample means in multi-armed bandits (with J. Shin, A. Rinaldo), SIAM J. on the Math. of Data Science, 2021 arXiv | article | talk

  64. Dynamic algorithms for online multiple testing (with Z. Xu), Conf. on Math. and Scientific Machine Learning, 2021 arXiv | article | talk | slides | code | TLDR

  65. Online control of the familywise error rate (with J. Tian), Statistical Methods in Medical Research, 2021 arXiv | article

  66. Asynchronous online testing of multiple hypotheses (with T. Zrnic, M. Jordan), J. of Machine Learning Research, 2021 arXiv | article | code | blog

  67. Classification accuracy as a proxy for two sample testing (with I. Kim, A. Singh, L. Wasserman), Annals of Stat., 2021 arXiv | article | (JSM Stat Learning Student Paper Award) | TLDR

  68. Distribution-free calibration guarantees for histogram binning without sample splitting (with C. Gupta), Intl. Conf. on Machine Learning, 2021 arXiv | article

  69. Distribution-free uncertainty quantification for classification under label shift (with A. Podkopaev), Conf. on Uncertainty in AI, 2021 arXiv | article

  70. Distribution-free binary classification: prediction sets, confidence intervals and calibration (with C. Gupta, A. Podkopaev), Conf. on Neural Information Processing Systems (NeurIPS), 2020 arXiv | article (spotlight talk)

  71. The limits of distribution-free conditional predictive inference (with R. Barber, E. Candes, R. Tibshirani), Information and Inference, 2020 arXiv | article

  72. Analyzing student strategies in blended courses using clickstream data (with N. Akpinar, U. Acar), Educational Data Mining, 2020 arXiv | article | talk (oral talk)

  73. The power of batching in multiple hypothesis testing (with T. Zrnic, D. Jiang, M. Jordan), Intl. Conf. on AI and Statistics, 2020 arXiv | article | talk

  74. Online control of the false coverage rate and false sign rate (with A. Weinstein), Intl. Conf. on Machine Learning (ICML), 2020 arXiv | article

  75. Confidence sequences for sampling without replacement (with I. Waudby-Smith), Conf. on Neural Information Processing Systems (NeurIPS), 2020 arXiv | article | code (spotlight talk)

  76. Universal inference (with L. Wasserman, S. Balakrishnan), Proc. of the National Academy of Sciences, 2020 arXiv | article | talk

  77. A unified framework for bandit multiple testing (with Z. Xu, R. Wang), Conf. on Neural Information Processing Systems (NeurIPS), 2020 arXiv | article | talk | slides | code | TLDR

  78. Simultaneous high-probability bounds on the FDP in structured, regression and online settings (with E. Katsevich), Annals of Stat., 2020 arXiv | article | code

  79. Time-uniform Chernoff bounds via nonnegative supermartingales (with S. Howard, J. Sekhon, J. McAuliffe), Prob. Surveys, 2020 arXiv | article | talk

  80. STAR: A general interactive framework for FDR control under structural constraints (with L. Lei, W. Fithian), Biometrika, 2020 arXiv | article | poster | code

  81. Familywise error rate control by interactive unmasking (with B. Duan, L. Wasserman), Intl. Conf. on Machine Learning (ICML), 2020 arXiv | article | code

  82. Interactive martingale tests for the global null (with B. Duan, S. Balakrishnan, L. Wasserman), Electronic J. of Stat., 2020 arXiv | article | code

  83. On conditional versus marginal bias in multi-armed bandits (with J. Shin, A. Rinaldo), Intl. Conf. on Machine Learning (ICML), 2020 arXiv | article

  84. Are sample means in multi-armed bandits positively or negatively biased? (with J. Shin, A. Rinaldo), Conf. on Neural Information Processing Systems (NeurIPS), 2019 arXiv | article | poster

  85. A higher order Kolmogorov-Smirnov test (with V. Sadhanala, Y. Wang, R. Tibshirani), Intl. Conf. on AI and Statistics, 2019 arXiv | article

  86. ADDIS: an adaptive discarding algorithm for online FDR control with conservative nulls (with J. Tian), Conf. on Neural Information Processing Systems (NeurIPS), 2019 arXiv | code | article

  87. A unified treatment of multiple testing with prior knowledge using the p-filter (with R. F. Barber, M. Wainwright, M. Jordan), Annals of Stat., 2019 arXiv | article | code

  88. DAGGER: A sequential algorithm for FDR control on DAGs (with J. Chen, M. Wainwright, M. Jordan), Biometrika, 2019 arXiv | article | code

  89. Conformal prediction under covariate shift (with R. Tibshirani, R. Barber, E. Candes), Conf. on Neural Information Processing Systems (NeurIPS), 2019 arXiv | article | poster

  90. Optimal rates and tradeoffs in multiple testing (with M. Rabinovich, M. Wainwright, M. Jordan), Statistica Sinica, 2019 arXiv | article | poster

  91. Function-specific mixing times and concentration away from equilibrium (with M. Rabinovich, M. Wainwright, M. Jordan), Bayesian Analysis, 2019 arXiv | article | poster

  92. Decoding from pooled data (II): sharp information-theoretic bounds (with A. El-Alaoui, F. Krzakala, L. Zdeborova, M. Jordan), SIAM J. on Math. of Data Science, 2019 arXiv | article

  93. Decoding from pooled data (I): phase transitions of message passing (with A. El-Alaoui, F. Krzakala, L. Zdeborova, M. Jordan), IEEE Trans. on Info. Theory, 2018 arXiv | article

  94. On the power of online thinning in reducing discrepancy (with R. Dwivedi, O. N. Feldheim, O. Gurel-Gurevich), Prob. Theory and Related Fields, 2018 arXiv | article | poster

  95. On kernel methods for covariates that are rankings (with H. Mania, M. Wainwright, M. Jordan, B. Recht), Electronic J. of Stat., 2018 arXiv | article

  96. SAFFRON: an adaptive algorithm for online FDR control (with T. Zrnic, M. Wainwright, M. Jordan), Intl. Conf. on Machine Learning (ICML), 2018 arXiv | article | code (full oral talk)

  97. Online control of the false discovery rate with decaying memory (with F. Yang, M. Wainwright, M. Jordan), Conf. on Neural Information Processing Systems (NeurIPS), 2017 arXiv | article | poster | talk (from 44:00) (full oral talk)

  98. MAB-FDR: Multi (A)rmed/(B)andit testing with online FDR control (with F. Yang, K. Jamieson, M. Wainwright), Conf. on Neural Information Processing Systems (NeurIPS), 2017 arXiv | article | code (spotlight talk)

  99. QuTE: decentralized FDR control on sensor networks (with J. Chen, M. Wainwright, M. Jordan), IEEE Conf. on Decision and Control, 2017 arXiv | article | code | poster

  100. Iterative methods for solving factorized linear systems (with A. Ma, D. Needell), SIAM J. on Matrix Analysis and Applications, 2017 arXiv | article

  101. Rows vs. columns: randomized Kaczmarz or Gauss-Seidel for ridge regression (with A. Hefny, D. Needell), SIAM J. on Scientific Computing, 2017 arXiv | article

  102. On Wasserstein two sample testing and related families of nonparametric tests (with N. Garcia, M. Cuturi), Entropy, 2017 arXiv | article

  103. Generative models and model criticism via optimized maximum mean discrepancy (with D. Sutherland, H. Tung, H. Strathmann, S. De, A. Smola, A. Gretton), Intl. Conf. on Learning Representations, 2017 arXiv | article | poster | code

  104. Minimax lower bounds for linear independence testing (with D. Isenberg, A. Singh, L. Wasserman), IEEE Intl. Symp. on Information Theory, 2016 arXiv | article

  105. p-filter: multi-layer FDR control for grouped hypotheses (with R. F. Barber), J. of the Royal Stat. Society, Series B, 2016 arXiv | article | code | poster

  106. Sequential nonparametric testing with the law of the iterated logarithm (with A. Balsubramani), Conf. on Uncertainty in AI, 2016 arXiv | article | errata

  107. Asymptotic behavior of Lq-based Laplacian regularization in semi-supervised learning (with A. El-Alaoui, X. Cheng, M. Wainwright, M. Jordan), Conf. on Learning Theory, 2016 arXiv | article

  108. Regularized brain reading with shrinkage and smoothing (with L. Wehbe, R. Steorts, C. Shalizi), Annals of Applied Stat., 2015 arXiv | article

  109. On the high-dimensional power of a linear-time two sample test under mean-shift alternatives (with S. Reddi, A. Singh, B. Poczos, L. Wasserman), Intl. Conf. on AI and Statistics, 2015 arXiv | article | errata

  110. On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions (with S. Reddi*, B. Poczos, A. Singh, L. Wasserman), AAAI Conf. on Artificial Intelligence, 2015 arXiv | article | supp

  111. Fast two-sample testing with analytic representations of probability measures (with K. Chwialkowski, D. Sejdinovic, A. Gretton), Conf. on Neural Information Processing Systems (NeurIPS), 2015 arXiv | article | code

  112. Nonparametric independence testing for small sample sizes (with L. Wehbe), Intl. Joint Conf. on AI, 2015 arXiv | article (oral talk)

  113. Convergence properties of the randomized extended Gauss-Seidel and Kaczmarz methods (with A. Ma, D. Needell), SIAM J. on Matrix Analysis and Applications, 2015 arXiv | article | code

  114. Fast & flexible ADMM algorithms for trend filtering (with R. Tibshirani), J. of Computational and Graphical Statistics, 2015 arXiv | article | talk | code

  115. Towards a deeper geometric, analytic and algorithmic understanding of margins (with J. Pena), Opt. Methods and Software, 2015 arXiv | article

  116. Margins, kernels and non-linear smoothed perceptrons (with J. Pena), Intl. Conf. on Machine Learning (ICML), 2014 arXiv | article | poster | talk (oral talk)

  117. Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses (with L. Wehbe, B. Murphy, P. Talukdar, A. Fyshe, T. Mitchell), PLoS ONE, 2014 website | article

  118. An analysis of active learning with uniform feature noise (with A. Singh, L. Wasserman, B. Poczos), Intl. Conf. on AI and Statistics, 2014 arXiv | article | poster | talk (oral talk)

  119. Algorithmic connections between active learning and stochastic convex optimization (with A. Singh), Conf. on Algorithmic Learning Theory (ALT), 2013 arXiv | article | poster

  120. Optimal rates for stochastic convex optimization under Tsybakov's noise condition (with A. Singh), Intl. Conf. on Machine Learning (ICML), 2013 arXiv | article | poster | talk (oral talk)

Miscellaneous

  1. Adaptivity & computation-statistics tradeoffs for kernel & distance based high-dimensional two sample testing (with S. Reddi, B. Poczos, A. Singh, L. Wasserman).       arXiv | poster

  2. Algorithms for graph similarity and subgraph matching (with D. Koutra, A. Parikh, J. Xiang).       report