I am a fourth-year Ph.D. candidate in the Department of Statistics and Data Science at Carnegie Mellon University (CMU), where I am very fortunate to be advised by Sivaraman Balakrishnan and Larry Wasserman. Before coming to CMU, I received a Bachelor of Science in Mathematics from McGill University, where I am grateful to have been advised by Abbas Khalili. At CMU, I am a member of the Statistical Machine Learning Theory group and the Statistical Methods for the Physical Sciences group.

I am broadly interested in nonparametric statistics and statistical machine learning. My current research falls into the following areas.

  • Statistical Optimal Transport: Statistical inference for Wasserstein distances (recalled below this list), optimal transport maps, and variants thereof.
  • Latent Variable Models: Model selection and analysis of parameter estimation in latent variable models, particularly finite mixture models and mixture-of-experts models.
  • Sequential Analysis: Sequential two-sample testing; nonparametric sequential inference, particularly for problems related to the two areas above.
  • Applications to Particle Physics: Applications of the optimal transport framework to background modeling in searches for new physical phenomena at the Large Hadron Collider. My work is motivated, in particular, by the search for double Higgs boson production.
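
For context, the Wasserstein distance mentioned above has a standard textbook definition, recalled here for convenience (it is not specific to any of the papers below). For probability measures P and Q on R^d with finite p-th moments, writing Π(P, Q) for the set of couplings of P and Q,

\[
  W_p(P, Q) \;=\; \left( \inf_{\pi \in \Pi(P, Q)} \int \lVert x - y \rVert^p \, \mathrm{d}\pi(x, y) \right)^{1/p}.
\]

An optimal transport map, when it exists, is a map T with T_{\#}P = Q that achieves this infimum through the coupling (X, T(X)).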

My papers can be found below or on my Google Scholar page. Code for all of my research is publicly available on GitHub.


Papers

Plugin Estimation of Smooth Optimal Transport Maps. Preprint.
Manole, T., Balakrishnan, S., Niles-Weed, J., Wasserman, L.

Sharp Convergence Rates for Empirical Optimal Transport with Smooth Costs. Preprint.
Manole, T., Niles-Weed, J.

Sequential Estimation of Convex Functionals and Divergences. Preprint.
Manole, T., Ramdas, A.

Uniform Convergence Rates for Maximum Likelihood Estimation under Two-Component Gaussian Mixture Models. Preprint.
Manole*, T., Ho*, N.

Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models. To appear in Proceedings of the 39th International Conference on Machine Learning (ICML), 2022.
Manole, T., Ho, N.

Minimax Confidence Intervals for the Sliced Wasserstein Distance. Electronic Journal of Statistics 16(1), 2252–2345, 2022.
Manole, T., Balakrishnan, S., Wasserman, L.

Estimating the Number of Components in Finite Mixture Models via the Group-Sort-Fuse Procedure. The Annals of Statistics 49(6), 3043–3069, 2021.
Manole, T., Khalili, A.

(* Equal Contribution)


Selected Talk Slides

Transfer Learning for Data-Driven Background Modelling: Optimal Transport vs. Classifier Extrapolation. Invited.
PhyStat–Systematics Workshop, 2021.

Estimating the Number of Components in Finite Mixture Models via the Group-Sort-Fuse Procedure. Contributed.
Joint Statistical Meetings, 2021. (Winner of an ASA Statistical Learning and Data Science Section Student Paper Award.)

Uniform Convergence Rates for Maximum Likelihood Estimation under Two-Component Gaussian Mixture Models. Invited.
International Conference on Econometrics and Statistics (EcoSta), 2021.

Sequential Estimation of Convex Divergences. Contributed.
Annual Meeting of the Statistical Society of Canada, 2021. (Winner of an SSC General Student Research Presentation Award.)

Reverse Martingales: How They Arise and How to Use Them for Sequential Analysis. Guest Lecture.
Course on Game-Theoretic Statistical Inference, co-taught by Glenn Shafer, Ruodu Wang, and Aaditya Ramdas, 2021.

Minimax Confidence Intervals for the Sliced Wasserstein Distance. Spotlight Presentation.
Optimal Transport and Machine Learning Workshop, Neural Information Processing Systems (NeurIPS), 2019.


Contact

Email: tmanole [at] andrew [dot] cmu [dot] edu

Tudor Manole
Baker Hall
Department of Statistics and Data Science
Carnegie Mellon University
Pittsburgh, PA 15213