Dimension reduction through sparsity is a very efficient way to regularize the inverse problems that occur when we want to reconstruct maps. We briefly describe the concept of sparse representations, and we show how to regularize inverse problems such as blue/red galaxy separation or Cosmic Microwave Background recovery. We present the first CMB map reconstructed from both the WMAP and Planck PR2 datasets. We show that the Galactic barrier can be broken down, so that, for the first time, we have a full-sky, high-quality CMB map. In addition, as a result of our novel approach, this high-quality map is free of any significant thermal SZ contamination.
More than a decade ago two physicists, Wayne Hu and Takemi Okamoto, invented a new estimator for measuring the dark matter distortion imprinted on our observations of the cosmic microwave background. Their estimator, called the quadratic estimator, quickly became the state-of-the-art tool for detecting, measuring, and mapping dark matter. From a spatial statistics perspective this estimator has some remarkable properties. In this talk I will present an analysis of the quadratic estimator, both in the context of cosmology and in the larger context of estimating general random field non-stationarity. I will discuss a property of random fields we call local invariant non-stationarity, which appears to be a sufficient condition for extending the remarkable properties of Hu and Okamoto's quadratic estimator to more general forms of non-stationarity.
Joint work with Joe Guinness at NCSU.
Explicitly specifying a likelihood function is becoming increasingly difficult for many problems in astronomy. Astronomers often specify a simpler approximate likelihood - leaving out important aspects of a more realistic model. Estimation of a stellar initial mass function (IMF) is one such example. The stellar IMF is the mass distribution of stars initially formed in a particular volume of space, but is typically not directly observable due to stellar evolution and other disruptions of a cluster. Several difficulties associated with specifying a realistic likelihood function for the stellar IMF will be addressed in this talk. Approximate Bayesian computation (ABC) provides a framework for performing inference in cases where the likelihood is not available. I will introduce ABC, and demonstrate its merit through a simplified IMF model where a likelihood function is specified and exact posteriors are available. A new formation model for stellar clusters using a preferential attachment framework will be presented. The proposed formation model, along with ABC, provides a new mode of analysis of the IMF.
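The ABC idea can be made concrete in a few lines. The sketch below is a toy rejection-ABC example (an illustration of the framework only, not the IMF model from the talk): the data are draws from a normal distribution with unknown mean, we pretend the likelihood is unavailable, and we compare simulations to the observation through a summary statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: observed data come from a normal distribution with unknown
# mean; the sample mean serves as the summary statistic.
observed = rng.normal(loc=2.0, scale=1.0, size=100)
obs_summary = observed.mean()

def simulate(mu):
    """Forward model: generate a synthetic dataset for parameter mu."""
    return rng.normal(loc=mu, scale=1.0, size=100)

def abc_rejection(n_draws, epsilon):
    """Keep prior draws whose simulated summary lies within epsilon."""
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-10.0, 10.0)        # draw from a flat prior
        if abs(simulate(mu).mean() - obs_summary) < epsilon:
            accepted.append(mu)
    return np.array(accepted)

posterior_draws = abc_rejection(n_draws=20000, epsilon=0.2)
print(posterior_draws.mean())  # close to the true mean of 2.0
```

Shrinking epsilon trades acceptance rate for fidelity to the true posterior; the talk's PMC variant adapts the proposal distribution instead of drawing blindly from the prior.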
Cosmological information in large-scale structure is usually extracted via N-point function analysis, of which only the 2- and 3-point functions are typically applied to the data, leaving potentially vast amounts of information untapped. I will discuss an alternative approach of extracting all of the information at once, via a nonlinear optimal quadratic estimator approach. I will present the theory behind this approach and some of the necessary tools we developed to solve it, such as fast optimizers and fast nonlinear structure formation modeling using the FastPM simulation package. The end product is an optimal quadratic estimator of the initial power spectrum, including its Fisher matrix, which serves as both a window and a covariance matrix. I will present some first test applications of the method.
I'll give a status update on CHIME, a new interferometric radio telescope in British Columbia which will go online in fall 2016. A pathfinder instrument with 1/8 the full collecting area is already taking data. I'll show some early results from pathfinder CHIME, forecasts for full CHIME, and speculate on future prospects for the field. Recent advances in commodity GPU and network hardware have made it possible to build large interferometers with unprecedented statistical power. Taking full advantage of this increase in sensitivity will require solving new computational and statistical problems which will be interesting to both astronomers and computer scientists.
Focusing on very high-dimensional parameter spaces, I will compare sampling methods to optimization algorithms. Sampling allows one to obtain the full posterior distribution, while optimization methods can be used to find the maximum likelihood or maximum posterior point. Analytical approximations to the posterior distribution can also be obtained by using an optimization algorithm. I will focus on the Hamiltonian Monte Carlo algorithm for sampling high-dimensional parameter spaces and the L-BFGS algorithm for optimization. I will discuss their tuning issues and compare their computational costs. These algorithms are discussed in the context of reconstructing the initial conditions (linear density perturbations) that seed the late-time Universe. I will also present publicly available implementations of the algorithms.
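As a minimal illustration of the optimization side of this comparison, the sketch below finds the maximum posterior point of a toy quadratic posterior (not the initial-conditions problem from the talk) with SciPy's L-BFGS, supplying the gradient explicitly; the log posterior and its gradient are the same ingredients that HMC also requires.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in: the posterior is an isotropic Gaussian in 50 dimensions,
# so the MAP point is known and we can check that L-BFGS finds it. Real
# applications replace neg_log_post/grad with the physical model.
ndim = 50
rng = np.random.default_rng(1)
mu = rng.normal(size=ndim)            # known posterior mean (the answer)

def neg_log_post(x):
    r = x - mu
    return 0.5 * r @ r                # negative log posterior, up to a constant

def grad(x):
    return x - mu                     # its gradient

result = minimize(neg_log_post, x0=np.zeros(ndim), jac=grad, method="L-BFGS-B")
print(np.max(np.abs(result.x - mu)))  # essentially zero
```

L-BFGS keeps only a short history of gradient differences to approximate curvature, which is what makes it viable when the parameter space has millions of dimensions.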
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this talk we present CosmoSIS, a new framework for cosmological parameter estimation which uses modularity as the key to addressing these challenges. Calculations are broken up into interchangeable modular units with inputs and outputs clearly defined. CosmoSIS is designed to connect together, share, and advance development of inference tools across the community.
Weak gravitational lensing (WL) probes massive structures in the Universe on large scales, providing information about the late-time evolution of the dark matter distribution. One way to extract the non-Gaussian part of this information is WL peak counts, which have been shown to be a promising tool to constrain cosmology.
We propose a new model to predict WL peak counts. We generate fast simulations based on halo sampling and select peaks from the derived lensing maps. This approach has three main advantages. First, the model is very fast: only a few seconds are required per realization. Second, including realistic conditions is straightforward. Third, the model provides the full PDF information because of its stochasticity.
We combine approximate Bayesian computation (ABC) with our model to constrain cosmological parameters. ABC is an alternative way to sample the posterior when likelihood evaluation is intractable. It is an accept-reject sampler that probes a posterior considered close to the true one. Without the need to evaluate the covariance matrix for the likelihood, ABC can considerably reduce the computational cost.
I will first validate our model by comparing it with N-body simulations. Then, I will demonstrate how parameters are constrained with population Monte Carlo ABC (PMC ABC), and show its agreement with results obtained using a likelihood function. I will also examine different filtering techniques for WL and study their impact on constraints. Finally, I will give an outlook on parameter constraints using ABC with data from surveys such as CFHTLenS, KiDS, and DES.
Lin & Kilbinger (2015a) http://arxiv.org/abs/1410.6955
Lin & Kilbinger (2015b) http://arxiv.org/abs/1506.01076
Lin et al. in preparation.
By its very nature dark matter cannot be directly observed; it can be studied only through its gravitational effects. The aim of this work is to use tomographic weak gravitational lensing measurements to map the dark matter distribution in three dimensions.
As weak lensing is an integrated effect along the line of sight, recovering the 3D dark matter distribution involves deprojecting the radial lensing kernel from a limited number of noisy shear and distance measurements. This problem can be seen as a typical instance of an extremely ill-posed linear inverse problem.
Previous methods that have attempted to solve this inverse problem have relied on linear techniques such as Wiener filtering or SVD regularization, but with very poor results. In particular, the recovered structures are extremely smeared along the line of sight and the amplitude of these structures cannot be reliably estimated, which has thus far greatly limited the range of applications of 3D mass-mapping techniques.
We propose a new approach to the 3D mass-mapping problem, based on sparse regularization. The 3D dark matter map is recovered as the solution of an l1-penalized least-squares problem that incorporates individual redshift estimates for the lensing sources and accounts for the reduced shear; this problem is then solved using state-of-the-art convex optimization algorithms.
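To show the structure of such a problem, here is a toy l1-penalized least-squares recovery solved with ISTA (iterative soft thresholding), the simplest member of the family of convex solvers the abstract alludes to; the actual mass-mapping operator is, of course, far larger and encodes the lensing kernel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimize 0.5*||A x - y||^2 + lam*||x||_1 to recover a sparse vector
# from few noisy linear measurements (a stand-in for sparse mass-mapping).
n_meas, n_dim, n_nonzero = 60, 200, 5
A = rng.normal(size=(n_meas, n_dim)) / np.sqrt(n_meas)
x_true = np.zeros(n_dim)
support = rng.choice(n_dim, n_nonzero, replace=False)
x_true[support] = rng.uniform(2.0, 5.0, n_nonzero) * rng.choice([-1.0, 1.0], n_nonzero)
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
x = np.zeros(n_dim)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true))  # small: the sparse signal is recovered
```

The soft-threshold step is what enforces sparsity; faster variants (FISTA, primal-dual splitting) accelerate the same iteration.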
We demonstrate on simulations how this approach allows us for the first time to tightly constrain the redshift and masses of dark matter halos using 3D mass-mapping.
Finally, we present 3D reconstructions of the COSMOS and STAGES fields with unprecedented resolution.
In this talk, I will begin by reviewing what we hope to learn by measuring weak lensing with future surveys like LSST, Euclid and WFIRST. Then I will discuss two statistical challenges for weak lensing cosmology with future surveys. The first issue relates to the question of the complexity of the full redshift-dependent weak lensing analysis and marginalization over nuisance parameters related to systematic uncertainties. The second issue is the robust estimation of the small but coherent galaxy shape distortions from lensing, in spite of the larger shape distortions due to the atmosphere, telescope, and detectors. These two areas present both a challenge to the community, and an opportunity for new method development.
Astrophysicists and particle physicists are often interested in non-standard model comparisons. The search for the Higgs boson, for example, involves quantifying evidence for a narrow component added to a diffuse background distribution. The added component corresponds to the Higgs mass distribution, accounting for instrumental effects, and cannot be negative. Thus, not only is the null distribution on the boundary of the parameter space, but the location of the added component is unidentifiable under the null. In other settings, for example in the search for dark matter, physicists may aim to compare non-nested models. This can occur when known sources mimic a new signal or when two differently parametrized models must be compared. Because many researchers have a strong preference for frequency-based statistical methods, they use likelihood ratio tests (LRTs) and, when appropriate, Wilks and Chernoff-type asymptotics (Monte Carlo methods are often infeasible owing to extreme significance criteria). We consider the case of non-nested models. By formulating an additive composite model, we are able to view the LRT statistic as a random process indexed by the parameter that is unidentified under the null. This allows us to leverage methods developed to handle so-called trial factors and obtain asymptotically valid frequentist tests. We illustrate our proposed method in a series of numerical studies that validate its power and nominal false positive rate.
Co-authors: David van Dyk, Jan Conrad
Preprint available at: http://arxiv.org/abs/1509.01010
A central problem in astronomy is to infer latent properties, such as locations and colors, of stars and galaxies appearing in astronomical images. We have developed a hierarchical probabilistic model for this problem: the number of photons arriving at each pixel during an exposure is Poisson, with rate parameter depending on latent properties of the imaged stars and galaxies. We propose a variational-inference procedure to approximate the posterior distribution of these latent properties. Our procedure attains state-of-the-art results. It also exhibits the scaling characteristics necessary to construct an astronomical catalog from the full 20-terabyte Sloan Digital Sky Survey dataset, with a computational budget of 1 million CPU hours.
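A stripped-down version of that generative model fits in a few lines. The sketch below (an illustration, not the authors' variational-inference code) draws Poisson photon counts whose pixel rates are a sky background plus a point source spread by a known PSF, then recovers the source brightness by maximum likelihood on a grid.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy image: a single star at the center of a 21x21 pixel patch.
xx, yy = np.meshgrid(np.arange(21), np.arange(21))
psf = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()                      # normalized point-spread function
background, true_flux = 5.0, 2000.0
counts = rng.poisson(background + true_flux * psf)   # Poisson pixel counts

def log_like(flux):
    rate = background + flux * psf
    # Poisson log likelihood, dropping the flux-independent log(k!) term
    return np.sum(counts * np.log(rate) - rate)

grid = np.linspace(500.0, 4000.0, 701)
flux_hat = grid[np.argmax([log_like(f) for f in grid])]
print(flux_hat)  # close to the true flux of 2000
```

The full catalog problem replaces this one latent flux with positions, colors, and shapes for every source, which is why a scalable approximation such as variational inference is needed.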
The Large Synoptic Survey Telescope (LSST) will carry out an imaging survey covering the sky that is visible from Cerro Pachon in Northern Chile. With about 1000 observations of half the sky in ugrizy bands over a 10-year period, the LSST data will enable faint time-domain astronomy and a deep stack across half the sky reaching a flux limit a hundred times fainter than the SDSS survey. The resulting hundreds of petabytes of imaging data for about 40 billion objects will be used for scientific investigations ranging from the properties of near-Earth asteroids and brown dwarfs, to characterizations of dark matter and energy from strong and weak lensing, galaxy clustering, and distant supernovae. These data will represent a treasure trove for follow-up programs using other ground and space-based telescopes, such as fast-response fast-cadence photometric observations and spectroscopy, as well as for facilities operating at non-optical wavelengths. I will summarize the main LSST science drivers, provide a status report on LSST construction activities, and discuss astrostatistical challenges coming soon with LSST data.
Approximately 10 years ago, I completed my graduate thesis on algorithms for the efficient discovery of spatial associations. This work was motivated by the problem of linking asteroid observations in large surveys. In the time since I first started this research, I have made plenty of common mistakes and learned best practices that I wish I had known a little earlier. In this talk I will discuss my work on asteroid linkage, reflect on some of the common mistakes I made along the way, and discuss the practical aspects of software development that I wish I had known during graduate school.
In this talk, I will discuss a current research collaboration classifying all observed objects in the sky. The emphasis of this talk will be on an unsupervised feature learning algorithm designed for variable stars. This method learns the representation of light curves from data, unburdening astronomers from the need to spend time finding the best statistical descriptors, and reducing the computational costs related to fitting statistical models. The proposed algorithm learns features from both labeled and unlabeled light curves, overcoming the bias generated when the learning process is done only with labeled light curves.
Surveys in the near future will produce far more, and fainter, transients than ever before. Classifying these in real time while avoiding spectroscopy as far as possible will require a census of the universe, including the fainter and/or distant-from-the-disk extensions of more familiar classes. And this must be done with gappy, heteroskedastically noisy, sparse light curves. The LSST, for instance, will collect many observations in total, and yet only about 200 per filter over ten years for a given object. We will present a few possibilities using pathfinders like CRTS and PTF, if only to bring out the bigger challenges involved.
The Large Synoptic Survey Telescope will provide light curves for ~1 billion stars. For a small portion of these stars, there will be ~10,000 observations in 6 bands over the course of 10 years, and for this small subset we have successfully applied algorithms that are commonly used in transiting planet searches to demonstrate that extrasolar planets can be detected for a range of stellar hosts and planet parameters. However, the vast majority of stars will only be observed ~1,000 times over the course of ten years. Due to the extremely sparsely sampled nature of these light curves, we have found that the algorithms commonly used for transiting planets have very low success rates. We present data sets of simulated light curves as a statistical challenge for methods that can be used to detect a small periodic signal in a small, sparsely-sampled light curve. Solutions to this problem can greatly increase the number of planets that can be detected by LSST, as well as being applicable to Gaia and Hipparcos data.
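As a point of reference for this challenge, the sketch below runs a brute-force period search on a simulated sparse, irregularly sampled light curve (a sinusoid rather than a transit, and not the talk's data sets): fit a sinusoid at each trial frequency by linear least squares and keep the frequency that explains the most variance, essentially a Lomb-Scargle periodogram written out by hand.

```python
import numpy as np

rng = np.random.default_rng(4)

# ~1000 random visits over 10 years, as in the sparse LSST regime.
n_obs, true_period = 1000, 3.7
t = np.sort(rng.uniform(0.0, 3650.0, n_obs))
y = 0.1 * np.sin(2 * np.pi * t / true_period) + 0.02 * rng.normal(size=n_obs)

def variance_explained(freq):
    """Variance removed by fitting a + b*sin + c*cos at one frequency."""
    X = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y) - np.var(y - X @ coef)

# Trial frequencies; the narrow search window around 1/3.7 is assumed
# here for speed, while a real search scans a much wider range.
freqs = np.linspace(0.25, 0.30, 8000)
best_freq = freqs[np.argmax([variance_explained(f) for f in freqs])]
print(1.0 / best_freq)  # recovered period, close to 3.7 days
```

Transit signals are non-sinusoidal, which is why box least squares variants dominate in practice; the difficulty the abstract describes is that with ~1000 points the in-transit sample becomes vanishingly small.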
In this commentary, I will highlight some of the key challenges in determining and calibrating photometric redshifts for future projects such as LSST. Improved statistical methods can play an important role in ensuring that LSST redshift estimates are as accurate as possible, maximizing the scientific output of the telescope.
Unveiling the physical processes behind the observed galaxy properties is one of the main goals of extragalactic astrophysics in the next decade. To address this goal, precise galaxy distributions should be estimated to constrain phenomenological, semi-analytical, and cosmological models and to eventually discriminate between galaxy formation and evolution physics.
The J-PAS (Javalambre Physics of the Accelerating Universe Astrophysical Survey) will map 8500 deg² of the northern hemisphere with 54 contiguous medium-band (~145 Å) filters from 3700 Å to 9200 Å. The depth in these 56 optical bands will be ~22.5 (5σ AB), with a selection limit of r < 22.5 performed in a deep broad-band r image. J-PAS will provide low-resolution (R ~ 50) photo-spectra of about 200 million galaxies from the local Universe up to z = 4, leading to excellent photometric redshifts with a precision of ~1000 km/s. The statistical strength of J-PAS is also its main challenge: statistical uncertainties will no longer be a problem, and systematics in the analysis techniques will dominate the final error budget of our measurements. With the usual photometric techniques prone to known biases, and a resolution (R ~ 50) too low to successfully apply the usual spectroscopic techniques, new and well-suited methodologies are mandatory to extract accurate J-PAS galaxy distributions for the astrophysics of the next decade.
In this talk I present the latest developments in such methodologies, which use the probability distribution functions (PDFs) provided by modern photometric redshift codes. Although the posterior PDF in redshift-template space, PDF(z,T), is recognized as the right way to deal with photometric redshifts, and Bayesian inference is widely used to estimate galaxy properties, current distribution estimators assume galaxies with a fixed z, luminosity, stellar mass, etc. However, given the probabilistic nature of photometric redshifts, any galaxy property becomes probabilistic, and thus the posterior PDFs of galaxy properties must be tracked throughout the analysis process to ensure unbiased posterior galaxy distributions.
Using ALHAMBRA data, we have developed a posterior luminosity function estimator based on PDFs. This estimator has important advantages with respect to previous ones. Our new method provides an unbiased posterior luminosity function, Φ(z,MB), and (i) naturally accounts for z and MB uncertainties and covariances, (ii) ensures 100% completeness because it works with intrinsic magnitudes instead of observed ones, and (iii) robustly deals with spectral type selections because we can statistically decompose the luminosity function as Φ(z,MB) = Φ(z,MB|T=E/S0) + Φ(z,MB|T=S/SB). Moreover, instead of studying the luminosity function in redshift slices, we create a 2D model in z-MB that is affected by the same selection as the data, avoiding volume incompleteness, using all the available galaxies to infer the model parameters, and minimising the impact of cosmic variance over the redshift range.
Photometric redshift probability distribution functions are rapidly replacing redshift point estimates as data products of photometric galaxy surveys. Though there is not yet agreement on the best way to derive such data products, they are most commonly stacked to obtain an estimate of the redshift distribution function necessary for weak gravitational lensing calculations, and reduced to point estimates in calculations of more complicated statistics. This work challenges that paradigm and proposes a mathematically consistent technique motivated by fundamental probability theory, in which a probabilistic graphical model elucidates the relation of interim photometric redshift posteriors to a full posterior distribution over relevant physical parameters. The approach is applied to the one-point statistic of the redshift distribution function, sampled by Markov chain Monte Carlo procedures and validated on synthetic data. It is demonstrated that this method compares favorably to popular alternatives such as stacking and conversion of photometric redshift probability distributions to point estimates of redshift.
The Gaia satellite is currently surveying the entire celestial sphere down to 20th magnitude, obtaining astrometry, photometry, and low resolution spectrophotometry on one billion astronomical sources, plus radial velocities for over one hundred million stars. The main objective is to unravel the formation and evolution of our Galaxy through a determination of the 3D positions, velocities, and physical properties of its stars. Gaia's unique feature is the measurement of parallaxes and proper motions with hitherto unparalleled accuracy. As a survey, the physical properties of most of the targets are a priori unknown. Here I outline some of the methods being used by the Gaia consortium to classify objects and estimate astrophysical parameters from the Gaia data. I will also explain that although parallaxes are used to obtain distances to stars, the best estimator of the distance is not simply the inverse of the parallax. Estimating distances is instead an inference problem for which the specification of a prior is unavoidable.
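The parallax point can be illustrated numerically. In the sketch below (the exponentially decreasing volume prior is an assumption for this toy example; the Gaia literature treats the choice of prior in detail), a noisy parallax combined with a prior that puts more volume, and hence more stars, at larger distances yields a posterior mode in distance that is not the inverse of the measured parallax.

```python
import numpy as np

# Measured parallax and its uncertainty [mas]; a 30% fractional error.
parallax, sigma = 2.0, 0.6
scale_length = 1.0                    # prior scale length [kpc]

d = np.linspace(0.01, 10.0, 100000)                      # distance grid [kpc]
log_like = -0.5 * ((parallax - 1.0 / d) / sigma) ** 2    # Gaussian parallax likelihood
log_prior = 2.0 * np.log(d) - d / scale_length           # d^2 exp(-d/L) volume prior
posterior_mode = d[np.argmax(log_like + log_prior)]

print(posterior_mode, 1.0 / parallax)  # the two estimates disagree noticeably
```

The prior pulls the mode to larger distances than 1/parallax, and the effect grows rapidly as the fractional parallax error increases.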
A general overview of the variability processing and analysis tasks, and their associated challenges, will be presented from the point of view of the Gaia DPAC consortium, along with the methods we are using to tackle the global analysis of the Gaia data. Our goal is to systematically detect variable objects, classify them, derive characteristic parameters for specific variability classes, and give global descriptions of variable phenomena of celestial objects. The general Gaia release scenario will also be presented.
Since telescope time is limited, real-time light curve classification involves carefully selecting future time points at which sources should be observed. We propose a new probability-based measure of the information provided by the data for classification, and schedule new observations by selecting the future times that maximize the expected value of our measure. In the context of decision problems, our information measure has more appealing properties than variance- or spread-based measures that are often used in scheduling observations when the goal is estimation. Furthermore, we demonstrate that our measure satisfies a fundamental identity, and we leverage this result to achieve computational advantages over related statistical methods first proposed in Box and Hill (1967). Extensions in progress include applying the method when we have observations from multiple bands for each source, and adjusting for contamination from nearby sources or background diffuse emission. We acknowledge support from Smithsonian Competitive Grants Fund 40488100HH0043 and NSF grant DMS 1208791.
Measuring the distances between space-time points in the Universe is key to understanding its evolution and fundamental composition. Increasingly large surveys are exploring more of the Universe in both time and space with a particular focus on understanding the dark energy currently accelerating the expansion rate of the Universe. Our analysis tools and approaches need to grow along with and in preparation for these data.
I will begin by summarizing the current status of dark energy measurements based on Type Ia supernovae (SNe Ia) and complementary probes, and highlight the key opportunities for improvement. In concert with the major higher-redshift (0.2 < z < 1.7) SN Ia surveys, we need to improve our understanding of the observed astrophysical properties of Type Ia supernovae by studying SNe Ia in detail at accessible redshifts (z ~ 0.05).
I will focus on the need for full engagement between statistical approaches and the physical processes at work in the Universe. The true richness of astrostatistics will be realized when astronomers understand how to map their physical questions to incisive statistical approaches, and statisticians understand how their powerful techniques map to physical questions about the Universe.
We present a new photometric classification of supernovae (SNe) using redshifts derived directly from supernova light curves in the SNLS deferred analysis. Due to limited spectroscopic resources for candidate follow-up, photometric typing is becoming increasingly relevant for present and future supernova cosmology surveys. In this work we address one of the current challenges: obtaining a large, high-purity type Ia supernova sample.
We applied a new algorithm that provides high-resolution supernova photometric redshifts for all SN Ia candidates. Further information was also obtained using a general light-curve fitter. Supervised learning techniques, notably boosted decision trees, were used for SN classification with both real and simulated events. We show that it is possible to obtain a large type Ia SN sample with an estimated contamination of less than 5%.
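To make the classifier family concrete, here is a minimal from-scratch sketch of boosting with decision stumps (AdaBoost) on toy two-dimensional features; this illustrates the core idea behind boosted decision trees, not the SNLS pipeline, which works on real light-curve features with a full tree implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy binary classification: labels set by a diagonal boundary that a
# single axis-aligned stump cannot represent well on its own.
n = 400
X = rng.normal(size=(n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

def fit_stump(X, y, w):
    """Best single-feature threshold split under sample weights w."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] > thr, sign, -sign)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, sign, err)
    return best

w = np.full(n, 1.0 / n)
stumps = []
for _ in range(20):
    j, thr, sign, err = fit_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = np.where(X[:, j] > thr, sign, -sign)
    w *= np.exp(-alpha * y * pred)        # upweight misclassified events
    w /= w.sum()
    stumps.append((j, thr, sign, alpha))

def predict(X):
    score = sum(alpha * np.where(X[:, j] > thr, sign, -sign)
                for j, thr, sign, alpha in stumps)
    return np.sign(score)

accuracy = (predict(X) == y).mean()
print(accuracy)  # well above a single stump's accuracy
```

Each round reweights the training events so the next stump focuses on previous mistakes; gradient-boosted trees, as used in many SN typing pipelines, generalize this reweighting to arbitrary losses.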
The current supernova (SN) classification system is based on high-resolution spectroscopic observations. On the other hand, the next generation of large-scale surveys will gather photometric light curves of SNe at an unprecedented rate; developing an efficient method for SN photometric classification is critical to cope with the rapid growth of data volumes in current astronomical surveys. In this work, we present an adaptive mechanism that generates a predictive model to identify SNe Ia when the training set is made of spectroscopic data but the test set is made of photometric data. The method is applied to a simulated data set derived from the Supernova Photometric Classification Challenge (Kessler et al., 2010) and preprocessed using Gaussian process regression (Ishida et al., 2014) for all objects with at least one observational epoch before -3 and after +24 days since maximum brightness. The main difficulty lies in the compatibility of models between the spectroscopic (a.k.a. source) data and the photometric (a.k.a. target) data, since the underlying distributions on the source and target domains are expected to be significantly different. A solution to this difficulty is to adapt predictive models across domains. We attack this problem using machine learning techniques by combining two concepts: 1) domain adaptation is used to transfer properties of the source model to the target model; and 2) active learning is used as a means to rely on a small set of confident labels on the target domain. We show how a combination of both concepts significantly improves accuracy. Specifically, our proposed methodology generates predictive models with accuracy levels (purity) around 85% on the target set.
Such results outperform previous ones reported using the same data set: 50% from Richards et al., 2012; 77% from Karpenka et al., 2013; and 80% from Ishida & de Souza, 2013 (using the same data preprocessing, D1). They correspond to an increase in performance of about 10 percentage points compared to other domain adaptation techniques.
Ishida & de Souza, 2013, Kernel PCA for supernova photometric classification, MNRAS, 430 (509) - http://goo.gl/1tn8g9
Ishida et al., 2014, Statistical Challenges in 21st Century Cosmology, Proceedings of the International Astronomical Union, IAU Symposium, Volume 306 (326) - http://goo.gl/cF4zZd
Karpenka et al., 2013, A simple and robust method for automated photometric classification of supernovae using neural networks, MNRAS, 429 (1278) - http://goo.gl/9vnP7a
Kessler et al., 2010, Results from the Supernova Photometric Classification Challenge, PASP, 122 (1415) - http://goo.gl/jEaFrM
Richards et al, 2012, Semi-supervised learning for photometric supernova classification, MNRAS, 419 (1121) - http://goo.gl/ID2f5M
Conventional analyses in Type Ia supernova (SN Ia) cosmology currently use a simplistic regression of SN Ia optical magnitudes versus color and light curve shape. This approach results in a puzzling discrepancy between the estimated color-magnitude relation and the extinction-reddening vector expected for normal interstellar dust. We have constructed a probabilistic generative model describing the SN Ia data as arising from a combination of intrinsic SN Ia variations and host galaxy dust effects. We incorporate these latent variables into a hierarchical Bayesian statistical model for SN Ia light curve data, and estimate its hyperparameters via maximum likelihood and Gibbs sampling. We use the model to analyze the optical light curve data from a sample of 277 nearby SNe Ia at z < 0.10 by coherently estimating the distinct intrinsic and dust components. We find that the data are consistent with a host galaxy dust population average E(B-V) reddening of 0.07 ± 0.01 mag, and a dust law of R_V = 2.7 ± 0.3, consistent with the typical average for Milky Way dust. By simultaneously modeling the physically distinct effects of intrinsic SN Ia variations and host galaxy dust underlying the data, our model corrects for a color-dependent systematic distance bias in the tails of the SN Ia apparent color distribution. I will discuss the implications of this approach applied to a compilation of ~900 cosmological SNe Ia out to z ~ 1 cross-calibrated with the Pan-STARRS survey, as well as applications to spectroscopic and near-infrared data.
Mandel, K., Foley, R.J., & Kirshner, R.P. Type Ia Supernova Colors and Ejecta Velocities: Hierarchical Bayesian Regression with Non-Gaussian Distributions. 2014. ApJ, 797, 75.
Mandel, K., Narayan, G., & Kirshner, R.P. Type Ia Supernova Light Curve Inference: Hierarchical Models in the Optical and Near Infrared. 2011. ApJ, 731, 120.
Mandel, K., Wood-Vasey, W.M., Friedman, A.S., & Kirshner, R.P. Type Ia Supernova Light Curve Inference: Hierarchical Bayesian Analysis in the Near Infrared. 2009. ApJ, 704, 629.
We extend our previous application of a non-parametric Bayesian classification algorithm to the combination of optical and mid-IR data in an attempt to improve quasar completeness at high redshift. The training sample of 157k SDSS quasars is used to leverage 733k new quasars from SDSS imaging data, including 7874 objects that are expected to have 3.5 < z < 5. We present number count and luminosity function analyses that suggest that the resulting catalog is more complete at z > 3.5 than traditional mid-IR and optical selection. This analysis relies on the AllWISE survey and a small area of Spitzer data. We discuss both the application of the algorithm to 100 square degrees of Spitzer mapping on the multi-wavelength-rich SDSS Stripe 82 region and clustering analysis of the resulting high-redshift quasars.
We introduce a new framework for accelerating posterior sampling with computationally intensive models, borrowing ideas from deterministic approximation theory, derivative-free optimization, and experimental design. Previous efforts at integrating approximate models into inference typically sacrifice either the sampler's exactness or efficiency; our work seeks to address these limitations by exploiting useful convergence characteristics of local approximations. We show that our approximate Markov chain samples asymptotically from the exact posterior distribution of interest, and describe variations of the algorithm that exploit parallel computational resources. Results suggest that when the likelihood has some local regularity, the number of model evaluations per MCMC step can be greatly reduced without biasing the Monte Carlo average. Joint work with Patrick Conrad (MIT), Andrew Davis (MIT), Natesh Pillai (Harvard), and Aaron Smith (Ottawa).
Computer models are becoming increasingly prevalent in a variety of scientific settings; these models pose challenges because the resulting likelihood function cannot be directly evaluated. For example, astrophysicists develop computer models that predict the photometric magnitudes of a star as a function of input parameters such as age and chemical composition. A goal is to use such models to derive the physical properties of globular clusters, gravitationally bound collections of up to millions of stars. Recent observations from the Hubble Space Telescope provide evidence that globular clusters host multiple stellar populations, with stars belonging to the same population sharing certain physical properties. We embed physics-based computer models into a statistical likelihood function that assumes a hierarchical structuring of the parameters in which physical properties are common to (i) the cluster as a whole, or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. A Bayesian approach is adopted for model fitting, and we devise an adaptive MCMC scheme that greatly improves convergence relative to its precursor, a non-adaptive MCMC algorithm. Our method constitutes a major advance over standard practice, which involves fitting single computer models by comparing their predictions with one or more two-dimensional projections of the data.
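The adaptive-MCMC idea can be sketched in miniature (this is an illustration of the general principle, not the authors' scheme): a random-walk Metropolis sampler tunes its own proposal scale toward a target acceptance rate, starting from a deliberately poor scale, with a toy 2D Gaussian standing in for the expensive computer-model likelihood.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(x):
    """Toy target: standard 2D Gaussian log density, up to a constant."""
    return -0.5 * np.sum(x ** 2)

x = np.zeros(2)
scale = 5.0                        # deliberately bad initial proposal scale
samples = []
for i in range(20000):
    prop = x + scale * rng.normal(size=2)
    accepted = np.log(rng.uniform()) < log_post(prop) - log_post(x)
    if accepted:
        x = prop
    # stochastic-approximation update nudging the acceptance rate to ~23%
    scale *= np.exp(((1.0 if accepted else 0.0) - 0.23) / (i + 1) ** 0.6)
    samples.append(x.copy())

samples = np.array(samples[5000:])  # discard burn-in while adaptation settles
print(samples.mean(axis=0), samples.std(axis=0))  # near [0, 0] and [1, 1]
```

The adaptation gain decays with iteration number, so the chain becomes asymptotically non-adaptive, which is what preserves convergence to the correct target.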
I will present a new approach for quantifying the abundance of galaxy clusters and constraining cosmological parameters using dynamical measurements. In the standard method, galaxy line-of-sight (LOS) velocities or velocity dispersions are used to infer cluster masses in order to quantify the halo mass function (HMF). This technique is strongly affected by mass measurement errors. In our new method, the probability distributions of velocities for all clusters in the sample are summed to create a new statistic called the velocity distribution function (VDF). The VDF can be measured more directly and precisely than the HMF, and it can also be robustly predicted with cosmological simulations that capture the dynamics of subhalos or galaxies. We apply these two methods to mock cluster catalogs and forecast the bias and constraints on the matter density parameter Ωm and the amplitude of matter fluctuations σ8 in flat ΛCDM cosmologies. For an example observation of 200 massive clusters, the VDF constrains the parameter combination σ8 Ωm^0.29. Adding errors shows a negligible widening of the 68% constraints with only minor bias. However, the HMF with dynamical mass errors is biased toward low Ωm and high σ8, and the fiducial model lies well outside the forecast constraints, prior to accounting for Eddington bias. When the VDF is combined with constraints from the cosmic microwave background (CMB), the degeneracy between cosmological parameters can be significantly reduced. Upcoming spectroscopic surveys that probe larger volumes and fainter magnitudes will provide a larger number of clusters for applying the VDF as a cosmological probe.
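A minimal sketch of how a VDF might be assembled from per-cluster velocity samples, using synthetic Gaussian clusters (the dispersions, sample sizes, and binning are illustrative choices, not the authors' actual analysis):

```python
import numpy as np

def velocity_distribution_function(cluster_velocities, bins):
    """Sum line-of-sight velocity histograms over all clusters in a sample.

    cluster_velocities: list of 1-D arrays, one per cluster, holding member
    LOS velocities (km/s, relative to each cluster's mean).
    Returns the summed counts per velocity bin: the VDF.
    """
    vdf = np.zeros(len(bins) - 1)
    for v in cluster_velocities:
        counts, _ = np.histogram(v, bins=bins)
        vdf += counts
    return vdf

# Synthetic sample: 200 clusters of 50 members each, with Gaussian LOS
# velocities whose dispersion (300-1000 km/s) stands in for halo mass.
rng = np.random.default_rng(0)
clusters = [rng.normal(0.0, rng.uniform(300.0, 1000.0), size=50)
            for _ in range(200)]
bins = np.linspace(-5000.0, 5000.0, 101)
vdf = velocity_distribution_function(clusters, bins)
```

Unlike a halo mass function, no per-cluster mass estimate is needed here: the statistic is built directly from the observed velocities.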
The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. To detect cosmic filaments, we propose a density ridge model and introduce the subspace constrained mean shift algorithm. Density ridges are collections of curves that trace high-density regions. By comparing density ridges to both a smoothed particle hydrodynamics simulation and the Sloan Digital Sky Survey, we find that several properties of galaxies, including principal axes, color, stellar mass, and size, are all associated with filaments.
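A minimal sketch of subspace constrained mean shift in two dimensions, on a synthetic straight "filament" (the bandwidth, iteration count, and data are illustrative assumptions; this is not the authors' full pipeline): each walker's mean-shift step is projected onto the eigenvector of the KDE Hessian with the smallest eigenvalue, so walkers converge onto the density ridge rather than onto density modes.

```python
import numpy as np

def scms_step(x, data, h):
    """One subspace constrained mean shift update for a 2-D point x."""
    d = data - x                                    # offsets to the data
    k = np.exp(-0.5 * np.sum(d**2, axis=1) / h**2)  # Gaussian kernel weights
    # Ordinary mean-shift vector: move toward the local weighted mean.
    m = (k[:, None] * data).sum(axis=0) / k.sum() - x
    # KDE Hessian at x (up to a positive constant).
    H = (k[:, None, None] * (d[:, :, None] * d[:, None, :] / h**4
                             - np.eye(2) / h**2)).sum(axis=0)
    # The eigenvector with the smallest eigenvalue is orthogonal to a
    # one-dimensional ridge; project the shift onto that direction.
    _, V = np.linalg.eigh(H)                        # ascending eigenvalues
    v = V[:, :1]
    return x + (v @ v.T @ m)

# Synthetic 'filament': points along a line with Gaussian scatter.
rng = np.random.default_rng(1)
data = np.column_stack([rng.uniform(0.0, 10.0, 500),
                        0.1 * rng.normal(size=500)])

walkers = data.copy()
for _ in range(30):
    walkers = np.array([scms_step(x, data, 0.5) for x in walkers])
```

After iterating, the walkers trace the one-dimensional ridge of the kernel density estimate, here the line y ≈ 0.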
In Bayesian analysis, the posterior follows from the data and a choice of a prior and a likelihood. These choices may be somewhat subjective and reasonably vary over some range. Thus, we wish to measure the sensitivity of the posterior to variation in these choices. While the field of robust Bayes has been formed to address this problem, its tools are not commonly used in practice. An important reason for this is the difficulty of calculating robustness measures from MCMC draws; existing techniques often lack generality or require additional coding or computation. We demonstrate that, in contrast to MCMC, variational Bayes (VB) techniques are readily amenable to robustness analysis. Since VB casts posterior inference as an optimization problem, its methodology is built on the ability to calculate derivatives of posterior quantities with respect to model parameters, even in very complex models. We use this insight to develop local prior robustness measures for mean-field variational Bayes (MFVB), a particularly popular form of VB.
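The derivative-based robustness idea can be illustrated in a conjugate normal toy model, where the variational optimum coincides with the exact posterior and the sensitivity of the posterior mean to the prior mean is available in closed form (all numbers are hypothetical; this is a sketch of the concept, not the MFVB machinery):

```python
import numpy as np

def posterior_mean(x, mu0, tau, sigma):
    """Posterior mean of a normal mean theta with prior N(mu0, tau^2)
    and known observation noise sigma."""
    n = len(x)
    prec = 1.0 / tau**2 + n / sigma**2
    return (mu0 / tau**2 + x.sum() / sigma**2) / prec

def sensitivity_to_prior_mean(n, tau, sigma):
    """d(posterior mean)/d(mu0): a local prior robustness measure."""
    return (1.0 / tau**2) / (1.0 / tau**2 + n / sigma**2)

rng = np.random.default_rng(2)
x = rng.normal(1.0, 2.0, size=50)
mu0, tau, sigma = 0.0, 1.0, 2.0

analytic = sensitivity_to_prior_mean(len(x), tau, sigma)
eps = 1e-6
numeric = (posterior_mean(x, mu0 + eps, tau, sigma)
           - posterior_mean(x, mu0 - eps, tau, sigma)) / (2 * eps)
```

A small derivative means the data dominate and the analysis is locally robust to the prior mean; in complex models the same derivative is obtained by differentiating through the VB optimization rather than analytically.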
A statistical challenge that arises in the Bayesian estimation of the time delays among light curves of gravitationally lensed quasars is the multimodal posterior distribution of the time delay parameters. A popular Markov chain Monte Carlo strategy for dealing with multimodality is tempering. Unfortunately, this strategy typically requires extensive and time-consuming tuning and tends to be computationally expensive. We propose a repulsive-attractive Metropolis (RAM) algorithm that expedites a Markov chain's jumping between modes of a multimodal distribution in a simple and fast manner. The RAM algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repulsive, followed by an uphill move in density that aims to make local modes attractive. The downhill move is achieved via a reciprocal Metropolis ratio, so that the algorithm prefers downward movement. The uphill move does the opposite, using the standard Metropolis ratio, which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode. Because the acceptance probability of the proposal involves a ratio of intractable integrals, we introduce an auxiliary variable that contributes a term cancelling the intractable ratio. Using two examples, we demonstrate the potential of the RAM algorithm to explore a multimodal distribution more effectively and with less tuning than is commonly required by tempering-based methods.
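For context, a minimal random-walk Metropolis run on a synthetic bimodal target illustrates the trapping problem that the down-up proposal is designed to solve (this is plain Metropolis-Hastings, not the RAM algorithm itself; the target, step size, and seed are illustrative assumptions):

```python
import numpy as np

def log_target(x):
    # Bimodal target: equal mixture of N(-10, 1) and N(10, 1),
    # separated by a deep low-density valley.
    return np.logaddexp(-0.5 * (x + 10.0)**2, -0.5 * (x - 10.0)**2)

def metropolis(n_steps, step, x0, seed=3):
    rng = np.random.default_rng(seed)
    x, chain = x0, np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.normal()        # random-walk proposal
        # Standard Metropolis acceptance: uphill moves always accepted,
        # downhill moves accepted with probability pi(y)/pi(x).
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain[i] = x
    return chain

chain = metropolis(20000, step=1.0, x0=-10.0)
# Crossing the valley would require a long run of accepted downhill moves,
# so the chain stays in the left mode even though half the mass is at +10.
frac_right = np.mean(chain > 0)
```

The RAM proposal makes exactly such downhill runs likely by design, which is why it can hop between modes that a plain random walk never leaves.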
The Thomson optical depth provides a constraint on the properties of the early galaxies that drive cosmic reionization. Using non-parametric Bayesian modelling, I calculate posterior probabilities for the redshift evolution of the escape fraction of ionizing flux that enters the neutral intergalactic medium, based on the Planck CMB polarization and temperature data, which are sensitive to the Thomson optical depth. I also give results for the effective clumping factor and the faint-end magnitude limit of the galaxy luminosity function at z > 6.
Based on methods in arXiv:1507.02685, arXiv:1403.5849, and arXiv:1403.5922.
A large number of astronomical data analysis problems can be viewed as Bayesian mixture modelling problems. Approaching them from this perspective can be very fruitful. While the computational burden is higher, the scientific potential of the data can be drastically increased in some cases. I will present recent results inferring the mass of a dark substructure in a lens galaxy and disentangling exoplanet signals from stellar activity in radial velocity data.
The Kepler Mission has discovered thousands of extra-solar planets and planet candidates, enabled by Kepler's ultra-high-precision photometry. However, the improvements in precision have brought to light new statistical challenges in detecting and characterizing exoplanets in the presence of correlated noise. These challenges have afflicted many of the most interesting exoplanets, from Earth-like planets to planetary systems whose orbital dynamics place important constraints on how planetary systems form and evolve. I will focus on the problem of correlated noise for characterizing transiting exoplanets using transit timing variations. I will present a comparison of several techniques using wavelets, Gaussian processes, and polynomial splines to account for correlated noise in the likelihood function when inferring planetary parameters. Accounting for correlated noise is essential to realizing the full potential of the Kepler Mission and future transiting planet missions like TESS and PLATO.
The Kepler Mission has discovered thousands of exoplanets and revolutionized our understanding of their population. This large, homogeneous catalog of discoveries has enabled rigorous studies of the occurrence rate of exoplanets and extra-solar planetary systems as a function of their physical properties. Transit surveys like Kepler are most sensitive to planets with shorter orbital periods than the gas giant planets that dominate the dynamics of our Solar System. I have developed a fully-automated method of discovering and characterizing long-period transiting planets with only one or two transits in the Kepler archival light curves. Since the method involves no human intervention, I can also precisely measure the completeness function of the discoveries and place constraints on the occurrence rate of exoplanets with orbital periods longer than 2 years. I will present this method and the statistical tools developed as part of this project.
Exoplanets that transit their host stars provide a unique opportunity to study the range of planetary compositions produced by planet formation processes. When a mass measurement can be obtained for such a planet, it can be combined with the radius inferred from the depth of the transit to yield a bulk planetary density. The distribution of bulk densities, the corresponding range of allowed compositions, and how these characteristics change as a function of distance from the host star are important constraints for planet formation theory.
Here we explore these relationships with a hierarchical Bayesian model (HBM) of the mass-radius-period (M-R-P) dependence of low-mass (< 20 M_E) planets in multi-planet systems. In particular, we investigate different period parameterizations and present initial results of HBMs that account for measurement uncertainty. Looking ahead to next steps in this work, we describe the challenges involved in expanding the HBM to account for potential biases due to selection effects such as the detection efficiency and choice of survey strategy. Addressing these issues becomes important as we move toward comparing the M-R-P relationships of two sets of planets whose masses have been measured via different astronomical methods. When the respective observational biases in period and bulk density of these two datasets are accounted for, any remaining differences would correspond to intrinsic, astrophysical variations with important implications for the formation and possible migration of low-mass planets.
Variable stars observed in multiband time domain surveys are an example of functional data. Estimating parameters and classifying these functions is important for a variety of science goals. These tasks are especially challenging when the functions are sparsely sampled. In this work we present a new model for RR Lyrae variable stars constructed using Stripe 82 SDSS data. The resulting model is parsimonious (few free parameters) and computationally efficient to fit to new light curves, making it well suited for finding RR Lyrae in surveys such as Pan-STARRS and DES. We demonstrate the ability of the model to estimate periods and classify sparsely sampled RR Lyrae. We discuss ongoing work to use RR Lyrae as tracers of Milky Way halo structure.
Simulation studies suggest that larger galaxies are formed through accretion of smaller galaxies. The process of accretion creates debris that contains only around 1% of the total mass but is spread over a volume that is orders of magnitude larger and follows elegant and simple physics. Studying these structures will provide answers to fundamental questions such as (a) how often do smaller galaxies merge into larger ones, and (b) what is the structure or locus of the debris.
Traditionally, astronomers have relied on predictions from simulations of how these debris structures form and behave. In the talk I will outline a method to identify the two substructures — shells and streams — from the two-dimensional projected positions of a data cloud of stars. We first estimate the density of the two-dimensional positions of the stars using a kernel density estimate (KDE), and then crucially use features of this KDE to distinguish shells from streams. The feature that distinguishes streams from shells is how the density of the data cloud behaves around the 'ridge.'
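The first step, estimating the stellar density field with a KDE, might look as follows on synthetic data (the stream geometry, sample sizes, and scipy's default bandwidth are illustrative assumptions; the shell/stream discrimination itself is the subject of the talk):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic projected star positions: a thin arc-shaped 'stream' plus a
# uniform field-star background (illustrative only, not survey data).
rng = np.random.default_rng(4)
theta = rng.uniform(0.0, np.pi, 400)
stream = np.column_stack([np.cos(theta), np.sin(theta)])
stream += 0.03 * rng.normal(size=stream.shape)
field = rng.uniform(-2.0, 2.0, size=(600, 2))
stars = np.vstack([stream, field])

# Kernel density estimate of the 2-D positions; gaussian_kde expects one
# column per observation, hence the transpose.
kde = gaussian_kde(stars.T)

# The estimated density is far higher on the stream than in the field.
on_stream = kde([[0.0], [1.0]])[0]      # a point on the arc
off_stream = kde([[-1.8], [-1.8]])[0]   # an empty corner
```

With the density field in hand, one can then examine how the density falls off on either side of its ridge, which is the feature used to separate shells from streams.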
Many estimation problems in astronomy are highly complex with non-standard aggregate data objects (e.g., images, spectra, and light curves) which are not directly amenable to traditional nonparametric statistical analysis. At the same time, the physical processes that generated these data may themselves be complicated; often the only existing “theory” is in the form of complex simulations. In this talk, I will describe our efforts toward developing nonparametric methods that fully exploit massive data rich in information but complex in structure, and our work toward modeling such data beyond regression and classification. (Part of this work is joint with Peter Freeman, Rafael Izbicki, and Chad Schafer.)
Most university courses on statistical time series emphasize inference for regularly-sampled, continuously-distributed data, often with an implicit assumption of Gaussianity (through the use of least-squares methods) and an explicit assumption of stationarity.
At the frontiers of science, data bearing on the shakiest pillars of our scientific understanding are usually sparse, irregularly sampled, sometimes featuring extremes, and often presented as binned counts. The Central Limit Theorem does not apply, and the methods taught in our statistics courses are not applicable.
This talk presents a few methods and examples for making inference from such data. The three key ideas are:
1) even if data are sampled at discrete times, it can be useful to model the underlying phenomena in continuous time to overcome the problem of irregularly-sampled data;
2) Poisson, negative binomial, alpha-stable, and other infinitely divisible distributions offer a rich alternative to the Gaussian for modeling and inference, with some exciting new features; and
3) stationary processes can be used as building blocks or starting points for modeling non-stationary phenomena.
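Ideas 1) and 2) can be sketched together in a small simulation, assuming a hypothetical smooth intensity and irregular observation epochs: the phenomenon is modeled in continuous time, and the data are Poisson-distributed counts in irregularly spaced bins.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent continuous-time intensity (events per day): a smooth periodic
# signal, purely illustrative.
def intensity(t):
    return 10.0 + 5.0 * np.sin(2.0 * np.pi * t / 30.0)

# Irregularly spaced observation epochs, as in real astronomical data.
edges = np.sort(rng.uniform(0.0, 300.0, size=40))

# Expected counts per bin: integrate the intensity over each interval
# with the trapezoid rule on a fine grid.
def expected_count(a, b, n=200):
    t = np.linspace(a, b, n)
    lam = intensity(t)
    return np.sum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t))

mu = np.array([expected_count(a, b) for a, b in zip(edges[:-1], edges[1:])])
counts = rng.poisson(mu)   # the binned count data actually observed
```

Because the model lives in continuous time, the irregular bin edges pose no difficulty: each bin's likelihood is Poisson with mean equal to the integrated intensity, with no Gaussian approximation required.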
Upcoming wide-field surveys such as LSST, WFIRST, and Euclid have ambitious goals for the precise measurement of cosmological and dark energy parameters. For the gravitational lensing distortion of galaxy images, or “cosmic shear”, a primary systematic is the distribution of intrinsic galaxy shapes. By simultaneously inferring the distribution of galaxy morphologies disambiguated from lensing in a hierarchical probabilistic model, we achieve large gains in gravitational lensing precision and learn about the evolution of galaxies over cosmic time. However, many probabilistic algorithms become computationally challenging when working with a large number of images taken at different times or from different instruments. We have developed computationally efficient and flexible algorithms to combine inferences about image features from different epochs using importance sampling of ‘interim’ single-epoch inferences. I will demonstrate how our importance sampling algorithm allows us to solve previously intractable computing problems such as the marginalization of point-spread function uncertainties in individual epochs while improving characterization of sources common to all epochs.
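The interim-reweighting trick can be illustrated with a one-dimensional toy (not the authors' imaging pipeline; all quantities are synthetic): each epoch is analyzed once under a flat interim prior, and the resulting samples are importance-reweighted by a population prior to infer the population-level parameter without re-analyzing any epoch.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# True per-epoch source fluxes drawn from a population N(alpha_true, 1).
alpha_true = 3.0
true_flux = rng.normal(alpha_true, 1.0, size=100)
obs = true_flux + 0.5 * rng.normal(size=100)      # noisy epoch measurements

# 'Interim' single-epoch posteriors under a flat prior are N(obs, 0.5^2);
# draw interim samples exactly as a per-epoch inference would.
interim = obs[:, None] + 0.5 * rng.normal(size=(100, 200))

# Marginal likelihood of the population parameter alpha via importance
# sampling: reweight each epoch's interim samples by the population
# prior over the (flat, constant) interim prior.
def log_like(alpha):
    w = norm.pdf(interim, loc=alpha, scale=1.0)   # population prior weight
    return np.sum(np.log(np.mean(w, axis=1)))

grid = np.linspace(0.0, 6.0, 241)
ll = np.array([log_like(a) for a in grid])
alpha_hat = grid[np.argmax(ll)]
```

The per-epoch analyses never need to be repeated as the population model changes; only the cheap reweighting step is redone, which is what makes the approach tractable for many epochs.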
Dark matter simulations provide robust and scalable predictions of the matter distribution for a given cosmology. However, because dark matter is not directly observable, in most cases when comparing dark matter simulations with observations, we need to assume a specific model for the galaxy-halo connection. In this talk I will discuss some recent developments on the abundance matching technique, challenges we face with empirical models, and also alternative approaches to the galaxy-halo connection with dark matter simulations.
I will describe efforts to create and use synthetic data from cosmological 3D hydrodynamics coupled to models of astrophysical processes. These efforts span wide ranges of volume and detail — the first needed to capture galaxy diversity and the second to evolve models on scales relevant for high-resolution observations. In particular, I will discuss our work to create and analyze millions of synthetic images derived from the Illustris project, a recent large hydrodynamical simulation effort. Recent results include approximately reproducing the observed correlations among optical galaxy structure, mass, star formation rate, and dynamics, and implementing ever more realistic feedback effects. Many open issues remain, such as the nature of feedback, the gas and metal cycle, the role of bulges and black holes, dust, and the evolutionary history of galaxy structures. While current and future datasets can measure the results of chaotic processes that shape the lives of massive galaxies over cosmic time, they cannot fully interpret these subtle and diverse effects without large, detailed predictions. For this reason, the best benchmarks for utilizing surveys to understand galaxies will be simulations with full spatial and spectral realism, including star formation, photoionization, mergers, tidal tails, star-forming clumps, dust, and AGN. To advance these goals, I advocate for systematically mock-observing large suites of hydrodynamical simulations and correlating these mock data to model galaxies' intrinsic physical histories.
Observations of the cosmic microwave background radiation (CMB) and the relic abundances of light elements are typically used to constrain the effective number of relativistic species in the early universe (Neff). In this talk we show to what extent CMB measurements and Big Bang Nucleosynthesis (BBN) observables are disconnected in their predictions of the effective number of neutrino species. We find that constraints are strongly affected if one considers a scaling of the neutrino temperature in such a way as to keep the neutrino energy density fixed, which can be produced in various phenomenological situations. This freedom in the measurement of the effective number of neutrinos opens up a new window into physics beyond the standard model of particle physics. It also ameliorates the tension between ground-based detector experiments, which seem to statistically favor five neutrino species, and cosmological measurements, which seem to favor three to four. We also discuss the effect of these considerations on the Lithium Problem of BBN.
Over the past two decades many strides have been made in the effort to model galactic chemical evolution and to determine the details of stellar evolution, such as correlations between age and metallicity. Moreover, efforts to uncover the details of stellar nucleosynthesis as a product of inherited enrichment, internal nucleosynthetic processes, contributed enrichment from binary coevolution, and explosive nucleosynthesis during a supernova have yielded great gains in understanding the fundamentals of stellar nucleosynthesis. Many of these gains were made by constructing detailed simulations of high-mass stellar evolution and by studying chemical abundance patterns of individual stars. In my talk, I will discuss an alternate way to decode the chemical abundances we see in stars en masse to shed light on the “inherent” nucleosynthesis of progenitors across the core-collapse SN mass spectrum. In addition, the analysis discussed can reveal insights concerning the stochastic nature of star formation in different galactic environments, help pinpoint where on the stellar mass spectrum the likely nucleosynthetic sites for various neutron-capture elements lie, and probe the mass-dependent yield possibilities and trends for all elements in general.
We have developed a Bayesian method for estimating the mass and cumulative mass profile of the Milky Way that uses the positions and velocities of Galactic satellites. A preliminary analysis using the kinematic data of globular clusters and dwarf galaxies, and assuming a simple Hernquist (1990) model, returned a mass estimate for the Milky Way that is in agreement with many other studies (see Eadie, Harris, & Widrow 2015). In our study, however, we found that both high-velocity objects and measurement uncertainties have a strong influence on the estimated mass. Thus, we have since developed a way to include measurement uncertainties by treating all of the satellite positions and velocities as parameters in a hierarchical Bayesian model (see Eadie et al., JSM Proceedings 2015, Section on Physical & Engineering Sciences, 792-799; Eadie et al. 2016, in preparation). We have also started using a more realistic model for the Milky Way. In this talk, I will give an introduction to our method and discuss our most recent results.
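The hierarchical treatment of measurement uncertainties can be sketched under strong simplifications (a one-dimensional Gaussian velocity model with known errors and a flat prior on the dispersion, rather than the authors' physical Milky Way model): each satellite's true velocity is a latent parameter, and a Gibbs sampler alternates between the latent velocities and the population dispersion.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic satellites: true LOS velocities from N(0, sigma_true^2),
# observed with known, heterogeneous measurement errors.
sigma_true = 120.0
n = 150
v_true = rng.normal(0.0, sigma_true, n)
err = rng.uniform(10.0, 60.0, n)
v_obs = v_true + err * rng.normal(size=n)

# Gibbs sampler for the hierarchical model:
#   v_true_i ~ N(0, s2),   v_obs_i | v_true_i ~ N(v_true_i, err_i^2)
# with a flat prior on s2 (an illustrative, simplifying choice).
def gibbs(v_obs, err, n_iter=2000, seed=8):
    rng = np.random.default_rng(seed)
    s2 = np.var(v_obs)
    draws = []
    for _ in range(n_iter):
        # Conditional for each latent velocity: precision-weighted shrinkage.
        var = 1.0 / (1.0 / s2 + 1.0 / err**2)
        mean = var * v_obs / err**2
        vt = mean + np.sqrt(var) * rng.normal(size=len(v_obs))
        # Conditional for s2: scaled inverse chi-square.
        s2 = np.sum(vt**2) / rng.chisquare(len(v_obs) - 2)
        draws.append(np.sqrt(s2))
    return np.array(draws)

sigma_draws = gibbs(v_obs, err)
sigma_est = sigma_draws[500:].mean()   # posterior mean after burn-in
```

Treating the true velocities as parameters lets the heterogeneous measurement errors propagate into the posterior for the dispersion, rather than biasing it upward as a naive sample variance of the observed velocities would.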