Publications and Preprints
Yotam Hechtlinger, Purvasha Chakravarti, Jining Qin
Main Idea: The convolution used in Convolutional Neural Networks (CNNs) implicitly assumes that spatially close variables are the most correlated, which requires the data to have a grid structure. The convolution can be generalized to datasets lacking a grid structure by learning the data's graph structure (e.g., from the correlation matrix) and convolving each variable with its closest neighbors, selected by a random walk on the data graph. This provides a generalization of CNNs to new data structures.
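The neighbor-selection step could be sketched roughly as follows. This is my illustrative simplification, not the paper's exact procedure: the function name, the use of absolute correlations as edge weights, and the choice of summing walk probabilities over a few steps are all assumptions.

```python
import numpy as np

def graph_conv_neighbors(X, p=5, walk_len=3):
    """Pick, for each variable, the p neighbors most visited by a short
    random walk on the correlation graph (an illustrative sketch)."""
    # Edge weights: absolute correlations between variables (columns of X).
    corr = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    # Row-normalize to get random-walk transition probabilities.
    P = corr / corr.sum(axis=1, keepdims=True)
    # Expected visit mass over walks of length 1..walk_len.
    visit = sum(np.linalg.matrix_power(P, k) for k in range(1, walk_len + 1))
    # For each variable, the indices of its p most-visited neighbors;
    # a graph convolution would then apply shared weights to these.
    return np.argsort(-visit, axis=1)[:, :p]
```

A convolution over these neighborhoods then plays the role the spatial kernel plays on grid data.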
Yotam Hechtlinger
Main Idea: We suggest a simple method to interpret the behavior of any predictive model, for both regression and classification. Given a particular model, the information required to interpret it can be obtained by studying the partial derivatives of the model with respect to the input. We illustrate this insight by interpreting convolutional and multilayer neural networks in the field of natural language processing.
Advances in Neural Information Processing Systems (NIPS), Interpretable Machine Learning Workshop, 2016
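For a black-box model, the input gradient can be approximated numerically. The sketch below uses central finite differences; the function name and the linear toy model in the usage note are mine, not from the paper (which works with analytic gradients of neural networks).

```python
import numpy as np

def input_gradient(model, x, eps=1e-5):
    """Approximate the partial derivatives of a scalar-output model
    with respect to each input coordinate via central differences."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        # Central difference: (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps)
        grad[i] = (model(x + e) - model(x - e)) / (2 * eps)
    return grad
```

For a linear model `lambda x: 3*x[0] - 2*x[1]`, the recovered gradient is approximately `[3, -2]`: each coordinate's derivative directly reflects that variable's influence on the prediction.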

Yoav Benjamini, Yotam Hechtlinger
Main Idea: In a paper by Jager and Leek, the authors estimate the science-wise FDR of medical research at 14% by analyzing the p-value distribution in top medical journals. In a discussion we were asked to write on the paper, we address possible drawbacks of the analysis, suggest ways to improve the estimation of the science-wise FDR, and consider the multiplicity problem of selective inference that scientists must face.
Biostatistics, 2013
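The core idea of estimating a false discovery proportion from a p-value distribution can be illustrated with Storey's classic estimator of the null proportion. This is a deliberate simplification for intuition only: Jager and Leek instead fit a truncated mixture model to published p-values below 0.05.

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey-style estimate of the proportion of true nulls: p-values
    from true nulls are uniform on [0, 1], so the fraction of p-values
    above `lam`, rescaled by 1 - lam, estimates pi0. A simplification
    of the kind of reasoning used to estimate the science-wise FDR."""
    pvals = np.asarray(pvals)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))
```

On a mixture of uniform nulls and small non-null p-values, the estimator recovers the null fraction, with a known upward bias when non-null p-values leak above `lam`.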
Misc.
I was born and raised in Israel and currently reside in Pittsburgh. I enjoy rock climbing, theatre, and random stuff that makes you wonder where the time went. My office is in FMS 132.
Github, LinkedIn, and Email (yhechtli['at']stat['dot']cmu[.]edu).