
June 26–28, 2013, Lihue, Hawaii 
 
Schedule
Wednesday, June 26, 2013
Thursday, June 27, 2013
Friday, June 28, 2013


Intrinsic modes of neuronal firing: Single and multi-unit Shigeru Shinomoto Department of Physics, Kyoto University The occurrences of neuronal firings in vivo are influenced by external stimuli or behavioral contexts, and the firing frequency fluctuates continuously in time. However, not every aspect of the firing is determined externally; we have found that the firing irregularity, as represented by the variation of consecutive interspike intervals, is rather constant for any given neuron, independent of stimulus conditions, while it differs among neurons and in particular between cortical areas [1]. Here we explore another intrinsic characteristic of a single spike train, and also a new aspect of multi-unit spike trains. [Single unit] The firing rate can be estimated by the frequency, defined as the number of firings divided by the observation interval. The frequency of firings in a finite time interval may fluctuate not only due to the irregular occurrences of spikes, but also according to temporal modulation of the underlying rate. From a given spike train, we wish to determine whether the underlying rate undergoes continuous modulation or exhibits transitions between discrete states. By constructing a method of inferring the mode of the rate process, we analyze biological spike trains [2]. [Multi unit] When analyzing multiple spike trains, a peak or trough delayed by a few milliseconds in the cross-correlation of spike trains is presumed to indicate an excitatory or inhibitory synaptic connection, respectively. A relatively broader peak in the cross-correlation centered at zero time delay may indicate common inputs from other neurons. We shall analyze whether the presence of these functional connections is related to correlated neuronal activity on a timescale of a few hundred milliseconds [3]. [1] S. Shinomoto et al. PLoS Comput Biol, 5(7):e1000433, 2009. [2] Y. Mochizuki and S. Shinomoto, a talk in this workshop. [3] T. Shintani and S. Shinomoto, a talk in this workshop. 
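As a concrete illustration of the irregularity measure mentioned above (an editor's sketch, not the speaker's code): the local variation Lv compares consecutive interspike intervals and is near 1 for a Poisson train, below 1 for regular firing, and above 1 for bursty firing. The synthetic trains are hypothetical.

```python
import numpy as np

def local_variation(spike_times):
    """Local variation Lv of a spike train, based on pairs of
    consecutive interspike intervals: ~1 for Poisson firing,
    <1 for regular firing, >1 for bursty firing."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if len(isi) < 2:
        raise ValueError("need at least 3 spikes")
    num = (isi[:-1] - isi[1:]) ** 2
    den = (isi[:-1] + isi[1:]) ** 2
    return 3.0 * np.mean(num / den)

rng = np.random.default_rng(0)
regular = np.arange(0, 10, 0.1)                       # clock-like train
poisson = np.cumsum(rng.exponential(0.1, size=1000))  # Poisson train
print(local_variation(regular))   # ~0
print(local_variation(poisson))   # ~1
```

Because Lv is built from ratios of adjacent intervals, it is largely insensitive to slow rate modulation, which is what makes it usable as an intrinsic, stimulus-independent characteristic.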
Behavior-dependent modification of synaptic efficacies in the prefrontal cortical microcircuits Shigeyoshi Fujisawa RIKEN Brain Science Institute Computations in neocortical local circuits are likely to be implemented by the formation and segregation of cell assemblies. Yet, how neurons interact with each other flexibly through synaptic connections is not well understood. Here I investigated fine-timescale neuronal interactions in microcircuits of the prefrontal cortex in rats performing a working memory task, by using large-scale, high-density extracellular recordings of local circuits via multichannel silicon probes, which enable the observation of simultaneous firing activity in >100 neurons. I observed that putative monosynaptic interactions reflected short-term plasticity in their dynamic and predictable modulation across various aspects of the task. Seeking potential mechanisms for such effects, I found evidence for both firing-pattern-dependent facilitation and depression, as well as for a supralinear effect of presynaptic coincidence on the firing of postsynaptic targets. 
Origin and functional roles of spontaneous noise in the brain: Self-organized stochastic resonance and memory states Junnosuke Teramae Graduate School of Information Science and Technology, Osaka University In the absence of sensory stimulation, cortical circuits generate rich and ubiquitous forms of spontaneous activity. The spontaneous states exhibit sparse, irregular and asynchronous neuronal firing, and interact bidirectionally with sensory experience. There has been much recent interest in the genesis and functional roles of such spontaneous activity, or noise, in the brain. However, the computational principles that determine noise in neural networks remain almost completely unknown. By focusing on recent experimental findings that excitatory postsynaptic potentials (EPSPs) between cortical pyramidal neurons obey a long-tailed, typically lognormal, distribution, we reveal a network origin and a functional role of the spontaneous noise. Such a distribution creates a synaptic spectrum spanning from many weak synapses (typically, EPSP ~ 0.1 mV) to a small fraction of extremely strong synapses (EPSP ~ 10 mV). We numerically and analytically study the significance of these strong-sparse and weak-dense EPSPs for the dynamics of recurrent networks and for spike-based signal processing. We show that the networks can robustly generate internal noise optimal for spike transmission between neurons with the help of a long-tailed distribution in the weights of recurrent connections. Weak-dense connections redistribute excitatory activity over the network as noise sources to optimally enhance the responses of individual neurons to input at sparse-strong connections. Our results identify a simple mechanism for internal noise generation supporting both stability and optimal spike-based communication between cortical neurons. We also apply the mechanism to an associative memory network. 
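A minimal numerical sketch of the long-tailed EPSP distribution described above (an editor's illustration; the lognormal parameters are assumed, chosen only so that the median amplitude is near 0.1 mV):

```python
import numpy as np

# Illustrative lognormal EPSP amplitude distribution (hypothetical
# parameters: median ~0.1 mV with a heavy tail).
rng = np.random.default_rng(1)
mu, sigma = np.log(0.1), 1.3            # log-mV scale; assumed values
epsp = rng.lognormal(mu, sigma, size=100_000)

median = np.median(epsp)                       # typical (weak) synapse
strong_frac = np.mean(epsp > 1.0)              # fraction of strong synapses
strong_weight = epsp[epsp > 1.0].sum() / epsp.sum()  # their share of total weight
print(median, strong_frac, strong_weight)
```

Despite making up only a few percent of the connections, the strong (>1 mV) synapses carry a disproportionate share of the total synaptic weight, which is the "strong-sparse / weak-dense" split the abstract refers to.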
Neural motifs of higher-order interactions: Signature of latent structures Hiroyuki Nakahara Combined neuronal activity patterns convey information in the brain, and thereby underlie behavior. Understanding the function and structure of these activity patterns is challenging because the dependency among the patterns, or neural interactions, must be captured in the analysis of a given large-scale dataset. Furthermore, the different orders of the interactions should be sorted out, as they are key signatures of the latent structure. In this talk, we present our efforts to dissociate different underlying structures, using an information geometry framework as a generic basis of analysis in order to properly separate out higher-order interactions. A given hypothesis or model often corresponds to a particular embedding in a given dataset, that is, a signature of the latent structure. Thus, we posit that a more parsimonious and more accurate model describing neural interactions (than models using pairwise interactions, for example) could be constructed if we had access to the latent structure; in turn, embedded higher-order interactions found in the dataset are a clue for verifying the structure. Using a dataset of spontaneous local field potential (LFP) peak activities recorded in cortical organotypic cultures, we demonstrate this approach in two cases: hierarchy, identifying different levels of units of interactions, and nonlinear thresholding, using the dichotomized Gaussian model as a generative model. If time permits, further work with preliminary results will also be mentioned. 
Estimating Time-varying Higher-order Neuronal Interactions in Awake Behaving Animals Hideaki Shimazaki1, Shun-ichi Amari1, Emery N. Brown2,3,4, Sonja Grün5,6 1.RIKEN Brain Science Institute, Wako-shi, Saitama, Japan 2.Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, USA 3.Department of Anesthesia and Critical Care, Massachusetts General Hospital, Boston, USA 4.Division of Health Sciences and Technology, Harvard Medical School/Massachusetts Institute of Technology, Cambridge, USA 5.Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Jülich, Germany 6.Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany Neurons embedded in a network exhibit correlated spiking activity, and can produce synchronous spikes with millisecond precision. It has been reported that this correlated activity organizes dynamically during behavior and cognition, independently of the spike rates of individual neurons. However, in order to detect the dependency of multiple neurons, it may be necessary to estimate their 'higher-order' interactions, which cannot be inferred from the simultaneous activities of only pairs of neurons from the group. To estimate time-varying higher-order neuronal interactions, we recently developed a method that combines a model of higher-order interactions with a state-space method. We applied this method to three neurons recorded simultaneously from the primary motor cortex of a monkey engaged in a delayed motor task (data from Riehle et al., Science, 1997). We found that, depending on the behavioral demands of the task, these neurons dynamically organized into a group characterized by the presence of a higher-order (triple-wise) interaction. There was, however, no noticeable change in the firing rates of the involved neurons.
The analysis demonstrates that time-varying higher-order analysis allows us to detect subsets of correlated neurons that may belong to a larger set of neurons comprising a neuronal cell assembly. Ref: Shimazaki et al., PLoS Comput. Biol. (2012) 8(3): e1002385 
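The triple-wise interaction in the log-linear model can be made concrete as follows (an editor's sketch of the static, single-time-bin case only; the talk's method additionally tracks this quantity over time with a state-space model):

```python
import numpy as np
from itertools import product

def triplewise_theta(patterns):
    """Third-order interaction theta_123 of the log-linear model for
    three binary neurons, estimated from an (n_trials, 3) 0/1 array.
    theta_123 = 0 for independent and for purely pairwise-coupled activity."""
    patterns = np.asarray(patterns)
    n = len(patterns)
    # empirical probability of each of the 8 joint patterns
    # (small additive regularization avoids log(0))
    p = {x: (np.all(patterns == x, axis=1).sum() + 0.5) / (n + 4.0)
         for x in product([0, 1], repeat=3)}
    num = p[1, 1, 1] * p[1, 0, 0] * p[0, 1, 0] * p[0, 0, 1]
    den = p[1, 1, 0] * p[1, 0, 1] * p[0, 1, 1] * p[0, 0, 0]
    return np.log(num / den)

rng = np.random.default_rng(2)
indep = (rng.random((20_000, 3)) < 0.3).astype(int)   # independent neurons
sync = indep.copy()
sync[rng.random(20_000) < 0.05] = 1                   # injected synchronous triplets
print(triplewise_theta(indep))   # ~0
print(triplewise_theta(sync))    # clearly positive
```

Injecting synchronous triplets drives theta_123 well above zero even though each neuron's rate changes only slightly, mirroring the abstract's finding that the triple-wise interaction can change without a noticeable rate change.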
Social spike trains: where computational neuroscience and computational social science can meet Naoki Masuda Point process modeling is a strong methodology for understanding spike trains of single and networked neurons. As other presenters in this workshop will address, we can, for example, detect burst firing, predict future spikes, and quantify the information contained in spikes, to a great extent. Since circa 2004, social data regarded as event sequences, like spike trains, have been increasingly accumulated. A main reason for this movement is that various new data taken from social networking services (e.g., Facebook and Twitter) and from humans wearing RFID devices, for example, have become increasingly available. Another important reason is that many statistical physicists have started to analyze such data, similar to the advent of network science in the late 1990s. Exchanges of ideas and tools between computational neuroscience and such computational social science may be beneficial for both sides. In the present talk, I will introduce approaches and achievements that the latter community has made in recent years, and discuss how their data, results, and interests are similar and dissimilar to those in computational neuroscience. This is a position talk, and discussion (throughout the workshop) is very welcome. 
Decoding visual contents from human brain activity Yukiyasu Kamitani ATR Computational Neuroscience Laboratories Objective assessment of mental contents in terms of brain activity represents a major challenge in neuroscience. Despite its widespread use in human brain mapping, functional magnetic resonance imaging (fMRI) has been thought to lack the resolution to probe into putative neural representations of perceptual contents. As a consequence, the potential for reading out mental contents from human brain activity, or ‘neural decoding’, has not been fully explored. In this talk, I present our work on the decoding of fMRI signals based on machine learning-based analysis. I first show that visual features represented in ‘subvoxel’ neural structures can be decoded from population fMRI responses. Decoding of stimulus features is then extended to ‘neural mind-reading’, in which a person's subjective state is predicted. We next discuss how a multivoxel pattern can represent more information than the sum of individual voxels, and how an effective set of voxels for decoding can be selected. A modular decoding approach is presented with an example of visual image reconstruction, in which arbitrary images can be accurately predicted from single-trial fMRI signals. Finally, I present our recent results on dream decoding, in which the semantic contents of visual dreams were predicted from brain activity during sleep. These results show that our machine learning-based approach provides an effective means of reading out complex mental contents from brain activity while discovering information representation in multivoxel patterns. 
Brain machine interface and decoding in mobile environments Shin Ishii Kyoto University / ATR Cognitive Mechanisms Laboratories A brain machine interface (BMI) is a technology to directly connect brains and computers. It has recently emerged owing to progress in neuroscience, signal processing and machine learning, and has been successfully applied mainly in experimental rooms. In the first part, I introduce a statistical method to combine different modalities of brain measurement: EEG (electroencephalogram) and NIRS (near-infrared spectroscopy). Since NIRS has relatively good spatial resolution, its signals were used as prior information for the EEG source localization problem, and the localized EEG signals were used for decoding of brain activities. In the second part, I introduce our collaboration project, called network-based BMI, which is an application of noninvasive BMI techniques to helping elderly and physically handicapped people improve their quality of life. Since they spend much of their time in houses, where they have to move around, possibly in wheelchairs, and turn home appliances on and off by themselves, we have constructed a real environment like a house, called a BMI house, to examine such BMI applicability. The BMI house is equipped with many sensors, such as ultrasonic sensors, laser range finders, cameras and microphones. The subject's brain activities are measured by EEG and NIRS, and the signals are transmitted to computer servers via a wireless network so as to keep the data scalable. The BMI house is also a well-defined real environment; the association of brain activities and sensor signals will provide a large amount of information about cortical activities in natural situations in daily living. If there is some more time, I will also introduce our decoding study of scene prediction in a partially observable decision-making environment. 
Statistical Neurodynamics of Random Networks Shun-ichi Amari RIKEN Brain Science Institute We revisit a classic problem of randomly connected neural and Boolean networks by using a simple model. It is an interesting problem to know the characteristics of the state-transition graph of a random network. It is still unknown how quickly the state of a network falls into its attractors, that is, the length of the transient period. We focus on differences in the state-transition graphs 1) between neural networks and Boolean networks and 2) between sparsely and densely connected networks. We show that the state-transition graph has a characteristic of a scale-free network, in which a small number of states monopolize the incoming branches. This is joint work with Drs. Hiroaki Ando (BSI), Taro Toyoizumi (BSI) and Naoki Masuda (U. Tokyo). 
Simultaneous estimation of state transitions and instantaneous firing rates using a switching state space model Ken Takiyama JSPS SPD / Tamagawa University We propose an algorithm that can simultaneously estimate state transitions and instantaneous firing rates using a switching state space model. External or internal events, e.g., visual stimuli or attention, cause state transitions, i.e., firing rates change discontinuously at the onset of those events. Not only firing rates but also state transitions thus carry information about what the recorded neurons encode. Previous statistical methods have the following disadvantages: a hidden Markov model can estimate state transitions but cannot estimate instantaneous firing rates, while a state space model can estimate instantaneous firing rates but cannot estimate state transitions. We therefore propose an algorithm that estimates both simultaneously. In synthetic data analysis, we confirmed that our method achieves higher estimation accuracy than previous methods, both in firing rate estimation and in state transition estimation. Furthermore, spike data from the medial temporal area include a state transition that can be estimated only by our method (figure (b)). This transition is a rapid change in the temporal correlation of the firing rates, a correlation that can be estimated only when instantaneous firing rates are estimated. Simultaneous estimation by our method is thus suggested to be important for the analysis of spike data. 
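For readers unfamiliar with discrete-state models of firing rates, the following sketch decodes a two-state Poisson hidden Markov model with the Viterbi algorithm (an editor's illustration of plain HMM segmentation, not the proposed switching algorithm; all rates and the synthetic data are hypothetical):

```python
import numpy as np

def viterbi_poisson_hmm(counts, rates, p_stay=0.98):
    """Most likely state sequence of a 2-state HMM whose emissions are
    Poisson spike counts per bin, with expected counts `rates`."""
    counts = np.asarray(counts)
    rates = np.asarray(rates, dtype=float)
    loglik = counts[:, None] * np.log(rates) - rates   # log P(count|state), const dropped
    log_trans = np.log(np.array([[p_stay, 1 - p_stay],
                                 [1 - p_stay, p_stay]]))
    T = len(counts)
    score = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    score[0] = np.log(0.5) + loglik[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans       # cand[i, j]: from state i to j
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + loglik[t]
    states = np.empty(T, dtype=int)
    states[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):                     # backtrack
        states[t] = back[t + 1, states[t + 1]]
    return states

# synthetic counts whose underlying rate jumps from 2 to 10 per bin
rng = np.random.default_rng(3)
counts = np.concatenate([rng.poisson(2, 200), rng.poisson(10, 200)])
states = viterbi_poisson_hmm(counts, rates=[2.0, 10.0])
```

The decoded sequence recovers the transition near bin 200; the abstract's point is that such a model yields only the discrete states, whereas a continuous state space model yields only smooth rates, motivating the combined approach.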
Inference of synaptic connections from multiple spike train data based on a coupled escape rate model Ryota Kobayashi, Katsunori Kitano We developed a coupled escape rate model (CERM) (Kobayashi & Kitano, J. Comput. Neurosci. 2013) to infer synaptic connections from multiple neural spike train data. Synaptic strengths were determined by maximizing the likelihood of the observed spike trains. We applied this method, as well as model-free methods, i.e., transfer entropy (TE) and cross-correlation (CC) (Garofalo et al., PLoS One 2009), to simulated multineuronal activity generated by a previously proposed cortical network model consisting of thousands of biophysically detailed neurons (Kitano & Fukai, J. Comput. Neurosci., 2007). Using simulated data enables us to verify these methods by comparing the inferred synaptic connections with the true ones defined in the model. We also applied these methods to spike data generated by the cortical network model with different topologies of synaptic connectivity (regular, small-world and random). Our results indicate that all methods perform better for highly clustered (regular and small-world) networks than for random networks. Compared with the model-free methods, CERM performs better, especially in non-regular networks. Overall, the CERM method is the most suitable for inferring synaptic connections from multineuronal spike activity, although it involves a high computational cost. 
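A minimal version of the cross-correlation (CC) idea can be sketched as follows (an editor's illustration; the simulated "synapse" and all parameters are hypothetical, and this is far simpler than either CERM or the cited CC method):

```python
import numpy as np

def xcorr_at(pre, post, k):
    """Number of (pre spike at t, post spike at t+k) coincidences."""
    if k > 0:
        return np.dot(pre[:-k], post[k:])
    if k < 0:
        return np.dot(pre[-k:], post[:k])
    return np.dot(pre, post)

def connection_zscore(pre, post, syn_lags=(1, 2, 3), window=50):
    """z-score of the largest short-lag cross-correlogram bin (pre
    leading post) against the surrounding lag window."""
    other = [k for k in range(-window, window + 1) if k not in syn_lags]
    baseline = np.array([xcorr_at(pre, post, k) for k in other])
    peak = max(xcorr_at(pre, post, k) for k in syn_lags)
    return (peak - baseline.mean()) / baseline.std()

rng = np.random.default_rng(4)
T = 200_000                                   # e.g. 1 ms bins
pre = (rng.random(T) < 0.02).astype(float)
post = (rng.random(T) < 0.01).astype(float)
# excitatory "synapse": a pre spike adds a post spike 2 bins later w.p. 0.3
driven = np.roll(pre, 2) * (rng.random(T) < 0.3)
post_conn = np.clip(post + driven, 0, 1)
z_conn = connection_zscore(pre, post_conn)    # large: connection detected
z_none = connection_zscore(pre, post)         # small: no connection
print(z_conn, z_none)
```

Model-based approaches like CERM go beyond this by explaining each spike through the joint influence of all recorded neurons, which is what improves performance when correlations are not simply pairwise.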
Surrogate Data via Multicanonical Monte Carlo Yukito Iba The Institute of Statistical Mathematics, Tokyo, Japan Recent trends in neural spike analysis are to use state space modeling and hierarchical Bayesian methods; they provide a flexible way to extract information hidden in data. On the other hand, many analyses still utilize methods based on surrogate data, where the original data are compared to "randomized" artificial data that preserve the values of a given set of statistics. These methods can be regarded as statistical hypothesis testing, in which the null hypothesis corresponds to a uniform distribution on the set defined by the condition that the values of the given set of statistics are equal to those of the original data. The key to these surrogate methods is how to generate sets of surrogate data with the necessary diversity; historically, many clever techniques have been developed for randomizing data without changing the values of the given statistics (Schreiber T and Schmitz A (2000)). Schreiber (1998) introduced the idea of solving the constraint satisfaction problem with a generic tool such as simulated annealing; it provides an almost automatic way to generate surrogate data without case-by-case techniques. Nevertheless, there is no theoretical justification that repeated use of simulated annealing can generate unbiased samples from the null distribution. In this talk, a novel scheme for generating surrogate data is introduced; we follow Schreiber's approach but utilize multicanonical Monte Carlo (Berg and Neuhaus (1992)) instead of simulated annealing. The advantage of multicanonical Monte Carlo is that it is designed to generate a number of samples unbiasedly from a given distribution. Here we implement the proposed method for a simple example of surrogate time series generation and show how it works. See Iba (2013) for details, which also provides a concise introduction to multicanonical Monte Carlo and related methods. 
Berg BA and Neuhaus T (1992) Multicanonical ensemble: A new approach to simulate first-order phase transitions. Physical Review Letters 68(1):9–12. 
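For orientation, the simplest classical surrogate of this kind, the interspike-interval shuffle, preserves the ISI distribution exactly while destroying serial order (an editor's sketch; the talk's multicanonical approach handles constraints for which no such direct construction exists):

```python
import numpy as np

def isi_shuffle(spike_times, rng):
    """Surrogate spike train that preserves the interspike-interval
    distribution exactly (and hence rate, CV, ...) while destroying
    the serial order of the intervals."""
    t = np.sort(np.asarray(spike_times, dtype=float))
    isi = np.diff(t)
    return t[0] + np.concatenate([[0.0], np.cumsum(rng.permutation(isi))])

rng = np.random.default_rng(5)
spikes = np.cumsum(rng.exponential(0.1, size=500))
surr = isi_shuffle(spikes, rng)
# same ISI multiset, same spike count, same first and last spike time
print(np.allclose(np.sort(np.diff(surr)), np.sort(np.diff(spikes))))
```

Constructions like this exist only for special constraint sets; when the conserved statistics are more complicated, one must sample the constraint set directly, which is where simulated annealing and, as argued above, multicanonical Monte Carlo come in.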
Contribution of functional connections in neuronal firing activities Toshiaki Shintani1, Shigeru Shinomoto1 1.Kyoto University The short-time modulation in the cross-correlation of neuronal firing is called a functional connection, and is considered to reflect a physical connection between neurons. A peak or trough delayed by a few milliseconds in the cross-correlation of spike trains is presumed to indicate the potential presence of an excitatory or inhibitory synaptic connection, respectively. A relatively broader peak in the cross-correlation centered at zero time delay may represent a situation in which the neurons receive common inputs from others. We are interested in whether the presence of these functional connections is related to correlated neuronal activity on a timescale of a few hundred milliseconds. We measure the neuronal activity with a two-state hidden Markov model. We will report the results of the analysis at the workshop. 
Using a V1 model to model A1: Emergence of maps and "complex cells" from natural sound statistics Hiroki Terashima Department of Complexity Science and Engineering, The University of Tokyo / Japan Society for the Promotion of Science Complex cells were found in the visual cortex more than fifty years ago. The concept of nonlinear responses with phase invariance has been firmly established and successfully modelled as an adaptation to natural image statistics [1]. However, analogous discussions have been lacking in other modalities, despite the anatomical uniformity across sensory cortices. A kind of universality in sensory areas has been suggested by successful applications of models for the visual cortex to the auditory cortex [2, 3], although these studies addressed only linear receptive fields. In the present study, we applied a nonlinear model of visual complex cells [1] to natural sounds; receptive fields of the resulting auditory "complex cells" typically had multiple peaks at harmonic frequencies [4]. Moreover, some of them resembled the pitch cells that were recently found in the auditory cortex and respond nonlinearly to harmonic complex tones in a way similar to a psychoacoustic phenomenon called the "missing fundamental" [5]. The result suggests that the pitch cells might be analogous to the complex cells [6]: the visual complex cells are phase-invariant, whereas the pitch cells are invariant under a spectral transformation with a constant pitch. 
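The classical energy model of a visual complex cell referred to above can be sketched in a few lines: squared outputs of a quadrature pair of Gabor filters are summed, giving a response invariant to stimulus phase (an editor's illustration with hypothetical 1-D filters; the study itself learns such units from natural sounds):

```python
import numpy as np

# Energy model of a complex cell: squared responses of a quadrature pair
# of (here 1-D, hypothetical) Gabor filters are summed, which makes the
# output invariant to the phase of a grating at the preferred frequency.
x = np.linspace(-3, 3, 256)
freq = 2.0
envelope = np.exp(-x ** 2)
gabor_even = envelope * np.cos(2 * np.pi * freq * x)
gabor_odd = envelope * np.sin(2 * np.pi * freq * x)

def complex_cell(stimulus):
    return np.dot(gabor_even, stimulus) ** 2 + np.dot(gabor_odd, stimulus) ** 2

resp = [complex_cell(np.cos(2 * np.pi * freq * x + ph))
        for ph in np.linspace(0, np.pi, 8)]       # preferred freq, varying phase
off = complex_cell(np.cos(2 * np.pi * 0.3 * freq * x))  # non-preferred freq
print(max(resp) / min(resp))   # ~1: phase-invariant
```

The analogy drawn in the abstract is that an auditory "complex cell" would pool quadrature-like spectro-temporal filters in the same way, yielding invariance to a pitch-preserving spectral transformation rather than to spatial phase.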
Alternative modes in neuronal code Yasuhiro Mochizuki1, Shigeru Shinomoto1 1.Kyoto University The occurrences of neuronal discharges, called spikes or firings, are generally influenced by external stimuli, but not every aspect of in vivo firing is determined extrinsically. For instance, it has been revealed that the firing irregularity, as represented by the variation of consecutive interspike intervals, is rather constant for any given neuron, independent of stimulus conditions, while it differs among neurons and in particular between cortical areas [1]. Here, we explore another intrinsic characteristic in the temporal structure of spike trains longer than two consecutive intervals, by paying attention to whether the degree of freedom in the firing rate is limited or not. In this respect, it is worth noting that the information transmission rate can be increased by limiting the firing rate to discrete states [2]. It has also been argued that the preferable distribution of firing rates undergoes a phase transition from continuous to binary when the width of the time window is decreased [3]. Specifically, we suggest analyzing every spike train to determine whether the firing rate undergoes continuous modulation or exhibits transitions between discrete states. We select one hypothesis by comparing the EBM and the HMM in their suitability to account for a given spike train, regarding them as representing the extreme hypotheses of continuous and discrete rate processes, respectively. The effectiveness of the selection procedure is tested using synthetic data generated by doubly stochastic Poisson processes whose rates are given by the continuous OUP and the discontinuous SSP. Finally, the suggested analysis is applied to publicly available spike data recorded from the visual cortical areas, primary visual cortex (V1) and middle temporal area (MT), and the thalamus, the lateral geniculate nucleus (LGN), of monkeys.
[1] S. Shinomoto et al. PLoS Comput Biol, 5(7):e1000433, 2009.
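The two rate modes compared in this abstract can be made concrete with a doubly stochastic Poisson simulation (an editor's sketch; all parameters are hypothetical): a continuous Ornstein-Uhlenbeck rate versus a discrete telegraph-type switching rate, each driving Poisson spiking.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.001, 50.0                      # 1 ms steps, 50 s
n = int(T / dt)

# (a) continuous rate: Ornstein-Uhlenbeck process around 20 Hz
tau, mean_rate, sigma = 0.5, 20.0, 8.0
rate_ou = np.empty(n)
rate_ou[0] = mean_rate
for t in range(1, n):
    rate_ou[t] = (rate_ou[t - 1] + dt / tau * (mean_rate - rate_ou[t - 1])
                  + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal())
rate_ou = np.clip(rate_ou, 0.0, None)

# (b) discrete rate: telegraph process switching between 5 Hz and 35 Hz
switches = rng.random(n) < dt / 1.0      # mean dwell time ~1 s
state = np.cumsum(switches) % 2
rate_sw = np.where(state == 0, 5.0, 35.0)

# doubly stochastic Poisson spikes driven by either rate
spikes_ou = rng.random(n) < rate_ou * dt
spikes_sw = rng.random(n) < rate_sw * dt
print(spikes_ou.sum(), spikes_sw.sum())  # both average ~20 Hz
```

Both trains have the same mean rate, so distinguishing them requires model comparison on the full spike sequence rather than counts, which is the selection problem the abstract addresses.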

MEG source reconstruction with whole brain network dynamics based on the architecture of anatomical connectivity Makoto Fukushima1,2, *Okito Yamashita1 (oyamashi@atr.jp) 1.NAIST 2.ATR Magnetoencephalography (MEG) is a noninvasive brain measurement technique with millisecond-order temporal resolution. The signal origin of MEG is the postsynaptic potentials generated by millions of synapses of pyramidal cells. Unlike unit recordings, the measurements cover whole brain regions (although with very low spatial resolution). Thus MEG can be useful for elucidating the large-scale network dynamics of the human brain at behaviorally relevant timescales. In this study, we propose a novel dynamical source reconstruction method to identify whole brain network dynamics (under some stimulus). We model the network dynamics by a multivariate linear AR (MAR) model whose network structure follows anatomical connections obtained from diffusion MRI. Each element of the MAR coefficient matrix (in our example, 20% of 2000*2000 = 4,000,000 elements, which are specified by anatomical connections) represents the effective connectivity between the corresponding cortical sites. A variational Bayesian method is used to estimate the MAR parameters as well as the current sources. From the estimates, we can learn how the current sources communicate with each other as well as which regions are activated. We performed a simulation analysis to confirm the validity of the proposed method. 
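A toy version of the anatomically constrained MAR model can be sketched as follows (an editor's illustration with a 20-node network instead of ~2000 cortical sources, and ordinary least squares instead of the variational Bayesian estimation used in the study):

```python
import numpy as np

rng = np.random.default_rng(7)
n_src = 20                                 # tiny stand-in for ~2000 sources
mask = rng.random((n_src, n_src)) < 0.2    # "anatomical" sparsity mask (~20%)
A = np.where(mask, rng.normal(0.0, 0.2, (n_src, n_src)), 0.0)
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))   # rescale for stability

# simulate x_t = A x_{t-1} + noise
T = 2000
x = np.zeros((T, n_src))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(0.0, 1.0, n_src)

# ordinary least squares recovery of the coefficient matrix
# (the study instead uses variational Bayes, with mask as prior structure)
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
corr = np.corrcoef(A.ravel(), A_hat.ravel())[0, 1]
print(corr)   # close to 1
```

In the full problem the sources are not observed directly but only through the MEG lead field, which is why source currents and MAR coefficients must be estimated jointly.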
A statistical approach for identifying glia-neuron networks Ken Nakae Graduate School of Informatics, Kyoto University Information processing in the brain largely depends on neural communication through synaptic connections. Recently, many researchers have reported that synaptic and neuronal activities are dynamically modulated by the activity of glial cells, astrocytes that envelop neighboring synapses. In these studies, artificial stimulation of a glial cell results in exocytosis of gliotransmitter from the cell, which modulates postsynaptic currents or increases neuronal excitability. This indicates that glia-neuron networks are necessary for understanding information processing in the brain. However, the glial effects under natural conditions have been controversial because of the artificial stimulation used in these experiments. To discuss these effects under natural conditions, we propose a statistical method for estimating both synaptic connectivity and glial modulation from observations of glial and neuronal activities in the form of calcium imaging data. In this study, we show that glia-neuron networks are successfully reconstructed from artificial data generated by a computational model that is different from our generative model. Applying our method to calcium imaging data, we identify a glia-neuron network under a natural condition, in which neuronal and glial activities are spontaneous. We compare prediction errors between our method and a conventional method in which glial activities are not considered. This comparison suggests that each glial cell's activity selectively affects the firing rates of neurons, which is consistent with previous experimental results, and that synaptic modulations from glial activity are rare but strong. 
A Bayesian method for uncovering interactions in neural oscillator networks based on reduction theory of dynamical systems Kaiichiro Ota, Toshio Aoyagi Graduate School of Informatics, Kyoto University and JST CREST We propose a statistical method for estimating a simple mathematical model for networks of neural oscillators, called a phase model, from partially observed data. The estimated phase model reproduces the dynamics of the original network, and can be used as a tool for quantitative analyses of the network dynamics. Various rhythms in brains, such as hippocampal theta rhythms and EEG oscillations, have been discovered. Such rhythms typically exhibit collective spatiotemporal activity patterns like synchronization. However, quantitative understanding of the underlying dynamics is still lacking. This is because we cannot observe all the states of the neural system, but at most hundreds of them, while the system has extremely many degrees of freedom. In other words, without any assumption restricting potential models of the system, model estimation would be infeasible due to the curse of dimensionality. Using the phase reduction theory for dynamical systems exhibiting rhythms, the number of model parameters to be estimated can be greatly reduced, because the possible forms of the model equations can be theoretically restricted to those of a phase model. In our proposed method, we assume that only a set of oscillatory time series can be observed and used as data for the estimation. The data are first transformed into time series of phase variables. Then, by solving a regression problem, the parameters of the phase model are estimated. It is confirmed in numerical experiments that the estimated model equations agree well with the theoretical ones.
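The regression step can be illustrated with two noisily coupled phase oscillators (an editor's sketch; a single sinusoidal coupling term stands in for the general Fourier basis, and all parameters are hypothetical):

```python
import numpy as np

# Simulate two coupled phase oscillators (Euler-Maruyama), then recover
# the natural frequency and coupling strength of oscillator 0 by linear
# regression of its phase velocity on sin(phase difference).
rng = np.random.default_rng(8)
dt, T = 0.01, 5000
w = np.array([1.0, 1.3])          # natural frequencies (rad/s)
a = 0.1                           # coupling strength
phi = np.zeros((T, 2))
for t in range(1, T):
    dphi = phi[t - 1, ::-1] - phi[t - 1]     # phase difference seen by each
    phi[t] = (phi[t - 1] + dt * (w + a * np.sin(dphi))
              + np.sqrt(dt) * 0.02 * rng.standard_normal(2))

# regression for oscillator 0: dphi0/dt ≈ w0 + a*sin(phi1 - phi0)
v = np.diff(phi[:, 0]) / dt
X = np.column_stack([np.ones(T - 1), np.sin(phi[:-1, 1] - phi[:-1, 0])])
w0_hat, a_hat = np.linalg.lstsq(X, v, rcond=None)[0]
print(w0_hat, a_hat)              # recovers ~1.0 and ~0.1
```

Phase reduction is what licenses the regressor choice: whatever the underlying high-dimensional dynamics, weakly coupled oscillators obey equations of exactly this form, so only a handful of Fourier coefficients of the coupling function need to be estimated.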

A General Framework for Learning of Cortical Dynamics Takashi Hayakawa1,3, Takeshi Kaneko1, Toshio Aoyagi2,3 1.Department of Medicine, Kyoto University 2.Department of Informatics, Kyoto University 3.JST CREST Although a variety of characteristic firing profiles in the mammalian cerebral cortex have been reported in experimental studies, their functional roles and underlying learning mechanisms remain to be understood. In this talk, we introduce a theoretical framework of learning on recurrent neural networks, which relates an arbitrary objective function, a biologically implementable learning rule, and the resulting firing profiles of neurons. In numerical simulations based on this framework, we succeeded in reproducing different firing profiles of cortical neurons, such as orientation selectivity, neuronal avalanches, repeated precise firing sequences and persistent firing, as consequences of learning according to biologically implementable learning rules. Hence we believe that our framework can provide a useful tool for a unified understanding of different aspects of cortical circuits. 
The effect of lateral inhibition on higher-order correlations Yasuhiko Igarashi1 and Masato Okada1,2 1.Graduate School of Frontier Sciences, University of Tokyo, Chiba, Japan 2.RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Japan It is widely acknowledged that dependencies among cells determine the detailed nature of a neural population code, namely, the manner in which information is represented by specific patterns of spiking and silence over a group of neurons. Ko et al. have reported that connectivity between neighbouring neurons is specifically structured, which affects firing rates and neural correlations [1]. It would appear that these structured neural connectivities in V1 also affect the structure of higher-order correlations in neuronal firing. Here, we expanded the previous theoretical framework to higher-order correlations in a parsimonious structured network with common inputs and spiking nonlinearities as a model of orientation selectivity [2]. We found that inhomogeneous mean inputs modulate the spiking nonlinearity, resulting in structured higher-order correlations, and that the heterogeneous structure of the network can dynamically control the structure of third-order correlations and can generate both sparse and synchronized neural activity [3,4]. We also propose a decisive experiment to test the effect of inhomogeneous connectivity on higher-order correlations.
[1] H. Ko et al., Nature, 473(7028), 868 (2011). 
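The thresholded-Gaussian mechanism for generating higher-order correlations from common inputs can be sketched as follows (an editor's illustration of a dichotomized-Gaussian-style model; the correlation and threshold values are hypothetical):

```python
import numpy as np

# Threshold correlated Gaussian inputs to get binary spike patterns;
# shared (common) input plus the spiking nonlinearity produces
# correlations of all orders, not just pairwise.
rng = np.random.default_rng(9)
n_neurons, n_trials, rho = 3, 200_000, 0.3
common = rng.standard_normal(n_trials)                # shared input
private = rng.standard_normal((n_trials, n_neurons))  # independent inputs
u = np.sqrt(rho) * common[:, None] + np.sqrt(1 - rho) * private
spikes = (u > 1.0).astype(int)                        # spiking nonlinearity

p_single = spikes.mean(axis=0)                        # single-neuron rates
p_triple = np.all(spikes == 1, axis=1).mean()         # joint "111" rate
print(p_triple, p_single.prod())   # joint rate >> independent prediction
```

Shifting the mean input (as inhomogeneous lateral inhibition would) moves the operating point on the threshold nonlinearity and thereby reshapes the excess triple-wise synchrony, which is the kind of effect the abstract analyzes.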
Characterizing neural spiking dynamics using point process adaptive filtering Uri Eden We develop a framework that combines conductance-based neural modeling with point process statistical theory to estimate model components directly from a set of observed spike times. We construct estimation algorithms using sequential Monte Carlo (particle filter) methods that combine future and past spiking information to update a collection of model realizations that are consistent with the observed spiking data. 
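A bootstrap particle filter of this kind can be sketched for the simplest case, a latent log firing rate following a random walk and observed through Poisson bin counts (an editor's illustration, not the author's conductance-based formulation; all parameters are hypothetical):

```python
import numpy as np

def particle_filter_rate(counts, dt=0.01, n_particles=2000,
                         sigma=0.05, rng=None):
    """Bootstrap particle filter: a latent log firing rate follows a
    random walk and is observed through Poisson spike counts per bin."""
    rng = rng if rng is not None else np.random.default_rng()
    x = rng.normal(np.log(10.0), 1.0, n_particles)        # log-rate particles
    est = np.empty(len(counts))
    for t, c in enumerate(counts):
        x = x + sigma * rng.standard_normal(n_particles)  # propagate
        lam = np.exp(x) * dt
        logw = c * np.log(lam) - lam                      # Poisson log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est[t] = np.dot(w, np.exp(x))                     # posterior mean rate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
    return est

rng = np.random.default_rng(10)
true_rate = np.concatenate([np.full(300, 10.0), np.full(300, 40.0)])
counts = rng.poisson(true_rate * 0.01)
est = particle_filter_rate(counts, rng=rng)
print(est[250:300].mean(), est[-50:].mean())   # tracks ~10 Hz, then ~40 Hz
```

The framework described above replaces this toy random walk with conductance-based model dynamics, and additionally smooths using future spiking information rather than filtering forward only.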
Title Patrick Purdon Massachusetts General Hospital, Harvard Medical School I will discuss the neurophysiological dynamics of loss and recovery of consciousness under propofol anesthesia. This talk will cover results from diverse experiments, including high-density electroencephalography, human single-unit recordings, and routine electroencephalogram monitoring in the operating room. 
Interactions between behaviorally relevant rhythms and synaptic plasticity alter coding in the piriform cortex Anne-Marie Oswald University of Pittsburgh We combine experiments and modeling to investigate the influence of short-term plasticity at afferent synapses on the spike output of piriform cortex pyramidal cells. We find that short-term depression enhances neural coding of sniff-frequency inputs but not passive breathing inputs. 
Statistical evaluation of synchronous spike patterns extracted by Frequent Item Set Mining Sonja Gruen Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains (Picado-Muiño et al. (2013) Front. Neuroinform. 7:9. doi: 10.3389/fninf.2013.00009). This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any subpattern (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we propose to test the significance not of individual patterns, but of patterns with the same signature, i.e., with the same pattern size z and support c. We derived a statistical test with the null hypothesis of full independence (pattern spectrum filtering, PSF) by comparing the pattern spectrum to the significance spectrum obtained from surrogate data (Torre et al., submitted). As a result, injected spike patterns that mimic assembly activity are well detected and yield a low false-negative rate. However, this approach is prone to additionally classifying as significant patterns that result from chance overlap of real assembly activity and background spiking. With respect to the null hypothesis of one assembly of a given signature embedded in otherwise independent spiking activity, these constitute a special type of false positive (FP). We propose the additional method of pattern set reduction (PSR) to remove these FPs by conditional filtering. 
By employing stochastic simulations of parallel spike trains with correlated activity in the form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR reliably detects active assemblies in massively parallel spike trains. 
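The pattern-spectrum-filtering idea can be sketched in a toy form. For brevity this counts only the full set of neurons active in each bin, rather than running a complete frequent-itemset search as in the cited method; rates, sizes, and the injected assembly are hypothetical.

```python
import numpy as np
from collections import Counter

# Minimal PSF illustration: compare the (pattern size z, support c)
# spectrum of the data against surrogates obtained by shuffling spike
# times independently per neuron, then keep signatures absent from the null.
rng = np.random.default_rng(1)

def spectrum(binary, min_size=2):
    """Count how often each synchronous-pattern signature (z, c) occurs."""
    pats = Counter()
    for col in binary.T:
        active = tuple(np.flatnonzero(col))
        if len(active) >= min_size:
            pats[active] += 1
    sig = Counter()
    for pat, c in pats.items():
        sig[(len(pat), c)] += 1
    return sig

N, B = 20, 2000
data = (rng.random((N, B)) < 0.02).astype(int)
data[:4, rng.choice(B, 15, replace=False)] = 1   # inject a synchronous assembly

# Null spectrum: pool signatures from independently shuffled surrogates.
null = Counter()
for _ in range(20):
    surr = np.stack([rng.permutation(row) for row in data])
    null.update(spectrum(surr))

significant = {s for s in spectrum(data) if s not in null}
```

The injected four-neuron assembly produces a size-4 signature with high support that the independent-spiking surrogates never generate, so it survives the filter.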
Large-scale spatiotemporal spike patterning associated with wave propagation Sanggyun Kim University of California, San Diego I will consider a problem within the context of dynamic neural signal processing: time-varying causal inference via prediction with expert advice, and its application to multiple neural spike data. 
Cooperative dynamics in simple neural circuits Eric Shea-Brown University of Washington How does connectivity impact network dynamics? We address this question by linking network characteristics on two scales. On the global scale we consider the coherence of overall network dynamics. We show that this can often be predicted from the local "motif" structure of the network. 
A sparse common input model for population neural activity Ian Stevenson A central challenge in systems neuroscience is understanding how network structure affects spiking in populations of neurons. This problem is made difficult by the fact that, in most electrophysiological experiments, the vast majority of neurons in a network are unobserved. Here we introduce a technique for modeling the effects of hidden populations of neurons using convolutional sparse coding. We present methods for inference and learning in this framework and compare our sparse common input model to previously proposed models based on slowly varying, Gaussian common input. 
Spatial variations in the hippocampal theta rhythm encode rat position Gautam Agarwal, Antal Berenyi, Kenji Mizuseki, Ian Stevenson, Gyorgy Buzsaki, Friedrich Sommer University of California, Berkeley
The theta rhythm is an ~8 Hz oscillation in the hippocampus that mirrors the timing and coordination of large groups of neurons. Its structure varies richly in both space and time. To identify putative sources responsible for this variation, we perform temporal ICA on the analytic (complex-valued) representation of the theta-band oscillation recorded using an electrode array implanted in subregion CA1. This analysis reveals a population of sparse components, each with a characteristic spatial amplitude-phase relationship distributed across the electrode array. We find that many of these components are activated in a place- and direction-selective manner; as a rat runs down a linear track, the components transiently activate in a specific sequence. More specifically, the set of "place components" covers the entire track. This observation resembles the known response properties of CA1 pyramidal cells, which also activate in a place-specific manner, together covering the full track. However, unlike place cells, the sparse components in the theta band are exclusively active at single positions, tile the track very uniformly, and contribute to voltage signals across the entire electrode array. Simulations suggest that the place components in the LFP 
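The analytic-signal preprocessing described in this abstract can be sketched on synthetic data. This is a minimal example assuming SciPy; the channel count, sampling rate, band edges, and phase gradient are all hypothetical, and the subsequent complex-valued temporal ICA step is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Band-pass each channel around theta (~8 Hz) and take the analytic
# (complex-valued) signal, whose per-channel amplitude and phase would then
# feed a complex-valued temporal ICA.
fs = 1250.0                                   # sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
n_ch = 8
phase_grad = np.linspace(0, np.pi / 2, n_ch)  # phase offset across the array
lfp = np.sin(2 * np.pi * 8.0 * t[None, :] + phase_grad[:, None])
lfp += 0.3 * np.random.default_rng(2).standard_normal(lfp.shape)

b, a = butter(3, [6.0 / (fs / 2), 10.0 / (fs / 2)], btype="band")
theta = filtfilt(b, a, lfp, axis=1)           # theta-band signal per channel
analytic = hilbert(theta, axis=1)             # complex-valued representation
amplitude = np.abs(analytic)                  # instantaneous amplitude
phase = np.angle(analytic)                    # instantaneous phase
```

The spatial phase gradient planted in the synthetic LFP survives in the analytic representation, which is exactly the structure the ICA components characterize.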
Missing Mass Approximations for the Partition Function of Stimulus Driven Ising Models Robert Haslinger Massachusetts General Hospital, Harvard University A challenge for studying dynamic network function has been to develop statistical population models that simultaneously account for stimulus drive, the population's previous history, and also dependencies between neurons in the same time bin. Generalized Linear Models account for the first two of these but, since they make a conditional independence assumption, do not capture same-time-bin interactions. Ising models capture same-time-bin interactions, but not the influence of time-varying stimulus drive or prior history. The reason is that Ising models require the calculation of a normalization constant, or "partition function", which involves a sum over all 2^N possible spike patterns. If time-varying stimuli are included, the partition function itself becomes stimulus dependent and must be separately calculated for all unique stimuli observed. This potentially increases computation time by the length of the data set. Here we remedy this problem and demonstrate that stimuli, previous history and same-bin correlations can all be included in the same model framework. First, we show that stimulus-driven Ising models can easily be fit via pseudolikelihood methods, using the exact same mathematical framework developed for fitting Generalized Linear Models. Second, we show that the partition function for such models can be easily and quickly (in seconds) approximated via a missing mass calculation. Noting that the most probable spike patterns (which are few) occur in the training data, we sum partition function terms corresponding to those patterns explicitly. We then approximate the sum over the remaining patterns (which are improbable, but many) by casting it in terms of the stimulus-modulated missing mass (the total stimulus-dependent probability of all patterns not observed in the training data). 
We use a product of conditioned logistic regression models to approximate the stimulus-modulated missing mass. This method has complexity of roughly O(L N N_{pat}), where L is the data length, N the number of neurons, and N_{pat} the number of unique patterns in the data, contrasting with the O(L 2^N) complexity of alternate methods. Using multiple-unit recordings from rat hippocampus, macaque DLPFC and cat Area 18, we demonstrate that our method requires orders of magnitude less computation time than Monte Carlo methods and can approximate the stimulus-driven partition function more accurately than either Monte Carlo methods or mean-field theories. This advance allows the simultaneous modeling of stimuli, past history and same-time-bin interactions, and an integrated approach for studying the dynamics of population-based stimulus encoding. 
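The premise that a few high-probability patterns absorb most of the partition function can be checked directly on a toy Ising model small enough to enumerate. This is an illustration of the premise only, not of the authors' missing-mass estimator; couplings and sizes are hypothetical.

```python
import itertools
import numpy as np

# For N = 10 we can enumerate all 2^N spike patterns, compute the exact
# partition function Z, and measure what fraction of Z is covered by
# patterns actually "observed" in a finite training sample.
rng = np.random.default_rng(3)
N = 10
h = rng.normal(-1.0, 0.5, N)               # biases favouring silence
J = rng.normal(0.0, 0.3, (N, N))
J = np.triu(J + J.T, k=1)                  # symmetric couplings, i < j

def weight(x):
    """Unnormalized Gibbs weight exp(h.x + sum_{i<j} J_ij x_i x_j)."""
    return np.exp(h @ x + x @ J @ x)

patterns = np.array(list(itertools.product([0, 1], repeat=N)))
weights = np.array([weight(x) for x in patterns])
Z_exact = weights.sum()

# "Observe" 2000 patterns by sampling from the model, then sum the
# partition-function terms of the observed patterns explicitly.
probs = weights / Z_exact
idx = rng.choice(len(patterns), size=2000, p=probs)
observed = np.unique(idx)
coverage = weights[observed].sum() / Z_exact   # fraction of Z seen in training
```

Only the small residual 1 - coverage, the missing mass, needs to be approximated, which is what makes the explicit-sum-plus-correction strategy fast.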
Efficient coding, high-order natural scene statistics, and computations in visual cortex Jonathan Victor Weill Cornell Medical College Several decades of work have suggested that Barlow's principle of efficient coding is a powerful framework for understanding retinal design principles. Whether a similar notion extends to cortical visual processing is less clear, as there is no "bottleneck" comparable to the optic nerve, and much redundancy has already been removed by the retina. Here, we present convergent psychophysical and physiological evidence that regularities of high-order image statistics are indeed exploited by central visual processing, and at a surprising level of detail. In a recent study of natural image statistics (Tkačik et al., 2010), we showed that high-order correlations in certain specific spatial configurations are informative, while high-order correlations in other spatial configurations are not: they can be accurately guessed from lower-order ones. We then construct artificial images (visual textures) composed either of informative or uninformative correlations. We find that informative high-order correlations are visually salient, while the uninformative correlations are nearly imperceptible. Physiological studies in macaque visual cortex identify the locus of the underlying computations. First, neuronal responses in macaque V1 and V2 mirror the psychophysical findings, in that many neurons respond differentially to the informative statistics, while few respond to the uninformative ones. Moreover, the differential responses largely arise in the supragranular layers, indicating that the computations are the result of intracortical processing. We then consider low- and high-order local image statistics together, and apply a dimension reduction (binarization) to cast them into a 10-dimensional space. We determine the perceptual isodiscrimination surfaces within this space. 
These are well-approximated by ellipsoids, and the principal axes of the ellipsoids correspond to the distribution of the local statistics in natural images. Interestingly, this correspondence differs in specific ways from the predictions of a model that implements efficient coding in an unrestricted manner. I suggest that these deviations provide insights into the computational mechanisms that underlie the representation of image statistics. 
Thermodynamics of Prediction Susanna Still University of Hawaii Biological computing systems face not only the challenge of processing information efficiently, but also the challenge of energetic efficiency, particularly when many very small units are packed tightly so that heat dissipation becomes an issue. I will discuss the thermodynamic cost of predictive inference and derive new bounds on both dissipation and heat production for systems driven arbitrarily far from equilibrium. There is a fundamental equivalence between thermodynamic inefficiency, measured by dissipation, and information processing inefficiency, measured by nonpredictive information. We interpret the dynamics of a system which responds to a stochastic environmental signal as computing an implicit model of this driving signal. The system's state retains information about past environmental fluctuations, and a fraction of this information is predictive of future fluctuations. The remaining nonpredictive information reflects model complexity that does not improve predictive power, and thus represents the inefficiency of the model. We find that instantaneous nonpredictive information: 1) is proportional to the work dissipated during an environmental change; 2) provides a lower bound on the overall dissipation; 3) augments the lower bound on heat generated due to information erasure (Landauer's principle). Our results hold far from thermodynamic equilibrium and are thus applicable to a wide range of systems, including biomolecular machines, neurons, and potential future nanocomputing devices. These results highlight a profound connection between the effective use of information and efficient thermodynamic operation: any system constructed to keep memory about its environment, and to operate with maximal energetic efficiency, has to be predictive. http://prl.aps.org/abstract/PRL/v109/i12/e120604 
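The central relation of the linked paper can be written compactly; the notation below is adapted from the abstract's description (s the system state, x the environmental signal, β = 1/k_B T) and should be checked against the original:

```latex
% Instantaneous nonpredictive information equals dissipated work in units
% of k_B T: memory about the past that fails to predict the future is paid
% for energetically.
\beta \,\langle W_{\mathrm{diss}}(t) \rangle
  \;=\; I[s_t ; x_t] \;-\; I[s_t ; x_{t+1}]
  \;\equiv\; I_{\mathrm{mem}}(t) \;-\; I_{\mathrm{pred}}(t)
```

Summing this instantaneous relation over an environmental trajectory yields the lower bound on overall dissipation stated in point 2) above.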
Estimation of Structured Time-Frequency Representations by Iteratively Reweighted Least-Squares Demba Ba MIT Data from neuroscience experiments are noisy, high-dimensional and exhibit highly structured dynamics in time and frequency. The transition into and out of consciousness during Propofol-induced general anesthesia is characterized by abrupt changes both in average neuronal spiking rates and in their modulation by a low-frequency (< 1 Hz) oscillation (Lewis et al., 2012). In EEG recordings, the low-frequency phase modulates alpha amplitude (8-12 Hz) during profound unconsciousness, and during the transition into and out of unconsciousness (Purdon et al., 2013). In this talk, I will introduce models and algorithms to decompose a signal (including but not limited to point-process and EEG signals) into a small number of amplitude-modulated oscillations. To this end, I cast the problem of spectral estimation as one of statistical inference and propose a family of priors on the time-frequency plane that promotes spectral estimates that are sparse in frequency and smooth in time. This formulation requires the solution of high-dimensional inference problems, for which I develop novel iteratively reweighted least-squares algorithms that are provably convergent and scale well with typical signal lengths, unlike standard techniques based on convex optimization. I demonstrate the techniques on EEG data exhibiting the salient features observed during profound unconsciousness, and during the transition into and out of unconsciousness, under Propofol-induced general anesthesia. Compared to Thomson's multitaper, our techniques yield spectral estimates that are significantly denoised and have significantly higher time and frequency resolution. This is joint work with Behtash Babadi, PhD, Patrick Purdon, PhD and Emery N. Brown, MD, PhD. 
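The algorithmic core named in the title, iteratively reweighted least squares for a sparsity-promoting penalty, can be sketched on a generic sparse-regression problem. This is not the talk's full time-frequency model; the dimensions, penalty weight, and data are hypothetical.

```python
import numpy as np

# IRLS for min ||y - Ax||^2 + lam * sum_i |x_i|: the nonsmooth ell-1 term
# |x_i| is approximated by x_i^2 / (|x_i| + eps), so each iteration is an
# ordinary weighted least-squares solve.
rng = np.random.default_rng(4)
n, p, lam, eps = 60, 20, 0.1, 1e-8

A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]          # sparse ground truth
y = A @ x_true + 0.05 * rng.standard_normal(n)  # noisy observations

x = np.linalg.lstsq(A, y, rcond=None)[0]        # least-squares initialization
for _ in range(50):
    # small |x_i| -> large weight -> coefficient is driven toward zero
    W = np.diag(lam / (np.abs(x) + eps))
    x = np.linalg.solve(A.T @ A + W, A.T @ y)
```

Each iteration costs one linear solve, which is what makes the approach scale to long signals where generic convex-optimization solvers struggle.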
Four confounds in neuroscience data and fast maximum-likelihood continuous-time point process estimation Kyle Lepage Boston University In this talk I discuss (i) robust statistical estimation of the EEG reference, (ii) the generation of multiple frequencies in the nonstationary spectrum of a point process due to the coupling of a stimulus-locked rate oscillation with oscillatory history dependence in a standard model of the conditional intensity, (iii) multitaper spike-field association, and (iv) the generation of nonzero, spurious phase relations between multiple coherent current dipole sources when solving the MEG inverse problem. The talk ends with the introduction of a computationally fast, continuous-time maximum-likelihood point process estimator based upon a classical, but relatively unknown, quadrature scheme due to Gauss. 
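The quadrature idea can be illustrated on the continuous-time point-process log-likelihood, log L = Σ_i log λ(t_i) − ∫₀ᵀ λ(t) dt, whose only expensive term is the integral. The intensity function and spike times below are synthetic stand-ins, not the talk's estimator.

```python
import numpy as np

# Evaluate the integral of the conditional intensity with Gauss-Legendre
# quadrature: for a smooth lambda(t), a modest number of nodes gives
# near-machine accuracy, far cheaper than a fine time discretization.
def lam(t, b0=2.0, b1=1.0, f=0.5):
    """A smooth, strictly positive conditional-intensity stand-in (Hz)."""
    return np.exp(b0 + b1 * np.sin(2 * np.pi * f * t))

T = 10.0                                              # observation window (s)
spikes = np.sort(np.random.default_rng(5).uniform(0, T, 40))

# Gauss-Legendre nodes/weights on [-1, 1], rescaled to [0, T].
nodes, weights = np.polynomial.legendre.leggauss(100)
t_q = 0.5 * T * (nodes + 1.0)
integral = 0.5 * T * np.dot(weights, lam(t_q))

loglik = np.sum(np.log(lam(spikes))) - integral
```

Because the quadrature nodes are fixed, the integral term becomes a fixed weighted sum of intensity evaluations, and maximizing the likelihood over the intensity parameters needs no binning of time.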
Receptive field formation by interacting excitatory and inhibitory plasticity Claudia Clopath, Tim Vogels, Henning Sprekeler Columbia University
Cortical neurons receive a balance of excitatory and inhibitory currents. This E/I balance is thought to be essential for the proper functioning of cortical networks, because it ensures their stability and provides an explanation for the irregular spiking activity observed in vivo. Although the balanced state is a relatively robust dynamical regime of recurrent neural networks, it is not clear how it is maintained in the presence of synaptic plasticity on virtually all synaptic connections in the mammalian brain. We recently suggested that activity-dependent Hebbian plasticity of inhibitory synapses could be a self-organization mechanism by which inhibitory currents are adjusted to balance their excitatory counterpart (Vogels et al. 2011). The E/I balance not only generates irregular activity, it also changes neural response properties to sensory stimulation. In particular, it can lead to sharp stimulus tuning in spiking activity even though subthreshold inputs are broadly tuned, it can change the neuronal input-output relation, and it can cause pronounced onset activity due to the delay of inhibition with respect to excitation. This control of neuronal output by the interplay of excitation and inhibition suggests that activity-dependent excitatory synaptic plasticity should be sensitive to the E/I balance and should in turn be indirectly controlled by inhibitory plasticity. Because we expect excitatory plasticity to be modulated by inhibitory plasticity, the question of the conditions under which excitatory Hebbian learning rules can establish receptive fields needs to be re-evaluated in the presence of inhibitory plasticity. In particular, it is of interest under which conditions neurons can simultaneously develop a stimulus selectivity and a co-tuning of excitatory and inhibitory inputs. 
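The self-organizing inhibitory rule can be sketched in a rate-based form (after Vogels et al. 2011): inhibitory weights grow when the postsynaptic rate exceeds a target rate and shrink below it, steering the neuron into E/I balance. The neuron model and every parameter here are hypothetical simplifications.

```python
# Rate-based inhibitory Hebbian plasticity on a threshold-linear neuron:
# dw/dt ~ eta * r_pre * (r_post - rho0).  The fixed point is r_post = rho0,
# i.e. inhibition is adjusted until it balances the excitatory drive.
rho0 = 5.0            # target postsynaptic rate (Hz)
eta = 1e-3            # inhibitory learning rate
E = 40.0              # fixed excitatory drive
r_inh = 10.0          # presynaptic inhibitory rate (Hz)
w = 0.0               # plastic inhibitory weight

for _ in range(20000):
    r_post = max(0.0, E - w * r_inh)          # threshold-linear neuron
    w += eta * r_inh * (r_post - rho0)        # inhibitory Hebbian update
    w = max(w, 0.0)                           # inhibitory weights stay >= 0

r_final = max(0.0, E - w * r_inh)
```

Whatever the excitatory drive E, the weight settles at w* = (E − rho0) / r_inh, which is the co-tuning property discussed above in its simplest possible form.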
Welcome to Mona, and Some Things I'd Like to Talk About Rob Kass Carnegie Mellon University Thanks to my co-organizers, the U.S.-Japan Brain Research Cooperative Program, the NIMH, the National Institute for Physiological Science, and the DMS of NSF, we are able to meet for an exciting conference. Our goals are not only to provide efficient overviews of selected current research topics, but also to share high-level ideas, especially about the interface between mathematics and statistics in computational neuroscience. I would myself like to hear ideas on how to think about LFP-spike interaction, how to think about oscillatory drive and synchrony, and how to think about trial-to-trial variation, gain, and noise. It is especially important to have U.S. and Japanese researchers talk to each other about such subjects. In this brief lecture I will summarize the way we use statistical models of spike trains, and I will outline my interest in neural synchrony. 
Statistical and physical reasoning for understanding multiscale seizure dynamics Mark Kramer Boston University Epilepsy is one of the most common brain diseases, affecting approximately 50 million people worldwide. Surgical treatment of epilepsy often requires invasive brain voltage recordings, which provide unprecedented spatial and temporal observations of in vivo brain activity. We briefly describe invasive local field potential (LFP) recordings from a microelectrode array during seizure, and approaches that link statistical and physical reasoning to connect the observed brain activity to physical mechanisms. 
On the distribution of photon counts with censoring in two-photon microscopy Satish Iyengar University of Pittsburgh Photon counting methods can give better images than analog methods in two-photon laser scanning microscopy (TPLSM). However, photon detectors have a dead period that leads to undercounts. We describe a model for photon generation and present the distribution of the observed counts, which we then use to estimate the number of photons emitted. This is joint work with Jon Driscoll, David Kleinfeld, and Burcin Simsek. 
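The dead-period undercounting can be illustrated with a toy Monte Carlo. This uses the classical non-paralyzable dead-time model as a stand-in for, not a reproduction of, the talk's censoring model; the rate, dead time, and duration are hypothetical.

```python
import numpy as np

# A detector with a non-paralyzable dead time tau misses photons arriving
# within tau of the last registered one.  Under this model the observed
# rate is m = n / (1 + n * tau), so the true rate can be recovered as
# n = m / (1 - m * tau).
rng = np.random.default_rng(6)
n_true, tau, T = 5e4, 1e-5, 1.0    # photon rate (Hz), dead time (s), duration

# Poisson arrivals as cumulative exponential gaps.
arrivals = np.cumsum(rng.exponential(1.0 / n_true, int(n_true * 1.2)))
arrivals = arrivals[arrivals < T]

registered = 0
last = -np.inf
for t in arrivals:
    if t - last >= tau:             # detector is live again
        registered += 1
        last = t

m = registered / T                  # observed (censored) rate
n_corrected = m / (1.0 - m * tau)   # invert m = n / (1 + n * tau)
```

With n·tau = 0.5 here, roughly a third of the photons go unregistered, yet the inverted formula recovers the true rate to within sampling noise.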
Internal model estimation using closed-loop brain-computer interfaces Byron Yu Carnegie Mellon University I will present a statistical algorithm for estimating an internal forward model from neural population activity recorded during brain-computer interface experiments. When applied to multielectrode recordings in the motor cortex of a monkey performing closed-loop cursor control, the internal model was able to explain over 70% of the subject's aiming errors. 
Efficient Bayesian Neural Signal Processing via Convex Optimization and Optimal Transport Todd P. Coleman University of California, San Diego In this talk, we consider many problems in neural signal processing where Bayesian inference of a latent variable in a continuum is required: from developing dynamic signal processing algorithms with sequential prediction, to developing time-varying measures of causality, to producing nonlinear filters in state-space models. In such settings, calculating integrals is cumbersome, and we instead demonstrate how these integrals can be calculated by constructing maps that push the prior to the posterior. Using optimal transport theory, we demonstrate that for a large class of problems in which the prior is log-concave and the likelihood is log-concave in the latent variable, an efficient (i.e., convex optimization) method can be developed that only requires drawing independent samples from the prior. We give examples of this framework in terms of identifying time-varying measures of causal relationships in ensemble neural recordings from the primary motor cortex of a monkey performing a reach-out task. 
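The prior-to-posterior transport idea can be seen in one dimension, where the monotone (optimal-transport) map between two Gaussians is affine and available in closed form. In the general log-concave setting of the talk the map is instead found by convex optimization; all numbers here are hypothetical.

```python
import numpy as np

# Bayesian inference by pushing prior samples through a transport map T:
# if X ~ prior, then T(X) ~ posterior, so posterior expectations become
# plain Monte Carlo averages over prior samples.
rng = np.random.default_rng(7)

mu0, s0 = 0.0, 1.0            # prior N(mu0, s0^2) over the latent variable
obs, s_lik = 1.2, 0.5         # observation and Gaussian likelihood noise sd

# Conjugate Gaussian posterior (closed form for this toy case).
s_post = 1.0 / np.sqrt(1.0 / s0**2 + 1.0 / s_lik**2)
mu_post = s_post**2 * (mu0 / s0**2 + obs / s_lik**2)

# Monotone transport map between the two Gaussians.
def T(x):
    return mu_post + (s_post / s0) * (x - mu0)

prior_samples = rng.normal(mu0, s0, 100_000)
posterior_samples = T(prior_samples)
```

No integral over the posterior is ever evaluated: the only operations are drawing independent prior samples and applying the map, which is the computational pattern the talk generalizes.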