Posted on Tuesday, 19th March 2013
Please post a comment on JacobsEtAl.pdf
Posted in Class | Comments (13)
March 20th, 2013 at 10:02 am
While I appreciated their choice of model system and the supposed elegance of their testing technique, I didn’t like how their abstract compared to their actual results. In the abstract, they made some major generalizations with regard to coarse and fine coding and stated that they show that coarse coding is insufficient and a finer coding is required. In their experiment, however, they only test three models: they rule out two simple models as possible neural codes, while the third, slightly more complex model remains a possibility. I don’t think this supports their generalization that the neural code is fine rather than coarse.
March 20th, 2013 at 1:18 pm
It’s intuitive to me how strictly rate-code information can be incorporated into the Bayesian equations, but I’m confused about how the spike timing and temporal correlation codes get incorporated into the Bayesian framework. To this end, can you help us unpack equations [2] and [3] a bit (from page 5940)?
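The rate-code case mentioned above can be made concrete with a toy Bayesian decoder. The sketch below covers only the simplest situation, a spike-count code with assumed Poisson statistics and invented rates (none of these numbers come from the paper); equations [2] and [3] extend the same posterior computation by factoring the likelihood over individual spike times, or over (spike time, preceding interval) pairs, instead of over a single count.

```python
import math

# Toy Bayesian decoder for a spike-COUNT code (hypothetical rates, not
# the paper's data). The timing and temporal correlation codes keep the
# same Bayes rule but swap in richer likelihood terms.

def poisson_loglik(count, rate):
    """log P(count | rate) for a Poisson-distributed spike count."""
    return count * math.log(rate) - rate - math.lgamma(count + 1)

def decode_count(count, rates, prior_a=0.5):
    """Posterior P(stimulus A | observed count) for two stimuli."""
    log_a = poisson_loglik(count, rates['A']) + math.log(prior_a)
    log_b = poisson_loglik(count, rates['B']) + math.log(1 - prior_a)
    return math.exp(log_a) / (math.exp(log_a) + math.exp(log_b))

rates = {'A': 5.0, 'B': 10.0}   # assumed mean spike counts per trial
print(decode_count(4, rates))   # a low count -> posterior favors A
```

For a timing code, the single Poisson term would be replaced by a product of per-spike terms evaluated against a time-varying rate; for a temporal correlation code, each per-spike term would additionally condition on the preceding interspike interval.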
March 20th, 2013 at 2:24 pm
This was a thought-provoking paper. I was particularly shocked that they were able to decode so much information from such an early stage of signal processing. In the discussion they do acknowledge that their findings don’t necessarily imply that history-dependent temporal coding is the true and only code, or that it is even the code the mouse used to perform the discrimination. However, I was hoping that they would bring up the possibility of downstream code conversions. It seems plausible to me that somewhere down the line the code is changed to put more weight on, say, spike count rather than precise temporal patterns of input, to accommodate higher levels of noise or some other regionally specific condition. Does this seem plausible to you? Or does the fact that the original signal (from the retinal ganglion cells) is unlikely to carry information in counts suggest that it is just as unlikely to occur in the cortex?
March 20th, 2013 at 2:54 pm
In Figure 3 I am curious about the trend of the temporal correlation decoding towards the upper bound of tested spatial frequencies. Because the decoding is Bayesian, it will perform as well as or better than the actual decoder the animal uses. In the bottom row of the figure, decoder performance for spatial frequencies up to approximately 0.4 cycles/degree does not appear significantly different from the animal’s actual performance. However, the decoder output does not fall to chance level beyond this spatial frequency the way the animal’s performance does. What sources of noise could be present to impair discrimination?
March 20th, 2013 at 3:41 pm
I think this paper provides convincing evidence that the retinal ganglion cells use temporal codes, and the use of spike timing information may generalize fairly well to other sensory neurons serving different types of sensation. However, this may not be the case for cortical neurons, which exhibit tremendous variability in the temporal distribution of spikes in their discharge patterns. Given such variability, cortical neurons are thought to use rate codes in ensembles of neurons rather than transmitting information using a temporal code. Thus, decoding using the temporal code of cortical neurons may be worse than, or at best equivalent to, decoding using the rate code.
March 20th, 2013 at 5:04 pm
This study found evidence against coarse coding in the neural code and instead found that a temporal correlation code performed as well as the animal at a 2-alternative forced choice task when examining the retina. I found it really intriguing that the authors were careful to point out that they did not confirm temporal correlation coding as the neural code. But they are trying to show that temporal correlation coding is no different from the animal’s performance, which can’t really be shown statistically. Also, we talked in class about how population codes and Bayesian models are difficult to dissociate in vitro, but the authors did not look at population codes. Would it be possible to dissociate those two using this data?
March 20th, 2013 at 5:12 pm
My question is about the choice of classifiers. As the textbook and paper mention, a Bayesian classifier is optimal. However, in the literature that I am familiar with (decoding from fMRI activation), support vector machine classifiers are far more common. Does the optimality of a Bayesian classifier make it the ideal choice for all situations, i.e., should it be used over SVCs? Or are there advantages to different types of classifiers that make them better suited to certain types of data (or research questions)?
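On the optimality question above: "optimal" means that when the generative model of the data is known exactly, no classifier can beat the Bayes rule in expected error; SVMs and other discriminative methods are attractive precisely when the model is unknown or misspecified, as with fMRI voxel patterns. A minimal sketch with invented one-dimensional Gaussian classes (purely illustrative, not from the paper):

```python
import random

random.seed(1)
# Two equal-prior classes with assumed 1-D Gaussian likelihoods
# (means 0 and 1.5, unit variance). With equal priors and variances,
# the Bayes rule reduces to thresholding at the midpoint (0.75);
# any other threshold has a higher expected error rate.

def sample(label, n):
    mu = 0.0 if label == 0 else 1.5
    return [(random.gauss(mu, 1.0), label) for _ in range(n)]

data = sample(0, 2000) + sample(1, 2000)

def error_rate(threshold):
    """Fraction misclassified by the rule 'predict 1 if x > threshold'."""
    return sum((x > threshold) != bool(y) for x, y in data) / len(data)

bayes_err = error_rate(0.75)   # the Bayes-optimal rule for this model
other_err = error_rate(1.50)   # a mismatched threshold does worse
print(bayes_err, other_err)
```

A trained linear SVM on this data would be expected to approach, but not beat, the 0.75 threshold; its value shows up in settings where no such closed-form rule is available.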
March 20th, 2013 at 5:28 pm
This is an interesting paradigm, but I’m not quite sure I buy into the results completely. Even though their rationale for the temporal correlation code makes sense, as evidenced by the spike count and spike timing codes falling short of the animal’s performance under optimized (Bayesian) decoding, the authors raise some issues regarding possible error and the limit on the different spike pattern permutations that they tested (e.g., a code with multicell noise correlation…maybe?). In addition, it’s possible that the in vitro stimulus differed from the behavioral task. Perhaps the animal used other contextual cues during the behavioral task (although I didn’t read the supplemental materials)?
March 20th, 2013 at 5:46 pm
The authors describe three codes that vary quite drastically in complexity and compare their performance in decoding a retinal task. They rule out the two simplest codes, spike counts and spike timings assuming independence of spikes, but find that the third code, spike timings assuming correlation between successive spikes, contains enough information to allow decoding with it to approach in vivo task accuracy.
For code 3, in the supplementary information they factor the joint distribution of spike time and previous spike interval as a product of two functions, each fitted by cubic splines. Is this a natural thing to do, and is there any intuition behind using this particular form (a product)?
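One intuition for the product form: factoring a joint density of (spike time, preceding interval) into two separately fitted one-dimensional functions tames the data requirements of density estimation, since each factor can be fit from all spikes while a direct 2-D fit must populate every (time, interval) cell. A toy illustration with histograms standing in for the cubic splines, using simulated independent data (an assumption made for the sketch, not the paper's recordings):

```python
import random

random.seed(0)
# Hypothetical illustration: when the joint P(t, isi) factors, the
# product of separately estimated 1-D marginals recovers the joint,
# and each marginal is estimated from the full sample.

BINS = 10
def hist(values):
    """Normalized histogram of values in [0, 1)."""
    h = [0] * BINS
    for v in values:
        h[min(int(v * BINS), BINS - 1)] += 1
    return [c / len(values) for c in h]

# draw (t, isi) pairs that really are independent, so P(t, isi) factors
samples = [(random.random(), random.random()) for _ in range(5000)]
f = hist([t for t, _ in samples])     # marginal over spike time
g = hist([d for _, d in samples])     # marginal over preceding interval

# product of marginals vs. directly estimated joint cell probability
product_est = f[3] * g[7]
direct_est = sum(1 for t, d in samples
                 if 0.3 <= t < 0.4 and 0.7 <= d < 0.8) / len(samples)
print(product_est, direct_est)   # both near the true cell value 0.01
```

The direct 2-D estimate uses only the ~1% of samples landing in that cell, while each factor of the product uses all 5000; that sample-efficiency gap is the usual argument for a product factorization when the data support it.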
March 20th, 2013 at 5:59 pm
The authors showed that decoding retinal ganglion cell spike trains under the assumptions that they are mutually independent and contain only first-order ISI correlations (“temporal correlation code”) was sufficient to match the animal’s performance on a two-alternative forced choice task. Another test of the neural code would be to check what features of the retinal ganglion cells are transmitted by downstream neurons. If the neural code (for this task) is the authors’ “temporal correlation code”, then shouldn’t it also be the case that the responses of neurons at the next location in the visual system can be predicted taking into account only first-order ISI correlations, and treating different neurons as independent? And has this been checked?
March 20th, 2013 at 5:59 pm
Do we know what aspect of the neural signal the temporal correlation code is capturing that the spike count and timing codes aren’t? Is it just making the signal more stable? Could it be incorporating the dynamics of on/off cells better to give a sharper signal?
March 20th, 2013 at 6:24 pm
Very interesting paper. The authors propose a method for investigating the unit of information in the neural code. Their proposed strategy is to measure the quantity of information available to an animal within a particular candidate code and compare it with the quantity of information the animal’s behavior indicates the animal must be exploiting. Candidate codes that encode less information than the animal’s behavior indicates it utilizes can be disregarded as implausible given the animal’s performance.
Their reasoning certainly seems intuitive, but the challenge remains to measure the amount of information provided by various candidate codes in the activity of some animal’s nervous system during the performance of some task with measurable information requirements. To meet this challenge, the authors recorded activity within the retinas of mice performing a visual discrimination task. The retina provides a strategic advantage to the experimenters in that it is exclusively feedforward and the only channel for visual information available to the animals. It follows that all visual information the animals appear to exploit in the task must be made available through whatever code is actually utilized in the retina.
The authors applied their strategy in their analysis of the mice’s visual discrimination task, concluding that a temporal code that does not assume the independence of spiking events is the only one of the candidates (compared with a time-pooling rate code and a temporal code that makes the independence assumption) that encodes enough information to explain the animal’s performance. This is quite an exciting result. Such a code is both the most information-rich and the most mysterious, in that it requires that detailed temporal relations between individual spikes be distinguished. In animals with large nervous systems, preserving detailed information about the temporal relations between individual spiking events seems difficult, so if such a code is utilized in human neocortex, for instance, then some interesting mechanisms for decoding such temporal relations must be at work.
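The ruling-out logic described above is simple enough to state in a few lines. The accuracies below are invented for illustration (they are not the paper's numbers); the point is only the comparison rule:

```python
# A candidate code is ruled out if even an OPTIMAL decoder restricted
# to that code falls short of the animal's measured behavior.
# All accuracy values here are hypothetical.

behavioral_accuracy = 0.90          # assumed fraction correct in the task

decoder_accuracy = {
    "spike count": 0.78,
    "spike timing (independent)": 0.84,
    "temporal correlation": 0.91,
}

# keep only codes carrying at least as much usable information as
# the behavior demonstrates the animal extracts
plausible = [code for code, acc in decoder_accuracy.items()
             if acc >= behavioral_accuracy]
print(plausible)
```

Note the asymmetry: falling short rules a code out, but meeting the behavioral bound only rules it in as a possibility, never confirms it as the code actually used.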
The authors’ conclusions about the retina seem justified to me, but it is very much unclear what their findings concerning the retina imply about other parts of the nervous system, particularly the rest of the brain. It seems plausible that, as they suggest, multiple codes are used simultaneously or by different parts of the nervous system. Nevertheless, I appreciate the authors’ methodology and find their results encouraging for the prospects of a fine-grain neural code.
March 20th, 2013 at 7:10 pm
Good, sounds like these guys have read their Karl Popper and aren’t spouting claims of certainty whenever they show consistency.
It seems like the animal could be using additional information beyond the visual input that an isolated retina is restricted to. In the text they show that the spike timing code is significantly different from the animal results, but I don’t see a test of whether it differs from the temporal correlation code. It seems directly comparing the decoders would be important in case some additional bias is influencing the whole animal.