Posted on Thursday, 17th January 2013

Non-computational students: Read KEB 5.1–5.3, 5.4.1–5.4.3, 5.5.2, 6.1.1, 6.2.1, 6.3.1, 8.1, 8.2–8.4 (skim the latter), 10.1–10.3, 10.4 up until (not including) ROC curves; then post a comment.

Computational students: start working on your readings, if you wish. Those whose topics were not yet clearly identified should discuss them further with me next week.

Posted in Class | Comments (4)

  1. rvistein Says:

What distinguishes data points as “memoryless” rather than independent? The memoryless property of exponential distributions seems to amount to just independent events occurring in sequence along an interval.
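One way to make the memoryless property concrete is a quick simulation (a sketch of my own in Python/NumPy, not from the reading; the rate and the times s, t are arbitrary): having already waited s seconds does not change the chance of waiting t more.

```python
# Empirically checking the memoryless property of the exponential:
# P(X > s + t | X > s) = P(X > t)
import numpy as np

rng = np.random.default_rng(0)
rate = 2.0                               # hypothetical event rate (per second)
x = rng.exponential(1.0 / rate, size=1_000_000)

s, t = 0.3, 0.5
p_cond = np.mean(x[x > s] > s + t)       # P(X > s+t | X > s)
p_plain = np.mean(x > t)                 # P(X > t)
print(p_cond, p_plain)                   # both approach exp(-rate*t) ≈ 0.368
```

Note this is a statement about the waiting-time distribution itself, separate from whether successive waiting times are independent of one another.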

‘Large’ in LLS seems like an arbitrary distinction. How does one decide whether a sample size qualifies as large? Wouldn’t this need to be determined before performing MLE?

General comments for the course: it might be useful to have a recitation for the non-computational students. As someone with a limited background in mathematics, I find it useful to see equations applied rather than exclusively reading the general case. A set TA office hour for asking very basic questions that might be disruptive in class would also be helpful.

  2. elindsay Says:

Regarding the Poisson distribution, I’m confused about how one chooses a bin width; it seems a bit arbitrary. In some examples, 1 ms is chosen, and each bin either contains an event or does not, but cannot contain more than one event. There is some discussion of the refractory period during which a neuron absolutely cannot fire again, so why not fix the bin width to the duration of one event, rather than risk violating the independence assumption by setting the bin width too small? In another example, the bin width is set at 7.5 seconds, and instead of recording whether or not an event occurred, there is a frequency table that follows a Poisson distribution. Related to this, I’m not sure what the distribution looks like in the discrete case described first, where you simply have a list recording whether or not a rare event occurred.
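The link between the two examples can be sketched in code (my own toy numbers, not the book’s): a train of small 0/1 Bernoulli bins, when counted over larger windows, gives counts whose mean and variance match, as a Poisson distribution predicts.

```python
# Discretizing a "spike train" into 1 ms Bernoulli bins, then counting spikes
# in larger 100 ms windows; the counts should be approximately Poisson.
import numpy as np

rng = np.random.default_rng(1)
rate = 10.0          # hypothetical firing rate, spikes/s
dt = 0.001           # 1 ms bins: each holds 0 or 1 spike
T = 1000.0           # total recording length in seconds
spikes = rng.random(int(T / dt)) < rate * dt   # Bernoulli draw per bin

window = 100                                   # 100 ms counting windows
counts = spikes.reshape(-1, window).sum(axis=1)
print(counts.mean(), counts.var())             # mean ≈ variance ≈ rate * 0.1 = 1.0
```

So the "list of rare 0/1 events" and the "frequency table over 7.5 s bins" are two views of the same process at different bin widths.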

  3. juncholp Says:

Overall it has been a very informative reading process. In chapter 5, the description of different pdfs, starting from the Bernoulli and ending with the multivariate normal distribution, was very well organized, revealing how each pdf emerges from the previously described ones and how those pdfs relate to each other. This step-by-step explanation helped me understand why the spike count of a neuron is usually modeled using Poisson and/or Gaussian pdfs. This has led me to the issue of describing the activity of a network of neurons, since in most cases encoding or decoding of neural data is performed on a collection of simultaneously recorded neurons. Given multiple neurons whose activities are substantially correlated due to electrical and/or neurochemical connections among them, the multivariate normal pdf, which is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables, should be used to describe the activity of the neuronal population. Although we had a paragraph on the multivariate normal distribution in our reading, I think an example of practical usage of the model may be necessary to better understand this relatively complicated model.
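As a small self-contained example (entirely made-up means and covariances, not from the book): three "neurons" whose trial-to-trial counts are drawn from a multivariate normal, with the correlation structure then recovered from the samples.

```python
# Drawing correlated activity for three hypothetical neurons from a
# multivariate normal, then estimating the covariance back from the samples.
import numpy as np

rng = np.random.default_rng(2)
mean = np.array([5.0, 8.0, 6.0])        # hypothetical mean counts per neuron
cov = np.array([[4.0, 1.5, 0.5],        # hypothetical covariance: off-diagonal
                [1.5, 3.0, 1.0],        # entries encode pairwise correlations
                [0.5, 1.0, 2.0]])
samples = rng.multivariate_normal(mean, cov, size=50_000)

print(np.cov(samples.T).round(1))       # sample covariance ≈ cov
```

The off-diagonal terms are exactly what an independent (product-of-Gaussians) model would throw away, which is why the multivariate form matters for simultaneously recorded neurons.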

I expect to hear more about the comparison between Poisson and Gaussian models for describing single-neuron activity. The book says both are utilized, but it also notes that the assumption that spike counts are Poisson distributed is questionable, because careful examination of spike trains almost always indicates some departure from the Poisson.
    - How should we determine to use either?
- I think the firing rate of a neuron should also be considered when choosing a model, because for a neuron with a very low firing rate, a Poisson distribution may fit the spike trains better.
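One concrete issue at low firing rates (a sketch of my own, using SciPy, with an arbitrary mean of 1 spike): a Gaussian with matched mean and variance puts real probability mass below zero, which a Poisson model by construction cannot.

```python
# Comparing Gaussian vs Poisson models for a low mean spike count.
from scipy.stats import norm, poisson

mu = 1.0                                   # hypothetical low mean spike count
gauss = norm(loc=mu, scale=mu ** 0.5)      # Gaussian with matched mean/variance
print(gauss.cdf(0))                        # ≈ 0.159: mass on impossible (<0) counts
print(poisson(mu).pmf([0, 1, 2]))          # Poisson: valid probabilities on integers
```

At high rates the two models agree much more closely, which is presumably why both appear in practice.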

I found typos: ‘homogeity’ (5.2.3, p. 135) and ‘deparature’ (8.1.1, p. 204).

  4. mgeramita Says:

I see that the exponential distribution can describe the spike times (and consequently the ISIs) for “memoryless” neurons. Additionally, I understand how real neurons may differ from this distribution (i.e., neurons may have an intrinsic “burstiness”).

    I also see how one can derive an inverse gaussian distribution for the ISIs of a random walk neuron.
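That derivation can be sanity-checked by simulation (a sketch of my own with made-up drift/threshold parameters, not from the book): discretize a drifting random walk, record when it first hits threshold, and compare the resulting "ISIs" to the inverse Gaussian's predicted mean, threshold/drift.

```python
# First-passage times of a discretized drift-diffusion to a threshold;
# these should follow an inverse Gaussian distribution.
import numpy as np

rng = np.random.default_rng(3)
drift, sigma, thresh = 1.0, 1.0, 5.0     # hypothetical drift-diffusion parameters
dt, n_steps, n_trials = 0.005, 4000, 2000

pos = np.zeros(n_trials)                 # membrane-potential proxy per trial
t_cross = np.full(n_trials, np.nan)      # first-passage time per trial
for step in range(n_steps):
    pos += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
    newly = np.isnan(t_cross) & (pos >= thresh)
    t_cross[newly] = (step + 1) * dt

isi = t_cross[~np.isnan(t_cross)]        # simulated "ISIs"
print(isi.mean())                        # inverse Gaussian mean = thresh/drift = 5
```

The empirical mean lands near thresh/drift, up to a small discretization bias from the finite time step.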

I’m curious whether there are certain assumptions one can make when modeling the EPSCs and IPSCs in these integrate-and-fire models that make the spike times look more “memoryless”. I think it would be interesting if, by only changing the inputs, the neuron’s memory of its previous spike time could be weakened (or strengthened).
