Posted on Thursday, 1st March 2012
Please read chapter 9 through Sec 9.1 and post a comment.
Posted in Class | Comments (16)
March 5th, 2012 at 10:36 am
Small typo on second line of page 248: “seaking” should be “seeking”. Also, midway through the second paragraph on page 248, the sentence is missing a verb. It should read “This number is easy to understand…”
In a few of the theorems and results provided, you say that “near” is defined probabilistically. Can you explain what that means a bit more?
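A sketch of the linearization behind that phrase, in generic delta-method notation rather than the book's exact equations: “near” refers to the range of values that X actually takes with high probability, roughly μX ± a couple of σX. Over that range,

$$
Y = f(X) \approx f(\mu_X) + f'(\mu_X)\,(X - \mu_X)
\quad\Longrightarrow\quad
\mu_Y \approx f(\mu_X), \qquad \sigma_Y \approx |f'(\mu_X)|\,\sigma_X .
$$

The approximation is trustworthy when f is close to linear, and f′ stays away from zero, over an interval such as (μX − 2σX, μX + 2σX), the region where X falls with high probability; that is the probabilistic sense of “near.”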
March 5th, 2012 at 11:35 am
In a Nature paper from 2000, there was an exchange between Stephen Scott and Apostolos Georgopoulos about the effects of performing a square-root transform on spike counts to stabilize the variance. Scott argued that this biased the data and created an artificially large number of movement-direction-related firing rates. Can you comment on this discussion at all?
Here is a link to the paper. The discussion begins on page 9.
http://e.guigon.free.fr/rsc/article/Todorov00.pdf
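Without weighing in on that exchange, here is a small simulation, with made-up firing rates rather than real data, illustrating the variance-stabilizing property being debated: for Poisson spike counts, Var(X) grows with the rate while Var(√X) settles near 1/4.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical firing rates (spikes per trial); made up for illustration.
for rate in [2, 10, 50]:
    counts = rng.poisson(rate, size=100_000)             # simulated spike counts
    print(f"rate={rate:>3}:  Var(X) = {counts.var():7.2f}   "
          f"Var(sqrt(X)) = {np.sqrt(counts).var():.3f}")  # ~0.25 once the rate is not too small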
March 5th, 2012 at 2:53 pm
Can you describe the condition in the propagation-of-uncertainty result that f′(x) ≠ 0 near the mean? I understand that f(x) should be approximately linear there; is the condition simply making that precise, or does it have more mathematical meaning?
March 5th, 2012 at 5:38 pm
Had some difficulty understanding the use of numerical simulation for propagating uncertainty.
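For what it is worth, the simulation approach can be written in a few lines. A minimal sketch, with a made-up transformation f and made-up values for the estimate and its standard error (nothing from the book): draw many values of X from its approximate distribution, push each draw through f, and summarize the results.

import numpy as np

rng = np.random.default_rng(1)

mu_x, sigma_x = 3.0, 0.2          # estimate and its standard error (made up)
f = lambda x: np.exp(x) / x       # whatever transformation is of interest (made up)

x_draws = rng.normal(mu_x, sigma_x, size=100_000)   # simulate X around its estimate
y_draws = f(x_draws)                                # propagate each draw through f

print("approx mean of Y:", y_draws.mean())
print("approx std  of Y:", y_draws.std())
print("approx 95% interval:", np.percentile(y_draws, [2.5, 97.5]))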
March 5th, 2012 at 9:34 pm
Does the log transformation only provide variance stabilization in particular cases? If so, do other transformations stabilize the variance in other cases?
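For context, one way to see which transformation stabilizes the variance uses the same linearization as the chapter. In generic notation (not the book's equations), Var(g(X)) ≈ g′(μX)² σX², which gives

$$
\operatorname{Var}(\log X) \approx \frac{\sigma_X^2}{\mu_X^2},
\qquad
\operatorname{Var}(\sqrt{X}) \approx \frac{\sigma_X^2}{4\,\mu_X}.
$$

The first is roughly constant when σX is proportional to μX (constant coefficient of variation); the second is roughly constant when σX² is proportional to μX, as for Poisson counts. Different mean–variance relationships therefore call for different transformations.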
March 5th, 2012 at 10:32 pm
If X is not normally distributed, does the delta method work?
March 5th, 2012 at 10:53 pm
p254: “‘near’ being defined probabilistically, in terms of σX” — Does this mean there is an actual analytical method to determine whether x is near enough to μX for us to assume Y = f(X) is ≈ normal?
March 5th, 2012 at 11:04 pm
I can see that there are complicated situations where the simulation based approach to error propagation is preferable to the analytical approach, but are there situations where the analytical approach is better?
March 6th, 2012 at 1:22 am
Can you talk more about the brute-force computer simulation methods that you refer to in propagation of uncertainty? Why is this sometimes easier than deriving the uncertainty mathematically? Are there advantages to this other than getting the confidence intervals?
March 6th, 2012 at 2:07 am
When you say “near” is defined probabilistically (where f’(x) =/= 0), what exactly does this mean? f’(x) cannot cross 0 within a standard deviation of the mean?
March 6th, 2012 at 7:28 am
Are there specific guidelines for how many generated iterations (G) are enough? You mention that we should observe approximately equal results after several runs. However, in the case of a very computationally expensive procedure, we may want to use as few iterations as possible. How much variability in the results is allowable?
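One practical check, sketched here with a made-up transformation and made-up numbers rather than anything from the book: treat the G simulated values as data in their own right and compute the Monte Carlo standard error of the summary you care about, increasing G until that error is small relative to the precision you need.

import numpy as np

rng = np.random.default_rng(2)
f = lambda x: x ** 2 / (1 + x)          # placeholder for the expensive procedure

def propagate(G):
    x = rng.normal(3.0, 0.2, size=G)    # made-up estimate and standard error
    y = f(x)
    mc_se = y.std(ddof=1) / np.sqrt(G)  # Monte Carlo SE of the estimated mean of Y
    return y.mean(), mc_se

for G in [100, 1_000, 10_000]:
    m, se = propagate(G)
    print(f"G={G:>6}:  mean(Y) = {m:.4f}  +/- {se:.4f} (Monte Carlo error)")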
March 6th, 2012 at 8:05 am
If f(x) does not meet the specifications given in the scalar case (p254) or the multivariate normal case (p258), what exactly does this mean for our ability to estimate μY and σY? Does it mean our only option is the bootstrap, or is it still possible to find a closed-form solution for functions that aren’t as tidy?
March 6th, 2012 at 8:40 am
I understand that these methods for propagation of uncertainty contain much more information than simple confidence intervals, but will they result in the same intervals as my old high-school chemistry method of simply performing the transformation on the original bounds? I think they should be the same if the transformation is linear.
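A quick numerical check of that intuition, with made-up numbers rather than the book's examples: for a linear transformation the two approaches agree; for a monotone nonlinear one, transforming the endpoints still gives an interval with roughly the right coverage, but it is no longer the same as the symmetric mean ± 2 SD interval you get by propagating the uncertainty.

import numpy as np

rng = np.random.default_rng(3)
mu_x, sigma_x = 5.0, 0.5                          # made-up estimate and standard error
lo, hi = mu_x - 2 * sigma_x, mu_x + 2 * sigma_x   # the "chemistry class" interval for X

for name, f in [("linear    f(x) = 2x + 1", lambda x: 2 * x + 1),
                ("nonlinear f(x) = x**3  ", lambda x: x ** 3)]:
    y = f(rng.normal(mu_x, sigma_x, size=200_000))   # simulation-based propagation
    print(name)
    print("  transformed endpoints  :", f(lo), f(hi))
    print("  simulated mean +/- 2 SD:", y.mean() - 2 * y.std(), y.mean() + 2 * y.std())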
March 6th, 2012 at 8:57 am
The methods of assessing and controlling for propagation of uncertainty seem at least broadly analogous to methods used for assessing and correcting for heteroskedasticity in regressions, such as Weighted Least Squares / Feasible Generalized Least Squares (which we recently covered in my econometrics class). The parallel is probably not exact, but it made me wonder something. In econometrics we were told that when constructing an FGLS model one should only do transformations on the data that yield positive values (e.g., exponential). However, this does not seem to be a concern in the examples listed in 9.1. Why not? Or maybe a better way to ask this: what's different about FGLS that makes having only positive values a concern there when it isn't here? Again, the two situations might not be analogous at all, so my question might not mean much, but I was wondering.
March 6th, 2012 at 9:09 am
For section 9.1.1, how would you go about finding the pdf of the transformed variable, given the initial pdf and an arbitrary transform?
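For reference, and without presuming what Section 9.1.1 itself does, the standard change-of-variables formula handles a monotone, differentiable transform g with Y = g(X):

$$
f_Y(y) \;=\; f_X\!\bigl(g^{-1}(y)\bigr)\,\left|\frac{d}{dy}\,g^{-1}(y)\right| .
$$

For example, if X is uniform on (0, 1) and Y = −log X, then g⁻¹(y) = e^(−y) and f_Y(y) = e^(−y) for y > 0, the exponential density. For a non-monotone transform the same expression is summed over the branches of g⁻¹, and when no tractable inverse exists, simulation is the practical route.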
March 6th, 2012 at 9:25 am
I’m having trouble understanding why a function of an approximately normal vector would be approximately normally distributed; could you go over this?
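The usual argument, sketched in generic notation that may differ from the book's: near its mean a smooth f is approximately linear, and a linear function of a normal vector is exactly normal. If X ∼ N(μ, Σ), then

$$
f(X) \;\approx\; f(\mu) + \nabla f(\mu)^{T}(X - \mu)
\;\sim\; N\!\bigl(f(\mu),\; \nabla f(\mu)^{T}\,\Sigma\,\nabla f(\mu)\bigr),
$$

with the approximation holding to the extent that X stays in the region where the linearization is accurate.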