As everyone who has shared an office with me knows, I spend a lot of time thinking about inverse problems, statistical inverse problems in particular. I think this is important because, fundamentally, statistics is an inverse problem. Consider the following very general statistical problem. Suppose {\mathcal{T}} is a separable Banach space and {\Theta \subseteq \mathcal{T}} is a model space. Let {(\phi_j) \subset \mathcal{T}^*} be a sequence of elements of the dual space of continuous linear functionals on {\mathcal{T}}. Lastly, suppose {n \in \mathbb{N}}, {\sigma > 0} is a positive number, and {W} is a random variable taking values in {\mathbb{R}^n}. Then we can define the statistical problem as a mapping {(\theta,\sigma,n) \mapsto \mathbb{P}_{\theta,\sigma,\phi}} where the observations are {\phi_j(\theta) + \sigma W_j} for each {1 \leq j \leq n}, or equivalently {\phi(\theta) + \sigma W}. Our goal is then to make some inference about {\Theta} given an observation {Y \sim \mathbb{P}_{\theta,\sigma,\phi}}.
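To make the abstract observation model concrete, here is a minimal numerical sketch. None of these choices come from the post itself: I am assuming {\mathcal{T}} is a space of functions on {[0,1]}, taking the functionals {\phi_j} to be point evaluations on a grid (one simple choice of continuous linear functionals), and taking {W} to be standard Gaussian on {\mathbb{R}^n}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model element theta: a function on [0, 1].
theta = lambda x: np.sin(2 * np.pi * x)

# Hypothetical dual functionals phi_j: point evaluations on a grid,
# a simple example of continuous linear functionals on T.
n, sigma = 50, 0.1
grid = np.linspace(0.0, 1.0, n)
phi_theta = theta(grid)            # (phi_1(theta), ..., phi_n(theta))

# Observation Y = phi(theta) + sigma * W, with W standard Gaussian on R^n.
W = rng.standard_normal(n)
Y = phi_theta + sigma * W
```

The vector `Y` is one draw from {\mathbb{P}_{\theta,\sigma,\phi}}; inference means working backwards from `Y` toward {\theta}.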

Generally speaking, there is another layer of formalization that encapsulates what we mean by `some inference.' In particular, suppose we have a space {\mathcal{B}} and define a set of mappings {\mathcal{G} = \{g : \Theta \rightarrow \mathcal{B}\}}. Then we consider the elements of {\Theta} as models and either {\mathcal{G}} or {\mathcal{B}} as the parameter space.

Now the goal is to take a model {\theta \in \Theta}, a set of functionals {(\phi_j) \subset \mathcal{T}^*}, a parameter of interest {g \in \mathcal{G}}, and an observation {Y \sim \mathbb{P}_{\theta,\sigma,\phi}}, and to develop an estimator {\hat{g}} of {g(\theta)} that is a {\mathbb{P}_{\theta,\sigma,\phi}}-measurable function from {\mathbb{R}^n} to {\mathcal{B}}.
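Continuing the hypothetical setup above (point-evaluation functionals, Gaussian noise; none of it from the post), here is a sketch of one such estimator. The parameter of interest is the linear functional {g(\theta) = \int_0^1 \theta}, so {\mathcal{B} = \mathbb{R}}, and {\hat{g}} is the plug-in Riemann sum of the noisy evaluations, which is indeed a measurable map from {\mathbb{R}^n} to {\mathcal{B}}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: point-evaluation functionals on an equispaced grid.
n, sigma = 200, 0.1
grid = np.linspace(0.0, 1.0, n, endpoint=False)
theta = lambda x: np.sin(2 * np.pi * x) + 1.0   # true model; its integral is 1
Y = theta(grid) + sigma * rng.standard_normal(n)

# Parameter of interest g(theta) = integral of theta over [0, 1].
# Plug-in estimator: a Riemann sum of the noisy point evaluations,
# i.e. the sample mean of Y, mapping R^n to B = R.
g_hat = Y.mean()
```

For this choice, `g_hat` has variance {\sigma^2/n}, so it concentrates near the true value {g(\theta) = 1} as {n} grows.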

Inverse Problem Formalization (Working)

Weak-ly Updates
Aug 27, 2010
