Introduction
Suppose we are attempting to match two images, each with differing amounts of seeing and noise. One, which we call the reference image, is obtained with no noise. The second, called the science image, generally has more severe seeing and is noisy. We wish to find some transformation that maps the reference image to the science image such that the difference is noise-like when there are no additional sources.
In particular, we posit a sequence of smoothing operators, indexed by a bandwidth parameter, such that there exists a value of that parameter for which the transformed reference agrees with the science image up to noise. Our goal is to choose that parameter based only on the data.
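To fix ideas, the setup can be sketched as follows; the symbols $R$, $S$, $K_h$, $\varepsilon$, and $h^{*}$ are our own illustrative labels rather than notation taken from the definitions above.

```latex
% Hedged sketch of the image-matching model (notation R, S, K_h, eps, h^* is ours).
% R: noise-free reference image, S: science image, K_h: smoothing operator with
% bandwidth parameter h, eps: pixel noise, h^*: the unknown value to be chosen
% from the data alone.
\[
  S(x) \;=\; (K_{h^{*}} R)(x) + \varepsilon(x),
  \qquad
  (K_h R)(x) \;=\; \int k_h(x - y)\, R(y)\, dy .
\]
```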
1. Result
We propose a multiresolution noise-like statistic based on an idea in Davies and Kovac (1991), defined as (1), taken over a multiresolution analysis of the pixelized grid. Note that we do not include the multiplicative normalization factor whenever it is not explicitly needed.
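For orientation, statistics of this family typically take the following shape; the symbols below ($r_i$ for residual pixels, $\mathcal{I}$ for the multiresolution family of index sets, $T$ for the statistic, $n$ for the number of pixels) are our own, and the display is a generic sketch rather than a restatement of (1).

```latex
% Generic Davies/Kovac-style multiresolution statistic (illustrative notation only).
% r_i: residuals between the transformed reference and the science image;
% I ranges over a multiresolution family of index sets on the pixel grid.
\[
  T \;=\; \max_{I \in \mathcal{I}} \; \frac{1}{\sqrt{|I|}}
          \Bigl| \sum_{i \in I} r_i \Bigr| .
\]
% A factor such as \sqrt{2 \log n} is a common multiplicative normalization for
% statistics of this type.
```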
Observe that we can write this statistic in an expanded form. Note that the summation index is suppressed in the first term for notational clarity. Also, when an argument is held fixed, we suppress that argument.
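As a purely computational illustration of a statistic of this type (not the definition in (1): the function names, the dyadic family of index sets, and the restriction to one-dimensional residuals are all simplifying assumptions of ours), a minimal sketch is:

```python
import numpy as np


def dyadic_intervals(n):
    """Yield (start, end) index pairs forming a simple dyadic multiresolution
    family on a grid of length n (an assumed stand-in for the family in the text)."""
    length = n
    while length >= 1:
        for start in range(0, n - length + 1, length):
            yield start, start + length
        length //= 2


def multires_statistic(residuals):
    """Max over intervals I of |sum of residuals over I| / sqrt(|I|),
    a generic Davies/Kovac-style statistic on one-dimensional residuals."""
    r = np.asarray(residuals, dtype=float)
    cum = np.concatenate(([0.0], np.cumsum(r)))  # prefix sums for fast interval sums
    return max(abs(cum[b] - cum[a]) / np.sqrt(b - a)
               for a, b in dyadic_intervals(len(r)))


# Example: for pure-noise residuals the statistic should stay moderate in size.
rng = np.random.default_rng(0)
print(multires_statistic(rng.normal(size=64)))
```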
Now, one quality this statistic could have would be to asymptotically distinguish between competing hypotheses. In this setting, low-noise asymptotics make more sense than large-sample asymptotics, so we work in that regime.
Our goal is to study the power of this statistic to distinguish among hypotheses asymptotically. It is known (Das Gupta 2008) that asymptotics under a fixed alternative hypothesis lead to trivial results, such as the power always tending toward 1.
Hence, we wish to look at an analogue of the Pitman slope. This can be phrased as follows. In the quantity (2), one term is a function going to zero with the noise level and the other is a constant. We look at the ratio of the test statistic under the alternative and null hypotheses as a way of rescaling. Alternatively, we can make the constant a function of the noise level; we see in what follows that the ratio in effect chooses that function.
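One plausible reading of this rescaled quantity, in notation of our own choosing ($T_\sigma$ for the statistic at noise level $\sigma$, $\rho(\sigma) \to 0$ for the strength of the alternative, $c$ for the constant), is the ratio of tail probabilities under the shrinking alternative and the null:

```latex
% Hedged sketch of a Pitman-slope-style comparison under shrinking alternatives.
% H_1(rho(sigma)): alternative whose signal strength rho(sigma) -> 0 as sigma -> 0.
\[
  \frac{\mathbb{P}_{H_1(\rho(\sigma))}\bigl( T_\sigma > c \bigr)}
       {\mathbb{P}_{H_0}\bigl( T_\sigma > c \bigr)} .
\]
```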
Lemma 1 We can rewrite (2) in an equivalent form.
Proof: Use (1) and multiply through by the suppressed multiplicative factor.
This probability seems difficult to calculate, even asymptotically. Hence we use the limit as a heuristic: the absence of the vanishing term in the denominator within the probability allows us to consider (3) instead.
Before continuing, we need a result for exchanging the limit and the supremum:
Lemma 2 Let a family of random variables over some index set converge, in a suitably uniform sense, to a limiting family. Then, for any fixed threshold, the limit and the supremum may be exchanged in the corresponding exceedance probability.
Proof: Write the event in terms of the supremum over the index set. Now, since the convergence is uniform over the index set, the suprema converge as well, and the probabilities follow, where for the last inequality we use that the exchange of limit and supremum is valid for the necessary kinds of sequences of functions and measures.
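One standard exchange result of this kind reads as follows; the notation $X_\sigma(t)$, $X(t)$, $T$, and $c$ is ours, and the statement is a generic sketch rather than the lemma's exact hypotheses.

```latex
% Generic limit/supremum exchange (illustrative statement in our own notation).
% Assume uniform convergence in probability over the index set T:
\[
  \sup_{t \in T} \bigl| X_\sigma(t) - X(t) \bigr| \;\xrightarrow{\;P\;}\; 0
  \qquad \text{as } \sigma \to 0 .
\]
% Then sup_t X_sigma(t) -> sup_t X(t) in probability, and hence, at every
% continuity point c of the law of sup_t X(t),
\[
  \mathbb{P}\Bigl( \sup_{t \in T} X_\sigma(t) > c \Bigr)
  \;\longrightarrow\;
  \mathbb{P}\Bigl( \sup_{t \in T} X(t) > c \Bigr) .
\]
```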
Now, we would like to examine the rates for which the RHS of (3) tends to one. First, we can compute the RHS of (3) as follows. Define the standardized version of the quantity inside the probability; it is then a random variable with an identifiable distribution. Using a fact that holds for any doubly indexed sequence, we see that under (3) and (4) the probability can be evaluated in the low-noise limit.
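The fact about doubly indexed sequences invoked above may be the following elementary inequality, written here in our own notation:

```latex
% For any doubly indexed sequence a_{sigma,t} (our notation): since
% a_{sigma,t} <= sup_s a_{sigma,s} for every t, taking liminf in sigma and then
% sup over t yields
\[
  \sup_{t} \; \liminf_{\sigma \to 0} \, a_{\sigma, t}
  \;\le\;
  \liminf_{\sigma \to 0} \; \sup_{t} \, a_{\sigma, t} .
\]
```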
Now, we see that this probability goes to 1 under an explicit condition on the rates. We did this calculation for the case where the smoothing operator has as its kernel a non-normalized Gaussian kernel with a common variance throughout.
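For concreteness, by a non-normalized Gaussian kernel with variance $h$ we mean the Gaussian shape without the usual normalizing constant; the symbol $h$ and the one-dimensional form are our own choices for illustration.

```latex
% Non-normalized Gaussian kernel with variance parameter h (illustrative form):
% the Gaussian density without the 1/sqrt(2*pi*h) factor, and the induced operator.
\[
  k_h(x) \;=\; \exp\!\left( -\frac{x^{2}}{2h} \right),
  \qquad
  (K_h f)(x) \;=\; \int k_h(x - y)\, f(y)\, dy .
\]
```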
The result was a pair of asymptotic conditions. However, the second of the two results is not informative. If we additionally impose a restriction on the rates, we obtain two more informative results. We are not sure what happens when that restriction fails.