Uniformly most powerful test
In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power <math>1-\beta</math> among all possible tests of a given size α. For example, according to the Neyman-Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses.
Setting
Let <math>X</math> denote a random vector (corresponding to the measurements), taken from a parametrized family of probability density functions or probability mass functions <math>f_{\theta}(x)</math>, which depends on the unknown deterministic parameter <math>\theta \in \Theta</math>. The parameter space <math>\Theta</math> is partitioned into two disjoint sets <math>\Theta_0</math> and <math>\Theta_1</math>. Let <math>H_0</math> denote the hypothesis that <math>\theta \in \Theta_0</math>, and let <math>H_1</math> denote the hypothesis that <math>\theta \in \Theta_1</math>. The binary test of hypotheses is performed using a test function <math>\phi(x)</math>.
- <math>\phi(x) =
\begin{cases} 1 & \text{if } x \in R \\ 0 & \text{if } x \in A \end{cases}</math> meaning that <math>H_1</math> is in force if the measurement <math>X \in R</math> and that <math>H_0</math> is in force if the measurement <math>X \in A</math>. <math>A \cup R</math> is a disjoint covering of the measurement space.
Formal definition
A test function <math>\phi(x)</math> is UMP of size <math>\alpha</math> if, for any other test function <math>\phi'(x)</math> satisfying
- <math>\sup_{\theta\in\Theta_0}\; E_\theta\phi'(X)=\alpha'\leq\alpha=\sup_{\theta\in\Theta_0}\; E_\theta\phi(X),\,</math>
we have
- <math> E_\theta\phi'(X)=1-\beta'\leq 1-\beta=E_\theta\phi(X) \quad \forall \theta \in \Theta_1. </math>
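For illustration, the following sketch compares two size-<math>\alpha</math> tests in a one-sided normal-mean problem (the distributions, sample size, and the two tests are assumptions chosen for this illustration, not part of the article). The test <math>\phi</math> based on the sample mean has power at least as large as the test <math>\phi'</math> based on a single observation at every <math>\theta \in \Theta_1</math>, which is exactly the domination that a UMP test must achieve against every competitor of no greater size.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Assumed setup for illustration: X_1, ..., X_10 i.i.d. N(theta, 1),
# Theta_0 = (-inf, 0], Theta_1 = (0, inf).  phi rejects for a large sample
# mean; phi_prime rejects for a large first observation.  Both have size
# alpha; phi is at least as powerful as phi_prime at every theta in Theta_1.

alpha, n = 0.05, 10
z = norm.ppf(1 - alpha)                      # z_{1 - alpha}

def power_phi(theta):
    """E_theta[phi(X)] for phi(x) = 1{ sqrt(n) * xbar > z }."""
    return 1 - norm.cdf(z - np.sqrt(n) * theta)

def power_phi_prime(theta):
    """E_theta[phi'(X)] for phi'(x) = 1{ x_1 > z }."""
    return 1 - norm.cdf(z - theta)

for theta in [0.0, 0.25, 0.5, 1.0]:
    print(theta, round(power_phi(theta), 3), round(power_phi_prime(theta), 3))
</syntaxhighlight>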
The Karlin-Rubin theorem
The Karlin-Rubin theorem can be regarded as an extension of the Neyman-Pearson lemma to composite hypotheses. Consider a scalar measurement having a probability density function parameterized by a scalar parameter θ, and define the likelihood ratio <math> l(x) = f_{\theta_1}(x) / f_{\theta_0}(x)</math>. If <math>l(x)</math> is monotone non-decreasing in <math>x</math> for any pair <math>\theta_1 \geq \theta_0</math> (meaning that the greater <math>x</math> is, the more likely <math>H_1</math> is), then the threshold test:
- <math>\phi(x) =
\begin{cases} 1 & \text{if } x > x_0 \\ 0 & \text{if } x < x_0 \end{cases}</math>
- <math>E_{\theta_0}\phi(X)=\alpha</math>
is the UMP test of size α for testing <math> H_0: \theta \leq \theta_0 \text{ vs. } H_1: \theta > \theta_0 </math>.
Note that exactly the same test is also UMP for testing <math> H_0: \theta = \theta_0 \text{ vs. } H_1: \theta > \theta_0 </math>.
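As a concrete sketch of this construction (with an assumed distribution, not one taken from the article), consider a single measurement from the exponential density <math>f_\theta(x) = \theta^{-1} e^{-x/\theta}</math>, whose likelihood ratio is non-decreasing in <math>x</math> for <math>\theta_1 \geq \theta_0</math>; the UMP test then rejects above a threshold calibrated so that the rejection probability at <math>\theta_0</math> equals <math>\alpha</math>.

<syntaxhighlight lang="python">
import numpy as np

# Assumed monotone-likelihood-ratio setup for illustration: a single scalar
# X ~ Exponential with mean theta, density f_theta(x) = (1/theta) exp(-x/theta).
# For theta_1 >= theta_0 the ratio f_{theta_1}(x) / f_{theta_0}(x) is
# non-decreasing in x, so the UMP size-alpha test of H0: theta <= theta_0
# vs. H1: theta > theta_0 rejects when x exceeds x0 with P_{theta_0}(X > x0) = alpha.

def threshold(theta0, alpha=0.05):
    # exp(-x0 / theta0) = alpha  =>  x0 = theta0 * ln(1 / alpha)
    return theta0 * np.log(1.0 / alpha)

def phi(x, theta0, alpha=0.05):
    """Threshold test function: 1 (decide H1) if x > x0, else 0 (decide H0)."""
    return float(x > threshold(theta0, alpha))

# Monte Carlo check that E_{theta_0}[phi(X)] is approximately alpha:
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)       # theta_0 = 1
print(np.mean(x > threshold(1.0)))                  # ~ 0.05
</syntaxhighlight>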
Important case: The exponential family
Although the Karlin-Rubin theorem may seem weak because of its restriction to a scalar parameter and a scalar measurement, it turns out that there is a host of problems for which the theorem holds. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with <math>f_\theta(x) = c(\theta)h(x)\exp(\pi(\theta)T(x))</math> has a monotone non-decreasing likelihood ratio in the sufficient statistic <math>T(x)</math>, provided that <math>\pi(\theta)</math> is non-decreasing.
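To see why, note that for <math>\theta_1 \geq \theta_0</math> the likelihood ratio of two members of this family is
- <math> l(x) = \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)} = \frac{c(\theta_1)}{c(\theta_0)} \exp\left[ \left( \pi(\theta_1) - \pi(\theta_0) \right) T(x) \right], </math>
which is monotone non-decreasing in <math>T(x)</math> whenever <math>\pi(\theta_1) \geq \pi(\theta_0)</math>, so the Karlin-Rubin theorem applies with the threshold test placed on <math>T(x)</math>.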
Example
Let <math>X=(X_0 , X_1 ,\dots , X_{M-1})</math> denote i.i.d. normally distributed <math>N</math>-dimensional random vectors with mean <math>\theta m</math> and covariance matrix <math>R</math>. We then have
- <math>f_\theta (X) = (2 \pi)^{-M N / 2} |R|^{-M / 2} \exp \left\{-\frac{1}{2} \sum_{n=0}^{M-1}(X_n - \theta m)^T R^{-1}(X_n - \theta m) \right\}</math>
- <math> = (2 \pi)^{-M N / 2} |R|^{-M / 2} \exp \left\{-\frac{1}{2} \sum_{n=0}^{M-1}(\theta^2 m^T R^{-1} m) \right\} \cdot \exp \left\{-\frac{1}{2} \sum_{n=0}^{M-1}X_n^T R^{-1} X_n \right\} \cdot \exp \left\{\theta m^T R^{-1} \sum_{n=0}^{M-1}X_n \right\}</math>
which is exactly in the form of the exponential family shown in the previous section, with the sufficient statistic being
- <math>T(X) = m^T R^{-1} \sum_{n=0}^{M-1}X_n.</math>
Thus, we conclude that the test
- <math>\phi(T) =
\begin{cases} 1 & \text{if } T > t_0 \\ 0 & \text{if } T < t_0 \end{cases}</math>
- <math>E_{\theta_0} \phi (T) = \alpha</math>
is the UMP test of size <math>\alpha</math> for testing <math>H_0: \theta \leq \theta_0</math> vs. <math>H_1: \theta > \theta_0</math>.
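Because <math>T(X)</math> is Gaussian under every <math>\theta</math>, the threshold <math>t_0</math> is available in closed form. The sketch below works through the calculation with illustrative values; the particular <math>m</math>, <math>R</math>, <math>M</math>, <math>N</math>, <math>\theta_0</math> and <math>\alpha</math> are assumptions made for this example, not quantities fixed by the article.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Illustrative values (assumptions): N = 3, M = 20, theta_0 = 0, alpha = 0.05,
# with a known signal vector m and covariance matrix R.  Under theta, the
# sufficient statistic T(X) = m^T R^{-1} sum_n X_n is Gaussian with mean
# M * theta * m^T R^{-1} m and variance M * m^T R^{-1} m.

rng = np.random.default_rng(0)
N, M, alpha, theta0 = 3, 20, 0.05, 0.0
m = np.array([1.0, 0.5, -0.5])
R = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
R_inv = np.linalg.inv(R)
q = m @ R_inv @ m                                    # m^T R^{-1} m

# Threshold t0 such that E_{theta_0}[phi(T)] = alpha:
t0 = M * theta0 * q + np.sqrt(M * q) * norm.ppf(1 - alpha)

def T(X):
    """Sufficient statistic T(X) = m^T R^{-1} sum_n X_n for X of shape (M, N)."""
    return m @ R_inv @ X.sum(axis=0)

# Draw one data set with true theta = 0.3 and apply the threshold test.
X = rng.multivariate_normal(mean=0.3 * m, cov=R, size=M)
print("T =", T(X), " t0 =", t0, " reject H0:", bool(T(X) > t0))
</syntaxhighlight>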
Further discussion
Finally, we note that in general, UMP tests do not exist for vector parameters or for two-sided tests (a test in which the alternative hypothesis lies on both sides of the null).
The reason is that in these situations, the most powerful test of a given size for one possible value of the parameter (e.g. for <math>\theta_1</math> where <math>\theta_1 > \theta_0</math>) is different from the most powerful test of the same size for a different value of the parameter (e.g. for <math>\theta_2</math> where <math>\theta_2 < \theta_0</math>). As a result, no single test is uniformly most powerful over the whole composite alternative.
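As a concrete illustration, take a single observation <math>X \sim N(\theta, 1)</math> and <math>H_0: \theta = \theta_0</math>. Against the alternative <math>\theta_1 > \theta_0</math>, the most powerful size-<math>\alpha</math> test given by the Neyman-Pearson lemma rejects when <math>x > \theta_0 + z_{1-\alpha}</math>, whereas against <math>\theta_2 < \theta_0</math> it rejects when <math>x < \theta_0 - z_{1-\alpha}</math>. Since these two rejection regions differ, no single test can be most powerful simultaneously against both sides, and hence no UMP test exists for the two-sided alternative <math>\theta \neq \theta_0</math>.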