Fisher's method
In statistics, Fisher's method, developed by and named for Ronald Fisher, is a data fusion or "meta-analysis" (analysis of analyses) technique for combining the results from several independent tests bearing upon the same overall null hypothesis (H0), as if in a single large test.
Fisher's method combines the extreme-value probability from each test, P(results at least as extreme as those observed, assuming H0 is true), called the "p-value", into one test statistic (X2) having a chi-square distribution, using the formula
- <math>X^2_{2k} = -2\sum_{i=1}^k \log_e(p_i).</math>
The p-value for X2 itself can then be obtained from a chi-square table (or computed directly) using 2k degrees of freedom, where k is the number of tests being combined. As in any significance test, H0 is rejected for small p-values, usually p < 0.05.
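For concreteness, the calculation above can be sketched in a few lines of Python; the function name is illustrative, and SciPy's chi-square survival function is assumed for the upper-tail probability (SciPy's own scipy.stats.combine_pvalues performs the same computation with method="fisher"):

```python
import numpy as np
from scipy import stats

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method.

    Returns the statistic X^2 = -2 * sum(log_e p_i) and the combined
    p-value from a chi-square distribution with 2k degrees of freedom.
    """
    p = np.asarray(p_values, dtype=float)
    x2 = -2.0 * np.sum(np.log(p))        # test statistic
    df = 2 * len(p)                      # 2k degrees of freedom
    p_combined = stats.chi2.sf(x2, df)   # upper-tail probability
    return x2, p_combined

# Example: three independent tests of the same null hypothesis.
x2, p = fisher_combine([0.10, 0.04, 0.15])
print(x2, p)  # reject H0 if the combined p-value is below 0.05

# Equivalent library call:
# stats.combine_pvalues([0.10, 0.04, 0.15], method="fisher")
```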
If the tests are not independent, the null distribution of X2 is more complicated. If the correlations between the <math>\log_e(p_i)</math> are known, they can be used to form an approximation.
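One way such an approximation can be formed, sketched below under the assumption that the full correlation matrix of the <math>\log_e(p_i)</math> is available, is to match the mean and variance of X2 to a scaled chi-square distribution (in the spirit of Brown's method); the function name and interface are hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_combine_dependent(p_values, corr):
    """Approximate Fisher's method for dependent tests by moment matching.

    `corr` is the k-by-k correlation matrix of the log_e(p_i). Under H0 each
    -2*log_e(p_i) is chi-square with 2 df (mean 2, variance 4), so known
    correlations determine the variance of X^2, which is then matched to a
    scaled chi-square distribution c * chi2_f.
    """
    p = np.asarray(p_values, dtype=float)
    corr = np.asarray(corr, dtype=float)
    k = len(p)
    x2 = -2.0 * np.sum(np.log(p))

    mean = 2.0 * k               # E[X^2] under H0
    var = 4.0 * np.sum(corr)     # cov(-2 log p_i, -2 log p_j) = 4 * corr_ij
    c = var / (2.0 * mean)       # scale of the approximating distribution
    f = 2.0 * mean**2 / var      # effective degrees of freedom
    return x2, stats.chi2.sf(x2 / c, f)
```

When the tests are in fact independent, corr is the identity matrix, var reduces to 4k, c to 1, and f to 2k, recovering the standard Fisher calculation.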
References
- Fisher, R. A. (1948) "Combining independent tests of significance", American Statistician, vol. 2, issue 5, page 30. (In response to Question 14)