Comparisonwise Error Rate
the simple question posed by an analysis of variance: do at least two treatment means differ? It may be that, embedded in a group of treatments, there is only one "control" treatment to which every other treatment should be compared, and comparisons among the non-control treatments may be uninteresting. One may also, after performing an analysis
of variance and rejecting the null hypothesis of equality of treatment means, want to know exactly which treatments or groups of treatments differ. Answering these kinds of questions requires careful consideration
of the hypotheses of interest both before and after an experiment is conducted, the Type I error rate selected for each hypothesis, the power of each hypothesis test, and the Type I error rate acceptable for the group of hypotheses as a whole.

Comparisons or Contrasts

If we let μi represent the ith treatment mean and ci a weight associated with the ith treatment mean, then a comparison or contrast can be represented as:

L = c1μ1 + c2μ2 + ... + ckμk = Σ ciμi, where Σ ci = 0.

This contrast is a linear combination of treatment means (other contrasts, such as quadratic and cubic, are also possible). Any weighted linear combination of treatment means whose weights sum to zero is a possible comparison; for example, μ1 − μ2 and μ1 − (μ2 + μ3)/2 are both contrasts. Previously we have performed comparisons between two treatment means using the t statistic:

t = (x̄1 − x̄2) / √[ s²p (1/n1 + 1/n2) ]

with (n1 + n2) − 2 degrees of freedom. This statistic is a "contrast": its numerator follows the general form outlined above, with the weights c1 and c2 equal to 1 and −1, respectively. However, we also see that this contrast is divided by the pooled within-cell (within-group) variation. So a contrast statistic is actually the ratio of a weighted linear combination of means to an estimate of the pooled within-cell (error) variation in the experiment:

t = Σ cix̄i / √[ s²pooled Σ (ci²/ni) ]

with the error degrees of freedom (N − k, for N total observations and k treatments). For a non-directional null hypothesis, t is compared against the two-tailed critical value.
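The contrast t statistic described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the source; the function name `contrast_t` and the example numbers are hypothetical, and the pooled variance is computed as the weighted average of the within-group variances.

```python
import math

def contrast_t(means, ns, variances, c):
    """t statistic for a contrast L = sum(c_i * mean_i), divided by the
    pooled within-group (error) variation, as in the text."""
    assert abs(sum(c)) < 1e-12, "contrast weights must sum to zero"
    # error degrees of freedom: N - k = sum of (n_i - 1)
    df = sum(n - 1 for n in ns)
    # pooled within-group variance (weighted average of group variances)
    s2_pooled = sum((n - 1) * v for n, v in zip(ns, variances)) / df
    # numerator: weighted linear combination of group means
    L = sum(ci * m for ci, m in zip(c, means))
    # denominator: estimate of the contrast's standard error
    se = math.sqrt(s2_pooled * sum(ci ** 2 / n for ci, n in zip(c, ns)))
    return L / se, df

# Two-group comparison with weights (1, -1) reproduces the familiar
# pooled two-sample t-test with (n1 + n2) - 2 degrees of freedom.
t, df = contrast_t(means=[10.0, 8.0], ns=[6, 6], variances=[4.0, 4.0], c=[1, -1])
```

With these illustrative numbers, df is 10 (= 6 + 6 − 2), matching the (n1 + n2) − 2 degrees of freedom given in the text for a two-group contrast.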
If c independent comparisons are each performed at a per-comparison Type I error rate αpc, the experimentwise error rate is:

αew = 1 − (1 − αpc)^c

where αew is the experimentwise error rate, αpc is the per-comparison error rate, and c is the number of comparisons. For example, if 5 independent comparisons were each done at the .05 level, then the probability that at least one of them would result in a Type I error is 1 − (1 − .05)^5 = 0.226. If the comparisons are not independent, then the experimentwise error rate is less than 1 − (1 − αpc)^c. Finally, regardless of whether the comparisons are independent, αew ≤ (c)(αpc); for this example, 0.226 < (5)(.05) = 0.25.
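The experimentwise error rate and its Bonferroni-style upper bound can be checked directly. A minimal sketch (the function name `experimentwise_rate` is illustrative, not from the source):

```python
def experimentwise_rate(alpha_pc, c):
    """P(at least one Type I error) among c independent comparisons,
    each run at per-comparison rate alpha_pc: 1 - (1 - alpha_pc)^c."""
    return 1 - (1 - alpha_pc) ** c

# The worked example from the text: 5 independent comparisons at .05
a_ew = experimentwise_rate(0.05, 5)   # 1 - 0.95**5, about 0.226
bound = 5 * 0.05                      # (c)(alpha_pc) = 0.25, always >= a_ew
```

Running this reproduces the text's numbers: a_ew ≈ 0.226, which is indeed below the bound of 0.25.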
In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries, or Type I errors, among all the hypotheses when performing multiple hypothesis tests.

History

Tukey coined the terms "experimentwise error rate" and "error rate per-experiment" to indicate error rates that the researcher could use as a control level in a multiple hypothesis experiment.

Background

Within the statistical framework, there are several definitions for the term "family". Hochberg & Tamhane (1987) defined a family as "any collection of inferences for which it is meaningful to take into account some combined measure of error".[1] According to Cox (1982), a set of inferences should be regarded as a family: to take into account the selection effect due to data dredging, and to ensure the simultaneous correctness of a set of inferences so as to guarantee a correct overall decision. To summarize, a family could best be defined by the potential selective inference that is being faced: "A family is the smallest set of items of inference in an analysis, interchangeable about their meaning for the goal of research, from which selection of results for action, presentation or highlighting could be made" (Yoav Benjamini).

Classification of multiple hypothesis tests

The following table defines the various
errors committed when testing multiple null hypotheses. Suppose we have m null hypotheses, denoted H1, H2, ..., Hm. Using a statistical test, we reject a null hypothesis if its test is declared significant, and we do not reject it if the test is non-significant. Summing the test results over the Hi gives the following table and related random