Experimentwise and Comparison-Wise Error Rates
For c independent comparisons, the experimentwise error rate is:

αew = 1 − (1 − αpc)^c

where αew is the experimentwise error rate, αpc is the per-comparison error rate, and c is the number of comparisons. For example, if 5 independent comparisons were each done at the .05 level, then the probability that at least one of them results in a Type I error is 1 − (1 − .05)^5 = 0.226. If the comparisons are not independent, the experimentwise error rate is less than 1 − (1 − αpc)^c. Finally, regardless of whether the comparisons are independent,

αew ≤ (c)(αpc)

For this example, .226 ≤ (5)(.05) = 0.25.
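These two quantities are easy to check numerically. The sketch below reproduces the worked example; the function names are illustrative, not from the source:

```python
def experimentwise_error_rate(alpha_pc: float, c: int) -> float:
    """Probability of at least one Type I error among c independent
    comparisons, each tested at the per-comparison level alpha_pc."""
    return 1 - (1 - alpha_pc) ** c

def bonferroni_bound(alpha_pc: float, c: int) -> float:
    """Upper bound that holds whether or not the comparisons are
    independent: alpha_ew <= c * alpha_pc."""
    return c * alpha_pc

# The worked example from the text: 5 independent tests at alpha = .05.
alpha_ew = experimentwise_error_rate(0.05, 5)
print(round(alpha_ew, 3))                      # 0.226
print(round(bonferroni_bound(0.05, 5), 3))     # 0.25
```

Note that the exact rate (0.226) sits below the Bonferroni bound (0.25), as the inequality requires.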
Questions of interest often go beyond the simple question posed by an analysis of variance: do at least two treatment means differ? It may be that, embedded in a group of treatments, there is only one "control" treatment to which every other treatment should be compared, and comparisons among the non-control treatments may be uninteresting. One may also, after performing an analysis of variance and rejecting the null hypothesis of equality of treatment means, want to know exactly which treatments or groups of treatments differ. Answering these kinds of questions requires careful consideration of the hypotheses of interest both before and after the experiment is conducted, the Type I error rate selected for each hypothesis, the power of each hypothesis test, and the Type I error rate acceptable for the group of hypotheses as a whole.

Comparisons or Contrasts

If we let μi represent the ith treatment mean and ci a weight associated with it, then a comparison or contrast can be represented as:

L = Σ ci μi, where Σ ci = 0

This contrast is a linear combination of treatment means (other contrasts, such as quadratic and cubic trend contrasts, are also possible). Any weighted linear combination of treatment means whose weights sum to zero is a possible comparison. For example, we have previously performed comparisons between two treatment means using the t statistic:

t = (Ȳ1 − Ȳ2) / √(s²p (1/n1 + 1/n2))

with (n1 + n2) − 2 degrees of freedom. This statistic is a "contrast": the numerator follows the general form outlined above, with the weights c1 and c2 equal to 1 and −1, respectively. We also see that the contrast is divided by an estimate of the pooled within-cell, or error, variation. So a contrast statistic is actually the ratio of a linear combination of weighted means to an estimate of the pooled within-cell variation in the experiment:

t = (Σ ci Ȳi) / √(MSerror Σ ci²/ni)

with the error (within-cell) degrees of freedom. For a non-directional null hypothesis, t can be replaced by F = t², with 1 and the error degrees of freedom.
In general, then, a contrast statistic is the ratio of a linear combination of weighted means to the square root of the mean square within cells times the sum of the squared weights, each divided by its cell sample size:

t = (Σ ci Ȳi) / √(MSerror Σ ci²/ni)

where the ci's are the weights assigned to the treatment means, ni is the number of observations in each cell, and MSerror is the within-cell variation pooled across the entire experiment.
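This general formula can be sketched directly in Python. The data and function names below are hypothetical, chosen to illustrate a common case: a contrast comparing a control mean against the average of two treatment means, with the check that t² equals the corresponding F:

```python
import math

def contrast_t(means, weights, ns, ms_error):
    """t statistic for the contrast sum(c_i * ybar_i), assuming the
    weights sum to zero and ms_error is the pooled within-cell
    mean square from the ANOVA."""
    assert abs(sum(weights)) < 1e-12, "contrast weights must sum to zero"
    numerator = sum(c * m for c, m in zip(weights, means))
    denominator = math.sqrt(ms_error * sum(c**2 / n for c, n in zip(weights, ns)))
    return numerator / denominator

# Hypothetical three-group experiment: control vs. the two treatments.
means = [10.0, 12.5, 13.1]    # cell (treatment) means
ns = [8, 8, 8]                # observations per cell
ms_error = 4.2                # pooled within-cell mean square
weights = [1.0, -0.5, -0.5]   # control minus the average of the treatments

t = contrast_t(means, weights, ns, ms_error)
f = t ** 2   # F with 1 and the error df (here sum(n_i - 1) = 21)
```

With these illustrative numbers the numerator is 10.0 − 0.5(12.5) − 0.5(13.1) = −2.8, so the contrast tests whether the control mean falls below the average treatment mean.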
Family-Wise Error Rate

In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries, or Type I errors, among all the hypotheses when performing multiple hypothesis tests. Procedures for controlling the FWER include the Bonferroni procedure, the Šidák procedure, Tukey's procedure, Holm's step-down procedure (1979), Hochberg's step-up procedure, Dunnett's correction, Scheffé's method, and resampling procedures.

History

Tukey coined the terms "experimentwise error rate" and "error rate per-experiment" to indicate error rates that a researcher could use as a control level in a multiple hypothesis experiment.

Background

Within the statistical framework, there are several definitions for the term "family". Hochberg and Tamhane (1987) defined a family as "any collection of inferences for which it is meaningful to take into account some combined measure of error". According to Cox (1982), a set of inferences should be regarded as a family in order to take into account the selection effect due to data dredging, and to ensure the simultaneous correctness of a set of inferences so as to guarantee a correct overall decision. To summarize, a family could best be defined by the potential selective inference being faced: a family is the smallest set of items of inference in an analysis, interchangeable in their meaning for the goal of research, from which selection of results for action, presentation, or highlighting could be made (Yoav Benjamini).
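Two of the controlling procedures named above, Bonferroni and Šidák, amount to shrinking the per-comparison level so the family-wise rate stays at a target α. A minimal sketch, with illustrative function names rather than any particular library's API:

```python
def bonferroni_alpha(alpha_fwer: float, m: int) -> float:
    """Per-comparison level under Bonferroni: test each of the
    m hypotheses at alpha / m. Controls FWER <= alpha always."""
    return alpha_fwer / m

def sidak_alpha(alpha_fwer: float, m: int) -> float:
    """Per-comparison level under Sidak: 1 - (1 - alpha)**(1/m).
    Gives FWER exactly alpha when the m tests are independent."""
    return 1 - (1 - alpha_fwer) ** (1 / m)

# Holding the FWER at .05 across 5 tests:
b = bonferroni_alpha(0.05, 5)   # 0.01
s = sidak_alpha(0.05, 5)        # ~0.0102, slightly less conservative
```

The Šidák level is always at least as large as the Bonferroni level, which is why it is the less conservative of the two when independence holds.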