Per Comparison Error Rate
When considering a result under many hypotheses, some tests will give false positives; many statisticians therefore make use of the Bonferroni correction, the false discovery rate, and other methods to determine the odds of a negative result appearing to be positive. The per-comparison error rate is simply the significance level applied to each individual test: if each test in a family is conducted at the 0.05 level, then the per-comparison error rate would be 0.05. Compare with the familywise error rate.

References
Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B. 57 (1): 289–300. MR 1325392.
Multiple comparisons examine differences among the means of factor levels in an ANOVA.

What is the individual error rate?
The individual error rate is the maximum probability that a single comparison will incorrectly conclude that the observed difference is significantly different from the null hypothesis.

What is the family error rate?
The family error rate is the maximum probability that a procedure consisting of more than one comparison will incorrectly conclude that at least one of the observed differences is significantly different from the null hypothesis. The family error rate is based on both the individual error rate and the number of comparisons. For a single comparison, the family error rate equals the individual error rate, which is the alpha value. Each additional comparison, however, causes the family error rate to increase in a cumulative manner. It is important to consider the family error rate when making multiple comparisons, because the chance of committing a type I error across a series of comparisons is greater than the error rate for any one comparison alone.

Example of setting the individual error rate and family error rate
You do a one-way ANOVA to examine steel strength from five different steel plants using 25 samples from each plant. You decide to examine all 10 pairwise comparisons between the five plants to determine specifically which means are different. If you assign an alpha of 0.05 to each of the 10 comparisons (the individual error rate), Minitab calculates a family error rate of 0.28 for the set of 10 comparisons. If instead you want the entire set of comparisons to have a family error rate of 0.05, Minitab automatically assigns each individual comparison an alpha of 0.007.
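The relationship between the two rates can be sketched in a few lines of Python. This is not Minitab's algorithm: it assumes the comparisons are independent, whereas Minitab's reported figures (0.28 and 0.007 in the example above) account for the dependence among pairwise comparisons, so the numbers below differ somewhat.

```python
# Sketch of the individual/family error-rate relationship, under the
# simplifying (and not quite realistic) assumption that the comparisons
# are independent. Minitab's own calculation accounts for dependence
# among pairwise comparisons, so its figures differ from these.

def family_error_rate(alpha_individual: float, c: int) -> float:
    """Family error rate for c independent comparisons at a given alpha."""
    return 1 - (1 - alpha_individual) ** c

def individual_alpha(alpha_family: float, c: int) -> float:
    """Sidak-style per-comparison alpha that yields the target family rate."""
    return 1 - (1 - alpha_family) ** (1 / c)

print(round(family_error_rate(0.05, 10), 3))  # 0.401 (Minitab reports 0.28)
print(round(individual_alpha(0.05, 10), 4))   # 0.0051 (Minitab reports 0.007)
```

That Minitab's family error rate (0.28) is lower than the independence figure (0.40) reflects the positive correlation among comparisons that share a group mean.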
Tukey's method, Fisher's least significant difference (LSD), Hsu's multiple comparisons with the best (MCB), and Bonferroni confidence intervals are methods for calculating and controlling the individual and family error rates for multiple comparisons.
This section describes a number of different ways of testing which means are different. Before describing the tests, it is necessary to consider two different ways of thinking about error and how they are relevant to doing multiple comparisons.

Error rate per comparison (PC)
This is simply the Type I error rate that we have talked about all along. So far, we have simply been setting its value at .05, a 5% chance of making an error.

Familywise error rate (FW)
Often, after an ANOVA, we want to do a number of comparisons, not just one. The collection of comparisons we do is described as the "family". The familywise error rate is the probability that at least one of these comparisons will include a Type I error. Assuming that α′ is the per-comparison error rate, then:

    per-comparison error: α = α′
    familywise error: α = 1 − (1 − α′)^c

Thus, if we do two comparisons but keep α′ at 0.05, the familywise error will really be:

    α = 1 − (1 − 0.05)² = 1 − (0.95)² = 1 − 0.9025 = 0.0975

Thus, there is almost a 10% chance of at least one of the comparisons being significant when we do two comparisons, even when the nulls are true. The basic problem, then, is that if we are doing many comparisons, we want to somehow control our familywise error so that we don't end up concluding that differences are there when they really are not. The various tests we will talk about differ in terms of how they do this. They will also be categorized as either "a priori" or "post hoc".

A priori: a priori tests are comparisons that the experimenter clearly intended to test before collecting any data.
Post hoc: post hoc tests are comparisons the experimenter has decided to test after collecting the data, looking at the means, and noting which means "seem" different.
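The 0.0975 figure can also be checked by simulation. The sketch below (trial count and the assumption of independent comparisons are choices made here, not part of the original notes) draws many "families" of two true-null comparisons at α′ = 0.05 and counts how often at least one rejects.

```python
import random

# Monte Carlo check of the familywise error formula, assuming independent
# comparisons. Each comparison under a true null rejects with probability
# alpha; we count families with at least one (false) rejection.
random.seed(42)
trials, c, alpha = 200_000, 2, 0.05

hits = sum(
    any(random.random() < alpha for _ in range(c))
    for _ in range(trials)
)
print(round(hits / trials, 3))  # close to 1 - 0.95**2 = 0.0975
```

The simulated proportion lands near the analytic value 1 − (1 − 0.05)² = 0.0975, with small sampling noise.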
The probability of making a Type I error is smaller for a priori tests because, when doing post hoc tests, you are essentially doing all possible comparisons before deciding which to test in a formal statistical manner.

An example for context
See page 351 for a very complete description of the morphine tolerance study of Siegel (1975). Highlights: paw-lick latency as a measure of pain resistance; tolerance to morphine devel