ANOVA and the Family-Wise Error Rate
In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypothesis tests.
History

Tukey coined the terms "experimentwise error rate"
and "error rate per-experiment" to indicate error rates that the researcher could use as a control level in a multiple hypothesis experiment.[citation needed] Background[edit] Within the statistical framework, there are several definitions for the term "family": Hochberg & Tamhane defined "family" in 1987 as "any collection of inferences for which it is meaningful to take into account some combined measure of error".[1][pageneeded] According to Cox in 1982, family wise error calculator a set of inferences should be regarded a family:[citation needed] To take into account the selection effect due to data dredging To ensure simultaneous correctness of a set of inferences as to guarantee a correct overall decision To summarize, a family could best be defined by the potential selective inference that is being faced: A family is the smallest set of items of inference in an analysis, interchangeable about their meaning for the goal of research, from which selection of results for action, presentation or highlighting could be made (Yoav Benjamini).[citation needed] Classification of multiple hypothesis tests[edit] Main article: Classification of multiple hypothesis tests The following table defines various errors committed when testing multiple null hypotheses. Suppose we have a number m of multiple null hypotheses, denoted by: H1,H2,...,Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing the test results over Hi will give us the following table and related random variables: Null hypothesis is true (H0) Alternative hypothesis is true (HA) Total Test is declared significant V {\displaystyle V} S {\displaystyle S} R {\displaystyle R} Test is declared
Correcting for family-wise error rate with a series of repeated measures ANOVAs?
(Cross Validated question: http://stats.stackexchange.com/questions/156007/correcting-for-family-wise-error-rate-with-series-of-repeated-measures-anova ; see also https://en.wikipedia.org/wiki/Family-wise_error_rate)

I am trying to make requested revisions to an accepted manuscript, and I am baffled by the following comment from a reviewer: "Eight hypothesis tests are reported in the final paragraph of the results section but the authors do not say if they corrected for their Type I familywise error rate. What correction was done or should be done?" To give background, I had a pre- and post-test design, and change was assessed on eight different (theoretically distinct) variables. So, basically, I conducted eight separate repeated measures ANOVAs. I can understand that the Type I error rate would be inflated from doing multiple tests, but I have not come across any correction for use with a series of RM ANOVAs. Is there a standard procedure or recommended correction? (asked by Sara Sohr-Preston, Jun 8 '15 at 16:39)

Comment: Because they are "(theoretically distinct) variables", you could simply state that the eight tests represent different families of tests and therefore each test is conducted with an alpha of .05. Otherwise, if you were in fact to adjust your alpha level, you could use a Bonferroni correction and use an alpha of .05/8. (Patrick Coulombe, Jun 8 '15 at 17:06)
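To make the comment's Bonferroni suggestion concrete, here is a minimal sketch in Python. The p-values are hypothetical placeholders, not results from the questioner's manuscript:

```python
# Minimal sketch: Bonferroni correction for eight hypothesis tests.
# The p-values below are hypothetical placeholders, not the question's data.
p_values = [0.001, 0.012, 0.034, 0.049, 0.210, 0.003, 0.076, 0.008]

alpha = 0.05
m = len(p_values)
bonferroni_alpha = alpha / m  # per-test threshold: 0.05 / 8 = 0.00625

for i, p in enumerate(p_values, start=1):
    decision = "reject H0" if p < bonferroni_alpha else "fail to reject H0"
    print(f"Test {i}: p = {p:.3f} -> {decision}")

# Equivalent view: Bonferroni-adjusted p-values (capped at 1)
adjusted = [min(p * m, 1.0) for p in p_values]
print("Adjusted p-values:", [round(p, 3) for p in adjusted])
```

The same adjustment, along with Holm and Šidák variants, is also available as multipletests in statsmodels.stats.multitest.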
These course notes (http://www.psych.utoronto.ca/courses/c1/chap12/chap12.html) describe a number of different ways of testing which means are different. Before describing the tests, it is necessary to consider two different ways of thinking about error and how they are relevant to doing multiple comparisons.

Error Rate per Comparison (PC): This is simply the Type I error rate that we have talked about all along. So far, we have been simply setting its value at .05, a 5% chance of making an error.

Familywise Error Rate (FW): Often, after an ANOVA, we want to do a number of comparisons, not just one. The collection of comparisons we do is described as the "family". The familywise error rate is the probability that at least one of these comparisons will include a Type I error. Assuming that α′ is the per-comparison error rate and c is the number of comparisons, then:

per-comparison error: α = α′
familywise error: α = 1 − (1 − α′)^c

Thus, if we do two comparisons but keep α′ at 0.05, the familywise error will really be:

α = 1 − (1 − 0.05)^2 = 1 − (0.95)^2 = 1 − 0.9025 = 0.0975

Thus, there is almost a 10% chance of at least one comparison being significant when we do two comparisons, even when the nulls are true (the sketch after these notes evaluates this formula for several values of c). The basic problem, then, is that if we are doing many comparisons, we want to somehow control our familywise error so that we do not end up concluding that differences are there when they really are not. The various tests we will talk about differ in terms of how they do this. They are also categorized as either "a priori" or "post hoc":

A priori: A priori tests are comparisons that the experimenter clearly intended to test before collecting any data.
Post hoc: Post hoc tests are comparisons the experimenter has decided to test after collecting the data, looking at the means, and noting which means "seem" different.

The probability of making a Type I error is smaller for a priori tests because, when doing post hoc tests, you are essentially doing all possible comparisons before deciding which to test in a formal statistical manner.

An example for context: see page 351 for a very complete description of the morphine tolerance study of Siegel (1975). Highlights: paw-lick latency
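A quick way to see how the familywise error rate grows with the number of comparisons is to evaluate the formula above directly. A minimal sketch in Python (the values of c are arbitrary illustrations):

```python
# Familywise error rate for c independent comparisons,
# each tested at per-comparison level alpha_prime.
def familywise_error(alpha_prime: float, c: int) -> float:
    return 1 - (1 - alpha_prime) ** c

# Sidak-style inversion: the per-comparison alpha that holds the
# familywise rate at alpha_fw across c independent comparisons.
def per_comparison_alpha(alpha_fw: float, c: int) -> float:
    return 1 - (1 - alpha_fw) ** (1 / c)

for c in (1, 2, 5, 10, 20):
    print(f"c = {c:2d}: FW = {familywise_error(0.05, c):.4f}, "
          f"alpha' for FW = .05: {per_comparison_alpha(0.05, c):.4f}")
# c = 2 reproduces the 0.0975 worked above; by c = 20 the familywise
# rate is roughly 0.64 unless the per-comparison alpha is lowered.
```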