Family-Wise Error Rate in ANOVA
Below we describe a number of different ways of testing which means are different. Before describing the tests, it is necessary to consider two different ways of thinking about error and how they are relevant to doing multiple comparisons.

Error Rate per Comparison (PC): This is simply the Type I error rate that we have talked about all along. So far, we have simply been setting its value at .05, a 5% chance of making an error on any single comparison.

Familywise Error Rate (FW): Often, after an ANOVA, we want to do a number of post hoc comparisons, not just one. The collection of comparisons we do is described as the "family". The familywise error rate is the probability that at least one of these comparisons will include a Type I error. Assuming that α′ is the per-comparison error rate and c is the number of comparisons, then:

per-comparison error: α = α′
familywise error: α = 1 − (1 − α′)^c
Thus, if we do two comparisons but keep α′ at 0.05, the familywise error will really be:

α = 1 − (1 − 0.05)^2 = 1 − (0.95)^2 = 1 − 0.9025 = 0.0975

Thus, there is almost a 10% chance of at least one of the comparisons being significant when we do two comparisons, even when the nulls are true (the short numerical sketch below extends this calculation to larger families of comparisons). The basic problem, then, is that if we are doing many comparisons, we want to somehow control our familywise error so that we don't end up concluding that differences are there when they really are not. The various tests we will talk about differ in terms of how they do this. They can also be categorized as either "a priori" or "post hoc".

A priori: A priori tests are comparisons that the experimenter clearly intended to test before collecting any data.

Post hoc: Post hoc tests are comparisons the experimenter has decided to test after collecting the data, looking at the means, and noting which means "seem" different. The probability of making a Type I error is smaller for a priori tests because, when doing post hoc tests, you are essentially considering all possible comparisons before deciding which ones to test.
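To make the formula above concrete, here is a minimal Python sketch (not part of the original chapter) that computes the familywise error rate 1 − (1 − α′)^c for several family sizes, assuming independent comparisons each run at α′ = .05:

# Familywise error rate for c independent comparisons, each run at alpha'.
def familywise_error(alpha_per_comparison, n_comparisons):
    """Probability of at least one Type I error across n_comparisons independent tests."""
    return 1 - (1 - alpha_per_comparison) ** n_comparisons

alpha = 0.05
for c in (1, 2, 3, 5, 10):
    print(f"c = {c:2d}: familywise error = {familywise_error(alpha, c):.4f}")
# c = 2 gives 0.0975 (the ~10% figure above); c = 3 gives 0.1426.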
Experiment-wise error rate
(source: http://www.real-statistics.com/one-way-analysis-of-variance-anova/experiment-wise-error-rate/)

We could have conducted the analysis for Example 1 of Basic Concepts for ANOVA by conducting multiple two-sample tests. E.g., to decide whether or not to reject the following null hypothesis

H0: μ1 = μ2 = μ3

we can use the following three separate null hypotheses:

H0: μ1 = μ2
H0: μ2 = μ3
H0: μ1 = μ3

If any of these null hypotheses is rejected, then the original null hypothesis is rejected. Note, however, that if you set α = .05 for each of the three sub-analyses, then the overall alpha value is about .14, since 1 − (1 − α)^3 = 1 − (1 − .05)^3 = 0.142625 (see Example 6 of Basic Probability Concepts). This means that the probability of making at least one Type I error across the three sub-analyses is about 14%, not 5%, even when the original null hypothesis is true.
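As an illustration of the point above, the following Python sketch (with invented data, not Example 1 from the Real Statistics site) contrasts the single omnibus one-way ANOVA with the three separate two-sample t tests, each run at α = .05 and with no correction; the group data and names here are assumptions made for the example only:

import numpy as np
from scipy import stats

# Three made-up samples drawn from populations with equal means, so any
# rejection below would be a Type I error.
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(50, 10, 20) for _ in range(3))

# Single omnibus test of H0: mu1 = mu2 = mu3
f_stat, p_anova = stats.f_oneway(g1, g2, g3)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Three separate pairwise tests; rejecting any one of them rejects the
# overall H0, so the effective experiment-wise alpha is 1 - (1 - .05)**3.
pairs = {"mu1 = mu2": (g1, g2), "mu2 = mu3": (g2, g3), "mu1 = mu3": (g1, g3)}
for name, (a, b) in pairs.items():
    t_stat, p = stats.ttest_ind(a, b)
    print(f"H0: {name}: t = {t_stat:.2f}, p = {p:.3f}")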
t tests after one-way ANOVA, without correction for multiple comparisons
(GraphPad FAQ #1533, last modified 8-September-2009, http://www.graphpad.com/support/faqid/1533/)

Correcting for multiple comparisons is not essential. Testing multiple hypotheses at once creates a dilemma that cannot be escaped. If you do not make any corrections for multiple comparisons, it becomes 'too easy' to find 'significant' findings by chance; it is too easy to make a Type I error. But if you do correct for multiple comparisons, you lose power to detect real differences; it is too easy to make a Type II error.

The only way to escape this dilemma is to focus your analyses, and thus avoid making multiple comparisons. For example, if your treatments are ordered, don't compare each mean with each other mean (multiple comparisons); instead do one test for trend to ask if the outcome is linearly related to treatment number. Another example: if some of the groups are simply positive and negative controls needed to verify that an experiment 'worked', don't include them as part of the ANOVA and as part of the multiple comparisons. Once you have verified that the experiment worked, throw away those controls and only analyze the data that relate to your experimental hypothesis, which might be a single comparison.

If you need to test multiple hypotheses at once, there is simply no way to escape the dilemma. If you use multiple comparison procedures to reduce the risk of making a Type I error, you will increase your risk of making a Type II error. If you don't make corrections for multiple comparisons, you increase your risk of making a Type I error and lower the chance of making a Type II error.

How to compute individual P values without correcting for multiple comparisons

Saville suggests that corrections for multiple comparisons not be performed, but rather that you simply report all your data and let your readers draw the conclusions (D. J. Saville, "Multiple Comparison Procedures: The Practical Solution," The American Statistician, 44:174-180, 1990). This requires you to alert your readers to the fact that you have not done any correction for multiple comparisons, and to honestly report all the comparisons you did make, so the reader can informally adjust for multiple comparisons while reviewing the data.

A t test compares the difference between two means with a standard error of that difference, which is computed from the pooled standard deviation of the groups and their sample sizes. One-way ANOVA assumes that all the data are sampled from populations that follow a Gaussian distribution, and that the standard deviation of all of these populations is the same.
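The last paragraph lists the ingredients of such a t test: the difference between two means divided by a standard error built from a pooled standard deviation and the sample sizes. Below is a minimal Python sketch of one way to compute unadjusted (uncorrected) pairwise P values along those lines, assuming the pooled SD is taken from the ANOVA's within-group (residual) mean square; this is an illustrative sketch with made-up data, not GraphPad's own code:

from itertools import combinations
import numpy as np
from scipy import stats

def uncorrected_pairwise_t(groups):
    """Unadjusted pairwise t tests using the pooled SD from all groups."""
    data = list(groups.values())
    n_total = sum(len(g) for g in data)
    k = len(data)
    # Within-group (residual) sum of squares and mean square from the one-way ANOVA
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
    df_within = n_total - k
    ms_within = ss_within / df_within
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        se = np.sqrt(ms_within * (1 / len(a) + 1 / len(b)))   # SE of the difference in means
        t = (a.mean() - b.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_within)                 # two-sided, uncorrected
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f} (uncorrected)")

# Example with made-up data
rng = np.random.default_rng(1)
uncorrected_pairwise_t({
    "A": rng.normal(10, 2, 12),
    "B": rng.normal(11, 2, 12),
    "C": rng.normal(13, 2, 12),
})

As the FAQ suggests, if you report such uncorrected P values, say so explicitly and report every comparison you made, so readers can informally adjust for multiplicity themselves.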