Familywise Error Rate (ANOVA)
The sections below describe a number of different ways of testing which means are different. Before describing the tests, it is necessary to consider two different ways of thinking about error and how they are relevant to doing multiple comparisons.

Error Rate per Comparison (PC)

This is simply the Type I error rate that we have talked about all along. So far, we have simply been setting its value at .05, a 5% chance of making an error.

Familywise Error Rate (FW)

Often, after an ANOVA, we want to do a number of comparisons, not just one. The collection of comparisons we do is described as the "family". The familywise error rate is the probability that at least one of these comparisons will include a Type I error. Assuming that α′ is the per-comparison error rate and that the c comparisons are independent, then:

the per-comparison error: α(PC) = α′

but the familywise error: α(FW) = 1 − (1 − α′)^c

Thus, if we do two comparisons but keep α′ at 0.05, the familywise error will really be:

α(FW) = 1 − (1 − 0.05)^2 = 1 − (0.95)^2 = 1 − 0.9025 = 0.0975

Thus, there is almost a 10% chance of at least one of the comparisons being significant when we do two comparisons, even when the nulls are true.

The basic problem, then, is that if we are doing many comparisons, we want to somehow control our familywise error so that we don't end up concluding that differences are there when they really are not. The various tests we will talk about differ in terms of how they do this. They will also be categorized as being either "a priori" or "post hoc".

A priori: A priori tests are comparisons that the experimenter clearly intended to test before collecting any data.

Post hoc: Post hoc tests are comparisons the experimenter has decided to test after collecting the data, looking at the means, and noting which means "seem" different.

The probability of making a Type I error is smaller for a priori tests because, when doing post hoc tests, you are essentially doing all possible comparisons before deciding which to test in a formal statistical manner.
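The relationship between the per-comparison rate and the familywise rate can be checked numerically. A minimal sketch in Python (the function name familywise_error is ours, for illustration; it assumes the c comparisons are independent, as the formula does):

```python
def familywise_error(alpha_pc: float, c: int) -> float:
    """Familywise Type I error rate for c independent comparisons,
    each tested at per-comparison alpha alpha_pc:
        alpha_FW = 1 - (1 - alpha_pc)**c
    """
    return 1 - (1 - alpha_pc) ** c

# Two comparisons at alpha' = .05, as in the text:
print(familywise_error(0.05, 2))   # ≈ 0.0975

# The rate grows quickly with the number of comparisons:
for c in (1, 2, 5, 10):
    print(c, round(familywise_error(0.05, c), 4))
```

Note how ten comparisons at α′ = .05 already push the familywise rate past 40%, which is why the tests discussed below try to control it.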
Experiment-wise Error Rate

We could have conducted the analysis for Example 1 of Basic Concepts for ANOVA by conducting multiple two-sample tests. E.g. to decide whether or not to reject the following null hypothesis

H0: μ1 = μ2 = μ3

we can use the following three separate null hypotheses:

H0: μ1 = μ2
H0: μ2 = μ3
H0: μ1 = μ3

If any of these null hypotheses is rejected, then the original null hypothesis is rejected. Note, however, that if you set α = .05 for each of the three sub-analyses, then the overall alpha value is about .14, since 1 − (1 − α)^3 = 1 − (1 − .05)^3 = 0.142625 (see Example 6 of Basic Probability Concepts). This means that the probability of rejecting the null hypothesis even when it is true (a Type I error) is 14.2625%.
For k groups, you would need to run m = COMBIN(k, 2) such tests, and so the resulting overall alpha would be 1 − (1 − α)^m.
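The same calculation can be sketched in Python, where math.comb plays the role of Excel's COMBIN (the function name pairwise_overall_alpha is ours, for illustration):

```python
from math import comb

def pairwise_overall_alpha(k: int, alpha: float = 0.05):
    """Number of pairwise tests among k groups, and the resulting
    overall (experiment-wise) alpha, 1 - (1 - alpha)**m."""
    m = comb(k, 2)                 # same as Excel's COMBIN(k, 2)
    return m, 1 - (1 - alpha) ** m

# k = 3 groups -> m = 3 pairwise tests, overall alpha = 0.142625:
m, a = pairwise_overall_alpha(3)
print(m, round(a, 6))

# With more groups the problem worsens rapidly, e.g. k = 6 gives m = 15 tests:
print(pairwise_overall_alpha(6))
```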
Familywise alpha is the probability of rejecting one or more absolutely true null hypotheses in a family of several absolutely true null hypotheses. Rejecting an absolutely true null hypothesis is known as a "Type I error." It is important to keep in mind that one cannot make a Type I error unless one tests an absolutely true null hypothesis. Accordingly, if absolutely true null hypotheses are unlikely to be encountered, then the unconditional probability of making a Type I error will be quite small.

Psychologists and some others act as if they think they will burn in hell for an eternity if they ever make even a single Type I error -- that is, if they ever reject a null hypothesis when, in fact, that hypothesis is absolutely true. I and many others are of the opinion that the unconditional probability of making a Type I error is close to zero, since it is highly unlikely that one will ever test a null hypothesis that is absolutely true. Why worry so much about making an error that is almost impossible to make?

There exists a variety of techniques for capping familywise alpha at some value, usually .05. Why .05? Maybe .05 is, sometimes, a reasonable criterion for statistical significance when making a single comparison, but is it really reasonable to cap familywise alpha at .05? Even if it is, what reasonably constitutes the family for which one should cap familywise alpha at .05? Is it the family of hypotheses that I am testing for this particular outcome variable in this particular research project? All the comparisons made in this particular research project? All the hypotheses I am testing this month, this year, or during my lifetime? All the hypotheses all psychologists are testing this month, this year, or whenever? Many times I have asked what reasonably constitutes a family of comparisons for which alpha should be capped at .05, and I have never been satisfied with any answer I have received.
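The best known of those capping techniques is the Bonferroni correction, which simply divides the familywise cap by the number of comparisons. The text above does not name it, so this is our illustration of one such technique, not the author's recommendation (the function name bonferroni_alpha is ours):

```python
def bonferroni_alpha(alpha_fw: float, m: int) -> float:
    """Per-comparison alpha that keeps the familywise Type I error
    rate at or below alpha_fw across m comparisons (Bonferroni)."""
    return alpha_fw / m

# Capping familywise alpha at .05 across the 3 pairwise tests above:
a_pc = bonferroni_alpha(0.05, 3)
print(a_pc)                       # ≈ 0.0167 per comparison
print(1 - (1 - a_pc) ** 3)        # resulting familywise rate, below .05
```

The correction is conservative: the resulting familywise rate is strictly below the cap, and the per-comparison tests lose power as m grows, which is exactly the trade-off the questions above are probing.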