Family-wise error rate
In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries (type I errors) among all the hypotheses when performing multiple hypothesis tests.
History

Tukey coined the terms "experimentwise error rate" and "error rate per-experiment" to indicate error rates that the researcher could use as a control level in a multiple hypothesis experiment.[citation needed]

Background

Within the statistical framework, there are several definitions
for the term "family":

Hochberg & Tamhane (1987) defined "family" as "any collection of inferences for which it is meaningful to take into account some combined measure of error".[1][page needed]

According to Cox (1982), a set of inferences should be regarded as a family:[citation needed]

1. To take into account the selection effect due to data dredging
2. To ensure the simultaneous correctness of a set of inferences so as to guarantee a correct overall decision

To summarize, a family could best be defined by the potential selective inference that is being faced: a family is the smallest set of items of inference in an analysis, interchangeable in terms of their meaning for the goal of research, from which selection of results for action, presentation or highlighting could be made (Yoav Benjamini).[citation needed]

Classification of multiple hypothesis tests

Main article: Classification of multiple hypothesis tests

The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by H1, H2, ..., Hm.

                                 Null hypothesis true    Alternative hypothesis true    Total
Test declared significant                V                           S                    R
Test declared non-significant            U                           T                  m − R
Total                                    m0                        m − m0                 m

Here m0 is the number of true null hypotheses, V is the number of false positives (type I errors), S the number of true positives, U the number of true negatives, T the number of false negatives (type II errors), and R the total number of rejected null hypotheses. In these terms, the FWER is the probability of making at least one type I error:

FWER = Pr(V ≥ 1) = 1 − Pr(V = 0)
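To make the definition concrete, the following minimal Python sketch (an illustration added here, not part of the original text; the function name is mine, and NumPy is assumed) estimates FWER = Pr(V ≥ 1) by simulation when m independent tests are carried out at per-test level α and every null hypothesis is true, and compares the estimate with the closed form 1 − (1 − α)^m that holds under independence:

import numpy as np

def simulate_fwer(m=10, alpha=0.05, n_trials=100_000, seed=0):
    """Estimate FWER = Pr(V >= 1) when all m null hypotheses are true.

    Each trial draws m independent p-values (uniform under the null)
    and counts V, the number of false rejections at level alpha.
    """
    rng = np.random.default_rng(seed)
    p = rng.uniform(size=(n_trials, m))  # p-values under true nulls
    V = (p < alpha).sum(axis=1)          # false discoveries per trial
    return (V >= 1).mean()               # fraction of trials with at least one error

m, alpha = 10, 0.05
print(f"simulated FWER: {simulate_fwer(m, alpha):.4f}")
print(f"analytic FWER:  {1 - (1 - alpha) ** m:.4f}")  # 1 - 0.95^10, about 0.4013

Even with only ten tests at the conventional .05 level, the chance of at least one false discovery is roughly 40%, which is why controlling the family-wise rate matters.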
Experiment-wise error rate

We could have conducted the analysis for Example 1 of Basic Concepts for ANOVA by conducting multiple two-sample tests, e.g. to decide whether or not to reject the following null hypothesis:

H0: μ1 = μ2 = μ3

We can use the following three separate null hypotheses:

H0: μ1 = μ2
H0: μ2 = μ3
H0: μ1 = μ3

If any of these null hypotheses is rejected, then the original null hypothesis is rejected. Note, however, that if you set α = .05 for each of the three sub-analyses, then the overall alpha value is about .14, since 1 − (1 − α)^3 = 1 − (1 − .05)^3 = 0.142625 (see Example 6 of Basic Probability Concepts). This means that the probability of rejecting the null hypothesis even when it is true (a type I error) is 14.2625%. For k groups, you would need to run m = COMBIN(k, 2) such tests, and so the resulting overall alpha would be 1 − (1 − α)^m, a value which rapidly approaches 1 as k grows; the sketch below reproduces this calculation.
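The arithmetic above is easy to reproduce. Here is a short Python sketch (illustrative only; the function name is my own) that computes the experiment-wise error rate 1 − (1 − α)^m over all m = COMBIN(k, 2) pairwise comparisons among k groups:

import math

def experimentwise_alpha(k, alpha=0.05):
    """Overall type I error rate when all C(k, 2) pairwise tests use level alpha."""
    m = math.comb(k, 2)                # number of pairwise comparisons, COMBIN(k, 2)
    return 1 - (1 - alpha) ** m

print(experimentwise_alpha(3))         # 0.142625, the three-test example above
for k in (3, 4, 5, 10):
    print(k, round(experimentwise_alpha(k), 4))  # 0.1426, 0.2649, 0.4013, 0.9006

With ten groups (45 pairwise tests), the experiment-wise error rate is already above 90%.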
Per-comparison and familywise error rates

There are a number of different ways of testing which means differ. Before describing the tests, it is necessary to consider two different ways of thinking about error and how they are relevant to doing multiple comparisons.

Error rate per comparison (PC): This is simply the type I error rate we have talked about all along. So far, we have simply been setting its value at .05, a 5% chance of making an error.

Familywise error rate (FW): Often, after an ANOVA, we want to do a number of comparisons, not just one. The collection of comparisons we do is described as the "family". The familywise error rate is the probability that at least one of these comparisons will include a type I error. Assuming that α′ is the per-comparison error rate, the per-comparison error is simply α = α′, but the familywise error over c comparisons is

α = 1 − (1 − α′)^c

Thus, if we do two comparisons but keep α′ at 0.05, the FW error will really be

α = 1 − (1 − 0.05)^2 = 1 − 0.95^2 = 1 − 0.9025 = 0.0975

So there is almost a 10% chance of one of the comparisons being significant when we do two comparisons, even when the nulls are true. The basic problem, then, is that if we are doing many comparisons, we want to somehow control our familywise error so that we do not end up concluding that differences are there when they really are not. The various tests differ in terms of how they do this. They are also categorized as being either "a priori" or "post hoc":

A priori: tests of comparisons that the experimenter clearly intended to make before collecting any data.

Post hoc: tests of comparisons the experimenter decided to make after collecting the data, looking at the means, and noting which means "seem" different.

The probability of making a type I error is smaller for a priori tests because, when doing post hoc tests, you are essentially making all possible comparisons before deciding which to test in a formal statistical manner.
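The notes break off before describing the individual tests, but the usual remedy follows directly from the formula above: shrink the per-comparison level α′ until the familywise level is back at the nominal α. As an illustration (the Bonferroni and Šidák corrections are standard procedures; they are not developed in the text above, and the function names are mine), a minimal Python sketch assuming c independent comparisons:

def fw_error(alpha_pc, c):
    """Familywise error rate for c independent comparisons at per-comparison level alpha_pc."""
    return 1 - (1 - alpha_pc) ** c

def bonferroni(alpha_fw, c):
    """Bonferroni: alpha' = alpha / c, keeps FW error at or below alpha_fw."""
    return alpha_fw / c

def sidak(alpha_fw, c):
    """Sidak: alpha' = 1 - (1 - alpha_fw)^(1/c), exact for independent comparisons."""
    return 1 - (1 - alpha_fw) ** (1 / c)

c = 2
print(fw_error(0.05, c))                 # 0.0975, the inflated rate computed above
print(fw_error(bonferroni(0.05, c), c))  # about 0.0494, just under the nominal 0.05
print(fw_error(sidak(0.05, c), c))       # 0.0500, exactly the nominal level

Šidák is exact when the comparisons are independent, while Bonferroni makes no independence assumption, which is why it is the slightly more conservative of the two.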