Per-Comparison Error Rate
When considering a result under many hypotheses, some tests will give false positives; statisticians make use of the Bonferroni correction, the false discovery rate, and other methods to control the chance that a truly negative result appears to be positive.

References

Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B, 57(1): 289–300. MR 1325392.
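To make the lead concrete, here is a minimal Python sketch of the two adjustment methods mentioned above. The function names and sample p-values are our own illustration, not part of the original article; the logic follows the standard Bonferroni and Benjamini–Hochberg (1995) adjustments.

```python
# Illustrative sketch: adjusting p-values for multiple testing.
# Function names and example p-values are hypothetical.

def bonferroni(pvals):
    """Bonferroni: multiply each p-value by the number of tests (capped at 1)."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjustment, controlling the false discovery rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):                      # step up from the largest p
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)         # keep adjusted values monotone
        adjusted[i] = prev
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(bonferroni(pvals))          # conservative: controls the familywise error rate
print(benjamini_hochberg(pvals))  # less conservative: controls the false discovery rate
```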
[Figure: An example of data produced by data dredging, apparently showing a close link between the letters in the winning word of a spelling bee competition and the number of people in the United States killed by venomous spiders. The clear similarity in trends is a coincidence. If many data series are compared, similarly convincing but coincidental data may be obtained.]
In statistics, the multiple comparisons, multiplicity, or multiple testing problem occurs when one considers a set of statistical inferences simultaneously[1] or infers a subset of parameters selected based on the observed values.[2] It is also known as the look-elsewhere effect. Errors in inference, including confidence intervals that fail to include their corresponding population parameters and hypothesis tests that incorrectly reject the null hypothesis, are more likely to occur when one considers the set as a whole. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a stricter significance threshold for individual comparisons, to compensate for the number of inferences being made.

History

Interest in the problem of multiple comparisons began in the 1950s with the work of Tukey and Scheffé. New methods and procedures followed, including the closed testing procedure (Marcus et al., 1976) and the Holm–Bonferroni method (1979). In the 1980s the issue returned to prominence (Hochberg and Tamhane 1987; Westfall and Young 1993; Hsu 1996). In 1995, work on the false discovery rate and other new ideas began. In 1996, the first conference on multiple comparisons took place in Israel, followed by conferences around the world: Berlin (2000), Bethesda (2002), Shanghai (2005), Vienna (2007), and Tokyo (2009). All of these reflect increased interest in the problem of multiple comparisons.[3]
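In the spirit of the spelling-bee figure above, the following sketch (our own illustration, not from the article) shows the look-elsewhere effect directly: comparing enough random series to a target almost always turns up one that looks convincingly correlated, even though every series is pure noise.

```python
# Hypothetical demonstration of data dredging / the look-elsewhere effect.
import random
import statistics

random.seed(1)
n_points, n_series = 10, 1000
target = [random.gauss(0, 1) for _ in range(n_points)]

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Best match among many unrelated noise series.
best = max(
    corr(target, [random.gauss(0, 1) for _ in range(n_points)])
    for _ in range(n_series)
)
print(f"best correlation among {n_series} random series: {best:.2f}")
# Typically prints a value above 0.8, despite there being no real relationship.
```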
Definition

There are a number of different ways of testing which means differ. Before describing the tests, it is necessary to consider two different ways of thinking about error and how they are relevant to doing multiple comparisons.

Error rate per comparison (PC). This is simply the Type I error rate we have talked about all along. So far, we have simply been setting its value at .05: a 5% chance of making an error on any single comparison.

Familywise error rate (FW). Often, after an ANOVA, we want to do a number of comparisons, not just one. The collection of comparisons we do is described as the "family", and the familywise error rate is the probability that at least one of these comparisons will include a Type I error. Assuming that α′ is the per-comparison error rate, the per-comparison error is simply α = α′, but the familywise error for c comparisons is

    α_FW = 1 − (1 − α′)^c

Thus, if we do two comparisons but keep α′ at 0.05, the familywise error will really be

    α_FW = 1 − (1 − 0.05)^2 = 1 − 0.9025 = 0.0975

There is therefore almost a 10% chance of at least one comparison being significant when we do two comparisons, even when both nulls are true; the simulation sketch at the end of this section checks this figure. The basic problem, then, is that if we are doing many comparisons, we want to control our familywise error so that we do not end up concluding that differences are there when they really are not. The various tests differ in how they do this. They are also categorized as either "a priori" or "post hoc". A priori tests are comparisons that the experimenter clearly intended to test before collecting any data. Post hoc tests are comparisons the experimenter decided to test after collecting the data, looking at the means, and noting which means "seem" different. The probability of making a Type I error is smaller for a priori comparisons than for post hoc comparisons.
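As a check on the familywise error formula above, here is a minimal Monte Carlo sketch (our own, not from the course notes). Under the null hypothesis each p-value is uniform on [0, 1], so a comparison is "significant" with probability α′ and no actual test statistics are needed; with c = 2 and α′ = 0.05 the simulation should reproduce the 0.0975 figure.

```python
# Hypothetical simulation of the familywise error rate for c comparisons.
import random

random.seed(0)
alpha, c, trials = 0.05, 2, 100_000

hits = 0
for _ in range(trials):
    # Each null p-value is uniform on [0, 1]; "significant" means p < alpha.
    if any(random.random() < alpha for _ in range(c)):
        hits += 1

print(f"simulated familywise error:     {hits / trials:.4f}")
print(f"theoretical 1 - (1 - alpha)**c: {1 - (1 - alpha) ** c:.4f}")  # 0.0975
```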