What you can conclude when two error bars overlap (or don't)?

Graphpad.com FAQ# 1362, Last Modified 22-April-2010

It is tempting to look at whether two error bars overlap or not, and try to reach a conclusion about whether the difference between means is statistically significant. Resist that temptation (Lanzante, 2005)!

SD error bars

SD error bars quantify the scatter among the values. Looking at whether the error bars overlap lets you compare the difference between the means with the amount of scatter
within the groups. But the t test also takes into account sample size. If the samples were larger with the same means and same standard deviations, the P value would be much smaller. If the samples were smaller with the same
means and same standard deviations, the P value would be larger. When the difference between two means is statistically significant (P < 0.05), the two SD error bars may or may not overlap. Likewise, when the difference between two means is not statistically significant (P > 0.05), the two SD error bars may or may not overlap. Knowing whether SD error bars overlap or not does not let you conclude whether the difference between the means is statistically significant or not.

SEM error bars

SEM error bars quantify how precisely you know the mean, taking into account both the SD and sample size. Looking at whether the error bars overlap, therefore, lets you compare the difference between the means with the precision of those means. This sounds promising. But in fact, you don't learn much by looking at whether SEM error bars overlap. By taking into account sample size and considering how far apart two error bars are, Cumming (2007) came up with some rules for deciding when a difference is significant or not. But these rules are hard to remember and apply. Here is a simpler rule: if two SEM error bars do overlap, and the sample sizes are equal or nearly equal, then you know that the P value is (much) greater than 0.05, so the difference is not statistically significant. The opposite rule does not apply: if two SEM error bars do not overlap, the P value could be less than 0.05, or it could be greater than 0.05. If the sample sizes are very different, this rule of thumb does not always work.

Confidence interval error bars

Error bars that show the 95% confidence interval (CI) are wider than SE error bars. It doesn't help to observe that two 95% CI error bars overlap, as the difference between the two means may or may not be statistically significant. Useful rule of thumb: if two 95% CI error bars do not overlap, and the sample sizes are nearly equal, the difference is statistically significant (P < 0.05).
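These points can be illustrated with a quick calculation. This is a stdlib-only sketch: the group means, SDs, and sample sizes are made up for illustration, and the P value uses a normal approximation to the t distribution (close for the larger n).

```python
import math
from statistics import NormalDist

def t_stat(mean1, mean2, sd, n):
    """Two-sample t statistic for equal-size groups with a common SD."""
    return (mean2 - mean1) / (sd * math.sqrt(2 / n))

# Same means (0 and 1) and same SD (1); only the sample size changes.
t_small = t_stat(0.0, 1.0, 1.0, 5)    # n = 5 per group
t_large = t_stat(0.0, 1.0, 1.0, 100)  # n = 100 per group

# Larger samples with identical means and SDs give a larger t,
# and therefore a much smaller P value.
p_large_approx = 2 * (1 - NormalDist().cdf(t_large))
print(t_small, t_large, p_large_approx)
```

With n = 5 per group the t statistic is about 1.58 (P well above 0.05), while with n = 100 it is about 7.07 (P far below 0.001), even though the means and SDs never changed.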
A related discussion appears in Nature Methods: Martin Krzywinski and Naomi Altman, "Points of Significance: Error bars," Nature Methods 10, 921–922 (2013), doi:10.1038/nmeth.2659 (http://www.nature.com/nmeth/journal/v10/n10/full/nmeth.2659.html). The meaning of error bars is often misinterpreted, as is the statistical significance of their overlap. See also the MathBench module at http://mathbench.umd.edu/modules/prob-stat_bargraph/page08.htm, from which the section below is drawn.

[Figure 1: Error bar width and interpretation of spacing depend on the error bar type. (a,b) Example graphs are based on sample means of 0 and 1 (n = 10). (a) When bars are scaled to the same size and abut, P values span a wide range. When s.e.m. bars touch, P is large (P = 0.17). (b) Bar size and relative position vary greatly at the conventional P value significance cutoff of 0.05, at which bars may overlap or have a gap.]

[Figure 2: The size and position of confidence intervals depend on the sample. On average, CI% of intervals are expected to span the mean: about 19 in 20 times for 95% CI. (a) Means and 95% CIs of 20 samples (n =]
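The "about 19 in 20" coverage claim for 95% CIs can be checked with a quick simulation. This is an illustrative stdlib-only sketch, not code from the article: the seed and trial count are arbitrary, and 2.262 is the two-sided 95% t critical value for n = 10 (9 degrees of freedom).

```python
import math
import random
import statistics

# Draw many samples of n = 10 from a known population and count how
# often the sample's 95% CI actually contains the true mean.
random.seed(0)
TRUE_MEAN, SD, N = 0.0, 1.0, 10
T_CRIT = 2.262  # two-sided 95% critical value of t with 9 df
TRIALS = 2000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    m = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(N)
    if m - T_CRIT * sem <= TRUE_MEAN <= m + T_CRIT * sem:
        covered += 1

coverage = covered / TRIALS
print(coverage)  # close to 0.95, i.e. roughly 19 intervals in 20
```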
Another way to add info: the standard error

Graphs using standard deviation (SD) tell you what a big population of fish would look like -- whether their sizes would be all uniform, somewhat raggedy, or totally raggedy. Sometimes, though, you don't really care what a population looks like; you just want to know whether a treatment (like Fish2Whale instead of other competing brands) made a difference on average. In that case you measure a bunch of fish because you're trying to get a really good estimate of the average effect, despite whatever raggediness might be present in the populations.

Let's say your company decides to go all out to prove that Fish2Whale really is better than the competition. They convert a supply closet into an aquarium, hatch 400 fish, and tell you to do a HUGE experiment. The whole idea of the HUGE experiment is to get a really accurate measurement of the effect of Fish2Whale, despite natural differences such as temperature, light, initial size of fish, solar flares, and ESP phenomena. The return on their investment? Really small error bars.

But how do you get small error bars? Just using 400 fish WON'T give you a smaller SD. A huge population will be just as "ragged" as a small population. Instead, you need to use a quantity called the "standard error," or SE, which is the standard deviation DIVIDED BY the square root of the sample size. Since you fed 100 fish with Fish2Whale, you get to divide the standard deviation of each result by 10 (i.e., the square root of 100). Likewise with each of the other 3 brands. So your reward for all that work is that your error bars are much smaller.

Why should you care about small error bars? Well, as a rule of thumb, if the SE error bars for the 2 treatments do not overlap, then you have shown that the treatment made a difference.
(This is not a statistical test, but simply a way to visualize what your results mean. Many statistical tests are actually based on the exact amount of overlap of the SE bars, but they can get quite technical. For now, we'll just assume that no overlap = a true difference between the treatments.) So, in order to show that Fish2Whale really is better than the competitors, NOT ONLY does the mean growth need to be higher, but (mean minus SE) for Fish2Whale must be bigger than (mean plus SE) for the other brands. In other words, the error bars shouldn't overlap.
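The arithmetic above (SE = SD divided by the square root of n) and the non-overlap check can be sketched directly. All the means and SDs below are hypothetical numbers, not data from the text; only the "100 fish per brand, divide the SD by 10" setup comes from the example.

```python
import math

def standard_error(sd, n):
    """SE = SD divided by the square root of the sample size."""
    return sd / math.sqrt(n)

# 100 fish per brand, so each SD gets divided by sqrt(100) = 10.
n = 100
# Hypothetical (mean growth in cm, SD) for each brand.
brands = {
    "Fish2Whale": (6.5, 2.0),
    "BrandB":     (5.8, 2.5),
    "BrandC":     (5.5, 2.2),
}
se = {name: standard_error(sd, n) for name, (mean, sd) in brands.items()}

f2w_mean, _ = brands["Fish2Whale"]
f2w_se = se["Fish2Whale"]
# Rule of thumb: Fish2Whale's (mean - SE) must exceed every
# rival's (mean + SE), i.e. no SE bars overlap.
beats_all = all(
    f2w_mean - f2w_se > mean + se[name]
    for name, (mean, sd) in brands.items()
    if name != "Fish2Whale"
)
print(se["Fish2Whale"], beats_all)
```

With these made-up numbers, Fish2Whale's lower bar (6.5 - 0.2 = 6.3) clears both rivals' upper bars, so the rule of thumb declares a difference.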