Overlap Error Bar Chart
In a publication or presentation, you may be tempted to draw conclusions about the statistical significance of differences between group means by looking at whether the error bars overlap. Let's look at how to interpret error bars using two contrasting examples.

What can you conclude when standard error bars do not overlap? When standard error (SE) bars do not overlap, you cannot be sure that the difference between two means is statistically significant. Even though the error bars do not overlap in experiment 1, the difference is not statistically significant (P = 0.09 by unpaired t test). The same is true when you compare proportions with a chi-square test.

What can you conclude when standard error bars do overlap? No surprises here. When SE bars overlap (as in experiment 2), you can be sure the difference between the two means is not statistically significant (P > 0.05).

What if you are comparing more than two groups? Post tests following one-way ANOVA account for multiple comparisons, so they yield higher P values
than t tests comparing just two groups. So the same rules apply: if two SE error bars overlap, you can be sure that a post test comparing those two groups will find no statistically significant difference. However, if two SE error bars do not overlap, you can't tell whether a post test will, or will not, find a statistically significant difference.

What if the error bars do not represent the SEM? Error bars that represent the 95% confidence interval (CI) of a mean are wider than SE error bars -- about twice as wide with large sample sizes, and even wider with small sample sizes. If 95% CI error bars do not overlap, you can be sure the difference is statistically significant (P < 0.05). However, the converse is not true -- you may or may not have statistical significance when the 95% confidence intervals overlap.

Some graphs and tables show the mean with the standard deviation (SD) rather than the SEM. The SD quantifies variability but does not account for sample size. To assess statistical significance, you must take into account sample size as well as variability. Therefore, observing whether SD error bars overlap tells you nothing about whether the difference is statistically significant.
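The rule that non-overlapping SE bars do not guarantee significance can be checked numerically. The sketch below uses hypothetical summary statistics (means of 10 and 12, SD of 3, n = 10 per group -- invented for illustration, not taken from the experiments above) and stdlib Python only: it tests whether the +/- 1 SE bars overlap and computes the two-sample t statistic.

```python
import math

def se_bars(mean, sd, n):
    """Return the (low, high) endpoints of a +/- 1 SE error bar."""
    se = sd / math.sqrt(n)
    return mean - se, mean + se

def bars_overlap(a, b):
    """True if intervals a and b share any points."""
    return a[0] <= b[1] and b[0] <= a[1]

def t_statistic(mean1, mean2, sd, n):
    """Two-sample t statistic, assuming equal SDs and equal group sizes."""
    pooled_se = sd * math.sqrt(2.0 / n)
    return abs(mean1 - mean2) / pooled_se

# Hypothetical data: two groups, n = 10 each, SD = 3
bar1 = se_bars(10.0, 3.0, 10)
bar2 = se_bars(12.0, 3.0, 10)

print(bars_overlap(bar1, bar2))                     # False: SE bars do not overlap
print(round(t_statistic(10.0, 12.0, 3.0, 10), 2))   # t = 1.49, df = 18
# The two-sided critical t for P = 0.05 with df = 18 is about 2.10,
# so P > 0.05 here: non-overlapping SE bars, yet no significance.
```

The SE bars end at roughly 10.95 and 11.05, leaving a visible gap, yet t = 1.49 falls well short of the critical value -- exactly the situation described above.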
Another Way to Add Info: The Standard Error
Graphs using standard deviation (SD) tell you what a big population of fish would look like -- whether their sizes would be all uniform, or somewhat raggedy, or totally raggedy. Sometimes, though, you don't really care what a population looks like; you just want to know: did a treatment (like Fish2Whale instead of other competing brands) make a difference on average? In that case you measure a bunch of fish because you're trying to get a really good estimate of the average effect, despite whatever raggediness might be present in the populations.

Let's say your company decides to go all out to prove that Fish2Whale really is better than the competition. They convert a supply closet into an aquarium, hatch 400 fish, and tell you to do a HUGE experiment. The whole idea of the HUGE experiment is to get a really accurate measurement of the effect of Fish2Whale, despite the natural differences such as temperature, light, initial size of fish, solar flares, and ESP phenomena. The return on their investment? Really small error bars.

But how do you get small error bars? Just using 400 fish WON'T give you a smaller SD. A huge population will be just as "ragged" as a small population. Instead, you need to use a quantity called the "standard error", or SE, which is the standard deviation DIVIDED BY the square root of the sample size. Since you fed 100 fish with Fish2Whale, you get to divide the standard deviation of each result by 10 (i.e., the square root of 100). Likewise with each of the other 3 brands. So your reward for all that work is that your error bars are much smaller.
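The point that a bigger sample shrinks the SE but not the SD can be demonstrated with a quick simulation. This sketch invents a fish population (mean length 30, SD 5 -- arbitrary numbers, not from the text) and draws samples of increasing size; the sample SD hovers around 5 regardless of n, while SE = SD / sqrt(n) keeps shrinking.

```python
import math
import random
import statistics

random.seed(42)  # make the demo reproducible

def sample_sd_and_se(n, mu=30.0, sigma=5.0):
    """Draw n fish lengths from a hypothetical population and
    return the sample SD and the standard error SD / sqrt(n)."""
    lengths = [random.gauss(mu, sigma) for _ in range(n)]
    sd = statistics.stdev(lengths)
    return sd, sd / math.sqrt(n)

for n in (10, 100, 400):
    sd, se = sample_sd_and_se(n)
    print(f"n = {n:4d}   SD = {sd:5.2f}   SE = {se:5.2f}")
```

Every run shows the same pattern: the SD column stays roughly constant (a huge sample is just as "ragged" as a small one), while the SE column drops by the square root of the sample size -- dividing by 10 when n = 100, and by 20 when n = 400.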
Points of Significance: Error Bars
Martin Krzywinski & Naomi Altman
Nature Methods 10, 921-922 (2013). DOI: 10.1038/nmeth.2659. Published online 27 September 2013.
http://www.nature.com/nmeth/journal/v10/n10/full/nmeth.2659.html

The meaning of error bars is often misinterpreted, as is the statistical significance of their overlap.

Figure 1: Error bar width and interpretation of spacing depend on the error bar type. (a,b) Example graphs are based on sample means of 0 and 1 (n = 10). (a) When bars are scaled to the same size and abut, P values span a wide range. When s.e.m. bars touch, P is large (P = 0.17). (b) Bar size and relative position vary greatly at the conventional P value significance cutoff of 0.05, at which bars may overlap or have a gap.

Figure 2: The size and position of confidence intervals depend on the sample. On average, CI% of intervals are expected to span the mean -- about 19 in 20 for a 95% CI.