Can Error Bars Overlap And Still Be Significant
What you can conclude when two error bars overlap (or don't)
(Graphpad.com FAQ# 1362, last modified 22-April-2010, http://www.graphpad.com/support/faqid/1362/)

It is tempting to look at whether two error bars overlap or not, and try to reach a conclusion about whether the difference between the means is statistically significant. Resist that temptation (Lanzante, 2005)!

SD error bars

SD error bars quantify the scatter among the values. Looking at whether the error bars overlap lets you compare the difference between the means with the amount of scatter within the groups. But the t test also takes into account sample size. If the samples were larger, with the same means and same standard deviations, the P value would be much smaller. If the samples were smaller, with the same means and same standard deviations, the P value would be larger.
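To see the sample-size point in numbers, here is a minimal sketch (not part of the original FAQ) using SciPy's ttest_ind_from_stats, which runs an unpaired t test from summary statistics. The means and SDs are invented and held fixed while the group size grows.

```python
# Minimal sketch: same means and SDs, different sample sizes.
# The summary statistics are hypothetical, chosen only for illustration.
from scipy.stats import ttest_ind_from_stats

mean_a, sd_a = 10.0, 4.0   # group A summary statistics (hypothetical)
mean_b, sd_b = 13.0, 4.0   # group B summary statistics (hypothetical)

for n in (5, 10, 30, 100):
    result = ttest_ind_from_stats(mean_a, sd_a, n, mean_b, sd_b, n)
    print(f"n per group = {n:3d}  ->  P = {result.pvalue:.4f}")

# With identical means and SDs, the P value shrinks as n grows, which is
# why the spread shown by SD error bars alone cannot tell you about significance.
```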
When the difference between two means is statistically significant (P < 0.05), the two SD error bars may or may not overlap. Likewise, when the difference between two means is not statistically significant (P > 0.05), the two SD error bars may or may not overlap. Knowing whether SD error bars overlap or not does not let you conclude whether the difference between the means is statistically significant.

SEM error bars

SEM error bars quantify how precisely you know the mean, taking into account both the SD and the sample size. Looking at whether the error bars overlap therefore lets you compare the difference between the means with the precision of those means. This sounds promising, but in fact you don't learn much by looking at whether SEM error bars overlap. By taking into account sample size and considering how far apart two error bars are, Cumming (2007) came up with rules for deciding when a difference is significant, but these rules are hard to remember and apply. Here is a simpler rule: if two SEM error bars do overlap, and the sample sizes are equal or nearly equal, then you know that the P value is (much) greater than 0.05, so the difference is not statistically significant. The opposite rule does not apply: if two SEM error bars do not overlap, the P value could be less than 0.05 or greater than 0.05. If the sample sizes are very different, this rule of thumb does not always work.

Confidence interval error bars

Error bars that show the 95% confidence interval (CI) are wider than SE error bars. It doesn't help to observe that two 95% CI error bars overlap, as the difference between the two means may or may not be statistically significant. Useful rule of thumb: if two 95% CI error bars do not overlap, and the sample sizes are equal or nearly equal, the difference is statistically significant (P < 0.05).
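Both rules of thumb can be checked against an actual unpaired t test. Below is a minimal sketch, assuming roughly equal sample sizes; the two samples and the helper names (bars_overlap, sem_half_width, ci95_half_width) are invented for illustration, and the 95% CI half-width is taken as the t critical value times the SEM.

```python
# Sketch: check SEM-bar and 95% CI-bar overlap against an unpaired t test.
# The samples below are invented for illustration only.
import numpy as np
from scipy import stats

a = np.array([12.1, 9.8, 11.4, 10.7, 12.9, 10.2, 11.8, 9.5])
b = np.array([12.0, 11.2, 13.1, 10.9, 12.6, 11.8, 13.4, 11.5])

def sem_half_width(x):
    """Half-width of an SE error bar: one standard error of the mean."""
    return stats.sem(x)

def ci95_half_width(x):
    """Half-width of a 95% CI error bar: t critical value times the SEM."""
    return stats.t.ppf(0.975, len(x) - 1) * stats.sem(x)

def bars_overlap(x, y, half_width):
    """True if the mean +/- half_width(sample) intervals of x and y overlap."""
    lo_x, hi_x = x.mean() - half_width(x), x.mean() + half_width(x)
    lo_y, hi_y = y.mean() - half_width(y), y.mean() + half_width(y)
    return hi_x >= lo_y and hi_y >= lo_x

print(f"SEM bars overlap:    {bars_overlap(a, b, sem_half_width)}")
print(f"95% CI bars overlap: {bars_overlap(a, b, ci95_half_width)}")
print(f"unpaired t test P =  {stats.ttest_ind(a, b).pvalue:.4f}")

# Rules of thumb (roughly equal n): overlapping SEM bars imply P >> 0.05;
# non-overlapping 95% CI bars imply P < 0.05. Neither converse holds.
```

With these particular invented numbers the SEM bars do not overlap, yet the unpaired t test gives a P value above 0.05 (roughly 0.08), while the wider 95% CI bars do overlap; neither observation settles significance on its own, which is exactly the point of the rules above.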
Two contrasting examples
(https://egret.psychol.cam.ac.uk/statistics/local_copies_of_sources_Cardinal_and_Aitken_ANOVA/errorbars.htm)

When you view data in a publication or presentation, you may be tempted to draw conclusions about the statistical significance of differences between group means by looking at whether the error bars overlap. Let's look at two contrasting examples.

What can you conclude when standard error bars do not overlap?
When standard error (SE) bars do not overlap, you cannot be sure that the difference between two means is statistically significant.
Even though the error bars do not overlap in experiment 1, the difference is not statistically significant (P = 0.09 by unpaired t test). This is also true when you compare proportions with a chi-square test.

What can you conclude when standard error bars do overlap?

No surprises here. When SE bars overlap (as in experiment 2), you can be sure the difference between the two means is not statistically significant (P > 0.05).

What if you are comparing more than two groups?

Post tests following one-way ANOVA account for multiple comparisons, so they yield higher P values than t tests comparing just two groups. The same rules therefore apply: if two SE error bars overlap, you can be sure that a post test comparing those two groups will find no statistically significant difference. However, if two SE error bars do not overlap, you can't tell whether a post test will, or will not, find a statistically significant difference.

What if the error bars do not represent the SEM?

Error bars that represent the 95% confidence interval (CI) of a mean are wider than SE error bars -- about twice as wide with large sample sizes and even wider with small sample sizes. If 95% CI error bars do not overlap, you can be sure the difference is statistically significant (P < 0.05). However, the converse is not true: you may or may not have statistical significance when the 95% confidence intervals overlap. Some graphs and tables show the mean with the standard deviation (SD) rather than the SEM. The SD quantifies variability but does not account for sample size. To assess statistical significance, you must take into account sample size as well as variability. Therefore, observing whether SD error bars overlap or not tells you nothing about whether the difference is statistically significant.
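The claim that 95% CI bars are about twice as wide as SE bars with large samples, and wider still with small ones, follows from the t multiplier: the CI half-width is the 97.5th-percentile t value (on n-1 degrees of freedom) times the SEM. A small sketch, added here for illustration and using only standard t-distribution quantiles:

```python
# Sketch: the ratio of a 95% CI bar to an SE bar is the t critical value.
from scipy import stats

for n in (3, 5, 10, 30, 100, 1000):
    t_crit = stats.t.ppf(0.975, df=n - 1)   # 95% CI half-width = t_crit * SEM
    print(f"n = {n:4d}  ->  CI bar / SE bar = {t_crit:.2f}")

# For large n the ratio approaches 1.96 (about twice as wide);
# for very small n it is considerably larger (about 4.3 at n = 3).
```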
Cross Validated question: Standard error bars overlap but significance - estimated marginal means versus observed means
(http://stats.stackexchange.com/questions/114701/standard-error-bars-overlap-but-significance-estimated-marginal-means-versus-o)

Question: I'm running a mixed effects ANOVA with two fixed factors (condition, repetition) and one random factor (subject). Subsequently, a Tukey multiple comparisons test is performed. Now I'd like to plot the means and standard errors (SEMs) of the single conditions in a single error bar plot, and report the p values between the conditions. The problem: while the Tukey test gave significant differences and non-overlapping SEMs between certain means, the SEM bars for my plotted real/observed data overlap. This is counterintuitive, since commonly you would assume that in the case of overlap the means are not significantly different. My questions: is the difference between estimated marginal means and observed means due to having a random factor in my model, or what is the reason for the discrepancy? How would you report the data? Would you still plot observed data with the p values and state that the p values are derived from the estimated model? Or would you plot estimated means and standard errors? EDIT: I'm adding the multiple comparisons result for a sample case, as well as the observed means and standard error plot, in case this helps.

Answer: Statistical significance is not transitive. If you want to say how much error there is in estimating the means, show error bars around the means. If you want to compare the means, show results of multiple comparisons. Don't mix the two.
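A likely source of the discrepancy is that a random subject factor induces within-subject correlation: the standard error of a difference between conditions can be much smaller than the standard errors of the individual condition means. The sketch below is a simplified illustration with simulated data (two conditions, one random subject effect, invented parameters), not a reconstruction of the questioner's model; with this setup the SEM bars on the observed condition means typically overlap while the within-subject (paired) comparison is clearly significant.

```python
# Sketch: simulated within-subject data where SEM bars on the observed
# condition means overlap, yet the paired comparison is significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20
subject_effect = rng.normal(0, 5, n_subjects)                   # large between-subject spread
cond1 = subject_effect + rng.normal(0, 0.5, n_subjects)         # condition 1
cond2 = subject_effect + 1.0 + rng.normal(0, 0.5, n_subjects)   # condition 2: +1 true effect

sem1, sem2 = stats.sem(cond1), stats.sem(cond2)
lo1, hi1 = cond1.mean() - sem1, cond1.mean() + sem1
lo2, hi2 = cond2.mean() - sem2, cond2.mean() + sem2
overlap = hi1 >= lo2 and hi2 >= lo1

p_paired = stats.ttest_rel(cond1, cond2).pvalue
print(f"cond1: {cond1.mean():.2f} +/- {sem1:.2f}   cond2: {cond2.mean():.2f} +/- {sem2:.2f}")
print(f"SEM bars overlap: {overlap}, paired t test P = {p_paired:.2g}")

# The between-subject variance inflates each condition's SEM, but it cancels
# in the within-subject comparison -- the same reason error bars on observed
# means can disagree with tests based on a mixed model's estimated effects.
```

A full mixed-model analysis makes the same point; plotting within-subject error bars (for example, on the paired differences) alongside the reported p values is one common way to keep the figure consistent with the test, in the spirit of the answer above.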