Overlapping Standard Error Bars
In a publication or presentation, you may be tempted to draw conclusions about the statistical significance of differences between group means by looking at whether the error bars overlap. Let's look at two contrasting examples.

What can you conclude when standard error bars do not overlap? When standard error (SE) bars do not overlap, you cannot be sure that the difference between two means is statistically significant. Even though the error bars do not overlap in experiment 1, the difference is not statistically significant (P = 0.09 by unpaired t test). This is also true when you compare proportions with a chi-square test.

What can you conclude when standard error bars do overlap? No surprises here. When SE bars overlap (as in experiment 2), you can be sure the difference between the two means is not statistically significant (P > 0.05).

What if you are comparing more than two groups? Post tests following one-way ANOVA account for multiple comparisons, so they yield higher P values than t tests comparing just two groups. The same rules therefore apply: if two SE error bars overlap, you can be sure that a post test comparing those two groups will find no statistical significance. However, if two SE error bars do not overlap, you can't tell whether a post test will, or will not, find a statistically significant difference.

What if the error bars do not represent the SEM? Error bars that represent the 95% confidence interval (CI) of a mean are wider than SE error bars -- about twice as wide with large sample sizes and even wider with small sample sizes. If 95% CI error bars do not overlap, you can be sure the difference is statistically significant (P < 0.05). However, the converse is not true -- you may or may not have statistical significance when the 95% confidence intervals overlap.

Some graphs and tables show the mean with the standard deviation (SD) rather than the SEM. The SD quantifies variability but does not account for sample size. To assess statistical significance, you must take into account sample size as well as variability. Therefore, observing whether SD error bars overlap or not tells you nothing about whether the difference is, or is not, statistically significant.

What if the groups were matched and analyzed with a paired t test? All the comments above assume you are performing an unpaired t test. With paired data, significance depends on the consistency of the differences within each pair, not on whether the two groups' error bars overlap.
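The "non-overlapping SE bars do not guarantee significance" rule is easy to demonstrate numerically. Below is a minimal sketch (not part of the original article) using two small made-up samples, analogous to experiment 1: the SE bars do not overlap, yet the unpaired t test fails to reach P < 0.05.

```python
import numpy as np
from scipy import stats

# Two made-up samples (n = 5 each), echoing "experiment 1":
# the SE bars do not overlap, yet the unpaired t test is not significant.
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([3.0, 4.0, 5.0, 6.0, 7.0])

se_a = stats.sem(a)  # SD / sqrt(n) ~= 0.71
se_b = stats.sem(b)

# SE bars span mean +/- SE; they overlap only if the top of the lower
# bar reaches the bottom of the higher bar.
bars_overlap = (a.mean() + se_a) >= (b.mean() - se_b)

t, p = stats.ttest_ind(a, b)
print(f"SE bars overlap: {bars_overlap}")  # False: gap of 2 > 0.71 + 0.71
print(f"unpaired t test: P = {p:.3f}")     # P ~ 0.08, above 0.05
```

With a gap of two units between the means and an SE of about 0.71 per group, the bars clearly fail to overlap, but the t statistic is only 2.0 on 8 degrees of freedom.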
Cross Validated question: Overlapping standard errors and statistical significance

I have a paired data set which I have placed into $x$ and $y$ columns, where $x$ are the control values and $y$ are the values following drug treatment. $N = 10$ for both $x$ and $y$ columns as they are paired data; each $x$ is the control for the corresponding $y$. I have seen various texts stating that when standard error margins overlap, the data cannot be significant. By standard error margin, I am referring to $SE_{\bar x} = SD/\sqrt{N}$. However, I have conducted two-tailed paired $t$-tests on my data set (comparing the mean of all values in $x$ versus the mean of all values in $y$) and my results yield statistical significance with a $p$-value $< 0.05$, despite there being overlapping standard error margins between the data in $x$ and $y$. My question is: in a paired data set, is it possible for there to be statistical significance between the control ($x$) and drug treatment ($y$) despite having overlapping standard errors? My $t$-test was done using GraphPad Prism, so I'm confident there are no errors in the $t$-test.
Tags: statistical-significance, t-test, standard-error. Asked Aug 5 '15 by Provo.

Answer: Yes, there is. The standard error of the difference (which is what you care about) depends on the correlation between the paired measurements.
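The answer's point can be made concrete with a small simulation. The sketch below (my own illustration, with invented numbers, not the asker's data) builds a paired data set of $N = 10$ where each $y$ is its $x$ shifted up by roughly one unit. Because the within-pair differences are very consistent, the paired t test is highly significant even though the group SE bars overlap heavily.

```python
import numpy as np
from scipy import stats

# Hypothetical paired data (n = 10): y is x shifted up by ~1 unit, with
# small per-pair noise. Between-subject spread is large (SE bars overlap),
# but every pair moves in the same direction, so the paired test is
# highly significant.
x = np.array([10, 12, 14, 16, 18, 20, 22, 24, 26, 28], dtype=float)
y = x + np.array([1.0, 1.1, 0.9, 1.05, 0.95, 1.1, 0.9, 1.0, 1.05, 0.95])

se_x, se_y = stats.sem(x), stats.sem(y)
bars_overlap = (x.mean() + se_x) >= (y.mean() - se_y)  # True

t, p = stats.ttest_rel(x, y)  # paired t test on the within-pair differences
print(f"SE bars overlap: {bars_overlap}")
print(f"paired t test: P = {p:.2e}")  # far below 0.05
```

The paired test works on the differences $y_i - x_i$, whose standard deviation here is tiny; the large spread across subjects that makes the group SE bars wide simply cancels out.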
Another way to add info: the standard error

Graphs using standard deviation (SD) tell you what a big population of fish would look like -- whether their sizes would be all uniform, or somewhat raggedy, or totally raggedy. Sometimes, though, you don't really care what a population looks like; you just want to know, did a treatment (like Fish2Whale instead of other competing brands) make a difference on average? In that case you measure a bunch of fish because you're trying to get a really good estimate of the average effect, despite whatever raggediness might be present in the populations.

Let's say your company decides to go all out to prove that Fish2Whale really is better than the competition. They convert a supply closet into an aquarium, hatch 400 fish, and tell you to do a HUGE experiment. The whole idea of the HUGE experiment is to get a really accurate measurement of the effect of Fish2Whale, despite the natural differences such as temperature, light, initial size of fish, solar flares, and ESP phenomena. The return on their investment? Really small error bars.

But how do you get small error bars? Just using 400 fish WON'T give you a smaller SD. A huge population will be just as "ragged" as a small population. Instead, you need to use a quantity called the "standard error", or SE, which is the standard deviation DIVIDED BY the square root of the sample size. Since you fed 100 fish with Fish2Whale, you get to divide the standard deviation of each result by 10 (i.e., the square root of 100). Likewise with each of the other 3 brands. So your reward for all that work is that your error bars are much smaller.

Why should you care about small error bars? Well, as a rule of thumb, if the SE error bars for the 2 treatments do not overlap, then you have shown that the treatment made a difference.
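The SD-versus-SE relationship in the paragraph above is a one-liner to verify. Here is a minimal sketch (invented fish lengths, not data from the tutorial) showing that for 100 fish the standard error is exactly the standard deviation divided by 10.

```python
import numpy as np
from scipy import stats

# Invented lengths (cm) for 100 fish fed one brand; values are illustrative.
rng = np.random.default_rng(0)
lengths = rng.normal(loc=30.0, scale=4.0, size=100)

sd = lengths.std(ddof=1)          # sample standard deviation
se = sd / np.sqrt(len(lengths))   # standard error: SD / sqrt(n)

# scipy computes the same quantity directly:
assert np.isclose(se, stats.sem(lengths))
print(f"SD = {sd:.2f} cm, SE = {se:.2f} cm (SE = SD / 10 when n = 100)")
```

Note that collecting more fish shrinks the SE (through the $\sqrt{n}$ in the denominator) without shrinking the SD, which is exactly why the huge experiment buys small error bars but not a less "ragged" population.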