Paired T-test Error Bars
In a publication or presentation, you may be tempted to draw conclusions about the statistical significance of differences between group means by looking at whether the error bars overlap. Let's look at two contrasting examples.

What can you conclude when standard error bars do not overlap? When standard error (SE) bars do not overlap, you cannot be sure that the difference between two means is statistically significant. Even though the error bars do not overlap in experiment 1, the difference is not statistically significant (P = 0.09 by unpaired t test). This is also true when you compare proportions with a chi-square test.

What can you conclude when standard error bars do overlap? No surprises here. When SE bars overlap (as in experiment 2), you can be sure the difference between the two means is not statistically significant (P > 0.05).

What if you are comparing more than two groups? Post tests following one-way ANOVA account for multiple comparisons, so they yield higher P values than t tests comparing just two groups. The same rules therefore apply: if two SE error bars overlap, you can be sure that a post test comparing those two groups will find no statistical significance. However, if two SE error bars do not overlap, you can't tell whether a post test will or will not find a statistically significant difference.

What if the error bars do not represent the SEM? Error bars that represent the 95% confidence interval (CI) of a mean are wider than SE error bars -- about twice as wide with large sample sizes and even wider with small sample sizes. If 95% CI error bars do not overlap, you can be sure the difference is statistically significant (P < 0.05). However, the converse is not true -- you may or may not have statistical significance when the 95% confidence intervals overlap. Some graphs and tables show the mean with the standard deviation (SD) rather than the SEM. The SD quantifies variability among the values themselves, not how precisely the mean is known, so SD error bars say even less about statistical significance.
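The first rule above can be made concrete with a minimal sketch in Python. The two samples are hypothetical, chosen so that their SE bars just fail to overlap, yet an unpaired t test is not significant:

```python
import numpy as np
from scipy import stats

# Hypothetical samples, constructed so the SE bars do not overlap.
g1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
g2 = np.array([2.6, 3.6, 4.6, 5.6, 6.6])

se1 = g1.std(ddof=1) / np.sqrt(len(g1))   # 0.707
se2 = g2.std(ddof=1) / np.sqrt(len(g2))   # 0.707

# SE bars span mean +/- SE; check whether the two intervals overlap.
overlap = (g1.mean() + se1) >= (g2.mean() - se2)
print("SE bars overlap?", overlap)        # False: 3.71 < 3.89

# Yet the unpaired t test is not significant at the 0.05 level.
t, p = stats.ttest_ind(g1, g2)
print(f"unpaired t-test: p = {p:.3f}")    # p is about 0.15
```

Non-overlapping SE bars only guarantee a gap of about two standard errors between the means, which is well short of the roughly three combined standard errors needed for P < 0.05 at small sample sizes.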
Cross Validated question: Is using error bars for means in a within-subjects study wrong? (http://stats.stackexchange.com/questions/133014/is-using-error-bars-for-means-in-a-within-subjects-study-wrong)

I seem to recall one of my professors saying that error bars are completely uninformative when comparing repeated measures taken from a single group. Is that true? Surely many studies compute the sample means for condition A and for condition B (i.e. levels A and B of a certain within-subjects factor), compare the means with a paired-samples t-test, and then display them on a graph with error bars. Is this really wrong? If so, why?

Tags: confidence-interval, data-visualization, repeated-measures, t-test. Asked Jan 11 '15 by wildetudor; edited by gung.

Comment (Marius, Jan 12 '15): Not quite a direct answer to your question, but there are a few different methods that are supposed to calculate meaningful error bars for within-subjects data, e.g. the method proposed by Cousineau and O'Brien. (Apologies if this is not accessible; I'm on a university computer and can't tell if it's an open-access article or just using my institutional access automatically.) Reply: Thanks, this is helpful to know!
Accepted answer (9 votes): It isn't "wrong" necessarily, and it isn't "completely uninformative". But it provides information that pertains to a largely unrelated question, and so is likely to be misleading. When you run a paired-samples $t$-test, you are really conducting a one-sample $t$-test of whether the mean of the differences is equal to $0$. Because this is a one-sample test, a corresponding figure would have one bar showing the mean difference (with error bars). To see how this could be misleading, consider these data (coded with R):

set.seed(4868)  # this makes the example exactly reproducible (if you use R)
b = c(2, 4, 6, 8)
a = b + rnorm(4)  # the call was cut off in the source at "rn"; rnorm(4) is a plausible reconstruction
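The answer's point -- that per-condition error bars and the paired test answer different questions -- can also be sketched in Python. The data here are hypothetical, chosen for illustration, and are not the data from the answer above:

```python
import numpy as np
from scipy import stats

# Hypothetical within-subjects data: condition B is each subject's
# condition-A score plus a small, consistent shift of about +1.
a = np.array([10.0, 14.0, 18.0, 22.0, 26.0])   # condition A, 5 subjects
b = a + np.array([0.9, 1.1, 1.0, 1.2, 0.8])    # condition B

se_a = a.std(ddof=1) / np.sqrt(len(a))         # ~2.83: dominated by
se_b = b.std(ddof=1) / np.sqrt(len(b))         # between-subject spread

# The per-condition SE bars overlap heavily...
print(f"A: {a.mean():.1f} +/- {se_a:.2f}, B: {b.mean():.1f} +/- {se_b:.2f}")

# ...but the paired test (a one-sample test on the differences) is decisive,
# while an (incorrect) unpaired test sees nothing.
t_paired, p_paired = stats.ttest_rel(b, a)
t_indep, p_indep = stats.ttest_ind(b, a)
print(f"paired p = {p_paired:.5f}, unpaired p = {p_indep:.2f}")
```

A reader judging the graph of the two condition means (with their heavily overlapping SE bars) would wrongly conclude there is no effect, while the paired test finds a highly reliable one.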
YouTube video: SPSS - Paired-samples t-test (2 of 2) - creating an error bars bar graph, by Doug Maynard (https://www.youtube.com/watch?v=0CUtzb9Pke0). Published 18 Sep 2013. "In this video we produce a bar graph of results from a paired-samples t-test with appropriate error bars. By default, SPSS gives you error bars which assume that groups are unrelated, which is not true for this design. (In the previous video, we used SPSS v21 to conduct a paired-samples t-test.)"

From "No one understands error bars" (https://liesandstats.wordpress.com/2008/09/26/no-one-understands-error-bars/):
A common misconception regarding error bars: overlap means no statistical significance. Checking statistical significance is not the only relevant piece of information that you can get from error bars (otherwise what would be the point), but it's the first thing people look for when they see them in a graph. Another common misconception is that error bars are always relevant, and should therefore always be present in a graph of experimental results. If only it were that simple.

Who's laughing now

A professor of psychology was criticized recently when he posted an article online with a graph that did not include error bars. He followed up with a poll to see if readers understood error bars (most didn't), and then posted an article about how most researchers don't understand error bars. He based his post on a relatively large study (of almost 500 participants) that tested researchers who had published in psychology, neuroscience, and medical journals. One of the articles cited in the study is Inference by Eye: Confidence Intervals and How to Read Pictures of Data [PDF] by Cumming and Finch. In it the authors describe some pitfalls relating to making inferences from error bars (for both confidence intervals and standard errors), and they describe rules of thumb (what the authors call rules of eye, since they are rules for making visual inferences). But note the fine print: the rules are for two-sided confidence intervals on the mean, with a normally distributed population, used for making single inferences.

Pitfalls

Before you can judge error bars, you need to know what they represent: a percent confidence interval, standard error, or standard deviation. Then you need to worry about whether the data is independent (for between-subject comparisons) or paired (such as repeated tests, for within-subject comparisons), and the reason error bars are being reported (a between-subject comparison, a meta-analysis in which results are pooled, or just to confuse).
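The first pitfall -- knowing what the bars represent -- is easy to make concrete: for the same sample, SD, SEM, and 95% CI bars have very different widths. A small sketch in Python (the sample values are made up for illustration):

```python
import numpy as np
from scipy import stats

# One made-up sample; the three common error-bar choices differ widely.
x = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8])
n = len(x)

sd = x.std(ddof=1)                            # scatter of the values
sem = sd / np.sqrt(n)                         # precision of the mean
ci_half = stats.t.ppf(0.975, df=n - 1) * sem  # half-width of the 95% CI

print(f"mean    = {x.mean():.2f}")
print(f"SD bar  = +/- {sd:.2f}")
print(f"SEM bar = +/- {sem:.2f}")
print(f"95% CI  = +/- {ci_half:.2f}")         # about 2.4x the SEM at n = 8
```

Note the CI multiplier comes from the t distribution, which is why CI bars are only about twice the SEM for large samples but noticeably wider for small ones.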
And these points are not always made clear in figure captions. For paired or repeated data, you probably don't care about the error bars on the individual condition means. Confidence intervals on those means are of little value for visual inspection: what you want is the confidence interval on the mean of the differences, which depends on the correlation between the paired measurements and therefore can't be determined visually from the individual intervals. In other words, error bars on the individual means tell you almost nothing about the within-subject comparison.
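This dependence on correlation can be sketched in Python with simulated data (assumed parameters, chosen for illustration): a large shared per-subject effect makes the individual bars wide, yet it cancels in the differences, so the difference is estimated precisely.

```python
import numpy as np
from scipy import stats

# Simulated paired data (assumed parameters): a big shared per-subject
# effect plus a true condition shift of +1 with small independent noise.
rng = np.random.default_rng(42)
subject = rng.normal(0.0, 3.0, size=20)            # between-subject variation
a = 10 + subject + rng.normal(0.0, 0.5, size=20)   # condition A
b = 11 + subject + rng.normal(0.0, 0.5, size=20)   # condition B

d = b - a                                          # subject effect cancels here
sem_a = a.std(ddof=1) / np.sqrt(len(a))            # wide: dominated by `subject`
sem_d = d.std(ddof=1) / np.sqrt(len(d))            # narrow: only the noise remains
half = stats.t.ppf(0.975, df=len(d) - 1) * sem_d

print(f"per-condition SEM  ~ {sem_a:.2f}")
print(f"mean difference    = {d.mean():.2f} +/- {half:.2f} (95% CI)")
```

The two condition means carry wide error bars, but the CI on the mean difference is several times narrower -- and nothing in the individual bars reveals this, because the correlation between conditions is invisible in them.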