Error Bars and Uncertainty
Uncertainty in a single measurement

Bob weighs himself on his bathroom scale. The smallest divisions on the scale are 1-pound marks, so
the least count of the instrument is 1 pound. Bob reads his weight as closest to the 142-pound mark. He knows his weight must be larger than 141.5 pounds (or else it would be closer to the 141-pound mark), but
smaller than 142.5 pounds (or else it would be closer to the 143-pound mark). So Bob's weight must be

    weight = 142 +/- 0.5 pounds

In general, the uncertainty in a single measurement from a single instrument is half the least count of the instrument.

Fractional and percentage uncertainty

What is the fractional uncertainty in Bob's weight?

    fractional uncertainty = (uncertainty in weight) / (value for weight)
                           = 0.5 pounds / 142 pounds
                           = 0.0035

What is the uncertainty in Bob's weight, expressed as a percentage of his weight?

    percentage uncertainty = (uncertainty in weight) / (value for weight) * 100%
                           = 0.5 pounds / 142 pounds * 100%
                           = 0.35%

Combining uncertainties in several quantities: adding or subtracting

When one adds or subtracts several measurements, one simply adds together the uncertainties to find the uncertainty in the result. Dick and Jane are acrobats. Dick is 186 +/- 2 cm tall, and Jane is 147 +/- 3 cm tall. If Jane stands on top of Dick's head, how far is her head above the ground?

    combined height = 186 cm + 147 cm = 333 cm
    uncertainty in combined height = 2 cm + 3 cm = 5 cm
    combined height = 333 +/- 5 cm

Now, if all the quantities have roughly the same magnitude and uncertainty -- as in the example above -- the result makes perfect sense. But if one tries to add together very different quantities, one ends up with a funny-looking uncertainty. For example, suppose that Dick balances a flea (ick!) on his head instead of Jane. Using a pair of calipers, Dick measures the flea to have a height of 0.020 cm +/- 0.003 cm. If we follow the rules, we find

    combined height = 186 cm + 0.020 cm = 186.020 cm
    uncertainty in combined height = 2 cm + 0.003 cm = 2.003 cm ???
    combined height = 186.020 +/- 2.003 cm ???

But wait a minute! This doesn't make any sense!
If we can't tell exactly where the top of Dick's head is to within a couple of centimeters, it makes no sense to quote the combined height to a thousandth of a centimeter. When the quantities being added have very different uncertainties, the larger uncertainty dominates, and the sensible answer is simply 186 +/- 2 cm.
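The rules above are easy to sketch in code. The following is a minimal illustration (the function names are made up for this example, not from any particular library): half the least count gives the single-measurement uncertainty, and values and uncertainties each add when measurements are summed.

```python
def fractional_uncertainty(value, uncertainty):
    """Fractional uncertainty = uncertainty / value."""
    return uncertainty / value

def add_measurements(*measurements):
    """Add (value, uncertainty) pairs: the values add, and so do the uncertainties."""
    total = sum(v for v, _ in measurements)
    total_unc = sum(u for _, u in measurements)
    return total, total_unc

# Bob's weight: least count is 1 pound, so the uncertainty is 0.5 pound.
frac = fractional_uncertainty(142, 0.5)
print(f"fractional uncertainty: {frac:.4f}")         # 0.0035
print(f"percentage uncertainty: {frac * 100:.2f}%")  # 0.35%

# Dick and Jane: uncertainties add along with the heights.
height, unc = add_measurements((186, 2), (147, 3))
print(f"combined height: {height} +/- {unc} cm")     # 333 +/- 5 cm
```

Running the same function on the flea example reproduces the funny-looking 186.020 +/- 2.003 cm, which is why the result should be rounded back to the precision the dominant uncertainty allows.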
Error and Uncertainty

All readings, data, results, or other numerical quantities taken from the real world by direct measurement or otherwise
are subject to uncertainty. This is a consequence of not being able to measure anything exactly. Uncertainty cannot be avoided, but it can be reduced by using 'better' apparatus. The uncertainty on a measurement
has to do with the precision or resolution of the measuring instrument. When results are analysed it is important to consider the effects of uncertainty in subsequent calculations involving the measured quantities. If you are unlucky (or careless) then your results will also be subject to errors. Errors are mistakes in the readings that, had the experiment been done differently, could have been avoided. It is perfectly possible to take a measurement accurately and erroneously! Unfortunately it is not always possible to know when you are making an error (otherwise you wouldn't make it!) and so good experimental technique has to be able to guard against the effects of errors.

Types of Error:

Human Error: Errors introduced by basic incompetence, mistakes in using the apparatus, etc. Reduced by repeating the experiment several times, comparing results to those of other similar experiments, and checking that results seem reasonable.

Systematic Error: Error introduced by poor calibration or zero-point setting of instruments such as meters - this may cause instrumentation to always 'under read' or 'over read' a value by a fixed amount. Reduced by plotting graphs: the relationship between two quantities often depends on the way in which they change rather than on their absolute values. A systematic error would manifest itself as an intercept on the y-axis other than that expected. In the A Level course this is most commonly experienced with micrometers (that don't read zero when nothing is between the jaws) and electrical meters that may not rest at zero.

Equipment Error: Error introduced by the malfunctioning of equipment. The only real check is to see if the results seem reasonable and 'make sense' ... take time to stop and think about what the instruments are telling you ... does it seem okay?

Parallax Error: Error introduced by reading scales from the wrong angle.
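The graphical check for a systematic error can be sketched numerically. In this hypothetical example (the data and the 0.3-unit offset are invented for illustration), a fixed zero error shifts every reading by the same amount, so a straight-line fit recovers the true slope while the offset appears as a non-zero y-intercept:

```python
import numpy as np

# Hypothetical data: the true relationship is y = 2.5 * x, but the meter
# always over-reads by 0.3 units (a systematic zero error).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_measured = 2.5 * x + 0.3   # every reading shifted by the same fixed amount

slope, intercept = np.polyfit(x, y_measured, 1)
print(f"slope = {slope:.2f}")          # 2.50 -- unaffected by the zero error
print(f"intercept = {intercept:.2f}")  # 0.30 -- reveals the systematic error
```

This is why plotting a graph and inspecting the intercept is more robust than comparing individual absolute readings.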
I. Why do we use errorbars?

It is a crime to plot measures of central tendency without an indication of their variability. Enough said!

II. What do we use as errorbars?

There are pretty much two options: standard errors or confidence intervals. These quantities are related: the confidence interval is the standard error multiplied by the critical value of a test statistic, which is either t or Z, depending on whether we know the population parameters or estimate them from a sample. The choice really depends upon your rhetorical intent: different things can be concluded from the errorbars, depending on what you choose to plot.

Standard errors:
- From an overlap, you can conclude no significant difference
- Approximately 68% confidence interval for the population mean
- Difference between means is hard to evaluate

Confidence intervals:
- Can't draw conclusions from overlap
- Exact confidence interval for the population mean
- Difference between means from multiplying by root 2

Most papers I've read recently plot standard errors. I suspect an ulterior motive...

III. Errorbars for between-subject means

We have two ways of estimating the standard error: a local and a global estimate. Again, it's up to you which one you use. If you're going to be using within-subjects errorbars subsequently, then it's best to use the global estimate for consistency.

- Local estimate of the standard error
- Global estimate of the standard error

Remember to multiply by the critical value of your test statistic if you want confidence intervals!

IV. Errorbars for within-subject means
The trick is to think about what is the best estimate of the error variance. When you do a within-subjects ANOVA, the analogue of the MSE is the mean square for the interaction of subjects and the effect you're testing. Basically, if you want to show differences between means on the basis of some factor, replace the MSE in the equation for between-subject means with whatever appears in the denominator of your within-subjects F-ratio.

V. Errorbars for categorical data

Binomial data: How do we work out the confidence interval on an estimate of the proportion?
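The basic recipes above (confidence interval = standard error times a critical value, and an interval on a binomial proportion) can be sketched as follows. This is a minimal illustration with invented sample data; it uses the Z critical value, which is appropriate when n is large or the population variance is known -- with a small sample you would multiply by the critical value of t instead. The binomial interval shown is the common normal-approximation formula, since the original text cuts off before giving one.

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Confidence interval for a sample mean (hypothetical sample data).
sample = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1]
n = len(sample)
se = stdev(sample) / sqrt(n)        # local estimate of the standard error
z = NormalDist().inv_cdf(0.975)     # ~1.96, the Z critical value for 95%
print(f"mean = {mean(sample):.2f} +/- {z * se:.3f} (95% CI)")  # 5.00 +/- 0.091

# Binomial data: normal-approximation interval on a proportion.
successes, trials = 30, 50
p = successes / trials
se_p = sqrt(p * (1 - p) / trials)   # standard error of the proportion
print(f"p = {p:.2f} +/- {z * se_p:.3f} (95% CI)")              # 0.60 +/- 0.136
```

Plotting the standard error bar alone corresponds to an approximately 68% interval; multiplying by the critical value widens it to the stated confidence level, matching the relationship described in section II.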