just how much the measured value is likely to deviate from the unknown, true, value of the quantity. The art of estimating these deviations should probably be called uncertainty analysis, but for historical reasons it is referred to as error analysis. This document contains brief discussions of how errors are reported, the kinds of errors that can occur, how to estimate random errors, and how to carry error estimates into calculated results. We are not, and will not be, concerned with the "percent error" exercises common in high school, where the student is content with calculating the deviation from some allegedly authoritative number.

Significant figures

Whenever you make a measurement, the number of meaningful digits that you write down implies the error in the measurement. For example, if you say that the length of an object is 0.428 m, you imply an
uncertainty of about 0.001 m. To record this measurement as either 0.4 or 0.42819667 would imply that you know it only to 0.1 m in the first case or to 0.00000001 m in the second. You should report only as many significant figures as are consistent with the estimated error. The quantity 0.428 m is said to have three significant figures, that is, three digits that make sense in terms of the measurement. Notice that this has nothing to do with the "number of decimal places": the same measurement in centimeters would be 42.8 cm and still be a three-significant-figure number.

The accepted convention is that only one uncertain digit is to be reported for a measurement. In the example, if the estimated error is 0.02 m you would report a result of 0.43 ± 0.02 m, not 0.428 ± 0.02 m.

Students are frequently confused about when to count a zero as a significant figure. The rule is: if the zero has a non-zero digit anywhere to its left, then the zero is significant; otherwise it is not.
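The one-uncertain-digit convention above is mechanical enough to automate. A minimal sketch (the function name round_to_error is our own, not from this manual):

```python
from math import floor, log10

def round_to_error(value, error):
    """Round the error to one significant figure, then round the
    value to that same decimal place (the convention above)."""
    # Decimal position of the error's leading digit, e.g. 0.02 -> 2 places
    places = -int(floor(log10(abs(error))))
    return round(value, places), round(error, places)

# 0.428 m with an estimated error of 0.02 m is reported as 0.43 +/- 0.02 m
print(round_to_error(0.428, 0.02))
```

The same helper works regardless of units: round_to_error(42.8, 0.5) keeps one decimal place because the error's leading digit sits in the tenths position.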
of Error, least count
   (b) Estimation
   (c) Average Deviation
   (d) Conflicts
   (e) Standard Error in the Mean
3. What does uncertainty tell me? Range of possible values
4. Relative
and Absolute error
5. Propagation of errors (a) add/subtract (b) multiply/divide (c) powers (d) mixtures of +-*/ (e) other functions
6. Rounding answers properly
7. Significant figures
8. Problems to try
9. Glossary of terms (all terms that are bold face and underlined)
Part II Graphing
Part III The Vernier Caliper

In this manual there will be problems for you to try. They are highlighted in yellow. There are also examples highlighted in green.

1. Systematic and random errors.

No measurement is ever exact. The accuracy (correctness) and precision (number of significant figures) of a measurement are always limited by the degree of refinement of the apparatus used, by the skill of the observer, and by the basic physics of the experiment. In doing experiments we are trying to establish the best values for certain quantities, or trying to validate a theory. We must also give a range of possible true values based on our limited number of measurements.

Why should repeated measurements of a single quantity give different values? Mistakes on the part of the experimenter are possible, but we do not include these in our discussion. A careful researcher should not make mistakes! (Or at least he or she should recognize and correct them.) We use the synonymous terms uncertainty, error, and deviation to represent the variation in measured data.

Two types of errors are possible. Systematic error is the result of a miscalibrated device, or a measuring technique which always makes the measured value larger (or smaller) than the "true" value. An example would be using a steel ruler at liquid-nitrogen temperature to measure the length of a rod: the ruler will contract at the low temperature and therefore overestimate the true length.
Careful design of an experiment will allow us to eliminate, or to correct for, systematic errors. Even when systematic errors are eliminated, there will remain a second type of variation: random error.
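The cold-ruler example above is one systematic error that can be corrected numerically once the expansion coefficient is known. A sketch under assumed values (the coefficient for steel and the temperatures are typical figures we supply, not from this manual):

```python
ALPHA_STEEL = 12e-6   # linear expansion coefficient of steel, per kelvin (typical)
T_REF = 293.0         # temperature at which the ruler reads true, K
T_LN2 = 77.0          # liquid-nitrogen temperature, K

def correct_ruler_reading(reading_m, t_meas=T_LN2, t_ref=T_REF, alpha=ALPHA_STEEL):
    """A contracted ruler's markings sit closer together, so it reads high;
    scaling by (1 + alpha * dT), with dT < 0 here, removes that shift."""
    return reading_m * (1 + alpha * (t_meas - t_ref))

reading = 0.50000                      # rod length as read off the cold ruler, m
print(correct_ruler_reading(reading))  # slightly less than the raw reading
```

Note that the correction is multiplicative: the colder the ruler, the more every reading is inflated, so averaging many readings would not help here.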
the empirical resources are exhausted need we pass on to the dreamy realm of speculation." -- Edwin Hubble, The Realm of the Nebulae (1936)

Uncertainty

To physicists the terms "error" and "uncertainty" do not mean "mistake". Mistakes, such as incorrect calculations due to the improper use of a formula, can be and should be corrected. However, even mistake-free lab measurements have an inherent uncertainty or error. Consider the dartboards shown below, in which the 'grouping' of thrown darts is a proxy for our laboratory measurements. A 'precise' measurement means the darts are close together. An 'accurate' measurement means the darts hit close to the bullseye. Notice the combinations:

Measurements precise, just not very accurate
Measurements accurate, but not precise
Measurements neither precise nor accurate
Measurements both precise and accurate

There are several different kinds and sources of error:

Actual variations in the quantity being measured, e.g. the diameter of a cylindrically shaped object may actually be different in different places. The remedy for this situation is to find the average diameter by taking a number of measurements at a number of different places. Then the scatter within your measurements gives an estimate of the reliability of the average diameter you report. Note that we usually assume that our measured values lie on both sides of the 'true' value, so that averaging our measurements gets us closer to the 'truth'. Another approach, especially suited to the measurement of small quantities, is sometimes called 'stacking': measure the mass of a feather by massing a lot of feathers and dividing the total mass by their number.

Systematic errors in the measuring device used. Suppose your sensor reports values that are consistently shifted from the expected value; averaging a large number of readings is no help for this problem.
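The averaging and 'stacking' remedies above are simple to express in code. A minimal sketch, using the average deviation from the mean as the rough reliability estimate (the diameter values and feather numbers are made up for illustration):

```python
def mean_and_avg_deviation(values):
    """Average of repeated measurements, with the average absolute
    deviation from the mean as a rough estimate of reliability."""
    mean = sum(values) / len(values)
    avg_dev = sum(abs(v - mean) for v in values) / len(values)
    return mean, avg_dev

# Diameter of a cylinder measured at several places (cm); illustrative numbers
diameters = [2.31, 2.34, 2.29, 2.33, 2.32]
d, dev = mean_and_avg_deviation(diameters)
print(f"diameter = {d:.2f} +/- {dev:.2f} cm")

# 'Stacking': mass many feathers at once, then divide by the count
total_mass_g, n_feathers = 8.2, 100   # illustrative values
print(f"one feather is roughly {total_mass_g / n_feathers} g")
```

Note that the scatter-based estimate only helps with random variation; as the text says, it does nothing for a reading that is consistently shifted.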
To eliminate (or at least reduce) such errors, we calibrate the measuring instrument by comparing its measurement against the value of a known standard. It is sometimes quite difficult to identify a systematic error. Get in the habit of checking your equipment carefully. Make a preliminary analysis of your data early in the experiment; if you gather all the data without checking for systematic error, you might have to do it all over again! Random error: 'sometimes stuff
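Calibration against a known standard, as described above, often reduces to estimating a constant offset from the standard and removing it from later readings. A minimal one-point sketch (the certified mass and the readings are our own illustrative numbers, not from this manual):

```python
STANDARD_MASS_G = 100.000   # certified calibration mass (assumed value)

def offset_from_standard(reading_of_standard):
    """Systematic offset of the instrument, estimated from one standard."""
    return reading_of_standard - STANDARD_MASS_G

def calibrated(reading, offset):
    """Remove the constant systematic shift from a raw reading."""
    return reading - offset

offset = offset_from_standard(100.4)   # balance reads the standard 0.4 g high
print(calibrated(57.9, offset))        # roughly 57.5
```

A single standard can only reveal a constant shift; a gain (scale-factor) error would need at least two standards of different size to detect, which is one reason preliminary analysis of early data is worth the effort.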