Error Analysis Rules
Often in the laboratory we measure two or more quantities, each with its individual uncertainty, and then combine the information from these quantities to come up with the final result of our experiment. How can we state the combined result of these measurements and their uncertainties scientifically?
The answer to this fairly common question depends on how the individual measurements are combined in the result. We will treat each case separately.

Addition of measured quantities

If you have measured values for the quantities X, Y,
and Z, with uncertainties dX, dY, and dZ, and your final result R is the sum or difference of these quantities, then the uncertainty dR is:

dR = dX + dY + dZ   (approximation; an upper bound)
dR = [(dX)² + (dY)² + (dZ)²]^(1/2)   (independent random errors)

Here the upper equation is an approximation that can also serve as an upper bound for the error. Please note that the rule is the same for addition and subtraction of quantities.

Example: Suppose we have measured the starting position as x₁ = 9.3 ± 0.2 m and the finishing position as x₂ = 14.4 ± 0.3 m. Then the displacement is:

Dx = x₂ - x₁ = 14.4 m - 9.3 m = 5.1 m

and the error in the displacement is:

(0.2² + 0.3²)^(1/2) m = 0.36 m

Multiplication of measured quantities

In the same way as for sums and differences, we can also state the result for the case of multiplication and division:

dR/|R| = dX/|X| + dY/|Y| + dZ/|Z|   (approximation; an upper bound)
dR/|R| = [(dX/X)² + (dY/Y)² + (dZ/Z)²]^(1/2)   (independent random errors)

Again the upper line is an approximation and the lower line is the exact result for independent random uncertainties in the individual variables. And again please note that for the purpose of error calculation there is no difference between multiplication and division.

Example: We have measured a displacement of x = 5.1 ± 0.4 m during a time of t = 0.4 ± 0.1 s. What is the average velocity and the error in the average velocity?

v = x / t = 5.1 m / 0.4 s = 12.75 m/s

and the uncertainty in the velocity is:

dv = |v| [(dx/x)² + (dt/t)²]^(1/2) = 12.75 m/s × [(0.4/5.1)² + (0.1/0.4)²]^(1/2) = 3.34 m/s

Multiplication by a constant

What if you have measured the uncertainty in an observable X, and you need to multiply it by a constant c that is known exactly? What is the error then? This is easy: just multiply the error in X by the absolute value of the constant, and this will give you the error in R:

dR = |c| dX

If you compare this to the above rule for the multiplication of two quantities, you see that this is just the special case of that rule for zero uncertainty in the constant (dc = 0).
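As an illustrative sketch (not part of the original text), the two worked examples above can be checked in a few lines of Python; the helper function names are my own:

```python
import math

def add_err(*errs):
    # Sum or difference: absolute errors combine in quadrature
    return math.sqrt(sum(e**2 for e in errs))

def mul_err(value, *pairs):
    # Product or quotient: relative errors combine in quadrature;
    # each pair is (quantity, its absolute error)
    return abs(value) * math.sqrt(sum((dq / q)**2 for q, dq in pairs))

# Displacement example: x1 = 9.3 +/- 0.2 m, x2 = 14.4 +/- 0.3 m
dx = 14.4 - 9.3
ddx = add_err(0.2, 0.3)
print(f"Dx = {dx:.1f} +/- {ddx:.2f} m")    # Dx = 5.1 +/- 0.36 m

# Velocity example: x = 5.1 +/- 0.4 m, t = 0.4 +/- 0.1 s
v = 5.1 / 0.4
dv = mul_err(v, (5.1, 0.4), (0.4, 0.1))
print(f"v = {v:.2f} +/- {dv:.2f} m/s")     # v = 12.75 +/- 3.34 m/s
```

Note that multiplication by a constant falls out of `mul_err` automatically: a pair with zero error contributes nothing to the quadrature sum.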
In doing this it is crucial to understand that all measurements of physical quantities are subject to uncertainties. It is never possible
to measure anything exactly. It is good, of course, to make the error as small as possible, but it is always there. And in order to draw valid conclusions the error
must be indicated and dealt with properly. Take the measurement of a person's height as an example. Assuming that her height has been determined to be 5' 8", how accurate is our result? Well, the height of a person depends on how straight she stands, whether she just got up (most people are slightly taller when getting up from a long rest in horizontal position), whether she has her shoes on, and how long her hair is and how it is made up. These inaccuracies could all be called errors of definition. A quantity such as height is not exactly defined without specifying many other circumstances. Even if you could precisely specify the "circumstances," your result would still have an error associated with it. The scale you are using is of limited accuracy; when you read the scale, you may have to estimate a fraction between the marks on the scale, etc. If the result of a measurement is to have meaning, it cannot consist of the measured value alone. An indication of how accurate the result is must be included also. Indeed, typically more effort is required to determine the error or uncertainty in a measurement than to perform the measurement itself. Thus, the result of any physical measurement has two essential components: (1) a numerical value (in a specified system of units) giving the best estimate possible of the quantity measured, and (2) the degree of uncertainty associated with this estimated value. For example, a measurement of the width of a table would yield a result such as 95.3 ± 0.1 cm.

Significant Figures

The significant figures
find that the error in this measurement is 0.001 in. To find the area we multiply the width (W) and the length (L). The area then is L × W = (1.001 in) × (1.001 in) = 1.002001 in², which rounds to 1.002 in². This gives an error of 0.002 if we were told that the square was exactly 1 inch on a side. This is an example of correlated (or non-independent) error, since the errors in L and W are the same: the error in L is correlated with that in W. Now suppose that we made independent determinations of the width and the length, each with an error of 0.001 in. In this case, where two independent measurements are performed, the errors are independent, or uncorrelated. Therefore the error in the result (the area) is calculated differently, as follows (rule 1 below). First, find the relative error (error/quantity) in each of the quantities that enter the calculation; the relative error in the width is 0.001/1.001 = 0.00099900. The resultant relative error is

Relative error in area = [(0.001/1.001)² + (0.001/1.001)²]^(1/2) = 0.0014128

Therefore the absolute error is (relative error) × (quantity) = 0.0014128 × 1.002001 = 0.001415627, which rounds to 0.001. Therefore the area is 1.002 ± 0.001 in². This shows that random relative errors do not simply add arithmetically; rather, they combine by the root-mean-square sum rule (Pythagorean theorem). Let's summarize some of the rules that apply to combining errors when adding (or subtracting) and multiplying (or dividing) various quantities. This topic is also known as error propagation.

2. Error propagation for special cases: Let σx denote the error in a quantity x. Further assume that two quantities x and y and their errors σx and σy are measured independently. In this case the relative and percent errors are defined as

Relative error = σx / x, Percent error = 100 (σx / x)

Multiplying or dividing by a constant: the resultant absolute error is also multiplied or divided by the same constant.
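The contrast between correlated and independent errors in the area example above can be made concrete with a short Python sketch (the variable names are my own, not from the original):

```python
import math

L, W, err = 1.001, 1.001, 0.001  # measured sides and their absolute error

area = L * W  # 1.002001 in^2

# Correlated case: L and W come from the SAME measurement, so the
# relative errors add linearly: dA = A * (dL/L + dW/W)
corr = area * (err / L + err / W)

# Independent case: relative errors combine in quadrature
# (root-mean-square sum rule)
rel = math.sqrt((err / L)**2 + (err / W)**2)
indep = area * rel

print(round(area, 3), round(corr, 3), round(indep, 3))  # 1.002 0.002 0.001
```

The quadrature result (0.001) is smaller than the correlated result (0.002) because independent random errors partially cancel rather than reinforcing each other.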
Multiplication or division: the relative errors combine in quadrature, σR/|R| = [(σx/x)² + (σy/y)²]^(1/2). Addition or subtraction: in this case the absolute errors obey the Pythagorean rule, σR = (σx² + σy²)^(1/2).
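The three special-case rules just summarized can be collected into small helper functions; this is a minimal sketch assuming independent errors, with names of my own choosing:

```python
import math

def const_err(c, sx):
    """Multiplying x by an exact constant c: the absolute error scales by |c|."""
    return abs(c) * sx

def product_rel_err(rel_x, rel_y):
    """Product or quotient: relative errors combine in quadrature."""
    return math.sqrt(rel_x**2 + rel_y**2)

def sum_abs_err(sx, sy):
    """Sum or difference: absolute errors obey the Pythagorean rule."""
    return math.sqrt(sx**2 + sy**2)

# e.g. converting x = 5.1 +/- 0.4 m to centimeters (c = 100):
print(round(const_err(100, 0.4), 2))   # 40.0
# e.g. the displacement example: errors 0.2 m and 0.3 m:
print(round(sum_abs_err(0.2, 0.3), 2))  # 0.36
```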