Error Propagation Example
or more quantities, each with its individual uncertainty, and then combine the information from these quantities to come up with a final result for our experiment. How can you state the combined result of these measurements and their uncertainties scientifically? The answer to this fairly common question depends on how the individual measurements are combined in the result. We will treat each
case separately.

Addition of measured quantities

If you have measured values for the quantities X, Y, and Z, with uncertainties dX, dY, and dZ, and your final result, R, is the sum or difference of these quantities, then the uncertainty dR is:

dR = dX + dY + dZ
dR = sqrt(dX² + dY² + dZ²)

Here the upper equation is an approximation that can also serve as an upper bound for the error; the lower equation holds for independent random uncertainties. Please note that the rule is the same for addition and subtraction of quantities.

Example: Suppose we have measured the starting position as x1 = 9.3 ± 0.2 m and the finishing position as x2 = 14.4 ± 0.3 m. Then the displacement is Dx = x2 − x1 = 14.4 m − 9.3 m = 5.1 m, and the error in the displacement is (0.2² + 0.3²)^(1/2) m = 0.36 m.

Multiplication of measured quantities

In the same way as for sums and differences, we can also state the result
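The two rules above are easy to check with a short Python sketch (this is my own illustration, not part of the original page; the helper names are mine):

```python
import math

def upper_bound_error(*errors):
    # Worst-case estimate: the individual errors simply add, dR = dX + dY + ...
    return sum(abs(e) for e in errors)

def quadrature_error(*errors):
    # Independent random errors add in quadrature: dR = sqrt(dX^2 + dY^2 + ...)
    return math.sqrt(sum(e * e for e in errors))

# Displacement example from the text: x1 = 9.3 +/- 0.2 m, x2 = 14.4 +/- 0.3 m
dx = 14.4 - 9.3
print(round(dx, 1), round(quadrature_error(0.2, 0.3), 2))  # 5.1 0.36
print(round(upper_bound_error(0.2, 0.3), 2))               # 0.5 (pessimistic bound)
```

Note that the upper-bound estimate (0.5 m) is noticeably larger than the quadrature result (0.36 m), which is why the sum-of-errors formula is only used as a quick pessimistic limit.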
for the case of multiplication and division:

dR = |R| (dX/X + dY/Y)
dR = |R| sqrt((dX/X)² + (dY/Y)²)

Again the upper line is an approximation, and the lower line is the exact result for independent random uncertainties in the individual variables. And again, please note that for the purpose of error calculation there is no difference between multiplication and division.

Example: We have measured a displacement of x = 5.1 ± 0.4 m during a time of t = 0.4 ± 0.1 s. What is the average velocity, and what is the error in the average velocity? v = x / t = 5.1 m / 0.4 s = 12.75 m/s, and the uncertainty in the velocity is dv = |v| [(dx/x)² + (dt/t)²]^(1/2) = 12.75 m/s × [(0.4/5.1)² + (0.1/0.4)²]^(1/2) = 3.34 m/s.

Multiplication by a constant

What if you have measured the uncertainty in an observable X, and you need to multiply it by a constant c that is known exactly? What is the error then? This is easy: just multiply the error in X by the absolute value of the constant, and this will give you the error in R: dR = |c| dX. If you compare this to the above rule for multiplication of two quantities, you see that this is just the special case of that rule for zero uncertainty in the constant, dc = 0. Example: If an object is released from rest and is in free fall, and if you measure the velocity of this object at some point to be v = −3.8 ± 0.3 m/s
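The quotient rule and the constant-multiplier special case can be verified numerically. This is a minimal sketch of my own (the function name is not from the text):

```python
import math

def relative_quadrature_error(value, *quantities):
    # For products and quotients: dR = |R| * sqrt((dX/X)^2 + (dY/Y)^2 + ...)
    # Each quantity is a (measured value, uncertainty) pair.
    return abs(value) * math.sqrt(sum((dq / q) ** 2 for q, dq in quantities))

# Velocity example from the text: x = 5.1 +/- 0.4 m, t = 0.4 +/- 0.1 s
v = 5.1 / 0.4
dv = relative_quadrature_error(v, (5.1, 0.4), (0.4, 0.1))
print(round(v, 2), round(dv, 2))  # 12.75 3.34

# Multiplication by an exact constant c is the special case dc = 0: dR = |c| * dX
c, dX = 2.0, 0.3
print(abs(c) * dX)  # 0.6
```

Most of the 3.34 m/s comes from the timing term (0.1/0.4 = 25% relative error), so the time measurement dominates the combined uncertainty here.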
"change" in the value of that quantity. Results are obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of a result. We say that "errors in the data propagate through the calculations to produce error in the result."

3.2 MAXIMUM ERROR

We first consider how data errors propagate through calculations to affect error limits
(or maximum error) of results. It's easiest to first consider determinate errors, which have explicit signs. This leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to indeterminate errors. The underlying mathematics is that of "finite differences," an algebra for dealing with numbers which have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.

Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:

[3-1] A + ΔA and B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols "ΔA" and "ΔB." The result of adding A and B is expressed by the equation R = A + B. When errors are explicitly included, it is written:

(A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)

So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

[3-2] R + ΔR = (A + B) + (ΔA + ΔB)

The error in R is ΔR = ΔA + ΔB. We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities; in that case the error in the result is the difference in the errors. Summarizing:

Sum and difference rule. When two quantities are added (or subtracted), their determinate errors add (or subtract).

Now consider multiplication: R = AB. With errors explicitly included: R + ΔR = (A + ΔA)(B + ΔB) =
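The finite-difference algebra above is easy to verify numerically. This sketch is my own illustration with made-up numbers: it confirms the sum rule exactly and shows that the product rule, kept to first order, neglects only the tiny ΔA·ΔB cross term:

```python
# Hypothetical data with signed (determinate) errors.
A, dA = 10.0, 0.4
B, dB = 3.0, -0.1

# Sum rule: (A + dA) + (B + dB) = (A + B) + (dA + dB), so dR = dA + dB.
dR_sum = ((A + dA) + (B + dB)) - (A + B)
print(round(dR_sum, 12))  # 0.3, i.e. dA + dB

# Product: (A + dA)(B + dB) = AB + A*dB + B*dA + dA*dB.
# To first order, dR ~ A*dB + B*dA; the leftover dA*dB term is second order.
dR_prod = (A + dA) * (B + dB) - A * B
first_order = A * dB + B * dA
print(round(dR_prod, 12), round(first_order, 12))  # 0.16 0.2
print(round(dR_prod - first_order, 12))            # -0.04, exactly dA*dB
```

Because the errors carry signs, the signed contributions A·ΔB and B·ΔA can partially cancel, which is exactly what determinate-error arithmetic is meant to track.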
Error Propagation (Projects Lab, University of Utah)

Very often we use our physical measurements as a means to some computational end. We may, for example, use measurements of mass and velocity to calculate kinetic energy, or temperature and pressure to calculate molar volume. While we should have a fair grasp on the uncertainty inherent in our physical measurements, we are also interested in bounding the uncertainty in those calculated values. Methods of error propagation allow us to translate the error in the independent variables into the error in the dependent variables of our functions.

Introduction: error propagation

Let's take a very simple example. (This example will illustrate the difference of two variables, but the results for error propagation in addition are the same.) Say we wanted to know the weight of a liquid in a container. We take the weight of the empty container, w1, then fill it with our liquid and take the weight again, w2. Each weight, w1 and w2, comes with a range of uncertainty, ±e1 and ±e2 respectively (always at some confidence level). In this example, these errors depend on the precision of the scale we used. Of course, the weight of the liquid, w0, should be the difference, w2 − w1, but what is our uncertainty in w0? At first glance, we might expect the range of w0 to be between the maximum and minimum values possible if we take our measured weights at their extremes:

(1) w0,max = (w2 + e2) − (w1 − e1)
(2) w0,min = (w2 − e2) − (w1 + e1)

making the uncertainty (error) in w0:

(3) e0 = e1 + e2.

However, in reality, this value of e0 is too pessimistic. Think of tossing two six-sided dice and adding the resulting numbers together.
With an individual die, we have a 1 in 6 chance (16.7%) of getting each number, including the extremes of 1 and 6. However, when we add the dice together, the numbers at the extremes of that calculated value become less likely: for the extreme values of 2 or 12, the probability drops to (1/6)·(1/6) = 2.78%, while the mean value of 7 remains at 16.7%. To illustrate this concept with our liquid-weight example, let's assume w1 = 1 g, w2 = 2 g, and the error associated with both measurements is ±0.1 g. For simplicity and illustration, assume the measurements are normally distributed and the error we are reporting is one standard deviation (this confidence interval is associated with a low confidence level, but the sa
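Both the dice argument and the liquid-weight example can be checked by brute force. This is my own Monte Carlo sketch using the numbers assumed in the text (w1 = 1 g, w2 = 2 g, sigma = 0.1 g); the spread of w2 − w1 comes out near sqrt(0.1² + 0.1²) ≈ 0.14 g rather than the worst-case 0.2 g:

```python
import math
import random
from collections import Counter
from itertools import product

# Exact dice probabilities: extremes become rare once the two dice are summed.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(round(counts[2] / 36, 4), round(counts[7] / 36, 4))  # 0.0278 0.1667

# Monte Carlo on the liquid-weight example: w1 ~ N(1, 0.1), w2 ~ N(2, 0.1).
random.seed(42)
n = 200_000
w0 = [random.gauss(2.0, 0.1) - random.gauss(1.0, 0.1) for _ in range(n)]
mean = sum(w0) / n
sigma = math.sqrt(sum((x - mean) ** 2 for x in w0) / n)
print(round(mean, 2), round(sigma, 2))  # close to 1.0 and 0.14
```

The simulated standard deviation of the difference lands near 0.14 g, confirming that the worst-case bound e0 = e1 + e2 = 0.2 g is indeed too pessimistic when the errors are independent.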