Error Propagation
We often measure two or more quantities, each with its individual uncertainty, and then combine the information from these quantities to come up with the final result of our experiment. How should you state the combined result of these measurements and their uncertainties scientifically? The answer to this fairly common question depends on how the individual measurements are combined in the result. We will treat each case separately.

Addition and subtraction of measured quantities

If you have measured values for the quantities X, Y, and Z, with uncertainties dX, dY, and dZ, and your final result R is the sum or difference of these quantities, then the uncertainty dR is:

dR = dX + dY + dZ
dR = [ (dX)^2 + (dY)^2 + (dZ)^2 ]^(1/2)

Here the upper equation is an approximation that can also serve as an upper bound for the error; the lower equation is exact for independent random errors. Please note that the rule is the same for addition and subtraction of quantities.

Example: Suppose we have measured the starting position as x1 = 9.3 ± 0.2 m and the finishing position as x2 = 14.4 ± 0.3 m. Then the displacement is Dx = x2 - x1 = 14.4 m - 9.3 m = 5.1 m, and the error in the displacement is dDx = (0.2^2 + 0.3^2)^(1/2) m = 0.36 m.

Multiplication and division of measured quantities

In the same way as for sums and differences, we can also state the result for the case of multiplication and division:

dR/|R| = dX/|X| + dY/|Y|
dR/|R| = [ (dX/X)^2 + (dY/Y)^2 ]^(1/2)

Again the upper line is
an approximation and the lower line is the exact result for independent random uncertainties in the individual variables. And again, please note that for the purpose of error calculation there is no difference between multiplication and division.

Example: We have measured a displacement of x = 5.1 ± 0.4 m during a time of t = 0.4 ± 0.1 s. What are the average velocity and the error in the average velocity? v = x / t = 5.1 m / 0.4 s = 12.75 m/s, and the uncertainty in the velocity is dv = |v| [ (dx/x)^2 + (dt/t)^2 ]^(1/2) = 12.75 m/s × [ (0.4/5.1)^2 + (0.1/0.4)^2 ]^(1/2) = 3.34 m/s.

Multiplication by a constant

What if you have measured the uncertainty in an observable X, and you need to multiply it by a constant c that is known exactly? What is the error then? This is easy: just multiply the error in X by the absolute value of the constant, and this will give you the error in R:

R = cX,  dR = |c| dX

If you compare this to the rule above for the multiplication of two quantities, you see that this is just the special case of that rule for an exactly known constant, dc = 0.

Example: If an object is released from rest and is in free fall, and you measure its velocity at some point to be v = -3.8 ± 0.3 m/s, how long has it been in free fall? Answer: we can calculate the time as t = |v|/g (g = 9.81 m/s^2 is assumed to be known exactly), so t = 3.8/9.81 s = 0.39 s, and the error is dt = dv/g = (0.3 m/s)/(9.81 m/s^2) = 0.03 s.
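The three quadrature rules above can be collected into a small Python sketch. The helper function names are mine, chosen for illustration; they are not from any standard library:

```python
import math

def err_sum(*errs):
    # Sum/difference rule: independent errors add in quadrature.
    return math.sqrt(sum(e**2 for e in errs))

def err_product(value, *pairs):
    # Product/quotient rule: fractional errors add in quadrature.
    # Each pair is (measured value, its uncertainty).
    return abs(value) * math.sqrt(sum((dx / x)**2 for x, dx in pairs))

def err_const(c, dx):
    # Multiplication by an exact constant c: the error scales by |c|.
    return abs(c) * dx

# Displacement example: Dx = x2 - x1 = 5.1 m
print(round(err_sum(0.2, 0.3), 2))                          # 0.36
# Velocity example: v = x/t = 12.75 m/s
print(round(err_product(12.75, (5.1, 0.4), (0.4, 0.1)), 2)) # 3.34
# Free-fall time: t = |v|/g, so dt = dv/g
print(round(err_const(1 / 9.81, 0.3), 2))                   # 0.03
```

The printed values reproduce the three worked examples above.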
"change" in the value of that quantity. Results are obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of a result. We say
that "errors in the data propagate through the calculations to produce error in the result."

3.2 MAXIMUM ERROR

We first consider how data errors propagate through calculations to affect error limits (or maximum error) of results. It is easiest to first consider determinate errors, which have explicit sign. This leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to indeterminate errors. The underlying mathematics is that of "finite differences," an algebra for dealing with numbers which have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.

Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:

[3-1]  A + ΔA  and  B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols "ΔA" and "ΔB." The result of adding A and B is expressed by the equation R = A + B. When errors are explicitly included, it is written:

(A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)

So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

[3-2]  R + ΔR = (A + B) + (ΔA + ΔB)

The error in R is ΔR = ΔA + ΔB. We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities; in that case the error in the result is the difference in the errors. Summarizing:

Sum and difference rule. When two quantities are added (or subtracted), their determinate errors add (or subtract).
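The sum rule can be checked numerically. The values below are illustrative; the key point is that determinate errors carry their signs:

```python
# Two data quantities with signed (determinate) errors -- illustrative values.
A, dA = 9.3, +0.2
B, dB = 14.4, -0.3

R = A + B                     # result without errors
R_with = (A + dA) + (B + dB)  # result with errors explicitly included
dR = R_with - R               # the error in the result

print(round(dR, 6))           # -0.1, which is exactly dA + dB
```

Because the errors are signed, they can partially cancel, unlike the quadrature rule for random uncertainties.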
Now consider multiplication: R = AB. With errors explicitly included:

[3-3]  R + ΔR = (A + ΔA)(B + ΔB) = AB + (ΔA)B + A(ΔB) + (ΔA)(ΔB)

or:

ΔR = (ΔA)B + A(ΔB) + (ΔA)(ΔB)

This doesn't look like a simple rule. However, when we express the errors in fractional form it becomes one: dividing by R = AB gives ΔR/R = ΔA/A + ΔB/B + (ΔA/A)(ΔB/B), and since the errors are small, the product of the two fractional errors may be neglected. The fractional determinate error in a product is then simply the sum of the fractional errors in its factors.
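Expanding the product (A + ΔA)(B + ΔB) with small made-up numbers shows that the cross term (ΔA)(ΔB) is much smaller than the other two terms:

```python
# Illustrative (made-up) values for a product R = A*B with determinate errors.
A, dA = 5.1, 0.1
B, dB = 0.4, 0.02

R = A * B
dR_exact = (A + dA) * (B + dB) - R  # = (dA)*B + A*(dB) + (dA)*(dB)
dR_linear = dA * B + A * dB         # cross term (dA)*(dB) dropped

print(round(dR_exact, 6))           # 0.144
print(round(dR_linear, 6))          # 0.142 -- the cross term adds only 0.002
```

Dropping the cross term is what makes the fractional-error rule for products linear.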
How does one calculate uncertainty in an exponent? How do I calculate this value: (4.36 ± 0.16)^(2.35 ± 0.04)? I'm confused about how to combine the uncertainties.

Julius Bier Kirkegaard (physics, computers, 'n' stuff):

The standard way to propagate uncertainties is to do linear estimations. If z = x^y, then the uncertainty of z coming from the uncertainty of x is (by differentiation)

δz_x = y x^(y-1) δx = 2.35 · 4.36^1.35 · 0.16 = 2.75

and from y

δz_y = x^y ln(x) δy = 4.36^2.35 · ln(4.36) · 0.04 = 1.87

Assuming these are Gaussian standard deviations (as is typically meant by ±), you combine these in quadrature:

δz = sqrt(2.75^2 + 1.87^2) = 3.3

This is less than the value obtained by taking the extremes (as shown by David Joyce), which assumes the uncertainties are extreme ranges, whereas the present method propagates standard deviations. The linear approximation, however, can introduce some error of its own; to truly get the right result you would have to do Monte Carlo sampling.
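The linear estimate above can be reproduced directly; small differences from the quoted figures come from rounding the intermediate values:

```python
import math

x, dx = 4.36, 0.16
y, dy = 2.35, 0.04

dz_x = y * x**(y - 1) * dx      # contribution from the uncertainty in x
dz_y = x**y * math.log(x) * dy  # contribution from the uncertainty in y
dz = math.hypot(dz_x, dz_y)     # combine the two in quadrature

print(round(dz, 1))             # 3.3
```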
But I'm sure that both methods presented here will suffice. For the present problem, Monte Carlo sampling yields 3.34, calculated in Python by:

import numpy as np
dx = 0.16 * np.random.randn(10000000)
dy = 0.04 * np.random.randn(10000000)
z = (4.36 + dx) ** (2.35 + dy)
print(np.std(z))
print(np.sqrt(2.75**2 + 1.87**2))

David Joyce (Professor of Mathematics at Clark University): Here's a