Error Propagation: The Natural Logarithm
Question: The error of the natural logarithm (tagged error-analysis)

Can anyone explain why the error for $\ln (x)$ (where for $x$ we have $x\pm\Delta x$) is simply said to be $\frac{\Delta x}{x}$? I would very much appreciate a somewhat rigorous rationalization of this step. Additionally, is this the case for other logarithms (e.g. $\log_2(x)$), or how would that be done?
Accepted answer:

Simple error analysis assumes that the error $\Delta f(x)$ of a function, for a given error $\Delta x$ of the input argument, is approximately
$$ \Delta f(x) \approx \frac{\text{d}f(x)}{\text{d}x}\cdot\Delta x $$
The mathematical reasoning behind this is the Taylor series and the fact that the derivative $\frac{\text{d}f(x)}{\text{d}x}$ describes how the function $f(x)$ changes when its input argument changes a little bit. In fact this assumption only makes sense if $\Delta x \ll x$ (see Emilio Pisanty's answer for details on this) and if your function isn't too nonlinear at the specific point (in which case presenting a result in the form $f(x) \pm \Delta f(x)$ wouldn't make sense anyway). Note that sometimes $\left| \frac{\text{d}f(x)}{\text{d}x}\right|$ is used to avoid getting negative errors. Since
$$ \frac{\text{d}\ln(x)}{\text{d}x} = \frac{1}{x} $$
the error would be
$$ \Delta \ln(x) \approx \frac{\Delta x}{x} $$
For arbitrary logarithms we can use the change of the logarithm base
$$ \log_b x = \frac{\ln x}{\ln b} \qquad (\ln x = \log_\text{e} x) $$
to obtain
$$ \Delta \log_b x \approx \frac{\Delta x}{x \cdot \ln b} $$

Comment (Emilio Pisanty): This (nice) answer is correct for the case that $\Delta x\ll x$ but will otherwise fail; see my answer below for why and what to do there.
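As a quick numerical check (not part of the original answer), here is a minimal Python sketch comparing the first-order estimate $\Delta f \approx |f'(x)|\,\Delta x$ with the actual half-spread of $\ln(x \pm \Delta x)$; the values x = 100, Δx = 2 are made up for illustration:

```python
import math

def propagated_error_ln(x, dx):
    """First-order estimate: d(ln x)/dx = 1/x, so delta(ln x) ~ dx / x."""
    return dx / x

def propagated_error_log(x, dx, base):
    """Same rule after the change of base log_b(x) = ln(x) / ln(b)."""
    return dx / (x * math.log(base))

# Illustrative values (not from the original question)
x, dx = 100.0, 2.0

est = propagated_error_ln(x, dx)
# Exact half-spread of ln over the interval [x - dx, x + dx]
exact = 0.5 * (math.log(x + dx) - math.log(x - dx))

print(f"first-order estimate: {est:.6f}")   # 0.020000
print(f"exact half-spread:    {exact:.6f}")  # agrees closely because dx << x
print(f"log2 error estimate:  {propagated_error_log(x, dx, 2):.6f}")
```

The agreement between the two printed numbers illustrates why the linear approximation is adequate whenever $\Delta x \ll x$.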
Usually the quantity of interest is not measured directly but is calculated from directly measured quantities, such as dividing a distance by a time to get a speed, or adding two lengths to get a total length. Now that we have learned how to determine the error in the directly measured quantities, we need to learn how these errors propagate to an error in the result.

We assume that the two directly measured quantities are $X$ and $Y$, with errors $\Delta X$ and $\Delta Y$ respectively. The measurements $X$ and $Y$ must be independent of each other. The fractional error is the value of the error divided by the value of the quantity: $\Delta X / X$. The fractional error multiplied by 100 is the percentage error. Everything in this section assumes that the error is "small" compared to the value itself, i.e. that the fractional error is much less than one.

For many situations, we can find the error in the result $Z$ using three simple rules:

Rule 1. If $Z = X + Y$ or $Z = X - Y$, then
$$ \Delta Z = \sqrt{(\Delta X)^2 + (\Delta Y)^2} $$
In words, this says that the error in the result of an addition or subtraction is the square root of the sum of the squares of the errors in the quantities being added or subtracted. This mathematical procedure, also used in Pythagoras' theorem about right triangles, is called quadrature.

Rule 2. If $Z = X \cdot Y$ or $Z = X / Y$, then
$$ \frac{\Delta Z}{Z} = \sqrt{\left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2} $$
In this case also the errors are combined in quadrature, but this time it is the fractional errors, i.e. the error in the quantity divided by the value of the quantity, that are combined. Sometimes the fractional error is called the relative error. The above form emphasises the similarity with Rule 1. However, in order to calculate the value of $\Delta Z$ you would use the following form:
$$ \Delta Z = Z \sqrt{\left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2} $$

Rule 3. If $Z = X^n$, then
$$ \frac{\Delta Z}{Z} = |n|\,\frac{\Delta X}{X} \qquad \text{or equivalently} \qquad \Delta Z = |n|\, X^{n-1}\, \Delta X $$
For the square of a quantity, $X^2$, you might reason that this is just $X$ times $X$ and use Rule 2. This is wrong because Rules 1 and 2 are only for when the two quantities being combined, $X$ and $Y$, are independent of each other. Here there is only one measurement of one quantity. (A short numerical sketch applying these rules is given after the questions below.)

Question 9.1. Does the first form of Rule 3 look familiar to you? What does it remind you of? (Hint: change the delta's to d's.)

Question 9.2. A student measures three lengths a, b and c in cm and a time t in seconds: a = 50 ± 4, b = 20 ± 3, c = 70 ± 3, t = 2.1 ± 0.1. Calculate a + b, a + b + c, a / t, and (a + c) / t.

Question 9.3. Calculate (1.23 ± …
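The following Python sketch (not part of the original page) implements the three rules under the stated independence assumption and applies them to the numbers in Question 9.2; the function names are my own:

```python
import math

def add_sub_error(dx, dy):
    """Rule 1: Z = X + Y or Z = X - Y -> absolute errors add in quadrature."""
    return math.sqrt(dx**2 + dy**2)

def mul_div_error(z, x, dx, y, dy):
    """Rule 2: Z = X*Y or Z = X/Y -> fractional errors add in quadrature."""
    return abs(z) * math.sqrt((dx / x)**2 + (dy / y)**2)

def power_error(z, x, dx, n):
    """Rule 3: Z = X**n -> fractional error is |n| times that of X."""
    return abs(z) * abs(n) * dx / abs(x)

# Measurements from Question 9.2 (a, b, c in cm; t in s)
a, da = 50.0, 4.0
b, db = 20.0, 3.0
c, dc = 70.0, 3.0
t, dt = 2.1, 0.1

# a + b  (Rule 1)
print(f"a + b     = {a + b:.0f} ± {add_sub_error(da, db):.0f} cm")
# a + b + c  (Rule 1 applied twice)
print(f"a + b + c = {a + b + c:.0f} ± {add_sub_error(add_sub_error(da, db), dc):.0f} cm")
# a / t  (Rule 2)
v = a / t
print(f"a / t     = {v:.1f} ± {mul_div_error(v, a, da, t, dt):.1f} cm/s")
# (a + c) / t  (Rule 1, then Rule 2)
s, ds = a + c, add_sub_error(da, dc)
w = s / t
print(f"(a+c)/t   = {w:.1f} ± {mul_div_error(w, s, ds, t, dt):.1f} cm/s")
```

For example, the first line prints 70 ± 5 cm, since $\sqrt{4^2 + 3^2} = 5$.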
In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function.

The uncertainty u can be expressed in a number of ways. It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error (Δx)/x, which is usually written as a percentage. Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, the positive square root of the variance, σ². The value of a quantity and its error are then expressed as an interval x ± u. If the statistical probability distribution of the variable is known or can be assumed, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are ± one standard deviation from the value, that is, there is approximately a 68% probability that the true value lies in the region x ± σ.

If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.[1]

Linear combinations. Let $\{f_k(x_1, x_2, \dots, x_n)\}$ be a set of $m$ functions which are linear combinations of $n$ variables $x_1, x_2, \dots, x_n$ with combination coefficients $A_{k1}, A_{k2}, \dots, A_{kn}$, $(k = 1 \dots m)$:
$$ f_k = \sum_{i}^{n} A_{ki} x_i \quad \text{or, in matrix notation,} \quad \mathbf{f} = \mathbf{A}\mathbf{x}. $$
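To make the linear case concrete, here is a small NumPy sketch (my own illustration, not taken from the article) that propagates an input covariance matrix through a linear combination using the well-known result $\Sigma^{f} = \mathbf{A}\,\Sigma^{x}\,\mathbf{A}^\top$; the matrices and standard deviations are arbitrary example values:

```python
import numpy as np

# Example (made-up) linear combination f = A x of n = 3 measured variables
A = np.array([[1.0, 2.0,  0.0],   # f1 = x1 + 2*x2
              [0.0, 1.0, -1.0]])  # f2 = x2 - x3

# Standard deviations of the inputs and their correlations (illustrative values)
sigma_x = np.array([0.5, 0.3, 0.2])
corr_x = np.array([[1.0, 0.2, 0.0],
                   [0.2, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
cov_x = np.outer(sigma_x, sigma_x) * corr_x  # covariance matrix of the inputs

# Propagate: for a linear combination, the covariance of f is A Σ_x Aᵀ
cov_f = A @ cov_x @ A.T
sigma_f = np.sqrt(np.diag(cov_f))

print("covariance of f:\n", cov_f)
print("standard deviations of f:", sigma_f)
```

Note how the off-diagonal terms of cov_x (the correlated measurement errors mentioned above) feed directly into the uncertainties of the combined quantities.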