Lognormal Error Propagation
Error propagation — non-normal (Cross Validated, asked Jan 21 '15)

Question: I have a dataset of ~2000 points. Each of those points has a standard error value associated with it, and it is assumed that the data points and errors are uncorrelated. Both the dataset and the errors follow a lognormal distribution. My goal is to sum all of those data points to get a total value, and to get the standard error for that sum. Can I use the formula

$SE_{total} = \sqrt{SE_1^2 + SE_2^2 + \dots + SE_n^2}$

even though the distributions are not normal? I apologize that this is a bit of a repeat of similar questions on this site, but the more I read about this issue the more confused I get!

Answer: The simple answer is yes: you can use the formula, and it is exact. If your goal is simply to compute the standard error, this formula works for any distributions, as long as your observations are independent.

Bonus: if you want to go a little further and determine the error distribution, not just the standard error, then it gets a bit more complicated. You should be able to apply the central limit theorem (CLT) in Lyapunov's or Lindeberg's formulation. If your errors are truly independent and the variances are finite, then the assumptions of the CLT should be satisfied, and the sum of ~2000 independent terms will be approximately normal.
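The claim that the quadrature formula is exact for any distribution with finite variance can be checked with a quick simulation. This is a minimal sketch; the number of points, the σ values, and the seed are illustrative choices, not taken from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)            # seed chosen arbitrarily
n_points = 500                            # stand-in for the ~2000 points
sigmas = rng.uniform(0.1, 0.5, n_points)  # underlying normal sigmas, one per point

# A lognormal with mu=0 and a given sigma has
# variance (exp(sigma^2) - 1) * exp(sigma^2).
point_sds = np.sqrt((np.exp(sigmas**2) - 1) * np.exp(sigmas**2))

# Simulate many replicates of the full dataset and sum each one.
n_trials = 5000
data = rng.lognormal(mean=0.0, sigma=sigmas, size=(n_trials, n_points))
totals = data.sum(axis=1)

se_formula = np.sqrt(np.sum(point_sds**2))  # sqrt(SE_1^2 + ... + SE_n^2)
se_empirical = totals.std()
print(se_formula, se_empirical)             # agree up to Monte Carlo noise
```

The standard error of a sum of independent terms depends only on their variances, so no normality assumption is needed; normality only enters when characterizing the shape of the total's distribution via the CLT.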
Propagation of uncertainty (Wikipedia)

In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements, they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function.
The uncertainty u can be expressed in a number of ways. It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error (Δx)/x, which is usually written as a percentage. Most commonly, the uncertainty on
a quantity is quantified in terms of the standard deviation, σ, the positive square root of the variance, σ². The value of a quantity and its error are then expressed as an interval x ± u. If the statistical probability distribution of the variable is known or can be assumed, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are ± one standard deviation from the value; that is, there is approximately a 68% probability that the true value lies in the region x ± σ.

If the uncertainties are correlated, then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.[1]

Linear combinations

Let $\{f_k(x_1, x_2, \dots, x_n)\}$ be a set of $m$ functions which are linear combinations of $n$ variables $x_1, x_2, \dots, x_n$ with combination coefficients $A_{k1}, A_{k2}, \dots, A_{kn}$, $(k = 1, \dots, m)$:

$$f_k = \sum_{i=1}^{n} A_{ki} x_i \quad \text{or} \quad \mathbf{f} = \mathbf{A}\mathbf{x},$$

and let the variance-covariance matrix of x be denoted by $\Sigma^x$:

$$\Sigma^x = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} & \cdots \\ \sigma_{12} & \sigma_2^2 & \sigma_{23} & \cdots \\ \sigma_{13} & \sigma_{23} & \sigma_3^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

The variance-covariance matrix of f then follows as $\Sigma^f = \mathbf{A}\,\Sigma^x\,\mathbf{A}^\top$.
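The linear-combination rule $\Sigma^f = \mathbf{A}\,\Sigma^x\,\mathbf{A}^\top$ can be verified numerically. In this sketch the coefficient matrix and covariance values are purely illustrative:

```python
import numpy as np

# Variance-covariance matrix of x (symmetric; values are illustrative)
cov_x = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

# Coefficients of m = 2 linear combinations f = A x
A = np.array([[1.0,  1.0, 0.0],
              [2.0, -1.0, 3.0]])

cov_f = A @ cov_x @ A.T   # Sigma^f = A Sigma^x A^T

# Cross-check against direct Monte Carlo sampling
rng = np.random.default_rng(1)
x = rng.multivariate_normal(np.zeros(3), cov_x, size=200_000)
cov_f_mc = np.cov((x @ A.T).T)   # rows of the argument are the m variables
print(cov_f)
print(cov_f_mc)                  # agrees within sampling noise
```

The analytic rule holds for any distribution of x with the given covariance; the normal draw here is only a convenient way to sample with a prescribed covariance matrix.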
Error Propagation (Projects Lab, University of Utah)

Very often we use our physical measurements as a means to some computational end. We may, for example, use measurements of mass and velocity to calculate kinetic energy, or temperature and pressure to calculate molar volume. While we should have a fair grasp of the uncertainty inherent in our physical measurements, we are also interested in bounding the uncertainty in those calculated values. Methods of error propagation allow us to translate the error in independent variables into the error in the dependent variables of our functions.

Introduction: Let's take a very simple example (this example illustrates the difference of two variables, but the results for error propagation in addition are the same). Say we wanted to know the weight of a liquid in a container. We take the weight of the empty container, w1, then fill it with our liquid and take the weight again, w2. Each weight, w1 and w2, comes with a range of uncertainty, ±e1 and ±e2 respectively (always at some confidence level). In this example, these errors depend on the precision of the scale we used. Of course, the weight of the liquid, w0, should be the difference, w2 − w1, but what is our uncertainty in w0? At first glance, we might expect the range of w0 to lie between the maximum and minimum values possible if we take our measured weights at their extremes:

(1) w0,max = (w2 + e2) − (w1 − e1)
(2) w0,min = (w2 − e2) − (w1 + e1)

making the uncertainty (error) in w0:

(3) e0 = e1 + e2.

However, in reality, this value of e0 is too pessimistic. Think of tossing two six-sided dice and adding the resulting numbers together.
With an individual die, we have a 1-in-6 chance (16.7%) of rolling each number, including the extremes of 1 and 6. However, when we add the two dice together, the extreme values of the sum become less likely: for the extreme calculated values of 12 or 2, the probability drops to (1/6)×(1/6) = 2.78%, while the central value of 7 remains at 16.7%. To illustrate this concept with our liquid-weight example, let's assume w1 = 1 g, w2 = 2 g, and the error associated with both measurements is ±0.1 g. For simplicity and illustration, assume the measurements are normally distributed and the error we are reporting is one standard deviation (this interval corresponds to a fairly low confidence level, but the same equations for error propagation are obtained with stricter levels).
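Under these assumptions, adding the errors in quadrature gives e0 = √(e1² + e2²) ≈ 0.14 g rather than the pessimistic 0.2 g, which a short simulation confirms (a sketch; the sample count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

e1 = e2 = 0.1                 # one-standard-deviation errors from the example
w1 = rng.normal(1.0, e1, n)   # empty container, w1 = 1 g
w2 = rng.normal(2.0, e2, n)   # container + liquid, w2 = 2 g

w0 = w2 - w1                  # weight of the liquid
e0_empirical = w0.std()
e0_quadrature = np.hypot(e1, e2)   # sqrt(e1^2 + e2^2) ~ 0.141
e0_pessimistic = e1 + e2           # 0.2, the naive worst-case bound

print(e0_empirical, e0_quadrature, e0_pessimistic)
```

The simulated spread of w0 matches the quadrature value, not the worst-case sum, for the same reason the dice extremes become rare: both errors must reach their extremes simultaneously for the bound e1 + e2 to be attained.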
Uncertainty in ecoinvent (Chris Mutel)

The ecoinvent database (version 2.2) characterizes inventory data using the following uncertainty distributions: undefined, triangular, normal, and lognormal. Here are the numbers of technosphere and biosphere exchanges, sorted by distribution type, in the ecoinvent 2.2 database:

Distribution    Technosphere    Biosphere
Undefined [1]           1629        37036
Triangular                 8            0
Normal                    27           16
Lognormal              37314        54032

[1] Includes exchanges labelled with the lognormal distribution, but with a standard deviation of zero, giving them no uncertainty.

It's clear that ecoinvent has an (almost unhealthy) love for the lognormal distribution. Why is the lognormal distribution so popular? Let's briefly review the normal and lognormal distributions.

A little bit of theory

First, we should begin by saying that normal means normal in the mathematical sense, i.e. orthogonal, and the distribution can also be called the Gaussian distribution. "Normal" turns out to be quite a fitting name, though, as this distribution, with its simple mathematical form and properties, underlies much of frequentist statistics. The normal distribution is also at the core of the central limit theorem. The normal distribution is defined by μ, the average, and σ, the standard deviation.

If a dataset is lognormally distributed, then the natural logarithm of that dataset is normally distributed. Naturally, the opposite works as well: if a dataset is normally distributed, then the exponential of that dataset is lognormally distributed. The lognormal distribution is therefore just a small mathematical variation on the normal distribution, and it is still defined by μ, the average, and σ, the standard deviation of the underlying normal distribution. The geometric mean of a lognormally-distributed dataset is defined as the exponential of the mean of the underlying normal distribution.
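The log/exp relationship and the geometric-mean identity described above can be demonstrated directly; the μ and σ values below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 1.0, 0.5          # parameters of the underlying normal (illustrative)

samples = rng.lognormal(mu, sigma, size=500_000)

# The log of a lognormal dataset is normal, recovering mu and sigma.
logs = np.log(samples)
print(logs.mean(), logs.std())     # close to mu and sigma

# The geometric mean equals exp(mean of the underlying normal).
geo_mean = np.exp(logs.mean())
print(geo_mean, np.exp(mu))        # close to each other
```

Note that exp(μ) is the median and geometric mean of the lognormal, not its arithmetic mean, which is the larger quantity exp(μ + σ²/2); this asymmetry is one reason the lognormal suits strictly positive, right-skewed inventory data.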