Dependent Error Propagation
Error propagation with dependent variables (Physics Stack Exchange)

Question (lachycharts, Oct 26 '15): Based on microdosimetry theory, I am trying to figure out error propagation for a number of quantities that are produced from radiation spectra, where each channel with $f(y)$ counts has error $\sqrt{f(y)}$. I have a function called the dose-weighted lineal energy distribution:

$$d(y) = \frac{y f(y)}{y_F} = \frac{y f(y)}{\int y f(y)\,dy}$$

I have calculated the constant $y_F \pm \Delta y_F$ using the measured quantity $f(y) \pm \sqrt{f(y)}$, but how do I find the uncertainty in the $d(y)$ distribution when these quantities are not independent? Any help would be greatly appreciated. Note: $\Delta y \approx 0$, so this only concerns $f(y)$ and $y_F$.

Comments:

ACuriousMind: Do you know how to propagate errors in general? If yes, what is your precise question about that, or why doesn't it work in this case?

lachycharts: Thanks for your reply. I have determined the errors in $y_F$ using the general method, yes. However, at this point in my process, this is the first time two quantities have not been independent. Unless my understanding of this word is incorrect, the general method cannot be used in this case because they are dependent, right?

ACuriousMind: The simplification doesn't work, but the general method still works. The covariances are then non-zero, however.

lachycharts: Ah, gotcha. I haven't ventured into stuff like this before and my maths is a little fuzzy these days. Would you happen to know where to find a more detailed description of this general method? Thanks.

lachycharts: Actually, scrap that. About the covariance: covariance is "a measure of how much two random variables change together", although $y_F$ has uncertainty based on …
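The "general method with covariances" that the comments point to can also be checked numerically. Below is a minimal Monte Carlo sketch in Python/NumPy (the spectrum values are hypothetical illustration data, not from the question): each trial fluctuates the channel counts by their Poisson counting errors and recomputes both $f(y)$ and $y_F$ from the same draw, so the correlation between the numerator and the normalization is carried through automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measured spectrum: counts f(y) in channels centered at lineal
# energies y, with counting error sqrt(f(y)) per channel (Poisson statistics).
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([400.0, 900.0, 1600.0, 900.0, 400.0])
dy = 1.0  # channel width, so the integral becomes a sum times dy

# Fluctuate every channel independently; each row is one simulated spectrum.
trials = rng.poisson(f, size=(100_000, f.size)).astype(float)

# y_F = integral of y f(y) dy, recomputed from the SAME draw each time --
# this is exactly what makes f(y) and y_F dependent.
yF = (trials * y * dy).sum(axis=1)

# d(y) = y f(y) / y_F for every trial, then its spread per channel.
d = (y * trials) / yF[:, None]
d_err = d.std(axis=0)  # uncertainty in d(y), covariance included
```

A channel-by-channel formula that treated $f(y)$ and $y_F$ as independent would typically overestimate these errors, because an upward fluctuation in a channel also pulls $y_F$ up, and the two partially cancel in the ratio.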
Error Propagation (Projects Lab, University of Utah)

Very often we use our physical measurements as a means to some computational end. We may, for example, use measurements of mass and velocity to calculate kinetic energy, or temperature and pressure to calculate molar volume. While we should have a fair grasp on the uncertainty inherent in our physical measurements, we are also interested in bounding the uncertainty in those calculated values. Methods of error propagation allow us to translate the error in the independent variables into the error in the dependent variables of our functions.

Contents: Introduction; Error Propagation for Arbitrary Functions; Analytical Method for Error Propagation; Numerical Method for Error Propagation; Monte Carlo Method for Error Propagation; Error Propagation Example

Introduction:

Let's take a very simple example (this example will illustrate the difference of two variables, but the results for error propagation in addition are the same). Say we wanted to know the weight of a liquid in a container. We take the weight of the empty container, w1, then fill it with our liquid and take the weight again, w2. Each weight, w1 and w2, comes with a range of uncertainty, ±e1 and ±e2 respectively (always at some confidence level). In this example, these errors depend on the precision of the scale we used. Of course, the weight of the liquid, w0, should be the difference, w2 − w1, but what is our uncertainty in w0? At first glance, we might expect the range of w0 to lie between the maximum and minimum values possible if we take our measured weights at their extremes:

(1) w0,max = (w2 + e2) − (w1 − e1) = w0 + (e1 + e2)
(2) w0,min = (w2 − e2) − (w1 + e1) = w0 − (e1 + e2)

making the uncertainty (error) in w0:

(3) e0 = e1 + e2

However, in reality, this value of e0 is too pessimistic. Think of tossing two six-sided dice and adding the resulting numbers together. With an individual die, we have a 1 in 6 chance (16.7%) of getting each number, including the extremes of 1 and 6. However, when we add the dice together, the numbers at the extremes of the calculated value become less likely. For the extreme calculated values of 12 or 2, the probability drops to (1/6)×(1/6) = 2.78%, while the mean value of 7 remains at 16.7%. To illustrate this concept with our liquid-weight example, let's assume w1 = 1 g, w2 = 2 g, and the error associated with both measurements is ±0.1 g.
For simplicity and illustration, assume the measurements are normally distributed and the error we are reporting corresponds to one standard deviation.
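This setup is easy to check with a quick Monte Carlo simulation (a sketch in Python/NumPy, assuming each ±0.1 g error is one standard deviation of a normal distribution, as above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Simulated repeated weighings: w1 = 1 ± 0.1 g, w2 = 2 ± 0.1 g,
# treating each ±0.1 g as one standard deviation of a normal distribution.
w1 = rng.normal(1.0, 0.1, n)  # empty container
w2 = rng.normal(2.0, 0.1, n)  # container plus liquid

w0 = w2 - w1  # liquid weight in each simulated trial
print(w0.std())  # ≈ 0.141 g = sqrt(0.1**2 + 0.1**2), not the pessimistic 0.2 g
```

The spread of the simulated differences matches the quadrature sum of the individual errors, not their plain sum, which is the dice intuition made quantitative.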
Combining Measurements and Their Uncertainties (RIT)

In an experiment we often measure two or more quantities, each with their individual uncertainties, and then combine the information from these quantities in order to come up with a final result of our experiment. How can you state your answer for the combined result of these measurements and their uncertainties scientifically? The answer to this fairly common question depends on how the individual measurements are combined in the result. We will treat each case separately.

Addition of measured quantities: If you have measured values for the quantities X, Y, and Z, with uncertainties dX, dY, and dZ, and your final result, R, is the sum or difference of these quantities, then the uncertainty dR is:

$$dR = dX + dY + dZ$$
$$dR = \left[(dX)^2 + (dY)^2 + (dZ)^2\right]^{1/2}$$

Here the upper equation is an approximation that can also serve as an upper bound for the error; the lower equation is the exact result for independent random uncertainties. Please note that the rule is the same for addition and subtraction of quantities.

Example: Suppose we have measured the starting position as x1 = 9.3 ± 0.2 m and the finishing position as x2 = 14.4 ± 0.3 m. Then the displacement is Δx = x2 − x1 = 14.4 m − 9.3 m = 5.1 m, and the error in the displacement is (0.2² + 0.3²)^{1/2} m = 0.36 m.

Multiplication of measured quantities: In the same way as for sums and differences, we can also state the result for the case of multiplication and division:

$$\frac{dR}{|R|} = \frac{dX}{|X|} + \frac{dY}{|Y|} + \frac{dZ}{|Z|}$$
$$\frac{dR}{|R|} = \left[\left(\frac{dX}{X}\right)^2 + \left(\frac{dY}{Y}\right)^2 + \left(\frac{dZ}{Z}\right)^2\right]^{1/2}$$

Again the upper line is an approximation and the lower line is the exact result for independent random uncertainties in the individual variables. And again please note that for the purpose of error calculation there is no difference between multiplication and division. Example: We have measured a displacement of x = 5.1 ± 0.4 m during a time of t = 0.4 ± 0.1 s. What is the average velocity and the error in the average velocity?
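The quadrature rule for sums and differences is a one-liner in code. Here is a small sketch (Python; the helper name err_sum is ours, not from the text) that reproduces the displacement example:

```python
import math

def err_sum(*errs):
    """Uncertainty of a sum or difference of independent quantities:
    the absolute errors add in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

# x1 = 9.3 ± 0.2 m, x2 = 14.4 ± 0.3 m  ->  displacement 14.4 - 9.3 = 5.1 m
print(err_sum(0.2, 0.3))  # ≈ 0.36 m
```

Because the errors combine in quadrature, the same call handles any number of added or subtracted terms.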
v = x / t = 5.1 m / 0.4 s = 12.75 m/s, and the uncertainty in the velocity is:

$$dv = |v| \left[\left(\frac{dx}{x}\right)^2 + \left(\frac{dt}{t}\right)^2\right]^{1/2} = 12.75\ \text{m/s} \times \left[\left(\frac{0.4}{5.1}\right)^2 + \left(\frac{0.1}{0.4}\right)^2\right]^{1/2} = 3.34\ \text{m/s}$$

Multiplication by a constant: What if you have measured the uncertainty in an observable X, and you need to multiply it by a constant c that is known exactly? What is the error then? This is easy: just multiply the error in X by the absolute value of the constant, and this gives you the error in R:

$$dR = |c|\, dX$$

If you compare this to the above rule for multiplication …
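The product/quotient rule works the same way with relative errors. A small sketch (Python; the helper name err_ratio is ours, not from the text) reproducing the velocity example:

```python
import math

def err_ratio(value, *pairs):
    """Uncertainty of a product or quotient of independent quantities:
    the relative errors add in quadrature. Each pair is (measurement, error)."""
    rel = math.sqrt(sum((dx / x) ** 2 for x, dx in pairs))
    return abs(value) * rel

# x = 5.1 ± 0.4 m, t = 0.4 ± 0.1 s  ->  v = x / t
v = 5.1 / 0.4  # 12.75 m/s
print(err_ratio(v, (5.1, 0.4), (0.4, 0.1)))  # ≈ 3.34 m/s
```

Note that the same function covers multiplication and division alike, since for error calculation there is no difference between the two.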