Error Propagation Through Average
Error propagation with averages and standard deviation

Physics Forums > Mathematics > Set Theory, Logic, Probability, Statistics

May 25, 2012, #1 (rano): I was wondering if someone could please help me understand a simple problem of error propagation: going from multiple measurements with errors to an average that incorporates those errors. I have looked at several error-propagation webpages (e.g. UC physics or UMaryland physics) but have yet to find exactly what I am looking for.

I would like to illustrate my question with some example data. Suppose we want to know the mean ± standard deviation (mean ± SD) of the mass of 3 rocks. We weigh these rocks on a balance and get: Rock 1: 50 g, Rock 2: 10 g, Rock 3: 5 g. So we would say that the mean ± SD of these rocks is 21.6 ± 24.6 g.

But now let's say we weigh each rock 3 times, so there is some error associated with the mass of each rock. Say the mean ± SD of each rock's mass is now: Rock 1: 50 ± 2 g, Rock 2: 10 ± 1 g, Rock 3: 5 ± 1 g. How would we describe the mean ± SD of the three rocks now that there is some uncertainty in their masses? Would it still be 21.6 ± 24.6 g? Some error-propagation websites suggest taking the square root of the sum of the squared errors, which here gives √(2² + 1² + 1²) ≈ 2.45, then dividing by N (N = 3 here). But in that case the mean ± SD would be only 21.6 ± 2.45 g, which is clearly too low. I think this should be a simple problem to analyze, but I have yet to find a clear description of the appropriate equations to use. If my question is not clear please let me know; any insight would be much appreciated.
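The two numbers being compared in the question measure different things. A minimal sketch (using the values from the post; the code is illustrative, not from the thread) separates them: the sample SD describes the spread of the rock population itself, while √(Σσᵢ²)/N is the standard propagated uncertainty of the *mean* due to the balance alone. Note that √6 ≈ 2.45 is that sum *before* dividing by N:

```python
import statistics

# Rock masses (g) and per-rock measurement uncertainties (g), from the post
masses = [50.0, 10.0, 5.0]
sigmas = [2.0, 1.0, 1.0]

# Spread of the rock population itself: ordinary mean and sample SD
mean = statistics.mean(masses)       # ~21.7 g
spread_sd = statistics.stdev(masses) # ~24.7 g

# Uncertainty of the mean coming only from the balance:
# for an average of independent values, sigma_mean = sqrt(sum(sigma_i^2)) / N
meas_sd = (sum(s**2 for s in sigmas) ** 0.5) / len(masses)  # ~0.82 g

print(f"{mean:.1f} ± {spread_sd:.1f} g  (spread of the rocks)")
print(f"measurement contribution to the mean: ±{meas_sd:.2f} g")
```

Neither number alone answers the question "mean ± SD of the three rocks with uncertain masses"; the spread term dominates here, which is why 2.45 g (let alone 0.82 g) looks far too small.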
Cross Validated: Propagation of uncertainty through an average

Question (MARCO HOWARD, Jan 31 '13): I have a set of distance measurements that are all accurate to ±0.01 m: {1.00, 2.00, 3.00}. We can obtain the distance moved between measurements by taking differences, {2 − 1, 3 − 2}; it is trivial to see we moved 1 m each time. My question is this: if you want to know the average distance moved, how do you carry the ±0.01 m through the average? I would like to report the average difference ± the uncertainty. How do I calculate the uncertainty? (My real data are messier than this.)

Answer: There are two issues here. The first is the general question of how to use known measurement uncertainty when estimating the mean and variance. The second is the specific issue arising from the fact that you are taking differences. In general, one might have to average a number of measurements, each with known standard error σ, in which case the total variance is the sum of the sample variance and the measurement variance.
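The difference-specific issue for the question's data can be sketched as follows (a minimal illustration, not the answerer's own code; it assumes the three position readings are independent, each with σ = 0.01 m):

```python
import math

sigma = 0.01            # each position reading is ±0.01 m
x = [1.00, 2.00, 3.00]  # measured positions (m)

diffs = [b - a for a, b in zip(x, x[1:])]  # [1.0, 1.0]
mean_diff = sum(diffs) / len(diffs)

# A single difference x[i+1] - x[i] of two independent readings has
# uncertainty sqrt(sigma^2 + sigma^2) = sqrt(2)*sigma.
sigma_diff = math.sqrt(2) * sigma

# But the average of consecutive differences telescopes to
# (x[-1] - x[0]) / (n - 1): only the first and last readings matter,
# so the middle readings contribute no uncertainty to the average.
n = len(x)
sigma_mean_diff = math.sqrt(2) * sigma / (n - 1)

print(f"{mean_diff:.2f} ± {sigma_mean_diff:.4f} m")
```

The telescoping is the key point: treating the differences as independent would double-count the shared middle reading.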
This is analogous to ANOVA, where the total variance is the sum of the between-groups and within-groups variance. Imagine each measurement was actually a little subsample of repeated measurements; then this is exactly what you would have. To do this more rigorously, we assume the following generative model: $X = Z + \epsilon$, where $Z \sim N(\mu, \sigma_Z^2)$ is our "true value" and $\epsilon \sim N(0, \sigma^2)$ is the measurement error.
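A quick simulation (with made-up parameter values, not numbers from the thread) checks that decomposition: with σ_Z = 3 and measurement error σ = 1, the total variance of X should come out near σ_Z² + σ² = 10:

```python
import random
import statistics

random.seed(0)
mu, sigma_Z, sigma_eps = 10.0, 3.0, 1.0  # hypothetical values

# Generative model from the answer: X = Z + eps
xs = [random.gauss(mu, sigma_Z) + random.gauss(0.0, sigma_eps)
      for _ in range(100_000)]

total_var = statistics.pvariance(xs)
# Independent normals add in variance: Var(X) = sigma_Z^2 + sigma_eps^2 = 10
print(total_var)  # close to 10 (exact value depends on the RNG draw)
```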
From https://www.lhup.edu/~dsimanek/scenario/errorman/propagat.htm:

Results are obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of a result. We say that "errors in the data propagate through the calculations to produce error in the result."

3.2 MAXIMUM ERROR

We first consider how data errors propagate through calculations to affect the error limits (or maximum error) of results. It is easiest to first consider determinate errors, which have explicit sign; this leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to indeterminate errors. The underlying mathematics is that of "finite differences," an algebra for dealing with numbers which have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.

Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:

[3-1] A + ΔA and B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols ΔA and ΔB. The result of adding A and B is expressed by the equation R = A + B. When errors are explicitly included, it is written: (A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB). So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

[3-2] R + ΔR = (A + B) + (ΔA + ΔB)

The error in R is ΔR = ΔA + ΔB. We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities; in that case the error in the result is the difference in the errors.
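Rule [3-2] can be checked in two lines (the data values and signed errors here are made up for illustration):

```python
# Determinate (signed) errors through a sum, per rule [3-2]: dR = dA + dB.
A, dA = 5.0, 0.2   # hypothetical data value and signed error
B, dB = 3.0, -0.1  # note the negative sign is "in" the error

R = A + B
R_perturbed = (A + dA) + (B + dB)
dR = R_perturbed - R
print(dR)  # ~0.1, i.e. dA + dB with signs included
```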
Summarizing: Sum and difference rule. When two quantities are added (or subtracted), their determinate errors add (or subtract).

Now consider multiplication: R = AB. With errors explicitly included:

[3-3] R + ΔR = (A + ΔA)(B + ΔB) = AB + (ΔA)B + A(ΔB) + (ΔA)(ΔB), or ΔR = (ΔA)B + A(ΔB) + (ΔA)(ΔB)

This doesn't look like a simple rule. However, when we express the errors in relative form, things look better. When the error ΔA is small relative to A and ΔB is small relative to B, the cross term (ΔA)(ΔB) may be neglected, giving the relative form ΔR/R ≈ ΔA/A + ΔB/B.
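The multiplication rule [3-3] in the same style (hypothetical 1% errors): the exact relative error and the relative-form approximation differ only by the cross term (ΔA/A)(ΔB/B):

```python
# Determinate errors through a product, per rule [3-3].
A, dA = 100.0, 1.0  # hypothetical: 1% relative error
B, dB = 50.0, 0.5   # hypothetical: 1% relative error

R = A * B
dR_exact = (A + dA) * (B + dB) - R  # = (dA)*B + A*(dB) + (dA)*(dB)
rel_approx = dA / A + dB / B        # dR/R ~ dA/A + dB/B

print(dR_exact / R)  # ~0.0201: exact relative error
print(rel_approx)    # ~0.02: approximation, off only by (dA/A)*(dB/B)
```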