Adding Errors in Quadrature
Question (Physics Stack Exchange): How to combine measurement error with statistical error?

We have to measure the period of an oscillation. We are to take the time it takes for 50 oscillations, multiple times. I know that I will have a $\Delta t = 0.1 \, \mathrm s$ uncertainty because of my reaction time. If I now measure, say, 40, 41 and 39 seconds in three runs, I will also have a standard deviation of 1. What is the total error then? Do
I add them up, like so? $$\sqrt{1^2 + 0.1^2}$$ Or is it just the 1, and I discard the (systematic?) error of my reaction time? I wonder: if I measure a huge number of times, the standard deviation should become tiny compared to my reaction time. Is the lower bound 0, or is it my reaction time of 0.1?

Accepted answer: I think you're using an incorrect picture of statistics here, mixing the inputs and outputs. You are recording the result of a measurement, and the spread of these measurement values (we'll say they're normally distributed) is theoretically a consequence of the variation from all the different sources. That is, every time you do it, the length of the string might be a little different, and the air temperature might be a little different. Of course, all of these are fairly small, and I'm just listing them for the sake of argument. The point is that the ultimate standard deviation of the measured value $\sigma$ should be the result of all individual sources (which we will index by $i$), under the assumption that all sources of variation are independent: $\sigma^2 = \sum_i \sigma_i^2$.
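To make this concrete, here is a minimal Python sketch (not part of the original post) using the question's numbers. It treats the reaction time as a systematic term combined in quadrature with the statistical uncertainty of the mean, under the assumption that the two contributions are independent:

```python
import math
import statistics

# Illustrative numbers from the question: three timings of 50 oscillations,
# plus a 0.1 s systematic uncertainty from reaction time (assumption: treated
# as an independent source, not already contained in the observed spread).
times = [40.0, 41.0, 39.0]   # seconds
reaction_time = 0.1          # seconds (systematic)

n = len(times)
mean = statistics.mean(times)          # best value: 40.0 s
sigma = statistics.stdev(times)        # sample standard deviation: 1.0 s
sigma_mean = sigma / math.sqrt(n)      # statistical uncertainty of the mean

# Independent sources of uncertainty add in quadrature.
total = math.sqrt(sigma_mean**2 + reaction_time**2)

print(f"mean = {mean:.2f} s, statistical = {sigma_mean:.3f} s, total = {total:.3f} s")
```

Whether the reaction time belongs in this sum as an extra term, or is already part of the observed spread, is exactly the point the accepted answer raises; the sketch shows only the arithmetic of one common treatment.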
We often combine directly measured quantities to obtain a result, for example dividing a distance by a time to get a speed, or adding two lengths to get a total length. Now that we have learned how to determine the error in the directly measured quantities, we need to learn how these errors propagate to an error in the result. We assume that the two directly measured quantities are $X$ and $Y$, with errors $\Delta X$ and $\Delta Y$ respectively. The measurements $X$ and $Y$ must be independent of each other. The fractional error is the value of the error divided by the value of the quantity: $\Delta X / X$. The fractional error multiplied by
100 is the percentage error. Everything in this section assumes that the error is "small" compared to the value itself, i.e. that the fractional error is much less than one. For many situations, we can find the error in the result $Z$ using three simple rules:

Rule 1. If $Z = X + Y$ or $Z = X - Y$, then $$\Delta Z = \sqrt{(\Delta X)^2 + (\Delta Y)^2}.$$ In words, this says that the error in the result of an addition or subtraction is the square root of the sum of the squares of the errors in the quantities being added or subtracted. This mathematical procedure, also used in Pythagoras' theorem about right triangles, is called quadrature.

Rule 2. If $Z = XY$ or $Z = X/Y$, then $$\frac{\Delta Z}{Z} = \sqrt{\left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2}.$$ In this case also the errors are combined in quadrature, but this time it is the fractional errors, i.e. the error in the quantity divided by the value of the quantity, that are combined. Sometimes the fractional error is called the relative error. The above form emphasises the similarity with Rule 1. However, in order to calculate the value of $\Delta Z$ you would use the following form: $$\Delta Z = Z\sqrt{\left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2}.$$

Rule 3. If $Z = X^n$, then $$\frac{\Delta Z}{Z} = n\,\frac{\Delta X}{X},$$ or equivalently $$\Delta Z = n\,X^{n-1}\,\Delta X.$$ For the square of a quantity, $X^2$, you might reason that this is just $X$ times $X$ and use Rule 2. This is wrong because Rules 1 and 2 are only for when the two quantities being combined, $X$ and $Y$, are independent of each other. Here there is only one measurement of one quantity.

Question 9.1. Does the first form of Rule 3 look familiar to you? What does it remind you of? (Hint: change the $\Delta$'s to $d$'s.)

Question 9.2. A student measures three lengths a, b and c in cm and a time t in seconds: a = 50 ± 4, b = 20 ± 3, c = 70 ± 3, t = 2.1 ± 0.1. Calculate a + b, a + b + c, a / t, and (a + c) / t.

Question 9.3. Calculate (1.23 ± 0.03) + $\pi$. ($\pi$ is the irrational number 3.14159265…)

Question 9.4. Calculate (1.23 ± 0.03) × $\pi$.

Exercise 9.1. In Exercise 6.1 you measured the thickness of a hardcover book.
What is the volume of that book? What is the error in that estimated volume?

You may have noticed a useful property of quadrature while doing the above questions. Say one quantity has an error of 2 and the other quantity has an error of 1. The combined error is $\sqrt{2^2 + 1^2} \approx 2.2$, only slightly larger than 2: in quadrature, the larger error dominates and the smaller one contributes comparatively little.
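The three rules can be sketched as small Python helpers. This is an illustrative implementation, not part of the original text, exercised here on the numbers of Question 9.2 (a = 50 ± 4, b = 20 ± 3, t = 2.1 ± 0.1):

```python
import math

def add_sub(dx, dy):
    """Rule 1: Z = X + Y or X - Y, so dZ = sqrt(dX^2 + dY^2)."""
    return math.sqrt(dx**2 + dy**2)

def mul_div(z, x, dx, y, dy):
    """Rule 2: Z = X*Y or X/Y; fractional errors add in quadrature."""
    return abs(z) * math.sqrt((dx / x)**2 + (dy / y)**2)

def power(x, dx, n):
    """Rule 3: Z = X^n, so dZ/Z = n * dX/X."""
    z = x**n
    return z, abs(z * n * dx / x)

# Measurements from Question 9.2
a, da = 50, 4
b, db = 20, 3
t, dt = 2.1, 0.1

s = a + b
ds = add_sub(da, db)            # a + b = 70 ± 5
q = a / t
dq = mul_div(q, a, da, t, dt)   # a / t via Rule 2

print(f"a + b = {s} ± {ds:.1f}")
print(f"a / t = {q:.1f} ± {dq:.1f}")
```

Note how `ds` is 5, noticeably less than the 7 that naive addition of the errors 4 and 3 would give; that is the quadrature property described above.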
Random fluctuations can come from the measuring instrument itself: noise in the sensor electronics, for example, can lead to a rapid but small fluctuation in motion detector readings. These fluctuations occur even when the motion detector is measuring the distance to a stationary object. Random fluctuations can also be a characteristic of the quantity being measured. For example, if we use a meter stick to measure the landing positions of a series of projectiles shot from a spring-loaded launcher, we see significant random variations which clearly do not arise from the limitations of the meter stick. Instead, we suspect that the launch velocity given to projectiles by the launcher is subject to small random variations.

Truly random fluctuations average to zero, and so the way to remove them is to average a large number of measurements:

$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (1)$$

The average value approaches the "true value" as the number of measurements in the average approaches infinity. Finding the "true value" is impractical, so we settle for the "best value" given by the average. The average value is also called the mean value.

Random fluctuations are described by the normal distribution, or Gaussian distribution, or the "bell curve." The uncertainty in the "best value" of a large collection of normally distributed measurements can be calculated using the standard deviation

$$\sigma = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2} \qquad (2)$$

which describes the width of the distribution. More precisely, about 68% of a normal distribution falls within $\pm\sigma$ of the average value. The standard deviation is the uncertainty in a single measurement in the distribution. Rather than doing this calculation "by hand," I recommend using the STDEV() function of your spreadsheet.

The uncertainty in the average of a large number of measurements is less than $\sigma$. This follows from the idea that the more measurements we make, the closer the average value comes to the "true value." The standard deviation of the mean is given by

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}} \qquad (3)$$

We report $\sigma_{\bar{x}}$ as the uncertainty in $\bar{x}$.
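Equations (1) through (3) can be checked with a few lines of Python. The readings below are invented for illustration, and `statistics.stdev` plays the role of the spreadsheet STDEV() function:

```python
import math
import statistics

readings = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]  # made-up repeated measurements

n = len(readings)
best = statistics.mean(readings)        # equation (1): the "best value"
sigma = statistics.stdev(readings)      # equation (2): sample standard deviation
sigma_mean = sigma / math.sqrt(n)       # equation (3): std. dev. of the mean

print(f"best value = {best:.3f} ± {sigma_mean:.3f}")
print(f"single-measurement uncertainty = {sigma:.3f}")
```

As equation (3) predicts, `sigma_mean` shrinks like $1/\sqrt{N}$ as more readings are averaged, while `sigma`, the spread of individual readings, does not.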
See the sample write-up in Appendix A for an example of an analysis of normally distributed data.