Derivative Rule Error Propagation
This chapter assumes facility with: (1) functions of several variables; (2) evaluation of partial derivatives, and the chain rules of differentiation; and (3) manipulation of summations in an algebraic context. At this mathematical level our presentation
can be briefer. We can dispense with the tedious explanations and elaborations of previous chapters.

6.2 THE CHAIN RULE AND DETERMINATE ERRORS

If a result R = R(x,y,z) is calculated from a number of data quantities x, y and z, then the relation

[6-1]    dR = (∂R/∂x) dx + (∂R/∂y) dy + (∂R/∂z) dz

holds.
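To make Eq. 6-1 concrete, here is a minimal sketch (not part of the original text): it evaluates the total differential for an assumed example result R(x, y, z) = x·y/z, estimating the partial derivatives by central differences. All numeric values, and the function R itself, are invented for illustration.

    # Sketch (assumed example): evaluating Eq. 6-1 numerically for R(x, y, z) = x*y/z.
    # The data values and the small changes dx, dy, dz are invented for illustration.

    def R(x, y, z):
        return x * y / z

    def partial(f, args, i, h=1e-6):
        # Central-difference estimate of the partial derivative of f
        # with respect to its i-th argument.
        up = list(args); up[i] += h
        down = list(args); down[i] -= h
        return (f(*up) - f(*down)) / (2 * h)

    x, y, z = 3.0, 5.0, 2.0          # measured data (assumed values)
    dx, dy, dz = 0.03, -0.05, 0.02   # small signed changes in the data (assumed values)

    # Eq. 6-1: dR = (dR/dx)dx + (dR/dy)dy + (dR/dz)dz
    dR = (partial(R, (x, y, z), 0) * dx
          + partial(R, (x, y, z), 1) * dy
          + partial(R, (x, y, z), 2) * dz)

    exact_change = R(x + dx, y + dy, z + dz) - R(x, y, z)
    print("dR from Eq. 6-1:", dR)
    print("exact change in R:", exact_change)  # close, since higher-order terms are small

The two printed values agree to within the neglected higher-order terms, which is the point of the approximation discussed next.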
This is one of the "chain rules" of calculus. This equation has as many terms as there are variables. Then, if the fractional errors are small, the differentials dR, dx, dy and dz may be replaced by the absolute errors ΔR, Δx, Δy and Δz:

[6-2]    ΔR ≈ (∂R/∂x) Δx + (∂R/∂y) Δy + (∂R/∂z) Δz

Strictly this is no longer an equality, but an approximation to ΔR, since the higher-order terms in the Taylor expansion have been neglected. So long as the errors are of the order of a few percent or less, this will not matter. This equation is now an error propagation equation. Finally, divide equation (6-2) by R:

[6-3]    ΔR/R = (x/R)(∂R/∂x)(Δx/x) + (y/R)(∂R/∂y)(Δy/y) + (z/R)(∂R/∂z)(Δz/z)

The factors of the form Δx/x, Δy/y, etc. are relative (fractional) errors. This equation shows how the errors in the result depend on the errors in the data. Eq. 6-2 and 6-3 are called the standard form error equations. They are also called determinate error equations, because they are strictly valid for determinate errors (not indeterminate errors). [We'll get to indeterminate errors soon.] The coefficients of the fractional errors in Eq. 6-3 are of the form (x/R)(∂R/∂x). These play the very important role of "weighting" factors in the various error terms. At this point numeric values of the relative errors could be substituted into this equation to calculate ΔR/R.

The "error" in a data quantity is the uncertainty, or possible "change," in the value of that quantity. Results are obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of a result. We say that "errors in the data propagate through the calculations to produce error in the result."

3.2 MAXIMUM ERROR

We first consider how data errors propagate through calculations to affect error limits (or maximum error) of results. It's easiest to first consider determinate errors, which have explicit sign. This leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to indeterminate errors. The underlying mathematics is that of "finite differences," an algebra for dealing with numbers which have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.

Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:

[3-1]    A + ΔA  and  B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols "ΔA" and "ΔB." The result of adding A and B is expressed by the equation R = A + B. When errors are explicitly included, it is written:

(A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)

So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

[3-2]    R + ΔR = (A + B) + (ΔA + ΔB)

The error in R is ΔR = ΔA + ΔB. We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the corresponding rule for the difference of two quantities.
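As a small check of the sum rule, here is a sketch (not from the original text) that carries two assumed data values and their signed determinate errors through the addition and compares the resulting error with ΔA + ΔB. The numbers are invented for illustration.

    # Sketch (assumed example): checking ΔR = ΔA + ΔB for the sum R = A + B, Eq. 3-2.

    A, dA = 10.0, 0.2     # datum A with its signed determinate error ΔA (assumed values)
    B, dB = 4.0, -0.1     # datum B with its signed determinate error ΔB (assumed values)

    R = A + B                             # nominal result
    R_with_errors = (A + dA) + (B + dB)   # result with the errors carried through

    dR = R_with_errors - R                # error in the result
    print("ΔR      =", dR)                # about 0.1
    print("ΔA + ΔB =", dA + dB)           # about 0.1, matching ΔR up to floating-point rounding

Because the errors keep their signs, a negative ΔB partly cancels a positive ΔA here, exactly as the determinate-error rule predicts.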