Error Summation
"change" in the value of that quantity. Results are is obtained by mathematical operations on the data, and small changes in any data quantity can affect the value of propagation of error division a result. We say that "errors in the data propagate through the calculations to produce error propagation formula physics error in the result." 3.2 MAXIMUM ERROR We first consider how data errors propagate through calculations to affect error limits (or maximum error propagation square root error) of results. It's easiest to first consider determinate errors, which have explicit sign. This leads to useful rules for error propagation. Then we'll modify and extend the rules to other error measures and also to
indeterminate errors. The underlying mathematics is that of "finite differences," an algebra for dealing with numbers which have relatively small variations imposed upon them. The finite differences we are interested in are variations from "true values" caused by experimental errors.

Consider a result, R, calculated from the sum of two data quantities A and B. For this discussion we'll use ΔA and ΔB to represent the errors in A and B respectively. The data quantities are written to show the errors explicitly:

[3-1]    A + ΔA   and   B + ΔB

We allow the possibility that ΔA and ΔB may be either positive or negative, the signs being "in" the symbols ΔA and ΔB. The result of adding A and B is expressed by the equation R = A + B. When errors are explicitly included, it is written:

(A + ΔA) + (B + ΔB) = (A + B) + (ΔA + ΔB)

So the result, with its error ΔR explicitly shown in the form R + ΔR, is:

[3-2]    R + ΔR = (A + B) + (ΔA + ΔB)

The error in R is ΔR = ΔA + ΔB. We conclude that the error in the sum of two quantities is the sum of the errors in those quantities. You can easily work out the case where the result is calculated from the difference of two quantities; in that case the error in the result is the difference of the errors. Summarizing:

Sum and difference rule. When two quantities are added (or subtracted), their determinate errors add (or subtract).

Now consider multiplication: R = AB. With errors explicitly included:

[3-3]    R + ΔR = (A + ΔA)(B + ΔB) = AB + (ΔA)B + A(ΔB) + (ΔA)(ΔB)

Subtracting R = AB gives the error in the product: ΔR = (ΔA)B + A(ΔB) + (ΔA)(ΔB).
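As a quick numerical check of these rules, here is a small Python sketch; the values of A, B and their errors are illustrative numbers, not taken from the text:

# Determinate errors carry explicit signs.
A, dA = 12.0, 0.2
B, dB = 5.0, -0.1

# Sum rule: the error in R = A + B is dR = dA + dB.
R = A + B
R_with_errors = (A + dA) + (B + dB)
print(R_with_errors - R)            # ~0.1, i.e. dA + dB

# Product rule from [3-3]: R = AB, dR = (dA)B + A(dB) + (dA)(dB).
R = A * B
R_with_errors = (A + dA) * (B + dB)
print(R_with_errors - R)            # ~-0.22, the exact error in the product
print(dA * B + A * dB + dA * dB)    # matches the expansion: -0.22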
Kahan summation

The Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors). In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
The algorithm is attributed to William Kahan.[3] Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although first documented around the same time[4]), and delta-sigma modulation[5] (integrating, not just summing, the error).

The algorithm

In pseudocode, the algorithm is:

function KahanSum(input)
    var sum = 0.0
    var c = 0.0                  // A running compensation for lost low-order bits.
    for i = 1 to input.length do
        var y = input[i] - c     // So far, so good: c is zero.
        var t = sum + y          // Alas, sum is big, y small, so low-order digits of y are lost.
        c = (t - sum) - y        // (t - sum) cancels the high-order part of y; subtracting y recovers the negative of its low part.
        sum = t                  // Algebraically, c should always be zero. Beware overly-aggressive optimizing compilers!
    next i                       // Next time around, the lost low part will be added to y in a fresh attempt.
    return sum

Worked example

This example will be given in decimal. Computers typically use binary arithmetic, but the principle being illustrated is the same. Suppose we are using six-digit decimal floating-point arithmetic, sum has attained the value 10000.0, and the next two values of input(i) are 3.14159 and 2.71828. The exact result is 10005.85987, which rounds to 10005.9. With plain summation, each incoming value would be aligned with sum, and many low-order digits would be lost (by truncation or rounding). The first result, after rounding, would be 10003.1. The second result would be 10005.81828 before rounding and 10005.8 after rounding, which is not correct. With compensated summation, however, we get the correctly rounded result of 10005.9. Assume that c has the initial value zero.
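For comparison with the pseudocode, here is a direct Python translation, together with a binary float64 demonstration of the same effect (the data set of 1e16 followed by one thousand 1.0s is an illustrative choice, not from the article):

def kahan_sum(values):
    # Compensated (Kahan) summation, following the pseudocode above.
    total = 0.0
    c = 0.0                    # running compensation for lost low-order bits
    for x in values:
        y = x - c              # apply the compensation to the next input
        t = total + y          # low-order digits of y may be lost here
        c = (t - total) - y    # recover the (negated) lost low-order part
        total = t
    return total

# One large value followed by many small ones: in float64, each naive
# addition of 1.0 to 1e16 falls below the rounding threshold and is lost.
data = [1e16] + [1.0] * 1000
print(sum(data) - 1e16)        # 0.0    -- naive summation loses every 1.0
print(kahan_sum(data) - 1e16)  # 1000.0 -- compensated summation keeps them

When exactness matters more than speed, Python's math.fsum offers a correctly rounded alternative.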
Pairwise summation

Pairwise summation substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques, such as Kahan summation, that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost: it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_1, …, x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide-and-conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, …, n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3] Pairwise summation is the default summation algorithm in NumPy[4] and the Julia technical-computing language,[5] where in both cases it was found to have comparable speed to naive summation (thanks to the use of a large base case).

The algorithm

In pseudocode, the pairwise summation algorithm for an array x of length n > 0 can be written:

s = pairwise(x[1…n])
    if n ≤ N              base case: naive summation for a sufficiently small array
        s = x[1]
        for i = 2 to n
            s = s + x[i]
    else                  divide and conquer: recursively sum two halves of the array
        m = floor(n / 2)
        s = pairwise(x[1…m]) + pairwise(x[m+1…n])
    endif
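A minimal Python rendering of this pseudocode, with an error comparison against naive summation; the base-case size of 128 and the 0.1 test data are arbitrary illustrative choices:

import math

def pairwise_sum(x, base=128):
    # Recursive pairwise summation; `base` plays the role of N above.
    n = len(x)
    if n <= base:                    # base case: naive summation
        s = 0.0
        for v in x:
            s += v
        return s
    m = n // 2                       # divide and conquer on two halves
    return pairwise_sum(x[:m], base) + pairwise_sum(x[m:], base)

data = [0.1] * (1 << 20)             # 2**20 copies of 0.1
exact = math.fsum(data)              # correctly rounded reference sum
print(abs(sum(data) - exact))        # naive: error grows roughly with n
print(abs(pairwise_sum(data) - exact))  # pairwise: error grows like log n

On this input the pairwise error is several orders of magnitude smaller than the naive one, consistent with the O(ε log n) versus O(εn) bounds above.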
A new error-free floating-point summation algorithm
V.Y. Pan, B. Murphy, G. Qian, R.E. Rosholt (Lehman College and the Ph.D. Program in Computer Science, The City University of New York). Computers & Mathematics with Applications, Volume 57, Issue 4, February 2009, Pages 560–564. Received 7 August 2007; accepted 18 September 2007; available online 7 January 2009.

Abstract. Summation is a basic operation in scientific computing; furthermore, division-free arithmetic computations can be boiled down to summation. We propose a new summation algorithm, which consists of double-precision floating-point operations and outputs the error-free sums. The computational time is proportional to the condition number of the problem, is low according to both our estimates and extensive experiments, and further decreases for producing faithful rounding of the sum, rather than its error-free value.

Keywords: Error-free summation; Floating-point additions

1. Introduction

Summation is a basic operation in scientific computing. It has been extensively studied, e.g., in [1], [2], [3] and [4], which contain extensive bibliography. See excellent surveys in [1] and [2].
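The paper's own algorithm is not reproduced above (only its abstract is), but error-free summation methods in this family are typically built from error-free transformations such as Knuth's TwoSum, which splits a single floating-point addition into a rounded sum and its exact rounding error. A sketch in Python:

def two_sum(a, b):
    # Knuth's TwoSum: s = fl(a + b), and a + b == s + e holds exactly
    # (barring overflow), using only double-precision operations.
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_err = b - b_virtual
    a_err = a - a_virtual
    return s, a_err + b_err

s, e = two_sum(1e16, 1.0)
print(s, e)   # 1e+16 1.0 -- the rounded sum and the exact error term

Accumulating and re-summing these error terms is one standard route to the error-free sums that the abstract describes.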