Floating Point Error Accumulation
In general: multiplication and division are “safe” operations. Addition and subtraction are dangerous, because when numbers of different magnitudes are involved, digits of the smaller-magnitude number are lost. This loss of digits can be inevitable and benign (when the lost digits are also insignificant for the final result) or catastrophic (when the loss is magnified and strongly distorts the result).

The more calculations are done (especially when they form an iterative algorithm), the more important it is to consider this kind of problem. A method of calculation can be stable (meaning that it tends to reduce rounding errors) or unstable (meaning that rounding errors are magnified). Very often, there are both stable and unstable solutions to a problem. There is an entire sub-field of mathematics (numerical analysis) devoted to studying the numerical stability of algorithms. For complex calculations involving floating-point numbers, it is absolutely necessary to have some understanding of this discipline.

The article What Every Computer Scientist Should Know About Floating-Point Arithmetic gives a detailed introduction, and served as an inspiration for creating this website, mainly due to being a bit too detailed and intimidating to programmers without a scientific background. (Published at floating-point-gui.de under the Creative Commons Attribution License (BY).)
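To make the loss of digits concrete, here is a small Python illustration (our own sketch, not taken from the guide) using IEEE-754 double precision:

    # Adding numbers of very different magnitudes drops the low-order
    # digits of the smaller operand entirely.
    big = 1.0e16
    small = 1.0
    print(big + small == big)   # True: 1.0 falls below big's last representable digit.

    # Over many operations, such small losses accumulate:
    total = 0.0
    for _ in range(10):
        total += 0.1            # 0.1 has no exact binary representation.
    print(total)                # 0.9999999999999999, not 1.0

Summing the same values in a different order, or with a compensated algorithm such as the one described below, can shrink or remove the error; that sensitivity to the order of operations is exactly what numerical stability is about.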
In numerical analysis, the Kahan summation algorithm (also known as compensated summation) significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors). In particular, simply
summing n numbers in sequence has a worst-case error that grows proportional to
n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2] The algorithm is attributed to William Kahan.[3] Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although it was first documented around the same time[4]), and delta-sigma modulation[5] (which integrates, rather than just sums, the error).

The algorithm

In pseudocode, the algorithm is:

    function KahanSum(input)
        var sum = 0.0
        var c = 0.0              // A running compensation for lost low-order bits.
        for i = 1 to input.length do
            var y = input[i] - c // So far, so good: c is zero.
            var t = sum + y      // Alas, sum is big, y small, so low-order digits of y are lost.
            c = (t - sum) - y    // (t - sum) cancels the high-order part of y; subtracting y recovers -(low part of y).
            sum = t              // Algebraically, c should always be zero. Beware overly aggressive optimizing compilers!
        next i                   // Next time around, the lost low part will be added to y in a fresh attempt.
        return sum

Worked example

This example will be given in decimal. Computers typically use binary arithmetic, but the principle being illustrated is the same. Suppose we are using six-digit decimal floating-point arithmetic, sum has attained the value 10000.0, and the next two values of input[i] are 3.14159 and 2.71828. The exact result is 10005.85987, which rounds to 10005.9. With a plain summation, each incoming value is aligned with sum and its low-order digits are lost to rounding: 10000.0 + 3.14159 rounds to 10003.1, and 10003.1 + 2.71828 rounds to 10005.8, which is off by 0.1 from the correctly rounded result. With compensated summation, the compensation variable captures the lost low-order digits (after the first addition, c = (10003.1 - 10000.0) - 3.14159 = -0.0415900), and folding them back into the next addition yields the correctly rounded 10005.9.
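As a complement to the pseudocode, here is a minimal executable sketch in Python (the function name and test data are our own illustration; math.fsum is Python's built-in exactly rounded summation, used here as a reference):

    import math

    def kahan_sum(values):
        """Compensated (Kahan) summation of a sequence of floats."""
        total = 0.0
        c = 0.0                  # Running compensation for lost low-order bits.
        for x in values:
            y = x - c            # Apply the compensation saved from the last step.
            t = total + y        # Low-order digits of y may be lost here...
            c = (t - total) - y  # ...but this recovers (the negative of) what was lost.
            total = t
        return total

    data = [0.1] * 1_000_000
    print(sum(data))             # e.g. 100000.00000133288: naive error grows with n.
    print(kahan_sum(data))       # e.g. 100000.0: error independent of n.
    print(math.fsum(data))       # 100000.0: exactly rounded reference result.

Note the caveat in the pseudocode: an optimizer that reassociates floating-point arithmetic (for example, fast-math style flags in C compilers) may simplify (t - total) - y to zero algebraically, silently disabling the compensation.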
Abstract

This paper provides a framework that highlights the features of computer models that make them especially vulnerable to floating-point errors, and suggests ways in which the impact of such errors can be mitigated. We focus on small floating-point errors because these are the most likely to occur, whilst still potentially having a major influence on the outcome of the model. The significance of small floating-point errors in computer models can often be reduced by applying a range of different techniques to different parts of the code. Which technique is most appropriate depends on the specifics of the particular numerical situation under investigation. We illustrate the framework by applying it to six example agent-based models in the literature.

Keywords: Floating Point Arithmetic, Floating Point Errors, Agent Based Modelling, Computer Modelling, Replication

Introduction

1.1 If a model uses floating-point numbers, chances are that it is suffering floating-point errors. However, the important question is not whether there are floating-point errors in the model (which there almost certainly are); neither is it whether these errors occur frequently. The question is not even (necessarily) whether the results obtained from the model are significantly affected by floating-point errors. The key question is whether floating-point errors have a substantial impact on whatever the model is used for. In short, the model might be suffering floating-point errors, but … does it really matter?

1.2 It is clear that there cannot be a general answer to such a question. The answer depends not only on the particular model under consideration but also, and crucially, on the specific use the researcher makes of it. Since researchers use models for very different purposes, the best that can be provided to help researchers assess whether floating-point errors are something to worry about is a framework that assists them in finding out the answer for themselves in any specific case they might encounter. The aim of this paper is to provide such a framework. We use six agent-based models that have formed the basis of several scientific publications to illustrate both the use and the usefulness of this framework.
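As one concrete example of the kind of technique the abstract alludes to (a hypothetical snippet of our own, not drawn from any of the six models): agent-based models frequently branch on comparisons between computed floats, and replacing exact equality with a tolerance-based test can remove the sensitivity to small floating-point errors:

    import math

    # Hypothetical model fragment: an agent acts when its energy reaches a threshold.
    energy = 0.1 + 0.2           # Actually stored as 0.30000000000000004.
    THRESHOLD = 0.3

    print(energy == THRESHOLD)   # False: exact equality is brittle under rounding.

    # Comparing within a relative tolerance absorbs small representation errors.
    print(math.isclose(energy, THRESHOLD, rel_tol=1e-9))  # True

Whether a tolerance is appropriate, and how large it should be, depends, as the paper stresses, on the specific numerical situation under investigation.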