Binary Division Rounding Error
A digital computer can represent exactly integers in a certain range, depending on the word size used for integers. Certain floating-point numbers may also be represented exactly, depending on the representation scheme in use on the computer in question and the word size used for floating-point numbers. Other floating-point numbers cannot be represented exactly, regardless of the word size used. Errors due to rounding have long been the bane of analysts trying to solve equations and systems. Such errors may be introduced in many ways, for instance:

- inexact representation of a constant
- integer overflow resulting from a calculation with a result too large for the word size
- overflow resulting from a calculation with a result too large for the number of bits used to represent the mantissa of a floating-point number
- accumulated error resulting from repeated use of numbers stored inexactly

Integer Representation

A digital computer can represent exactly integers in the range 0..2^n - 1, where n is the number of bits used to represent an integer (typically 16 or 32; 2^16 - 1 = 65,535 and 2^32 - 1 = 4,294,967,295). Usually, half these integers are used to represent negative numbers, so the effective range is -2^(n-1)..2^(n-1) - 1 (-32,768..32,767 for 16 bits, -2,147,483,648..2,147,483,647 for 32 bits). To accomplish this, "two's complement" representation is typically used, in which a negative number k is represented by adding a "bias term" of 2^n to get k + 2^n. For instance, if n = 4 bits is used (a word size too small to be practical, but useful for illustration), the numbers -8..7 may be represented by adding a bias term of 2^4 = 16, so that the negative numbers -8..-1 are represented as 8..15 (see below).

Four bits:

 0 0000    4 0100     8 1000    12 1100
 1 0001    5 0101     9 1001    13 1101
 2 0010    6 0110    10 1010    14 1110
 3 0011    7 0111    11 1011    15 1111

Two's complement:

 0 0000    4 0100    -8 1000    -4 1100
 1 0001    5 0101    -7 1001    -3 1101
 2 0010    6 0110    -6 1010    -2 1110
 3 0011    7 0111    -5 1011    -1 1111

Overflow

Overflow occurs when an arithmetic calculation results in an integer too large for the word size. For instance, with n = 4 bits, the result of adding 6 + 7 is 13, which exceeds the maximum positive integer (7). The bit pattern of this result is 1101, which is interpreted as -3. In a more realistic example, adding 20,000 + 20,000 produces an integer too large for 16-bit integers (with the result interpreted as -25,536).

Floating-Point Representation

The exact way floating-point numbers are represented varies between computing platforms, although the same basic ideas apply in general.
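The tables and the overflow examples above can be reproduced in a few lines of Python. This is a minimal sketch; the helper names `encode` and `decode` are my own, not from the article.

```python
# Minimal sketch of 4-bit two's-complement encoding and overflow.

N = 4                       # word size in bits
MASK = (1 << N) - 1         # 0b1111: keep only the low 4 bits

def encode(k):
    """Map an integer in -8..7 to its 4-bit pattern (adds 2**N if negative)."""
    return k & MASK

def decode(bits):
    """Interpret a 4-bit pattern as a signed two's-complement integer."""
    return bits - (1 << N) if bits >= (1 << (N - 1)) else bits

# -3 is stored as -3 + 16 = 13 = 0b1101
print(format(encode(-3), "04b"))                  # 1101

# Overflow: 6 + 7 = 13 exceeds the maximum (7) and reads back as -3
print(decode((encode(6) + encode(7)) & MASK))     # -3

# The 16-bit case from the text: 20000 + 20000 reads back as -25536
print(((20000 + 20000) & 0xFFFF) - 2 ** 16)       # -25536
```

Masking with `& MASK` models the hardware discarding carry bits that do not fit in the word, which is exactly why overflow silently wraps around.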
Python float division "rounding error" on division by 100

The following question, asked on Stack Overflow (http://stackoverflow.com/questions/19703657/python-float-division-rounding-error-on-division-by-100, Oct 31 2013), shows the problem in practice:

"When dividing a float by 100 in Python 2.7 I get the following 'rounding behaviour':

>>> 3.7e-03/100
3.7000000000000005e-05

I would expect the following:

>>> 3.7e-03/100
3.7e-05

Note that:

>>> 3.7e-03/100 == 3.7e-05
False
>>> 3.7000000000000005e-05 == 3.7e-05
False

While probably of no practical consequence in most applications, I find this behaviour somewhat disconcerting. Why does this happen and how can I avoid it? I am using Python 2.7.5."

A commenter (doctorlove) summed up the cause: first, computers use binary, and 10 isn't a power of two; second, they have only a limited number of bits, so even if something could be written as a sum of exact powers of two, you might still get rounding errors from the limited precision. For the full story, read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
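As a sketch of the answer to "how can I avoid it": the same comparison, plus two common workarounds. Note that `math.isclose` requires Python 3.5+, unlike the Python 2.7 used in the question.

```python
import math
from decimal import Decimal

# The surprising comparison from the question: both operands pick up
# binary rounding error, so the two results differ in the last bit.
print(3.7e-03 / 100 == 3.7e-05)                 # False

# Workaround 1: compare floats with a tolerance rather than ==.
print(math.isclose(3.7e-03 / 100, 3.7e-05))     # True

# Workaround 2: do the arithmetic in decimal, which represents
# 3.7e-03 and 3.7e-05 exactly.
print(Decimal("3.7e-03") / 100 == Decimal("3.7e-05"))   # True
```

Note that the `Decimal` values must be constructed from strings; `Decimal(3.7e-03)` would inherit the binary rounding error of the float.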
Many new programmers become aware of binary floating-point after seeing their programs give odd results: "Why does my program print 0.10000000000000001 when I enter 0.1?"; "Why does 0.3 + 0.6 = 0.89999999999999991?"; "Why does 6 * 0.1 not equal 0.6?" Questions like these are asked every day on online forums like stackoverflow.com. The answer is that most decimals have infinite representations in binary. Take 0.1 for example. It's one of the simplest decimals you can think of, and yet it looks complicated in binary: the bits go on forever, and no matter how many of those bits you store in a computer, you will never end up with the binary equivalent of decimal 0.1.

[Figure: decimal 0.1 in binary, to 1369 places]

0.1 In Binary

0.1 is one-tenth, or 1/10. To show it in binary -- that is, as a "bicimal" -- divide binary 1 by binary 1010, using binary long division.

[Figure: computing one-tenth in binary by long division]

The division process would repeat forever -- and so too the digits in the quotient -- because 100 ("one-zero-zero") reappears as the working portion of the dividend. Recognizing this, we can abort the division and write the answer in repeating bicimal notation as 0.0(0011), where the digits in parentheses repeat forever.

0.1 In Floating-Point

0.0(0011) is a finite representation of an infinite number of digits. That doesn't help us with floating-point, though: floating-point does not represent numbers using repeat bars; it represents them with a fixed number of bits. In double-precision floating-point, for example, 53 significant bits are used, so the otherwise infinite representation is rounded to 53 significant bits. Let's see what 0.1 looks like in double-precision. First, let's write it in binary, truncated to 57 significant bits:

0.000110011001100110011001100110011001100110011001100110011001...
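The long division above can be reproduced by repeated doubling, the usual digit-by-digit algorithm for binary fractions. This is a sketch; `binary_digits` is a made-up helper name.

```python
from fractions import Fraction

def binary_digits(frac, n):
    """First n binary digits of frac (0 < frac < 1) after the point."""
    digits = []
    for _ in range(n):
        frac *= 2                  # shift one binary place to the left
        bit = int(frac)            # the next digit is 1 iff we reached 1
        digits.append(str(bit))
        frac -= bit                # keep only the fractional part
    return "".join(digits)

# The repeating 0011 pattern emerges immediately:
print("0." + binary_digits(Fraction(1, 10), 21))
# 0.000110011001100110011
```

Using `Fraction` keeps the arithmetic exact, so the repeating pattern continues correctly for as many digits as requested; doing the same with a float would go wrong once the stored 53-bit approximation runs out.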
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction 0.125 has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.001 has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.

Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.

The problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate that as a base 10 fraction: 0.3, or better, 0.33, or better, 0.333, and so on. No matter how many digits you're willing to write down, the result will never be exactly 1/3, but will be an increasingly better approximation of 1/3.

In the same way, no matter how many base 2 digits you're willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the infinitely repeating fraction

0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation. On most machines today, floats are approximated using a binary fraction with the numerator using the first 53 bits starting with the most significant bit and with the denominator as a power of two. In the case of 1/10, the binary fraction is 3602879701896397 / 2 ** 55, which is close to but not exactly equal to the true value of 1/10.

Many users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine.
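The fraction 3602879701896397 / 2 ** 55 need not be taken on faith; it can be checked directly with `float.as_integer_ratio`:

```python
from fractions import Fraction

# The exact rational value of the double nearest to 0.1:
num, den = (0.1).as_integer_ratio()
print(num)               # 3602879701896397
print(den == 2 ** 55)    # True

# It is close to, but not exactly, one tenth:
print(Fraction(num, den) == Fraction(1, 10))   # False
```

`as_integer_ratio` returns the numerator and denominator of the stored binary fraction with no rounding, which makes it a convenient way to see exactly what a float holds.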
On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:

>>> 1 / 10
0.1

Just remember, even though the printed result looks like the exact value of 1/10, the actual stored value is the nearest representable binary fraction.
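The "true decimal value" above can also be inspected directly: constructing a `Decimal` from a float converts the stored binary value exactly, digit for digit (Python 3 shown, where `repr` gives the shortest string that round-trips to the same float).

```python
from decimal import Decimal

# Exact decimal expansion of the binary value stored for 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The rounded display form most users see:
print(repr(0.1))
# 0.1
```

This is the same contrast the text describes: one number stored, two very different ways of printing it.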