Consider the following code:
a = 1 / 3.0
b = 4 / 6.0

print( a )
print( b )
print( a + b )
print( a - b )
print( a * b )
The values here are only as accurate as the floating-point hardware allows, and they can lose precision over many calculations.
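To see how small binary rounding errors accumulate, consider this sketch: summing 0.1 ten times does not produce exactly 1.0, because 0.1 has no exact binary representation.

```python
# Accumulated floating-point error: 0.1 cannot be stored exactly
# in binary, so repeated addition drifts away from the true sum.
total = 0.0
for _ in range(10):
    total += 0.1

print( total )         # close to 1.0, but not exact
print( total == 1.0 )  # False
```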
Both Fraction and Decimal provide ways to get exact results.
In the following example, floating-point arithmetic does not give the exact zero answer expected.
from fractions import Fraction
from decimal import Decimal

print( 0.1 + 0.1 + 0.1 - 0.3 )  # This should be zero (close, but not exact)
print( Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10) - Fraction(3, 10) )
print( Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3') )
Fractions and decimals both give more intuitive and accurate results than floating-point numbers:
from fractions import Fraction
from decimal import Decimal
import decimal

decimal.getcontext().prec = 2

print( 1 / 3 )                    # Use a ".0" in Python 2.X for true "/"
print( Fraction(1, 3) )           # Numeric accuracy, two ways
print( Decimal(1) / Decimal(3) )

print( (1 / 3) + (6 / 12) )       # Use a ".0" in Python 2.X for true "/"
print( Fraction(6, 12) )          # Automatically simplified
print( Fraction(1, 3) + Fraction(6, 12) )
print( decimal.Decimal(str(1/3)) + decimal.Decimal(str(6/12)) )

print( 1000.0 / 1234567890 )
print( Fraction(1000, 1234567890) )  # Substantially simpler!
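As a further sketch (beyond the example above), Fraction can also be built from strings, and its limit_denominator method approximates a float with the closest simple fraction. Note that building a Fraction directly from a float exposes the float's exact binary value, which is rarely what you want:

```python
from fractions import Fraction

# Fraction accepts "numerator/denominator" strings and simplifies them.
print( Fraction('6/12') )                     # 1/2

# Built from a float, the fraction reflects the exact binary value of 0.1,
# so the numerator and denominator are enormous.
print( Fraction(0.1) )

# limit_denominator finds the closest fraction with a bounded denominator.
print( Fraction(0.1).limit_denominator(10) )  # 1/10
```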