Some languages provide a decimal type in addition to the float and double types. What is a decimal, and when should we use decimals instead of floats or doubles?
These three concepts are not actually defined at the same level. Many people know the difference between a float and a double: they differ in width and precision. A double is still a float, just a double-width one.
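As a quick sketch of that width difference, Python's `struct` module can report the byte sizes of the underlying C types (a single-precision float is typically 4 bytes, a double 8 bytes):

```python
import struct

# 'f' is a C single-precision float, 'd' is a C double.
print(struct.calcsize('f'))  # 4 bytes, ~7 significant decimal digits
print(struct.calcsize('d'))  # 8 bytes, ~15-16 significant decimal digits
```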
A decimal is also a floating-point type. Instead of a binary (base-2) float, it is a decimal (base-10) float. Like a binary float, it has sign, significand, and exponent fields. While a binary float's value is sign * significand * 2 ^ exponent, a decimal float's value is sign * significand * 10 ^ exponent. That's it.
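In Python, these three fields can be inspected directly: `decimal.Decimal.as_tuple()` exposes the sign, the significand digits, and the exponent:

```python
from decimal import Decimal

# Decimal('-1.23') is stored as sign=1, digits=(1, 2, 3), exponent=-2,
# i.e. (-1) ** 1 * 123 * 10 ** -2.
t = Decimal('-1.23').as_tuple()
print(t.sign, t.digits, t.exponent)  # 1 (1, 2, 3) -2
```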
Why are decimals useful?
If decimals are simply floats with a different base, how can they be more useful than binary floats?
The answer is about humans. Humans are used to representing values in decimal, and not all such values can be exactly represented in binary form (the computer's native form). An example is 1/5, which is 0.2 in decimal but needs an infinite number of bits in binary. So errors can occur and accumulate when the binary form is used to compute with these numbers.
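A quick way to see this in Python: since 0.1, 0.2, and 0.3 are all stored as slightly-off binary approximations, even a single addition picks up an error:

```python
# None of these literals is exactly representable as a binary double,
# so the sum misses the expected value by a tiny amount.
print(0.1 + 0.2 == 0.3)   # False
print((0.1 + 0.2) - 0.3)  # a tiny nonzero error
```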
As an example, suppose you earn $0.10 every second and you want to know how much you have earned in the past 10^8 seconds:
```python
a = 0.0
for i in range(10 ** 8):
    a += 0.1
print(a)
```
The result is not 10000000.0 but something like 9999999.98…, so you have lost about 2 cents. What if you want the result for 10^16 seconds instead of 10^8 seconds? Such errors are not acceptable in a financial system.
The problem doesn’t happen when using decimals:
```python
from decimal import Decimal

a = Decimal('0.0')
for i in range(10 ** 8):
    a += Decimal('0.1')
print(a)
```
The result is exactly 10000000.0.
Are we trouble-free with decimals? Of course not. There are numbers that cannot be exactly represented in either decimal or binary form. An example is 1/3, so you still get errors when computing with such numbers. Real-life cases include commissions and interest rates. For example, if the principal is $100 and the interest rate is 1/30, then the interest is 100 / 30 = $3.33…(repeating), which is rounded to $3.33, and you lose a fraction of a cent even though you are using decimals.
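A sketch of that interest computation with Python's `decimal` module (the rounding mode and the choice to quantize to whole cents are illustrative assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP

principal = Decimal('100')
rate = Decimal(1) / Decimal(30)  # 0.0333..., already rounded to 28 digits
interest = principal * rate      # still an inexact repeating value

# Round to whole cents; the remaining fraction of a cent is lost.
cents = interest.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(cents)  # 3.33
```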
Therefore, decimals solve some of our problems, but not all of them. Addition and subtraction give exact results when the inputs are exact, but multiplication, division, and more advanced calculations may still suffer from rounding errors.