In this article, we will learn the difference between decimal, float, and double in .NET.
Float - 7 significant digits (32 bit)
Double - 15-16 significant digits (64 bit)
Decimal - 28-29 significant digits (128 bit)
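To confirm these storage sizes directly, C#'s sizeof operator works on all three types in safe code (a minimal sketch; sizes are in bytes):
Console.WriteLine(sizeof(float));   // 4 bytes  = 32 bit
Console.WriteLine(sizeof(double));  // 8 bytes  = 64 bit
Console.WriteLine(sizeof(decimal)); // 16 bytes = 128 bit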
Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double or float.
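The exact slowdown depends on hardware and workload, but a rough microbenchmark with System.Diagnostics.Stopwatch along these lines shows the gap (the loop count and constants here are arbitrary choices for illustration):
using System.Diagnostics;

const int N = 10_000_000;
double d = 1.1;
decimal m = 1.1M;

var sw = Stopwatch.StartNew();
for (int i = 0; i < N; i++) d *= 1.0000001;  // hardware floating-point multiply
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

sw = Stopwatch.StartNew();
for (int i = 0; i < N; i++) m *= 1.0000001M; // software 128-bit decimal multiply
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");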
A decimal cannot be compared with a float or double without an explicit cast, whereas floats and doubles can be compared with each other directly. Decimals also allow the encoding of trailing zeros, as the sketch after the result below shows.
Example
float flt = 1F / 3;   // float literal: F suffix
double dbl = 1D / 3;  // double literal: D suffix
decimal dcm = 1M / 3; // decimal literal: M suffix
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);
Result
float: 0.3333333
double: 0.333333333333333
decimal: 0.3333333333333333333333333333
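The cast rule and the trailing-zero behavior mentioned above can be seen in a short sketch:
double d = 1.5;
float f = 1.5F;
decimal m = 1.5M;

// Mixing decimal with double/float needs an explicit cast:
// Console.WriteLine(d == m);       // compile-time error CS0019
Console.WriteLine((decimal)d == m); // True

// float and double can be compared directly (float widens to double):
Console.WriteLine(d == f);          // True

// decimal keeps trailing zeros by storing a scale:
Console.WriteLine(1.0M);            // 1.0
Console.WriteLine(1.00M);           // 1.00
Console.WriteLine(1.0);             // 1 (double drops the trailing zero)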
Decimal - 128 bit (28-29 significant digits)
For financial applications, it is better to use the decimal type because it gives a high level of accuracy and makes it easy to avoid rounding errors. Use decimal for non-integer math where precision is needed (e.g., money and currency).
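The classic rounding pitfall shows why: 0.1 has no exact binary representation, so it carries error in a double but not in a decimal (a minimal sketch):
double dSum = 0.1 + 0.2;
decimal mSum = 0.1M + 0.2M;

Console.WriteLine(dSum == 0.3);  // False: 0.1 and 0.2 are only approximated in binary
Console.WriteLine(mSum == 0.3M); // True: decimal stores base-10 digits exactly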
Double - 64 bit (15-16 significant digits)
Double is probably the most commonly used data type for real values, except when handling money. Use double for non-integer math where the most precise answer isn't necessary.
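One reason double is the default choice is that the System.Math APIs take and return double, so no casts are needed (a small sketch):
double root = Math.Sqrt(2.0);      // Math works in double throughout
double area = Math.PI * 3.0 * 3.0; // area of a circle of radius 3
Console.WriteLine("{0} {1}", root, area);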
Float - 32 bit (7 significant digits)
Float is used mostly in graphics libraries because of their very high demands for processing power, and in other situations that can tolerate rounding errors.
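The appeal for graphics work is largely size and throughput: a float buffer holds the same number of values in half the memory of a double buffer. A hypothetical vertex array makes the point:
float[] vertices = { 0.0F, 0.5F, -0.5F, -0.5F, 0.5F, -0.5F }; // hypothetical triangle data
Console.WriteLine(vertices.Length * sizeof(float));  // 24 bytes as float
Console.WriteLine(vertices.Length * sizeof(double)); // 48 bytes if stored as double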