I have read and heard that the Java language has some cons. Today, to my surprise, I discovered one loophole myself, in an operation so simple that I didn’t expect it there. The flaw I found is that when two double values are divided and the result is also a double, Java doesn’t always yield the correct result. For instance,

0.3d/12.0d = 0.024999999999999998

while any normal calculator, or a compiler like C++, gives 0.025. My conjecture about why it happens is this: Java probably just reads the raw underlying bits from the registers and displays them after converting directly to a decimal value, while other compilers like C++ do an intelligent conversion from bits to decimal format before displaying.
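For comparison, Java can also be asked to display the value with a limited number of digits, which produces the same rounded result a default-precision C++ stream prints. This is only a sketch of the display side of the issue; the six-digit width below is my own choice:

```java
public class Division {
    public static void main(String[] args) {
        double q = 0.3d / 12.0d;
        // Double.toString prints every digit needed to round-trip the double
        System.out.println(q);          // 0.024999999999999998
        // Limiting the displayed precision rounds the extra digits away
        System.out.printf("%.6f%n", q); // 0.025000
    }
}
```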

Whatever the cause, to meet my immediate purpose I worked around it (with my conjecture above in mind) by multiplying both the dividend and the divisor by a number matching the maximum decimal precision allowed in the context I was working in. The result then came out correctly.

(1000000 * 0.3d)/(1000000 * 12.0d) = 0.025 // max precision allowed = 6

However, this may not yield a precise result in all cases.
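A runnable version of the scaled division above, for anyone who wants to check it (the 1000000 scale factor is taken from my context and is not universal):

```java
public class ScaledDivision {
    public static void main(String[] args) {
        // Scaling both operands by the same factor before dividing
        double scaled = (1000000 * 0.3d) / (1000000 * 12.0d);
        System.out.println(scaled); // 0.025
    }
}
```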

Another, easier solution is to use the BigDecimal class:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

BigDecimal bd = new BigDecimal(0.3d / 12.0d, new MathContext(2, RoundingMode.HALF_UP));

System.out.println(bd); // 0.025
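A variation on this, sketched under the assumption that the operands are known as decimal literals up front, is to construct both BigDecimal values from strings so the binary rounding of the double never enters the calculation (the scale of 3 and the rounding mode below are my own choices):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactDivision {
    public static void main(String[] args) {
        // String constructors capture 0.3 and 12.0 exactly, with no binary rounding
        BigDecimal dividend = new BigDecimal("0.3");
        BigDecimal divisor = new BigDecimal("12.0");
        // Divide to 3 decimal places with half-up rounding
        BigDecimal result = dividend.divide(divisor, 3, RoundingMode.HALF_UP);
        System.out.println(result); // 0.025
    }
}
```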

I hope a future version of the JDK will come out with a fix for this bug.