The result of the (1.0 / 0) division is "Infinity". If we cast this expression to "byte" or to "short", the result is -1. But if we cast it to "int" or to "long", the result is that type's MAX_VALUE. The latter makes sense, but why the difference? In other words: why doesn't the expression (byte)(1.0 / 0) equal Byte.MAX_VALUE?
If (int)(Infinity) == Integer.MAX_VALUE, why doesn't (short)(Infinity) equal Short.MAX_VALUE?
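For reference, a small Java sketch (the class name is just illustrative) that reproduces the values described in the question:

```java
public class InfinityCasts {
    public static void main(String[] args) {
        double inf = 1.0 / 0;            // floating-point division by zero yields Infinity

        System.out.println(inf);         // Infinity
        System.out.println((byte) inf);  // -1
        System.out.println((short) inf); // -1
        System.out.println((int) inf);   // 2147483647          (Integer.MAX_VALUE)
        System.out.println((long) inf);  // 9223372036854775807 (Long.MAX_VALUE)
    }
}
```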
16 November 2020, 18:23
The answer you are searching for is here: Stack Overflow. Basically, a cast to int or long converts +Infinity to the type's largest possible value, and -Infinity to its smallest. When casting to a smaller type (short, byte, char), however, the value is first converted to an int and only then narrowed to the final type, and narrowing from a larger integer type simply chops off the higher bits. Integer.MAX_VALUE is a 0 followed by 31 ones; keep only the lowest 16 bits and you end up with sixteen 1s, which as a signed short equals -1.

With -Infinity (dividing a negative floating-point number by 0) you end up with 0 for the smaller types, because Integer.MIN_VALUE is a 1 followed by 31 zeros, and dropping the higher bits leaves all 0s, which is 0.
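To make the two-step conversion visible, here is a short sketch (class name again illustrative) that prints the intermediate int and the narrowed results:

```java
public class NarrowingBits {
    public static void main(String[] args) {
        double posInf = 1.0 / 0;
        double negInf = -1.0 / 0;

        // Step 1 of the narrowing conversion: double -> int saturates at the extremes.
        int step1 = (int) posInf;                          // Integer.MAX_VALUE = 0x7FFFFFFF
        System.out.println(Integer.toBinaryString(step1)); // 0 followed by 31 ones

        // Step 2: int -> short keeps only the low 16 bits, which are all ones -> -1.
        System.out.println((short) step1);   // -1
        System.out.println((short) posInf);  // -1 (both steps happen inside the one cast)

        // -Infinity saturates to Integer.MIN_VALUE = 0x80000000;
        // its low 16 (or 8) bits are all zeros, so the narrowed result is 0.
        System.out.println((short) negInf);  // 0
        System.out.println((byte) negInf);   // 0
    }
}
```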