There is a floating-point number called 0.1, but it is not exactly [math]\tfrac{1}{10}[/math]; it is the floating-point number closest to [math]\tfrac{1}{10}[/math], which is [math]\tfrac{3602879701896397}{2^{55}}[/math]. Conversely, the language runtime knows to display [math]\tfrac{3602879701896397}{2^{55}}[/math] as the shortest string that represents it, which is 0.1, rather than the exact decimal expansion 0.1000000000000000055511151231257827021181583404541015625.
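Python uses the same IEEE 754 double-precision format, so a quick sketch with the standard-library fractions and decimal modules can verify both the exact fraction and the exact decimal expansion:

```python
from fractions import Fraction
from decimal import Decimal

# Fraction(float) recovers the exact rational value stored for the literal 0.1.
# Its denominator is 2**55 = 36028797018963968.
print(Fraction(0.1))
print(Fraction(0.1) == Fraction(3602879701896397, 2**55))  # → True

# Decimal(float) prints the full decimal expansion of that same value.
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625
```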

This process may make it appear as though no round-off is happening for short decimals (up to 15 significant digits). But that illusion disappears as soon as you try to do arithmetic. For example, 0.1 * 3 represents [math]\tfrac{3602879701896397}{2^{55}} \cdot 3 = \frac{10808639105689191}{2^{55}}[/math], which lies exactly halfway between two representable numbers and rounds (by round-half-to-even) to [math]\frac{5404319552844596}{2^{54}}[/math]. However, 0.3 represents [math]\frac{5404319552844595}{2^{54}}[/math], which is closer to [math]\tfrac{3}{10}[/math] than [math]\frac{5404319552844596}{2^{54}}[/math] is. Since 0.3 does not represent the same number as 0.1 * 3, the latter cannot be displayed as 0.3; it is instead displayed as 0.30000000000000004.
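The same comparison can be sketched in Python, again relying on its IEEE 754 doubles; the two exact values differ by one unit in the last place, so the equality test fails:

```python
from fractions import Fraction

a = 0.1 * 3
b = 0.3

print(a == b)  # → False
print(a)       # → 0.30000000000000004
print(b)       # → 0.3

# The exact values behind each float, compared against the fractions
# from the explanation above:
print(Fraction(a) == Fraction(5404319552844596, 2**54))  # → True
print(Fraction(b) == Fraction(5404319552844595, 2**54))  # → True
```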

(This answer assumes double-precision floating-point numbers, which are used by JavaScript. Some languages support other levels of precision, in which case the same concept holds with different numbers.)
