> Why [+/- 52 bit]?

Floating point numbers ("floats") work like scientific notation (e.g. "12 * 10^-3"): they have a mantissa and an exponent. Double precision (64-bit) floats have a sign bit, 11 exponent bits, and 52 mantissa bits; reserved exponent values encode infinities and NaN ("Not a Number"), like the result of "0/0".
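
If you want to poke at that layout yourself, here's a quick sketch of my own (not anything from the parent) that writes a double and reads its raw bytes back with a DataView:

    // Sketch: dump the 64 bits of a double via a DataView.
    // setFloat64 writes big-endian by default, so the sign bit comes first.
    const buf = new ArrayBuffer(8);
    const view = new DataView(buf);
    view.setFloat64(0, 0.2);
    let bits = '';
    for (let i = 0; i < 8; i++) {
      bits += view.getUint8(i).toString(2).padStart(8, '0');
    }
    console.log(bits.slice(0, 1));  // sign bit: "0"
    console.log(bits.slice(1, 12)); // exponent: "01111111100", i.e. 1020 - 1023 = -3
    console.log(bits.slice(12));    // the 52 explicit mantissa bits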

> Why [only floats in JavaScript]?

The language was supposed to have a low barrier to entry, and a single number type was thought to be simpler to explain, even though floats have some weird corner cases around rounding if you don't understand how they are implemented. Beyond the loss of precision in large integers, some rationals that have a finite representation in base 10 have an infinite representation in base 2.
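
To make the large-integer part concrete: integers stop being exact at 2^53, since that's when the gap between consecutive doubles exceeds 1. A quick illustration (mine, not the parent's):

    console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1
    console.log(2 ** 53 === 2 ** 53 + 1); // true: 2^53 + 1 rounds back to 2^53
    console.log(9007199254740993);        // prints 9007199254740992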

As for the base-2 issue: 0.2 in base 10 is 0.00110011... in base 2. It therefore has to be rounded to fit in 52 bits, and rounding means rounding error.
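
You can ask the engine for the binary expansion of the value it actually stored:

    console.log((0.2).toString(2));
    // 0.001100110011001100110011001100110011001100110011001101
    // the repeating "0011" pattern is cut off and the last bit rounded up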

Here's a Node.js session that demonstrates the behaviour:

    > e = 0.2
    0.2
    > e = e + 0.2
    0.4
    > e = e + 0.2
    0.6000000000000001
    > e == 0.6
    false
    > e = e + 0.2
    0.8
    > e = e + 0.2
    1
    > e == 1
    true
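
The usual mitigation, for what it's worth, is to compare with a tolerance rather than ==. A minimal sketch (approxEqual is my own helper, not a built-in):

    // Compare within a small absolute tolerance instead of exact equality.
    const approxEqual = (a, b, eps = Number.EPSILON) => Math.abs(a - b) < eps;
    console.log(0.2 + 0.2 + 0.2 === 0.6);           // false
    console.log(approxEqual(0.2 + 0.2 + 0.2, 0.6)); // true

Note that Number.EPSILON is only a sensible tolerance for values near 1; for larger magnitudes the tolerance has to scale with the operands.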
