Presumably because it allows you to do anything involving double-precision or 32-bit integer arithmetic, and performance was not originally a major consideration. It's pretty rare to need more than 53 bits of precision (and it was even rarer for JS's original use cases), so it makes sense that the numeric type is kept simple. Edit: and to clarify, the advantage is that this makes a basic implementation extremely simple. Only if you want to optimize your engine's performance do you have to worry about shuffling types around.
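For what it's worth, the 53-bit limit is easy to see in a Node.js session:

> Math.pow(2, 53)
9007199254740992
> Math.pow(2, 53) + 1
9007199254740992
> Math.pow(2, 53) + 2
9007199254740994

Once you pass 2^53, adjacent integers stop being representable, so 2^53 + 1 rounds back down to 2^53.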
These days the solution for having more precision is to use an external library. I think that's generally fine, although performance is a concern. Financial applications aside, working with arbitrary precision is a good hint that you might be doing something processor-intensive. It's certainly a case where I'd like the library to be compiled to target asm.js, and maybe optionally NaCl, once those have widespread adoption. Ideally, ECMAScript would also have a native implementation, but that won't eliminate the need for a shim library for years to come.
Performance was always enough of a consideration that even BE's original implementation had both int32 and double types internally, though black-box unobservable (from outside the black box, everything appears to be a double).
As a side note, I'll bet you could have actually observed the difference via timing at the time, assuming you knew what hardware you were working on. On an early Pentium, a floating-point add could take up to 3 times as long as an integer add (depending on implementation), so by timing additions in a loop, you might be able to tell whether a given value was being treated as an integer or a double.
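Something along these lines (a rough sketch; on a modern JIT it won't show anything, since the engine already does this int/double specialization behind the scenes):

  function timeAdds(x) {
    var start = Date.now();
    var acc = x;
    for (var i = 0; i < 10000000; i++) {
      acc = acc + x;               // ten million additions of the same value
    }
    return Date.now() - start;     // elapsed milliseconds
  }

  timeAdds(1);     // a value an engine could keep as an int32
  timeAdds(1.5);   // a value that has to stay a double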
Floating-point numbers ("floats") work like scientific notation (e.g. "12 * 10^-3"). They have a mantissa and an exponent. Double-precision (64-bit) floats have a 52-bit integer mantissa, a sign bit, and 11 bits used to represent the exponent or to encode special values such as NaN ("Not a Number"), like the result of "0/0".
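You can poke at that layout directly; a minimal sketch using a DataView to reinterpret the 64 bits (the 1023 offset is the IEEE 754 exponent bias):

  var buf = new ArrayBuffer(8);
  var view = new DataView(buf);
  view.setFloat64(0, 12e-3);                    // store the double (big-endian by default)
  var hi = view.getUint32(0);                   // sign + exponent + top 20 mantissa bits
  var sign = hi >>> 31;                         // 1 sign bit
  var exponent = ((hi >>> 20) & 0x7ff) - 1023;  // 11 exponent bits, stored with a bias of 1023
  console.log(sign, exponent);                  // 0 and -7, since 0.012 ~= 1.536 * 2^-7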
> Why [only floats in JavaScript]?
The language was supposed to have a low barrier to entry, and having a single number type was thought to be less complex to explain, even though floats have some weird corner cases regarding rounding if you don't understand how they are implemented. Beyond the loss of precision in large numbers, some rationals that have a finite representation in base 10 have an infinite representation in base 2.
For example, 0.2 in base 10 is 0.00110011... in base 2. It is thus rounded to 52 bits, and where there is rounding, there is rounding error.
Here's a Node.js session that demonstrates the behaviour:
> e = 0.2
0.2
> e = e + 0.2
0.4
> e = e + 0.2
0.6000000000000001
> e == 0.6
false
> e = e + 0.2
0.8
> e = e + 0.2
1
> e == 1
true
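You can also see the rounded value directly by asking for more digits than the default formatting prints:

> (0.2).toFixed(20)
'0.20000000000000001110'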
As for why? I have no clue. It's specified in the spec; that's all. There are ToInteger(), ToInt32(), ToUint16(), and ToUint32() functions defined as well, but I think those are for the host to implement.
EDIT: oops, there isn't a ToInt64() function defined.
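Those conversions do surface indirectly, though: the bitwise operators are specified in terms of them. For instance (same kind of session):

> (Math.pow(2, 32) + 5) | 0
5
> -1 >>> 0
4294967295

The | operator runs its operands through ToInt32 and >>> runs them through ToUint32, which is why the values wrap like that.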