Now, in any Lisp-derived language, you don't expect to have immediate integers with the full bit-width of the native word size, since a few bits need to be consumed by runtime tagging. If the language implementor is really clever, you can get away with losing only 2 bits, but mostly people are less clever than that, and you lose 5-8 bits, so it's reasonable to expect that MAXINT on a modern computer is at least 2^56, and more likely 2^59.
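To make the arithmetic concrete, here's a minimal sketch of such a tagging scheme, assuming a hypothetical 3-bit tag in the low bits of a 64-bit word (the tag values and widths are illustrative, not any particular runtime's layout); losing 3 bits leaves a 61-bit signed payload:

```javascript
// Hypothetical tagged-word scheme: low 3 bits are the type tag,
// leaving a 61-bit signed immediate-integer payload in a 64-bit word.
const TAG_BITS = 3n;
const TAG_FIXNUM = 0b001n;          // assumed tag value for fixnums
const WORD_MASK = (1n << 64n) - 1n; // clamp to a 64-bit machine word

function boxFixnum(n) {
  // Shift the payload up past the tag bits, then OR in the tag.
  return ((BigInt(n) << TAG_BITS) | TAG_FIXNUM) & WORD_MASK;
}

function unboxFixnum(w) {
  // Drop the tag, then sign-extend the 61-bit payload.
  return BigInt.asIntN(61, w >> TAG_BITS);
}
```

With 3 tag bits the representable immediates run from -2^60 to 2^60-1; a 5-bit tag would leave 2^59 as the ceiling the paragraph mentions.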
(It would also be reasonable to assume that any sane language runtime would have integers transparently degrade to BIGNUMs, making the choice of accuracy over speed, but of course that almost never happens, because the painful transition from 32-bit architectures to 64-bit architectures apparently taught the current crop of CS graduates no lesson better than, "Oh, did I say that 32 bits should be enough for everybody? I meant 64 bits." But again I digress.)
It's like an AI Koan: "One day a student came to Moon and said, 'I understand how to avoid using BIGNUMs! We will simply use floats!' Moon struck the student with a stick. The student was enlightened."
Even better is that this behavior is explicitly laid out in the language specification: ECMA 262, 8.5:
The finite nonzero values are of two kinds: 2^64-2^54 of them are normalised, [...] The remaining 2^53-2 values are denormalised [...] Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type.
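You can watch that 2^53 boundary from a REPL; the integers below it round-trip exactly, and the ones above it silently collapse onto their representable neighbors:

```javascript
// 2^53 is the edge of exact integer representation in an IEEE 754 double.
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // → true
console.log(2 ** 53 === 2 ** 53 + 1);                 // → true: the +1 is silently lost
console.log(2 ** 53 + 2);                             // → 9007199254740994: even integers survive a bit longer
```

No error, no warning; equality just quietly stops meaning what you think it means.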
Oh, also bitwise logical operations only have defined results (per the spec) up to 32 bits. And an out-of-range input is not an error condition: it is silently reduced mod 2^32 and reinterpreted as a signed 32-bit value.
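The ToInt32 coercion in action (these are straight out of the spec's conversion rules, no hypothetical machinery needed):

```javascript
// Bitwise operators coerce operands via ToInt32: reduce mod 2^32,
// then reinterpret as a signed 32-bit integer. Silently.
console.log((2 ** 32 + 5) | 0); // → 5: the high bits just vanish
console.log((2 ** 31) | 0);     // → -2147483648: the top bit becomes the sign
console.log((2 ** 53) | 0);     // → 0: a perfectly representable Number, gone
```

So a value the Number type can hold exactly is still mangled the moment a bitwise operator touches it.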
I swear, it's a wonder anything works at all.