
8 years later, https://github.com/tc39/proposal-bigint#state-of-the-proposal
"BigInt has been shipped in Chrome and is underway in Node, Firefox, and Safari."
BigInt: Arbitrary precision integers in JavaScript:
BigInt is a new primitive that provides a way to represent whole numbers larger than 2^53, which is the largest number Javascript can reliably represent with the Number primitive.
But the tragic historical fact that until now, all Javascript numbers were floats means that it's impossible to implement this backward-compatibly without significant syntactic warts:
Many (all?) other dynamically typed programming languages which have multiple numeric types implement a numeric tower. This forms an ordering between types -- on the built-in numeric types, when an operator is used with operands from two types, the greater type is chosen as the domain, and the "less general" operand is cast to the "more general" type. Unfortunately, as the previous example shows, there is no "more general" type between arbitrary integers and double-precision floats. The typical resolution, then, is to take floats as the "more general" type.
Silently losing precision sometimes may be a problem, but in most dynamically typed programming languages which provide integers and floats, integers are written like 1 and floats are written like 1.0. It's possible to scan code for operations which may introduce floating point precision by looking for a decimal point. JavaScript exacerbates the scope of losing precision by making the unfortunate decision that a simple literal like 1 is a float. So, if mixed-precision were allowed, an innocent calculation such as
2n ** 53n + 1 would produce the float 2 ** 53 -- defeating the core functionality of this feature. To avoid this problem, this proposal bans implicit coercions between Numbers and BigInts, including operations which are mixed type.
1n + 1 throws a TypeError. So does passing 1n as an argument into any JavaScript standard library function or Web API which expects a Number. Instead, to convert between types, an explicit call to Number() or BigInt() needs to be made to decide which domain to operate in. 0 === 0n returns false.
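For the record, here's roughly how a current Chrome console behaves (my transcript, so take the exact error wording with a grain of salt):

> 1n + 1
× Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions
> Number(1n) + 1
2
> BigInt(1) + 1n
2n
> 0 === 0n
false
> 0 == 0n
true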
That's all well and good, but obviously the burning questions that I want the answers to are, "What is MOST-POSITIVE-BIGNUM, and how long does it take the Javascript console to print it?"
I wasn't able to figure out which of the several BigInt Javascript packages out there most closely resembles this spec, nor was I able to figure out which of them is the one being used by Chrome. But I assume that they are all representing the bits of the BigInt inside an array (whose cells are either uint32 or IEEE-754 floats) which means that the gating factor is the range of the integer representing the length of an array (which by ECMA is 2^32-1). So the answer is probably within spitting distance of either of two candidates, depending on how wide the cells are.
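Back of the envelope, assuming cells 32 or 64 bits wide and a cell count capped at 2^32 - 1 (my assumptions, not anything I've verified against an actual engine):

const maxCells = 2n ** 32n - 1n;   // ECMA's maximum array length
const bits32 = 32n * maxCells;     // ≈ 1.4e11 bits if the cells are uint32
const bits64 = 64n * maxCells;     // ≈ 2.7e11 bits if the cells are 64-bit
// so the ceiling would be somewhere near 2n ** bits32 - 1n or 2n ** bits64 - 1n,
// i.e. a number with roughly 40 or 80 billion decimal digits.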
As a first attempt, typing 10n ** 1223146n - 1n into Chrome's JS console made it go catatonic for a minute or so, but then it spat out a string of 1,223,146 nines. (Fortunately it truncated it rather than printing them all.) So that's bigger than the Explorer version!
> (2n ** 32n - 1n) ** (2n ** 32n - 1n)
× Uncaught RangeError: Maximum BigInt size exceeded
> (2n ** 32n - 1n) ** (2n ** 31n - 1n)
× Uncaught RangeError: Maximum BigInt size exceeded
> (2n ** 32n - 1n) ** (2n ** 30n - 1n)
... and maybe we have a winner, or at least a lower bound on the max?
That last one has made Chrome sit there chewing four CPU-hours so far, so it's either still doing the exponentiation, or it's trying to grind it to decimal. If it ever comes back with a result, I'll update this post...
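For scale, assuming I haven't botched the arithmetic, that last expression is a hair under 2^(32 × (2^30 - 1)):

const bits = 32n * (2n ** 30n - 1n);   // ≈ 3.4e10 bits, call it 4 GB of bignum
// ≈ 1e10 decimal digits (bits × log10 2), some 8,000 times longer than the
// 1.2-million-niner above -- and converting a bignum to decimal isn't even
// linear in the digit count, so "forever" is about right.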
Update: After 87 hours, I stopped waiting.
Previously, previously, previously, previously, previously, previously.
Needless to say,
1n/3n
is defined to give the wrong answer because it's 1957, and it will always be 1957. Whenever I get depressed because JS seems to have won despite its legion of horrors, I have to stop and remind myself, well, PHP could have won instead.
What does "it will always be 1957" mean?
It means FORTRAN, but not yet Lisp.
(Amusingly (I work in a Fortran shop), for Fortran itself the year is now well after 1957, and indeed it's well after 1970: the people who design Fortran understand that computers are not giant PDP-11s any more.)
Gotcha. Thank you!
For those of us who grew up writing PHP and are thus certifiably dumb, can you explain a little more what this means, and why 1n/3n is defined incorrectly?
Because 1/3 is not equal to 0.
Is this worse than C or is it 1957 most everywhere? Time to bring back Pascal?
(crunkit 600) $ cat /tmp/foo.c
#include <stdio.h>
int main() {
    double r = 1/3;
    printf ("%f\n", r);
}
(crunkit 601) $ gcc -Wall -o /tmp/foo /tmp/foo.c # Look Ma - No errors!
(crunkit 602) $ /tmp/foo
0.000000
There is a complicated relation between dates such that 1957 on big machines is the same as 1970 on small machines (which may be the same as 1980 on microprocessor-based machines). For C it will always be small-machine-1970.
I can't resist adding this story which happened to me, today. Intel have a C compiler which, they claim, gives very good performance on their hardware. One of the things it does is to cope with the fact that (because it is 1970) C doesn't have exponentiation by compiling various pow* functions inline (and a lot of other numerical functions, obviously). So the compiler knows the argument types of these things, since it's generating inline code for them. So, of course, if you add an incorrect declaration for these functions it's going to warn you about that, obviously, because it knows the real signature. But, because it is 1970 and the compiler is running on a machine which can execute some tens of thousands of instructions a second and probably can't store that many error messages anyway, that would be way too expensive to do: instead the thing completely silently generates code which can never produce the right answer.
We paid money for this compiler, I'm told.
I assume that what you mean by "1n/3n is defined to give the wrong answer" is that it is insane for a basic mathematical operator to default to loss of precision, and I agree.
Which makes me wonder: are there any languages subsequent to Common Lisp that had a fundamental "ratio" type? Or did that concept fall completely out of favor?
For those not in the know: In Common Lisp the numeric tower went: fixnum, bignum, ratio, float, complex. The result of integer division that would result in a remainder was always promoted to the next type up that would preserve information, so (/ 4 6) → a ratio object with numerator 2, denominator 3. If you wanted to convert that to a float, and lose information in doing so, since 2/3 can't be represented as an IEEE-754 float, you'd do that explicitly.
(This required the compiler to be pretty good at static type inferencing to figure out when a piece of code was actually doing integer math all the way through, but, it was.)
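For contrast, here's roughly what today's JavaScript does with the same division (a current Chrome console, give or take the exact float printout):

> 4 / 6
0.6666666666666666
> 4n / 6n
0n

The float quietly rounds and the BigInt division quietly truncates toward zero; neither will ever hand you 2/3.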
Yes, that's what I meant: given it was possible to coerce reasonable numerical performance from Lisp compilers in 1990 or before, and people have been working on compiler technology furiously since then, I can't see why the default should not be 'correct where possible', with the compiler working out where 'fast' was also possible but not sacrificing correctness for that unless you told it that was OK. Unfortunately the compiler technology people have been working on since 1990 seems to have been 'compile code written for a giant PDP-11 really well for machines which look nothing like PDP-11s'.
I don't know of post-CL languages with (what I would call) good numerical systems unless you count the billion Scheme variants. I am sure there must be some, I just have not been paying attention.
(This is all just 'worse is better' of course, and I think that war was lost long ago.)
It tickles me to no end to give you the answer and await your reaction: yes, there is a language like that – and its name is Perl 6.
(In fact this is used as a selling point.)
I once wrote a thing which not only debunked the silly 'Python is almost Lisp' thing that some famous Lisp hackers have claimed, but claimed in turn that if you want a language in the spirit of CL without actually being itself Lisp, then Perl is your best bet.
People from the language police visited my house shortly afterwards carrying cattle prods and all copies have been duly burnt.
I wrote a little while back,
But then someone reminded me that Emacs supposedly has lexical closures now and I don't even know what's real any more.
Languages with ratio-of-bignum types? The ones I can think of right now are Scheme, Haskell, and possibly Aldor. I'm actually surprised JavaScript didn't go that way, since any float can be represented precisely as a rational number.
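As a sketch of that last point (my code, nothing official, using nothing fancier than BigInt itself): every finite double is some integer divided by a power of two, so you can peel it apart exactly:

function floatToRatio(x) {
  if (!Number.isFinite(x)) throw new RangeError("finite floats only");
  let twos = 0n;
  while (!Number.isInteger(x)) { x *= 2; twos += 1n; }  // doubling a float is exact; at most ~1074 steps
  return { num: BigInt(x), den: 2n ** twos };           // x === num / den, exactly
}

floatToRatio(0.1)  // { num: 3602879701896397n, den: 36028797018963968n }, i.e. 0.1 is really 3602879701896397 / 2^55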
Julia!
Kill me now
Almost All Real Numbers are Normal.
It seems like a sound argument to say that having accepted that we can only represent Almost None of the reals, it's a very small sacrifice to give up on a proportion of this already vanishingly small subset for convenience of implementation.
I confess that we can use the same logic to conclude things will be a lot easier if we get rid of zero but - unlike 2/3 - zero is an additive identity, and you probably want one of those in your system of arithmetic, so that's an argument for it to stay.
Is that still true of the computable numbers, though?
Mmmmm...it’s easy to get Mathematica to do rational arithmetic, but I suspect that’s not really what you’re looking for...
Algebra systems pretty much have to have rationals, and really need arbitrarily-large integers (and therefore rationals with arbitrarily large numerators & denominators). That is, as far as I know, why Lisps often have these things: most of the interesting algebra systems were written in Lisp.
As of earlier today, GNU Emacs also has bignums. No word on MOST-POSITIVE-BIGNUM yet.
OH MY GOD
I have C code now that generates what I think is Emacs's new MOST-POSITIVE-BIGNUM, but it needs 16 GB and that's all the RAM I've got, so I've yet to find the patience to add one to it and let it churn through swap forever to discover whether (1+ most-positive-bignum) is negative...
......So the way you indicate that a numeric literal is not a Number is by appending an "n"?
Yeah I guess the n is for bigiNt. Good thing Javascript is case sensitive!
Geek bites chicken, news at 11.
tags: mutants, perversions