From: Josh M. <jos...@an...> - 2013-08-03 00:37:50
I realise that the case is closed and the jury has already rendered its verdict, but is there a reason we can't use 'z' (= integer) rather than 'n' (= natural number, non-negative integer) for a signed 32-bit int?

------------------------------------------
Josh Milthorpe
Postdoctoral Fellow, Research School of Computer Science
Australian National University, Building 108
Canberra, ACT 0200 Australia

Phone: +61 (0)2 61254478
Mobile: +61 (0)407 940743
E-mail: jos...@an...
Web: http://cs.anu.edu.au/~Josh.Milthorpe/

On 03/08/13 00:07, Jonathan Brezin wrote:
>
> Dave et al,
>
> As an application programmer mostly working in very weakly typed languages like JavaScript, I was very sceptical when I first started working in X10 -- how long ago? 5 or 6 years? It took a while, but I have become a real fan of being forced to say explicitly what each identifier is, either in terms of the one value it will ever have, or by writing out an honest, all-i's-dotted type expression. This makes me more sympathetic to the "no default" rule for literals than I might otherwise be.
>
> Looking back to my early computing days -- C and Bell Labs in the late 1970's and early 80's -- C's choice of being glib about what size things were ("int" meant "the natural-size integer for the machine in question") led to no end of #ifdefs as machines made the leap from 16- to 32-bit integers, which made code very easy to misread. C's semantics for numeric operators are still enormously complicated, as anyone who has looked at the literature on random number generators will agree: that forest of #ifdefs rears its ugly head solely for the sake of machine independence. Goodbye int, hello int32.
>
> So here we are again. 32 to 64 bits this time, but so what? What was "natural" a year ago is no longer "natural" now. The only difference between our situation and C's is the relatively modest amount of legacy code we are saddled with, and it's only literals that need tweaking. So, bottom line:
>
> I think that "n" is better than "i" precisely because mathematicians have used "i" differently for two hundred years, and "n" is a choice that is mathematically in good taste (natural numbers). I also think (and few will agree with me) that numeric precision should always be made explicit. I am not even all that pleased with float versus double: how long until long doubles are the precision of choice for HPC numeric problems, and are we going to go through this all over again?
>
> Jonathan
>
>
> Jonathan Brezin
> Research Staff Member
> IBM TJ Watson Research Labs
> 1101 KITCHAWAN RD
> ROUTE 134 / PO BOX 218
> YORKTOWN HEIGHTS NY 10598
>
> Phone: (914) 784-6728 (Tie: 863-6728)
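
[For readers outside the thread, a minimal sketch of the literal syntax under discussion, in X10 2.4 style; the variable names and the 'z' alternative shown in the comment are illustrative, not part of the adopted design:]

    // With the 2.4 change, unsuffixed integer literals default to 64-bit Long,
    // and 32-bit Int literals carry the adopted 'n' suffix.
    val big:Long = 42;     // 64-bit Long, no suffix needed
    val count:Int = 42n;   // 32-bit Int, 'n' suffix
    // The question above asks why the suffix could not be 'z' (i.e. 42z) instead of 'n'.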