From: Mike C. <MF...@uk...> - 2009-02-26 17:23:12
Just a couple of comments on David's points ...

> While I do not completely disagree with Mike on this issue there are
> some points I would like to make.
>
> 1. I would really like to know how many existing scripts are out there
> that would be impacted by the larger NUMERIC DIGITS setting for the
> 64-bit interpreter. I suspect, but have no proof at all, that we are
> not talking about a large number. Just how many scripts actually
> perform numeric-intensive operations? And of those, how many will be
> impacted? Again, I suspect not a large number.

This really isn't a matter of 'numerically intensive' scripts (although
those will do their arithmetic about 4x slower). It's about the
applications today that expect a short, readable result from a
computation that is then displayed to the end user. Ian Collier
illustrated this. Regarding readability, compare:

   say 1/7                  say 1/7
   0.142857143              0.142857142857142857142857142857142857143

To be honest, I really don't need to see that many digits unless I'm
doing something highly mathematical. Or:

   say 11**33               say 11**33
   2.32251544E+34           23225154419887808141001767796309131

I have absolutely no idea how "big" the right-hand number is just by
looking at it; it's not a useful answer to a human viewer.

> 2. A lot of work and a discussion went into this decision. Now is not
> the time to second-guess ourselves.

[I missed this discussion somehow.] But we are only talking about the
*default* for NUMERIC DIGITS. That won't change the way anything else
works (except some test cases, such as Ian's examples above).

> 3. While I am all for the "human readable" features in Rexx, in this
> case I believe the larger setting in a 64-bit environment is
> justified.

In that case, why not for the 32-bit environment too? With two
differing implementations, there will be two diverging sets of
applications (both 'working' on either environment) which will give
different results, formatting, appearance, etc.

> 4.
> While I don't want to second-guess the ANSI standard, it was
> developed at a time when 64-bit environments were not even on the
> radar. I believe this was at least a minor factor in setting the
> standard to 9.

Not really. Rexx was developed when mainframes were going through the
very painful 24-bit -> 31-bit upgrade, and 64-bit (and Y2K) coming
along was very much used as an example of how one must avoid that trap
again. Nine digits was mostly chosen because it was about as long a
number as one can display that is still readable/understandable. The
alternative is requiring some kind of format specification for every
computer -> human result, which discourages good programs.

> Last but not least, I would like to point out that any dependencies a
> script has on NUMERIC DIGITS being set at 9 can obviously be
> classified as a BAD programming practice when no check is made to
> determine the current setting. Rexx code is just like any other
> source code; it can be copied and reused over and again. Who knows
> where that code will be pasted? And under what setting it will be
> running? Bad programming is bad programming no matter how it gets
> created or used.

So every line of code must be prefixed by:

   if digits()\=9 then signal badprogrammer

I'm unconvinced :-). And what if someone copied part of an expression
from one program to another? If the language definition says (as, in
effect, it does) "you may assume DIGITS is 9 unless there is evidence
to the contrary", I think the programmer is not a bad programmer if he
or she makes that assumption.

[A valid criticism, on the other hand, might be that the language
should not have defaults. Walter Pachl has often said that SIGNAL ON
NOVALUE should be implicit (umm, default?). One could therefore argue
that all Rexx programs should start with a series of NUMERIC
instructions.]

Mike

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with
number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
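[Editorial note, not part of the original post: the readability
comparison Ian Collier gives above can be reproduced outside Rexx with
Python's decimal module, whose context precision plays the same role as
Rexx's NUMERIC DIGITS. This is only an illustrative sketch; the value
39 is chosen to match the width of the output shown in the post, not
any actual proposed 64-bit default.]

```python
# Sketch: mimic Rexx NUMERIC DIGITS using Python's decimal module.
# getcontext().prec limits significant digits, like NUMERIC DIGITS n.
from decimal import Decimal, getcontext

getcontext().prec = 9            # like: NUMERIC DIGITS 9
print(Decimal(1) / Decimal(7))   # 0.142857143
print(Decimal(11) ** 33)         # 2.32251544E+34

getcontext().prec = 39           # a hypothetical wide setting
print(Decimal(1) / Decimal(7))   # 0.142857142857142857142857142857142857143
print(Decimal(11) ** 33)         # 23225154419887808141001767796309131
```

The short results under precision 9 illustrate Mike's point: the wide
form of 11**33 gives a human reader no quick sense of the number's
magnitude, while the exponential form does.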