From: Tom C. <tom...@sa...> - 2005-06-02 13:48:15
In Vulcan, the SQLParse::yylex code doesn't handle very large numbers that could be represented as floating-point values. For example, in ISQL, inserting into a DOUBLE PRECISION column with statements like the following results in tokenization errors:

  insert into foo values ( 1.000000000000000000000000 );
  insert into foo values ( 1000000000000000000000000 );

This is essentially because the digits are accumulated in a 64-bit integer which is (correctly) predicted to overflow, resulting in a perceived precision error. Both values are easily representable as floating-point values, but the behavior of the lexer prevents this.

I'd like to make a change that recognizes insignificant trailing zeroes in an otherwise precise representation and processes the result as a FLOAT_NUMBER type. However, since this changes the way numbers are handled (at least at parse time) for these very large integers, I thought I'd better check with the community to make sure there isn't a legacy/standards/compatibility issue here that I need to know about.

The fix appears to work in testing, but if there are issues with numeric conversion or representation later at runtime, after parsing, that I should be testing for, I'd appreciate any pointers.

(A quick scan of the Firebird2 code seems to indicate that this same issue exists there as well, though I am far less familiar with that code base and may be misinterpreting its behavior from simple inspection.)

--
Tom Cole, Platform R&D, SAS Institute Inc.
"There are 10 kinds of people. Those who understand binary, and those who don't."
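For illustration, the overflow-predict-then-fall-back idea described above can be sketched in C roughly as follows. This is a hypothetical standalone sketch, not Vulcan's actual yylex code: the function name `scan_number` and the token names are made up, and the real lexer's digit-accumulation details will differ.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef enum { TOK_INTEGER, TOK_FLOAT_NUMBER, TOK_ERROR } token_t;

/* Hypothetical sketch: accumulate decimal digits into a 64-bit value,
 * predicting overflow before it happens. Instead of reporting a precision
 * error when the literal won't fit, fall back to re-reading the text as a
 * double and return a FLOAT_NUMBER token. */
token_t scan_number(const char *text, uint64_t *ival, double *dval)
{
    uint64_t acc = 0;
    bool overflow = false, seen_dot = false;

    for (const char *p = text; *p; p++) {
        if (*p == '.' && !seen_dot) { seen_dot = true; continue; }
        if (*p < '0' || *p > '9')
            return TOK_ERROR;        /* not a plain numeric literal */
        unsigned d = (unsigned)(*p - '0');
        /* Would acc * 10 + d exceed UINT64_MAX? Check without overflowing. */
        if (acc > (UINT64_MAX - d) / 10)
            overflow = true;
        if (!overflow)
            acc = acc * 10 + d;
    }

    if (!overflow && !seen_dot) {
        *ival = acc;                 /* fits the 64-bit integer path */
        return TOK_INTEGER;
    }
    /* Fractional, or too wide for 64 bits: treat as floating point. */
    *dval = strtod(text, NULL);
    return TOK_FLOAT_NUMBER;
}
```

With this sketch, `scan_number("1000000000000000000000000", &i, &d)` yields TOK_FLOAT_NUMBER with d near 1e24 rather than an error, mirroring the ISQL examples above.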