2011/5/23 Stephan Beal <sgbeal@googlemail.com>
On Mon, May 23, 2011 at 1:50 PM, Baptiste Lepilleur <baptiste.lepilleur@gmail.com> wrote:
I propose making the following changes:
1) isUInt() returns true only if asUInt() can succeed. Similar changes for isInt().
    I think this contract should be generalized, e.g. asUInt() should succeed if the value is a positive signed int Value.
2) Add new member functions:
   isUInt64() that returns true only if asUInt64() can succeed,
   and isInt64()...

If I understood your problem correctly, this should solve your backward compatibility issue. Is that correct? 

My proposal would be to remove the int/uint distinction entirely, since JavaScript and JSON do not distinguish between the two. In my own JSON code i simply use int64_t for all integers and double for doubles (though in JS there's just a single Number type, if i'm not mistaken). In my own use of JSON libs and JS engines (SpiderMonkey, QtScript, and Google v8), i've always found the distinction between signed/unsigned to be philosophically unsettling because JS doesn't natively have unsigned numbers.

In fact I think if we were just interested in supporting Javascript compatibility we could probably get away with supporting only double.

There was an intermediate version of JsonCpp where the only integer types were int64 / uint64, but this kind of change broke backward compatibility painfully, for example causing printf( "%d", value.asInt()) to behave in "interesting" ways.

Also, one of the aims of JsonCpp is use in configuration files. In that case you frequently need to deal with int / unsigned int types. If the only getters available were 64-bit integers, we would get precision-loss warnings all over the place.

Though if you use asUInt64() you can easily pretend that the other integer types do not exist (modulo the identified lack of an isUInt64() with the right semantics).
JSON, in fact, does not specify the precision of numeric values (http://www.ietf.org/rfc/rfc4627.txt?number=4627), probably because it would be impossible to guarantee that any given implementation (in umpteen[1] different programming languages) can honor it; e.g. a certain embedded environment might not have integers wider than 32 bits. Section 2.4 of the JSON RFC notably does not specify any limits on numbers, so 2^75 (when written in expanded integer form) is syntactically legal according to the grammar, but almost certainly semantically illegal in any modern computer.
These pages:


say that in JS integers are reliable up to "15-16" digits (53 bits). Thus int64_t can legally hold any JS-legal value.

Obviously, JSON is _not_ only for JavaScript, but i'm using JS as the baseline here because (A) JSON derives from JS and (B) it is primarily consumed by JS applications (though it is primarily generated by other technologies).

Actually, some parsers leverage language features that bypass hardware limitations. Python will happily parse a 2^75 integer. Support for the arbitrary-precision floating point type Decimal was even added at some point in Python. So there are clearly use cases out there where people just use JSON for serialization and don't limit themselves to what can be done portably.


[1] = American slang for "very many"