On Mon, Apr 25, 2011 at 22:08, Roy Stogner <roystgnr@ices.utexas.edu> wrote:
I don't know where 34 came from.

log10(2^(112+1)) ≈ 34.0


This is 16 bytes. It's quad precision, not quad-double.

128-bit float == 113 bit precision ~= 34 digit precision, no?

Of course, I somehow misread that as "34 bytes," which seemed like a strange number.

The trouble is that "long double" is usually just 80 bits stored in
12 or 16 bytes, and arithmetic is done with the x87 unit.  That's
pretty silly; __float128 works the way you want it to.

It's silly, but the justification is supposed to be that it's hardware
accelerated.  Any idea how much slower you get going 64->80->128 bit
on modern CPUs?  It's been years since I've used long double for
anything other than regression testing.

I'm not sure, but __float128 uses the SSE unit, which, unlike x87, hasn't been neglected for the last decade or more.