From: Marc P. <Mar...@on...> - 2002-11-27 13:15:01
I'm not sure this is a problem, but I'm looking for a solution and I wonder if someone could give me a piece of advice. I have a C extension using doubles and floats. I return a float cast to double to Python from my extension, and when I display it I get some extra digits at the end of the "correct" number.

In the extension, dgv is a float (in this example dgv = 0.1):

    PyTuple_SET_ITEM(tp0, i, PyFloat_FromDouble((double)dgv));

I print it in Python:

    print tuple[0]

which produces:

    0.10000000149

I get too many digits, because print should not try to extract more precision than the 4-byte float actually carries. It looks like floatobject.c sets a fixed precision for printing, hard-coded to 12:

    #define PREC_STR 12

This works if you use a "double", but not a "double" cast from a "float". The problem occurs on both SGI and DEC.

With stdio:

    printf("%.g\n",  (float) dgv);
    printf("%.g\n",  (double)dgv);
    printf("%.12g\n",(float) dgv);
    printf("%.12g\n",(double)dgv);

produces (this is CORRECT behavior for printf; we're simply printing too many digits):

    0.1
    0.1
    0.10000000149
    0.10000000149

Any ideas? How can I tell Python to forget that precision, or set it globally?

Marcvs [alias Yes, I could compute only with integers, but...]
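P.S. The only workaround I have found so far is to format explicitly on the Python side instead of relying on str()'s fixed 12-digit precision. A minimal sketch (assuming the value really originated as a 4-byte IEEE float, which carries roughly 7 significant decimal digits):

    # v is what double((float)0.1) looks like once it reaches Python
    v = 0.10000000149011612

    # Ask for no more digits than the original float can hold;
    # %g strips trailing zeros, so the conversion noise disappears.
    print "%.7g" % v      # -> 0.1

    # For comparison, str() effectively does "%.12g", hence the noise:
    print "%.12g" % v     # -> 0.10000000149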