From: A. M. A. <per...@gm...> - 2006-11-13 03:18:44
On 12/11/06, Erin Sheldon <eri...@gm...> wrote:
> Actually, there is a problem with that approach. It first converts
> the entire array to a single type, by default a floating type. For
> very large integers this precision is insufficient. For example, I
> have the following integer in my arrays:
> 94137100072000193L
> which ends up as
> 94137100072000192
> after going to a float and then back to an integer.

That's an unfortunate limitation of numpy; it views double-precision
floats as higher precision than 64-bit integers, but of course they
aren't.

If you want to put all your data in a record array, you could try
transposing the lists using a list comprehension - numpy is not always
as much faster than pure python as it looks. You could then convert
that to a list of four arrays and do the assignment as appropriate.

Alternatively, you could convert your array into a higher-precision
floating-point format (if one is available on your machine) before
transposing and storing in a record array.

A. M. Archibald
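To make the precision issue concrete, here is a minimal demonstration of the round-trip Erin describes. A double has a 53-bit significand, so integers above 2**53 get rounded to the nearest representable value:

```python
import numpy as np

n = 94137100072000193
# Going through float64 and back loses the low bits: at this magnitude
# the spacing between representable doubles is 16, so the value is
# rounded to the nearest multiple of 16.
back = int(np.int64(np.float64(n)))
print(back)  # 94137100072000192
```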
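The transpose-then-assign approach could look something like the following sketch. The shape of the data (one big-integer column plus three float columns) and the field names are assumptions for illustration; the point is that each column is assigned to the record array separately, so the integer column never passes through a float:

```python
import numpy as np

# Hypothetical rows: (big integer id, three float measurements).
rows = [(94137100072000193, 1.0, 2.0, 3.0),
        (94137100072000194, 4.0, 5.0, 6.0)]

# Transpose the list of rows into a list of columns with a plain
# list comprehension, keeping each column's native Python type.
cols = [[row[i] for row in rows] for i in range(len(rows[0]))]

rec = np.empty(len(rows),
               dtype=[('id', np.int64), ('x', np.float64),
                      ('y', np.float64), ('z', np.float64)])
rec['id'] = cols[0]   # assigned directly as integers; no float round-trip
rec['x'], rec['y'], rec['z'] = cols[1], cols[2], cols[3]

print(rec['id'][0])  # 94137100072000193
```

Because the integer column is a plain Python list when it is assigned into the int64 field, the full 64-bit value survives intact.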