From: Tim H. <tim...@ie...> - 2006-11-09 21:48:52
Robert Kern wrote:
> Fernando Perez wrote:
>> I understand why this happens, but I wonder if it should be in any way
>> 'fixed' (if that is even feasible without introducing other problems):
>>
>> In [28]: x = 999999
>>
>> In [29]: y = numpy.array([x])
>>
>> In [30]: z = y[0]
>>
>> In [31]: x==z
>> Out[31]: True
>>
>> In [32]: x
>> Out[32]: 999999
>>
>> In [33]: z
>> Out[33]: 999999
>>
>> In [34]: x*x
>> Out[34]: 999998000001L
>>
>> In [35]: z*z
>> Warning: overflow encountered in long_scalars
>> Out[35]: -729379967
>>
>> I am sure it will be, to say the least, pretty surprising (and I can
>> imagine subtle bugs being caused by this).
>
> I think we decided long ago that an int32 really is an array of 32-bit
> integers and behaves like one. That's precisely why one uses int32 arrays
> rather than object arrays. There are algorithms that do need the
> wraparound, and the Python int behavior is always available through
> object arrays.

Let me add that I can't imagine that the bugs will be all that subtle given
that numpy now spits out a warning on overflow. If you're really worried
about this I suggest you crank up the error mode to make this an error -
then you really won't be able to miss this kind of overflow.

-tim
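For the archives, here is a minimal sketch of the behavior above and of "cranking up the error mode" with `numpy.errstate`. It assumes a current NumPy where `z` must be forced to `np.int32` explicitly, since `numpy.array([x])[0]` on a 64-bit platform would typically give an `int64` scalar that does not overflow at this magnitude:

```python
import numpy as np

x = 999999
z = np.int32(x)  # force a 32-bit scalar, matching the int32 case discussed

# Python ints are arbitrary precision, so x*x is exact:
print(x * x)  # 999998000001

# z*z overflows 32 bits and wraps around modulo 2**32; by default NumPy
# only emits an overflow warning, and the session above shows the wrapped
# signed result -729379967.

# "Crank up the error mode" so the overflow cannot be missed:
with np.errstate(over='raise'):
    try:
        z * z
    except FloatingPointError as e:
        print('overflow caught:', e)
```

The same effect can be made global with `np.seterr(over='raise')`; the context-manager form keeps the stricter mode scoped to the code being checked.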