From: Ivan V. i B. <iv...@ca...> - 2006-05-23 06:49:23
Attachments:
signature.asc
(I'm sending this again because I'm afraid the previous post may have qualified as spam because of its subject. Sorry for the inconvenience.)

Hi all, when working with numexpr, I have come across a curiosity in both numarray and numpy::

    In [30]: b = numpy.array([1,2,3,4])
    In [31]: b ** -1
    Out[31]: array([1, 0, 0, 0])
    In [32]: 4 ** -1
    Out[32]: 0.25

According to http://docs.python.org/ref/power.html:

    For int and long int operands, the result has the same type as the
    operands (after coercion) unless the second argument is negative; in
    that case, all arguments are converted to float and a float result is
    delivered.

Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])``
(i.e. a floating point result)? Is this behaviour intentional? (I googled
for previous messages on the topic but I didn't find any.)

Thanks,

::
    Ivan Vilata i Balaguer  >qo<  http://www.carabos.com/
    Cárabos Coop. V.        V  V  Enjoy Data
                            ""
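For reference, the behaviour asked about can be reproduced and worked around as sketched below. Note that this is a sketch against a modern NumPy, which refuses integer-to-negative-integer powers outright (raising an error instead of returning truncated integers), so the base is converted to float explicitly:

```python
import numpy as np

b = np.array([1, 2, 3, 4])

# Converting the base to float first gives the float result that the
# Python reference describes for negative exponents.
result = b.astype(float) ** -1
print(result)  # approximately [1.0, 0.5, 0.33333333, 0.25]
```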
From: Pujo A. <aj...@gm...> - 2006-05-23 06:57:15
Use 'f' to tell numpy that its array element type is float::

    b = numpy.array([1,2,3,4], 'f')

An alternative is to put a dot after each number::

    b = numpy.array([1., 2., 3., 4.])

Hopefully this solves your problem.

Cheers,
pujo

On 5/23/06, Ivan Vilata i Balaguer <iv...@ca...> wrote:
> [...]
> Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])``
> (i.e. a floating point result)? Is this behaviour intentional? (I
> googled for previous messages on the topic but I didn't find any.)
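Both suggestions can be checked in one short sketch (dtype names assume a standard NumPy build):

```python
import numpy as np

b32 = np.array([1, 2, 3, 4], 'f')   # 'f' requests single-precision floats
b64 = np.array([1., 2., 3., 4.])    # float literals give float64

# With float elements, ** -1 no longer truncates to integers.
print(b32.dtype)   # float32
print(b64 ** -1)   # 1.0, 0.5, 0.33333333..., 0.25
```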
From: Ivan V. i B. <iv...@ca...> - 2006-05-23 07:26:24
Attachments:
signature.asc
Pujo Aji wrote::

> use 'f' to tell numpy that its array element is a float type:
>     b = numpy.array([1,2,3,4], 'f')
>
> an alternative is to put dot after the number:
>     b = numpy.array([1., 2., 3., 4.])

You're right, but according to the Python reference docs, having an integer
base and a negative integer exponent should still return a floating point
result, without the need to convert the base to floating point beforehand.

I wonder if the numpy/numarray behaviour is based on some implicit policy
stating that operating on integers with integers should always return
integers, for return type predictability, or something like that. Could
someone please shed some light on this? Thanks!

Pujo Aji wrote::

> On 5/23/06, *Ivan Vilata i Balaguer* <iv...@ca...> wrote:
> [...]
> According to http://docs.python.org/ref/power.html:
>
>     For int and long int operands, the result has the same type as the
>     operands (after coercion) unless the second argument is negative;
>     in that case, all arguments are converted to float and a float
>     result is delivered.
>
> Then, shouldn't ``b ** -1`` be ``array([1.0, 0.5, 0.33333333, 0.25])``
> (i.e. a floating point result)? Is this behaviour intentional? (I
> googled for previous messages on the topic but I didn't find any.)

::
    Ivan Vilata i Balaguer  >qo<  http://www.carabos.com/
    Cárabos Coop. V.        V  V  Enjoy Data
                            ""
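The integer-in, integer-out policy wondered about here is easy to observe by inspecting result dtypes. A sketch against a modern NumPy (the exact integer width is platform-dependent, and true division has since become the one exception that always promotes to float):

```python
import numpy as np

a = np.array([1, 2, 3, 4])

# Arithmetic between integer arrays keeps an integer dtype...
print((a + a).dtype.kind)   # 'i'
print((a * a).dtype.kind)   # 'i'

# ...while true division promotes to float regardless of the values.
print((a / 2).dtype)        # float64
```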
From: Pujo A. <aj...@gm...> - 2006-05-23 08:07:53
Numpy optimizes the Python process by explicitly defining the element type
of the array, just like C++.

Python lets you work with automatic conversion... but it slows down the
process, like having extra code checking the type of each array element.

I suggest you check the numpy reference instead of the Python reference
when using numpy.

Sincerely yours,
pujo

On 5/23/06, Ivan Vilata i Balaguer <iv...@ca...> wrote:
> [...]
> I wonder if the numpy/numarray behaviour is based on some implicit policy
> stating that operating on integers with integers should always return
> integers, for return type predictability, or something like that. Could
> someone please shed some light on this? Thanks!
From: Alan G I. <ai...@am...> - 2006-05-23 11:54:42
On Tue, 23 May 2006, Pujo Aji apparently wrote:
> I suggest you check the numpy reference instead of the Python
> reference when using numpy.

http://www.scipy.org/Documentation

fyi,
Alan Isaac
From: Ivan V. i B. <iv...@ca...> - 2006-05-23 12:51:36
Attachments:
signature.asc
Pujo Aji wrote::

> Numpy optimizes the Python process by explicitly defining the element
> type of the array, just like C++.
>
> Python lets you work with automatic conversion... but it slows down
> the process, like having extra code checking the type of each array
> element.
>
> I suggest you check the numpy reference instead of the Python
> reference when using numpy.

OK, I see that predictability of the type of the output result matters. ;)

Besides that, I've been told that, according to the manual, power() (as
every other ufunc) uses its ``types`` member to find out the type of the
result, depending only on the types of its arguments. It makes sense to
avoid checking for particular values in possibly large arrays, for
efficiency, as you point out.

I expected Python-like behaviour, but I understand this is not the most
appropriate thing to do for a high-performance package (though I was not
able to find this out from the public docs).

Thanks for your help,

::
    Ivan Vilata i Balaguer  >qo<  http://www.carabos.com/
    Cárabos Coop. V.        V  V  Enjoy Data
                            ""
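The ``types`` member mentioned above can be inspected on any ufunc. A quick sketch (the exact list of signatures varies between NumPy versions and platforms):

```python
import numpy as np

# Each entry maps input type characters to the output type character,
# e.g. 'dd->d': the result type of a ufunc depends only on the argument
# types, never on their values -- which is why b ** -1 stayed integral
# for an integer b.
for sig in np.power.types:
    print(sig)
```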