From: eric j. <er...@en...> - 2002-06-10 20:10:37
|
> We have certainly beaten this topic to death in the past. It keeps
> coming up because there is no good way around it.
>
> Two points about the x + 1.0 issue:
>
> 1. How often this occurs is really a function of what you are doing.
> For those using Numeric Python as a kind of MATLAB clone, who are
> typing interactively, the size issue is of less importance and the
> easy expression is of more importance. To those writing scripts to
> batch process or writing steered applications, the size issue is more
> important and the easy expression less important. I'm using words
> like less and more here because both issues matter to everyone at
> some time, it is just a question of relative frequency of concern.
>
> 2. Part of what I had in mind with the kinds module proposal PEP 0242
> was dealing with the literal issue. There had been some proposals to
> make literals decimal numbers or rationals, and that got me thinking
> about how to defend myself if they did it, and also about the fact
> that Python doesn't have Fortran's kind concept, which you can use to
> gain a more platform-independent calculation.
>
> From the PEP, this example. In module myprecision.py:
>
>     import kinds
>     tinyint = kinds.int_kind(1)
>     single = kinds.float_kind(6, 90)
>     double = kinds.float_kind(15, 300)
>     csingle = kinds.complex_kind(6, 90)
>
> In the rest of my code:
>
>     from myprecision import tinyint, single, double, csingle
>     n = tinyint(3)
>     x = double(1.e20)
>     z = 1.2
>     # builtin float gets you the default float kind, properties unknown
>     w = x * float(x)
>     # but in the following case we know w has kind "double".
>     w = x * double(z)
>
>     u = csingle(x + z * 1.0j)
>     u2 = csingle(x+z, 1.0)
>
> Note how that entire code can then be changed to a higher precision
> by changing the arguments in myprecision.py.
>
> Comment: note that you aren't promised that single != double; but you
> are promised that double(1.e20) will hold a number with 15 decimal
> digits of precision and a range up to 10**300, or that the float_kind
> call will fail.

I think this is a nice feature, but it's actually heading the opposite
direction of where I'd like to see things go for the general use of
Numeric. Part of Python's appeal for me is that I don't have to
specify types everywhere. I don't want to write explicit casts
throughout equations because it munges up their readability. Of
course, the casting sometimes can't be helped, but Numeric's current
behavior really forces this explicit casting for array types besides
double, int, and double complex. I like Numarray's fix for this
problem. Also, as Perry noted, it's unlikely to be used as an everyday
command-line tool (like MATLAB) if the verbose casting is required.

I'm interested to learn what other drawbacks y'all found with always
returning arrays (0-d for scalars) from Numeric functions. Konrad
mentioned the tuple-parsing issue in some extension libraries that
expect floats, but it sounds like Travis thinks this is no longer an
issue. Are there others?

eric

_______________________________________________
Numpy-discussion mailing list
Num...@li...
https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Perry G. <pe...@st...> - 2002-06-10 20:07:01
|
<Eric Jones writes>:
> I further believe that all Numeric functions (sum, product, etc.)
> should return arrays all the time instead of implicitly converting
> them to Python scalars in special cases such as reductions of 1d
> arrays. I think the only reason for the silent conversion is that
> Python lists only allow integer values for use in indexing so that:
>
>     >>> a = [1,2,3,4]
>     >>> a[array(0)]
>     Traceback (most recent call last):
>       File "<stdin>", line 1, in ?
>     TypeError: sequence index must be integer
>
> Numeric arrays don't have this problem:
>
>     >>> a = array([1,2,3,4])
>     >>> a[array(0)]
>     1
>
> I don't think this alone is a strong enough reason for the
> conversion. Getting rid of special cases is more important because
> it makes behavior predictable to the novice (and expert), and it is
> easier to write generic functions and be sure they will not break a
> year from now when one of the special cases occurs.
>
> Are there other reasons why scalars are returned?

Well, sure. It isn't just indexing lists directly, it would be
anywhere in Python that you would use a number. In some contexts, the
right thing may happen (where the function knows to try to obtain a
simple number from an object), but then again, it may not (if calling
a function where the number is used directly to index or slice). Here
is another case where good arguments can be made for both sides. It
really isn't an issue of functionality (one can write methods or
functions to do what is needed); it's what the convenient syntax does.

For example, if we really want a Python scalar but rank-0 arrays are
always returned, then something like this may be required:

    >>> x = arange(10)
    >>> a = range(10)
    >>> a[scalar(x[2])]   # instead of a[x[2]]

Whereas if simple indexing returns a Python scalar and consistency is
desired in always having arrays returned, one may have to do something
like this:

    >>> y = x.indexAsArray(2)   # instead of y = x[2]

or perhaps

    >>> y = x[ArrayAlwaysAsResultIndexObject(2)]
    >>> # :-) with a better name, of course

One context or the other is going to be inconvenienced, but not
prevented from doing what is needed. As long as Python scalars are the
'biggest' type of their kind, we strongly lean towards single elements
being converted into Python scalars. It's our feeling that there are
more surprises and gotchas, particularly for more casual users, on
this side than on the uncertainty of an index returning an array or
scalar. People writing code that expects to deal with uncertain
dimensionality (the only place that this occurs) should be the ones to
go the extra distance in more awkward syntax.

Perry
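The trade-off Perry describes — a rank-0 object that cannot be used where Python wants a plain number — later got a general mechanism in CPython itself: PEP 357 (Python 2.5) added the `__index__` hook precisely so that array scalars could index lists. A toy sketch of the mechanism in modern Python (the class is illustrative only, not Numeric code):

```python
class Rank0Int:
    """Toy stand-in for a rank-0 integer array."""

    def __init__(self, value):
        self.value = value

    def __index__(self):
        # Lets list/tuple indexing and slicing accept this object
        # (PEP 357, Python 2.5+).
        return self.value

    def __float__(self):
        # Lets float-consuming code (e.g. math.sqrt, or C code parsing
        # a "d" argument) accept it as well.
        return float(self.value)


a = list(range(10))
i = Rank0Int(2)
print(a[i])    # indexes like a plain int: prints 2
print(a[i:5])  # even works in slices: prints [2, 3, 4]
```

With this hook in place, the "awkward syntax" burden Perry mentions largely disappears for indexing, though it did not exist at the time of this discussion.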
From: Perry G. <pe...@st...> - 2002-06-10 19:06:33
|
<Paul Dubois writes>:
> We have certainly beaten this topic to death in the past. It keeps
> coming up because there is no good way around it.

Ain't that the truth.

> Two points about the x + 1.0 issue:
>
> 1. How often this occurs is really a function of what you are doing.
> For those using Numeric Python as a kind of MATLAB clone, who are
> typing interactively, the size issue is of less importance and the
> easy expression is of more importance. To those writing scripts to
> batch process or writing steered applications, the size issue is more
> important and the easy expression less important. I'm using words
> like less and more here because both issues matter to everyone at
> some time, it is just a question of relative frequency of concern.

We have many in the astronomical community that use IDL (instead of
MATLAB), and for them size is an issue for interactive use. They often
manipulate very large arrays interactively. Furthermore, many are
astronomers who don't generally see themselves as programmers, and who
may write programs (perhaps not great programs) but don't want to be
bothered by such details even in a script (or they may want to read a
"professional" program and not have to deal with such things). But you
are right in that there is no solution that doesn't have some
problems.

Every array language deals with this in somewhat different ways, I
suspect. In IDL, the literals are generally smaller types (ints were
(or used to be, I haven't used it myself in a while) 2 bytes, floats
single precision) and there were ways of writing literals with higher
precision (e.g., 2L, 2.0d-2). Since it was a language specifically
intended to deal with numeric processing, supporting many scalar types
made sense.

Perry
From: Paul F D. <pa...@pf...> - 2002-06-10 18:19:37
|
We have certainly beaten this topic to death in the past. It keeps
coming up because there is no good way around it.

Two points about the x + 1.0 issue:

1. How often this occurs is really a function of what you are doing.
For those using Numeric Python as a kind of MATLAB clone, who are
typing interactively, the size issue is of less importance and the
easy expression is of more importance. To those writing scripts to
batch process or writing steered applications, the size issue is more
important and the easy expression less important. I'm using words like
less and more here because both issues matter to everyone at some
time, it is just a question of relative frequency of concern.

2. Part of what I had in mind with the kinds module proposal PEP 0242
was dealing with the literal issue. There had been some proposals to
make literals decimal numbers or rationals, and that got me thinking
about how to defend myself if they did it, and also about the fact
that Python doesn't have Fortran's kind concept, which you can use to
gain a more platform-independent calculation.

From the PEP, this example. In module myprecision.py:

    import kinds
    tinyint = kinds.int_kind(1)
    single = kinds.float_kind(6, 90)
    double = kinds.float_kind(15, 300)
    csingle = kinds.complex_kind(6, 90)

In the rest of my code:

    from myprecision import tinyint, single, double, csingle
    n = tinyint(3)
    x = double(1.e20)
    z = 1.2
    # builtin float gets you the default float kind, properties unknown
    w = x * float(x)
    # but in the following case we know w has kind "double".
    w = x * double(z)

    u = csingle(x + z * 1.0j)
    u2 = csingle(x+z, 1.0)

Note how that entire code can then be changed to a higher precision by
changing the arguments in myprecision.py.

Comment: note that you aren't promised that single != double; but you
are promised that double(1.e20) will hold a number with 15 decimal
digits of precision and a range up to 10**300, or that the float_kind
call will fail.
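The contract Paul describes can be mimicked in a few lines of present-day Python: ask for a minimum precision and exponent range, and get back a converter or an error. This is only an illustrative sketch of the PEP 242 promise — the proposed `kinds` module was never added to the standard library, and the function body below is an assumption, not its API — using `sys.float_info` to describe the platform double:

```python
import sys


def float_kind(prec, exp):
    """Return a float kind with at least `prec` decimal digits of
    precision and range up to 10**`exp`, or raise.

    Sketch of PEP 242's float_kind promise; not the proposed module.
    """
    # sys.float_info describes the platform's C double (15 digits,
    # range 10**308 on IEEE-754 systems).
    if prec <= sys.float_info.dig and exp <= sys.float_info.max_10_exp:
        return float  # the platform double satisfies the request
    raise OverflowError(
        "no float kind with %d digits and range 10**%d" % (prec, exp))


double = float_kind(15, 300)  # succeeds on IEEE-754 platforms
x = double(1.e20)
```

A request the platform cannot honor, e.g. `float_kind(50, 300)`, raises instead of silently degrading — which is exactly the guarantee Paul's "Comment" paragraph describes.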
From: Travis O. <oli...@ie...> - 2002-06-10 18:12:51
|
On Mon, 2002-06-10 at 11:08, Konrad Hinsen wrote:
> "eric jones" <er...@en...> writes:
>
> > I think the only reason for the silent conversion is that Python
> > lists only allow integer values for use in indexing so that:
>
> There are some more cases where the type matters. If you call C
> routines that do argument parsing via PyArg_ParseTuple and expect a
> float argument, a rank-0 float array will raise a TypeError. All the
> functions from the math module work like that, and of course many in
> various extension modules.

Actually, the code in PyArg_ParseTuple asks the object it gets if it
knows how to be a float. 0-d arrays have for some time known how to be
Python floats. So, I do not think this error occurs as you've
described. Could you demonstrate this error?

In fact, most of the code in Python itself which needs scalars allows
arbitrary objects, provided the object has defined functions which
return a Python scalar. The only exception to this that I've seen is
the list indexing code (probably for optimization purposes). There
could be more places, but I have not found them or heard of them.

Originally Numeric arrays did not define appropriate functions for 0-d
arrays to act like scalars in the right places. For quite a while now,
they have. I'm quite supportive of never returning Python scalars from
Numeric array operations unless specifically requested (e.g. the
toscalar method).

> > On coercion rules:
> >
> > As for adding the array to a scalar value,
> >
> >     x = array([3., 4.], Float32)
> >     y = x + 1.
> >
> > Should y be a Float or a Float32? I like numarray's coercion rules
> > better (Float32). I have run into this upcasting too many times to
> > count.
>
> Statistically they probably give the desired result in more cases.
> But they are in contradiction to Python principles, and consistency
> counts a lot on my value scale.
>
> I propose an experiment: ask a few Python programmers who are not
> using NumPy what type they would expect for the result. I bet that
> not a single one would answer "Float32".

I'm not sure I agree with that at all. On what reasoning is that
presumption based? If I encounter a Python object that I'm unfamiliar
with, I don't presume to know how it will define multiplication.
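Travis's claim — that float-consuming code asks the object to convert itself — is easy to check from pure Python: the `math` functions go through the same conversion that `PyArg_ParseTuple`'s `"d"` format uses, which falls back to the object's `__float__`. A minimal demonstration with a toy class (not a 0-d array):

```python
import math


class FloatLike:
    """Any object defining __float__ is accepted where C code parses a
    double -- this is the hook that lets 0-d arrays act like floats."""

    def __float__(self):
        return 4.0


print(math.sqrt(FloatLike()))  # prints 2.0; no TypeError is raised
```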
From: Konrad H. <hi...@cn...> - 2002-06-10 17:12:40
|
"eric jones" <er...@en...> writes:

> How about making indexing (not slicing) arrays *always* return a 0-D
> array with copy instead of "view" semantics? This is nearly
> equivalent to creating a new scalar type, but without requiring
> major changes. I
...

I think this was discussed as well a long time ago. For pure Python
code, this would be a very good solution. But

> I think the only reason for the silent conversion is that Python
> lists only allow integer values for use in indexing so that:

There are some more cases where the type matters. If you call C
routines that do argument parsing via PyArg_ParseTuple and expect a
float argument, a rank-0 float array will raise a TypeError. All the
functions from the math module work like that, and of course many in
various extension modules.

In the ideal world, there would not be any distinction between scalars
and rank-0 arrays. But I don't think we'll get there soon.

> On coercion rules:
>
> As for adding the array to a scalar value,
>
>     x = array([3., 4.], Float32)
>     y = x + 1.
>
> Should y be a Float or a Float32? I like numarray's coercion rules
> better (Float32). I have run into this upcasting too many times to

Statistically they probably give the desired result in more cases. But
they are in contradiction to Python principles, and consistency counts
a lot on my value scale.

I propose an experiment: ask a few Python programmers who are not
using NumPy what type they would expect for the result. I bet that not
a single one would answer "Float32".

> On the other hand, I don't think a jump from 21 to 22 is enough of a
> jump to make such a change. Numeric progresses pretty fast, and users

I don't think any increase in version number is enough for
incompatible changes. For many users, NumPy is just a building block;
they install it because some other package(s) require it. If a new
version breaks those other packages, they won't be happy. The authors
of those packages won't be happy either, as they will get the angry
letters.

As an author of such packages, I am speaking from experience. I have
even considered making my own NumPy distribution under a different
name, just to be safe from changes in NumPy that break my code (in the
past it was mostly the installation code that was broken when
arrayobject.h changed its location).

In my opinion, anything that is not compatible with Numeric should not
be called Numeric.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
From: eric j. <er...@en...> - 2002-06-10 00:18:30
|
> > If you are proposing something like
> >
> >     y = x + Float32(1.)
> >
> > it would work, but it sure leads to some awkward expressions.
>
> Yes, that's what I am proposing. It's no worse than what we have now,
> and if writing Float32 a hundred times is too much effort, an
> abbreviation like f = Float32 helps a lot.
>
> Anyway, following the Python credo "explicit is better than
> implicit", I'd rather write explicit type conversions than have
> automagical ones surprise me.

How about making indexing (not slicing) arrays *always* return a 0-D
array with copy instead of "view" semantics? This is nearly equivalent
to creating a new scalar type, but without requiring major changes. I
think it is probably even more useful for writing generic code because
the returned value will retain array behavior. Also, the following
example

    a = array([1., 2.], Float)
    b = array([3., 4.], Float32)

    a[0]*b

would now return a Float array, as Konrad desires, because a[0] is a
Float array. Using copy semantics would fix the unexpected behavior
reported by Larry that kicked off this discussion. Slices are a
different animal than indexing that would (and definitely should)
continue to return view semantics.

I further believe that all Numeric functions (sum, product, etc.)
should return arrays all the time instead of implicitly converting
them to Python scalars in special cases such as reductions of 1d
arrays. I think the only reason for the silent conversion is that
Python lists only allow integer values for use in indexing so that:

    >>> a = [1,2,3,4]
    >>> a[array(0)]
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    TypeError: sequence index must be integer

Numeric arrays don't have this problem:

    >>> a = array([1,2,3,4])
    >>> a[array(0)]
    1

I don't think this alone is a strong enough reason for the conversion.
Getting rid of special cases is more important because it makes
behavior predictable to the novice (and expert), and it is easier to
write generic functions and be sure they will not break a year from
now when one of the special cases occurs.

Are there other reasons why scalars are returned?

On coercion rules:

As for adding the array to a scalar value,

    x = array([3., 4.], Float32)
    y = x + 1.

Should y be a Float or a Float32? I like numarray's coercion rules
better (Float32). I have run into this upcasting too many times to
count. Explicit and implicit aren't obvious to me here. The user
explicitly cast x to be Float32, but because of the limited numeric
types in Python, the result is upcast to a double.

Here's another example:

    >>> from Numeric import *
    >>> a = array((1,2,3,4), UnsignedInt8)
    >>> left_shift(a,3)
    array([ 8, 16, 24, 32],'i')

I had to stare at this for a while when I first saw it before I
realized the integer value 3 upcast the result to be type 'i'. So, I
think this is confusing and rarely the desired behavior. The fact that
this is inconsistent with Python's "always upcast" rule is minor for
me. The array math operations are necessarily a different animal from
scalar operations because of the extra types supported. Defining these
operations in a way that is most convenient for working with array
data seems OK.

On the other hand, I don't think a jump from 21 to 22 is enough of a
jump to make such a change. Numeric progresses pretty fast, and users
don't expect such a major shift in behavior. I do think, though, that
the computational speed issue is going to result in numarray and
Numeric existing side-by-side for a long time. Perhaps we should
create an "interim" Numeric version (maybe starting at 30) that tries
to be compatible with the upcoming numarray in its coercion rules,
etc. Advanced features such as indexing arrays with arrays,
memory-mapped arrays, floating point exception behavior, etc. won't be
there, but it should help people transition their codes to work with
numarray, and also offer a speedy alternative.

A second choice would be to make SciPy's Numeric implementation the
intermediate step. It already produces NaN's during div-by-zero
exceptions according to numarray's rules. The coercion modifications
could also be incorporated.

> Finally, we can always lobby for inclusion of the new scalar types
> into the core interpreter, with a corresponding syntax for literals,
> but it would sure help if we could show that the system works and
> suffers only from the lack of literals.

There was a seriously considered debate last year about unifying
Python's numeric model into a single type to get rid of the
integer-float distinction, at last year's Python conference and the
ensuing months. While it didn't (and won't) happen, I'd be real
surprised if the general community would welcome us suggesting
stirring yet another type into the brew. Can't we make 0-d arrays work
as an alternative?

eric
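The two coercion policies eric contrasts can be sketched abstractly. The type names, ranks, and function names below are illustrative only (neither Numeric's nor numarray's actual API); the point is just the decision rule each side is arguing for:

```python
# Model each type by its "kind" (int vs. float) and a promotion rank.
RANK = {"uint8": 1, "int32": 2, "float32": 3, "float64": 4}
KIND = {"uint8": "int", "int32": "int",
        "float32": "float", "float64": "float"}


def numeric_result(array_t, scalar_t):
    # Numeric/Python-style rule: always promote to the "bigger" type,
    # so a Python float (a double) upcasts a Float32 array.
    return array_t if RANK[array_t] >= RANK[scalar_t] else scalar_t


def numarray_result(array_t, scalar_t):
    # numarray-style rule: a Python scalar of the same kind never
    # upcasts the array; the array's type wins.
    if KIND[array_t] == KIND[scalar_t]:
        return array_t
    return numeric_result(array_t, scalar_t)


# x = array([3., 4.], Float32); y = x + 1.
print(numeric_result("float32", "float64"))   # float64 (current Numeric)
print(numarray_result("float32", "float64"))  # float32 (eric's preference)

# a = array((1,2,3,4), UnsignedInt8); left_shift(a, 3)
print(numeric_result("uint8", "int32"))       # int32: the surprising 'i'
print(numarray_result("uint8", "int32"))      # uint8
```

Under both rules a scalar of a *different* kind still promotes (an int array plus a float scalar gives a float result); the dispute is only about same-kind, different-precision mixes.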
From: Travis O. <oli...@ie...> - 2002-06-09 01:55:10
|
I did not receive any major objections, and so I have released a new
Numeric (21.3) incorporating bug fixes. I also tagged the CVS tree
with VERSION_21_3, and then I incorporated the unsigned integers and
unsigned shorts into the CVS version of Numeric, for inclusion in a
tentatively named version 22.0.

I've only uploaded a platform-independent tar file for 21.3. Any
binaries need to be updated. If you are interested in testing the new
additions, please let me know of any bugs you find.

Thanks,

-Travis O.
From: Konrad H. <hi...@cn...> - 2002-06-08 07:59:54
|
> If you are proposing something like
>
>     y = x + Float32(1.)
>
> it would work, but it sure leads to some awkward expressions.

Yes, that's what I am proposing. It's no worse than what we have now,
and if writing Float32 a hundred times is too much effort, an
abbreviation like f = Float32 helps a lot.

Anyway, following the Python credo "explicit is better than implicit",
I'd rather write explicit type conversions than have automagical ones
surprise me.

Finally, we can always lobby for inclusion of the new scalar types
into the core interpreter, with a corresponding syntax for literals,
but it would sure help if we could show that the system works and
suffers only from the lack of literals.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
From: Perry G. <pe...@st...> - 2002-06-07 21:41:34
|
<Konrad Hinsen writes>:
> I still believe that the best solution is to define scalar data
> types corresponding to all array element types. As far as I can see,
> this doesn't have any of the disadvantages of the other solutions
> that have been proposed until now.

If x was a Float32 array, how would the following not be promoted to a
Float64 array?

    y = x + 1.

If you are proposing something like

    y = x + Float32(1.)

it would work, but it sure leads to some awkward expressions.

Perry
From: Konrad H. <hi...@cn...> - 2002-06-07 20:48:50
|
> It would be nice to have a solution that had none of these
> problems, but that doesn't appear to be possible.

I still believe that the best solution is to define scalar data types
corresponding to all array element types. As far as I can see, this
doesn't have any of the disadvantages of the other solutions that have
been proposed until now.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------
From: Perry G. <pe...@st...> - 2002-06-07 16:42:15
|
> > For binary operations between a Python scalar and array, there is
> > no coercion performed on the array type if the scalar is of the
> > same kind as the array (but not same size or precision). For example
> > (assuming ints happen to be 32 bit in this case)
>
> That solves one problem and creates another... Two, in fact. One is
> the inconsistency problem: Python type coercion always promotes
> "smaller" to "bigger" types, it would be good to make no exceptions
> from this rule.
>
> Besides, there are still situations in which types, ranks, and
> indexing operations depend on each other in a strange way. With
>
>     a = array([1., 2.], Float)
>     b = array([3., 4.], Float32)
>
> the result of a*b is of type Float, whereas a[0]*b is of type
> Float32 - if and only if a has rank 1.

All this is true. It really comes down to which poison you prefer. Neither
choice is perfect. Changing the coercion rules results in the
inconsistencies you mention. Not changing them results in the existing
inconsistencies recently discussed (and still doesn't remove the
difficulties of dealing with scalars in expressions without awkward
constructs). We think the inconsistencies you point out are easier to live
with than the existing behavior.

It would be nice to have a solution that had none of these problems, but
that doesn't appear to be possible.

Perry
From: Konrad H. <hi...@cn...> - 2002-06-07 16:00:57
|
> For binary operations between a Python scalar and array, there is
> no coercion performed on the array type if the scalar is of the
> same kind as the array (but not same size or precision). For example
> (assuming ints happen to be 32 bit in this case)

That solves one problem and creates another... Two, in fact. One is the
inconsistency problem: Python type coercion always promotes "smaller" to
"bigger" types; it would be good to make no exceptions from this rule.

Besides, there are still situations in which types, ranks, and indexing
operations depend on each other in a strange way. With

    a = array([1., 2.], Float)
    b = array([3., 4.], Float32)

the result of

    a*b

is of type Float, whereas

    a[0]*b

is of type Float32 - if and only if a has rank 1.

> (Yes, it would be easiest to deal with if Python had all these types,
> but I think that will never happen, nor should it happen.)

Python doesn't need to have them as standard types; an add-on package can
provide them as well. NumPy seems like the obvious one.

Konrad.
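[Editor's note] Konrad's inconsistency is easy to reproduce with current
numpy (Float corresponds to float64). The array-array product promotes
upward unconditionally; the scalar-array product is the contested case, and
its result has historically depended on which casting rules are in force:

```python
import numpy as np

a = np.array([1., 2.], dtype=np.float64)   # Numeric's Float
b = np.array([3., 4.], dtype=np.float32)

# Array-array ops always promote to the wider type:
print((a * b).dtype)       # float64

# The scalar-array case: under value-based casting (NumPy < 2.0) a[0] * b
# stayed float32, exactly as Konrad describes; under NEP 50 (NumPy >= 2.0)
# the float64 scalar wins and the result is float64.
print((a[0] * b).dtype)
```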
From: Perry G. <pe...@st...> - 2002-06-06 20:29:15
|
[I thought I replied yesterday, but somehow that apparently vanished.]

<Konrad Hinsen writes>:

> "Perry Greenfield" <pe...@st...> writes:
>
> > Numarray has different coercion rules so that this doesn't
> > happen. Thus one doesn't need c[1,1] to give a rank-0 array.
>
> What are those coercion rules?

For binary operations between a Python scalar and an array, there is no
coercion performed on the array type if the scalar is of the same kind as
the array (but not the same size or precision). For example (assuming ints
happen to be 32 bit in this case):

    Python Int (Int32)     * Int16 array   --> Int16 array
    Python Float (Float64) * Float32 array --> Float32 array

But if the Python scalar is of a higher kind, e.g., a Python float scalar
with an Int array, then the array is coerced to the corresponding type of
the Python scalar:

    Python Float (Float64)     * Int16 array   --> Float64 array
    Python Complex (Complex64) * Float32 array --> Complex64 array

Numarray basically has the same coercion rules as Numeric when two arrays
are involved, with some extra twists such as:

    UInt16 array * Int16 array --> Int32 array

since neither input type is a proper subset of the other. (But since Numeric
doesn't (or didn't, until Travis changed that) have unsigned types, that
wouldn't have been an issue with Numeric.)

> > (if that isn't too hard to implement). Of course you get into
> > backward compatibility issues. But really, to get it right, some
> > incompatibility is necessary if you want to eliminate this particular
> > wart.
>
> For a big change such as Numarray, I'd accept some incompatibilities.
> For just a new version of NumPy, no. There is a lot of code out there
> that uses NumPy, and I am sure that a good part of it relies on the
> current coercion rules. Moreover, there is no simple way to detect
> code that depends on coercion rules, so adapting existing code would
> be an enormous amount of work.

Certainly. I didn't mean to minimize that.

But the current coercion rules have produced a demand for solutions to the
problem of upcasting, and I consider those solutions to be less than ideal
(savespace and rank-0 arrays). If people really are troubled by these warts,
I'm arguing that the real solution is in changing the coercion behavior.
(Yes, it would be easiest to deal with if Python had all these types, but I
think that will never happen, nor should it happen.)

Perry
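[Editor's note] The numarray rules Perry describes are, in essence, what
NumPy uses today. A sketch with current numpy (dtype spellings updated from
the numarray names):

```python
import numpy as np

a16 = np.array([1, 2], dtype=np.int16)
f32 = np.array([1., 2.], dtype=np.float32)

# Same-kind Python scalars do not upcast the array:
print((a16 * 3).dtype)      # int16
print((f32 * 2.0).dtype)    # float32

# Two arrays where neither type contains the other promote upward:
u16 = np.array([1, 2], dtype=np.uint16)
print((u16 * a16).dtype)    # int32

# A higher-kind scalar does coerce the array; the exact width of the result
# for a16 * 2.0 depends on the NumPy version (float64 under NEP 50, a
# narrower float under the older value-based casting), so it is not
# asserted here.
print((a16 * 2.0).dtype)
```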
From: Perry G. <pe...@st...> - 2002-06-04 21:17:35
|
<Konrad Hinsen wrote>
<Eric Jones wrote>

> > I don't think education is the answer here. We need to change
> > Numeric to have uniform behavior across all typecodes.
>
> I agree that this would be the better solution. But until this is
> done...
>
> > Having alternative behaviors for indexing based on the typecode can
> > lead to very difficult to find bugs. Generic routines meant to work
>
> The differences are not that important, in most circumstances rank-0
> arrays and scalars behave in the same way. The problems occur mostly
> with code that does explicit type checking.
>
> The best solution, in my opinion, is to provide scalar objects
> corresponding to low-precision ints and floats, as part of NumPy.

There is another approach that I think is more sensible. From what I can
tell, the driving force behind rank-0 arrays as scalars is the Numeric
coercion rules. One needs to retain the 'lesser' integer and float types so
that operations with these pseudo-scalars and other arrays do not coerce
arrays to a higher type than would have been the case when using the nearest
equivalent of Python scalars (if there is some other reason, I'd like to
know). For example, if a and b are Int16 1-d arrays, and indexing an element
out of them produced a Python integer value, then a[0]*b becomes an Int32
(or even Int64 on some platforms?) array.

Numarray has different coercion rules so that this doesn't happen. Thus one
doesn't need c[1,1] to give a rank-0 array. (Eric Jones has pointed out
privately that another reason is to use different error handling, but if I'm
not mistaken, so long as one can group all calculations so that no
scalar-scalar calculation is done, one doesn't really need rank-0 arrays
other than in unusual circumstances.) So I'd argue that numarray solves this
issue.

For those that can't wait (because numarray currently lacks a feature or
library, it's too slow on small arrays, or whatever) and really must modify
Numeric, I think you would be much better off changing the coercion rules
and eliminating rank-0 arrays resulting from ordinary indexing rather than
making one of the other proposed changes (if that isn't too hard to
implement). Of course you get into backward compatibility issues. But
really, to get it right, some incompatibility is necessary if you want to
eliminate this particular wart.

Perry Greenfield
From: Travis O. <oli...@ie...> - 2002-06-04 06:39:16
|
I would like to update the Numeric CVS tree to include support for unsigned
shorts and ints. Making the transition will cause some difficulty with
binary extensions, as these will need to be recompiled with the new Numeric.

As a result, I propose that a new release of Numeric be posted (to include
the recent bug fixes), and then the changes made for inclusion in the next
version number of Numeric.

Comments?

-Travis
From: Travis O. <oli...@ie...> - 2002-06-03 20:01:43
|
On Mon, 2002-06-03 at 10:54, Paul F Dubois wrote:
> Konrad said:
> >
> > The best solution, in my opinion, is to provide scalar
> > objects corresponding to low-precision ints and floats, as
> > part of NumPy.
> >
> > Konrad.

This seems like a good idea. It's been an old source of confusion.

On a related note, how does the community feel about retrofitting Numeric
with unsigned shorts and unsigned ints? I've got the code to do it already
written.

-Travis
From: Paul F D. <pa...@pf...> - 2002-06-03 16:55:05
|
Konrad said:
>
> The best solution, in my opinion, is to provide scalar
> objects corresponding to low-precision ints and floats, as
> part of NumPy.
>
> Konrad.

One of the thoughts I had in mind for the "kinds" proposal was to support
this. I was going to do the float32 object as part of it as a demo of how it
would work. So I got out the float object from Python, figuring I would just
change a few types et voila. Not. It is very hard to understand, and I don't
even understand the reasons it is hard to understand. Perhaps a young person
with a high tolerance for pain would look at this?
From: Konrad H. <hi...@cn...> - 2002-06-03 16:33:47
|
> I don't think education is the answer here. We need to change
> Numeric to have uniform behavior across all typecodes.

I agree that this would be the better solution. But until this is done...

> Having alternative behaviors for indexing based on the typecode can
> lead to very difficult to find bugs. Generic routines meant to work

The differences are not that important; in most circumstances rank-0 arrays
and scalars behave in the same way. The problems occur mostly with code that
does explicit type checking.

The best solution, in my opinion, is to provide scalar objects corresponding
to low-precision ints and floats, as part of NumPy.

Konrad.
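[Editor's note] This is the route NumPy ultimately took: indexing a
one-dimensional array of any dtype yields a typed scalar object, not a
rank-0 array and not a bare Python number. A sketch with current numpy:

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int16)
elem = a[0]

# The element is a scalar object that remembers its precision, so
# arithmetic with it does not upcast same-kind arrays.
print(type(elem))          # <class 'numpy.int16'>
print(elem.dtype)          # int16
```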
From: Jake E. <ja...@ed...> - 2002-06-03 14:44:54
|
I am using both MA and Numeric in a program that I am writing and ran into
some typecasting oddness (at least I thought it was odd). When using only
Numeric, adding an array of typecode 'l' and one of typecode '1' produces an
array of typecode 'l', whereas using an MA derived array of typecode '1'
added to a Numeric array of typecode 'l' produces an array of typecode '1'.

Sorry if that is a bit dense; the upshot is that mixing the two causes the
output to be the _smaller_ of the two types (Int8 == '1') rather than the
larger (Int == 'l') as I would expect. Below is some code that reproduces
the problem (it may look contrived (and is), but it comes from the guts of
some code I have been playing with):

    #!/usr/bin/env python
    from Numeric import *
    import MA

    a = zeros((10,))
    print a.typecode()

    b = MA.ones((10,), Int8)
    b = MA.masked_where(MA.equal(b, 1), b, 0)
    print b.typecode()
    print b.mask().typecode()

    z = ones((10,), Int8)
    print z.typecode()

    c = add(a, b.mask())
    print c.typecode()

    d = add(a, z)
    print d.typecode()

I get output like:

    l
    1
    1
    1
    1
    l

any thoughts? thanks!

jake
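[Editor's note] For comparison, numpy.ma (the successor to MA) follows the
ordinary promotion rules, so the mixed result takes the larger of the two
types, as Jake expected. A sketch with current numpy:

```python
import numpy as np

a = np.zeros(10, dtype=np.int64)    # analogue of Numeric typecode 'l'
b = np.ma.ones(10, dtype=np.int8)   # analogue of MA typecode '1'
b = np.ma.masked_where(b == 1, b)

# Mixing a plain array and a masked array promotes upward:
c = a + b
print(c.dtype)        # int64, the larger type
```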
From: Jake E. <ja...@ed...> - 2002-06-03 14:25:14
|
I was converting a program written for Numeric to use masked arrays and I
ran into a problem with multiply ... it would appear that there is no
3 argument version for MA? i.e.

    a = array([1, 2, 3])
    multiply(a, a, a)

works fine to square the array using Numeric, but I get an exception:

    TypeError: __call__() takes exactly 3 arguments (4 given)

when doing it using MA ... it seems clear that that is the problem. Is it an
oversight or just as yet unimplemented, or am I missing something?

thanks!

jake
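[Editor's note] The three-argument form Jake describes is the ufunc output
argument. A sketch of the same in-place squaring with current numpy, where
the output target can be passed positionally or as out=:

```python
import numpy as np

a = np.array([1, 2, 3])
np.multiply(a, a, out=a)   # square the array in place via the output argument
print(a)                   # [1 4 9]
```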
From: eric <er...@en...> - 2002-06-01 20:46:03
|
----- Original Message -----
From: "Konrad Hinsen" <hi...@cn...>
To: "Pearu Peterson" <pe...@ce...>
Cc: <num...@li...>
Sent: Wednesday, May 29, 2002 4:08 AM
Subject: Re: [Numpy-discussion] Bug: extremely misleading array behavior

> Pearu Peterson <pe...@ce...> writes:
>
> > an array with 0 rank. It seems that the Numeric documentation is missing
> > (though, I didn't look too hard) the following rules of thumb:
> >
> > If `a' is rank 1 array, then a[i] is Python scalar or object. [MISSING]
>
> Or rather:
>
> - If `a' is rank 1 array with elements of type Int, Float, or Complex,
>   then a[i] is Python scalar or object. [MISSING]
>
> - If `a' is rank 1 array with elements of type Int16, Int32, Float32, or
>   Complex32, then a[i] is a rank 0 array. [MISSING]
>
> - If `a' is rank > 1 array, then a[i] is a sub-array a[i,...]
>
> The rank-0 arrays are the #1 question topic for users of my netCDF
> interface (for portability reasons, netCDF integer arrays map to
> Int32, not Int, so scalar integers read from a netCDF array are always
> rank-0 arrays), and almost everybody initially claims that it's a bug,
> so some education seems necessary.

I don't think education is the answer here. We need to change Numeric to
have uniform behavior across all typecodes.

Having alternative behaviors for indexing based on the typecode can lead to
very difficult to find bugs. Generic routines meant to work with any Numeric
type can break a year later when someone passes in an array with a seemingly
compatible type. Also, because coercion can silently change typecodes during
arithmetic operations, code written expecting one behavior can all of a
sudden exhibit the other. That is very dangerous and hard to test.

eric
From: eric <er...@en...> - 2002-06-01 20:19:06
|
Hi,

I just ran across a situation where reversing an empty array using a
negative stride populates it with a new element. I'm betting this isn't the
intended behavior. An example code snippet is below.

eric

    C:\home\ej\wrk\chaco>python
    Python 2.1.3 (#35, Apr 8 2002, 17:47:50) [MSC 32 bit (Intel)] on win32
    Type "copyright", "credits" or "license" for more information.
    >>> from Numeric import *
    >>> import Numeric
    >>> Numeric.__version__
    '21.0'
    >>> a = array(())
    >>> a
    zeros((0,), 'l')
    >>> len(a)
    0
    >>> b = a[::-1]
    >>> len(b)
    1
    >>> b
    array([0])

--
Eric Jones <eric at enthought.com>
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057
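[Editor's note] This bug is long gone: a negative-stride slice of an empty
array is well-behaved in current numpy. A quick sketch confirming the
expected invariant that reversal preserves length:

```python
import numpy as np

a = np.array(())
b = a[::-1]

# Reversing an empty array yields an empty array.
print(len(a), len(b))      # 0 0
```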
From: Gerard V. <gve...@gr...> - 2002-05-30 07:19:08
|
Announcing PyQwt-sip324_041

PyQwt = FAST and EASY data plotting for Python, Numeric and Qt!

PyQwt is a set of Python bindings for the Qwt C++ class library. The Qwt
library extends the Qt framework with widgets for scientific and engineering
applications. It contains QwtPlot, a 2d plotting widget, and widgets for
data input/output such as QwtCounter, QwtKnob, QwtThermo and QwtWheel.

PyQwt requires and extends PyQt, a set of Python bindings for Qt.

PyQwt requires Numeric. Numeric extends the Python language with new data
types that make Python an ideal language for numerical computing and
experimentation (maybe less efficient than MatLab, but more expressive).

The home page of PyQwt is http://gerard.vermeulen.free.fr

NEW and IMPORTANT FEATURES of PyQwt-sip324_041:

1. requires PyQt-3.2.4 and sip-3.2.4.
2. implements practically all public and protected member functions of
   Qwt-0.4.1.
3. compatible with Numeric-21.0 and lower.
4. simplified setup.py script for Unix/Linux and Windows.
5. *.exe installer for Windows (requires Qt-2.3.0-NC).
6. HTML documentation with installation instructions and a reference listing
   the Python calls to PyQwt that are different from the corresponding C++
   calls to Qwt.
7. Tested on Linux with Qt-2.3.1 and Qt-3.0.4. Tested on Windows with
   Qt-2.3.0-NC.

Gerard Vermeulen
From: Konrad H. <hi...@cn...> - 2002-05-29 08:12:40
|
Pearu Peterson <pe...@ce...> writes:

> an array with 0 rank. It seems that the Numeric documentation is missing
> (though, I didn't look too hard) the following rules of thumb:
>
> If `a' is rank 1 array, then a[i] is Python scalar or object. [MISSING]

Or rather:

- If `a' is a rank 1 array with elements of type Int, Float, or Complex,
  then a[i] is a Python scalar or object. [MISSING]

- If `a' is a rank 1 array with elements of type Int16, Int32, Float32, or
  Complex32, then a[i] is a rank 0 array. [MISSING]

- If `a' is a rank > 1 array, then a[i] is a sub-array a[i,...]

The rank-0 arrays are the #1 question topic for users of my netCDF interface
(for portability reasons, netCDF integer arrays map to Int32, not Int, so
scalar integers read from a netCDF array are always rank-0 arrays), and
almost everybody initially claims that it's a bug, so some education seems
necessary.

Konrad.