From: Peter V. <ve...@em...> - 2003-09-04 11:25:16
|
Hi all,

I was thinking a bit more about the changes to reduce() that Todd proposed, and have some questions. The problem that the output may not be able to hold the result of an operation is not unique to the reduce() method. For instance, adding two arrays of an unsigned integer type can also give you the wrong answer:

>>> array(255, UInt8) + array(255, UInt8)
254

So, if this is a general problem, why should only the reduce method be enhanced to avoid this? If you implement this, should this capability not be supported more broadly than only by reduce(), for instance by universal functions such as 'add'? Would it not be unexpected for users that only reduce() provides such added functionality?

However, as Paul Dubois pointed out earlier, the original design philosophy of Numeric/numarray was to let the user deal with such problems himself and keep the package small and fast. That actually seems a sound decision, so would it not be better to avoid complicating numarray with these types of changes and leave reduce as it is?

Personally I don't have a need for the proposed changes to the reduce function. My original complaint that started the whole discussion was that the mean() and sum() array methods did not give the correct result in some cases. I still think they should return a correct double precision value, even if the universal functions may not. That could be achieved by a separate implementation that does not use the universal functions. I would be prepared to provide that implementation, either to replace the mean and sum methods or as a separate add-on.

Cheers, Peter

> 1. Add a type parameter to sum which defaults to widest type.
>
> 2. Add a type parameter to reductions (and fix output type handling).
> Default is same-type as it is now. No major changes to C-code.
>
> 3. Add a WidestType(array) function:
>
> Bool --> Bool
> Int8, Int16, Int32, Int64 --> Int64
> UInt8, UInt16, UInt32, UInt64 --> UInt64 (Int64 on win32)
> Float32, Float64 --> Float64
> Complex32, Complex64 --> Complex64
>
> The only thing this really leaves out is a higher performance
> implementation of sum/mean which Peter referred to a few times.
> Peter, if you want to write a specialized module, we'd be happy
> to put it in the add-ons package. |
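To make the wrap-around concrete, here is a minimal sketch of the overflow Peter describes, together with the explicit up-cast that the current "user beware" design expects. It is illustrative only, using the numarray spellings that appear elsewhere in this thread:

    from numarray import array, UInt8, Float64

    a = array([255, 255], UInt8)

    # Same-type arithmetic wraps silently: 255 + 255 = 510 -> 254 (mod 256)
    print a + a                    # [254 254]

    # The workaround under the current design: up-cast explicitly first.
    # Correct, but it makes a temporary Float64 copy of the whole array.
    print a.astype(Float64).sum()  # 510.0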
From: Andrew N. <aln...@st...> - 2003-09-04 09:58:37
|
Say that I have a rank-2 array (matrix) of shape (m,n). Call it A. When extracting column vectors from A, I often want to keep the result as a rank-2 column vector (for subsequent matrix multiplications, etc), so I usually end up writing something ugly like this: column_vector = reshape(A[:,col],(m,1)) I've got a function to wrap this, but is there a builtin (or cleaner) way to do this sort of operation? TIA. Andrew. |
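One built-in alternative to the reshape() idiom, at least with Numeric/numarray-style slicing, is to take a slice of width one: the sliced axis is kept, so the result is already rank-2. A small sketch, offered as a suggestion rather than a confirmed answer from the thread:

    from numarray import arange, reshape

    m, n, col = 3, 4, 2
    A = reshape(arange(m * n), (m, n))

    v1 = reshape(A[:, col], (m, 1))  # the original idiom: rank-1, then reshape
    v2 = A[:, col:col+1]             # width-one slice: already shape (m, 1)

    print v1.shape, v2.shape         # (3, 1) (3, 1)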
From: Peter V. <ve...@em...> - 2003-09-04 08:54:12
|
> 1. Add a type parameter to sum which defaults to widest type. > > 2. Add a type parameter to reductions (and fix output type handling). > Default is same-type as it is now. No major changes to C-code. > > 3. Add a WidestType(array) function: > > Bool --> Bool > Int8,Int16,Int32,Int64 --> Int64 > UInt8, UInt16,UInt32,UInt64 --> UInt64 (Int64 on win32) > Float32, Float64 --> Float64 > Complex32, Complex64 --> Complex64 This sounds like a good solution. > The only thing this really leaves out, is a higher performance > implementation of sum/mean which Peter referred to a few times. Is this really the case? It depends on how you plan to implement the output conversion. If you will do this by allocating a temporary converted copy of the complete input before the calculations then this potentially requires a lot of temporary storage. But it is certainly possible to come up with an implementation that avoids this. Have you given this some thought? > Peter, > if you want to write a specialized module, we'd be happy to put it in > the add-ons package. I hope that the reduce methods can be made sufficiently efficient so that this will not be necessary. Cheers, Peter |
From: Todd M. <jm...@st...> - 2003-09-03 21:46:14
|
I want to thank everyone who participated in this discussion: Peter, Fernando, Robert, Paul, Perry, and Tim. Tim's post has IMO a completely synthesized solution:

1. Add a type parameter to sum which defaults to widest type.

2. Add a type parameter to reductions (and fix output type handling). Default is same-type as it is now. No major changes to C-code.

3. Add a WidestType(array) function:

   Bool --> Bool
   Int8, Int16, Int32, Int64 --> Int64
   UInt8, UInt16, UInt32, UInt64 --> UInt64 (Int64 on win32)
   Float32, Float64 --> Float64
   Complex32, Complex64 --> Complex64

The only thing this really leaves out is a higher performance implementation of sum/mean which Peter referred to a few times. Peter, if you want to write a specialized module, we'd be happy to put it in the add-ons package.

Thanks again everybody, Todd -- Todd Miller <jm...@st...> |
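A hypothetical sketch of what the proposed WidestType() helper could look like, built directly from the mapping listed above. The function did not exist at this point, so everything here is an assumption; the win32 special case for UInt64 is noted but not handled:

    from numarray import (Bool, Int8, Int16, Int32, Int64,
                          UInt8, UInt16, UInt32, UInt64,
                          Float32, Float64, Complex32, Complex64)

    _WIDEST = {
        Bool: Bool,
        Int8: Int64, Int16: Int64, Int32: Int64, Int64: Int64,
        UInt8: UInt64, UInt16: UInt64, UInt32: UInt64, UInt64: UInt64,
        Float32: Float64, Float64: Float64,
        Complex32: Complex64, Complex64: Complex64,
    }

    def WidestType(arr):
        # widest type in the same family as arr's type
        return _WIDEST[arr.type()]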
From: Peter V. <ve...@em...> - 2003-09-03 15:40:01
|
I also believe that the current behavior for numarray/Numeric reduce method (not to cast) is the right one. It is fine to leave the user with the responsibility to be careful in the case of the reduce operation. But to correctly calculate a mean or a sum by the array methods that are provided you have to convert the array first to a more precise type, and then do the calculation. That wastes space and is slow, and seems not very elegant considering that these are very common statistical operations. A separate implementation for the mean() and sum() methods that uses double precision in the calculation without first converting the array would be straightforward. Since calculating a mean or a sum of a complete array is such a common case I think this would be useful. That leaves the same problem for the reduce method which in some cases would require first a conversion, but this is much less of a problem (at least for me). Having to convert before the operation can be wasteful though. I do like the idea that was also proposed on the list to supply an optional argument to specify the output type. Then the user has full control of the output type (nice if you want high precision in the result without converting the input), and the code can easily be used to implement the mean() and sum() methods. The default behavior of the reduce method can then remain unchanged, so this would not be an obtrusive change. But, I imagine that this may complicate the implementation. Cheers, Peter On Wednesday 03 September 2003 17:13, Paul Dubois wrote: > So after you get the result in a higher precision, then what? > a. Cast it down blindly? > b. Test every element and throw an exception if casting would lose > precision? > c. Test every element and return the smallest kind that "holds" the answer? > d. Always return the highest precision? > > a. is close to equivalent to the present behavior > b. and c. are expensive. > c. makes the type of the result unpredictable, which has its own problems. > d. uses space > > It was the originally design of Numeric to be fast rather than careful, > user beware. There is now a another considerable portion of the > community that is for very careful, and another that is for keeping it > small. You can't satisfy all those goals at once. > > If you make it slow or big in order to be careful, it will always be > slow or big, while the opposite is not true. If you make it fast, the > user can be careful. > > Todd Miller wrote: > > On Mon, 2003-09-01 at 05:34, Peter Verveer wrote: > >>Hi All, > >> > >>I noticed that the sum() and mean() methods of numarrays use the > >> precision of > >> > >>the given array in their calculations. That leads to resuls like this: > >>>>>array([255, 255], Int8).sum() > >> > >>-2 > >> > >>>>>array([255, 255], Int8).mean() > >> > >>-1.0 > >> > >>Would it not be better to use double precision internally and return the > >>correct result? > >> > >>Cheers, Peter > > > > Hi Peter, > > > > I thought about this a lot yesterday and today talked it over with > > Perry. There are several ways to fix the problem with mean() and > > sum(), and I'm hoping that you and the rest of the community will help > > sort them out. > > > > (1) The first "solution" is to require users to do their own up-casting > > prior to calling mean() or sum(). This gives the end user fine control > > over storage cost but leaves the C-like pitfall/bug you discovered. I > > mention this because this is how the numarray/Numeric reductions are > > designed. 
Is there a reason why the numarray/Numeric reductions don't > > implicitly up-cast? > > > > (2) The second way is what you proposed: use double precision within > > mean and sum. This has great simplicity but gives no control over > > storage usage, and as implemented, the storage would be much higher than > > one might think, potentially 8x. > > > > (3) Lastly, Perry suggested a more radical approach: rather than > > changing the mean and sum methods themselves, we could alter the > > universal function accumulate and reduce methods to implicitly use > > additional precision. Perry's idea was to make all accumulations and > > reductions up-cast their results to the largest type of the current > > family, either Bool, Int64, Float64, or Complex64. By doing this, we > > can improve the utility of the reductions and accumulations as well as > > fixing the problem with sum and mean. |
From: Todd M. <jm...@st...> - 2003-09-03 12:43:36
|
On Tue, 2003-09-02 at 15:20, Fernando Perez wrote: > Todd Miller wrote: > > > I thought about this a lot yesterday and today talked it over with > > Perry. There are several ways to fix the problem with mean() and > > sum(), and I'm hoping that you and the rest of the community will help > > sort them out. > > [snip] > > Just a thought: why not make the upcasting an optional parameter? > <snip> That sounds like a great idea. Simple, but doesn't throw out all storage control. > in most cases where you encapsulate low level details in > a high level abstraction, there end up being situations where the details poke > through the abstraction and cause you grief. Thanks for these kind words. -- Todd Miller <jm...@st...> |
From: Robert K. <ke...@ca...> - 2003-09-02 21:31:14
|
On Tue, Sep 02, 2003 at 01:20:17PM -0600, Fernando Perez wrote: > Todd Miller wrote: > > >I thought about this a lot yesterday and today talked it over with > >Perry. There are several ways to fix the problem with mean() and > >sum(), and I'm hoping that you and the rest of the community will help > >sort them out. > > [snip] > > Just a thought: why not make the upcasting an optional parameter? > > I've found that python's arguments with default values provide a very > convenient way of giving the user fine control with minimal conceptual > overhead. > > I'd rather write: > > arr = array([255, 255], Int8) > ... later > arr.sum(use_double=1) # or some similar way of tuning sum() +1, but arr.sum(typecode=Float64) would be my choice of spelling. Not sure what the default typecode should be, though. Probably Perry's suggestion: the largest type of the family. -- Robert Kern ke...@ca... "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter |
From: <ve...@em...> - 2003-09-02 20:32:43
|
Hi Todd, > I thought about this a lot yesterday and today talked it over with > Perry. There are several ways to fix the problem with mean() and > sum(), and I'm hoping that you and the rest of the community will help > sort them out. It was just an innocent question, I did not think it would have such ramifications :-) Here are my thoughts: If I understand you well, the sum() and mean() array methods are based on the reduce method of the universal functions. And these do their calculations in the precision of the array, is that correct? I also gave this some thought, and I would like to make a distinction between a reduction and the calculation of a statistical value such as the mean or the sum: To me, a reduction means the projection of an multi-dimensional array to an array with a rank that is one less than the input. The result is still an array, and often I want the result to have the same precision as the input. A statistical calculation like a sum or a mean is different: the result should be correct and the same irrespective of the type of the input, and that mandates using sufficient precision in the calculation. Note however, that such a statistic is a scalar result and does not require temporary storage at high precision for the whole array. So keeping this in mind, my comments to your solutions are: > (1) The first "solution" is to require users to do their own up-casting > prior to calling mean() or sum(). This gives the end user fine control > over storage cost but leaves the C-like pitfall/bug you discovered. I > mention this because this is how the numarray/Numeric reductions are > designed. Is there a reason why the numarray/Numeric reductions don't > implicitly up-cast? For reductions this behaviour suits me, precisely because it allows control over storage, which is one of the strengths of numarray. For calculating the mean or the sum of an array this is however an expensive solution for a very common operation. I do use this solution, but sometimes I prefer an optimized C routine instead. > (2) The second way is what you proposed: use double precision within > mean and sum. This has great simplicity but gives no control over > storage usage, and as implemented, the storage would be much higher than > one might think, potentially 8x. I did not want to suggest to store a casted version of the array before calculation of the mean or the sum. That can be done in double precision without converting the whole array in memory. I think we can all agree that this option would not be a good idea. > (3) Lastly, Perry suggested a more radical approach: rather than > changing the mean and sum methods themselves, we could alter the > universal function accumulate and reduce methods to implicitly use > additional precision. Perry's idea was to make all accumulations and > reductions up-cast their results to the largest type of the current > family, either Bool, Int64, Float64, or Complex64. By doing this, we > can improve the utility of the reductions and accumulations as well as > fixing the problem with sum and mean. I think that is a great idea in principle, but I think you should consider this carefully: First of all control of the storage cost is lost when you do a reduction. I would not find that always desirable. Thus, I would then like the old behaviour for reduction to be accesible either as a different method or by a setting an optional argument. Additionally, it would not work well for some operations. 
For instance, precise calculation of the mean requires floating point precision. Maybe this can be solved, but it would require different casting behaviour for different operations, and that might be too much trouble...

I would like to propose a fourth option:

(4) Provide separate implementations for array methods like sum() and mean() that only calculate the scalar result. No additional storage would be necessary and the calculation can be done in double precision. I guess the disadvantage is that one cannot leverage the existing code in the ufuncs so easily. I also assume that it would not be as general a solution as changing the reduce method is.

I do have some experience in writing this sort of code for multidimensional arrays in C and would be happy to contribute code. However, I am not too familiar with the internals of the numarray library and I don't know how well my code fits in there (although I interface all my code to numarray). But I am happy to help if I can; numarray is great stuff, it has become the main tool for my numerical work.

Peter -- Dr. Peter J. Verveer Cell Biology and Cell Biophysics Programme EMBL Meyerhofstrasse 1 D-69117 Heidelberg Germany Tel. : +49 6221 387245 Fax : +49 6221 387242 |
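A rough pure-Python illustration of option (4): accumulate element by element into a Python float (a C double), so no up-cast copy of the input array is ever created. A real implementation would of course do this loop in C; this sketch only shows the idea:

    from numarray import ravel

    def sum_double(a):
        flat = ravel(a)       # elements in flat order, no type conversion of a
        total = 0.0           # Python float: double precision accumulator
        for x in flat:
            total += x
        return total

    def mean_double(a):
        return sum_double(a) / len(ravel(a))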
From: Fernando P. <fp...@co...> - 2003-09-02 19:20:27
|
Todd Miller wrote: > I thought about this a lot yesterday and today talked it over with > Perry. There are several ways to fix the problem with mean() and > sum(), and I'm hoping that you and the rest of the community will help > sort them out. [snip] Just a thought: why not make the upcasting an optional parameter? I've found that python's arguments with default values provide a very convenient way of giving the user fine control with minimal conceptual overhead. I'd rather write: arr = array([255, 255], Int8) ... later arr.sum(use_double=1) # or some similar way of tuning sum() than arr = array([255, 255], Int8) ... later array(arr,typecode='d').sum() Numarray/numpy are trying to tackle an inherently hard problem: matching the high-level comfort of python with the low-level performance of C. This situation is an excellent example of what I've seen described as the 'law of leaky abstractions': in most cases where you encapsulate low level details in a high level abstraction, there end up being situations where the details poke through the abstraction and cause you grief. This is an inherently tricky problem, with no easy, universal solution (that I'm aware of). Cheers, f. |
From: Todd M. <jm...@st...> - 2003-09-02 19:05:54
|
These problems are (partially) addressed in numarray-0.7 with the addition of a check_overflow parameter to numarray.fromlist(). The checked fromlist is now used in the implementation of ufuncs (e.g. multiply), preventing the silent truncation of scalar values to the type of the array. Problem c: solved -- 1001 scalar was silently truncated to Int8 before multiplication. Problem e: solved -- 3000 was silently truncated to Int8 during fromlist. Use fromlist() rather than array() and add check_overflow=1 parameter. Problem f: rejected -- working as designed, won't change. Floating point numbers passed into array() are silently truncated if the type is not a floating point type. Operating on an integer array and a floating point scalar returns a floating point array by design; likewise, operating on an integer array with an integer scalar returns an integer array. This is all expected behavior. Problem g: unaddressed -- working as designed, open to discussion. The silent truncation of g to Int16 is documented in the description of astype(). It's possible to add a check_overflow flag to astype() Thanks for the feedback, Todd On Mon, 2003-08-18 at 09:20, Colin J. Williams wrote: > # ta.py to check Int8, Int16 > import numarray as _n > import numarray.numerictypes as _nt > b= _n.arange(4, type= _nt.Int8) > print 'b, b._type:', b, b._type > c= b*1001 # Grief here - type not > updated > print 'c, c._type:', c, c._type > e= _n.array([1, -2, 3000, 4.6], type= _nt.Int8) # undocumented feature > fraction discarded > # and lowest eight bits > retained as a signed > # integer. > print 'e, e._type:', e, e._type > f= _n.array([1, 2, 3, 4.6], type= _nt.Int8) * 9.6 > print 'f, f._type:', f, f._type > g= (f.copy()*2000).astype(_nt.Int16) # undocumented - see e > above > print 'g, g._type:', g, g._type > > > -------------------------------------------------------------------- > PythonWin 2.3 (#46, Jul 29 2003, 18:54:32) [MSC v.1200 32 bit (Intel)] > on win32. > Portions Copyright 1994-2001 Mark Hammond (mha...@sk...) - > see 'Help/About PythonWin' for further copyright information. > >>> b, b._type: [0 1 2 3] Int8 > c, c._type: [ 0 -23 -46 -69] Int8 > e, e._type: [ 1 -2 -72 4] Int8 > f, f._type: [ 9.6 19.2 28.8 38.4] Float64 > g, g._type: [ 19200 -27136 -7937 11264] Int16 > > Colin W. > > > > ------------------------------------------------------- > This SF.Net email sponsored by: Free pre-built ASP.NET sites including > Data Reports, E-commerce, Portals, and Forums are available now. > Download today and enter to win an XBOX or Visual Studio .NET. > http://aspnet.click-url.com/go/psa00100003ave/direct;at.aspnet_072303_01/01 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- Todd Miller jm...@st... STSCI / ESS / SSB |
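A short sketch of how the check_overflow flag described above would be used on Colin's problem case e. The keyword spellings and the exact error raised on overflow are assumptions here, not taken from the release itself:

    import numarray as _n
    import numarray.numerictypes as _nt

    # With checking on, the 3000 should now be reported as an overflow
    # instead of being silently wrapped to fit Int8.
    e = _n.fromlist([1, -2, 3000, 4.6], type=_nt.Int8, check_overflow=1)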
From: Todd M. <jm...@st...> - 2003-09-02 18:33:19
|
On Mon, 2003-09-01 at 05:34, Peter Verveer wrote: > Hi All, > > I noticed that the sum() and mean() methods of numarrays use the precision of > the given array in their calculations. That leads to resuls like this: > > >>> array([255, 255], Int8).sum() > -2 > >>> array([255, 255], Int8).mean() > -1.0 > > Would it not be better to use double precision internally and return the > correct result? > > Cheers, Peter > Hi Peter, I thought about this a lot yesterday and today talked it over with Perry. There are several ways to fix the problem with mean() and sum(), and I'm hoping that you and the rest of the community will help sort them out. (1) The first "solution" is to require users to do their own up-casting prior to calling mean() or sum(). This gives the end user fine control over storage cost but leaves the C-like pitfall/bug you discovered. I mention this because this is how the numarray/Numeric reductions are designed. Is there a reason why the numarray/Numeric reductions don't implicitly up-cast? (2) The second way is what you proposed: use double precision within mean and sum. This has great simplicity but gives no control over storage usage, and as implemented, the storage would be much higher than one might think, potentially 8x. (3) Lastly, Perry suggested a more radical approach: rather than changing the mean and sum methods themselves, we could alter the universal function accumulate and reduce methods to implicitly use additional precision. Perry's idea was to make all accumulations and reductions up-cast their results to the largest type of the current family, either Bool, Int64, Float64, or Complex64. By doing this, we can improve the utility of the reductions and accumulations as well as fixing the problem with sum and mean. -- Todd Miller jm...@st... STSCI / ESS / SSB |
From: Peter V. <ve...@em...> - 2003-09-01 13:10:06
|
On Monday 01 September 2003 14:55, Pearu Peterson wrote: > On Mon, 1 Sep 2003, Peter Verveer wrote: > > Is there some way in numarray to find out what the maximum and minimum > > values are that a type can hold? For instance, is there a convenient way > > to find out what the maximal possible value of the Float32 type is? > > I am not completely sure what is behind of the following way but > > >>> numarray.fromstring('\xff\xff\x7f\x7f','f') > > array([ 3.40282347e+38], type=Float32) > > seems to give the maximum value for Float32. > > Pearu Sure, it is not too difficult to find the minimum and maximum values of each type, given the bitsize (although there may be portability issues). But it would be convenient to have such properties directly accessible in numarray, I think. Peter |
From: Pearu P. <pe...@ce...> - 2003-09-01 12:56:08
|
On Mon, 1 Sep 2003, Peter Verveer wrote:
> Is there some way in numarray to find out what the maximum and minimum values
> are that a type can hold? For instance, is there a convenient way to find out
> what the maximal possible value of the Float32 type is?

I am not completely sure what is behind the following approach, but

>>> numarray.fromstring('\xff\xff\x7f\x7f','f')
array([ 3.40282347e+38], type=Float32)

seems to give the maximum value for Float32. Pearu |
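Pearu's byte-pattern trick covers Float32; for the integer types the limits also follow directly from the item size. A small sketch, assuming the numarray type objects expose a .bytes attribute and a .name string (and noting that the fromstring trick additionally depends on byte order):

    import numarray

    def int_limits(t):
        bits = 8 * t.bytes                    # e.g. Int8 -> 1 byte -> 8 bits
        if t.name.startswith('UInt'):
            return 0, 2**bits - 1
        return -2**(bits - 1), 2**(bits - 1) - 1

    print int_limits(numarray.Int8)           # (-128, 127)
    print int_limits(numarray.UInt16)         # (0, 65535)

    # Largest finite Float32, via the byte pattern from Pearu's post:
    print numarray.fromstring('\xff\xff\x7f\x7f', 'f')   # [ 3.40282347e+38]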
From: Todd M. <jm...@st...> - 2003-09-01 11:56:58
|
On Mon, 2003-09-01 at 04:43, Peter Verveer wrote: > Hi all, > > Is there some way in numarray to find out what the maximum and minimum values > are that a type can hold? I think the answer is "no". I added a feature request on Source Forge. > For instance, is there a convenient way to find out > what the maximal possible value of the Float32 type is? Nope. > Cheers, Peter -- Todd Miller <jm...@st...> |
From: Peter V. <ve...@em...> - 2003-09-01 09:34:41
|
Hi All,

I noticed that the sum() and mean() methods of numarrays use the precision of the given array in their calculations. That leads to results like this:

>>> array([255, 255], Int8).sum()
-2
>>> array([255, 255], Int8).mean()
-1.0

Would it not be better to use double precision internally and return the correct result?

Cheers, Peter -- Dr. Peter J. Verveer Cell Biology and Cell Biophysics Programme EMBL Meyerhofstrasse 1 D-69117 Heidelberg Germany Tel. : +49 6221 387245 Fax : +49 6221 387242 Email: Pet...@em... |
From: Peter V. <ve...@em...> - 2003-09-01 08:43:37
|
Hi all, Is there some way in numarray to find out what the maximum and minimum values are that a type can hold? For instance, is there a convenient way to find out what the maximal possible value of the Float32 type is? Cheers, Peter -- Dr. Peter J. Verveer Cell Biology and Cell Biophysics Programme EMBL Meyerhofstrasse 1 D-69117 Heidelberg Germany Tel. : +49 6221 387245 Fax : +49 6221 387242 Email: Pet...@em... |
From: Sebastian H. <ha...@ms...> - 2003-08-30 21:04:25
|
Hi all, It already worked - see for yourself. http://www.google.com/search?q=numarray See you all at SciPy'03 ;-) Sebastian Haase >----- Original Message ----- >From: "Perry Greenfield" <pe...@st...> >To: "Sebastian Haase" <ha...@ms...>; ><num...@li...> >Sent: Thursday, August 21, 2003 5:47 AM >Subject: RE: [Numpy-discussion] Numarray web site and google > > > > Thanks for pointing this out and particularly for doing research > > into how to fix the problem! We will give this a try (though it may > > take a few days). > > > > > -----Original Message----- > > > From: num...@li... > > > [mailto:num...@li...]On Behalf Of > > > Sebastian Haase > > > Sent: Wednesday, August 20, 2003 6:42 PM > > > To: num...@li... > > > Subject: [Numpy-discussion] Numarray web site and google > > > > > > > > > Hi, > > > I'm not sure how much time you guys wants to spend on the > > > numarray web site, > > > but I find that a google search for 'numarray' still points to > > > http://stsdas.stsci.edu/numarray/ > > > saying numarray has moved! > > > (OK it does bring me to the right page after some seconds) > > > Anyway - doing some search on Usenet I found this > > > http://groups.google.com/groups?threadm=vc3bh351prjbd8%40corp.supe > > > rnews.com&rnum=8&prev=/groups%3Fq%3D%2522search%2Bengines%2522%2B%2522%252Bh > > as%> 2Bmoved%2522%2B%2Bhtml%2Bhead%26hl%3Den%26lr%3D%26ie%3DISO-8859-1% > > > 26safe%3Doff > > > From: Guy Macon (guymacon+"_http://www.guymacon.com/_"03...@sp...) > > > Subject: Re: If I can't redirect to new url, how bad is forwarding it? > > > Newsgroups: alt.internet.search-engines > > > Date: 2003-05-13 19:48:05 PST > > > > > > he essentially says adding this to the html-page might fix it soon: > > > <head> > > > <meta name="robots" content="noindex, follow" /> > > > <title>http://www.example.com</title> > > > <meta http-equiv="refresh" content="http://www.example.com" /> > > > </head> > > > > > > > > > Just a thought ... > > > > > > Thanks for numarray - have a great day. > > > > > > Sebastian Haase > > > > > > > > > > > > ------------------------------------------------------- > > > This SF.net email is sponsored by Dice.com. > > > Did you know that Dice has over 25,000 tech jobs available today? From > > > careers in IT to Engineering to Tech Sales, Dice has tech jobs from the > > > best hiring companies. http://www.dice.com/index.epl?rel_code=104 > > > _______________________________________________ > > > Numpy-discussion mailing list > > > Num...@li... > > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > > > > > ------------------------------------------------------- > > This SF.net email is sponsored by: VM Ware > > With VMware you can run multiple operating systems on a single machine. > > WITHOUT REBOOTING! Mix Linux / Windows / Novell virtual machines > > at the same time. Free trial click > here:http://www.vmware.com/wl/offer/358/0 > > _______________________________________________ > > Numpy-discussion mailing list > > Num...@li... > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > |
From: LUK S. <shu...@po...> - 2003-08-30 05:17:44
|
Todd Miller wrote:
| It's a bug in the recent port of mlab.py. Numarray doesn't really
| support spacesaver, and it doesn't fake it yet either. I removed
| the spacesaver code for numarray-0.7.
|
| On Fri, 2003-08-29 at 04:04, Justin Worrall wrote:
|> Hi,
|>
|> I'm having problems using the cholesky decomposition function -
|> perhaps someone could point out where I am going wrong. Many thanks,
|>
|> Justin
|>
|> dingy /ms/user/w/worrall 22$ python
|> Python 2.3 (#1, Aug 19 2003, 16:53:06)
|> [GCC 2.95.2 19991024 (release)] on sunos5
|> Type "help", "copyright", "credits" or "license" for more information.
|> >>> from numarray import *
|> >>> from numarray.linear_algebra import *
|> >>> x=array([[1.0,0.0],[0.5,1.0]])
|> >>> x
|> array([[ 1. ,  0. ],
|>        [ 0.5,  1. ]])
|> >>> cholesky_decomposition(x)
[snipped]

A big thanks for the *very* quick response. But I found it strange that Cholesky decomposition was being used on an unsymmetric matrix, and then I discovered that this in fact goes through without raising an error. Here's the code snippet:

<quote>
from numarray import *
import numarray.linear_algebra as la

# Good
x = array([[1.0, 0.5], [0.5, 1.0]])
print "\n", x
al = la.cholesky_decomposition(x)
print "\n", al
print "\n", matrixmultiply(al, transpose(al))

# Bad
x = array([[1.0, 0], [0.5, 1.0]])
print "\n", x
al = la.cholesky_decomposition(x)
print "\n", al
print "\n", matrixmultiply(al, transpose(al))
</quote>

and the results:

<quote>
H:\>python chol-numarray.py

[[ 1.   0.5]
 [ 0.5  1. ]]

[[ 1.         0.       ]
 [ 0.5        0.8660254]]

[[ 1.   0.5]
 [ 0.5  1. ]]

[[ 1.   0. ]
 [ 0.5  1. ]]

[[ 1.         0.       ]
 [ 0.5        0.8660254]]

[[ 1.   0.5]
 [ 0.5  1. ]]
</quote>

It appears that there is no checking (at least) for the symmetry of the matrix. Of course, it is the user's responsibility, ultimately ...

Regards, ST |
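One way to guard against the silent acceptance ST describes is a small wrapper that checks symmetry before calling cholesky_decomposition(). This is only a sketch, and it assumes numarray provides an allclose() like Numeric's:

    from numarray import transpose, allclose
    import numarray.linear_algebra as la

    def safe_cholesky(x):
        # symmetry only; positive-definiteness is not checked here
        if not allclose(x, transpose(x)):
            raise ValueError("matrix is not symmetric")
        return la.cholesky_decomposition(x)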
From: Todd M. <jm...@st...> - 2003-08-29 20:45:33
|
Release Notes for numarray-0.7

Numarray is an array processing package designed to efficiently manipulate large multi-dimensional arrays. Numarray is modelled after Numeric and features C code generated from Python template scripts, the capacity to operate directly on arrays in files, and improved type promotions.

I. ENHANCEMENTS

1. Object Arrays

numarray now has an object array facility. numarray.objects provides array indexing and ufuncs for fixed length collections of arbitrary Python objects.

2. Merger of NumArray/ComplexArray classes

Numarray's NumArray and ComplexArray classes have been unified into a single class. Thus, a single base class can be used to derive array classes which operate on integer, real or complex numbers. Thanks to Colin Williams for this suggestion.

3. Indexing improvements

The implementation of numarray's indexing has been simplified and improved. Ad-hoc logic for getting single array elements fast has been replaced by a full conversion to C of the top level of numarray's Python indexing code. The resulting code is simpler, prototyped in Python, and adds an additional kind of indexing which occurs entirely in C for speed: sub-arrays. Slicing and array indexing, however, still involve significant amounts of Python code.

4. IEEE Special Value Constants

Standard constants for nan, plus_inf, minus_inf, and inf have been added to numarray.ieeespecial, making it easier to assign IEEE special values to arrays in addition to finding or replacing special values.

5. Better Numeric interoperability (wxPyPlot port)

numarray-0.7 addresses a couple of compatibility and speed issues which hindered numarray's use with wxPyPlot. numarray now works fine with the wxPyPlot demos by changing "import Numeric" to "import numarray as Numeric".

II. BUGS FIXED

See http://sourceforge.net/tracker/?atid=450446&group_id=1369&func=browse for more details.

793421 PyArray_INCREF / PyArray_XDECREF deprecated
791354 Integer overflow bugs?
785458 Crash subclassing the NumArray class
784866 astype() fails sometimes
779755 Mac OS 10 installation problem
740035 Possible problem with dot

III. CAUTIONS

1. numarray extension writers should note that the documented use of PyArray_INCREF and PyArray_XDECREF (in numarray) has been found to be incompatible with Numeric and has therefore been deprecated. numarray wrapper functions using PyArray_INCREF and PyArray_XDECREF should switch to ordinary Py_INCREF and Py_XDECREF.

2. Writers of numarray subclasses should note that the "protected" _getitem/_setitem interface for NDArray has changed.

WHERE
-----------

Numarray-0.7 Windows executable installers, source code, and manual are here:

http://sourceforge.net/project/showfiles.php?group_id=1369

Numarray is hosted by Source Forge in the same project which hosts Numeric:

http://sourceforge.net/projects/numpy/

The web page for Numarray information is at:

http://stsdas.stsci.edu/numarray/index.html

Trackers for Numarray Bugs, Feature Requests, Support, and Patches are at the Source Forge project for NumPy at:

http://sourceforge.net/tracker/?group_id=1369

REQUIREMENTS
------------------------------

numarray-0.7 requires Python 2.2.2 or greater.

AUTHORS, LICENSE
------------------------------

Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC Hsu, Paul Barrett, Phil Hodge at the Space Telescope Science Institute. Thanks go to Jochen Kupper of the University of North Carolina for his work on Numarray and for porting the Numarray manual to TeX format. Thanks also to Edward C. Jones, Francesc Alted, Paul Dubois, Eric Jones, Travis Oliphant, Pearu Peterson, Colin Williams, and everyone who has contributed with comments and feedback.

Numarray is made available under a BSD-style License. See LICENSE.txt in the source distribution for details.

-- Todd Miller jm...@st... |
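A short usage sketch for two of the items above, with details treated as illustrative rather than taken from the release itself (only the constant names and the "import numarray as Numeric" trick come from the notes):

    # I.4 -- IEEE special value constants:
    from numarray import zeros, Float64
    from numarray.ieeespecial import nan, inf

    a = zeros(4, Float64)
    a[0] = inf          # assign IEEE special values directly into an array
    a[1] = nan

    # I.5 -- running Numeric-based code (e.g. the wxPyPlot demos) on numarray:
    import numarray as Numeric   # then call Numeric.array(), Numeric.arange(), ...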
From: Todd M. <jm...@st...> - 2003-08-29 12:57:51
|
It's a bug in the recent port of mlab.py. Numarray doesn't really support spacesaver, and it doesn't fake it yet either. I removed the spacesaver code for numarray-0.7. On Fri, 2003-08-29 at 04:04, Justin Worrall wrote: > Hi, > > I'm having problems using the cholesky decomposition function - > perhaps someone could point out where I am going wrong. Many thanks, > > Justin > > dingy /ms/user/w/worrall 22$ python > Python 2.3 (#1, Aug 19 2003, 16:53:06) > [GCC 2.95.2 19991024 (release)] on sunos5 > Type "help", "copyright", "credits" or "license" for more information. > >>> from numarray import * > >>> from numarray.linear_algebra import * > >>> x=array([[1.0,0.0],[0.5,1.0]]) > >>> x > array([[ 1. , 0. ], > [ 0.5, 1. ]]) > >>> cholesky_decomposition(x) > Traceback (most recent call last): > File "<stdin>", line 1, in ? > File > "/u/worrall/Python-2.3/numarray/linear_algebra/LinearAlgebra2.py", > line 159, in cholesky_decomposition > return copy.copy(num.transpose(mlab.triu(a,k=0))) > File "/u/worrall/Python-2.3/numarray/linear_algebra/mlab.py", line > 134, in triu > svsp = m.spacesaver() > AttributeError: 'NumArray' object has no attribute 'spacesaver' > >>> > -- > This is not an offer (or solicitation of an offer) to buy/sell the securities/instruments mentioned or an official confirmation. Morgan Stanley may deal as principal in or own or act as market maker for securities/instruments mentioned or may advise the issuers. This may refer to a research analyst/research report. Unless indicated, these views are the author?s and may differ from those of Morgan Stanley research or others in the Firm. We do not represent this is accurate or complete and we may not update this. Past performance is not indicative of future returns. For additional information, research reports and important disclosures, contact me or see https://secure.ms.com/servlet/cls. You should not use email to request, authorize or effect the purchase or sale of any security or instrument, to send transfer instructions, or to effect any other transactions. We cannot guarantee that any such reque > sts received via email will be processed in a timely manner. This communication is solely for the addres > -- Todd Miller <jm...@st...> |
From: Justin W. <Jus...@mo...> - 2003-08-29 08:04:47
|
Hi,

I'm having problems using the cholesky decomposition function - perhaps someone could point out where I am going wrong. Many thanks,

Justin

dingy /ms/user/w/worrall 22$ python
Python 2.3 (#1, Aug 19 2003, 16:53:06)
[GCC 2.95.2 19991024 (release)] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> from numarray import *
>>> from numarray.linear_algebra import *
>>> x=array([[1.0,0.0],[0.5,1.0]])
>>> x
array([[ 1. ,  0. ],
       [ 0.5,  1. ]])
>>> cholesky_decomposition(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/u/worrall/Python-2.3/numarray/linear_algebra/LinearAlgebra2.py", line 159, in cholesky_decomposition
    return copy.copy(num.transpose(mlab.triu(a,k=0)))
  File "/u/worrall/Python-2.3/numarray/linear_algebra/mlab.py", line 134, in triu
    svsp = m.spacesaver()
AttributeError: 'NumArray' object has no attribute 'spacesaver'
>>>

--
This is not an offer (or solicitation of an offer) to buy/sell the securities/instruments mentioned or an official confirmation. Morgan Stanley may deal as principal in or own or act as market maker for securities/instruments mentioned or may advise the issuers. This may refer to a research analyst/research report. Unless indicated, these views are the author's and may differ from those of Morgan Stanley research or others in the Firm. We do not represent this is accurate or complete and we may not update this. Past performance is not indicative of future returns. For additional information, research reports and important disclosures, contact me or see https://secure.ms.com/servlet/cls. You should not use email to request, authorize or effect the purchase or sale of any security or instrument, to send transfer instructions, or to effect any other transactions. We cannot guarantee that any such requests received via email will be processed in a timely manner. This communication is solely for the addres |
From: Fernando P. <fp...@co...> - 2003-08-22 18:01:03
|
Chris Barker wrote: > Fernando Perez wrote: > > >> - Is there anything in Numarray's design which prevents this same trick >> from being used? > > > It's not entirely clear to me what you have in mind. ARE you proposing that > Blitz++ arrays be the low-level implimentaiton of Numarray?, or just that > there be an easy way to switch between them. Well, it was a bit of both. I'm glad to have been ambiguous, since you answered both :) > In the first case, it was considered a coupl eof years ago, and rejected > (as far as I could tell) mainly for reasons of portability: Blitz++ makes > heavy use of templates, which are not well supported by the very wide > variety of compilers that Python is supported on. One of the goals of > Numarray is to make it into the Python standard library, and use of C++ and > templates would preclude that, at least in the foreseeable future. I sort of suspected this, so while this option sounds very nice in theory, I didn't expect it to be a realistic possibility. > If you are proposing the second: that there could be an easy bridge between > Numarray and Blitz++ (maybe using Boost Python somehow?) so that Blitz++ > arrays could be used to write Numarray extensions, I'm all for it! I think > the community would gain a great deal from an easier to use API for writing > compiled Numarray functions. Besides making it easier for those of us that > find the need for custom extensions, it would make it much easier for > people to write and contibute high performance general purpose > code--resulting in a much expanded library for Numarray and SciPy. This is more realistic, and in fact today it _is_ a reality. As I posted before, making a blitz array out of a numpy one is trivial, and you get fast, easy to read C++. I initially learned how to do it by reverse-engineering weave.inline's auto-generated C++ files, but once that got me off the ground, it was trivial to write my own extension modules. And now that I'm starting to learn a bit more about the capabilities of the blitz library, I'm very excited. To indirectly answer Perry's email in this same thread: after his comments, it seems to me that the Numarray->blitz operation would work just as easily as the example I showed for Numpy->blitz. Perry says that in memory, numarrays are very similar to Numpys, so this shouldn't be a problem. Which means we are already quite a ways there! I think to some extent, it's more a matter of making better known to people that this approach exists, and how easy to use it is. I only learned about it after being impressed with inline's super-simple syntax as compared to the horrors of handling numpy arrays by hand. But the more I read the blitz manual, the more I see that blitz arrays behave _a lot_ like numpy arrays even inside the C++ code. So the bridge approach, as long as the constructor call is very cheap, seems like a perfectly reasonable balance between aesthetic purity and real-world practicality to me. This doesn't mean that there are no issues to be discussed, so I'll try to outline the ones that come to mind (as per Perry's request). Perhaps others can add to this list. 1- Cheap constructors: For a rank-N array, the current constructor still requires the creation of 2 TinyVector blitz arrays of size N (shape and strides). It would be great if these copies could be avoided and the existing shape & strides data from the Numpy array could be used directly. The Numpy-> blitz functions would then be almost zero-overhead operations. 
2- Access to Fortran libraries: I don't know how easy it is to use, say, BLAS with objects defined in C++ (and with row-major memory layout). This is just my ignorance, but I'd like to clarify this point, since it would be nice to be able to use BLAS/LAPACK from within the C++ code with blitz arrays with no transposition overhead.

3- Benchmark: I think it would be valuable to draft a small but reasonable set of benchmarks comparing the raw Numeric/numarray C API, the blitz version and Fortran versions of some common algorithms. Blitz relies heavily on templates and C++ compilers are getting better, so it would be nice to know where things stand. I have a preliminary version of this, which I've posted here, but it only covers innerproduct(). As part of that testing, I found more numerical discrepancy than seemed reasonable to me. I would really need to understand the origin of this before feeling comfortable with the blitz approach for production codes.

4- Multiple precision: For a certain class of problems, being able to work with Numerical arrays whose precision goes beyond double is critical. At http://crd.lbl.gov/~dhbailey/mpdist/index.html there is a good example of a modern C++ library for extended precision which offers a very simple syntax. I wonder how easy (or not) it would be for Numarray to offer optional support for this kind of array.

This last topic is a bit more ambitious than the others (which I think are very reasonable). However, I hope that we can at least discuss it, since there is a class of problems where it is a really important issue.

There may be more things I can't think of now, but hopefully this may serve as a first outline for a discussion, both on the mailing list and in person, at SciPy. Best, f |
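A rough sketch of the weave.inline + blitz route Fernando describes: the C++ body sees the Numeric array as a blitz::Array and indexes it as a(i, j). The scipy.weave import path, the type_converters keyword and the return_val idiom are stated from memory of weave's documented interface, so treat the details as assumptions to check against your weave version:

    from Numeric import ones, Float64
    from scipy import weave
    from scipy.weave import converters

    a = ones((200, 300), Float64)
    m, n = a.shape

    code = """
        double s = 0.0;
        for (int i = 0; i < m; ++i)
            for (int j = 0; j < n; ++j)
                s += a(i, j);           // blitz-style indexing on the Numeric data
        return_val = s;
    """
    total = weave.inline(code, ['a', 'm', 'n'], type_converters=converters.blitz)
    print total   # 60000.0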
From: Chris B. <Chr...@no...> - 2003-08-22 16:50:00
|
Fernando Perez wrote:
> - Is there anything in Numarray's design which prevents this same trick from
> being used?

It's not entirely clear to me what you have in mind. Are you proposing that Blitz++ arrays be the low-level implementation of Numarray, or just that there be an easy way to switch between them?

In the first case, it was considered a couple of years ago, and rejected (as far as I could tell) mainly for reasons of portability: Blitz++ makes heavy use of templates, which are not well supported by the very wide variety of compilers that Python is supported on. One of the goals of Numarray is to make it into the Python standard library, and use of C++ and templates would preclude that, at least in the foreseeable future. Here's one discussion about it:

http://www.geocrawler.com/mail/thread.php3?subject=%5BNumpy-discussion%5D+Meta%3A+too+many+numerical+libraries+doing+the&list=1329

If that url doesn't work, I found it by searching google for: Numpy-discussion blitz++

If you are proposing the second: that there could be an easy bridge between Numarray and Blitz++ (maybe using Boost Python somehow?) so that Blitz++ arrays could be used to write Numarray extensions, I'm all for it! I think the community would gain a great deal from an easier to use API for writing compiled Numarray functions. Besides making it easier for those of us that find the need for custom extensions, it would make it much easier for people to write and contribute high performance general purpose code--resulting in a much expanded library for Numarray and SciPy.

-Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Perry G. <pe...@st...> - 2003-08-21 16:40:05
|
Thanks for pointing this out and particularly for doing research into how to fix the problem! We will give this a try (though it may take a few days). > -----Original Message----- > From: num...@li... > [mailto:num...@li...]On Behalf Of > Sebastian Haase > Sent: Wednesday, August 20, 2003 6:42 PM > To: num...@li... > Subject: [Numpy-discussion] Numarray web site and google > > > Hi, > I'm not sure how much time you guys wants to spend on the > numarray web site, > but I find that a google search for 'numarray' still points to > http://stsdas.stsci.edu/numarray/ > saying numarray has moved! > (OK it does bring me to the right page after some seconds) > Anyway - doing some search on Usenet I found this > http://groups.google.com/groups?threadm=vc3bh351prjbd8%40corp.supe rnews.com&rnum=8&prev=/groups%3Fq%3D%2522search%2Bengines%2522%2B%2522%252Bh as%> 2Bmoved%2522%2B%2Bhtml%2Bhead%26hl%3Den%26lr%3D%26ie%3DISO-8859-1% > 26safe%3Doff > From: Guy Macon (guymacon+"_http://www.guymacon.com/_"03...@sp...) > Subject: Re: If I can't redirect to new url, how bad is forwarding it? > Newsgroups: alt.internet.search-engines > Date: 2003-05-13 19:48:05 PST > > he essentially says adding this to the html-page might fix it soon: > <head> > <meta name="robots" content="noindex, follow" /> > <title>http://www.example.com</title> > <meta http-equiv="refresh" content="http://www.example.com" /> > </head> > > > Just a thought ... > > Thanks for numarray - have a great day. > > Sebastian Haase > > > > ------------------------------------------------------- > This SF.net email is sponsored by Dice.com. > Did you know that Dice has over 25,000 tech jobs available today? From > careers in IT to Engineering to Tech Sales, Dice has tech jobs from the > best hiring companies. http://www.dice.com/index.epl?rel_code=104 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > |