From: Brian G. <ell...@gm...> - 2006-10-21 03:43:45
Also, when I use seterr(all='ignore') the tests fail:

======================================================================
FAIL: Ticket #112
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/common/homes/g/granger/usr/local/lib/python/numpy/core/tests/test_regression.py", line 219, in check_longfloat_repr
    assert(str(a)[1:9] == str(a[0])[:8])
AssertionError

----------------------------------------------------------------------
Ran 516 tests in 0.823s

FAILED (failures=1)

Thanks for helping out on this.

On 10/20/06, Tim Hochberg <tim...@ie...> wrote:
> Brian Granger wrote:
> > Hi,
> >
> > i am running numpy on aix compiling with xlc. Revision 1.0rc2 works
> > fine and passes all tests. But 1.0rc3 and more recent give the
> > following on import:
> >
> > Warning: invalid value encountered in multiply
> > Warning: invalid value encountered in multiply
> > Warning: invalid value encountered in multiply
> > Warning: invalid value encountered in add
> > Warning: invalid value encountered in not_equal
> > Warning: invalid value encountered in absolute
> > Warning: invalid value encountered in less
> > Warning: invalid value encountered in multiply
> > Warning: invalid value encountered in multiply
> > Warning: invalid value encountered in equal
> > [lots more of this]
> >
> > The odd thing is that all tests pass. I have looked, but can't find
> > where this Warning is coming from in the code. Any thoughts on where
> > this is coming from? What can I do to help debug this? I am not sure
> > what revision introduced this issue.
>
> The reason that you are seeing this now is that the default error state
> has been tightened up. There were some issues with tests failing as a
> result of this, but I believe I fixed those already, and you're seeing
> this on import, not when running the tests, correct? The first thing to
> do is figure out where the invalids are occurring, and the natural way
> to do that is to set the error state to raise, but you can't set the
> error state till you import it, so that's not going to help here.
>
> I think the first thing that I would try is to throw in a
> seterr(all='raise', under='ignore') right after the call to _setdef in
> numeric.py. If you're lucky, this will point out where the invalids are
> popping up. As a sanity check, you could instead make this
> seterr(all='ignore'), which should make all the warnings go away, but
> won't tell you anything about why there are warnings to begin with.
>
> Regards,
>
> -tim
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
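Tim's debugging idea above — escalating the floating-point warnings to exceptions so a traceback pinpoints the first invalid operation — can be sketched with numpy's error-state machinery. This is a minimal illustration, not the exact edit to numeric.py he proposes; it uses np.seterr and the scoped np.errstate context manager available in modern numpy.

```python
import numpy as np

# Escalate floating-point warnings to exceptions so the traceback shows
# exactly where the first invalid value is produced.
old = np.seterr(all='raise', under='ignore')  # returns the previous settings
try:
    np.log(np.array(-1.0))        # an "invalid" operation, now a hard error
except FloatingPointError as e:
    print('caught:', e)           # e.g. "invalid value encountered in log"
finally:
    np.seterr(**old)              # restore the previous error state

# The scoped equivalent avoids leaking the setting past the block:
with np.errstate(all='ignore'):
    np.log(np.array(-1.0))        # silently produces nan, no warning
```

The seterr(all='ignore') sanity check from the thread corresponds to the errstate block: the invalids still happen, they are just no longer reported.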
From: Brian G. <ell...@gm...> - 2006-10-21 03:32:57
Here is the traceback that I got:

In [1]: import numpy
---------------------------------------------------------------------------
exceptions.FloatingPointError          Traceback (most recent call last)

/u2/granger/<ipython console>

/usr/common/homes/g/granger/usr/local/lib/python/numpy/__init__.py
     36 import core
     37 from core import *
---> 38 import lib
     39 from lib import *
     40 import linalg

/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/__init__.py
      3 from numpy.version import version as __version__
      4
----> 5 from type_check import *
      6 from index_tricks import *
      7 from function_base import *

/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/type_check.py
      8 import numpy.core.numeric as _nx
      9 from numpy.core.numeric import asarray, array, isnan, obj2sctype, zeros
---> 10 from ufunclike import isneginf, isposinf
     11
     12 _typecodes_by_elsize = 'GDFgdfQqLlIiHhBb?'

/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/ufunclike.py
     44     return y
     45
---> 46 _log2 = umath.log(2)
     47 def log2(x, y=None):
     48     """Returns the base 2 logarithm of x

FloatingPointError: invalid value encountered in log

Obviously because I am having the error raised, I only get the first one. Hmmm.

Brian

On 10/20/06, Tim Hochberg <tim...@ie...> wrote:
> Brian Granger wrote:
> > Hi,
> >
> > i am running numpy on aix compiling with xlc. Revision 1.0rc2 works
> > fine and passes all tests. But 1.0rc3 and more recent give the
> > following on import:
> >
> > Warning: invalid value encountered in multiply
> > [lots more of this]
> >
> > The odd thing is that all tests pass. I have looked, but can't find
> > where this Warning is coming from in the code. Any thoughts on where
> > this is coming from? What can I do to help debug this? I am not sure
> > what revision introduced this issue.
>
> The reason that you are seeing this now is that the default error state
> has been tightened up. There were some issues with tests failing as a
> result of this, but I believe I fixed those already, and you're seeing
> this on import, not when running the tests, correct? The first thing to
> do is figure out where the invalids are occurring, and the natural way
> to do that is to set the error state to raise, but you can't set the
> error state till you import it, so that's not going to help here.
>
> I think the first thing that I would try is to throw in a
> seterr(all='raise', under='ignore') right after the call to _setdef in
> numeric.py. If you're lucky, this will point out where the invalids are
> popping up. As a sanity check, you could instead make this
> seterr(all='ignore'), which should make all the warnings go away, but
> won't tell you anything about why there are warnings to begin with.
>
> Regards,
>
> -tim
From: Tim H. <tim...@ie...> - 2006-10-21 03:10:37
Brian Granger wrote:
> Hi,
>
> i am running numpy on aix compiling with xlc. Revision 1.0rc2 works
> fine and passes all tests. But 1.0rc3 and more recent give the
> following on import:
>
> Warning: invalid value encountered in multiply
> Warning: invalid value encountered in multiply
> Warning: invalid value encountered in multiply
> Warning: invalid value encountered in add
> Warning: invalid value encountered in not_equal
> Warning: invalid value encountered in absolute
> Warning: invalid value encountered in less
> Warning: invalid value encountered in multiply
> Warning: invalid value encountered in multiply
> Warning: invalid value encountered in equal
> [lots more of this]
>
> The odd thing is that all tests pass. I have looked, but can't find
> where this Warning is coming from in the code. Any thoughts on where
> this is coming from? What can I do to help debug this? I am not sure
> what revision introduced this issue.

The reason that you are seeing this now is that the default error state has been tightened up. There were some issues with tests failing as a result of this, but I believe I fixed those already, and you're seeing this on import, not when running the tests, correct? The first thing to do is figure out where the invalids are occurring, and the natural way to do that is to set the error state to raise, but you can't set the error state till you import it, so that's not going to help here.

I think the first thing that I would try is to throw in a seterr(all='raise', under='ignore') right after the call to _setdef in numeric.py. If you're lucky, this will point out where the invalids are popping up. As a sanity check, you could instead make this seterr(all='ignore'), which should make all the warnings go away, but won't tell you anything about why there are warnings to begin with.

Regards,

-tim
From: Brian G. <ell...@gm...> - 2006-10-21 02:42:24
Hi,

i am running numpy on aix compiling with xlc. Revision 1.0rc2 works fine and passes all tests. But 1.0rc3 and more recent give the following on import:

Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in add
Warning: invalid value encountered in not_equal
Warning: invalid value encountered in absolute
Warning: invalid value encountered in less
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in equal
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in add
Warning: invalid value encountered in not_equal
Warning: invalid value encountered in absolute
Warning: invalid value encountered in less
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in equal
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
[lots more of this]

The odd thing is that all tests pass. I have looked, but can't find where this Warning is coming from in the code. Any thoughts on where this is coming from? What can I do to help debug this? I am not sure what revision introduced this issue.

Thanks

Brian
From: A. M. A. <per...@gm...> - 2006-10-20 23:28:37
On 20/10/06, Sebastian Żurek <sebzur@pin.if.uz.zgora.pl> wrote:

> Is there something like that in any numerical python modules (numpy,
> pylab) I could use?

In scipy there are some very convenient spline fitting tools which will allow you to fit a nice smooth spline through the simulation data points (or near, if they have some uncertainty); you can then easily look at the RMS difference in the y values. You can also, less easily, look at the distance from the curve allowing for some uncertainty in the x values.

I suppose you could also fit a curve through the experimental points and compare the two curves in some way.

> I can imagine, I can fit the data with some polynomial or whatever,
> and than compare the fitted data, but my goal is to operate on
> as raw data as it's possible.

If you want to avoid using an a priori model, Numerical Recipes discuss some possible approaches ("Do two-dimensional distributions differ?" at http://www.nrbook.com/a/bookcpdf.html is one) but it's not clear how to turn the problem you describe into a solvable one - some assumption about how the models vary between sampled x values appears to be necessary, and that amounts to interpolation.

A. M. Archibald
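The interpolation route suggested in this thread can be sketched in a few lines of numpy using the expDat/simDat layout from the question: resample the simulated curve onto the experimental x grid by linear interpolation, then compute the RMS error. This is an illustrative sketch, not code from the thread; the helper name gfit_rmse is made up, and a scipy spline fit would simply replace np.interp here.

```python
import numpy as np

def gfit_rmse(expDat, simDat):
    """Hypothetical goodness-of-fit helper: interpolate the simulated curve
    onto the experimental x values, then return the root mean squared error
    between the two y series."""
    x_exp, y_exp = np.asarray(expDat, dtype=float)
    x_sim, y_sim = np.asarray(simDat, dtype=float)
    order = np.argsort(x_sim)                  # np.interp needs increasing x
    y_sim_on_exp = np.interp(x_exp, x_sim[order], y_sim[order])
    return np.sqrt(np.mean((y_exp - y_sim_on_exp) ** 2))

# Two different samplings of the same straight line agree after resampling:
exp = [[0.0, 1.0, 2.0], [0.0, 2.0, 4.0]]
sim = [[0.0, 0.5, 2.0], [0.0, 1.0, 4.0]]
print(gfit_rmse(exp, sim))   # essentially zero
```

As Archibald notes, the linear-interpolation assumption between sampled x values is exactly the modeling choice this approach smuggles in.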
From: Travis O. <oli...@ee...> - 2006-10-20 23:13:04
Sebastien Bardeau wrote:
>> One possible solution (there can be more) is using ndarray:
>>
>> In [47]: a=numpy.array([1,2,3], dtype="i4")
>> In [48]: n=1  # the position that you want to share
>> In [49]: b=numpy.ndarray(buffer=a[n:n+1], shape=(), dtype="i4")
>
> Ok thanks. Actually that was also the solution I found. But this is much
> more complicated when arrays are N dimensional with N>1, and above all
> if user asks for a slice in one or more dimension. Here is how I
> redefine the __getitem__ method for my arrays.

How about this. To get the i,j,k,l element:

a[i:i+1,j:j+1,k:k+1,l:l+1].squeeze()

-Travis
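Travis's slice-then-squeeze trick works because slicing every axis to length 1 is basic indexing (a view), and squeeze keeps that view while dropping the unit dimensions. A minimal demonstration on a plain ndarray:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Slice every axis to length 1, then squeeze away the unit dimensions.
# The result is a 0-d *view* on a, not a numpy scalar.
i, j, k = 1, 2, 3
e = a[i:i+1, j:j+1, k:k+1].squeeze()

print(e.shape)              # ()
print(e.base is not None)   # True: it still shares memory with a
e += 100                    # in-place update is visible through the parent
print(a[1, 2, 3])           # 123
```

This is exactly the memory-sharing behavior the thread is after, which numpy scalars (the result of a[i, j, k]) do not provide.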
From: Robert K. <rob...@gm...> - 2006-10-20 22:15:28
Sebastian Żurek wrote:
> Hi!
>
> This is probably a silly question but I'm getting confused with a
> certain problem: a comparison between experimental data points (2D
> points set) and a model (2D points set - no analytical form).
>
> The physical model produces (by sophisticated simulations done by an
> external program) some 2D points data, and one of my tasks is to compare
> those calculated data with the experimental ones.
>
> The experimental and modeled data have the form of 2D curves, built of n
> 2D-points, i.e.:
>
> expDat=[[x1,x2,x3,..xn],[y1,y2,y3,...,yn]]
> simDat=[[X1,X2,X3,...,Xn],[Y1,Y2,Y3,...,Yn]]
>
> The task of determining, let's say, a root mean squared error (RMSe)
> is trivial if x1==X1, x2==X2, etc.
>
> In general, which is a common situation, xk differs from Xk (k=0..n) and
> one may not simply compare succeeding Yk and yk (k=0..n) to determine
> the goodness-of-fit. The distance h=Xk-X(k-1) is constant, but the similar
> distance m(k)=xk-x(k-1) depends on the k-th point and is not a constant
> value, although the data array lengths for simulation and experiment are
> the same.

Your description is a bit vague. Do you mean that you have some model function f that maps X values to Y values?

  f(x) -> y

If that is the case, is there some reason that you cannot run your simulation using the same X points as your experimental data?

OTOH, is there some other independent variable (say Z) that *is* common between your experimental and simulated data?

  f(z) -> (x, y)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
From: <se...@pi...> - 2006-10-20 21:34:10
Hi!

This is probably a silly question but I'm getting confused with a certain problem: a comparison between experimental data points (2D points set) and a model (2D points set - no analytical form).

The physical model produces (by sophisticated simulations done by an external program) some 2D points data, and one of my tasks is to compare those calculated data with the experimental ones.

The experimental and modeled data have the form of 2D curves, built of n 2D-points, i.e.:

expDat=[[x1,x2,x3,..xn],[y1,y2,y3,...,yn]]
simDat=[[X1,X2,X3,...,Xn],[Y1,Y2,Y3,...,Yn]]

The task of determining, let's say, a root mean squared error (RMSe) is trivial if x1==X1, x2==X2, etc.

In general, which is a common situation, xk differs from Xk (k=0..n) and one may not simply compare succeeding Yk and yk (k=0..n) to determine the goodness-of-fit. The distance h=Xk-X(k-1) is constant, but the similar distance m(k)=xk-x(k-1) depends on the k-th point and is not a constant value, although the data array lengths for simulation and experiment are the same.

My first idea was to do some interpolation to obtain the missing points, but I did it 'by hand' (which, BTW, gave quite rewarding results) and I suppose there's some e.g. numpy method to do it for me, isn't there? I'd like to do something like:

gfit(expDat,simDat,'measure_type')

which I hope will return the number determining the goodness-of-fit (mean squared error, root mean squared error, ...) of two sets of discrete 2D data points.

Is there something like that in any numerical python modules (numpy, pylab) I could use?

I can imagine I could fit the data with some polynomial or whatever, and then compare the fitted data, but my goal is to operate on data as raw as possible.

Thanks for your comments!

Sebastian
From: Keith G. <kwg...@gm...> - 2006-10-20 20:38:01
On 10/20/06, JJ <jos...@ya...> wrote:
> My suggestion is to
> create a new attribute, such as .AR, so that the
> following could be used: M[K.AR,:]

It would be even better if M[K,:] worked. Would such a patch be accepted? (Not that I know how to make it.)
From: JJ <jos...@ya...> - 2006-10-20 20:27:23
Hello. I have a suggestion that might make slicing using matrices more user-friendly. I often have a matrix of row or column numbers that I wish to use as a slice. If K was a matrix of row numbers (nx1) and M was a nxm matrix, then I would use:

ans = M[K.A.ravel(),:]

to obtain the matrix I want. It turns out that I use .A.ravel() quite a lot in my code, as I usually work with matrices rather than arrays. My suggestion is to create a new attribute, such as .AR, so that the following could be used: M[K.AR,:]. I believe this would be more concise, easier to read, and well used. If slices are made in both directions of the matrix, then the .A.ravel() becomes even more unwieldy. Does anyone else like this idea?

John
From: David H. <dav...@gm...> - 2006-10-20 17:47:34
Thanks for the comments,

Here is the code for the new histogram, tests included. I'll wait for comments or suggestions before submitting a patch (numpy / scipy)?

Cheers,

David

2006/10/18, Tim Hochberg <tim...@ie...>:
>
> My $0.02:
>
> If histogram is going to get a makeover, particularly one that makes it
> more complex than at present, it should probably be moved to SciPy.
> Failing that, it should be moved to a submodule of numpy with similar
> statistical tools. Preferably with consistent interfaces for all of the
> functions.
From: Tim H. <tim...@ie...> - 2006-10-20 14:17:47
Sebastien Bardeau wrote:
> Ooops, sorry, there were two mistakes with the 'hasslice' flag. This seems
> now to work for me.
>
> [SNIP code]

That looks overly complicated. I believe that this (minimally tested in a slightly different setting) or some variation should work:

return self[...,newaxis][index].reshape(self[index].shape)

-tim
From: Sebastien B. <Seb...@ob...> - 2006-10-20 13:49:17
Ooops, sorry, there were two mistakes with the 'hasslice' flag. This seems now to work for me:

def __getitem__(self,index):
    # Index may be either an int or a tuple
    # Index length:
    if type(index) == int:  # A single element through first dimension
        ilen = 1
        index = (index,)  # A tuple
    else:
        ilen = len(index)
    # Array rank:
    arank = len(self.shape)
    # Check if there is a slice:
    hasslice = False
    for i in index:
        if type(i) == slice:
            hasslice = True
    # Array is already a 0-d array:
    if arank == 0 and index == (0,):
        return self
    elif arank == 0:
        raise IndexError, "0-d array has only one element at index 0."
    # This will return a single element as a 0-d array:
    elif arank == ilen and not hasslice:
        # This ugly thing returns a numpy 0-D array AND NOT a numpy scalar!
        # (Numpy scalars do not share their data with the parent array)
        newindex = list(index)
        newindex[0] = slice(index[0],index[0]+1,None)
        newindex = tuple(newindex)
        return self[newindex].reshape(())
    # This will return a n-D subarray (n>=1):
    else:
        return self[index]

Sebastien Bardeau wrote:
>> One possible solution (there can be more) is using ndarray:
>>
>> In [47]: a=numpy.array([1,2,3], dtype="i4")
>> In [48]: n=1  # the position that you want to share
>> In [49]: b=numpy.ndarray(buffer=a[n:n+1], shape=(), dtype="i4")
>
> Ok thanks. Actually that was also the solution I found. But this is much
> more complicated when arrays are N dimensional with N>1, and above all
> if the user asks for a slice in one or more dimensions. Here is how I
> redefine the __getitem__ method for my arrays. Remember that the goal is
> to return a 0-d array rather than a numpy.scalar when I extract a single
> element out of an N-dimensional (N>=1) array:
>
> [quoted copy of the earlier __getitem__ version, as posted earlier in
> the thread]
>
> Well... I do not think this is very nice. Does someone have another idea?
> My question in my first post was: is there a way to get a single element
> of an array into a 0-d array which shares memory with its parent array?
>
> Sebastien

--
-------------------------
Sebastien Bardeau
L3AB - CNRS UMR 5804
2 rue de l'observatoire
BP 89
F - 33270 Floirac
Tel: (+33) 5 57 77 61 46
-------------------------
From: Tim H. <tim...@ie...> - 2006-10-20 13:45:16
Francesc Altet wrote:
> A Divendres 20 Octubre 2006 11:42, Sebastien Bardeau va escriure:
> [snip]
>> I can understand that numpy.scalars do not provide inplace operations
>> (like Python standard scalars, they are immutable), so I'd like to use
>> 0-d Numpy.ndarrays. But:
>>
>> >>> d = numpy.array(a[2],copy=False)
>> >>> d += 1
>> >>> d
>> array(4)
>> >>> a
>> array([2, 3, 3])
>> >>> type(d)
>> <type 'numpy.ndarray'>
>> >>> d.shape
>> ()
>> >>> id(d)
>> 169621280
>> >>> d += 1
>> >>> id(d)
>> 169621280
>>
>> This is not a solution because d is a copy since construction time...
>> My question is: is there a way to get a single element of an array into
>> a 0-d array which shares memory with its parent array?
>
> One possible solution (there can be more) is using ndarray: [SNIP]

Here's a slightly more concise version of the same idea:

b = a[n:n+1].reshape([])

-tim
From: Sebastien B. <Seb...@ob...> - 2006-10-20 13:25:59
> One possible solution (there can be more) is using ndarray:
>
> In [47]: a=numpy.array([1,2,3], dtype="i4")
> In [48]: n=1  # the position that you want to share
> In [49]: b=numpy.ndarray(buffer=a[n:n+1], shape=(), dtype="i4")

Ok thanks. Actually that was also the solution I found. But this is much more complicated when arrays are N dimensional with N>1, and above all if the user asks for a slice in one or more dimensions. Here is how I redefine the __getitem__ method for my arrays. Remember that the goal is to return a 0-d array rather than a numpy.scalar when I extract a single element out of an N-dimensional (N>=1) array:

def __getitem__(self,index):
    # Index may be either an int or a tuple
    # Index length:
    if type(index) == int:  # A single element through first dimension
        ilen = 1
        index = (index,)  # A tuple
    else:
        ilen = len(index)
    # Array rank:
    arank = len(self.shape)
    # Check if there is a slice:
    for i in index:
        if type(i) == slice:
            hasslice = True
        else:
            hasslice = False
    # Array is already a 0-d array:
    if arank == 0 and index == (0,):
        return self[()]
    elif arank == 0:
        raise IndexError, "0-d array has only one element at index 0."
    # This will return a single element as a 0-d array:
    elif arank == ilen and hasslice:
        # This ugly thing returns a numpy 0-D array AND NOT a numpy scalar!
        # (Numpy scalars do not share their data with the parent array)
        newindex = list(index)
        newindex[0] = slice(index[0],index[0]+1,None)
        newindex = tuple(newindex)
        return self[newindex].reshape(())
    # This will return a n-D subarray (n>=1):
    else:
        return self[index]

Well... I do not think this is very nice. Does someone have another idea? My question in my first post was: is there a way to get a single element of an array into a 0-d array which shares memory with its parent array?

Sebastien
From: Stefan v. d. W. <st...@su...> - 2006-10-20 13:22:53
On Fri, Oct 20, 2006 at 11:42:26AM +0200, Sebastien Bardeau wrote:
> >>> a = numpy.array((1,2,3))
> >>> b = a[:2]

Here you index by a slice.

> >>> c = a[2]

Whereas here you index by a scalar. So you want to do

b = a[[2]]
b += 1

or in the general case

b = a[slice(2,3)]
b += 1

Regards
Stéfan
From: Gael V. <gae...@no...> - 2006-10-20 11:29:01
Hi,

There is an operation I do a lot; I would call it "unrolling" an array. The best way to describe it is probably to give the code:

def unroll(M):
    """ Flattens the array M and returns a 2D array with the first columns
        being the indices of M, and the last column the flattened M.
    """
    return hstack((indices(M.shape).reshape(-1,M.ndim),M.reshape(-1,1)))

Example:

>>> M
array([[ 0.73530097,  0.3553424 ,  0.3719772 ],
       [ 0.83353373,  0.74622133,  0.14748905],
       [ 0.72023762,  0.32306969,  0.19142366]])
>>> unroll(M)
array([[ 0.        ,  0.        ,  0.73530097],
       [ 0.        ,  1.        ,  0.3553424 ],
       [ 1.        ,  1.        ,  0.3719772 ],
       [ 2.        ,  2.        ,  0.83353373],
       [ 2.        ,  0.        ,  0.74622133],
       [ 1.        ,  2.        ,  0.14748905],
       [ 0.        ,  1.        ,  0.72023762],
       [ 2.        ,  0.        ,  0.32306969],
       [ 1.        ,  2.        ,  0.19142366]])

The docstring sucks. The function is trivial (when you know numpy a bit). Maybe this function already exists in numpy; if so, I couldn't find it. Otherwise I propose it for inclusion.

Cheers,

Gaël
From: Markus R. <mar...@el...> - 2006-10-20 11:14:04
|
Am 20.10.2006 um 02:53 schrieb Jay Parlar: >> Hi! >> I try to compile numpy rc3 on Panther and get following errors. >> (I start build with "python2.3 setup.py build" to be sure to use the >> python shipped with OS X. I din't manage to compile Python2.5 either >> yet with similar errors) >> Does anynbody has an Idea? >> gcc-3.3 >> XCode 1.5 >> November gcc updater is installed >> > > I couldn't get numpy building with Python 2.5 on 10.3.9 (although I > had different compile errors). The solution that ended up working for > me was Python 2.4. There's a bug in the released version of Python 2.5 > that's preventing it from working with numpy, should be fixed in the > next release. > > You can find a .dmg for Python 2.4 here: > http://pythonmac.org/packages/py24-fat/index.html > > Jay P. > I have that installed already but i get some bus errors with that. Furthermore it is built with gcc4 and i need to compile an extra module(pytables) and I fear that will not work, hence I try to compile myself. Python 2.5 dosent't compile either (libSystemStubs is only on Tiger). The linking works when i remove the -lSystemStubs and it compiled clean. Numpy rc3 wass also compiling now with python 2.5, but the tests failed: Python 2.5 (r25:51908, Oct 20 2006, 11:40:08) [GCC 3.3 20030304 (Apple Computer, Inc. build 1671)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> numpy.test(10) Found 5 tests for numpy.distutils.misc_util Found 4 tests for numpy.lib.getlimits Found 31 tests for numpy.core.numerictypes Found 32 tests for numpy.linalg Found 13 tests for numpy.core.umath Found 4 tests for numpy.core.scalarmath Found 9 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 183 tests for numpy.core.multiarray Found 3 tests for numpy.fft.helper Found 36 tests for numpy.core.ma Found 1 tests for numpy.lib.ufunclike Found 12 tests for numpy.lib.twodim_base Found 10 tests for numpy.core.defmatrix Found 4 tests for numpy.ctypeslib Found 41 tests for numpy.lib.function_base Found 2 tests for numpy.lib.polynomial Found 8 tests for numpy.core.records Found 28 tests for numpy.core.numeric Found 4 tests for numpy.lib.index_tricks Found 47 tests for numpy.lib.shape_base Found 0 tests for __main__ ........................................................................ ................................Warning: invalid value encountered in divide ..Warning: invalid value encountered in divide ..Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide ..Warning: invalid value encountered in divide .Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide ..Warning: invalid value encountered in divide ..Warning: invalid value encountered in divide ..Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide .Warning: divide by zero encountered in divide ........Warning: invalid value encountered in divide .Warning: invalid value encountered in divide ..Warning: divide by zero encountered in divide ........................................................................ 
..............................................Warning: overflow encountered in exp
F.......................................................................
..........Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
Warning: divide by zero encountered in divide
.....................Warning: invalid value encountered in sqrt
Warning: invalid value encountered in log
Warning: invalid value encountered in log10
..Warning: invalid value encountered in sqrt
Warning: invalid value encountered in sqrt
Warning: divide by zero encountered in log
Warning: divide by zero encountered in log
Warning: divide by zero encountered in log10
Warning: divide by zero encountered in log10
Warning: invalid value encountered in arcsin
Warning: invalid value encountered in arcsin
Warning: invalid value encountered in arccos
Warning: invalid value encountered in arccos
Warning: invalid value encountered in arccosh
Warning: invalid value encountered in arccosh
Warning: divide by zero encountered in arctanh
Warning: divide by zero encountered in arctanh
Warning: invalid value encountered in divide
Warning: invalid value encountered in true_divide
Warning: invalid value encountered in floor_divide
Warning: invalid value encountered in remainder
Warning: invalid value encountered in fmod
........................................................................
........................................................................
.................
======================================================================
FAIL: Ticket #112
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_regression.py", line 219, in check_longfloat_repr
    assert(str(a)[1:9] == str(a[0])[:8])
AssertionError

----------------------------------------------------------------------
Ran 519 tests in 3.760s

FAILED (failures=1)
<unittest.TextTestRunner object at 0x1545f70>
>>>

regards
Markus
|
From: Francesc A. <fa...@ca...> - 2006-10-20 10:56:09
|
On Friday, 20 October 2006 at 11:42, Sebastien Bardeau wrote:
[snip]
> I can understand that numpy scalars do not provide inplace operations
> (like Python standard scalars, they are immutable), so I'd like to use
> 0-d numpy ndarrays. But:
>
> >>> d = numpy.array(a[2], copy=False)
> >>> d += 1
> >>> d
> array(4)
> >>> a
> array([2, 3, 3])
> >>> type(d)
> <type 'numpy.ndarray'>
> >>> d.shape
> ()
> >>> id(d)
> 169621280
> >>> d += 1
> >>> id(d)
> 169621280
>
> This is not a solution because d is a copy since construction time...
> My question is: is there a way to get a single element of an array into
> a 0-d array which shares memory with its parent array?

One possible solution (there can be more) is using ndarray:

In [47]: a = numpy.array([1,2,3], dtype="i4")
In [48]: n = 1  # the position that you want to share
In [49]: b = numpy.ndarray(buffer=a[n:n+1], shape=(), dtype="i4")
In [50]: a
Out[50]: array([1, 2, 3])
In [51]: b
Out[51]: array(2)
In [52]: b += 1
In [53]: b
Out[53]: array(3)
In [54]: a
Out[54]: array([1, 3, 3])

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V V     Cárabos Coop. V.   Enjoy Data
 "-"
|
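Francesc's recipe can be replayed as a self-contained script (variable names follow his In[]/Out[] session); the 0-d array built over the buffer of the one-element slice shares memory with the parent, unlike the scalar returned by plain indexing:

```python
import numpy

# Parent array; the element at index n will be shared, not copied.
a = numpy.array([1, 2, 3], dtype="i4")
n = 1

# A 0-d ndarray constructed over the buffer of the one-element slice
# a[n:n+1] aliases the parent's memory, so in-place updates propagate.
b = numpy.ndarray(buffer=a[n:n+1], shape=(), dtype="i4")

b += 1  # writes straight into the parent's memory
print(a)  # -> [1 3 3]
```

Note the slice `a[n:n+1]` (not `a[n]`) is what keeps the buffer shared; indexing a single element would hand `numpy.ndarray` a detached scalar instead.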
From: Sebastien B. <Seb...@ob...> - 2006-10-20 09:42:40
|
Hi!

I am confused with Numpy behavior with its scalar or 0-d array objects:

>>> numpy.__version__
'1.0rc2'
>>> a = numpy.array((1,2,3))
>>> b = a[:2]
>>> b += 1
>>> b
array([2, 3])
>>> a
array([2, 3, 3])
>>> type(b)
<type 'numpy.ndarray'>

To this point all is ok for me: subarrays share (by default) memory with
their parent array. But:

>>> c = a[2]
>>> c += 1
>>> c
4
>>> a
array([2, 3, 3])
>>> type(c)
<type 'numpy.int32'>
>>> id(c)
169457808
>>> c += 1
>>> id(c)
169737448

That's really confusing, because slices (from the __getslice__ method) are
not copies (they share memory), while items (single elements from
__getitem__) are copies into one of the scalar objects provided by Numpy.

I can understand that numpy scalars do not provide inplace operations
(like Python standard scalars, they are immutable), so I'd like to use
0-d numpy ndarrays. But:

>>> d = numpy.array(a[2], copy=False)
>>> d += 1
>>> d
array(4)
>>> a
array([2, 3, 3])
>>> type(d)
<type 'numpy.ndarray'>
>>> d.shape
()
>>> id(d)
169621280
>>> d += 1
>>> id(d)
169621280

This is not a solution because d is a copy since construction time...
My question is: is there a way to get a single element of an array into
a 0-d array which shares memory with its parent array?

Thx for your help,

Sebastien
|
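The behaviour Sebastien describes is easy to reproduce on current numpy releases; only the exact scalar type (numpy.int32 vs numpy.int64) and the id() values vary by platform:

```python
import numpy

a = numpy.array([1, 2, 3])

# A slice is a view: in-place changes propagate to the parent.
b = a[:2]
b += 1
assert a.tolist() == [2, 3, 3]

# A single indexed element is an immutable numpy scalar, i.e. a copy:
# augmented assignment rebinds `c` to a new scalar and leaves `a` alone.
c = a[2]
c += 1
assert c == 4
assert a.tolist() == [2, 3, 3]

# numpy scalars are not ndarrays (they derive from numpy.generic).
assert not isinstance(c, numpy.ndarray)
```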
From: Stefan v. d. W. <st...@su...> - 2006-10-20 08:47:42
|
On Thu, Oct 19, 2006 at 09:03:57PM -0400, Pierre GM wrote:
> Indeed. That's basically why you have to edit your __array_finalize__ .
>
> class InfoArray(N.ndarray):
>     def __new__(info_arr_cls,arr,info={}):
>         info_arr_cls._info = info
>         return N.array(arr).view(info_arr_cls)
>     def __array_finalize__(self, obj):
>         if hasattr(obj,'info'):
>             self.info = obj.info
>         else:
>             self.info = self._info
>         return
>
> OK, so you end up w/ two attributes 'info' and '_info', the latter having
> the info you want, the latter playing a temporary placeholder. That looks
> a bit overkill, but that works pretty nice.

Is there any reason why one can't simply do

class InfoArray(N.ndarray):
    def __new__(info_arr_cls,arr,info={}):
        x = N.array(arr).view(info_arr_cls)
        x.info = info
        return x
    def __array_finalize__(self, obj):
        if hasattr(obj,'info'):
            self.info = obj.info
        return

Regards
Stéfan
|
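Stefan's simpler version can be exercised directly. The sketch below follows the same idea (with a None default in place of the mutable {} default, an incidental cleanup, and getattr instead of hasattr) and shows that `info` survives slicing via __array_finalize__:

```python
import numpy as N

class InfoArray(N.ndarray):
    """ndarray subclass carrying a per-instance `info` attribute."""

    def __new__(cls, arr, info=None):
        # View casting calls __array_finalize__ first (obj has no info
        # yet), then we attach the instance's own info here.
        x = N.asarray(arr).view(cls)
        x.info = info if info is not None else {}
        return x

    def __array_finalize__(self, obj):
        # Called for view casting and new-from-template (e.g. slicing):
        # inherit `info` from the source array when it has one.
        if obj is not None:
            self.info = getattr(obj, 'info', None)

a = InfoArray([1, 2, 3], info={'unit': 'm'})
b = a[1:]        # slicing goes through __array_finalize__
print(b.info)    # -> {'unit': 'm'}
```

Because each instance gets its own attribute in __new__, two InfoArrays no longer share a single class-level `info`, which was the pitfall Pierre's `_info` placeholder worked around.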
From: Travis O. <oli...@ee...> - 2006-10-20 07:31:13
|
A. M. Archibald wrote:

> On 18/10/06, Travis Oliphant <oli...@ie...> wrote:
>> If there are any cases satisfying these rules where a copy does not have
>> to occur then let me know.
>
> For example, zeros((4,4))[:,1].reshape((2,2)) need not be copied.
>
> I filed a bug in trac and supplied a patch to multiarray.c that avoids
> copies in PyArray_NewShape unless absolutely necessary.

Very, very nice. Thanks.
|
From: A. M. A. <per...@gm...> - 2006-10-20 07:01:51
|
On 18/10/06, Travis Oliphant <oli...@ie...> wrote:

> If there are any cases satisfying these rules where a copy does not have
> to occur then let me know.

For example, zeros((4,4))[:,1].reshape((2,2)) need not be copied.

I filed a bug in trac and supplied a patch to multiarray.c that avoids
copies in PyArray_NewShape unless absolutely necessary.

A. M. Archibald
|
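On a numpy that includes the no-copy reshape discussed here (it has long since been merged), Archibald's example can be checked directly: splitting the strided column into (2, 2) only needs the stride pair (2*s, s), so reshape can return a view:

```python
import numpy as np

a = np.zeros((4, 4))
col = a[:, 1]              # strided view of one column
r = col.reshape((2, 2))    # strides (2*s, s) suffice, so no copy is made

# The reshaped array still aliases the original memory ...
assert np.shares_memory(a, r)

# ... so writes through it land in the parent array.
r[0, 0] = 99.0
assert a[0, 1] == 99.0
```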
From: Pierre GM <pgm...@gm...> - 2006-10-20 01:23:38
|
> > class InfoArray(N.ndarray):
> >     def __new__(info_arr_cls,arr,info={}):
> >         info_arr_cls.info = info
> >         return N.array(arr).view(info_arr_cls)
>
> One has to be careful of this approach. It adds *the same* information
> to all arrays, i.e.

Indeed. That's basically why you have to edit your __array_finalize__ .

class InfoArray(N.ndarray):
    def __new__(info_arr_cls,arr,info={}):
        info_arr_cls._info = info
        return N.array(arr).view(info_arr_cls)
    def __array_finalize__(self, obj):
        if hasattr(obj,'info'):
            self.info = obj.info
        else:
            self.info = self._info
        return

OK, so you end up w/ two attributes, 'info' and '_info': the former holding
the info you want, the latter playing a temporary placeholder. That looks a
bit overkill, but it works pretty nicely.

a = InfoArray(N.array([1,2,3]),{1:1})
b = InfoArray(N.array([1,2,3]),{1:2})
assert a.info=={1:1}
assert b.info=={1:2}
assert (a+1).info==a.info
assert (b-2).info==b.info
|