From: Ivan V. i B. <iv...@ca...> - 2006-07-24 12:01:19
|
Hi all,

Since there is a "string" type which is the same as "str_", how come there is no "boolean" type which is the same as "bool_"? Did I miss some design decision about naming? You know, just for completeness, not that it is some kind of problem at all! ;)

Cheers,

:: Ivan Vilata i Balaguer  >qo<  http://www.carabos.com/
   Cárabos Coop. V.  V V  Enjoy Data  ""
From: David H. <dav...@gm...> - 2006-07-24 11:58:53
|
2006/7/24, PGM <pgm...@gm...>:
> Folks ,
> I'm still using numpy 0.9.8 and just ran into this problem on my machine (AMD64):
>
> >>> import numpy as N
> >>> x = N.array([1.23456])
> >>> print divmod(x,1)
> (array([ 1.]), array([ 0.23456]))
> >>> print divmod(x[0],1)
> ()
> >>> print divmod(x.tolist()[0],1)
> (1.0, 0.2345600000000001)
>
> divmod doesn't seem to like '<f8's. Forcing x to '<f4' seems to do the trick...
> Did anybody run into this problem already ? Is it really numpy related ? If
> this is a bug on 0.9.8, has it been corrected already ?

It works in svn.

David
From: Sven S. <sve...@gm...> - 2006-07-24 11:50:21
|
Thanks for helping out on matrix stuff, Bill!

Bill Baxter schrieb:
> On 7/22/06, Sven Schreiber <sve...@gm...> wrote:
>>
>> Note the array slicing works correct, but the equivalent thing with the
>> matrix does not.
>
> Looks like it happened in rev 2698 of defmatrix.py, matrix.__getitem__
> method:
>
>         if isscalar(index[1]):
>             if isscalar(index[0]):
>                 retscal = True
>             elif out.shape[0] == 1:
>                 sh = out.shape
>                 out.shape = (sh[1], sh[0])
> ==>     elif isinstance(index[1], (slice, types.EllipsisType)):
> ==>         if out.shape[0] == 1 and not isscalar(index[0]):
>
> It behaves like array if you remove the 'not' in the last line.
> But maybe that breaks some other cases?
> Maybe you can try making that change in your numpy/core/defmatrix.py
> (around line 140) and see if anything else breaks for you.

Hm, I don't know -- if you don't mind I'd like to get a second opinion before I mess around there. It's funny though that the changeset has the title "fixing up matrix slicing" or something like that...

>> Why is the direct access to matlib impossible?
>
> Maybe the thinking is that since it's a compatibility module, if you
> want it you should explicitly import it. Like you have to do with
> oldnumeric.

If that is the reason I can't really follow the logic, but I don't really mind the status quo, either.
-sven
From: Karol L. <kar...@kn...> - 2006-07-24 10:46:04
|
On Monday 24 July 2006 06:47, Sebastian Haase wrote:
> Hi,
> if I have a numpy array 'a'
> and say:
> a.dtype == numpy.float32
>
> Is the result independent of a's byteorder ?
> (That's what I would expect ! Just checking !)
>
> Thanks,
> Sebastian Haase

The condition will always be False, because you're comparing the wrong things here; numpy.float32 is a scalar type, not a dtype.

>>> numpy.float32
<type 'float32scalar'>
>>> type(numpy.dtype('>f4'))
<type 'numpy.dtype'>

And I think byteorder matters when comparing dtypes:

>>> numpy.dtype('>f4') == numpy.dtype('<f4')
False

Cheers,
Karol

-- 
written by Karol Langner
pon lip 24 12:05:54 CEST 2006
From: PGM <pgm...@gm...> - 2006-07-24 10:11:00
|
Folks,
I'm still using numpy 0.9.8 and just ran into this problem on my machine (AMD64):

>>> import numpy as N
>>> x = N.array([1.23456])
>>> print divmod(x,1)
(array([ 1.]), array([ 0.23456]))
>>> print divmod(x[0],1)
()
>>> print divmod(x.tolist()[0],1)
(1.0, 0.2345600000000001)

divmod doesn't seem to like '<f8's. Forcing x to '<f4' seems to do the trick...
Did anybody run into this problem already ? Is it really numpy related ? If this is a bug on 0.9.8, has it been corrected already ?

Thanks a lot
P.
From: Sebastian H. <ha...@ms...> - 2006-07-24 04:47:11
|
Hi,
if I have a numpy array 'a' and say:

    a.dtype == numpy.float32

Is the result independent of a's byteorder ?
(That's what I would expect ! Just checking !)

Thanks,
Sebastian Haase
From: Bill B. <wb...@gm...> - 2006-07-24 01:58:46
|
Howdy,

On 7/22/06, Sven Schreiber <sve...@gm...> wrote:
> Hi,
>
> Summary: Slicing seems to be broken with matrices now.
>
> ...
> Example:
>
> >>> import numpy as n
> >>> n.__version__
> '1.0b1'
> >>> import numpy.matlib as m
> >>> a = n.zeros((2,3))
> >>> b = m.zeros((2,3))
> >>> a[:1,:].shape
> (1, 3)
> >>> b[:1,:].shape
> (3, 1)
> >>>
>
> Note the array slicing works correct, but the equivalent thing with the
> matrix does not.

Looks like it happened in rev 2698 of defmatrix.py, matrix.__getitem__ method:

        if isscalar(index[1]):
            if isscalar(index[0]):
                retscal = True
            elif out.shape[0] == 1:
                sh = out.shape
                out.shape = (sh[1], sh[0])
==>     elif isinstance(index[1], (slice, types.EllipsisType)):
==>         if out.shape[0] == 1 and not isscalar(index[0]):

It behaves like array if you remove the 'not' in the last line. But maybe that breaks some other cases? Maybe you can try making that change in your numpy/core/defmatrix.py (around line 140) and see if anything else breaks for you.

> I also noticed the following (in a new python session) import strangeness:
>
> >>> import numpy
> >>> numpy.matlib.zeros((2,3))
> Traceback (most recent call last):
>   File "<interactive input>", line 1, in ?
> AttributeError: 'module' object has no attribute 'matlib'
> >>>
>
> Why is the direct access to matlib impossible?

Maybe the thinking is that since it's a compatibility module, if you want it you should explicitly import it. Like you have to do with oldnumeric.

--bb
From: Sebastian H. <ha...@ms...> - 2006-07-23 23:25:10
|
Hi,

I'm converting SWIG typemap'ed C extensions from numarray to numpy. I studied (and use parts of) numpy.i from the doc directory.

I noticed that there is no decref for the TYPEMAP_INPLACE2 typemap. This uses a function obj_to_array_no_conversion(), which in turn just returns the original PyObject* (cast to a PyArrayObject* after some sanity checks). It looks to me that in this case there should be an explicit Py_INCREF() - in case the function is threaded (releases the Python GIL) - since it holds a pointer to that object's data.

(Alternatively) Travis suggested (at the http://www.scipy.org/Converting_from_numarray wiki page) using PyArray_FromAny - is this incrementing the ref.count (implicitly) ? The numarray equivalent (NA_InputArray) IS incrementing the ref.count (as far as I know...).

Furthermore, on that same wiki page, PyArray_FromAny() is called together with PyArray_DescrFromType(<type>). After searching through the numpy source I found that in blasdot/_dotblas.c (in dotblas_matrixproduct()) there is an explicit Py_INCREF even on the dtype returned from PyArray_DescrFromType.

I would argue that ref.counting is always very tricky territory ;-) Hopefully someone can enlighten me.

Thanks,
Sebastian Haase
From: Filip W. <fil...@gm...> - 2006-07-23 20:04:39
|
On 7/23/06, Eric Firing <ef...@ha...> wrote:
> Sebastian Haase wrote:
> > Hi,
> > I have a (medical) image file.
> > I wrote a nice interface based on memmap using numarray.
> > The class design I used was essentially to return a numarray array
> > object with a new "custom" attribute giving access to special
> > information about the base file.
> >
> > Now with numpy I noticed that a numpy object does not allow adding new
> > attributes !! (How is this ? Why ?)
> >
> > Travis already suggested (replying to one of my last postings) to create
> > a new sub class of numpy.ndarray.
> >
> > But how do I initialize an object of my new class to be "basically
> > identically to" an existing ndarray object ?
> > Normally I could do
> > class B(N.ndarray):
> >     pass
> > a=N.arange(10)
> > a.__class__ = B
>
> Isn't this what you need to do instead?
>
> In [1]:import numpy as N
>
> In [2]:class B(N.ndarray):
>    ...:     pass
>    ...:
>
> In [3]:a = B(N.arange(10))

It won't work like that. The constructor for the ndarray is:

 |  ndarray.__new__(subtype, shape=, dtype=int_, buffer=None,
 |                  offset=0, strides=None, fortran=False)

so you will get either an exception caused by an inappropriate shape value or a completely wrong result:

>>> numpy.ndarray([1,2])
array([[10966528, 18946344]])
>>> numpy.ndarray([1,2]).shape
(1, 2)
>>> numpy.ndarray(numpy.arange(5))
array([], shape=(0, 1, 2, 3, 4), dtype=int32)

And this is a thing you shouldn't do rather than a bug. To create an instance of ndarray's subclass B from an ndarray object, one needs to call the ndarray.view method or the ndarray.__new__ constructor explicitly:

class B(numpy.ndarray):
    def __new__(subtype, data):
        if isinstance(data, B):
            return data
        if isinstance(data, numpy.ndarray):
            return data.view(subtype)
        arr = numpy.array(data)
        return numpy.ndarray.__new__(B, shape=arr.shape, dtype=arr.dtype,
                                     buffer=arr)

A good example of subclassing ndarray is the matrix class in core/defmatrix.py (SVN version).

cheers,
fw
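[For illustration, a minimal self-contained sketch of how the constructor above would be used; this is not part of the original message, and the attribute name "custom" is made up for the example.]

import numpy

class B(numpy.ndarray):
    def __new__(subtype, data):
        if isinstance(data, B):
            return data
        if isinstance(data, numpy.ndarray):
            return data.view(subtype)        # reinterpret the existing data, no copy
        arr = numpy.array(data)
        return numpy.ndarray.__new__(B, shape=arr.shape, dtype=arr.dtype,
                                     buffer=arr)

a = numpy.arange(10)
b = B(a)                       # wraps the existing ndarray as a B instance
print(type(b))                 # <class '__main__.B'>
b.custom = "extra metadata"    # subclass instances accept new attributes
print(b.custom, b.sum())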
From: Sebastian H. <ha...@ms...> - 2006-07-23 19:19:18
|
Kevin Jacobs <ja...@bi...> wrote:
> On 7/22/06, Sebastian Haase <ha...@ms...> wrote:
>
>     Normally I could do
>     class B(N.ndarray):
>         pass
>     a=N.arange(10)
>     a.__class__ = B
>
>     BUT I get this error:
>     #>>> a.__class__ = B
>     Traceback (most recent call last):
>       File "<input>", line 1, in ?
>     TypeError: __class__ assignment: only for heap types
>
>     What is a "heap type" ? Why ? How can I do what I want ?
>
> Assigning to __class__ makes sense for objects that allocate a
> dictionary for storage of attributes or have slots allocated to hold the
> values. The heap type error is due to a missing flag in the class
> definition and could be corrected. However, it may not be the best
> thing to do. Calling B(array) is certainly safer, although a bit more
> expensive.
>
> -Kevin

Thanks for the replies. Googling for this I was surprised myself that it IS legal (and done) to assign to obj.__class__.

Kevin, I tried what you suggested first -- I think in C++ it would be called "using the copy-constructor". But I get an error - something like: "__new__() needs at least 3 arguments". In other words: (maybe?) in Python there is not always a "copy-constructor" (in fact there is no constructor overloading at all ...)

So if there is "just a missing flag" - it would be great if this could be put in. It turns out that the "assigning to __class__" scheme worked for the ndarray subclass "memmap" (i.e. I was subclassing from memmap and then I could assign origMemmapObj.__class__ = myClass).

Thanks,
Sebastian Haase
From: Kevin J. <ja...@bi...> - 2006-07-23 17:02:12
|
On 7/22/06, Sebastian Haase <ha...@ms...> wrote:
>
> Normally I could do
> class B(N.ndarray):
>     pass
> a=N.arange(10)
> a.__class__ = B
>
> BUT I get this error:
> #>>> a.__class__ = B
> Traceback (most recent call last):
>   File "<input>", line 1, in ?
> TypeError: __class__ assignment: only for heap types
>
> What is a "heap type" ? Why ? How can I do what I want ?

Assigning to __class__ makes sense for objects that allocate a dictionary for storage of attributes or have slots allocated to hold the values. The heap type error is due to a missing flag in the class definition and could be corrected. However, it may not be the best thing to do. Calling B(array) is certainly safer, although a bit more expensive.

-Kevin
From: Eric F. <ef...@ha...> - 2006-07-23 07:12:15
|
Sebastian Haase wrote:
> Hi,
> I have a (medical) image file.
> I wrote a nice interface based on memmap using numarray.
> The class design I used was essentially to return a numarray array
> object with a new "custom" attribute giving access to special
> information about the base file.
>
> Now with numpy I noticed that a numpy object does not allow adding new
> attributes !! (How is this ? Why ?)
>
> Travis already suggested (replying to one of my last postings) to create
> a new sub class of numpy.ndarray.
>
> But how do I initialize an object of my new class to be "basically
> identically to" an existing ndarray object ?
> Normally I could do
> class B(N.ndarray):
>     pass
> a=N.arange(10)
> a.__class__ = B

Isn't this what you need to do instead?

In [1]:import numpy as N

In [2]:class B(N.ndarray):
   ...:     pass
   ...:

In [3]:a = B(N.arange(10))

In [4]:a.__class__
Out[4]:<class '__main__.B'>

In [5]:a.stuff = 'stuff'

I don't think it makes sense to try to change the __class__ attribute by assignment.

Eric
From: Sebastian H. <ha...@ms...> - 2006-07-23 05:32:35
|
Hi!
I'm trying to convert my numarray records code to numpy.

>>> type(m.hdrArray)
<class 'numpy.core.records.recarray'>
>>> m.hdrArray.d
[(array([ 1.,  1.,  1.], dtype=float32),)]

but I get:

>>> m.hdrArray[0].getfield('d')
5.43230922614e-312

Am I missing something or is this a bug ?

Further details -- both m.hdrArray.dtype.descr and m.hdrArray[0].dtype.descr return the same descriptor list:

[('Num', [('f1', '<i4', 3)]), ('PixelType', [('f1', '<i4')]), ('mst', [('f1', '<i4', 3)]), ('m', [('f1', '<i4', 3)]),
 ('d', [('f1', '<f4', 3)]),   ####!!!!
 ('angle', [('f1', '<f4', 3)]), ('axis', [('f1', '<i4', 3)]), ('mmm1', [('f1', '<f4', 3)]), ('type', [('f1', '<i2')]),
 ('nspg', [('f1', '<i2')]), ('next', [('f1', '<i4')]), ('dvid', [('f1', '<i2')]), ('blank', [('f1', '|i1', 30)]),
 ('NumIntegers', [('f1', '<i2')]), ('NumFloats', [('f1', '<i2')]), ('sub', [('f1', '<i2')]), ('zfac', [('f1', '<i2')]),
 ('mm2', [('f1', '<f4', 2)]), ('mm3', [('f1', '<f4', 2)]), ('mm4', [('f1', '<f4', 2)]), ('ImageType', [('f1', '<i2')]),
 ('LensNum', [('f1', '<i2')]), ('n1', [('f1', '<i2')]), ('n2', [('f1', '<i2')]), ('v1', [('f1', '<i2')]), ('v2', [('f1', '<i2')]),
 ('mm5', [('f1', '<f4', 2)]), ('NumTimes', [('f1', '<i2')]), ('ImgSequence', [('f1', '<i2')]), ('tilt', [('f1', '<f4', 3)]),
 ('NumWaves', [('f1', '<i2')]), ('wave', [('f1', '<i2', 5)]), ('zxy0', [('f1', '<f4', 3)]), ('NumTitles', [('f1', '<i4')]),
 ('title', [('f1', '|S80', 10)])]

Thanks,
Sebastian Haase
From: Sebastian H. <ha...@ms...> - 2006-07-23 03:39:27
|
Hi,
I have a (medical) image file. I wrote a nice interface based on memmap using numarray. The class design I used was essentially to return a numarray array object with a new "custom" attribute giving access to special information about the base file.

Now with numpy I noticed that a numpy object does not allow adding new attributes !! (How is this ? Why ?)

Travis already suggested (replying to one of my last postings) to create a new sub class of numpy.ndarray.

But how do I initialize an object of my new class to be "basically identically to" an existing ndarray object ? Normally I could do

class B(N.ndarray):
    pass
a=N.arange(10)
a.__class__ = B

BUT I get this error:

#>>> a.__class__ = B
Traceback (most recent call last):
  File "<input>", line 1, in ?
TypeError: __class__ assignment: only for heap types

What is a "heap type" ? Why ? How can I do what I want ?

Thanks,
Sebastian Haase
From: Steve L. <lis...@ar...> - 2006-07-22 21:50:25
|
Hi folks,

Since the 1.0 release is imminent, I just wanted to draw attention to two failures I get when I run numpy.test(1). I've never been able to get numpy to pass all test cases, but now it fails a second one, so I'm pasting it below. Please let me know if these are non-consequential.

System info:
+ Intel Mac (MacBook Pro)
+ OS X.4.7
+ numpy version: 1.0.2881

test failures:

FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 47, in check_large_types
    assert b == 6765201, "error with %r: got %r" % (t,b)
AssertionError: error with <type 'float128scalar'>: got 0.0

======================================================================
FAIL: check_types (numpy.core.tests.test_scalarmath.test_types)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 20, in check_types
    assert a == 1, "error with %r: got %r" % (atype,a)
AssertionError: error with <type 'float128scalar'>: got 1.02604810941480982577e-4936

----------------------------------------------------------------------
Ran 468 tests in 1.157s

FAILED (failures=2)
Out[2]: <unittest.TextTestRunner object at 0x15e3510>

----------------------------------------

Thanks,
-steve
From: Sven S. <sve...@gm...> - 2006-07-21 15:08:26
|
Hi,

Summary: Slicing seems to be broken with matrices now.

I eagerly installed the new beta, but soon stumbled over this bug. I hope I'm missing something, but afaics that behavior used to be different (and correct) before in 0.9.8. Don't know exactly when this changed, though. I did a fresh install (uninstalled old numpy and also matplotlib first) of the official binary for windows/python 2.4.

Example:

>>> import numpy as n
>>> n.__version__
'1.0b1'
>>> import numpy.matlib as m
>>> a = n.zeros((2,3))
>>> b = m.zeros((2,3))
>>> a[:1,:].shape
(1, 3)
>>> b[:1,:].shape
(3, 1)
>>>

Note the array slicing works correctly, but the equivalent thing with the matrix does not.

I also noticed the following (in a new python session) import strangeness:

>>> import numpy
>>> numpy.matlib.zeros((2,3))
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
AttributeError: 'module' object has no attribute 'matlib'
>>>

Why is direct access to matlib impossible?

Either I'm missing something (I well may be, because I'm melting away at 36 centigrade or so...), or imho a new beta should be put out quickly to enable further testing (and use).

Thanks,
Sven
From: Bill B. <wb...@gm...> - 2006-07-21 12:22:22
|
On 7/21/06, Sven Schreiber <sve...@gm...> wrote:
> Bill Baxter schrieb:
>
> > Finally, I noticed that the atleast_nd methods return arrays
> > regardless of input type.
>
> Are you sure? I reported that issue with *stack and I remember it was fixed.

Doh! My bad. You're right. I was looking at the code in SVN for the atleast_* methods and didn't realize that the array constructor (or any constructor) could actually return something besides an object of that class. But that's exactly what array(subok=True) allows.

> > SUMMARY:
> > * make r_ behave like "vstack plus range literals"
> > * make column_stack only transpose its 1d inputs.
> > * rename r_,c_ to v_,h_ (or something else) to make their connection
> >   with vstack and hstack clearer. Maybe vs_ and hs_ would be better?
> > * make a new version of 'c_' that acts like column_stack so that
> >   there's a nice parallel v_<=>vstack, h_<=>hstack, c_<=>column_stack
> > * make atleast_*d methods preserve the input type whenever possible
>
> One problem with all that renaming is the other (maybe more important)
> function of r_: build (array or) matrix ranges quickly.

How would you duplicate this kind of behavior, you mean?

>>> r_[1:4,0,4]
array([1, 2, 3, 0, 4])

That's what h_ would do. Just as if you had done hstack( (range(1,4),0,4) ). That's actually a good reason for the renaming/retooling. r_ is kind of schizophrenic now in that it acts *either* as "concatenate rows (vstack-like, for >=2-d)" or "build me a row (hstack-like, for <2d)". So it's hard to remember what the 'r' in r_ stands for. On the other hand, c_ is always hstack-like.

By the way, I noticed that 'row_stack' has appeared as a synonym for 'vstack' in SVN. Thanks to whoever added it!
--bb
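[A quick reference for the behaviors discussed above - a sketch added here, not from the thread, written against current numpy; expected results are shown as comments.]

import numpy as np

print(np.r_[1:4, 0, 4])                                 # [1 2 3 0 4] -- range literals built into a row
print(np.hstack((np.arange(1, 4), [0], [4])))           # same result, spelled out with hstack
print(np.vstack((np.zeros((2, 3)), np.ones((1, 3)))).shape)   # (3, 3) -- rows stacked
print(np.column_stack((np.arange(3), np.arange(3) ** 2)))     # 1-d inputs become columns, shape (3, 2)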
From: <Fer...@eu...> - 2006-07-21 11:36:19
|
Hi Tim,
many thanks for the tips. I used the same approach, a vectorized (chunk) method, on the indexing operation.

...

# out = zeros((size_mcf[0], sizes_smatrix[2]+5), Float32)
# size_mcf[0] ~ 240000
eig     = zeros((size_mcf[0], 3, 3), dtype=Float32)
eigwert = zeros((size_mcf[0], 3),    dtype=Float64)

# here is the speed up of ~30: the old per-row loop ...
# for j in arange(0, size_mcf[0]):
#     eig[0,0] = out[j,1]
#     eig[1,1] = out[j,2]
#     eig[2,2] = out[j,3]
#     eig[0,1] = out[j,4]
#     eig[0,2] = out[j,6]
#     eig[1,0] = out[j,4]
#     eig[1,2] = out[j,5]
#     eig[2,0] = out[j,6]
#     eig[2,1] = out[j,5]

# ... replaced by whole-column assignments:
eig[:,0,0] = out[:,1]
eig[:,1,1] = out[:,2]
eig[:,2,2] = out[:,3]
eig[:,1,0] = eig[:,0,1] = out[:,4]
eig[:,2,0] = eig[:,0,2] = out[:,6]
eig[:,2,1] = eig[:,1,2] = out[:,5]

for i in arange(size_mcf[0]):
    eigwert[i] = eigvals(eig[i,:,:])

out[:,7:10] = sort(eigwert[:,:].astype(float32))
out[:,10]   = abs(out[:,7] - out[:,9])

Speedup factor ~30!

f.
From: Sven S. <sve...@gm...> - 2006-07-21 09:00:36
|
Bill Baxter schrieb:
> Finally, I noticed that the atleast_nd methods return arrays
> regardless of input type. At a minimum, atleast_1d and atleast_2d on
> matrices should return matrices. I'm not sure about atleast_3d, since
> matrices can't be 3d. (But my opinion is that the matrix type should
> be allowed to be 3d). Anyway, since these methods are used by the
> *stack methods, those also do not currently preserve the matrix type
> (in SVN numpy).

Are you sure? I reported that issue with *stack and I remember it was fixed.

> SUMMARY:
> * make r_ behave like "vstack plus range literals"
> * make column_stack only transpose its 1d inputs.
> * rename r_,c_ to v_,h_ (or something else) to make their connection
>   with vstack and hstack clearer. Maybe vs_ and hs_ would be better?
> * make a new version of 'c_' that acts like column_stack so that
>   there's a nice parallel v_<=>vstack, h_<=>hstack, c_<=>column_stack
> * make atleast_*d methods preserve the input type whenever possible

One problem with all that renaming is the other (maybe more important) function of r_: build (array or) matrix ranges quickly. So I'm a bit against the renaming, I guess. Cleaning up the irritations with the *stacks seems useful though. (Although I have to confess I haven't read your last mail very thoroughly.)

-Sven
From: Travis O. <oli...@ie...> - 2006-07-21 08:31:54
|
I've created the 1.0b1 release tag in SVN and will be uploading files shortly to Sourceforge. I've also created a 1.0 release branch called ver1.0.

The trunk is now version 1.1 of NumPy and should be used for new development only. I don't expect 1.1 to come out for at least a year. Bug-fixes and small changes can happen on the 1.0 branch. These will be merged periodically to 1.1 or vice-versa. But, the 1.0 branch will be used for releases for the next year. AFAIK, this is similar to Python's release plan.

I'm also going to be out of town for a few days and may not be able to check my email, so you can ask questions, but I may not answer them for several days :-)

Thanks to all of you who helped with this release with bug reports and patches:

Robert Kern
David Cooke
Pearu Peterson
Alexander Belopolsky (Sasha)
Albert Strasheim
Stefan van der Walt
Tim Hochberg
Christopher Hanley
Perry Greenfield
Todd Miller
David Huard
Nils Wagner

Thank you... I hope you all have a great weekend :-) Let's continue to make the beta release period productive by improving documentation, getting code-coverage tests, and tracking down any bugs.

Best regards,
-Travis
From: Sebastian H. <ha...@ms...> - 2006-07-21 04:25:43
|
NumPy wrote:
> #188: dtype should have "nice looking" str representation
> -------------------------+--------------------------------------------------
>  Reporter:  sebhaase     |      Owner:  oliphant
>      Type:  enhancement  |     Status:  closed
>  Priority:  normal       |  Milestone:  1.0 Release
> Component:  numpy.core   |    Version:
>  Severity:  normal       | Resolution:  wontfix
>  Keywords:               |
> -------------------------+--------------------------------------------------
> Changes (by oliphant):
>
>   * status:  new => closed
>   * resolution:  => wontfix
>
> Comment:
>
> I'm not sure what the best thing to display here actually is. The current
> string is very informative. Dropping the byte-ordering character is a
> bad idea.

Just yesterday I showed the new numpy to a colleague of mine and he indeed read "<i4" as "less than int 4" !!!

Would it be conceivable to have str() be different from repr() ? Most interactive shells are set up to return repr() - but I have already customized our lab's "sys._displayhook" so that the shell responds with str(), since .29999999999998 instead of .3 was never acceptable to me in a "matlab replacement" ...

(Of course I can adjust my displayhook function further if the encoded "<i4" is really important for you)

Thanks anyway,
Sebastian Haase
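[For readers unfamiliar with the displayhook customization mentioned above, a minimal sketch in modern Python - not from the thread, and the function name is made up.]

import sys
import builtins

def str_displayhook(value):
    # Show str() instead of repr() at the interactive prompt.
    if value is not None:
        builtins._ = value      # keep the usual '_' convenience binding
        print(str(value))

sys.displayhook = str_displayhook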
From: Paul B. <peb...@gm...> - 2006-07-21 04:01:49
|
I'm having a problem converting a C extension module that was originally written for numarray to use numpy. I'm using swig to create a wrapper file for the C code. I have added the numpy.get_numarray_include() method to my setup.py file and have changed the numarray/libnumarray.h include to numpy/libnumarray.h. The extension appears to compile fine (with the exception of some warning messages). However, when I import the module, I get a segfault. Do I need to add anything else to the shared library's initialization step other than import_libnumarray()?

This is using Python 2.4.3 and gcc 4.1.1 on FC5.

-- Paul
From: Stephen S. <ma...@st...> - 2006-07-21 03:45:06
|
While playing a little more with bincount(), one modification would be handy:

    Allow negative integers in the bin list, but skip them when counting bins.

My specific use case is calculating subtotals on columns of large datasets (1m rows x 30 cols), where some rows need to be excluded. The groupings are expensive to compute, and sometimes will involve ~99% of the rows (eliminate only outliers/errors), and other times only ~5% of the rows (focus in on a subset). I'd like to calculate subtotals like this using bincount(), without having to copy the large datasets just to eliminate the unwanted rows:

# Assign each row to a group numbered from 0..G, except for -1 for rows to exclude
row_groups = expensive_function(data)

# Count number in each group, excluding those with grp==-1
grp_counts = bincount(list=row_groups)

# Use bincount() to form subtotals by column, excluding those with grp==-1
subtotals = column_stack([ bincount(list=row_groups, weights=data[:,i])
                           for i in range(G+1) ])

Is there any appetite to make such a change to bincount()? This would require two simple changes to bincount() in _compiled_base.c and an update to the docstring. Here is the diff file with enough context to show the entire arr_bincount() function:

*** orig_compiled_base.c    2006-07-21 13:14:21.250000000 +1000
--- _compiled_base.c        2006-07-21 13:34:41.718750000 +1000
***************
*** 70,143 ****
      intp j ;
      for ( j = 1 ; j < len; j ++ )
          if ( i [j] < min )
              {min = i [j] ; mn = j ;}
      return mn;
  }
  
  static PyObject *
  arr_bincount(PyObject *self, PyObject *args, PyObject *kwds)
  {
      /* histogram accepts one or two arguments. The first is an array
!      * of non-negative integers and the second, if present, is an
       * array of weights, which must be promotable to double.
       * Call these arguments list and weight. Both must be one-
       * dimensional. len (weight) == len(list)
       * If weight is not present:
!      *   histogram (list) [i] is the number of occurrences of i in list.
       * If weight is present:
       *   histogram (list, weight) [i] is the sum of all weight [j]
!      * where list [j] == i.                                            */
      /* self is not used */
      PyArray_Descr *type;
      PyObject *list = NULL, *weight=Py_None ;
      PyObject *lst=NULL, *ans=NULL, *wts=NULL;
!     intp *numbers, *ians, len , mxi, mni, ans_size;
      int i;
      double *weights , *dans;
      static char *kwlist[] = {"list", "weights", NULL};
  
      Py_Try(PyArg_ParseTupleAndKeywords(args, kwds, "O|O", kwlist,
                  &list, &weight));
      Py_Try(lst = PyArray_ContiguousFromAny(list, PyArray_INTP, 1, 1));
      len = PyArray_SIZE(lst);
      numbers = (intp *) PyArray_DATA(lst);
      mxi = mxx (numbers, len) ;
-     mni = mnx (numbers, len) ;
-     Py_Assert(numbers[mni] >= 0,
-               "irst argument of bincount must be non-negative");
      ans_size = numbers [mxi] + 1 ;
      type = PyArray_DescrFromType(PyArray_INTP);
      if (weight == Py_None) {
          Py_Try(ans = PyArray_Zeros(1, &ans_size, type, 0));
          ians = (intp *)(PyArray_DATA(ans));
          for (i = 0 ; i < len ; i++)
!             ians [numbers [i]] += 1 ;
          Py_DECREF(lst);
      }
      else {
          Py_Try(wts = PyArray_ContiguousFromAny(weight, PyArray_DOUBLE, 1, 1));
          weights = (double *)PyArray_DATA (wts);
          Py_Assert(PyArray_SIZE(wts) == len, "bincount: length of weights " \
                    "does not match that of list");
          type = PyArray_DescrFromType(PyArray_DOUBLE);
          Py_Try(ans = PyArray_Zeros(1, &ans_size, type, 0));
          dans = (double *)PyArray_DATA (ans);
          for (i = 0 ; i < len ; i++) {
!             dans[numbers[i]] += weights[i];
          }
          Py_DECREF(lst);
          Py_DECREF(wts);
      }
      return ans;
  
  fail:
      Py_XDECREF(lst);
      Py_XDECREF(wts);
      Py_XDECREF(ans);
      return NULL;
  }
--- 70,145 ----
      intp j ;
      for ( j = 1 ; j < len; j ++ )
          if ( i [j] < min )
              {min = i [j] ; mn = j ;}
      return mn;
  }
  
  static PyObject *
  arr_bincount(PyObject *self, PyObject *args, PyObject *kwds)
  {
      /* histogram accepts one or two arguments. The first is an array
!      * of integers and the second, if present, is an
       * array of weights, which must be promotable to double.
       * Call these arguments list and weight. Both must be one-
       * dimensional. len (weight) == len(list)
       * If weight is not present:
!      *   histogram (list) [i] is the number of occurrences of i in list
!      *   for i>=0. Negative i values are ignored.
       * If weight is present:
       *   histogram (list, weight) [i] is the sum of all weight [j]
!      * where list [j] == i and i>=0.                                   */
      /* self is not used */
      PyArray_Descr *type;
      PyObject *list = NULL, *weight=Py_None ;
      PyObject *lst=NULL, *ans=NULL, *wts=NULL;
!     intp *numbers, *ians, len , mxi, ans_size;
      int i;
      double *weights , *dans;
      static char *kwlist[] = {"list", "weights", NULL};
  
      Py_Try(PyArg_ParseTupleAndKeywords(args, kwds, "O|O", kwlist,
                  &list, &weight));
      Py_Try(lst = PyArray_ContiguousFromAny(list, PyArray_INTP, 1, 1));
      len = PyArray_SIZE(lst);
      numbers = (intp *) PyArray_DATA(lst);
      mxi = mxx (numbers, len) ;
      ans_size = numbers [mxi] + 1 ;
      type = PyArray_DescrFromType(PyArray_INTP);
      if (weight == Py_None) {
          Py_Try(ans = PyArray_Zeros(1, &ans_size, type, 0));
          ians = (intp *)(PyArray_DATA(ans));
          for (i = 0 ; i < len ; i++)
!             if (numbers[i]>=0) {
!                 ians[numbers [i]] += 1 ;
!             }
          Py_DECREF(lst);
      }
      else {
          Py_Try(wts = PyArray_ContiguousFromAny(weight, PyArray_DOUBLE, 1, 1));
          weights = (double *)PyArray_DATA (wts);
          Py_Assert(PyArray_SIZE(wts) == len, "bincount: length of weights " \
                    "does not match that of list");
          type = PyArray_DescrFromType(PyArray_DOUBLE);
          Py_Try(ans = PyArray_Zeros(1, &ans_size, type, 0));
          dans = (double *)PyArray_DATA (ans);
          for (i = 0 ; i < len ; i++) {
!             if (numbers[i]>=0) {
!                 dans[numbers[i]] += weights[i];
!             }
          }
          Py_DECREF(lst);
          Py_DECREF(wts);
      }
      return ans;
  
  fail:
      Py_XDECREF(lst);
      Py_XDECREF(wts);
      Py_XDECREF(ans);
      return NULL;
  }

Cheers
Stephen
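[For comparison, the same semantics can be had in plain numpy by masking out the negative groups before calling bincount - at the cost of copying the kept rows, which is exactly what the patch above tries to avoid. A sketch with made-up data, not from the original message.]

import numpy as np

row_groups = np.array([0, 2, -1, 1, 2, -1, 0])    # -1 marks rows to exclude
data = np.arange(14, dtype=float).reshape(7, 2)   # 7 rows x 2 columns

keep = row_groups >= 0
grp_counts = np.bincount(row_groups[keep])
subtotals = np.column_stack([np.bincount(row_groups[keep], weights=data[keep, i])
                             for i in range(data.shape[1])])
print(grp_counts)   # counts per group 0..2
print(subtotals)    # per-group column sums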
From: Bill B. <wb...@gm...> - 2006-07-21 03:12:28
|
Howdy,

Is there any nicer syntax for the following operations on arrays?

Append a row:
    a = vstack((a,row))
Append a column:
    a = hstack((a,col))
Append a row of zeros:
    a = vstack((a,zeros((1,a.shape[1]))))
Append a col of zeros:
    a = hstack((a,zeros((a.shape[0],1))))
Insert a row before row j:
    a = vstack(( a[:j], row, a[j:] ))
Insert a column before col j:
    a = hstack(( a[:j], col, a[j:] ))
Insert a row of zeros before row j:
    a = vstack(( a[:j], zeros((1,a.shape[1])), a[j:] ))
Insert a column of zeros before col j:
    a = hstack(( a[:j], zeros((a.shape[0],1)), a[j:] ))
Delete row j:
    a = vstack(( a[:j], a[j+1:] ))
Delete col j:
    a = hstack(( a[:j], a[j+1:] ))

...And, more generally, the same types of operations for N-d arrays.

I find myself using python lists of lists a lot just for the easy readability of a.append(row) compared to a = vstack((a,row)). I guess, though, if I'm building an array by appending a row at a time, then maybe it *is* better to use a python list of lists for that? Then each 'append' only copies pointers to the existing rows rather than copying the data in each row. Is that correct? Also, do python lists over-allocate in order to avoid having to re-allocate and copy every time there's an append()? Numpy arrays don't over-allocate, I assume.

Thanks,
--bb
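[For what it's worth, a sketch - not from the thread - of how some of these read with numpy's delete and insert functions, assuming current numpy; the shapes and values are only illustrative.]

import numpy as np

a = np.arange(12).reshape(3, 4)

a1 = np.delete(a, 1, axis=0)                      # delete row 1
a2 = np.delete(a, 2, axis=1)                      # delete column 2
a3 = np.insert(a, 2, 0, axis=1)                   # insert a column of zeros before column 2
a4 = np.vstack((a, np.zeros((1, a.shape[1]))))    # append a row of zeros

rows = []                                         # building row by row: collect in a list, convert once
for i in range(3):
    rows.append(np.arange(4) * i)
b = np.vstack(rows)

print(a1.shape, a2.shape, a3.shape, a4.shape, b.shape)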
From: Stephen S. <ma...@st...> - 2006-07-21 02:05:57
|
Hi,

The function bincount() counts the number of each value found in the input array:

In [15]: numpy.bincount( array([1,3,3,3,4],dtype=int32) )
Out[15]: array([0, 1, 0, 3, 1])

According to the documentation, the input array must be non-negative integers. However, an exception occurs when the input data type is unsigned integers (which is an explicit guarantee of this non-negativity condition):

In [157]: numpy.bincount( array([1,3,3,3,4],dtype=uint32) )
TypeError: array cannot be safely cast to required type

This seems to be a bug.

Cheers
Stephen

P.S. I'm not familiar enough with the numpy source to track down where this typechecking is done. But I did find a trivial typo in an error msg in function arr_bincount() in numpy/lib/src/_compiled_base.c. The assert message here has lost its initial 'F':

    Py_Assert(numbers[mni] >= 0,
              "irst argument of bincount must be non-negative");
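[Until this is addressed, one possible workaround would be an explicit cast on the caller's side - a sketch written against current numpy, not from the original message.]

import numpy as np

x = np.array([1, 3, 3, 3, 4], dtype=np.uint32)
counts = np.bincount(x.astype(np.intp))   # cast to the platform integer type bincount expects
print(counts)                             # [0 1 0 3 1]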