From: Robert K. <rob...@gm...> - 2006-07-12 04:05:43
Travis Oliphant wrote:

> So, I'm opposed to getting rid of the *args based syntax. My feelings
> are weaker regarding adding the capability to rand and randn to accept a
> tuple. I did test it out and it does seem feasible to add this feature
> at the cost of an additional comparison. I know Robert is opposed to it
> but I'm not sure I understand completely why.
>
> Please correct me if I'm wrong, but I think it has something to do with
> making the random_sample and standard_normal functions irrelevant and
> unnecessary combined with my hypothesis that Robert doesn't like the
> *args-style syntax. Therefore, adding support to rand and randn for
> tuples, would make them the default random-number generators and there
> would be proliferating code that was "harder to read" because of the
> different usages.

My opposition has much to do with my natural orneriness, I suppose. However, for most of this argument I've never seen a reason why rand() ought to be considered a peer of ones() and zeros() and thus have any need to be "consistent" in this one respect with them. I've always considered ones() and zeros() to be fundamental constructors of basic arrays of an arbitrary dtype that (most likely) you'll be mangling up immediately. I've seen rand() and all of the other RandomState methods as simply functions that return arrays. In that respect, they have more in common with arange() than anything else.

However, I've come to realize that because rand() is exposed in the top-level namespace, other people probably *are* seeing it as another fundamental constructor of arrays (albeit of a single dtype). When I was writing the package, I never even considered that some of its functions might be imported into the top-level namespace. I'm somewhat more sympathetic to the confusion caused by the rand()-as-constructor viewpoint now. That's why my currently preferred compromise position is to remove rand() from numpy.* and leave it in numpy.random.*.

I also have found that I just don't find those functions convenient in "real" code. They usually come into play when I need one or lots of arbitrary, but non-degenerate-with-high-probability, arrays to use as a test or demo input to another piece of code. Otherwise, I probably need something more sophisticated. Manually shifting and scaling standard variates makes the code more verbose without making it more comprehensible.

I do indeed dislike functions that try to be clever in determining what the user wanted by the number and types of arguments provided. Even if they are easy to implement robustly (as in this case), I think that it makes explaining the function more complicated. It also makes the same function be called in two very different ways; I find this break in consistency rather more egregious than two different functions, used for different purposes in different circumstances, being called in two different ways. In my experience, it's much more likely to cause confusion. I'm currently trudging my way through a mountain of someone else's C++ code at the moment. There are good uses of C++'s function overloading, for example to do type polymorphism on the same number of arguments when templates don't quite cut it. However, if it is abused to create fundamentally different ways to call the same function, I've found that the readability drops rapidly. I have not been a happy camper.

I have yet to see a good exposition of the actual *problems* the current situation is causing. Most of the argument in favor of changing anything has been a call for consistency. However, consistency can never be an end in itself. It must always be considered a means to achieve a comprehensible, flexible, usable API. But it's only one of several means to that goal, and even that goal must be weighed against other goals, too.

When real problems were discussed, I didn't find the solution to fit those problems. Alan said that it was an annoyance (at the very least) to have to teach students that rand() is called differently from ones(). The answer here is to not teach rand(); use the functions that follow the one convention that you want to teach. Also teach the students to read docstrings so if they come across rand() in their copious spare time, they know what's what. Another almost-compelling-to-me real problem was someone saying that they always had to pause when writing rand() and think about what the calling convention was. My answer is similar: don't use rand(). This problem, as described, is a "write-only" problem; if you want consistency in the calling convention between ones() et al. and your source of random numbers, it's there. You might run into uses of rand() in other people's code, but it'll never confuse you as to what the calling convention is. However, I do think that the presence of rand() in the numpy.* namespace is probably throwing people off. They default to rand() and chafe against its API even though they'd be much happier with the tuple-based API.

But mostly my opposition follows from this observation: making rand() cleverly handle its argument tuple *does not solve the problem*. A polymorphic rand() will *still* be inconsistent with ones() and zeros() because ones() and zeros() won't be polymorphic, too. And they can't be, really; the fact that they take other arguments besides the shape tuple makes the implementation and use idioms rather harder than for rand(). And mark my words, if we make rand() polymorphic, we will get just as many newbies coming to the list asking why ones(3, 4) doesn't work. I've already described how I feel that this would just trade one inconsistency (between different functions) for another (between different uses of the same function). I find the latter much worse.

To summarize: while I'd rather just leave things as they are, I realize that feeling is more from spite than anything else, and I'm above that. Mostly. I *do* think that the brouhaha will rapidly diminish if rand() and randn() are simply removed from numpy.* and left in numpy.random.* where they belong. While I'm sure that some who have been pushing for consistency the hardest will still grumble a bit, I strongly suspect that no one would have thought to complain if this had been the configuration from the very beginning.

Okay, now I think I've officially spent more time on this email than I ever did using or implementing rand().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
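For reference, a minimal sketch of the calling-convention difference under debate (the shapes here are arbitrary illustrations, not anyone's actual session):

import numpy

numpy.ones((3, 4))                    # ones/zeros take a single shape tuple
numpy.zeros((3, 4))
numpy.random.rand(3, 4)               # rand/randn take dimensions as separate arguments
numpy.random.random_sample((3, 4))    # tuple-based counterpart of rand
numpy.random.standard_normal((3, 4))  # tuple-based counterpart of randn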
From: Keith G. <kwg...@gm...> - 2006-07-12 02:50:49
On 7/11/06, David Huard <dav...@gm...> wrote:
>
> 2006/7/11, JJ <jos...@ya...>:
> > 1) is it possible to get the function unique() to work with matrices,
> > perhaps with a unique_rows() function to work with matrices of more than
> > one column?
>
> The problem is that applying unique to different rows will return vectors
> with different lengths. So you could not return an array, much less a
> matrix.

...unless it returned the unique rows instead of the unique elements in each row. So if the matrix is

1 2 2
3 4 5
1 2 2

then the unique rows would be

1 2 2
3 4 5
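A sketch of the unique_rows() idea being proposed here (the helper is hypothetical; nothing like it existed in numpy at the time):

import numpy

def unique_rows(m):
    # Keep the first occurrence of each distinct row, preserving the
    # order of first appearance.
    a = numpy.asarray(m)
    seen = set()
    keep = []
    for i, row in enumerate(a):
        key = tuple(row)
        if key not in seen:
            seen.add(key)
            keep.append(i)
    return a[keep]

unique_rows([[1, 2, 2], [3, 4, 5], [1, 2, 2]])
# array([[1, 2, 2],
#        [3, 4, 5]])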
From: David H. <dav...@gm...> - 2006-07-12 02:42:26
2006/7/11, JJ <jos...@ya...>:
>
> 1) is it possible to get the function unique() to work with matrices,
> perhaps with a unique_rows() function to work with matrices of more than
> one column?

The problem is that applying unique to different rows will return vectors with different lengths. So you could not return an array, much less a matrix. You'd have to return a list of arrays or 1D matrices.

David
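A small illustration of the per-row behaviour David describes (the list comprehension is just an example, not an existing API):

import numpy

m = numpy.array([[1, 2, 2],
                 [3, 4, 5]])

# unique() per row yields vectors of different lengths, so the natural
# return value is a list of arrays rather than an array or matrix.
per_row = [numpy.unique(row) for row in m]
# [array([1, 2]), array([3, 4, 5])]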
From: Sasha <nd...@ma...> - 2006-07-12 02:38:34
I had similar hopes when I submitted the array interface patch <https://sourceforge.net/tracker/index.php?func=detail&aid=1452906&group_id=5470&atid=305470> and announced it on python-dev <http://aspn.activestate.com/ASPN/Mail/Message/python-dev/3068813>. I am still waiting for anyone to comment on it :-(

On 7/11/06, Travis Oliphant <oli...@ee...> wrote:
> Filip Wasilewski wrote:
>
> > Hi,
> >
> > the way of accessing data with __array_interface__, as shown by Travis
> > in [1], also works nicely when used with builtin array.array (if someone
> > here is still using it;).
> >
> > Time to convert array.array to ndarray is O(N) but can be made O(1) just
> > by simple subclassing.
> >
> > [1] http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3191164
>
> This is exactly the kind of thing I'd like to see get into Python.
> Thanks for picking up the ball..
>
> -Travis
From: David H. <dav...@gm...> - 2006-07-12 02:36:12
There probably is a bug because stats.binom.ppf(.975, 100, .5) crashes the ipython shell. win2k, P3, latest binary release.

David

2006/7/11, JJ <jos...@ya...>:
>
> Am I using the wrong syntax for the binom.ppf command, or is there a bug?
>
> >>> stats.binom.ppf(.975,100,.5)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "/usr/lib64/python2.4/site-packages/scipy/stats/distributions.py", line 3590, in ppf
>     insert(output,cond,self._ppf(*goodargs) + loc)
>   File "/usr/lib64/python2.4/site-packages/numpy/lib/function_base.py", line 501, in insert
>     return _insert(arr, mask, vals)
> TypeError: array cannot be safely cast to required type
> >>>
> -----------------
> The info pages for binom.ppf state that:
>   binom.ppf(q,n,pr,loc=0)
>   - percent point function (inverse of cdf --- percentiles)
> So I would expect binom.ppf to take three variables.
>
> I expected the function to return a number, such as is done in matlab:
>   N = 100
>   alpha = 0.05
>   p1 = 0.30
>   cutoff = binoinv(1-alpha, N, p1)
>
>   cutoff =
>
>       38
>
> Any suggestions?
>
> JJ
From: Travis O. <oli...@ie...> - 2006-07-12 01:24:06
Christian Kristukat wrote:
> Hi,
> currently the bdist_rpm build method seems to be quite unstable. It works for
> some recent svn revisions, for the current 2804, however, not.
> The error messages begin with:
>
> building 'numpy.core.multiarray' extension
> compiling C sources
> C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586
> -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -O2 -g -m32
> -march=i586 -mtune=i686 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fPIC
>
> creating build/temp.linux-i686-2.4
> creating build/temp.linux-i686-2.4/numpy
> creating build/temp.linux-i686-2.4/numpy/core
> creating build/temp.linux-i686-2.4/numpy/core/src
> compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include
> -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include
> -I/usr/include/python2.4 -c'
> gcc: numpy/core/src/multiarraymodule.c
> numpy/core/src/multiarraymodule.c:24:28: error: numpy/noprefix.h: No such file
> or directory
> numpy/core/src/multiarraymodule.c:33: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or
> ‘__attribute__’ before ‘*’ token

Make sure you get rid of the MANIFEST file in the source directory before trying to run sdist or bdist_rpm. The MANIFEST file is not being deleted when it is out of date...

-Travis
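A minimal sketch of the workaround Travis describes, assuming you are in the numpy source tree (MANIFEST is the standard distutils file list; the rest is illustrative):

import os

# A stale MANIFEST can leave newer headers such as numpy/noprefix.h out
# of the sdist/bdist_rpm tarball, producing the build error above.
if os.path.exists('MANIFEST'):
    os.remove('MANIFEST')
# then re-run:  python setup.py bdist_rpm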
From: Christian K. <ck...@ho...> - 2006-07-12 01:11:32
Hi,
currently the bdist_rpm build method seems to be quite unstable. It works for some recent svn revisions, for the current 2804, however, not. The error messages begin with:

building 'numpy.core.multiarray' extension
compiling C sources
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -O2 -g -m32 -march=i586 -mtune=i686 -fmessage-length=0 -D_FORTIFY_SOURCE=2 -fPIC

creating build/temp.linux-i686-2.4
creating build/temp.linux-i686-2.4/numpy
creating build/temp.linux-i686-2.4/numpy/core
creating build/temp.linux-i686-2.4/numpy/core/src
compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c'
gcc: numpy/core/src/multiarraymodule.c
numpy/core/src/multiarraymodule.c:24:28: error: numpy/noprefix.h: No such file or directory
numpy/core/src/multiarraymodule.c:33: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token

Regards, Christian
From: JJ <jos...@ya...> - 2006-07-12 00:23:50
Travis Oliphant <oliphant <at> ee.byu.edu> writes:
> But, some kind of function that returns an array with specific
> entries deleted would be nice.

I agree. This would be just fine.

> We could over-ride the iterator
> behavior of matrices, though to handle 1xn and nx1 matrices
> identically if that is desirable.

I had tried this iteration on a month-old version of numpy and it did not work. I guess this now has been changed. I just updated my copy but have not yet tried it. An over-ride might be nice. But just off the topic, could you get a matrix of real numbers such as A = [[1.0, 2.0, 3.0]] to be used to select rows/columns, as in B[:,A]? I guess this would require a hidden conversion to integers, as well as code to handle selection using a matrix.

> Svd returns matrices now. Except for the list of singular values
> which is still an array. Do you want a 1xn matrix instead of an
> array?

I had just tried this with my new version of numpy, but I had used svd as follows:

import scipy.linalg as la
res = la.svd(M)

That returned arrays, but I see that using:

res = linalg.svd(M)

returns matrices. Apparently, both numpy and scipy have linalg packages, which differ. I did not know that. Whoops.
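A sketch of the "hidden conversion" JJ asks about, done explicitly (the names A and B come from his question; this is not existing behaviour of matrix indexing):

import numpy

B = numpy.matrix(numpy.arange(20).reshape(4, 5))
A = numpy.matrix([[1.0, 2.0, 3.0]])

# Convert the real-valued matrix to a flat integer index array first,
# then use ordinary fancy indexing to pick out those columns.
idx = numpy.asarray(A, dtype=int).ravel()
cols = B[:, idx]    # columns 1, 2 and 3 of B, still a matrix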
From: Keith G. <kwg...@gm...> - 2006-07-12 00:11:15
On 7/11/06, Travis Oliphant <oli...@ee...> wrote:
> JJ wrote:
> > 4) It would be nice if the linear algebra package and other packages returned
> > matrices if given matrices. For example, if M is a matrix, svd(M) now returns
>
> Svd returns matrices now. Except for the list of singular values which
> is still an array. Do you want a 1xn matrix instead of an array?

That sounds good to me. The same goes for eig and eigh:

>> eigval,eigvec = linalg.eig(rand(2,2))
>> eigval
array([-0.06035002,  0.14320639])
>> eigvec
matrix([[ 0.54799954, -0.83647863],
        [-0.83647863, -0.54799954]])
From: Michael S. <mic...@gm...> - 2006-07-12 00:07:57
On 7/12/06, JJ <jos...@ya...> wrote:
> 2) It would be very convienient to have some simple way to delete selected
> columns of a matrix. For example, in matlab the command is X[:,[3,5,7]]=[] to
> delete the three selected columns. It would be nice if such a command would
> also work with selections, as in X[:,A[0,:]<4] = [], where X and A are matrices.

+1. In R negative integers are used for this purpose and a copy of the array is returned. e.g.

> x = 1:10
> x
 [1]  1  2  3  4  5  6  7  8  9 10
> x[-1]
 [1]  2  3  4  5  6  7  8  9 10
> x[c(-1,-2)]
 [1]  3  4  5  6  7  8  9 10
> x[-c(1,2)]
 [1]  3  4  5  6  7  8  9 10

I like the current use of negative indices in numpy, but I do find myself missing the ability to easily make a copy of the array without certain indices.

Mike
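One way to get the same effect in numpy with a boolean mask (variable names mirror the R session above; there was no built-in equivalent at the time):

import numpy

x = numpy.arange(1, 11)          # like R's 1:10

# Drop elements 0 and 1 by masking them out; indexing with the boolean
# mask returns a copy, as in R.
mask = numpy.ones(len(x), dtype=bool)
mask[[0, 1]] = False
x_without = x[mask]              # array([ 3,  4,  5,  6,  7,  8,  9, 10])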
From: JJ <jos...@ya...> - 2006-07-12 00:01:06
Am I using the wrong syntax for the binom.ppf command, or is there a bug?

>>> stats.binom.ppf(.975,100,.5)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/scipy/stats/distributions.py", line 3590, in ppf
    insert(output,cond,self._ppf(*goodargs) + loc)
  File "/usr/lib64/python2.4/site-packages/numpy/lib/function_base.py", line 501, in insert
    return _insert(arr, mask, vals)
TypeError: array cannot be safely cast to required type
>>>
-----------------
The info pages for binom.ppf state that:
  binom.ppf(q,n,pr,loc=0)
  - percent point function (inverse of cdf --- percentiles)
So I would expect binom.ppf to take three variables.

I expected the function to return a number, such as is done in matlab:

N = 100
alpha = 0.05
p1 = 0.30
cutoff = binoinv(1-alpha, N, p1)

cutoff =

    38

Any suggestions?

JJ
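While the ppf path is broken, a hedged workaround sketch is to invert binom.cdf directly; the helper name binom_ppf_workaround is made up for this example:

import numpy
from scipy import stats

def binom_ppf_workaround(q, n, p):
    # Smallest k in 0..n whose cumulative probability reaches q; this is
    # the percent point function by definition.
    k = numpy.arange(0, n + 1)
    cdf = stats.binom.cdf(k, n, p)
    return k[cdf >= q][0]

binom_ppf_workaround(0.95, 100, 0.30)   # should match matlab's binoinv: 38
binom_ppf_workaround(0.975, 100, 0.5)   # the call that was crashing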
From: Travis O. <oli...@ee...> - 2006-07-11 23:43:11
Filip Wasilewski wrote:

> Hi,
>
> the way of accessing data with __array_interface__, as shown by Travis
> in [1], also works nicely when used with builtin array.array (if someone
> here is still using it;).
>
> Time to convert array.array to ndarray is O(N) but can be made O(1) just
> by simple subclassing.
>
> [1] http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3191164

This is exactly the kind of thing I'd like to see get into Python. Thanks for picking up the ball..

-Travis
From: Travis O. <oli...@ee...> - 2006-07-11 23:41:56
JJ wrote:

> Hello. For what its worth, as a newly ex-matlab user I would like to make a few
> suggestions on use of matrices in numpy. As per earlier discussions, I like the
> idea of being able to choose matrices as the default (vs arrays). But if
> possible, it would be nice if all functions etc that took matrices also returned
> matrices. I know effort has been made on this. Here are my suggestions:
>
> 1) is it possible to get the function unique() to work with matrices, perhaps
> with a unique_rows() function to work with matrices of more than one column?
>
> 2) It would be very convienient to have some simple way to delete selected
> columns of a matrix.

This is a good idea. It would be nice to address it at some point. There is a Python syntax for it, but we are not using it yet:

del X[...]

Of course one of the problems with this syntax (as opposed to a function that returns a new array) is that because X can share its data with other arrays, you can't just re-size its memory or other arrays depending on that chunk of memory will be in deep trouble. So, we are probably not going to be able to have a "syntax-style" delete. But, some kind of function that returns an array with specific entries deleted would be nice.

> 3) It would be nice if matrices could be used for iterations. For example, if M
> was a 1 x n matrix, it would be nice to be able to use: for i in M: and
> iterate over the individual items in M.

They can be used as iterators. The problem here is simply convention (rows are iterated over first). We could over-ride the iterator behavior of matrices, though, to handle 1xn and nx1 matrices identically if that is desirable.

> 4) It would be nice if the linear algebra package and other packages returned
> matrices if given matrices. For example, if M is a matrix, svd(M) now returns

Svd returns matrices now. Except for the list of singular values which is still an array. Do you want a 1xn matrix instead of an array?

-Travis
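A sketch of the kind of "function that returns an array with specific entries deleted" mentioned above (the name delete_columns is hypothetical; nothing like it shipped with numpy at the time):

import numpy

def delete_columns(X, cols):
    # Return a copy of X with the listed column indices removed.
    cols = set(cols)
    keep = [j for j in range(X.shape[1]) if j not in cols]
    return X[:, keep]

X = numpy.matrix(numpy.arange(12).reshape(3, 4))
delete_columns(X, [1, 3])    # copy of X without columns 1 and 3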
From: Andrew S. <str...@as...> - 2006-07-11 23:38:53
John Hunter wrote:

> >>>>> "Eric" == Eric Firing <ef...@ha...> writes:
>
>     Eric> Correction: I did fix the first problem, and the second
>     Eric> problem is not at all what I thought. Instead, the
>     Eric> examples/data/lena.jpg file in my svn mpl directory is
>     Eric> corrupted. I have no idea why. Looking directly at the
>
> This usually happens whenever Andrew commits -- don't know why
> (platform dependent new line problem, perhaps?)
>
> peds-pc311:~/mpl> svn log | grep astraw|head
> r2480 | astraw | 2006-06-15 06:33:07 -0500 (Thu, 15 Jun 2006) | 1 line
> r2430 | astraw | 2006-06-06 15:12:33 -0500 (Tue, 06 Jun 2006) | 1 line
> r2279 | astraw | 2006-04-10 10:35:31 -0500 (Mon, 10 Apr 2006) | 3 lines
> r2180 | astraw | 2006-03-20 15:38:12 -0600 (Mon, 20 Mar 2006) | 1 line

Hmm -- "usually happens"? I never noticed that. And I'm mystified as to whether the output of svn log shows that. Let me know if I play any more evil-line-ending tricks.

Anyhow, I think I fixed the corrupted file issue. I deleted the svn:eol-style property, set the svn:mime-type property to image/jpg, and re-uploaded lena.jpg. I suspect this may have been a victim of the cvs2svn switch, or perhaps I never checked it into cvs properly.

Cheers!
Andrew
From: Filip W. <fi...@ft...> - 2006-07-11 23:36:16
Hi,

the way of accessing data with __array_interface__, as shown by Travis in [1], also works nicely when used with builtin array.array (if someone here is still using it;).

Time to convert array.array to ndarray is O(N) but can be made O(1) just by simple subclassing.

[1] http://aspn.activestate.com/ASPN/Mail/Message/numpy-discussion/3191164

cheers,
fw

-----------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import array as _array
import sys

if sys.byteorder == 'little':
    _ENDIAN = '<'
else:
    _ENDIAN = '>'

_TYPES_CONV = {
    'c': '|u%%d',             # character 1
    'b': '|i%%d',             # signed integer 1
    'B': '|u%%d',             # unsigned integer 1
    'u': '%su%%d' % _ENDIAN,  # Unicode character 2
    'h': '%si%%d' % _ENDIAN,  # signed integer 2
    'H': '%su%%d' % _ENDIAN,  # unsigned integer 2
    'i': '%si%%d' % _ENDIAN,  # signed integer 2 (4?)
    'I': '%su%%d' % _ENDIAN,  # unsigned integer 2 (4?)
    'l': '%si%%d' % _ENDIAN,  # signed integer 4
    'L': '%su%%d' % _ENDIAN,  # unsigned integer 4
    'f': '%sf%%d' % _ENDIAN,  # floating point 4
    'd': '%sf%%d' % _ENDIAN,  # floating point 8
}

class array(_array.array):

    def __get_array_interface__(self):
        new = {}
        shape, typestr = (self.__len__(),), (_TYPES_CONV[self.typecode] % self.itemsize)
        new['shape'] = shape
        new['typestr'] = typestr
        new['data'] = (self.buffer_info()[0], False)  # writable
        return new

    __array_interface__ = property(__get_array_interface__, None,
                                   doc="array interface")

if __name__ == '__main__':
    size = 1000000
    typecode = 'f'

    new = array(typecode, xrange(size))
    old = _array.array(typecode, xrange(size))

    import numpy
    from time import clock as time

    t1 = time()
    nd = numpy.asarray(new)
    t1 = time() - t1
    #print nd

    t2 = time()
    nd = numpy.asarray(old)
    t2 = time() - t2
    #print nd

    print "new:", t1
    print "old:", t2
#EOF
From: Fernando P. <fpe...@gm...> - 2006-07-11 23:19:22
On 7/11/06, John Hunter <jdh...@ac...> wrote:
> >>>>> "Eric" == Eric Firing <ef...@ha...> writes:
>
>     Eric> Correction: I did fix the first problem, and the second
>     Eric> problem is not at all what I thought. Instead, the
>     Eric> examples/data/lena.jpg file in my svn mpl directory is
>     Eric> corrupted. I have no idea why. Looking directly at the
>
> This usually happens whenever Andrew commits -- don't know why
> (platform dependent new line problem, perhaps?)

Is that file tagged as binary in the repo? If it is, it should be impervious to OS-dependent EOL conventions...

Cheers,

f
From: JJ <jos...@ya...> - 2006-07-11 23:17:11
Hello. For what it's worth, as a newly ex-matlab user I would like to make a few suggestions on the use of matrices in numpy. As per earlier discussions, I like the idea of being able to choose matrices as the default (vs arrays). But if possible, it would be nice if all functions etc. that took matrices also returned matrices. I know effort has been made on this. Here are my suggestions:

1) Is it possible to get the function unique() to work with matrices, perhaps with a unique_rows() function to work with matrices of more than one column?

2) It would be very convenient to have some simple way to delete selected columns of a matrix. For example, in matlab the command is X[:,[3,5,7]]=[] to delete the three selected columns. It would be nice if such a command would also work with selections, as in X[:,A[0,:]<4] = [], where X and A are matrices. For me, the single most frustrating and time-consuming aspect of switching to and using numpy is determining how to select columns/rows of arrays/matrices in different situations. In addition to being time consuming to figure out (relative to Matlab), often the code is quite verbose. I have made a few suggestions on this topic in an earlier post (under the title "Whats wrong with matrices?").

3) It would be nice if matrices could be used for iterations. For example, if M was a 1 x n matrix, it would be nice to be able to use: for i in M: and iterate over the individual items in M.

4) It would be nice if the linear algebra package and other packages returned matrices if given matrices. For example, if M is a matrix, svd(M) now returns arrays.

Just some suggestions. I wish I knew more so I could help implement them. Maybe one day.

JJ
From: John H. <jdh...@ac...> - 2006-07-11 23:05:19
>>>>> "Eric" == Eric Firing <ef...@ha...> writes:

    Eric> Correction: I did fix the first problem, and the second
    Eric> problem is not at all what I thought. Instead, the
    Eric> examples/data/lena.jpg file in my svn mpl directory is
    Eric> corrupted. I have no idea why. Looking directly at the

This usually happens whenever Andrew commits -- don't know why (platform dependent new line problem, perhaps?)

peds-pc311:~/mpl> svn log | grep astraw|head
r2480 | astraw | 2006-06-15 06:33:07 -0500 (Thu, 15 Jun 2006) | 1 line
r2430 | astraw | 2006-06-06 15:12:33 -0500 (Tue, 06 Jun 2006) | 1 line
r2279 | astraw | 2006-04-10 10:35:31 -0500 (Mon, 10 Apr 2006) | 3 lines
r2180 | astraw | 2006-03-20 15:38:12 -0600 (Mon, 20 Mar 2006) | 1 line

JDH
From: Eric F. <ef...@ha...> - 2006-07-11 22:56:04
Eric Firing wrote:
> Andrew Straw wrote:
>
> > Actually, this has been in MPL for a while. For example, see the
> > image_demo3.py example. You don't need the __array_interface__ for this
> > bit of functionality.
>
> It's broken.
>
> The first problem is that the kw "aspect = 'preserve'" is no longer
> needed or supported. Removing that (as I will do in svn shortly), I get
> a somewhat scrambled image.

Correction: I did fix the first problem, and the second problem is not at all what I thought. Instead, the examples/data/lena.jpg file in my svn mpl directory is corrupted. I have no idea why. Looking directly at the version on svn via the svn browser, I see that it is corrupted also.

Eric
From: Eric F. <ef...@ha...> - 2006-07-11 22:38:32
Andrew Straw wrote:
> Actually, this has been in MPL for a while. For example, see the
> image_demo3.py example. You don't need the __array_interface__ for this
> bit of functionality.

It's broken.

The first problem is that the kw "aspect = 'preserve'" is no longer needed or supported. Removing that (as I will do in svn shortly), I get a somewhat scrambled image.

Eric
From: Andrew S. <str...@as...> - 2006-07-11 22:25:52
Travis Oliphant wrote:

> Filip Wasilewski wrote:
>
> > Hi Travis,
> >
> > this is a great example of the __array_interface__ usage.
>
> Just to complete the example:
>
> With the Image.py patch that adds the __array_interface__
> you can do
>
> import Image, pylab
>
> im = Image.open('somefile.jpg')
> pylab.imshow(im, origin='lower')
>
> and get a nice picture in the matplotlib window (at least if you are
> running NumPy).

Actually, this has been in MPL for a while. For example, see the image_demo3.py example. You don't need the __array_interface__ for this bit of functionality.
From: Travis O. <oli...@ee...> - 2006-07-11 22:02:20
Filip Wasilewski wrote:

> Hi Travis,
>
> this is a great example of the __array_interface__ usage.
>
> The second seems to be more complex and may be a more general. The
> memory of string created by self.tostring() seems to be deallocated
> before array is created (v 0.9.9.2788, win).
> Everything works fine after storing the reference to data, but this
> probably should be done somewhere else:
>
>     def __get_array_interface__(self):
>         new = {}
>         shape, typestr = _conv_type_shape(self)
>         new['shape'] = shape
>         new['typestr'] = typestr
>         new['data'] = self.tostring()
>         self._str_data = new['data']  # a dirty hack
>         return new

This is now fixed in NumPy. The problem was that when the "buffer" interface was used a reference to the object was kept (but not the buffer). In this case it's the reference to the buffer that is needed.

-Travis
From: Travis O. <oli...@ee...> - 2006-07-11 22:02:05
Filip Wasilewski wrote:

> Hi Travis,
>
> this is a great example of the __array_interface__ usage.

Just to complete the example:

With the Image.py patch that adds the __array_interface__ you can do

import Image, pylab

im = Image.open('somefile.jpg')
pylab.imshow(im, origin='lower')

and get a nice picture in the matplotlib window (at least if you are running NumPy).

-Travis
From: Filip W. <fi...@ft...> - 2006-07-11 21:32:13
Hi Travis,

this is a great example of the __array_interface__ usage.

I have spotted some problems after patching the Image.py module and trying to display an array created from Image in matplotlib.

The first issue is a minor one. There is a difference in axis order between ndarray and PIL:

def _conv_type_shape(im):
    shape = im.size[::-1]
    typ, extra = _MODE_CONV[im.mode]
    if extra is None:
        return shape, typ
    shape += (extra,)
    return shape, typ

The second seems to be more complex and may be more general. The memory of the string created by self.tostring() seems to be deallocated before the array is created (v 0.9.9.2788, win). Everything works fine after storing the reference to data, but this probably should be done somewhere else:

def __get_array_interface__(self):
    new = {}
    shape, typestr = _conv_type_shape(self)
    new['shape'] = shape
    new['typestr'] = typestr
    new['data'] = self.tostring()
    self._str_data = new['data']  # a dirty hack
    return new

best,
fw
From: Mathew Y. <my...@jp...> - 2006-07-11 21:17:30
I bestow upon Sasha the mantle of guru.

Sasha wrote:
> Here is the solution of a half of the problem:
>
> >>> a=array([1,2,3,0,40,50,60,0,7,8,9])
> >>> 5+where(logical_and.accumulate(a[5:]!=0))
> array([5, 6])
>
> the rest is left as an exercise to the reader :-)
>
> Hint a[::-1] will reverse a.
>
> On 7/11/06, Mathew Yeates <my...@jp...> wrote:
> > I can handle the following problem by iterating through some indices but
> > I'm looking for a more elegant solution.
> >
> > If I have a 1d array, I want to find a contiguous nonzero region about a
> > given index. For example, if a=[1,2,3,0,40,50,60,0,7,8,9] and we start
> > with the index of 5, then I want the indices 4,5,6
> >
> > Any gurus out there?
> >
> > Mathew
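A sketch completing the exercise Sasha leaves to the reader, growing the run in both directions from the starting index (the function name and its details are illustrative, not from the thread):

import numpy

def contiguous_nonzero_about(a, i):
    # Indices of the maximal run of nonzero values containing index i.
    a = numpy.asarray(a)
    if a[i] == 0:
        return numpy.array([], dtype=int)
    # Forward: keep indices while values stay nonzero.
    right = i + numpy.where(numpy.logical_and.accumulate(a[i:] != 0))[0]
    # Backward: same trick on the reversed left-hand part (Sasha's hint).
    if i == 0:
        left = numpy.array([], dtype=int)
    else:
        left = i - 1 - numpy.where(numpy.logical_and.accumulate(a[i-1::-1] != 0))[0]
    return numpy.concatenate([left[::-1], right])

a = [1, 2, 3, 0, 40, 50, 60, 0, 7, 8, 9]
contiguous_nonzero_about(a, 5)   # array([4, 5, 6])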