From: Fernando P. <fpe...@gm...> - 2006-07-08 01:07:01
|
On 7/7/06, Travis Oliphant <oli...@ee...> wrote:
> I just committed a big change to the NumPy SVN (r2773-r2777) which adds
> the prefix npy_ or NPY_ to all names not otherwise pre-fixed.
>
> There is also a noprefix.h header that allows you to use the names
> without the prefixes defined, as before.
>
> Plus:
>
> 1) The special FLAG names with _FLAGS now have the _FLAGS removed
> 2) The PY_ARRAY_TYPES_PREFIX is ignored.
> 3) The tMIN/tMAX macros are removed
> 4) MAX_DIMS --> NPY_MAXDIMS
> 5) OWN_DATA --> NPY_OWNDATA

Make sure scipy builds after these; I think I just saw it not build with 'OWN_DATA' errors. Maybe I just caught you in-between commits...

f |
From: Travis O. <oli...@ee...> - 2006-07-08 01:03:06
|
I just committed a big change to the NumPy SVN (r2773-r2777) which adds the prefix npy_ or NPY_ to all names not otherwise pre-fixed. There is also a noprefix.h header that allows you to use the names without the prefixes defined, as before.

Plus:

1) The special FLAG names with _FLAGS now have the _FLAGS removed
2) The PY_ARRAY_TYPES_PREFIX is ignored.
3) The tMIN/tMAX macros are removed
4) MAX_DIMS --> NPY_MAXDIMS
5) OWN_DATA --> NPY_OWNDATA

There is the header oldnumeric.h that can be used for compatibility with the Numeric C-API (including the names CONTIGUOUS and OWN_DATA).

Please try out the new C-API and let's get the bugs wrinkled out. Hopefully this will give us a more solid foundation for the future... I've already committed changes to matplotlib SVN that allow it to work with both old and new NumPy.

-Travis |
From: Keith G. <kwg...@gm...> - 2006-07-07 23:01:48
|
On 7/7/06, Travis Oliphant <oli...@ie...> wrote:
> Bill Baxter wrote:
> > 4) eye,empty,rand,ones,zeros,arange and anything else that builds an
> > array from scratch or from a python list should have a matrix equivalent
>
> Would
>
> from numpy.defmatrix import ones, zeros, ...
>
> work?

Can defmatrix be shortened to matrix? So

from numpy.matrix import ones, zeros, ... |
From: Sasha <nd...@ma...> - 2006-07-07 22:41:30
|
Travis' recent change <http://projects.scipy.org/scipy/numpy/changeset/2771> highlighted the definitions of the tMIN/tMAX macros. Typed min/max were the subject of some heated discussion between Linux kernel developers many years ago <http://lwn.net/2001/0823/kernel.php3> that resulted in the following definitions in the current kernel:

"""
/*
 * min()/max() macros that also do
 * strict type-checking.. See the
 * "unnecessary" pointer comparison.
 */
#define min(x,y) ({ \
        typeof(x) _x = (x); \
        typeof(y) _y = (y); \
        (void) (&_x == &_y); \
        _x < _y ? _x : _y; })

#define max(x,y) ({ \
        typeof(x) _x = (x); \
        typeof(y) _y = (y); \
        (void) (&_x == &_y); \
        _x > _y ? _x : _y; })

/*
 * ..and if you can't take the strict
 * types, you can specify one yourself.
 *
 * Or not use min/max at all, of course.
 */
#define min_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x < __y ? __x : __y; })
#define max_t(type,x,y) \
        ({ type __x = (x); type __y = (y); __x > __y ? __x : __y; })
"""

The idea is to force people to use the _t versions unless the types of x and y are exactly the same. NumPy's tMIN and tMAX are clearly addressing the same problem, but the current definitions

#define tMAX(a,b,typ) {typ _x_=(a); typ _y_=(b); _x_>_y_ ? _x_ : _y_}
#define tMIN(a,b,typ) {typ _x_=(a); typ _y_=(b); _x_<_y_ ? _x_ : _y_}

are unlikely to work with any compiler: a brace-enclosed block is a statement, not an expression. The Linux kernel uses the gcc trick of wrapping a block in parentheses to get an expression, but I don't think this is acceptable in numpy code. Not surprisingly, these macros are not used anywhere. I propose to remove them. |
From: Tim H. <tim...@co...> - 2006-07-07 22:01:58
|
So I put together a prototype for an extended dot function that takes multiple arguments. This allows multiple dots to be computed in a single call:

dot(dot(dot(a, b), c), d) => dotn(a, b, c, d)

On Bill Baxter's suggestion, dotn attempts to do the dots in an order that minimizes operations. That appears to work fine, although I wasn't very careful and I wouldn't be at all surprised if some problems crop up with that part.

The interesting thing is that, since dot can perform both matrix and dot products, it's not associative. That is, dot(a, dot(b, c)) != dot(dot(a, b), c) in general. The simplest example is three vectors:

>>> dot(dot([1,2,3], [3,2,1]), [1,1,1])
array([10, 10, 10])
>>> dot([1,2,3], dot([3,2,1], [1,1,1]))
array([ 6, 12, 18])

That's mind-numbingly obvious in retrospect, but it means that my simple-minded dot product optimizer is all wrong, because it can change the order of evaluation in such a way that the result changes. That means two things:

1. I need to pick an effective order of evaluation. So, dotn(a, b, c, ...) will be evaluated as if the products were evaluated in left-to-right order.

2. The optimizer needs to respect that order when rearranging the order in which the products are performed. Dot products must remain dot products and matrix products must remain matrix products under any transformations that take place.

Anyway, that's that for now. I'll report back when I get it fixed up.

-tim |
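A minimal sketch of the fixed left-to-right semantics Tim describes (the dotn name and signature are taken from his description; his actual prototype and its operation-count optimizer are not shown here):

```python
import numpy as np
from functools import reduce

def dotn(*arrays):
    # Sketch of the evaluation rule only, not Tim's actual prototype:
    # chained dot products are evaluated strictly left to right,
    # i.e. dotn(a, b, c) == dot(dot(a, b), c).
    return reduce(np.dot, arrays)

a, b, c = [1, 2, 3], [3, 2, 1], [1, 1, 1]
left = dotn(a, b, c)             # same as dot(dot(a, b), c)
right = np.dot(a, np.dot(b, c))  # right-to-left grouping differs
```

Because dot is not associative for mixed vector/matrix operands, any reordering an optimizer performs has to reproduce the left-to-right result.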
From: Mark H. <ma...@mi...> - 2006-07-07 21:28:09
|
The problem was matplotlib-0.87.2-1; it is broken wrt the Demo. The latest matplotlib .3 works.

In summary, to run the demo, the following versions were required (released packages, all built from tarballs):

matplotlib-0.87.3, i.e. the current package (.2 in Fedora extras fails, meaning I needed to chase down the matplotlib dependencies myself, e.g. tk.h in tk-devel)
Numpy 0.9.8 (latest Fedora package is 0.9.5, which is too old)
Scipy 0.4.9

And, at least as of now, svn checkouts from the numpy, scipy, and matplotlib subversion repos also work together for Demo1.

Mark

Robert Kern wrote:
> Mark Heslep wrote:
>> Robert Kern wrote:
>>> The latest SVN revisions should match (it's a bug, otherwise). numpy 2761 is
>>> recent enough that an SVN checkout of scipy will probably be fine.
>>
>> Yes SVN scipy/numpy fix the problem. Sorry, I missed Travis's post on
>> scipy-user here:
>> http://projects.scipy.org/pipermail/scipy-user/2006-June/008438.html
>> to that effect.
>>
>> I should just get specific: I was simply trying to run the basic
>> optimization demo on the front page of the wiki as check on SVN
>> (http://www.scipy.org/Cookbook/OptimizationDemo1) and it appears SVN
>> scipy/numpy breaks matplotlib-0.87.2-1 (fc5) in the demo. Do I need SVN
>> matplotlib as well?
>
> Yes, you do, thanks to the removal of the Numeric typecodes (Int32, etc.).
> However, the error you get looks to be unrelated. |
From: Tim H. <tim...@co...> - 2006-07-07 21:22:01
|
Tim Hochberg wrote: > > > So, I put together of a prototype dot function > > dot( > Ooops! This wasn't supposed to go out yet, sorry. More later. -tim |
From: Tim H. <tim...@co...> - 2006-07-07 21:21:20
|
So, I put together of a prototype dot function dot( |
From: Fernando P. <fpe...@gm...> - 2006-07-07 20:53:52
|
On 7/7/06, Travis Oliphant <oli...@ee...> wrote: > I'm not opposed to putting a *short* prefix in front of everything (the > Int32, Float64, stuff came from numarray which now has it's own > back-ward compatible header where it could be placed now anyway). > Perhaps npy_ would be a suitable prefix. > > That way we could get rid of the cruft entirely. Well, now is your chance to clean up all the APIs, /including/ the C ones :) npy_ or NPy, I'm not too sure what conventions you are following at the C naming level. I'm all for cruft removal and making things easy to use out of the box, even at the cost of making the transition a bit more work. Remember, that's a one time cost. And numpy is so good that people /will/ transition from Numeric eventually, so might as well make the end result as nice and appealing as possible. As other tools (like matplotlib) move eventually to numpy-only support, the incentive to making the switch will really go up for just about anyone using python for numerical work. At the risk of sounding a bit harsh, I think you can then say, 'take the pain for the switch if you really want all the new goodies'. Those who positively, absolutely can't update from Numeric can then just keep a frozen codebase. It's not like you're breaking Numeric 24.2 or deleting it from the internet :) Cheers, f |
From: Sven S. <sve...@gm...> - 2006-07-07 20:44:59
|
Ed Schofield schrieb:
> Okay ... <Ed rolls up his sleeves> ... let's make this the thread ;)
> I'd like to know why you, Sven, and anyone else on the list have gone
> back to using arrays after trying matrices. What was inconvenient about
> them? I'd like a nice juicy list. The whole purpose of the matrix
> class is to simplify 2d linear algebra. Where is it failing?

No, no, I must have given the wrong impression; I'm still in the matrix camp. My main complaint would have been the absence of equivalents for ones, zeros, etc., but it seems Travis has introduced numpy.matlib exactly for that, which is great. Before that, I sometimes felt like a second-class citizen because many people on the list argued that users should invent their own workarounds (which btw I have done in the meantime, but that's not the purpose of a ready-to-use matrix package, is it?).

> I'd like to help to make matrices more usable. Tell me what you want,
> and I'll work on some patches.

Well, if numpy.matlib does what I think it does, there are no pressing issues for me right now. Element-wise multiplication isn't that important for me. Of course I'll let you know as soon as something occurs to me ;-)

Thanks for offering,
Sven |
From: Travis O. <oli...@ee...> - 2006-07-07 20:39:17
|
Fernando Perez wrote:
> On 7/7/06, Travis Oliphant <oli...@ee...> wrote:
>> Also, (in latest SVN) the MAXMIN macros can be avoided using
>>
>> #define PYA_NOMAXMIN
>>
>> before including arrayobject.h
>
> Mmh, this looks crufty to me: special cases like these look bad in a
> library, and break the 'just works' ideal we all strive for, IMHO.

But it fixes the problem he's having without breaking anybody else's code that already uses the MAX / MIN macros. Besides, the PY_ARRAY_TYPES_PREFIX business is a lot more crufty.

I'm not opposed to putting a *short* prefix in front of everything (the Int32, Float64 stuff came from numarray, which now has its own backward-compatible header where it could be placed anyway). Perhaps npy_ would be a suitable prefix. That way we could get rid of the cruft entirely.

I suppose we could also provide the noprefix.h header that defines the old un-prefixed names for "backwards NumPy compatibility".

-Travis |
From: David M. C. <co...@ph...> - 2006-07-07 20:22:26
|
On Fri, 7 Jul 2006 15:26:41 +0100, "George Nurser" <gn...@go...> wrote:
> On 07/07/06, Robert Hetland <rhe...@ma...> wrote:
> [snip]
>> However, I use transpose often when not dealing with linear algebra, in
>> particular with reading in data, and putting various columns into
>> variables. Also, occasional in plotting (which expects things in
>> 'backward' order relative to x-y space), and communicating between
>> fortran programs (which typically use 'forward' order (x, y, z)) and
>> numpy (backward -- (z, x, y)).
>
> This is my usage as well. Also my primitive knowledge of numpy
> requires use of the transpose when iterating over indexes from where.
> Moreover I think the notation .T is perfectly reasonable. So I agree
> with:
>
>> I am very much in favor of .T, but it should be a full .transpose(), not
>> just swap the last two axes. I don't care so much for the others.
>
> +1 for .T == .transpose()

Another +1 from me. If transpose was a shorter word I wouldn't care :-)

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph... |
From: Travis O. <oli...@ee...> - 2006-07-07 20:21:46
|
I didn't compile the results, but the discussion on the idea of adding new attributes to the array object led to the following result.

Added: .T attribute to mean self.transpose()

.T
This was rather controversial, with many possibilities emerging. In the end, I think the common case of going back and forth between C-order and Fortran-order codes in a wide variety of settings convinced me to make .T a short-hand for .transpose() and add it as an attribute. This is now the behavior in SVN. Right now, for self.ndim < 2, this just returns a new reference to self (perhaps it should return a new view instead).

.M
While some were in favor, too many people opposed this (although the circular-reference argument was not convincing). Instead a numpy.matlib module was started to store matrix versions of the standard array-creation functions, and mat was re-labeled "asmatrix" so that a copy is not made by default.

.A
A few were in favor, but as this is just syntactic sugar for .__array__() or asarray(obj) or .view(ndarray), it was thrown out because it is not used enough to warrant an additional attribute.

.H
A few were in favor, but this can now be written .T.conj(), which is not bad, so it does not get a new attribute.

-Travis |
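A quick illustration of the spellings that survived the discussion (the array values here are made up):

```python
import numpy as np

# Made-up complex array to illustrate the new attribute.
a = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

at = a.T         # new shorthand for a.transpose()
ah = a.T.conj()  # conjugate (Hermitian) transpose, replacing the proposed .H
```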
From: Robert K. <rob...@gm...> - 2006-07-07 19:56:54
|
Mark Heslep wrote:
> Robert Kern wrote:
>> The latest SVN revisions should match (it's a bug, otherwise). numpy 2761 is
>> recent enough that an SVN checkout of scipy will probably be fine.
>
> Yes SVN scipy/numpy fix the problem. Sorry, I missed Travis's post on
> scipy-user here:
> http://projects.scipy.org/pipermail/scipy-user/2006-June/008438.html
> to that effect.
>
> I should just get specific: I was simply trying to run the basic
> optimization demo on the front page of the wiki as check on SVN
> (http://www.scipy.org/Cookbook/OptimizationDemo1) and it appears SVN
> scipy/numpy breaks matplotlib-0.87.2-1 (fc5) in the demo. Do I need SVN
> matplotlib as well?

Yes, you do, thanks to the removal of the Numeric typecodes (Int32, etc.). However, the error you get looks to be unrelated.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Mark H. <ma...@mi...> - 2006-07-07 19:42:06
|
Robert Kern wrote:
> The latest SVN revisions should match (it's a bug, otherwise). numpy 2761 is
> recent enough that an SVN checkout of scipy will probably be fine.

Yes, SVN scipy/numpy fix the problem. Sorry, I missed Travis's post on scipy-user here: http://projects.scipy.org/pipermail/scipy-user/2006-June/008438.html to that effect.

I should just get specific: I was simply trying to run the basic optimization demo on the front page of the wiki as a check on SVN (http://www.scipy.org/Cookbook/OptimizationDemo1), and it appears SVN scipy/numpy breaks matplotlib-0.87.2-1 (fc5) in the demo. Do I need SVN matplotlib as well?

Mark

From the Opt. Demo:

> In [1]: from scipy import arange, special, optimize
> In [2]: x = arange(0,10,0.01)
> In [4]: for k in arange(0.5,5.5):
>    ...:     y = special.jv(k,x)
>    ...:     plot(x,y)
>    ...:
> ---------------------------------------------------------------------------
> exceptions.ZeroDivisionError    Traceback (most recent call ...)
...
> /usr/lib/python2.4/site-packages/matplotlib/ticker.py in
> bin_boundaries(self, vmin, vmax)
>     766     def bin_boundaries(self, vmin, vmax):
>     767         nbins = self._nbins
> --> 768         scale, offset = scale_range(vmin, vmax, nbins)
>     769         vmin -= offset
>     770         vmax -= offset
>
> /usr/lib/python2.4/site-packages/matplotlib/ticker.py in
> scale_range(vmin, vmax, n, threshold)
>     731     dv = abs(vmax - vmin)
>     732     meanv = 0.5*(vmax+vmin)
> --> 733     var = dv/max(abs(vmin), abs(vmax))
>     734     if var < 1e-12:
>     735         return 1.0, 0.0
>
> ZeroDivisionError: float division |
From: David H. <dav...@gm...> - 2006-07-07 18:48:08
|
Hi, For the first release, it would be nice if every function had a docstring, even a small one. There are 279 callable items in the numpy namespace, and 94 of those lack a docstring, albeit most of those probably don't get much usage. To help the process, I filed Ticket #174 and attached a couple of docstrings, feel free to add your own. Missing docstrings: ['issubsctype', 'unicode_', 'string', 'float96', 'pkgload', 'void', 'unicode0', 'void0', 'object0', 'memmap', 'nan_to_num', 'PackageLoader', 'object_', 'dtype', 'unsignedinteger', 'uintc', 'uint0', 'uint8', 'chararray', 'uint64', 'finfo', 'add_newdoc', 'array_repr', 'array_str', 'longlong', 'int16', 'mat', 'uint', 'correlate', 'int64', 'choose', 'complexfloating', 'recarray', 'mean', 'str_', 'ulonglong', 'matrix', 'uint32', 'byte', 'ctypes_load_library', 'signedinteger', 'ndim', 'number', 'bool8', 'msort', 'bool_', 'inexact', 'broadcast', 'short', 'ubyte', 'std', 'double', 'require', 'take', 'issubclass_', 'longfloat', 'deprecate', 'bincount', 'array2string', 'float64', 'ushort', 'float_', 'geterrobj', 'iterable', 'intp', 'flexible', 'sctype2char', 'longdouble', 'flatiter', 'generic', 'show_config', 'i0', 'uintp', 'character', 'uint16', 'float32', 'int32', 'integer', 'get_printoptions', 'seterrobj', 'add_docstring', 'intc', 'var', 'int_', 'histogram', 'issubdtype', 'int0', 'int8', 'record', 'obj2sctype', 'single', 'floating', 'test', 'string0'] Cheers, David |
From: Robert K. <rob...@gm...> - 2006-07-07 18:46:36
|
Mark Heslep wrote:
> Is there any general sync point with development Numpy from subversion
> and the SciPy releases? Ive got Numpy 0.9.9.2761 and Scipy 0.4.9
> installed with (I believe several) breakages, in particular:
>
>> In [8]: from scipy import special
>> ...
>> ImportError: cannot import name outerproduct
>
> I suppose Scipy is not picking up the deprecation of outerproduct. No
> surprise that bleeding edge Numpy subversion doesn't play with a SciPy
> release; I was just wondering if there is a generally used/known way to
> make it happen. Do I need to fall back to a Numpy release? Or move
> forward on SciPy?

The releases are synched such that the numpy version is "twice" (by an idiosyncratic form of arithmetic) that of the scipy version: numpy 0.9.8 <-> scipy 0.4.9. The latest SVN revisions should match (it's a bug, otherwise). numpy 2761 is recent enough that an SVN checkout of scipy will probably be fine.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Mark H. <ma...@mi...> - 2006-07-07 18:36:40
|
Is there any general sync point with development Numpy from subversion and the SciPy releases? I've got Numpy 0.9.9.2761 and Scipy 0.4.9 installed with (I believe several) breakages, in particular:

> In [8]: from scipy import special
> ...
> /usr/lib/python2.4/site-packages/scipy/linalg/basic.py
>      20     conjugate,ravel,r_,mgrid,take,ones,dot,transpose,sqrt,add,real
>      21     import numpy
> ---> 22     from numpy import asarray_chkfinite, outerproduct,
>             concatenate, reshape, single
>      23     from numpy import matrix as Matrix
>      24     import calc_lwork
>
> ImportError: cannot import name outerproduct

I suppose Scipy is not picking up the deprecation of outerproduct. No surprise that bleeding-edge Numpy subversion doesn't play with a SciPy release; I was just wondering if there is a generally used/known way to make it happen. Do I need to fall back to a Numpy release? Or move forward on SciPy?

Mark |
From: Travis O. <oli...@ie...> - 2006-07-07 18:13:17
|
Bill Baxter wrote:
> On 7/7/06, Ed Schofield <sch...@ft...> wrote:
>> Okay ... <Ed rolls up his sleeves> ... let's make this the thread ;)
>> I'd like to know why you, Sven, and anyone else on the list have gone
>> back to using arrays after trying matrices. What was inconvenient about
>> them? I'd like a nice juicy list. The whole purpose of the matrix
>> class is to simplify 2d linear algebra. Where is it failing?
>
> Okay, here are a few that come to mind.
> 1) Functions that take a matrix but return an array. Maybe these are
> all fixed now. But they better be fixed not just in numpy but in
> scipy too. To me this implies there needs to be some standard idiom
> for how to write a generic array-protocol-using function so that you
> don't have to think about it.

A lot of these are fixed. The mechanism for handling this is in place: either using asanyarray in the function or (more generally) using a decorator that wraps the arguments with asarray and returns the output with __array_wrap__. But we need people to help with fleshing it out.

> 2) At the time I was using matrix, scalar * matrix was broken. Fixed
> now, but that kind of thing just shouldn't happen. There should be
> tests for basic operations like that if there aren't already.

We need people to write and implement the tests. It's one way everybody can contribute. I do use matrices occasionally (not never, as has been implied). But I do more coding than linear algebra (particularly with images), therefore my need for matrix math is reduced. Nonetheless, I've been very willing to make well-defined changes that are needed. Most problems cannot be found out without people who use and test the code.

> 3) mat() doesn't make sense as a shortcut for matrix construction. It
> only saves 3 letters over typing matrix(), and asmatrix is generally
> more useful. So mat() should be a synonym for asmatrix().

I'd be willing to make that change, but it will break some people's SciPy code.

> 4) eye,empty,rand,ones,zeros,arange and anything else that builds an
> array from scratch or from a python list should have a matrix equivalent

Would

from numpy.defmatrix import ones, zeros, ...

work?

> 5) I've got squeezes like crazy all over my matrix-using code. Maybe
> this was a bug in 0.9.5 or so that's been fixed? I do seem to recall
> some problem with indexing or c_ or something that was causing
> matrices to grow extra levels of length 1 axes. Again, like the
> scalar*matrix bug, things like that shouldn't happen.

Sure, but it's going to happen in a beta release... That's why we need testers. As I recall, most bugs with matrices have been fixed fairly quickly as soon as they are reported.

> 6) No good way to do elementwise operations? Sometimes you just want
> to do an elementwise mult or divide or exponentiation. I guess you're
> supposed to do Z = asmatrix(X.A * Y.A). Yah, right.

This is a problem with a dearth of infix operators. In fact, if we had a good way to write matrix multiplication as an infix operator, perhaps there wouldn't be any need for matrices. I'm really not sure how to fix the problem (the .M attribute of arrays was an attempt to make it easier):

(X.A * Y.A).M

But there is always multiply(X, Y).

> 7) Finally, once all that is fixed, I find the slavish adherence to
> ndim=2 to be too restrictive.
> a) Sometimes it's useful to have a stack of matrices. Pretty
> often, in fact, for me. I guess I could use a python list of matrices
> or an object array of matrix or something, but I also think there are
> times when it's useful to treat different pairs of axes as the
> 'matrix' part. So I'd like matrices to be able to have ndim>2.

I suppose this restriction could be lifted.

> b) On the other end, I think ndim<2 is useful sometimes too. Take
> a function like mean(), for example. With no arguments the return
> value is a 1x1 matrix (as opposed to a scalar).

Have you checked lately? It's a scalar now... This has been fixed.

> Or take indexing. It seems odd to me that where() returns a tuple of
> shape==(1,N) objects instead of just (N,).

The way to fix some of these is to return arrays for indexing instead of allowing matrices. But matrices that are less than 2-d just don't make sense.

> Maybe I can get over that though, as long as it works for indexing
> (which it seems it does). But I think the scalar return case is a
> real issue. Here's another: sum(). For an array you can do
> sum(sum(a)) and get a scalar if a is 2-d, but for matrix sum(sum(m))
> is the same as sum(m). And along these lines, m[newaxis] just
> silently doesn't do anything. That doesn't seem right.

These are just semantic questions. It's no surprise that sum(sum(m)) returns the same as sum(m) for a matrix, because summing over the same axis twice won't change the result. You have to sum over both axes in a matrix.

Thanks for the feedback.

-Travis |
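For the elementwise case in point 6, both spellings Travis mentions can be checked directly (the matrix values here are made up for illustration):

```python
import numpy as np

# Made-up matrices to compare the two spellings.
X = np.asmatrix([[1, 2], [3, 4]])
Y = np.asmatrix([[5, 6], [7, 8]])

Z1 = np.multiply(X, Y)       # elementwise product; the result is still a matrix
Z2 = np.asmatrix(X.A * Y.A)  # the .A round trip Bill objects to
```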
From: Christopher B. <Chr...@no...> - 2006-07-07 18:11:32
|
Robert Kern wrote: > Just > because linear algebra is "the base" for a lot of numerical computing does not > mean that everyone is using numpy arrays for linear algebra all the time. Much > less does it mean that all of those conventions you've devised should be shoved > into the core array type. I totally agree here. What bugged me most about MATLAB was that it was so darn Matrix/Linear Algebra centric. Yes, much of the code I wrote used linear algebra, but mostly it was a tiny (though critical) part of the actual code: Lots of code to set up a matrix equation, then solve it. The solve it was one line of code. For the rest, I prefer an array approach. A Matrix/Linear Algebra centric approach is good for some things, but I think it should be all or nothing. If you want it, then there should be a Matrix package, that includes the Matrix object, AND a matrix version of all the utility functions, like ones, zeros, etc. So all you would have to do is do: from numpy.matrix import * instead of from numpy import * and you'd get all the same stuff. Most of what would need to be added to the matrix package would be pretty easy, boiler plate code. Then we'd need a bunch more testing to root out all the operations that returned arrays where they should return matrices. If there is no one that wants to do all that work, then we have our answer. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Travis O. <oli...@ie...> - 2006-07-07 18:10:19
|
Martin Wiechert wrote: > Hi all, > > for me > > M [ix_(I, J)] > > does not work if I, J are boolean arrays. Is this intended or a bug/missing > feature? > Which version? Using boolean arrays as separate indices was a recent feature. You have to get SVN to use it. -Travis |
From: Christopher B. <Chr...@no...> - 2006-07-07 18:02:35
|
Satellite Data Research Group wrote: > Quoting Christopher Barker <Chr...@no...>: >> Which Python 2.4.1 are you using? It would be great if you would give >> the Python2.4.3 version found here a try: >> >> http://www.pythonmac.org/packages/py24-fat/index.html > Thanks for that, python 2.4.3 works perfectly. > Cheers, Joe Corbett There is now a Numpy0.9.8 package on that page too. Please let me know (or write to the macpython list) if you have a problem with it. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Pau G. <pau...@gm...> - 2006-07-07 15:51:08
|
On 7/7/06, Martin Wiechert <mar...@gm...> wrote:
> Hi all,
>
> for me
>
> M [ix_(I, J)]
>
> does not work if I, J are boolean arrays. Is this intended or a bug/missing
> feature?
>
> And is there a way (other than I = where (I) [0] etc.) to make it work?
>
> Thanks,
> Martin

It is a recent feature. It works for me on version '0.9.9.2660'.

pau

>>> import numpy
>>> numpy.__version__
'0.9.9.2660'
>>> a = numpy.rand(3,4)
>>> a
array([[ 0.24347161,  0.25636386,  0.64373189,  0.82730095],
       [ 0.02062571,  0.12244009,  0.60053928,  0.10624435],
       [ 0.75472591,  0.00614411,  0.75388955,  0.40481918]])
>>> I = a[:,0]<0.5
>>> J = a[0,:]<0.5
>>> a[numpy.ix_(I,J)]
array([[ 0.24347161,  0.25636386],
       [ 0.02062571,  0.12244009]]) |
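For versions where ix_ does not yet accept boolean masks, the where() workaround Martin mentions can be sketched like this (M, I, and J are made-up example data):

```python
import numpy as np

# Made-up example data.
M = np.arange(12).reshape(3, 4)
I = np.array([True, False, True])
J = np.array([False, True, True, False])

# nonzero() converts a boolean mask to integer indices; it is
# equivalent to the where(I)[0] idiom from the original question.
sub = M[np.ix_(I.nonzero()[0], J.nonzero()[0])]
```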
From: Charles R H. <cha...@gm...> - 2006-07-07 15:48:17
|
On 7/7/06, Bill Baxter <wb...@gm...> wrote:
> On 7/7/06, Tim Hochberg <tim...@co...> wrote:
>>> The funny thing is that having a dot(a,b,c,...) would lead to the
>>> exact same kind of hidden performance problems you're arguing against.
>> Not exactly arguing -- this isn't why I don't like H and friends -- just
>> noting that this is one of the traps that people are likely to fall into
>> when transferring equations to code.
> <snip>
> A = D
> A *= -2
> A += C
> A += B

I would like to write something like:

A = D.copy().times(-2).plus(C).plus(B)

i.e. copy produces a "register", the rest is reverse Polish, and = "stores" the result.

Chuck |
From: Keith G. <kwg...@gm...> - 2006-07-07 15:45:24
|
On 7/7/06, Ed Schofield <sch...@ft...> wrote:
> I'd like to help to make matrices more usable. Tell me what you want,
> and I'll work on some patches.

I can't pass up an offer like that.

DIAG

diag(M) returns an array. It would be nice if diag(M) returned asmatrix(diag(M)).T. It would also be nice if you could construct a diagonal matrix directly from what is returned from diag(M). But right now you can't:

>> x
matrix([[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]])
>> diag(x)
array([0, 4, 8])
>> d = asmatrix(diag(x)).T
>> d
matrix([[0],
        [4],
        [8]])
>> diag(d)
array([0])   <-- this should be a 3x3 matrix

MATRIX CONSTRUCTION

Making it easier to construct matrices would be a big help. Could the following functions be made to return matrices?

ones, zeros, rand, randn, eye, linspace, empty

I guess the big decision is how to tell numpy to use matrices. How about

from numpy.matrix import ones, zeros

? I would prefer something even more global. Something that acts like the global variable 'usematrix = True'. Once that is declared, the default changes from array to matrix.

INDEXING

>> x
matrix([[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]])
>> y
matrix([[0],
        [1],
        [2]])
>> x[y>1,:]
matrix([[6]])

This is a big one for me. If x[y>1,:] returned

>> x[2,:]
matrix([[6, 7, 8]])

then there would no longer be a need for array :) |
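The diag() round trip Keith describes can be worked around by keeping a plain 1-d array in the middle; a sketch of that workaround (values taken from his session):

```python
import numpy as np

x = np.asmatrix([[0, 1, 2], [3, 4, 5], [6, 7, 8]])

dvals = np.diag(np.asarray(x))  # plain 1-d array of the diagonal
d = np.asmatrix(dvals).T        # the column-matrix form Keith wants
# Rebuilding the diagonal matrix works when diag() is fed the 1-d
# array rather than the one-column matrix d.
D = np.asmatrix(np.diag(dvals))
```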