From: Bill B. <wb...@gm...> - 2006-07-07 06:17:38
On 7/7/06, Robert Kern <rob...@gm...> wrote:

> Bill Baxter wrote:
> > I am also curious, given the number of times I've heard this nebulous argument of "there are lots of kinds of numerical computing that don't involve linear algebra", that no one ever seems to name any of these "lots of kinds". Statistics, maybe? But you can find lots of linear algebra in statistics.
>
> That's because I'm not waving my hands at general fields of application. I'm talking about how people actually use array objects on a line-by-line basis. If I represent a dataset as an array and fit a nonlinear function to that dataset, am I using linear algebra at some level? Sure! Does having a .T attribute on that array help me at all? No. Arguing about how fundamental linear algebra is to numerical endeavors is entirely beside the point.

OK. If line-by-line usage is what everyone really means, then I'll get off the linear algebra soapbox, but that's not what it sounded like to me.

So, if you want to talk line-by-line, I really can't talk about much besides my own code. But I just grepped through it, and out of 2445 non-empty lines of code:

    927 lines contain '='
    390 lines contain a '['
     75 lines contain matrix, asmatrix, or mat
==>  47 lines contain a '.T' or '.transpose' of some sort <==
     33 lines contain array, asarray, or asanyarray
     24 lines contain 'rand(' --- I use it for generating bogus test data a lot
     17 lines contain 'newaxis' or 'NewAxis'
     16 lines contain 'zeros('
     13 lines contain 'dot('
     12 lines contain 'empty('
      8 lines contain 'ones('
      7 lines contain 'inv('

I'm pretty new to numpy, so that's all the code I have right now. I'm sure I've written many more lines of emails about numpy than I have lines of actual numpy code. :-/ But from that I can say that -- at least in my code -- transpose is pretty common. If someone can point me to some larger codebases written in numpy or Numeric, I'd be happy to do a similar analysis of those.

> I'm not saying that people who do use arrays for linear algebra are rare or unimportant. It's that syntactical convenience for one set of conventional ways to use an array object, by itself, is not a good enough reason to add stuff to the core array object.

I wish I had a way to magically find out the distribution of array dimensions used by all numpy and Numeric code out there. My guess is it would be something like 1-d: 50%, 2-d: 30%, 3-d: 10%, everything else: 10%. I can't think of a good way to even get an estimate on that. But in any event, I'm positive ndim==2 accounts for a significant percentage of all usage. It seems like the opponents of this idea are suggesting the distribution is flatter than that. But whatever the distribution is, it has to have a fairly light tail, since memory usage is exponential in ndim: if ndim == 20, it takes 8 megabytes just to store the smallest possible non-degenerate array of float64s (i.e. a 2x2x2x2x...).

It seems crazy to even be arguing this. Transposing is not some specialized esoteric operation. It's important enough that R and S give it a one-letter function, and Matlab, Scilab, and K all give it a single-character operator. [*] Whoever designed the numpy.matrix class also thought it was worthy of a shortcut, and I think came up with a pretty good syntax for it. And the people who invented math itself decided it was worth assigning a one-character superscript to it.

So I think there's a clear argument for having a .T attribute. But OK, let's say you're right, and a lot of people won't use it. Fine. IT WILL DO THEM ABSOLUTELY NO HARM. They don't have to use it if they don't like it! Just ignore it. Unlike a t() function, .T doesn't pollute any namespace users can define symbols in, so you really can just ignore it if you're not interested in using it. It won't get in your way.

As for the argument that ndarray should be pure as the driven snow, just a raw container for n-dimensional data: I think that's what the basearray thing that goes into Python itself should be. ndarray is part of numpy, and numpy is for numerical computing.

Regards,
--Bill

[*] Full disclosure: I did find two counter-examples -- Maple and Mathematica. Maple has only a transpose() function, and Mathematica has only Transpose[] (though you can use [esc]tr[esc] as a shortcut). However, both of those packages are primarily known for their _symbolic_ math capabilities, not their number crunching, so in that regard they are less similar to numpy than R, S, K, Matlab, and Scilab.
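Curious readers could repeat the same line-count survey on another codebase with a few lines of Python. A minimal sketch, assuming a tree of .py files under the current directory; the patterns shown are illustrative stand-ins, not the exact ones used in the message above:

import os
import re

# Illustrative patterns -- extend with '=', '[', 'zeros(', etc. as needed.
patterns = {
    "transpose": re.compile(r"\.T\b|\.transpose"),
    "dot": re.compile(r"\bdot\("),
    "matrix": re.compile(r"\b(mat|matrix|asmatrix)\b"),
}

counts = dict.fromkeys(patterns, 0)
total = 0
for root, dirs, files in os.walk("."):
    for name in files:
        if not name.endswith(".py"):
            continue
        for line in open(os.path.join(root, name)):
            if not line.strip():
                continue  # count non-empty lines only, as in the survey above
            total += 1
            for key, pat in patterns.items():
                if pat.search(line):
                    counts[key] += 1

print("%d non-empty lines: %r" % (total, counts))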
From: Tim H. <tim...@co...> - 2006-07-07 06:04:09
Sasha wrote:

> On 7/6/06, Bill Baxter <wb...@gm...> wrote:
> > ...
> > Yep, like Tim said. The usage is, say, N sets of basis vectors. Each set of basis vectors is a matrix.
>
> This brings up a feature that I really miss from numpy: an ability to do
>
> array([f(x) for x in a])
>
> without python overhead.

Please note that there is now a fromiter function, so much of the overhead of the above can be removed by using:

numpy.fromiter((f(x) for x in a), float)

This won't generate an intermediate list or use significant extra storage. I doubt it's a full replacement for adverbs as you've described below, though.

-tim

> APL-like languages have a notion of "adverb" - a higher-level operator that maps a function to a function. Numpy has some adverbs implemented as attributes of ufuncs: for example, add.reduce is the same as +/ in K, and add.accumulate is the same as +\ ('/' and '\' are the 'over' and 'scan' adverbs in K). However, there is no way to do f/ or f\ where f is an arbitrary dyadic function.
>
> [snip -- the remainder of the quoted message appears in full in Sasha's post below]
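A short, runnable illustration of the fromiter suggestion above; f here is a stand-in for any scalar function:

import numpy

a = numpy.arange(5)
f = lambda x: x * x + 1.0

via_list = numpy.array([f(x) for x in a], dtype=float)     # builds a temporary list first
via_iter = numpy.fromiter((f(x) for x in a), dtype=float)  # consumes the generator directly

assert (via_list == via_iter).all()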
From: Sasha <nd...@ma...> - 2006-07-07 04:53:33
On 7/7/06, Travis Oliphant <oli...@ie...> wrote:

> 1) .T Have some kind of .T attribute

-1 (but -0 if it raises an error when ndim != 2)

> If >0 on this then:
>
> a) .T == .swapaxes(-2,-1)
> b) .T == .transpose()
> c) .T raises error for ndim > 2
> d) .T returns (N,1) array for length (N,) array
> e) .T returns self for ndim < 2

Write-in: f) .T raises error for ndim != 2 (still <0)

> 2) .H returns .T.conj()

-1

> 3) .M returns matrix version of array

-1

> 4) .A returns basearray (useful for sub-classes).

-1
From: Robert K. <rob...@gm...> - 2006-07-07 04:48:27
Travis Oliphant wrote:

> This is a call for a vote on each of the math attributes. Please post your vote as
>
> +1 : support
> +0 : don't care so go ahead
> -0 : don't care so why do it
> -1 : against
>
> Vote on the following issues separately:
>
> 1) .T Have some kind of .T attribute
>
> If >0 on this then:
>
> a) .T == .swapaxes(-2,-1)
> b) .T == .transpose()
> c) .T raises error for ndim > 2
> d) .T returns (N,1) array for length (N,) array
> e) .T returns self for ndim < 2

The fact that there are a, b, c, d, and e makes me fully -1 on this. In the face of ambiguity, refuse the temptation to guess.

> 2) .H returns .T.conj()

-1

> 3) .M returns matrix version of array

-1

> 4) .A returns basearray (useful for sub-classes).

-1

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From: Sasha <nd...@ma...> - 2006-07-07 04:34:54
On 7/6/06, Bill Baxter <wb...@gm...> wrote:

> ...
> Yep, like Tim said. The usage is, say, N sets of basis vectors. Each set of basis vectors is a matrix.

This brings up a feature that I really miss from numpy: an ability to do

array([f(x) for x in a])

without python overhead. APL-like languages have a notion of "adverb" - a higher-level operator that maps a function to a function. Numpy has some adverbs implemented as attributes of ufuncs: for example, add.reduce is the same as +/ in K, and add.accumulate is the same as +\ ('/' and '\' are the 'over' and 'scan' adverbs in K). However, there is no way to do f/ or f\ where f is an arbitrary dyadic function.

The equivalent of array([f(x) for x in a]) is spelled f'(a) in K (' is the adverb 'each'). The transpose operator (+) swaps the first two axes, so in order to apply it to an array of matrices, one would have to do +:'a (the : in +: disambiguates + as a unary operator).

I don't know of a good way to introduce adverbs in numpy, nor can I think of a good way to do list comprehensions, but array-friendly versions of map, filter and reduce may be a good addition. These higher-order functions may take an optional axes argument to deal with higher-rank arrays, and may be optimized to recognize ufuncs so that map(f, a) could call f(a) and reduce(f, a) could do f.reduce(a) when f is a ufunc.

[snip]

> Either way swapaxes(-2,-1) is more likely to be what you want than .transpose().

Agree, but swapaxes(0, 1) is a close runner-up, which is also known as zip in python.

> Well, I would be really happy for .T to return an (N,1) column vector if handed an (N,) 1-d array. But I'm pretty sure that would raise more furor among the readers of the list than leaving it 1-d.

Would you be even happier if .T returned a matrix? I hope not, because my .M objection will apply. Maybe we can compromise by implementing a.T so that it raises ValueError unless rank(a) == 2, or at least unless rank(a) <= 2?

> I have serious reservations about a function called t(). x, y, z, and t are probably all in the top 10 variable names in scientific computing.

What about T()?

> > K (an APL-like language) overloads unary '+' to do swapaxes(0,1) for rank>=2 and nothing for lower rank.
>
> Hmm. That's kind of interesting, but it seems like an abuse of notation to me. And precedence might be an issue too. The precedence of unary + isn't as high as attribute access.

It is high enough AFAICT - higher than any binary operator.

> Anyway, as far as the meaning of + in K, I'm guessing K's arrays are in Fortran order, so the (0,1) axes vary the fastest.

No, K has 1-d arrays only, but they can be nested. Matrices are arrays of arrays, and tensors are arrays of arrays of arrays..., but you are right that a (0,1) swap is faster than a (-2,-1) swap, and this motivated the choice for the primitive.

> I couldn't find any documentation for the K language from a quick search, though.

Kx Systems, the company behind K, has replaced K with Q and pulled the old manuals from the web. Q is close enough to K: see http://kx.com/q/d/k.txt for a terse summary.

[snip]

> > Why would anyone do that if b was a matrix?
>
> Maybe because, like you, they think "that a.T is fairly cryptic".

If they are like me, they will not use numpy.matrix to begin with :-).

> > > But probably a better solution would be to have matrix versions of these in the library as an optional module to import so people could, say, import them as M and use M.ones(2,2).
> >
> > This is the solution used by ma, which is another argument for it.
>
> Yeah, I'm starting to think that's better than slapping an M attribute on arrays, too. Is it hard to write a module like that?

Writing matrixutils with

def zeros(shape, dtype=float):
    return asmatrix(zeros(shape, dtype))

is trivial, but matrixutils.zeros will have two python function calls of overhead. This may be a case for making zeros a class method of ndarray that can be written in a way that makes an inherited matrix.zeros do the right thing with no overhead.

[snip]

> * +A implies addition.

No, it does not. Unary '+' is a no-op. Does * imply multiplication or ** imply pow in f(*args, **kwds) to you?

> The general rule with operator overloading is that the overload should have the same general meaning as the original operator.

Unary '+' has no preset meaning in plain python. It can be interpreted as transpose if you think of scalars as 1x1 matrices.

> So overloading * for matrix multiplication makes sense.

It depends on what you consider part of "general meaning". If the commutativity property is part of it, then overloading * for matrix multiplication doesn't make sense. If the "general meaning" of unary + includes the x = +x invariant, then you are right, but I am willing to relax that to the x = ++x invariant when x is a non-symmetric matrix.

> ... New users looking at something like A + +B are pretty certain to be confused because they think they know what + means, but they're wrong.

In my experience new users don't realize that unary + is defined for arrays. Use of unary + with non-literal numbers is exotic enough that new users seeing "something like A + +B" will not assume that they know what it means.

[snip]

> * +A has different precedence than the usual transpose operator. (But I can't think of a case where that would make a difference now.)

Maybe you can't because it doesn't? :-)

> I would be willing to accept a .T that just threw an exception if ndim were > 2.

Aha! Let's start with an error unless ndim == 2. It is always easier to add good features than to remove bad ones.
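A minimal sketch of the matrixutils idea sketched above: thin wrappers that hand back matrices from the usual array constructors. The module name and the particular set of functions are illustrative, not an existing numpy API.

import numpy

def zeros(shape, dtype=float):
    # asmatrix wraps without copying; the caller gets a numpy.matrix
    return numpy.asmatrix(numpy.zeros(shape, dtype))

def ones(shape, dtype=float):
    return numpy.asmatrix(numpy.ones(shape, dtype))

def eye(n, dtype=float):
    return numpy.asmatrix(numpy.eye(n, dtype=dtype))

Saved as matrixutils.py, this supports the usage mentioned above: import matrixutils as M, then M.ones((2, 2)).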
From: Travis O. <oli...@ie...> - 2006-07-07 04:28:20
Travis Oliphant wrote:

> This is a call for a vote on each of the math attributes. Please post your vote as
>
> +1 : support
> +0 : don't care so go ahead
> -0 : don't care so why do it
> -1 : against
>
> Vote on the following issues separately:
>
> 1) .T Have some kind of .T attribute

+1

> If >0 on this then:
>
> a) .T == .swapaxes(-2,-1)

+0

> b) .T == .transpose()

-0

> c) .T raises error for ndim > 2

-1

> d) .T returns (N,1) array for length (N,) array

+1

> e) .T returns self for ndim < 2

-1

> 2) .H returns .T.conj()

+0

> 3) .M returns matrix version of array

+0

> 4) .A returns basearray (useful for sub-classes).

+0

-Travis
From: Travis O. <oli...@ie...> - 2006-07-07 04:26:15
This is a call for a vote on each of the math attributes. Please post your vote as

+1 : support
+0 : don't care so go ahead
-0 : don't care so why do it
-1 : against

Vote on the following issues separately:

1) .T Have some kind of .T attribute

If >0 on this then:

a) .T == .swapaxes(-2,-1)
b) .T == .transpose()
c) .T raises error for ndim > 2
d) .T returns (N,1) array for length (N,) array
e) .T returns self for ndim < 2

2) .H returns .T.conj()

3) .M returns matrix version of array

4) .A returns basearray (useful for sub-classes).

-Travis
From: Robert K. <rob...@gm...> - 2006-07-07 04:23:26
Bill Baxter wrote:

> On 7/7/06, Robert Kern <rob...@gm...> wrote:
>
> > Bill Baxter wrote:
> > > Robert Kern wrote:
>
> [snip]
>
> > > > I don't think that just because arrays are often used for linear algebra that linear algebra assumptions should be built in to the core array type.
> > >
> > > It's not just that "arrays can be used for linear algebra". It's that linear algebra is the single most popular kind of numerical computing in the world! It's the foundation for countless fields. What you're saying is like "grocery stores shouldn't devote so much shelf space to food, because food is just one of the products people buy", or [etc.]
> >
> > I'm sorry, but the argument-by-inappropriate-analogy is not convincing. Just because linear algebra is "the base" for a lot of numerical computing does not mean that everyone is using numpy arrays for linear algebra all the time. Much less does it mean that all of those conventions you've devised should be shoved into the core array type. I hold a higher standard for the design of the core array type than I do for the stuff around it. "It's convenient for what I do," just doesn't rise to that level. There has to be more of an argument for it.
>
> My argument is not that "it's convenient for what I do", it's that "it's convenient for what 90% of users want to do". But unfortunately I can't think of a good way to back up that claim with any sort of numbers.

[snip]

> I am also curious, given the number of times I've heard this nebulous argument of "there are lots of kinds of numerical computing that don't involve linear algebra", that no one ever seems to name any of these "lots of kinds". Statistics, maybe? But you can find lots of linear algebra in statistics.

That's because I'm not waving my hands at general fields of application. I'm talking about how people actually use array objects on a line-by-line basis. If I represent a dataset as an array and fit a nonlinear function to that dataset, am I using linear algebra at some level? Sure! Does having a .T attribute on that array help me at all? No. Arguing about how fundamental linear algebra is to numerical endeavors is entirely beside the point.

I'm not saying that people who do use arrays for linear algebra are rare or unimportant. It's that syntactical convenience for one set of conventional ways to use an array object, by itself, is not a good enough reason to add stuff to the core array object.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From: Bill B. <wb...@gm...> - 2006-07-07 02:56:16
On 7/7/06, Robert Kern <rob...@gm...> wrote:

> Bill Baxter wrote:
> > Robert Kern wrote:
> >
> > The slippery slope argument only applies to the .M, not the .T or .H.
>
> No, it was the "Let's have a .T attribute. And if we're going to do that, then we should also do this. And this. And this."

There's no slippery slope there. It's just "Let's have a .T attribute, and if we have that then we should have .H also." Period. The slope stops there. The .M and .A are a separate issue.

> > I don't think that just because arrays are often used for linear algebra that linear algebra assumptions should be built in to the core array type.
> >
> > It's not just that "arrays can be used for linear algebra". It's that linear algebra is the single most popular kind of numerical computing in the world! It's the foundation for countless fields. What you're saying is like "grocery stores shouldn't devote so much shelf space to food, because food is just one of the products people buy", or [etc.]
>
> I'm sorry, but the argument-by-inappropriate-analogy is not convincing. Just because linear algebra is "the base" for a lot of numerical computing does not mean that everyone is using numpy arrays for linear algebra all the time. Much less does it mean that all of those conventions you've devised should be shoved into the core array type. I hold a higher standard for the design of the core array type than I do for the stuff around it. "It's convenient for what I do," just doesn't rise to that level. There has to be more of an argument for it.

My argument is not that "it's convenient for what I do", it's that "it's convenient for what 90% of users want to do". But unfortunately I can't think of a good way to back up that claim with any sort of numbers. But here's one I just found: http://www.netlib.org/master_counts2.html -- download statistics for various numerical libraries on netlib.org. The top 4 are all linear algebra related:

/lapack       37,373,505
/lapack/lug   19,908,865
/scalapack    14,418,172
/linalg       11,091,511

The next three are more like general computing issues: a parallelization library, performance monitoring, and benchmarks:

/pvm3         10,360,012
/performance   7,999,140
/benchmark     7,775,600

Then the next one is more linear algebra, and that seems to hold pretty far down the list. It looks like mostly stuff that's either linear algebra related or parallelization/benchmarking related.

And as another example, there's the success of higher-level numerical environments like Matlab (and maybe R and S? and Mathematica, and Maple?) that have strong support for linear algebra right in the core, not requiring users to go into some syntax/library ghetto to use that functionality.

I am also curious, given the number of times I've heard this nebulous argument of "there are lots of kinds of numerical computing that don't involve linear algebra", that no one ever seems to name any of these "lots of kinds". Statistics, maybe? But you can find lots of linear algebra in statistics.

--bb
From: Robert K. <rob...@gm...> - 2006-07-07 02:31:15
Bill Baxter wrote:

> Robert Kern wrote:
> > Like Sasha, I'm mildly opposed to .T (as a synonym for .transpose()) and much more opposed to the rest (including .T being a synonym for .swapaxes(-2, -1)). It's not often that a proposal carries with it its own slippery-slope argument against itself.
>
> The slippery slope argument only applies to the .M, not the .T or .H.

No, it was the "Let's have a .T attribute. And if we're going to do that, then we should also do this. And this. And this."

> > I don't think that just because arrays are often used for linear algebra that linear algebra assumptions should be built in to the core array type.
>
> It's not just that "arrays can be used for linear algebra". It's that linear algebra is the single most popular kind of numerical computing in the world! It's the foundation for countless fields. What you're saying is like "grocery stores shouldn't devote so much shelf space to food, because food is just one of the products people buy", or [etc.]

I'm sorry, but the argument-by-inappropriate-analogy is not convincing. Just because linear algebra is "the base" for a lot of numerical computing does not mean that everyone is using numpy arrays for linear algebra all the time. Much less does it mean that all of those conventions you've devised should be shoved into the core array type. I hold a higher standard for the design of the core array type than I do for the stuff around it. "It's convenient for what I do," just doesn't rise to that level. There has to be more of an argument for it.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From: Bill B. <wb...@gm...> - 2006-07-07 02:21:01
On 7/7/06, Tim Hochberg <tim...@co...> wrote:

> I'd caution here though that the H is another thing that's going to encourage people to write code that's less accurate and slower than it needs to be. Consider the simple equation:
>
> [Side note: Here's something I periodically think about -- it would be nice if dot could take multiple arguments and chain them into a series of matrix multiplies, so that dot(dot(a, b), a.H) could be written dot(a, b, a.H). I'll use that here for clarity.]
>
> x = dot(a, b, a.H)
>
> versus:
>
> x = dot(a.real, b, a.real.transpose()) + dot(a.imag, b, a.imag.transpose())
>
> The latter should be about twice as fast and won't have any pesky imaginary residuals showing up from the finite precision of the various operations.

The funny thing is that having a dot(a, b, c, ...) would lead to the exact same kind of hidden performance problems you're arguing against. The cost of a series of matrix multiplications can vary dramatically with the order in which you perform them. E.g. consider A*B*C*v, where A, B, C are 2-dim and v is a column vector. Then you should do (A*(B*(C*v))), and definitely not ((A*B)*C)*v.

But that said, dot could maybe be made smart enough to figure out the best order. That would be a nice feature. dot(A, dot(B, dot(C, v))) is pretty darn ugly. I seem to remember that finding the best order is an example of a dynamic programming problem, and probably O(N^2) in the number of terms. But presumably N is never going to be very big, and the N==2 (no choice) and N==3 (only one choice) cases could be optimized.

--bb
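A hedged sketch of the multi-argument dot idea mentioned above. This helper simply right-associates the chain, which is the cheap order whenever the right-most operand is a vector; it is illustrative, not an existing numpy API, and it makes no attempt at the dynamic-programming ordering discussed in the message.

import numpy

def chain_dot(*arrays):
    # chain_dot(A, B, C, v) computes dot(A, dot(B, dot(C, v)))
    result = arrays[-1]
    for a in reversed(arrays[:-1]):
        result = numpy.dot(a, result)
    return result

A = numpy.ones((200, 200))
B = numpy.ones((200, 200))
C = numpy.ones((200, 200))
v = numpy.ones((200, 1))

# Three cheap matrix-vector products; the left-associated ((A*B)*C)*v
# would instead do two 200x200 matrix-matrix products first.
x = chain_dot(A, B, C, v)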
From: Bill B. <wb...@gm...> - 2006-07-07 01:56:46
Tim wrote:

> That second argument is particularly uncompelling, but I think I agree that in a vacuum swapaxes(-2,-1) would be a better choice for .T than reversing the axes. However, we're not in a vacuum, and there are several reasons not to do this.
>
> 1. A.T and A.transpose() should really have the same behavior.

There may be a certain economy to that, but I don't see why it should necessarily be so. Especially if it's agreed that the behavior of .transpose() is not very useful. The use case for .T is primarily to make linear algebra stuff easier. If you're doing n-dim stuff and need something specific, you'll use the more general .transpose().

> 2. Changing A.transpose would be one more backwards compatibility issue.

Maybe it's a change worth making though, if we are right in saying that the current .transpose() for ndim>2 is hardly ever what you want.

> 3. Since, as far as I can tell, there's no concise way of spelling A.swapaxes(-2,-1) in terms of A.transpose, it would make documenting and explaining the default case harder.

Huh? A.swapaxes(-2,-1) is pretty concise. Why should it have to have an explanation in terms of A.transpose? Here's the explanation for the documentation: "A.T returns A with the last two axes transposed. It is equivalent to A.swapaxes(-2,-1). For a 2-d array, this is the usual matrix transpose." This just is a non-issue.

Sasha wrote:

> > more common to want to swap just two axes, and the last two seem a logical choice since a) in the default C-ordering they're the closest together in memory and b) they're the axes that are printed contiguously when you say "print A".
>
> It all depends on how you want to interpret a rank-K tensor. You seem to advocate a view that it is a (K-2)-rank array of matrices and .T is an element-wise transpose operation. Alternatively I can expect that it is a matrix of (K-2)-rank arrays, and then .T should be swapaxes(0,1). Do you have real-life applications of swapaxes(-2,-1) for rank > 2?

Yep, like Tim said. The usage is, say, N sets of basis vectors. Each set of basis vectors is a matrix. And say I have a different basis associated with each of N points in space. Usually I'll want to print it out organized by basis-vector set, i.e. look at the matrix associated with each of the points. So it makes sense to organize it as shape=(N,a,b), so that if I print it I get something that's easy to interpret. If I set it up as shape=(a,b,N), then what's easiest to see in the print output is all N first basis vectors, all N second basis vectors, etc.

Also, again in a C memory layout, the last two axes are closest in memory, so it's more cache-friendly to have the bits that will usually be used together in computations be on the trailing end. In matlab (which is fortran order), I do things the other way, with the N at the end of the shape. (And note that Matlab prints out the first two axes contiguously.) Either way, swapaxes(-2,-1) is more likely to be what you want than .transpose().

> > > and swapaxes(-2,-1) is invalid for rank < 2.
> >
> > At least in numpy 0.9.8, it's not invalid, it just doesn't do anything.
>
> That's bad. What sense does it make to swap non-existing axes? Many people would expect the transpose of a vector to be a matrix. This is the case in S+ and R.

Well, I would be really happy for .T to return an (N,1) column vector if handed an (N,) 1-d array. But I'm pretty sure that would raise more furor among the readers of the list than leaving it 1-d.

> > > My main objection is that a.T is fairly cryptic - is there any other language that uses an attribute for transpose?
> >
> > Does it matter what other languages do? It's not _that_ cryptic.
>
> If something is clear and natural, chances are it was done before.

The thing is, most other numerical computing languages were designed for doing numerical computing. They weren't designed originally for writing general-purpose software, like Python was. So in matlab, for instance, transpose is a simple single quote. But that doesn't help us decide what it should be in numpy. For me prior art is always a useful guide when making a design choice.

> For example, in R, the transpose operation is t(a) and works on rank <= 2 only, always returning rank-2.

I have serious reservations about a function called t(). x, y, z, and t are probably all in the top 10 variable names in scientific computing.

> K (an APL-like language) overloads unary '+' to do swapaxes(0,1) for rank>=2 and nothing for lower rank.

Hmm. That's kind of interesting, but it seems like an abuse of notation to me. And precedence might be an issue too. The precedence of unary + isn't as high as attribute access. Anyway, as far as the meaning of + in K, I'm guessing K's arrays are in Fortran order, so the (0,1) axes vary the fastest. I couldn't find any documentation for the K language from a quick search, though.

> Both R and K solutions are implementable in Python, with R using 3 characters and K using 1(!) compared to your two-character ".T" notation. I would suggest that when inventing something new, you should consider prior art and explain how your invention is better. That's why what other languages do matters. (After all, isn't 'T' chosen because "transpose" starts with "t" in the English language?)

Yes, you're right. My main thought was just what I said above: that there probably aren't too many other examples that can really apply in this case, both because most numerical computing languages are custom-designed for numerical computing, and also because Python's attributes are kind of uncommon among programming languages. So it's worth looking at other examples, but in the end it has to be something that makes sense for a numerical computing package written in Python, and there aren't too many examples of that around.

> > You could write a * b.transpose(1,0) right now and still not know whether it was matrix or element-wise multiplication.
>
> Why would anyone do that if b was a matrix?

Maybe because, like you, they think "that a.T is fairly cryptic".

> > But probably a better solution would be to have matrix versions of these in the library as an optional module to import so people could, say, import them as M and use M.ones(2,2).
>
> This is the solution used by ma, which is another argument for it.

Yeah, I'm starting to think that's better than slapping an M attribute on arrays, too. Is it hard to write a module like that?

> I only raised a mild objection against .T, but the slippery slope argument makes me dislike it much more. At the very least I would like to see a discussion of why a.T is better than t(a) or +a.

* A.T puts the T on the proper side of A, so in that sense it looks more like the standard math notation.
* A.T has precedence that roughly matches the standard math notation.
* t(A) uses an impossibly short function name that's likely to conflict with local variable names. To avoid the conflict people will just end up using it as numpy.t(A), at which point its value as a shortcut for transpose is nullified. Or they'll have to do a mini-import within specific functions ("from numpy import t") to localize the namespace pollution. But at that point they might as well just say "t = numpy.transpose".
* t(A) puts the transpose operator on the wrong side of A.
* +A puts the transpose operator on the wrong side of A also.
* +A implies addition. The general rule with operator overloading is that the overload should have the same general meaning as the original operator. So overloading * for matrix multiplication makes sense; overloading & for it would be a bad idea. New users looking at something like A + +B are pretty certain to be confused, because they think they know what + means, but they're wrong. If you see A + B.T, you either know what it means or you know immediately that you don't know what it means, and you go look it up.
* +A has different precedence than the usual transpose operator. (But I can't think of a case where that would make a difference now.)

Tim Hochberg wrote:

> > Well, you could overload __rpow__ for a singleton T and spell it A**T ... (I hope no one will take that proposal seriously). Visually, A.T looks more like a subscript rather than a superscript.
>
> No, no, no. Overload __rxor__, then you can spell it A^t, A^h, etc. Much better ;-). [Sadly, I almost like that....]

Ouch! No way! It's got even worse precedence problems than the +A proposal. How about A+B^t? And you still have to introduce 'h' and 't' into the global namespace for it to work.

> Here's a half-baked thought: if the objection to t(A) is that it doesn't mirror the formulae where t appears as a subscript after A, conceivably __call__ could be defined so that A(x) returns x(A). That's kind of perverse, but it means that A(t), A(h), etc. could all work appropriately for suitably defined singletons. These singletons could either be assembled in some abbreviations namespace or brought in by the programmer using "import transpose as t", etc. The latter works for doing t(a) as well, of course.

Same problem with the need for a global t. And it is kind of perverse, besides.

Robert Kern wrote:

> Like Sasha, I'm mildly opposed to .T (as a synonym for .transpose()) and much more opposed to the rest (including .T being a synonym for .swapaxes(-2, -1)). It's not often that a proposal carries with it its own slippery-slope argument against itself.

The slippery slope argument only applies to the .M, not the .T or .H. And I think if there's a matrixutils module with redefinitions of ones and zeros etc., and if other functions are all truly fixed to preserve matrix when matrix is passed in, then I agree, there's not so much need for .M.

> I don't think that just because arrays are often used for linear algebra that linear algebra assumptions should be built in to the core array type.

It's not just that "arrays can be used for linear algebra". It's that linear algebra is the single most popular kind of numerical computing in the world! It's the foundation for countless fields. What you're saying is like "grocery stores shouldn't devote so much shelf space to food, because food is just one of the products people buy", or "this mailing list shouldn't be conducted in English, because English is just one of the languages people can speak here", or "I don't think my keyboard should devote so much space to the A-Z keys, because there are so many characters in the Unicode character set that could be there instead", or, to quote from a particular comedy troupe:

"Ah, how about Cheddar?"
"Well, we don't get much call for it around here, sir."
"Not much ca- It's the single most popular cheese in the world!"
"Not round here, sir."

Linear algebra is pretty much the 'cheddar' of the numerical computing world. But it's more than that. It's like the yeast of the beer world. Pretty much everything starts with it as a base. It makes sense to make it as convenient as possible to do with numpy, even if it is a "special case". I wish I could think of some sort of statistics or google search I could cite to back this claim up, but as far as my academic background from high school through Ph.D. goes, linear algebra is a mighty big deal, not merely an "also-ran" in the world of math or numerical computing.

Sasha wrote:

> In addition, transpose is a (rank-2) array or matrix operation and not a linear algebra operation. Transpose corresponds to the "adjoint" linear algebra operation if you represent vectors as single-column matrices and co-vectors as single-row matrices. This is a convenient representation followed by much of the relevant literature, but it does not allow generalization beyond rank-2.

I would be willing to accept a .T that just threw an exception if ndim were > 2. That's what Matlab does with its transpose operator. I don't like that behavior myself -- it seems wasteful when it could just have some well-defined behavior that would let it be useful at least some of the time on N-d arrays.

> I don't like it either, but I don't like .T even more. These days I hate functionality I cannot google for. Call me selfish, but I already know what unary '+' can do to a higher rank array, but with .T I will always have to look up which axes it swaps ...

I think '.T' is more likely to be searchable than '+'. And when you say you already know what unary + can do, you mean because you've used K? That's not much use to the typical user, who also thinks they know what a unary + does, but they'd be wrong in this case.

So, in summary, I vote for:

- Keep the .T and the .H on array
- Get rid of .M
- Instead implement a matrix helper module that could be imported as M, allowing M.ones(...) etc.

And also:

- Be diligent about fixing any errors from matrix users along the lines of "numpy.foo returns an array when given a matrix" (Travis has been good about this -- but we need to keep it up.)

Part of the motivation for the .M attribute was just as a band-aid on the problem of matrices getting turned into arrays. Having .M means you can just slap a .M on the end of any result you aren't sure about. It's better (but harder) to fix the upstream problem of functions not preserving subtypes.
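A small, runnable illustration of the swapaxes(-2, -1) behavior argued for above, using a stack of N basis matrices stored as shape (N, a, b); the numbers are made up:

import numpy

N, a, b = 4, 2, 3
bases = numpy.arange(N * a * b, dtype=float).reshape(N, a, b)

per_matrix = bases.swapaxes(-2, -1)   # transposes each basis matrix: shape (N, b, a)
full_reverse = bases.transpose()      # reverses all axes: shape (b, a, N)

assert per_matrix.shape == (N, b, a)
assert full_reverse.shape == (b, a, N)
assert (per_matrix[0] == bases[0].transpose()).all()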
From: Tim H. <tim...@co...> - 2006-07-06 22:11:25
Sasha wrote:

> On 7/6/06, Robert Kern <rob...@gm...> wrote:
> > ...
> > I don't think that just because arrays are often used for linear algebra that linear algebra assumptions should be built in to the core array type.
>
> In addition, transpose is a (rank-2) array or matrix operation and not a linear algebra operation. Transpose corresponds to the "adjoint" linear algebra operation if you represent vectors as single-column matrices and co-vectors as single-row matrices. This is a convenient representation followed by much of the relevant literature, but it does not allow generalization beyond rank-2. Another useful feature is that the inner product can be calculated as the matrix product, as long as you accept a 1x1 matrix for a scalar. This feature does not work beyond rank-2 either, because in order to do a tensor inner product you have to be explicit about the axes being collapsed (for example, using Einstein notation).

At various times, I've thought about how one might do Einstein notation within Python. About the best I could come up with was:

A.ijk * B.klm

or

A("ijk") * B("klm")

Neither is spectacular; the first is the cleaner notation, but it is conceptually messy since it abuses getattr. Both require some intermediate pseudo-object that wraps the array as well as info about the indexing.

> Since ndarray does not distinguish between upper and lower indices, it is not possible to distinguish between vectors and co-vectors in any way other than using the matrix convention. This makes ndarrays a poor model for linear algebra tensors.

My tensor math is rusty, but isn't it possible to represent all tensors as either covariant or contravariant and just embed the information about the metric into the product operator? It would seem that the inability to specify lower and upper indices is not truly limiting, but the inability to specify which axis to contract over is a fundamental limitation of sorts. I'm sure I'm partly influenced by my feeling that in practice upper and lower indices (aka contravariant, covariant and mixed tensors) would be a pain in the neck, but a more capable inner product operator might well be useful if we could come up with the correct syntax.

-tim
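On the "explicit about the axes being collapsed" point above, numpy's tensordot function already allows naming the contracted axes explicitly; a brief sketch (the shapes are illustrative):

import numpy

A = numpy.ones((2, 3, 4))   # think indices i, j, k
B = numpy.ones((4, 5))      # think indices k, l

# C[i,j,l] = sum over k of A[i,j,k] * B[k,l]
C = numpy.tensordot(A, B, axes=([2], [0]))
assert C.shape == (2, 3, 5)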
From: Travis O. <oli...@ee...> - 2006-07-06 21:49:55
Arnd Baecker wrote:

> Bingo! Only numpy 0.9.9.2749 + math is slower than Numeric+math, by a factor of 1.6, and mod-array is 2.5 times slower than numpy. It would be nice if you could have a look at the modulo operation, if possible ...

O.K. I looked at how Python computes modulo and implemented the same thing for both umath (the remainder function) and scalarmath. The result is a significant speed-up for modulo... yeah...

I also placed in hooks so you can replace the scalarmath (for int, float, and complex) with the Python version of math (this works because the int, float, and complex scalars are sub-classes of the corresponding Python objects).

import numpy.core.scalarmath as ncs

# Replace "scalarmath" with standard Python arithmetic.
ncs.use_pythonmath(<int | float | complex>)

# Replace "scalarmath" with actual scalarmath code.
ncs.use_scalarmath(<int | float | complex>)

This can test the speed of the two implementations.

-Travis
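A hedged sketch of how one might time the two paths using the hooks quoted above. The use_pythonmath/use_scalarmath names and their one-type argument are taken from the post itself and are assumed, not verified against any particular numpy release:

import timeit
import numpy.core.scalarmath as ncs

timer_args = ("x % 3.0", "import numpy; x = numpy.float64(7.5)")

ncs.use_pythonmath(float)   # route float scalars through Python's arithmetic
t_py = timeit.Timer(*timer_args).timeit(100000)

ncs.use_scalarmath(float)   # restore the C scalarmath implementation
t_c = timeit.Timer(*timer_args).timeit(100000)

print("python math: %.4fs  scalarmath: %.4fs" % (t_py, t_c))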
From: Travis O. <oli...@ee...> - 2006-07-06 21:44:09
Mathew Yeates wrote:

> okay, I went back to the binary windows distrib. Based on Keith's code I wrote
>
> >> print numpy.asmatrix(all_dates == start_dates[row], dtype=int)
> [[0 0 0 0 0 0 0 0 0 0 0 1 0 0]]
> >> A[row, numpy.asmatrix(all_dates == start_dates[row], dtype=int)] = -1
> >> print A[row,:]
> [[-1. -1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
>
> huh? It set the first 2 elements and not the 12'th!!

Indexing has changed in SVN numpy, but in general, index matrices are not what you want, because the dimensionality of the index arrays means something, and matrices always have two dimensions. So use arrays for indexing.

-Travis
From: Keith G. <kwg...@gm...> - 2006-07-06 21:43:34
On 7/6/06, Mathew Yeates <my...@jp...> wrote:

> okay, I went back to the binary windows distrib. Based on Keith's code I wrote
>
> >> print numpy.asmatrix(all_dates == start_dates[row], dtype=int)
> [[0 0 0 0 0 0 0 0 0 0 0 1 0 0]]
> >> A[row, numpy.asmatrix(all_dates == start_dates[row], dtype=int)] = -1
> >> print A[row,:]
> [[-1. -1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
>
> huh? It set the first 2 elements and not the 12'th!!

You are assigning -1 to column 0 thirteen times and assigning -1 to column 1 once.

For now, until boolean indexing works with matrices, I would just use brute force:

A[row, where(all_dates.A == 10)[0]]

Or you can do all rows at once with A[all_dates == 10], where all_dates is the same size as A.
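A self-contained sketch of the brute-force workaround suggested above: do the boolean test on a plain array, convert it to integer column positions, and index the matrix with those. The data here is made up for illustration.

import numpy

A = numpy.asmatrix(numpy.zeros((3, 14)))
all_dates = numpy.array([10, 10, 1, 10, 1, 10, 0, 10, 0, 10, 0, 1, 10, 1])
row = 2

cols = numpy.where(all_dates == 10)[0]   # integer indices of the matching columns
A[row, cols] = -1

print(A[row, :])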
From: Mathew Y. <my...@jp...> - 2006-07-06 21:20:59
okay, I went back to the binary windows distrib. Based on Keith's code I wrote

>> print numpy.asmatrix(all_dates == start_dates[row], dtype=int)
[[0 0 0 0 0 0 0 0 0 0 0 1 0 0]]
>> A[row, numpy.asmatrix(all_dates == start_dates[row], dtype=int)] = -1
>> print A[row,:]
[[-1. -1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]

huh? It set the first 2 elements and not the 12'th!!

Mathew
From: Keith G. <kwg...@gm...> - 2006-07-06 20:43:03
On 7/6/06, Travis Oliphant <oli...@ee...> wrote:

> Mathew Yeates wrote:
>
> > Not working. A[row, all_dates == 10] = -1, where all_dates is a matrix with column length of 14 [[960111,..,.. and A is a matrix with the same column length.
> >
> > I get
> > IndexError: arrays used as indices must be of integer type
> >
> > when I print out all_dates == 10 I get
> > [True True True True True True True True True False False False True True]]
> >
> > I experimented with "<" instead of "==" but I still get boolean values as indices.
> >
> > Any help?
>
> What version are you using? Can you give an example that shows the error? It's hard to guess the type of all the variables. The following works for me.
>
> import numpy
> print numpy.__version__
> A = numpy.matrix(rand(3,14))
> all_dates = array([10,10,1,10,1,10,0,10,0,10,0,1,10,1])
> row = 2
> A[row, all_dates == 10]

This is what NASA is doing (and what I would like to do):

>> A[row, asmatrix(all_dates == 10)]
---------------------------------------------------------------------------
exceptions.ValueError    Traceback (most recent call last)

/home/kwg/<ipython console>

/usr/local/lib/python2.4/site-packages/numpy/core/defmatrix.py in __getitem__(self, index)
    122
    123     def __getitem__(self, index):
--> 124         out = N.ndarray.__getitem__(self, index)
    125         # Need to swap if slice is on first index
    126         # or there is an integer on the second

ValueError: too many indices for array
From: Mathew Y. <my...@jp...> - 2006-07-06 20:42:17
The very example you give produces

IndexError: arrays used as indices must be of integer type

This is with 0.9.8. Also ... while your example says "rand" I had to say numpy.rand. This is on WindowsXP.

Mathew

Travis Oliphant wrote:

> Mathew Yeates wrote:
>
> > [snip]
>
> What version are you using? Can you give an example that shows the error? It's hard to guess the type of all the variables. The following works for me.
>
> import numpy
> print numpy.__version__
> A = numpy.matrix(rand(3,14))
> all_dates = array([10,10,1,10,1,10,0,10,0,10,0,1,10,1])
> row = 2
> A[row, all_dates == 10]
>
> -Travis
From: Sasha <nd...@ma...> - 2006-07-06 20:41:48
On 7/6/06, Tim Hochberg <tim...@co...> wrote:

> ...
> It looks even closer to † (dagger, if that doesn't make it through), which is the symbol used for the hermitian adjoint.

If it pleases the matlab crowd, '+' can be defined to do the hermitian adjoint on the complex type.

> ...
> Perhaps it's not as perverse as it first appears. Although I still don't have to like it ;-)

I don't like it either, but I don't like .T even more. These days I hate functionality I cannot google for. Call me selfish, but I already know what unary '+' can do to a higher-rank array, but with .T I will always have to look up which axes it swaps ...
From: Travis O. <oli...@ee...> - 2006-07-06 20:37:01
Mathew Yeates wrote:

> Not working. A[row, all_dates == 10] = -1, where all_dates is a matrix with column length of 14 [[960111,..,.. and A is a matrix with the same column length.
>
> I get
> IndexError: arrays used as indices must be of integer type
>
> when I print out all_dates == 10 I get
> [True True True True True True True True True False False False True True]]
>
> I experimented with "<" instead of "==" but I still get boolean values as indices.
>
> Any help?

What version are you using? Can you give an example that shows the error? It's hard to guess the type of all the variables. The following works for me.

import numpy
print numpy.__version__
A = numpy.matrix(rand(3,14))
all_dates = array([10,10,1,10,1,10,0,10,0,10,0,1,10,1])
row = 2
A[row, all_dates == 10]

-Travis
From: Sasha <nd...@ma...> - 2006-07-06 20:30:49
On 7/6/06, Robert Kern <rob...@gm...> wrote:

> ...
> I don't think that just because arrays are often used for linear algebra that linear algebra assumptions should be built in to the core array type.

In addition, transpose is a (rank-2) array or matrix operation and not a linear algebra operation. Transpose corresponds to the "adjoint" linear algebra operation if you represent vectors as single-column matrices and co-vectors as single-row matrices. This is a convenient representation followed by much of the relevant literature, but it does not allow generalization beyond rank-2. Another useful feature is that the inner product can be calculated as the matrix product, as long as you accept a 1x1 matrix for a scalar. This feature does not work beyond rank-2 either, because in order to do a tensor inner product you have to be explicit about the axes being collapsed (for example, using Einstein notation).

Since ndarray does not distinguish between upper and lower indices, it is not possible to distinguish between vectors and co-vectors in any way other than using the matrix convention. This makes ndarrays a poor model for linear algebra tensors.
From: Mathew Y. <my...@jp...> - 2006-07-06 20:24:00
Not working. A[row, all_dates == 10] = -1, where all_dates is a matrix with column length of 14 [[960111,..,.. and A is a matrix with the same column length.

I get

IndexError: arrays used as indices must be of integer type

when I print out all_dates == 10 I get

[True True True True True True True True True False False False True True]]

I experimented with "<" instead of "==" but I still get boolean values as indices.

Any help?

Mathew

Keith Goodman wrote:

> On 7/5/06, Mathew Yeates <my...@jp...> wrote:
> > What is the typical way of doing the following: starting with a 0 matrix, set all values to 1 when a certain condition is met, set to -1 when another condition is met, left alone if neither condition is met.
>
> This works on recent versions of numpy:
>
> >>> x = asmatrix(zeros((2,2)))
> >>> x
> matrix([[0, 0],
>         [0, 0]])
> >>> y = asmatrix(rand(2,2))
> >>> y
> matrix([[ 0.85219404,  0.48311427],
>         [ 0.41026966,  0.2184193 ]])
> >>> x[y > 0.5] = 1
> >>> x[y < 0.5] = -1
> >>> x
> matrix([[ 1, -1],
>         [-1, -1]])
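An equivalent single-step form of the conditional fill quoted above, using numpy.where; the 0.5 thresholds are from the example (note that values exactly equal to 0.5 fall into the -1 branch here):

import numpy

y = numpy.asmatrix(numpy.random.rand(2, 2))
x = numpy.asmatrix(numpy.where(y > 0.5, 1, -1))   # 1 where the condition holds, -1 elsewhere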
From: Tim H. <tim...@co...> - 2006-07-06 20:23:27
Alexander Belopolsky wrote:

> On 7/6/06, Tim Hochberg <tim...@co...> wrote:
> > ...
> > Overloading '+' sure seems perverse, but maybe that's just me.
>
> The first time I saw it, it seemed perverse to me as well, but it actually makes a lot of sense:
>
> 1. It is visually appealing, as '+' makes '|' from '-' and '-' from '|', and looks close enough to 't'.

It looks even closer to † (dagger, if that doesn't make it through), which is the symbol used for the hermitian adjoint.

> 2. It puts an otherwise useless operator to work.
> 3. Prefix spelling suggests that it should be swapaxes(0,1) rather than swapaxes(-2,-1), which is the choice made by K.
> 4. You can't get any shorter than that (at least using a fixed-width font :-).
> 5. It already does the right thing for rank<2.

Perhaps it's not as perverse as it first appears. Although I still don't have to like it ;-)

-tim
From: Sasha <nd...@ma...> - 2006-07-06 19:38:02
On 7/6/06, Tim Hochberg <tim...@co...> wrote:

> ...
> Overloading '+' sure seems perverse, but maybe that's just me.

The first time I saw it, it seemed perverse to me as well, but it actually makes a lot of sense:

1. It is visually appealing, as '+' makes '|' from '-' and '-' from '|', and looks close enough to 't'.
2. It puts an otherwise useless operator to work.
3. Prefix spelling suggests that it should be swapaxes(0,1) rather than swapaxes(-2,-1), which is the choice made by K.
4. You can't get any shorter than that (at least using a fixed-width font :-).
5. It already does the right thing for rank<2.
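A tiny sketch of the unary '+' overload under discussion: an ndarray subclass whose __pos__ swaps the first two axes, the K-style choice described above. Purely illustrative; nothing like this was ever part of numpy.

import numpy

class KArray(numpy.ndarray):
    def __pos__(self):
        # K-style unary '+': swap the first two axes; no-op for rank < 2
        if self.ndim < 2:
            return self
        return self.swapaxes(0, 1)

a = numpy.arange(6).reshape(2, 3).view(KArray)
assert (+a).shape == (3, 2)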