From: Gael V. <gae...@no...> - 2006-11-12 17:12:38
|
You're probably right. Well, it would most definitely be useful for all the lads in my lab, but I am not sure this is a broad audience. The use case is when you have an array representing data in an "mgrid" way, and you want to apply transformations to the coordinates. It is something I have done and seen done quite often. If I am the only one on the list who sees a use for this, then forget it.

Gaël

On Sun, Nov 12, 2006 at 06:07:30PM +0100, Sven Schreiber wrote:
> Well you didn't mention why it would be useful for a broad audience,
> only that you yourself use it a lot. Together with the fact that it's a
> one-liner, this presumably means that it's not exactly a prime candidate
> for adoption. (I'm just another user, so I'm speculating a bit here
> based on my reading of past postings on this list.) |
From: Sven S. <sve...@gm...> - 2006-11-12 17:07:49
|
Gael Varoquaux schrieb:
> Hi all,
> I didn't get any answers to this email. Is it because the proposed
> addition to numpy is not of any interest to anybody apart from me?
> Maybe the way I introduced this is wrong. Please tell me what is wrong
> with this proposition.

Well you didn't mention why it would be useful for a broad audience, only that you yourself use it a lot. Together with the fact that it's a one-liner, this presumably means that it's not exactly a prime candidate for adoption. (I'm just another user, so I'm speculating a bit here based on my reading of past postings on this list.)

-sven |
From: Gael V. <gae...@no...> - 2006-11-12 15:51:11
|
Hi all, I didn't get any answers to this email. Is it because the proposed addition to numpy is not of any interest to anybody apart from me? Maybe the way I introduced this is wrong. Please tell me what is wrong with this proposition.

Regards,
Gaël

On Fri, Oct 20, 2006 at 01:28:52PM +0200, Gael Varoquaux wrote:
> Hi,
> There is an operation I do a lot; I would call it "unrolling" an array.
> The best way to describe it is probably to give the code:
>
>     def unroll(M):
>         """ Flattens the array M and returns a 2D array with the first columns
>         being the indices of M, and the last column the flattened M.
>         """
>         return hstack((indices(M.shape).reshape(-1, M.ndim), M.reshape(-1, 1)))
>
> Example:
>
>     >>> M
>     array([[ 0.73530097,  0.3553424 ,  0.3719772 ],
>            [ 0.83353373,  0.74622133,  0.14748905],
>            [ 0.72023762,  0.32306969,  0.19142366]])
>     >>> unroll(M)
>     array([[ 0.        ,  0.        ,  0.73530097],
>            [ 0.        ,  1.        ,  0.3553424 ],
>            [ 1.        ,  1.        ,  0.3719772 ],
>            [ 2.        ,  2.        ,  0.83353373],
>            [ 2.        ,  0.        ,  0.74622133],
>            [ 1.        ,  2.        ,  0.14748905],
>            [ 0.        ,  1.        ,  0.72023762],
>            [ 2.        ,  0.        ,  0.32306969],
>            [ 1.        ,  2.        ,  0.19142366]])
>
> The docstring sucks. The function is trivial (when you know numpy a bit).
> Maybe this function already exists in numpy; if so, I couldn't find it.
> Otherwise I propose it for inclusion.
> Cheers,
> Gaël
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion |
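Incidentally, the index columns in the quoted example do not match the element positions: `indices(M.shape).reshape(-1, M.ndim)` interleaves the row-index and column-index grids rather than pairing each element with its coordinates (which is why the output above shows rows like `[1, 1, 0.37...]` for the element at position (0, 2)). A sketch of a corrected version — the fix, reshaping to `(M.ndim, -1)` and transposing, is my reading of the intent, not code from the thread:

```python
import numpy as np

def unroll(M):
    """Return a 2D array whose first columns are the indices of M and
    whose last column is the flattened M."""
    # indices(M.shape) has shape (M.ndim,) + M.shape; reshaping it to
    # (M.ndim, -1) and transposing pairs each flattened value with its
    # true coordinates, which reshape(-1, M.ndim) does not.
    idx = np.indices(M.shape).reshape(M.ndim, -1).T
    return np.hstack((idx, M.reshape(-1, 1)))

M = np.arange(6.0).reshape(2, 3)
print(unroll(M))
# first two columns are now the (row, col) position of each value
```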
From: Neal B. <ndb...@gm...> - 2006-11-12 11:25:39
|
I see you're on Linux. If this is Fedora, atlas (and the other libraries) are available from the standard repos (core + extras). |
From: Robert K. <rob...@gm...> - 2006-11-12 07:35:23
|
Abhishek Roy wrote:
> Hello,
> I am trying to compile Numpy with lapack & atlas. In site.cfg I put,
>
>     [atlas]
>     library_dirs = /usr/local/lib/atlas/
>     atlas_libs = cblas, lapack, f77blas, atlas
>
> and installation seems to go well. It says,
>
>     FOUND:
>       libraries = ['cblas', 'lapack', 'f77blas', 'atlas']
>       library_dirs = ['/usr/local/lib/atlas/']
>       language = c
>       include_dirs = ['/usr/local/lib/atlas']
>
> and the lines with gcc <something> -lcblas -llapack don't give an error. But
> afterwards running,
>
>     $ ldd ~/numpy-1.0/build/lib.linux-i686-2.4/numpy/linalg/lapack_lite.so
>         linux-gate.so.1 => (0xffffe000)
>         libpthread.so.0 => /lib/libpthread.so.0 (0x40238000)
>         libc.so.6 => /lib/libc.so.6 (0x4028b000)
>         /lib/ld-linux.so.2 (0x80000000)
>
> which I think means libcblas and libatlas are not linked. Can someone tell me
> what's going wrong here?

Are your ATLAS libraries shared or static (i.e., are they libatlas.so or libatlas.a)? ldd(1) only lists the shared libraries that are linked; ATLAS is usually installed as static libraries. Does that module import? If so, then you have no problems. If not, then copy the message here -- it should tell you one of the symbols that are missing.

Note that libraries should be listed in the order in which they depend on each other. lapack depends on f77blas and cblas, which both depend on atlas: lapack, f77blas, cblas, atlas.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
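Kern's dependency ordering can be written directly into site.cfg; a hedged sketch using the poster's own paths and key names (not verified against every numpy.distutils version):

```ini
[atlas]
library_dirs = /usr/local/lib/atlas
# Dependents before dependencies: lapack needs f77blas and cblas,
# which in turn need atlas.
atlas_libs = lapack, f77blas, cblas, atlas
```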
From: Abhishek R. <fin...@ya...> - 2006-11-12 07:17:38
|
Hello, I am trying to compile Numpy with lapack & atlas. In site.cfg I put,

    [atlas]
    library_dirs = /usr/local/lib/atlas/
    atlas_libs = cblas, lapack, f77blas, atlas

and installation seems to go well. It says,

    FOUND:
      libraries = ['cblas', 'lapack', 'f77blas', 'atlas']
      library_dirs = ['/usr/local/lib/atlas/']
      language = c
      include_dirs = ['/usr/local/lib/atlas']

and the lines with gcc <something> -lcblas -llapack don't give an error. But afterwards running,

    $ ldd ~/numpy-1.0/build/lib.linux-i686-2.4/numpy/linalg/lapack_lite.so
        linux-gate.so.1 => (0xffffe000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x40238000)
        libc.so.6 => /lib/libc.so.6 (0x4028b000)
        /lib/ld-linux.so.2 (0x80000000)

which I think means libcblas and libatlas are not linked. Can someone tell me what's going wrong here?

Thanks,
Abhishek |
From: Tim H. <tim...@ie...> - 2006-11-12 01:55:32
|
Charles R Harris wrote:
> On 11/11/06, Tim Hochberg <tim...@ie...> wrote:
> <snip>
>
> Oh yes. And let's reserve a bit of abuse for whoever mixed up the nans
> with the rest. I mean, the infs and such actually make some sense as
> numbers, but nans are, well, nans. So it would have been nice to
> enable everything *except* nans, and have an error flag set whenever
> the latter turned up.

Actually you can do this, at least for the most part; I'm sure there are some corners that aren't working right yet. Operations on NaNs set the invalid flag, while infinities set the overflow flag (actually the flags are only set when you first get the infinity or NaN, at least on my platform). So you can make invalid raise and ignore overflow by using:

    np.seterr(over='ignore', invalid='raise')

I doubt this does *exactly* what you want, but it may be close.

-tim

[snip] |
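Tim's seterr settings can be checked in a few lines; a sketch against a current numpy (flag behaviour may vary by platform, as he notes):

```python
import numpy as np

# Raise on invalid (NaN-producing) operations, ignore overflow to inf.
old = np.seterr(over='ignore', invalid='raise')

big = np.exp(np.array([1000.0]))       # overflow -> inf, silently ignored
assert np.isinf(big[0])

try:
    np.array([0.0]) / np.array([0.0])  # 0/0 sets the invalid flag
    raised = False
except FloatingPointError:
    raised = True
assert raised

np.seterr(**old)                       # restore the previous error state
```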
From: Charles R H. <cha...@gm...> - 2006-11-12 01:06:59
|
On 11/11/06, Tim Hochberg <tim...@ie...> wrote:
> Robert Kern wrote:
> <snip>
>
> Bah! That seems excessive. If I was king of the world, one of the many
> things that I would stuff into a submodule is all the stuff relating to
> special values (nan, inf, isnan, isinf, nanmin, nanmax, nansum,
> nanargmax, nanargmin, nan_to_num, infty, isneginf, isposinf, and
> probably some others that I'm missing). First, these are mostly clutter
> in the base namespace; moving them would make the main namespace easier
> to navigate (although there's much cleanup that would have to be done
> before it would be anything approaching easy to navigate). Second, for
> those who actually do need them, they'd be easier to find if they were
> all grouped together -- Keith, for example, would almost certainly have
> immediately found nanmin. Third, and this is perhaps a matter of opinion,
> there seems to be a sudden urge to abuse NaNs. Perhaps if they were
> shunted a bit off to the side, this temptation would be lifted.
>
> Curmudgeonly yours,

Oh yes. And let's reserve a bit of abuse for whoever mixed up the nans with the rest. I mean, the infs and such actually make some sense as numbers, but nans are, well, nans. So it would have been nice to enable everything *except* nans, and have an error flag set whenever the latter turned up. Or if folks just have to have nans, make them compare less than anything else. If isnan were fast and easy, one could use LT(a,b) := isnan(a) || a < b.

Chuck |
From: Tim H. <tim...@ie...> - 2006-11-12 00:49:37
|
Robert Kern wrote:
> Keith Goodman wrote:
>> How about a nanmin() function?
>
> Already there.
>
> In [2]: nanmin?
> Type:       function
> Base Class: <type 'function'>
> Namespace:  Interactive
> File:       /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.0.1.dev3432-py2.5-macosx-10.4-i386.egg/numpy/lib/function_base.py
> Definition: nanmin(a, axis=None)
> Docstring:
>     Find the minimium over the given axis, ignoring NaNs.

Bah! That seems excessive. If I was king of the world, one of the many things that I would stuff into a submodule is all the stuff relating to special values (nan, inf, isnan, isinf, nanmin, nanmax, nansum, nanargmax, nanargmin, nan_to_num, infty, isneginf, isposinf, and probably some others that I'm missing). First, these are mostly clutter in the base namespace; moving them would make the main namespace easier to navigate (although there's much cleanup that would have to be done before it would be anything approaching easy to navigate). Second, for those who actually do need them, they'd be easier to find if they were all grouped together -- Keith, for example, would almost certainly have immediately found nanmin. Third, and this is perhaps a matter of opinion, there seems to be a sudden urge to abuse NaNs. Perhaps if they were shunted a bit off to the side, this temptation would be lifted.

Curmudgeonly yours,

-tim |
From: Charles R H. <cha...@gm...> - 2006-11-12 00:49:22
|
On 11/11/06, Charles R Harris <cha...@gm...> wrote:
> On 11/11/06, Tim Hochberg <tim...@ie...> wrote:
> <snip>
>
> No telling what compiler optimizations might do with '!(a >= b)' if they
> assume that '!(a >= b)' == 'a < b'. For instance,
>
>     if !(a >= b)
>         do something;
>     else
>         do otherwise;

This made me curious. Here is the code:

    int test(double a, double b)
    {
        if (a > b)
            return 0;
        return 1;
    }

Here is the relevant part of the assembly code when compiled with no optimizations:

    fucompp
    fnstsw  %ax
    sahf
    ja      .L4
    jmp     .L2
    .L4:
    movl    $0, -20(%ebp)
    jmp     .L5
    .L2:
    movl    $1, -20(%ebp)
    .L5:
    movl    -20(%ebp), %eax
    leave
    ret

which jumps to the right place on a > b (ja). Here is the relevant part of the assembly code when compiled with -O3:

    fucompp
    fnstsw  %ax
    popl    %ebp
    sahf
    setbe   %al
    movzbl  %al, %eax
    ret

which sets the return value to the logical value of a <= b (setbe), and that won't work right with NaNs. Maybe the compiler needs another flag to deal with the possibility of NaNs, because the generated code is actually incorrect. Or maybe I just discovered a compiler bug. But boy, that compiler is awesomely clever. Those optimizers are getting better all the time.

Chuck |
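The non-equivalence Chuck's disassembly exposes can be demonstrated without a compiler; the same IEEE comparison semantics hold in Python:

```python
import math

a, b = math.nan, 1.0

# Every ordered comparison against NaN is False...
assert not (a < b)
assert not (a >= b)

# ...so 'a < b' and '!(a >= b)' disagree exactly when a NaN is involved,
# which is why an optimizer treating them as equivalent changes behaviour.
assert (a < b) != (not (a >= b))

# For ordinary numbers the two forms agree:
for x, y in [(0.0, 1.0), (2.0, 1.0), (1.0, 1.0)]:
    assert (x < y) == (not (x >= y))
```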
From: Charles R H. <cha...@gm...> - 2006-11-12 00:09:45
|
On 11/11/06, Tim Hochberg <tim...@ie...> wrote:
> Robert Kern wrote:
> <snip>
>
> My preference would be to raise an error / warning when there is a nan
> in the array. Technically, there is no minimum value when a nan is
> present. I believe that this would be feasible by swapping the compare
> from 'a < b' to '!(a >= b)'. This should return NaN if any NaNs are
> present, and I suspect the extra '!' will have minimal performance
> impact, but it would have to be tested. Then a warning or error could be
> issued on the way out depending on the errstate. Arguably returning NaN
> is more correct than returning the smallest non-NaN anyway.

No telling what compiler optimizations might do with '!(a >= b)' if they assume that '!(a >= b)' == 'a < b'. For instance,

    if !(a >= b)
        do something;
    else
        do otherwise;

might branch to the second statement on 'a < b' and fall through to the first otherwise.

Chuck |
From: A. M. A. <per...@gm...> - 2006-11-11 23:58:40
|
On 11/11/06, Charles R Harris <cha...@gm...> wrote:
> I think the problem is that the max and min functions use the first value in
> the array as the starting point. That could be fixed by using the first
> non-nan and returning nan if there aren't any "real" numbers. But it
> probably isn't worth the effort as the behavior becomes more complicated. A
> better rule of thumb is to note that comparisons involving nans are
> basically invalid because nans aren't comparable -- the comparison violates
> trichotomy. Don't really know what to do about that.

Well, we could get simple, consistent behaviour by taking inf as the initial value for min and -inf as the initial value for max, then reducing as normal. This would then, depending on how max and min are implemented, either return NaN if any are present, or return the smallest/largest non-NaN value (or inf/-inf if there are none).

A. M. Archibald |
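Archibald's inf-seeded reduction is easy to sketch; with a plain `<` test it yields the "ignore NaNs" branch of the two behaviours he describes (the function name is mine, not numpy's):

```python
import math

def min_with_inf_seed(seq):
    """Reduce with +inf as the initial value, per Archibald's suggestion."""
    best = math.inf
    for x in seq:
        if x < best:      # False whenever x is NaN, so NaNs are skipped
            best = x
    return best

assert min_with_inf_seed([math.nan, 2.0, 1.0]) == 1.0  # NaN position no longer matters
assert min_with_inf_seed([1.0, 2.0, math.nan]) == 1.0
assert math.isinf(min_with_inf_seed([math.nan]))       # all-NaN -> inf
```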
From: Keith G. <kwg...@gm...> - 2006-11-11 23:52:37
|
On 11/11/06, Robert Kern <rob...@gm...> wrote:
> Keith Goodman wrote:
>> How about a nanmin() function?
>
> Already there.
>
> In [2]: nanmin?
> <snip>

Thank you! I was just about to write my own using Chuck's email as a guide. |
From: Tim H. <tim...@ie...> - 2006-11-11 23:46:59
|
Robert Kern wrote:
> Keith Goodman wrote:
> <snip>
>
> Not really. sort() is a more complicated algorithm that does a number of
> different comparisons in an order that is difficult to determine
> beforehand. x.min() should just be a straight pass through all of the
> elements. However, the core problem is the same: a < nan, a > nan,
> a == nan are all False for any a.
>
> Barring a clever solution (at least cleverer than I feel like being
> immediately), the way to solve this would be to check for nans in the
> array and deal with them separately (or simply ignore them in the case
> of x.min()). However, this checking would slow down the common case that
> has no nans (sans nans, if you will).

For ignoring NaNs, isn't it simply a matter of scanning through the array till you find the first non-NaN, then proceeding as normal? In the common case, this requires one extra compare (or rather is_nan), which should be negligible in most circumstances. Only when you have an array with a load of NaNs at the beginning would it be slow. One would have to decide whether to return NaN or raise an error when there were no real numbers.

My preference would be to raise an error / warning when there is a nan in the array. Technically, there is no minimum value when a nan is present. I believe that this would be feasible by swapping the compare from 'a < b' to '!(a >= b)'. This should return NaN if any NaNs are present, and I suspect the extra '!' will have minimal performance impact, but it would have to be tested. Then a warning or error could be issued on the way out depending on the errstate. Arguably returning NaN is more correct than returning the smallest non-NaN anyway.

As for Keith Goodman's request for a NaN-ignoring min function, I suggest:

    a[~np.isnan(a)].min()

Or better yet, stop generating so many NaN's.

-tim |
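Tim's masking one-liner in context, showing that it removes the order dependence Keith reported:

```python
import numpy as np

a = np.array([np.nan, 2.0, 1.0])
b = np.array([1.0, 2.0, np.nan])

# Drop the NaNs before taking the minimum.
assert a[~np.isnan(a)].min() == 1.0
assert b[~np.isnan(b)].min() == 1.0   # same answer regardless of NaN position
```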
From: Robert K. <rob...@gm...> - 2006-11-11 23:39:54
|
Keith Goodman wrote:
> How about a nanmin() function?

Already there.

    In [2]: nanmin?
    Type:       function
    Base Class: <type 'function'>
    Namespace:  Interactive
    File:       /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy-1.0.1.dev3432-py2.5-macosx-10.4-i386.egg/numpy/lib/function_base.py
    Definition: nanmin(a, axis=None)
    Docstring:
        Find the minimium over the given axis, ignoring NaNs.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Charles R H. <cha...@gm...> - 2006-11-11 23:35:45
|
On 11/11/06, Robert Kern <rob...@gm...> wrote:
> Keith Goodman wrote:
> <snip>
>
> Barring a clever solution (at least cleverer than I feel like being
> immediately), the way to solve this would be to check for nans in the
> array and deal with them separately (or simply ignore them in the case
> of x.min()). However, this checking would slow down the common case that
> has no nans (sans nans, if you will).

I think the problem is that the max and min functions use the first value in the array as the starting point. That could be fixed by using the first non-nan and returning nan if there aren't any "real" numbers. But it probably isn't worth the effort, as the behavior becomes more complicated. A better rule of thumb is to note that comparisons involving nans are basically invalid because nans aren't comparable -- the comparison violates trichotomy. Don't really know what to do about that.

Chuck |
From: Keith G. <kwg...@gm...> - 2006-11-11 23:25:14
|
On 11/11/06, Robert Kern <rob...@gm...> wrote:
> Barring a clever solution (at least cleverer than I feel like being
> immediately), the way to solve this would be to check for nans in the
> array and deal with them separately (or simply ignore them in the case
> of x.min()). However, this checking would slow down the common case that
> has no nans (sans nans, if you will).

I'm not one of the fans of sans nans. I'd prefer a slower min() that ignored nans. But I'm probably in the minority. How about a nanmin() function? |
From: Robert K. <rob...@gm...> - 2006-11-11 23:14:50
|
Keith Goodman wrote:
> x.min() and x.max() depend on the ordering of the elements:
>
> >> x = M.matrix([[ M.nan, 2.0, 1.0]])
> >> x.min()
> nan
>
> >> x = M.matrix([[ 1.0, 2.0, M.nan]])
> >> x.min()
> 1.0
>
> If I were to try the latter in ipython, I'd assume, great, min()
> ignores NaNs. But then the former would be a bug in my program.
>
> Is this related to how sort works?

Not really. sort() is a more complicated algorithm that does a number of different comparisons in an order that is difficult to determine beforehand. x.min() should just be a straight pass through all of the elements. However, the core problem is the same: a < nan, a > nan, a == nan are all False for any a.

Barring a clever solution (at least cleverer than I feel like being immediately), the way to solve this would be to check for nans in the array and deal with them separately (or simply ignore them in the case of x.min()). However, this checking would slow down the common case that has no nans (sans nans, if you will).

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
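Kern's "straight pass" explanation can be made concrete with a small Python model of the min loop under discussion (a sketch of the 2006 behaviour, not numpy's current implementation, which propagates NaNs):

```python
import math

nan = math.nan

# The core problem: every comparison against NaN is False.
assert not (1.0 < nan) and not (1.0 > nan) and not (1.0 == nan)

def naive_min(seq):
    """Seed with the first element, then keep anything that compares '<'."""
    it = iter(seq)
    best = next(it)
    for x in it:
        if x < best:          # always False when either side is NaN
            best = x
    return best

assert math.isnan(naive_min([nan, 2.0, 1.0]))  # NaN seed: nothing beats it
assert naive_min([1.0, 2.0, nan]) == 1.0       # trailing NaN never wins
```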
From: Keith G. <kwg...@gm...> - 2006-11-11 23:04:31
|
x.min() and x.max() depend on the ordering of the elements:

    >> x = M.matrix([[ M.nan, 2.0, 1.0]])
    >> x.min()
    nan

    >> x = M.matrix([[ 1.0, 2.0, M.nan]])
    >> x.min()
    1.0

If I were to try the latter in ipython, I'd assume, great, min() ignores NaNs. But then the former would be a bug in my program.

Is this related to how sort works?

    >> x = M.matrix([[ M.nan, 2.0, 1.0]])
    >> x.sort()
    >> x
    matrix([[ nan, 1. , 2. ]])

    >> x = M.matrix([[ 1.0, 2.0, M.nan]])
    >> x.sort()
    >> x
    matrix([[ 1. , 2. , nan]]) |
From: A. M. A. <per...@gm...> - 2006-11-11 23:04:20
|
On 11/11/06, koara <ko...@at...> wrote:
> Hello, is there a way to do SVD of large sparse matrices efficiently in
> python? All I found was scipy (little sparse support, no SVD), pysparse
> (no SVD) and PROPACK (no python). Out of these, PROPACK seems the most
> enticing choice, but I have no experience porting fortran code and this
> seems too big a bite. Any pointers or suggestions are most welcome!

Numpy includes the program "f2py" for generating Python wrappers for Fortran code; it's not too difficult to use, even for a non-Fortran programmer like me. If you do write wrappers, please send them to the scipy list -- it's always good to build up the library.

A. M. Archibald |
From: koara <ko...@at...> - 2006-11-11 22:42:20
|
Hello, is there a way to do SVD of large sparse matrices efficiently in python? All I found was scipy (little sparse support, no SVD), pysparse (no SVD) and PROPACK (no python). Out of these, PROPACK seems the most enticing choice, but I have no experience porting fortran code and this seems too big a bite. Any pointers or suggestions are most welcome! Cheers. |
From: Stefan v. d. W. <st...@su...> - 2006-11-11 22:13:03
|
On Sat, Nov 11, 2006 at 01:59:40PM -0800, Keith Goodman wrote:
> Would it make sense to upcast instead of downcast?
>
> This upcasts:
>
> >> x = M.matrix([[1, M.nan, 3]])
> >> x
> matrix([[ 1. , nan, 3. ]])
>
> But this doesn't:
>
> >> x = M.matrix([[1, 2, 3]])
> >> x[0,1] = M.nan
> >> x
> matrix([[1, 0, 3]])

This behaviour is consistent with

    x = N.array([[1, 2.0, 3]])

vs

    x = N.array([1, 2, 3])
    x[0,1] = 2.

> (BTW, how do you represent missing integers if you can't use NaN?)

I think masked arrays should work on integer arrays (alternatively, if you have enough memory, cast your array to float).

Regards
Stéfan |
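Stefan's masked-array suggestion for missing integers, sketched with numpy.ma (which keeps the integer dtype instead of forcing a cast to float):

```python
import numpy as np

# Mark the second entry as missing without leaving integer land.
a = np.ma.masked_array([1, 2, 3], mask=[False, True, False])

assert a.min() == 1     # masked entry is ignored
assert a.sum() == 4     # 1 + 3
assert a.count() == 2   # number of valid (unmasked) entries
```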
From: Keith G. <kwg...@gm...> - 2006-11-11 21:59:43
|
On 11/11/06, Stefan van der Walt <st...@su...> wrote:
> On Sat, Nov 11, 2006 at 10:40:22AM -0800, Keith Goodman wrote:
>> I accidentally wrote a unit test using int32 instead of float64 and
>> ran into this problem:
>>
>> >> x = M.matrix([[1, 2, 3]])
>> >> x[0,1] = M.nan
>> >> x
>> matrix([[1, 0, 3]]) <--- Got 0 instead of NaN
>>
>> But this, of course, works:
>>
>> >> x = M.matrix([[1.0, 2.0, 3.0]])
>> >> x[0,1] = M.nan
>> >> x
>> matrix([[ 1. , nan, 3. ]])
>>
>> Is returning a 0 instead of NaN the expected behavior?
>
> NaN (or inf) is a floating point number, so seeing a zero in integer
> representation seems correct:
>
> In [2]: int(N.nan)
> Out[2]: 0L

Would it make sense to upcast instead of downcast?

This upcasts:

    >> x = M.matrix([[1, M.nan, 3]])
    >> x
    matrix([[ 1. , nan, 3. ]])

But this doesn't:

    >> x = M.matrix([[1, 2, 3]])
    >> x[0,1] = M.nan
    >> x
    matrix([[1, 0, 3]])

(BTW, how do you represent missing integers if you can't use NaN?) |
From: Stefan v. d. W. <st...@su...> - 2006-11-11 21:57:24
|
On Sat, Nov 11, 2006 at 06:30:06PM -0300, Lisandro Dalcin wrote:
> On 11/11/06, Stefan van der Walt <st...@su...> wrote:
>> NaN (or inf) is a floating point number, so seeing a zero in integer
>> representation seems correct:
>>
>> In [2]: int(N.nan)
>> Out[2]: 0L
>
> Just to learn myself: why should int(N.nan) be 0? Is it C behavior?

As far as I know (and please correct me if I'm wrong), NaNs are just a specific bit pattern set in memory when an invalid floating point operation occurs (in IEEE 754, NaNs are represented by an exponent of all 1's and a non-zero mantissa). Most integer representations have no way of indicating an invalid result (and C provides no such conversion, as far as I am aware), so NaNs are converted to 0 (which could have been any arbitrary number for that matter, although 0 seems a logical choice).

Regards
Stéfan |
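Stefan's description of the NaN bit pattern can be verified directly by unpacking a double's bits (little-endian packing assumed here; the helper name is mine):

```python
import struct

def ieee754_fields(x):
    """Split a 64-bit double into its exponent and mantissa bit fields."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return exponent, mantissa

exp, man = ieee754_fields(float('nan'))
assert exp == 0x7FF and man != 0   # NaN: exponent all 1's, non-zero mantissa

exp, man = ieee754_fields(float('inf'))
assert exp == 0x7FF and man == 0   # inf: exponent all 1's, zero mantissa
```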
From: Charles R H. <cha...@gm...> - 2006-11-11 21:56:34
|
On 11/11/06, Lisandro Dalcin <da...@gm...> wrote:
> On 11/11/06, Stefan van der Walt <st...@su...> wrote:
>> NaN (or inf) is a floating point number, so seeing a zero in integer
>> representation seems correct:
>>
>> In [2]: int(N.nan)
>> Out[2]: 0L
>
> Just to learn myself: why should int(N.nan) be 0? Is it C behavior?

    In [1]: int32(0)/int32(0)
    Warning: divide by zero encountered in long_scalars
    Out[1]: 0

    In [2]: float32(0)/float32(0)
    Out[2]: nan

    In [3]: int(nan)
    Out[3]: 0L

I think it was just a default for numpy. Hmmm, numpy now warns on integer division by zero; it didn't used to. Looks like a warning should also be raised when casting nan to integer; it is probably a small bug not to. I also suspect int(nan) should return a normal Python zero, not 0L.

Chuck |