From: Jon W. <wr...@es...> - 2006-06-29 15:35:38
> Does it matter whether the lower or upper triangular part is stored?
> We should just pick one convention and stick with it. That is simpler
> than, say, ATLAS where the choice is one of the parameters passed to
> the subroutine. I vote for lower triangular myself, if only because
> that was my choice last time I implemented a Cholesky factorization.

Wouldn't a keyword argument make more sense? There would be a default,
but you wouldn't be denied access to ATLAS. It matters if you pass the
factorisation to legacy code which expects things to be a particular way
around.

Jon
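The convention under discussion can be checked directly; a minimal sketch (using today's numpy API, with a made-up example matrix) showing that numpy's Cholesky returns the lower-triangular factor, and that the SAS-style upper factor is just its transpose:

```python
import numpy as np

# A small symmetric positive-definite matrix (hypothetical example data).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# numpy.linalg.cholesky follows the lower-triangular convention:
# it returns L such that L L' == A.
L = np.linalg.cholesky(A)

# The SAS IML "ROOT"-style upper-triangular factor (U'U = A) is the transpose.
U = L.T
```

Going from one convention to the other therefore costs only a transpose, which is the trade-off mentioned later in this thread.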
From: Zhang L. <zha...@gm...> - 2006-06-29 15:23:36
> I'm going to take a wild-ass guess and suggest that was a conscious
> decision by the authors. Shadowing builtins is generally a no-no. You
> just need to be explicit instead of implicit:
>
> from numpy import min, max

I see. But why is sum exported by default? Is that a wise decision?

In [1]: from numpy import *

In [2]: help sum
------> help(sum)
Help on function sum in module numpy.core.oldnumeric:

sum(x, axis=0, dtype=None)
    ...

Zhang Le
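The explicit-import advice from this exchange can be sketched as follows (a sketch against modern numpy; exactly which names `from numpy import *` shadows has varied across versions, which is why being explicit is safer):

```python
import builtins
import numpy as np

# Being explicit about which sum/min you call avoids depending on what
# `from numpy import *` does or doesn't shadow.
data = [[1, 2], [3, 4]]

total_builtin = builtins.sum([3, 1, 2])   # the plain Python builtin
total_numpy = np.sum(data, axis=0)        # numpy's sum, with axis support
smallest = np.min(data)                   # numpy's min, used explicitly
```

The `axis` keyword is the practical reason numpy's version differs from the builtin, which only reduces over a flat sequence.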
From: <sk...@po...> - 2006-06-29 15:09:45
Zhang> I'm using 0.9.8 and find numpy.ndarray.min() is not exported to
Zhang> global space when doing a
Zhang> from numpy import *

I'm going to take a wild-ass guess and suggest that was a conscious
decision by the authors. Shadowing builtins is generally a no-no. You
just need to be explicit instead of implicit:

from numpy import min, max

Skip
From: Zhang L. <zha...@gm...> - 2006-06-29 14:58:00
Hi,

I'm using 0.9.8 and find numpy.ndarray.min() is not exported to global
space when doing a

from numpy import *

In [1]: from numpy import *

In [2]: help min
------> help(min)
Help on built-in function min in module __builtin__:

min(...)
    min(sequence) -> value
    min(a, b, c, ...) -> value

    With a single sequence argument, return its smallest item.
    With two or more arguments, return the smallest argument.

numpy.ndarray.max() is not available either. But the built-in sum() is
replaced by numpy.ndarray.sum() as expected. Is this a bug, or is it
intended that the user has to use numpy.ndarray.min() explicitly?

Cheers,
Zhang Le
From: Glen W. M. <Gle...@sw...> - 2006-06-29 14:52:03
Hello,

It seems that the 'order' parameter is explained neither in the docstring
nor in the "Guide to NumPy". I'm guessing that the alternative to the
default value of 'C' would be 'Fortran'?

Thanks,
Glen
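The guess above is essentially right; a sketch using the modern spelling of the parameter ('C' and 'F' rather than 'Fortran'), showing that it controls memory layout, not values:

```python
import numpy as np

# 'order' selects the memory layout: 'C' is row-major (C-style),
# 'F' is column-major (Fortran-style). The element values are identical
# either way; only the in-memory order differs.
a_c = np.array([[1, 2], [3, 4]], order='C')
a_f = np.array([[1, 2], [3, 4]], order='F')

c_contiguous = a_c.flags['C_CONTIGUOUS']
f_contiguous = a_f.flags['F_CONTIGUOUS']
```

The layout matters mainly when handing buffers to C or Fortran code, or for cache behaviour when iterating along one axis.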
From: Charles R H. <cha...@gm...> - 2006-06-29 14:46:20
All,

On 6/29/06, mf...@ae... <mf...@ae...> wrote:
>
> The SAS IML Cholesky function "root" returns upper triangular. Quoting
> the SAS documentation:
>
> The ROOT function performs the Cholesky decomposition of a matrix (for
> example, A) such that
>
>     U'U = A
>
> where U is upper triangular. The matrix A must be symmetric and
> positive definite.

Does it matter whether the lower or upper triangular part is stored? We
should just pick one convention and stick with it. That is simpler than,
say, ATLAS, where the choice is one of the parameters passed to the
subroutine. I vote for lower triangular myself, if only because that was
my choice last time I implemented a Cholesky factorization.

Chuck
From: <mf...@ae...> - 2006-06-29 13:16:52
The SAS IML Cholesky function "root" returns upper triangular. Quoting the
SAS documentation:

    The ROOT function performs the Cholesky decomposition of a matrix
    (for example, A) such that

        U'U = A

    where U is upper triangular. The matrix A must be symmetric and
    positive definite.

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power

"Keith Goodman" <kwgoodman@gmail.com> wrote on 06/27/2006 11:25 PM,
Re: [Numpy-discussion] Should cholesky return upper or lower triangular
matrix?:

> On 6/27/06, Robert Kern <rob...@gm...> wrote:
> > Keith Goodman wrote:
> > > Isn't the Cholesky decomposition by convention an upper triangular
> > > matrix? I noticed, by porting Octave code, that linalg.cholesky
> > > returns the lower triangular matrix.
> > >
> > > References:
> > >
> > > http://mathworld.wolfram.com/CholeskyDecomposition.html
> > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/chol.html
> >
> > Lower:
> > http://en.wikipedia.org/wiki/Cholesky_decomposition
> > http://www.math-linux.com/spip.php?article43
> > http://planetmath.org/?op=getobj&from=objects&id=1287
> > http://rkb.home.cern.ch/rkb/AN16pp/node33.html#SECTION000330000000000000000
> > http://www.riskglossary.com/link/cholesky_factorization.htm
> > http://www.library.cornell.edu/nr/bookcpdf/c2-9.pdf
> >
> > If anything, the convention appears to be lower-triangular. If you
> > give me a second, I'll show you that the wikipedia supports my claim.
>
> OK. Lower it is. It will save me a transpose when I calculate joint
> random variables.
From: Erin S. <eri...@gm...> - 2006-06-28 21:15:58
ANTLR was also used for GDL http://gnudatalanguage.sourceforge.net/ with
amazing results.

Erin

On 6/28/06, Mathew Yeates <my...@jp...> wrote:
> I've been looking at a project called ANTLR (www.antlr.org) to do the
> translation. Unfortunately, although I may have a Matlab grammar, it
> would still be a lot of work to use ANTLR. I'll look at some of the
> links that have been posted.
>
> Mathew
>
> Robert Kern wrote:
> > Vinicius Lobosco wrote:
> > > Let's just let those who want to try to do that and give our
> > > support? I would be happy if I could get some parts of my old
> > > matlab programs translated to Scipy.
> >
> > I do believe that, "Show me," is an *encouragement*. I am explicitly
> > encouraging Mathew to work towards that end. Sheesh.
From: Mathew Y. <my...@jp...> - 2006-06-28 20:15:13
I've been looking at a project called ANTLR (www.antlr.org) to do the
translation. Unfortunately, although I may have a Matlab grammar, it would
still be a lot of work to use ANTLR. I'll look at some of the links that
have been posted.

Mathew

Robert Kern wrote:
> Vinicius Lobosco wrote:
> > Let's just let those who want to try to do that and give our support?
> > I would be happy if I could get some parts of my old matlab programs
> > translated to Scipy.
>
> I do believe that, "Show me," is an *encouragement*. I am explicitly
> encouraging Mathew to work towards that end. Sheesh.
From: Keith G. <kwg...@gm...> - 2006-06-28 20:06:08
On 6/28/06, Travis Oliphant <oli...@ee...> wrote:
> This should be better behaved now in SVN. Thanks for the reports.

I'm impressed by how quickly features are added and bugs are fixed. And by
how quick it is to install a new version of numpy.

Thank you.
From: Fernando P. <fpe...@gm...> - 2006-06-28 19:46:09
On 6/28/06, David M. Cooke <co...@ph...> wrote:
> On Wed, 28 Jun 2006 13:18:35 -0600
> "Fernando Perez" <fpe...@gm...> wrote:
>
> > On 6/28/06, David M. Cooke <co...@ph...> wrote:
> >
> > > Done. I've also added a 'setupegg.py' module that wraps running
> > > 'setup.py' with an import of setuptools (it's based on the one used
> > > in matplotlib).
> > >
> > > easy_install still works, also.
> >
> > You beat me to it :)
> >
> > However, your patch has slightly different semantics from mine: if
> > bdist_egg fails to import, the rest of setuptools is still used. I
> > don't know if that's safe. My patch would consider /any/ failure in
> > the setuptools imports as a complete setuptools failure, and revert
> > out to basic distutils.
>
> Note that your patch will still import setuptools if the import of
> bdist_egg fails. And you can't get around that by putting the bdist_egg
> import first, as that imports setuptools first.

Well, but that's still done after the 'if "setuptools" in sys.modules'
check, just like yours. The only difference is that my patch treats a
later failure as a complete failure, and reverts out to old_setup being
pulled out of plain distutils.

> (I think bdist_egg was added sometime after 0.5; if your version of
> setuptools is *that* old, you'd be better off not having it installed.)

Then it's probably fine to leave it either way, as /in practice/ the two
approaches will produce the same results.

> The use of setuptools by numpy.distutils is in two forms: explicitly
> (controlled by this patch of code), and implicitly (because setuptools
> goes and patches distutils). Disabling the explicit use won't actually
> fix your problem with the 'install' command leaving .egg_info
> directories (which, incidentally, are pretty small), as that's done by
> the implicit behaviour.

It's not their size that matters, it's just that I don't like tools
littering around with stuff I didn't ask for. Yes, I like my code
directories tidy ;)

> [Really, distutils sucks. I think (besides refactoring) it needs its
> API documented better, or at least good conventions on where to hook
> into. setuptools and numpy.distutils do their best, but there's only so
> much you can do before everything goes fragile and breaks in unexpected
> ways.]

I do hate distutils, having fought it for a long time. Its piss-poor
dependency checking is one of its /many/ annoyances. For a package with as
long a compile time as scipy, it really sucks not to be able to just
modify random source files and trust that it will really recompile what's
needed (no more, no less).

Anyway, thanks for heeding this one. Hopefully one day somebody will do
the (painful) work of replacing distutils with something that actually
works (perhaps using scons for the build engine...) Until then, we'll trod
along with massively unnecessary rebuilds :)

Cheers,
f
From: David M. C. <co...@ph...> - 2006-06-28 19:37:37
On Wed, 28 Jun 2006 13:18:35 -0600 "Fernando Perez" <fpe...@gm...> wrote:

> On 6/28/06, David M. Cooke <co...@ph...> wrote:
>
> > Done. I've also added a 'setupegg.py' module that wraps running
> > 'setup.py' with an import of setuptools (it's based on the one used
> > in matplotlib).
> >
> > easy_install still works, also.
>
> You beat me to it :)
>
> However, your patch has slightly different semantics from mine: if
> bdist_egg fails to import, the rest of setuptools is still used. I
> don't know if that's safe. My patch would consider /any/ failure in
> the setuptools imports as a complete setuptools failure, and revert
> out to basic distutils.

Note that your patch will still import setuptools if the import of
bdist_egg fails. And you can't get around that by putting the bdist_egg
import first, as that imports setuptools first.

(I think bdist_egg was added sometime after 0.5; if your version of
setuptools is *that* old, you'd be better off not having it installed.)

The use of setuptools by numpy.distutils is in two forms: explicitly
(controlled by this patch of code), and implicitly (because setuptools
goes and patches distutils). Disabling the explicit use won't actually fix
your problem with the 'install' command leaving .egg_info directories
(which, incidentally, are pretty small), as that's done by the implicit
behaviour.

[Really, distutils sucks. I think (besides refactoring) it needs its API
documented better, or at least good conventions on where to hook into.
setuptools and numpy.distutils do their best, but there's only so much you
can do before everything goes fragile and breaks in unexpected ways.]

With the "if 'setuptools' in sys.modules" test, if you *are* using
setuptools, you must have explicitly requested that, and so I think a
failure on import of setuptools shouldn't be silently passed over.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Keith G. <kwg...@gm...> - 2006-06-28 19:23:40
On 6/28/06, Travis Oliphant <oli...@ee...> wrote:
> Keith Goodman wrote:
> > On 6/28/06, Pau Gargallo <pau...@gm...> wrote:
> > > i don't know why 'where' is returning matrices.
> > > if you use:
> > >
> > > >>> idx = where(y.A > 0.5)[0]
> > >
> > > everything will work fine (I guess)
> >
> > What about the second issue? Is this expected behavior?
> >
> > >> idx
> > array([0, 1, 2])
> >
> > >> y
> > matrix([[ 0.63731308],
> >         [ 0.34282663],
> >         [ 0.53366791]])
> >
> > >> y[idx]
> > matrix([[ 0.63731308],
> >         [ 0.34282663],
> >         [ 0.53366791]])
> >
> > >> y[idx,0]
> > matrix([[ 0.63731308,  0.34282663,  0.53366791]])
> >
> > I was expecting a column vector.
>
> This should be better behaved now in SVN. Thanks for the reports.

Now numpy can do

    y[y > 0.5]

instead of

    y[where(y.A > 0.5)[0]]

where, for example, y = asmatrix(rand(3,1)).

I know I'm pushing my luck here. But one more feature would make this
perfect. Currently y[y>0.5,:] returns the first column even if y has more
than one column. Returning all columns would make it perfect. Example:

>> y
matrix([[ 0.38828902,  0.91649964],
        [ 0.41074001,  0.7105919 ],
        [ 0.15460833,  0.16746956]])

>> y[y[:,1]>0.5,:]
matrix([[ 0.38828902],
        [ 0.41074001]])

A better answer for matrix users would be:

>> y[(0,1),:]
matrix([[ 0.38828902,  0.91649964],
        [ 0.41074001,  0.7105919 ]])
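The behaviour requested here is what boolean indexing on a plain ndarray does; a sketch with an ndarray (matrix semantics at the time differed, as the session above shows), using the same example values:

```python
import numpy as np

# Same example data as above, as a plain ndarray rather than a matrix.
y = np.array([[0.38828902, 0.91649964],
              [0.41074001, 0.7105919 ],
              [0.15460833, 0.16746956]])

# Boolean indexing along the rows keeps all columns:
rows = y[y[:, 1] > 0.5, :]   # rows 0 and 1, both columns
```

The boolean vector `y[:, 1] > 0.5` selects whole rows, so the result has the full set of columns, matching the "better answer" shown above.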
From: Travis O. <oli...@ee...> - 2006-06-28 19:18:47
jo...@st... wrote:

> Hi,
>
> [TO]: NumPy uses Numeric's old wrapper to lapack algorithms.
> [TO]:
> [TO]: SciPy uses its own f2py-generated wrapper (it doesn't rely on the
> [TO]: NumPy wrapper).
> [TO]:
> [TO]: The numpy.dual library exists so you can use the SciPy calls if
> [TO]: the person has SciPy installed or the NumPy ones otherwise. It
> [TO]: exists precisely for the purpose of seamlessly taking advantage
> [TO]: of algorithms/interfaces that exist in NumPy but are improved in
> [TO]: SciPy.
>
> This strikes me as a little bit odd. Why not just provide the
> best-performing function to both SciPy and NumPy? Would NumPy be more
> difficult to install if the SciPy algorithm for inv() was incorporated?

The main issue is that SciPy can take advantage of and use Fortran code,
but NumPy cannot, as it must build without a Fortran compiler. This is the
primary driver of the current duality.

-Travis
From: Fernando P. <fpe...@gm...> - 2006-06-28 19:18:41
On 6/28/06, David M. Cooke <co...@ph...> wrote:
> Done. I've also added a 'setupegg.py' module that wraps running
> 'setup.py' with an import of setuptools (it's based on the one used in
> matplotlib).
>
> easy_install still works, also.

You beat me to it :)

However, your patch has slightly different semantics from mine: if
bdist_egg fails to import, the rest of setuptools is still used. I don't
know if that's safe. My patch would consider /any/ failure in the
setuptools imports as a complete setuptools failure, and revert out to
basic distutils.

Let me know if you want me to put in my code instead; here's a patch from
my code against current svn (after your patch), in case you'd like to try
it out.

Cheers,
f

Index: core.py
===================================================================
--- core.py	(revision 2701)
+++ core.py	(working copy)
@@ -1,20 +1,30 @@
-
 import sys
 from distutils.core import *
 
-if 'setuptools' in sys.modules:
-    have_setuptools = True
-    from setuptools import setup as old_setup
-    # easy_install imports math, it may be picked up from cwd
-    from setuptools.command import develop, easy_install
+# Don't pull setuptools in unless the user explicitly requests by having it
+# imported (Andrew's trick).
+have_setuptools = 'setuptools' in sys.modules
+
+# Even if setuptools is in, do a few things carefully to make sure the version
+# is recent enough to have everything we need before assuming we can proceed
+# using setuptools throughout
+if have_setuptools:
     try:
-        # very old versions of setuptools don't have this
+        from setuptools import setup as old_setup
+        # very old setuptools don't have this
         from setuptools.command import bdist_egg
+        # easy_install imports math, it may be picked up from cwd
+        from setuptools.command import develop, easy_install
     except ImportError:
+        # Any failure here is probably due to an old or broken setuptools
+        # leftover in sys.modules, so treat it as if it simply weren't
+        # available.
         have_setuptools = False
-else:
+
+# If setuptools was flagged as unavailable due to import problems, we need the
+# basic distutils support
+if not have_setuptools:
     from distutils.core import setup as old_setup
-    have_setuptools = False
 
 from numpy.distutils.extension import Extension
 from numpy.distutils.command import config
From: David M. C. <co...@ph...> - 2006-06-28 19:17:50
On Wed, 28 Jun 2006 11:22:38 -0600 "Fernando Perez" <fpe...@gm...> wrote:

> On 6/28/06, Robert Kern <rob...@gm...> wrote:
>
> > The Capitalized versions are actually old typecodes for backwards
> > compatibility with Numeric. In recent development versions of numpy,
> > they are no longer exposed except through the numpy.oldnumeric
> > compatibility module. A decision was made for numpy to use the actual
> > width of a type in its name instead of the width of its component
> > parts (when it has parts).
> >
> > Code in scipy which still requires actual string typecodes is a bug.
> > Please report such cases on the Trac:
> >
> > http://projects.scipy.org/scipy/scipy
>
> Well, an easy way to make all those poke their ugly heads in a hurry
> would be to remove line 32 in scipy's init:
>
> longs[Lib]> grep -n oldnum *py
> __init__.py:31:import numpy.oldnumeric as _num
> __init__.py:32:from numpy.oldnumeric import *
>
> If we really want to push for the new api, I think it's fair to change
> those two lines by simply
>
> from numpy import oldnumeric
>
> so that scipy also exposes oldnumeric, and let all deprecated names be
> hidden there.
>
> I just tried this change:
>
> Index: __init__.py
> ===================================================================
> --- __init__.py	(revision 2012)
> +++ __init__.py	(working copy)
> @@ -29,9 +29,8 @@
>
>  # Import numpy symbols to scipy name space
>  import numpy.oldnumeric as _num
> -from numpy.oldnumeric import *
> -del lib
> -del linalg
> +from numpy import oldnumeric
> +
>  __all__ += _num.__all__
>  __doc__ += """
>  Contents
>
> and scipy's test suite still passes (modulo the test_cobyla thingie
> Nils is currently fixing, which is not related to this).
>
> Should I apply this patch, so we push the cleaned-up API even a bit
> harder?

Yes please. I think all the modules that still use the oldnumeric names
actually import numpy.oldnumeric themselves.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Fernando P. <fpe...@gm...> - 2006-06-28 19:11:40
On 6/28/06, Robert Kern <rob...@gm...> wrote:
> numpy.distutils uses setuptools if it is importable in order to make
> sure that the two don't stomp on each other. It's probable that that
> test could be done with Andrew Straw's method:
>
> if 'setuptools' in sys.modules:
>     have_setuptools = True
>     from setuptools import setup as old_setup
> else:
>     have_setuptools = False
>     from distutils.core import setup as old_setup
>
> Tested patches welcome.

Well, tested as in 'I wrote a unittest for installation', no. But tested
as in 'I built numpy, scipy, matplotlib, and my f2py-using code', yes.
They all build/install fine, and no more *egg-info directories are strewn
around. If this satisfies your 'tested patches', the code is:

Index: numpy/distutils/core.py
===================================================================
--- numpy/distutils/core.py	(revision 2698)
+++ numpy/distutils/core.py	(working copy)
@@ -1,16 +1,30 @@
-
 import sys
 from distutils.core import *
 
-try:
-    from setuptools import setup as old_setup
-    # very old setuptools don't have this
-    from setuptools.command import bdist_egg
-    # easy_install imports math, it may be picked up from cwd
-    from setuptools.command import develop, easy_install
-    have_setuptools = 1
-except ImportError:
+
+# Don't pull setuptools in unless the user explicitly requests by having it
+# imported (Andrew's trick).
+have_setuptools = 'setuptools' in sys.modules
+
+# Even if setuptools is in, do a few things carefully to make sure the version
+# is recent enough to have everything we need before assuming we can proceed
+# using setuptools throughout
+if have_setuptools:
+    try:
+        from setuptools import setup as old_setup
+        # very old setuptools don't have this
+        from setuptools.command import bdist_egg
+        # easy_install imports math, it may be picked up from cwd
+        from setuptools.command import develop, easy_install
+    except ImportError:
+        # Any failure here is probably due to an old or broken setuptools
+        # leftover in sys.modules, so treat it as if it simply weren't
+        # available.
+        have_setuptools = False
+
+# If setuptools was flagged as unavailable due to import problems, we need the
+# basic distutils support
+if not have_setuptools:
     from distutils.core import setup as old_setup
-    have_setuptools = 0
 
 from numpy.distutils.extension import Extension
 from numpy.distutils.command import config

May I?

keeping-the-world-setuptools-free-one-script-at-a-time-ly yours,
f
From: David M. C. <co...@ph...> - 2006-06-28 19:10:43
On Wed, 28 Jun 2006 13:32:15 -0500 Robert Kern <rob...@gm...> wrote:

> Fernando Perez wrote:
>
> > Is it really necessary to have all that setuptools junk left around,
> > for those of us who aren't asking for it explicitly? My personal
> > opinions on setuptools aside, I think it's just a sane practice not
> > to create this kind of extra baggage unless explicitly requested.
> >
> > I scoured my home directory for any .file which might be triggering
> > this inadvertently, but I can't seem to find any, so I'm going to
> > guess this is somehow being caused by numpy's own setup. If it's my
> > own mistake, I'll be happy to be shown how to coexist peacefully with
> > setuptools.
> >
> > Since this also affects user code (I think via f2py or something
> > internal to numpy, since all I'm calling is f2py in my code), I
> > really think it would be nice to clean it.
>
> numpy.distutils uses setuptools if it is importable in order to make
> sure that the two don't stomp on each other. It's probable that that
> test could be done with Andrew Straw's method:
>
> if 'setuptools' in sys.modules:
>     have_setuptools = True
>     from setuptools import setup as old_setup
> else:
>     have_setuptools = False
>     from distutils.core import setup as old_setup
>
> Tested patches welcome.

Done. I've also added a 'setupegg.py' module that wraps running 'setup.py'
with an import of setuptools (it's based on the one used in matplotlib).

easy_install still works, also.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Fernando P. <fpe...@gm...> - 2006-06-28 19:09:34
On 6/28/06, David M. Cooke <co...@ph...> wrote:
> On Wed, 28 Jun 2006 11:22:38 -0600
> "Fernando Perez" <fpe...@gm...> wrote:
> > Should I apply this patch, so we push the cleaned-up API even a bit
> > harder?
>
> Yes please. I think all the modules that still use the oldnumeric names
> actually import numpy.oldnumeric themselves.

Done, r2017. I also committed the simple one-liner:

Index: weave/inline_tools.py
===================================================================
--- weave/inline_tools.py	(revision 2016)
+++ weave/inline_tools.py	(working copy)
@@ -402,7 +402,7 @@
 def compile_function(code,arg_names,local_dict,global_dict,
                      module_dir,
                      compiler='',
-                     verbose = 0,
+                     verbose = 1,
                      support_code = None,
                      headers = [],
                      customize = None,

from a discussion we had a few weeks ago; I'd forgotten to put it in. I
did it as a separate patch (r2018) so it can be reverted separately if
anyone objects.

Cheers,
f
From: David M. C. <co...@ph...> - 2006-06-28 18:48:32
On Wed, 28 Jun 2006 03:22:28 -0500 Robert Kern <rob...@gm...> wrote:

> jo...@st... wrote:
> > Hi,
> >
> > [TO]: NumPy uses Numeric's old wrapper to lapack algorithms.
> > [TO]:
> > [TO]: SciPy uses its own f2py-generated wrapper (it doesn't rely on
> > [TO]: the NumPy wrapper).
> > [TO]:
> > [TO]: The numpy.dual library exists so you can use the SciPy calls if
> > [TO]: the person has SciPy installed or the NumPy ones otherwise. It
> > [TO]: exists precisely for the purpose of seamlessly taking advantage
> > [TO]: of algorithms/interfaces that exist in NumPy but are improved
> > [TO]: in SciPy.
> >
> > This strikes me as a little bit odd. Why not just provide the
> > best-performing function to both SciPy and NumPy? Would NumPy be more
> > difficult to install if the SciPy algorithm for inv() was
> > incorporated?
>
> That's certainly the case for the FFT algorithms. Scipy wraps more (and
> more complicated) FFT libraries that are faster than FFTPACK.
>
> Most of the linalg functionality should probably be wrapping the same
> routines if an optimized LAPACK is available. However, changing the
> routine used in numpy in the absence of an optimized LAPACK would
> require reconstructing the f2c'ed lapack_lite library that we include
> with the numpy source. That hasn't been touched in so long that I would
> hesitate to do so. If you are willing to do the work and the testing to
> ensure that it still works everywhere, we'd probably accept the change.

Annoying to redo (as tracking down *good* LAPACK sources is a chore), but
hardly as bad as it was. I added the scripts I used to generate
lapack_lite.c et al. to numpy/linalg/lapack_lite in svn. These are the
same things that were used to generate those files in recent versions of
Numeric (which numpy uses). You only need to specify the top-level
routines; the scripts find the dependencies.

I'd suggest using the source for LAPACK that Debian uses; the maintainer,
Camm Maguire, has done a bunch of work adding patches to fix routines that
have been floating around. For instance, eigenvalues works better than
before (a lot fewer segfaults).

With this, the hard part is writing the wrapper routines. If someone wants
to wrap extra routines, I can do the lapack_lite generation for them.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Travis O. <oli...@ee...> - 2006-06-28 18:47:46
Keith Goodman wrote:

> On 6/28/06, Pau Gargallo <pau...@gm...> wrote:
> > i don't know why 'where' is returning matrices.
> > if you use:
> >
> > >>> idx = where(y.A > 0.5)[0]
> >
> > everything will work fine (I guess)
>
> What about the second issue? Is this expected behavior?
>
> >> idx
> array([0, 1, 2])
>
> >> y
> matrix([[ 0.63731308],
>         [ 0.34282663],
>         [ 0.53366791]])
>
> >> y[idx]
> matrix([[ 0.63731308],
>         [ 0.34282663],
>         [ 0.53366791]])
>
> >> y[idx,0]
> matrix([[ 0.63731308,  0.34282663,  0.53366791]])
>
> I was expecting a column vector.

This should be better behaved now in SVN. Thanks for the reports.

-Travis
From: David M. C. <co...@ph...> - 2006-06-28 18:42:06
On Wed, 28 Jun 2006 10:55:36 +0200 Jon Wright <wr...@es...> wrote:

> Poking around in the svn of numpy.linalg appears to find the same
> lapack routine as Numeric (dsyevd). Perhaps I miss something in the
> code logic?

It's actually *exactly* the same as the latest Numeric :-) It hasn't been
touched much.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Robert K. <rob...@gm...> - 2006-06-28 18:32:46
Fernando Perez wrote:

> Is it really necessary to have all that setuptools junk left around,
> for those of us who aren't asking for it explicitly? My personal
> opinions on setuptools aside, I think it's just a sane practice not to
> create this kind of extra baggage unless explicitly requested.
>
> I scoured my home directory for any .file which might be triggering
> this inadvertently, but I can't seem to find any, so I'm going to guess
> this is somehow being caused by numpy's own setup. If it's my own
> mistake, I'll be happy to be shown how to coexist peacefully with
> setuptools.
>
> Since this also affects user code (I think via f2py or something
> internal to numpy, since all I'm calling is f2py in my code), I really
> think it would be nice to clean it.

numpy.distutils uses setuptools if it is importable in order to make sure
that the two don't stomp on each other. It's probable that that test could
be done with Andrew Straw's method:

if 'setuptools' in sys.modules:
    have_setuptools = True
    from setuptools import setup as old_setup
else:
    have_setuptools = False
    from distutils.core import setup as old_setup

Tested patches welcome.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
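The method described here hinges on a membership test against `sys.modules`; a self-contained sketch of the idea (with a hypothetical helper name, and returning a label instead of actually importing either package, since distutils is absent from recent Pythons):

```python
import sys

def pick_setup_backend():
    # Only use setuptools if the calling script already imported it
    # (i.e. it is in the module cache); otherwise fall back to plain
    # distutils. This mirrors the snippet above without importing either.
    if 'setuptools' in sys.modules:
        return 'setuptools'
    return 'distutils'

backend = pick_setup_backend()
```

The point of the trick is that merely *installing* setuptools never triggers its use; only a script that explicitly does `import setuptools` first opts in.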
From: Pau G. <pau...@gm...> - 2006-06-28 18:25:18
On 6/28/06, Keith Goodman <kwg...@gm...> wrote:
> On 6/28/06, Pau Gargallo <pau...@gm...> wrote:
> > i don't know why 'where' is returning matrices.
> > if you use:
> >
> > >>> idx = where(y.A > 0.5)[0]
> >
> > everything will work fine (I guess)
>
> What about the second issue? Is this expected behavior?
>
> >> idx
> array([0, 1, 2])
>
> >> y
> matrix([[ 0.63731308],
>         [ 0.34282663],
>         [ 0.53366791]])
>
> >> y[idx]
> matrix([[ 0.63731308],
>         [ 0.34282663],
>         [ 0.53366791]])
>
> >> y[idx,0]
> matrix([[ 0.63731308,  0.34282663,  0.53366791]])
>
> I was expecting a column vector.

I have never played with matrices, but if y were an array, y[idx,0] would
be an array of the same shape as idx, that is, a 1d array. I guess that
when y is a matrix, this 1d array is converted to a matrix and becomes a
row vector. I don't know if this behaviour is wanted :-(

cheers,
pau
From: Keith G. <kwg...@gm...> - 2006-06-28 18:04:11
On 6/28/06, Pau Gargallo <pau...@gm...> wrote:
> i don't know why 'where' is returning matrices.
> if you use:
>
> >>> idx = where(y.A > 0.5)[0]
>
> everything will work fine (I guess)

What about the second issue? Is this expected behavior?

>> idx
array([0, 1, 2])

>> y
matrix([[ 0.63731308],
        [ 0.34282663],
        [ 0.53366791]])

>> y[idx]
matrix([[ 0.63731308],
        [ 0.34282663],
        [ 0.53366791]])

>> y[idx,0]
matrix([[ 0.63731308,  0.34282663,  0.53366791]])

I was expecting a column vector.