From: Lisandro D. <da...@gm...> - 2006-10-13 20:09:36
|
This post is surely OT, but I cannot imagine a better place to contact people about this subject. Please, don't blame me. Is anyone here interested in NumPy/SciPy + MPI? For some time now I've been developing mpi4py (first release at SF) and I am very near to releasing a new version. This package exposes an API almost identical to the MPI-2 C++ bindings. Almost all MPI-1 and MPI-2 features (even one-sided communications and parallel I/O) are fully supported for any object exposing the single-segment buffer interface, and some of them for communication of general Python objects (with the help of pickle/marshal). The possibility of constructing user-defined MPI datatypes, as well as virtual topologies (especially Cartesian ones), can be really nice for anyone interested in parallel multidimensional array processing. Before the next release, I would like to wait for any comments. You can contact me via private mail to get a tarball with the latest developments, or we can have some discussion here, if many of you consider that a good idea. In the long term, I would like to see mpi4py integrated as a subpackage of SciPy. Regards, -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 |
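For readers who haven't seen mpi4py, here is a minimal sketch of the buffer-based communication described above. It is written against the released mpi4py API (MPI.COMM_WORLD, Send, Recv); the exact spelling in the pre-release tarball under discussion may differ.

    # Send a NumPy array from rank 0 to rank 1 without pickling:
    # the [buffer, datatype] form uses the single-segment buffer interface.
    import numpy
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        a = numpy.arange(10, dtype='d')
        comm.Send([a, MPI.DOUBLE], dest=1, tag=7)
    elif comm.Get_rank() == 1:
        b = numpy.empty(10, dtype='d')
        comm.Recv([b, MPI.DOUBLE], source=0, tag=7)

Run under an MPI launcher, e.g. "mpiexec -n 2 python script.py".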
From: A. M. A. <per...@gm...> - 2006-10-13 19:56:06
|
On 13/10/06, Charles R Harris <cha...@gm...> wrote: > You can also get *much* better results if you scale the x interval to [0,1] > as the problem will be better posed. For instance, with your data and a > degree 10 fit I get a condition number of about 2e7 when x is scaled to > [0,1], as opposed to about 1e36 when left as is. The former yields a > perfectly usable fit while the latter blows up. I suppose this could be > built into the polyfit routine if one were only interested in polynomial > fits of some sort, but the polynomial would have to carry around an offset > and scale factor to make evaluation work. [-1,1] would probably be even better, no? > If Travis is interested in such a thing we could put together some variant > of the polynomials that includes the extra data. At this point you might as well use a polynomial class that can accommodate a variety of bases for the space of polynomials - X^n, (X-a)^n, orthogonal polynomials (translated and scaled as needed), what have you. I think I vote for a polyfit that is no more clever than it has to be but which warns the user when the fit is bad. A. M. Archibald |
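A polynomial class that carries its own translation and scaling is essentially what NumPy's polynomial package later provided (it did not exist when this thread was written). A sketch with that later API, to illustrate the idea:

    # Fit in a Chebyshev basis; the fitted object remembers the data
    # domain and maps it to [-1, 1] internally, so evaluation just works.
    import numpy as np

    x = np.linspace(0.0, 4000.0, 50)          # wide dynamic range
    y = 3.0 - 2.0e-3 * x + 5.0e-7 * x**2
    c = np.polynomial.Chebyshev.fit(x, y, deg=2)
    print(c.domain)    # [0., 4000.] -- the offset/scale travel with the object
    print(c(2000.0))   # evaluation maps back through the domain automatically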
From: A. M. A. <per...@gm...> - 2006-10-13 19:52:55
|
On 12/10/06, Charles R Harris <cha...@gm...> wrote: > Hi all, > > I note that polyfit looks like it should work for single and double, real > and complex, polynomials. On the other hand, the default rcond needs to > depend on the underlying precision. On the other, other hand, all the SVD > computations are done with dgelsd or zgelsd, i.e., double precision. Even so, > problems can arise from inherent errors of the input data if it is single > precision to start with. I also think the final degree of the fit should be > available somewhere if wanted, as it is an indication of what is going on. > Sooo, any suggestions as to what to do? My initial impulse would be to set > rcond=1e-6 for single, 1e-14 for double, make rcond a keyword, and kick the > can down the road on returning the actual degree of the fit. I'd also be inclined to output a warning (which the user can ignore, read, or trap as necessary) if the condition number is too bad or they supplied an rcond that is too small for the precision of their data. A. M. Archibald |
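One plausible rule for tying rcond to the input precision, rather than hard-coding 1e-6/1e-14, is to scale machine epsilon for the data's dtype by the number of samples; later NumPy releases adopted essentially this as polyfit's default:

    import numpy as np

    def default_rcond(x):
        # machine epsilon of the input's floating type, scaled by sample count
        return len(x) * np.finfo(x.dtype).eps

    x32 = np.linspace(0, 1, 100, dtype=np.float32)
    x64 = np.linspace(0, 1, 100, dtype=np.float64)
    print(default_rcond(x32))   # ~1.2e-5 for single precision
    print(default_rcond(x64))   # ~2.2e-14 for double precision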
From: Charles R H. <cha...@gm...> - 2006-10-13 19:51:03
|
On 10/12/06, Greg Willden <gre...@gm...> wrote: > > On 10/12/06, Charles R Harris <cha...@gm...> wrote: > > > > And here is the location of the problem in numpy/linalg/linalg.py : > > > > def lstsq(a, b, rcond=1.e-10): > > > > The 1e-10 is a bit conservative. On the other hand, I will note that the > > condition number of the dot(V^T, V) matrix is somewhere around 1e22, which > > means in general terms that you need around 22 digits of accuracy. Inverting > > it only works sorta by accident in the current case. Generally, using > > Vandermonde matrices and polynomial fits is a bad idea when the dynamic > > range of the interval gets large and the degree gets up around 4-5, as it > > leads to ill-conditioned sets of equations. When you really need the best, > > start with Chebyshev polynomials or, bestest, compute a set of polynomials > > orthogonal over the sample points. Anyway, I think rcond should be something > > like 1e-12 or 1e-13 by default and be available as a keyword in the polyfit > > function. If no one complains I will make this change, although it is just a > > bandaid and things will fall apart again as soon as you call polyfit(x,y,4). > > > > > > Hey that's great. I'm glad you tracked it down. > > Pardon my ignorance of polyfit algorithm details. > Is there a way of choosing rcond based on N that would give sensible > defaults for a variety of N? > Greg You can also get *much* better results if you scale the x interval to [0,1], as the problem will be better posed. For instance, with your data and a degree 10 fit I get a condition number of about 2e7 when x is scaled to [0,1], as opposed to about 1e36 when left as is. The former yields a perfectly usable fit while the latter blows up. I suppose this could be built into the polyfit routine if one were only interested in polynomial fits of some sort, but the polynomial would have to carry around an offset and scale factor to make evaluation work. If Travis is interested in such a thing we could put together some variant of the polynomials that includes the extra data. Chuck |
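A sketch of the offset-and-scale bookkeeping described above (the function names are illustrative, not a proposed API):

    import numpy as np

    def scaled_polyfit(x, y, deg):
        x = np.asarray(x, dtype=float)
        off = x.min()
        scale = x.max() - off
        coeffs = np.polyfit((x - off) / scale, y, deg)  # fit on [0, 1]
        return coeffs, off, scale

    def scaled_polyval(coeffs, off, scale, x):
        # evaluation must apply the same mapping used for the fit
        return np.polyval(coeffs, (np.asarray(x, dtype=float) - off) / scale)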
From: Michael S. <msu...@gm...> - 2006-10-13 19:23:29
|
Following up on my own message for archival purposes, after getting local help. If you're having a problem like this, read the file called INSTALL.txt. The current NumPy tarball doesn't have this file, but the SciPy tarball does. You may need to reinstall Atlas/Lapack libraries using a different compiler. Michael On 10/11/06, Michael Subotin <msu...@gm...> wrote: > > Hi, > > I know that it's a perennial topic on the list, but I haven't been able to > find my answer in the archives. After running the installation on a RedHat > Linux machine, I'm getting the import error: "/usr/lib/libblas.so.3: > undefined symbol: e_wsfe". Judging from earlier exchanges here, it seems > that I need to add libg2c (which this machine does have in /usr/lib, unlike > g2c) somewhere between 'f77blas' and 'cblas', but I'm not sure where I > should make this change. Not being well versed in Python distributions, I > tried my luck with a few candidates and the import error remains. The > machine should be running gcc. > > Thanks for any help. > > Michael > |
From: Stefan v. d. W. <st...@su...> - 2006-10-13 15:22:18
|
Hi all, I've noticed that 'astype' always forces a copy. Is this behaviour intended? It seems to conflict with 'asarray', which tries to avoid a copy. For example, when wrapping code in ctypes, the following snippet would have been useful: def foo(x): # ensure x is an array of the right type x = N.ascontiguousarray(x).astype(N.intc) but that will cause a copy, so you'll have to do def foo(x): try: x = N.ascontiguousarray(x, N.intc) except: x = N.ascontiguousarray(x).astype(N.intc) Maybe I'm missing something obvious here -- any pointers? Thanks Stéfan |
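The difference is easy to verify: ascontiguousarray with a dtype hands back the original object when nothing needs converting, while astype unconditionally copies.

    import numpy as N

    x = N.arange(5, dtype=N.intc)
    y = N.ascontiguousarray(x, N.intc)        # dtype already matches
    print(y is x)                             # True -- no copy made
    z = N.ascontiguousarray(x).astype(N.intc)
    print(z is x)                             # False -- astype forced a copy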
From: Francesc A. <fa...@ca...> - 2006-10-13 14:09:11
|
Hi, Is it possible to test a numpy version directly from the source directory without having to install it? I mean, if I compile the sources and try to use the package directly from there, I get unexpected results. For example: $ export PYTHONPATH=/home/faltet/python.nobackup/numpy/trunk $ python2.4 -c "import numpy;print numpy.dtype([('col1', '(1,)i4')])" Running from numpy source directory. Traceback (most recent call last): File "<string>", line 1, in ? AttributeError: 'module' object has no attribute 'dtype' It would be nice to have a way of testing a recently built version of numpy prior to installing it. Thanks, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" "Be careful about using the following code -- I've only proven that it works, I haven't tested it." -- Donald Knuth |
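One approach that has worked for running NumPy from a checkout (an assumption worth verifying against your revision, since the source-directory check above may still trigger) is an in-place build, so the generated extension modules and config files land next to the sources:

    $ cd /home/faltet/python.nobackup/numpy/trunk
    $ python2.4 setup.py build_ext --inplace
    $ export PYTHONPATH=/home/faltet/python.nobackup/numpy/trunk

After that, the import no longer hits the bare, unbuilt source tree.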
From: Tim H. <tim...@ie...> - 2006-10-13 13:44:51
|
Bill Baxter wrote: > On 10/13/06, Tim Hochberg <tim...@ie...> wrote: > >> For this sort of thing, I >> would just make a new module to pull together the function I want and >> use that instead. It's then easy to explain that this new module bbeconf >> (Bill Baxter's Excellent Collection Of Numeric Functions) is actually an >> amalgamation of stuff from multiple sources. >> >> # bbeconf.py >> from numpy import * >> from numpy.lib.scimath import sqrt >> # possibly some other stuff to correctly handle subpackages... >> > > That does sound like a good way to do it. > Then you just tell your users to import 'eduNumpy' rather than numpy, > and you're good to go. > Added that suggestion to http://www.scipy.org/NegativeSquareRoot > > I'd like to ask one basic Python question related to my previous > suggestion of doing things like "numpy.sqrt = numpy.lib.scimath.sqrt": > In Python does that make it so that any module importing numpy in the > same program will now see the altered sqrt function? E.g. in my > program I do "import A,B". Module A alters numpy.sqrt. Does that > also modify how module B sees numpy.sqrt? > Indeed it does. Module imports are cached in sys.modules, so numpy is only imported once. (With some effort, you can usually get your own private copy of a module, that you could mess with to your heart's content, but I generally wouldn't recommend it). > If so then that's a very good reason not to do it that way. > > I've heard people using the term "monkey-patch" before. Is that what that is? > I believe that is what the term refers to, although I'm not absolutely certain. -tim |
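A self-contained demonstration of the caching Tim describes; a later import returns the same cached module object, patch and all:

    import sys
    import numpy
    import numpy.lib.scimath

    numpy.sqrt = numpy.lib.scimath.sqrt      # "module A" rebinds the name
    import numpy as np_again                 # what "module B" effectively does
    print(np_again is sys.modules['numpy'])  # True: one shared module object
    print(np_again.sqrt(-1))                 # 1j -- the patch is visible globally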
From: Greg W. <gre...@gm...> - 2006-10-13 13:36:56
|
Hi, I just tried to check out numpy and scipy on another machine and got the following errors: $ svn co http://svn.scipy.org/svn/numpy/trunk numpy svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request ( http://svn.scipy.org) $ svn co http://svn.scipy.org/svn/scipy/trunk scipy svn: REPORT request failed on '/svn/scipy/!svn/vcc/default' svn: REPORT of '/svn/scipy/!svn/vcc/default': 400 Bad Request ( http://svn.scipy.org) Any ideas? Greg -- Linux. Because rebooting is for adding hardware. |
From: Travis O. <oli...@ie...> - 2006-10-13 06:46:21
|
Bill Baxter wrote: > On 10/13/06, Tim Hochberg <tim...@ie...> wrote: > >> For this sort of thing, I >> would just make a new module to pull together the function I want and >> use that instead. It's then easy to explain that this new module bbeconf >> (Bill Baxter's Excellent Collection Of Numeric Functions) is actually an >> amalgamation of stuff from multiple sources. >> >> # bbeconf.py >> from numpy import * >> from numpy.lib.scimath import sqrt >> # possibly some other stuff to correctly handle subpackages... >> > > That does sound like a good way to do it. > Then you just tell your users to import 'eduNumpy' rather than numpy, > and you're good to go. > Added that suggestion to http://www.scipy.org/NegativeSquareRoot > > I'd like to ask one basic Python question related to my previous > suggestion of doing things like "numpy.sqrt = numpy.lib.scimath.sqrt": > In Python does that make it so that any module importing numpy in the > same program will now see the altered sqrt function? E.g. in my > program I do "import A,B". Module A alters numpy.sqrt. Does that > also modify how module B sees numpy.sqrt? > > If so then that's a very good reason not to do it that way. > > I've heard people using the term "monkey-patch" before. Is that what that is? > > --bb |
From: Bill B. <wb...@gm...> - 2006-10-13 05:38:50
|
On 10/13/06, Tim Hochberg <tim...@ie...> wrote: > For this sort of thing, I > would just make a new module to pull together the function I want and > use that instead. It's then easy to explain that this new module bbeconf > (Bill Baxter's Excellent Collection Of Numeric Functions) is actually an > amalgamation of stuff from multiple sources. > > # bbeconf.py > from numpy import * > from numpy.lib.scimath import sqrt > # possibly some other stuff to correctly handle subpackages... That does sound like a good way to do it. Then you just tell your users to import 'eduNumpy' rather than numpy, and you're good to go. Added that suggestion to http://www.scipy.org/NegativeSquareRoot I'd like to ask one basic Python question related to my previous suggestion of doing things like "numpy.sqrt = numpy.lib.scimath.sqrt": In Python does that make it so that any module importing numpy in the same program will now see the altered sqrt function? E.g. in my program I do "import A,B". Module A alters numpy.sqrt. Does that also modify how module B sees numpy.sqrt? If so then that's a very good reason not to do it that way. I've heard people using the term "monkey-patch" before. Is that what that is? --bb |
From: Tim H. <tim...@ie...> - 2006-10-13 05:14:11
|
Bill Baxter wrote: > > I think efficiency is not a very good argument for the default > behavior here, because -- let's face it -- if efficient execution were > high on your priority list, you wouldn't be using Python. I care very much about efficiency where it matters, which is only in a tiny fraction of my code. For the stuff that numpy does well it's pretty efficient, and when that's not enough I can drop down to C, but I don't have to do that often. In fact, I've argued, and still believe, that Python is frequently *more* efficient than C, given finite developer time, since it's easier to get the algorithms correct writing in Python. > And even if > you do care about efficiency, one of the top rules of optimization is > to first get it working, then get it working fast. > IMO, the current behavior is more likely to give you working code than auto-promoting to complex based on value. That concerns me more than efficiency. The whole auto-promotion thing looks like a good way to introduce data-dependent bugs that don't surface till late in the game and are hard to track down. In contrast, when the current scheme causes a problem it should surface almost immediately. I would not use scipy.sqrt in code, even if the efficiency were the same, for this reason. I can see the attraction in the auto-promoting version for teaching purposes and possibly for throwaway scripts, but not for "real" code. > Really, I'm just playing the devil's advocate here, because I don't > work with complex numbers (I see quaternions more often than complex > numbers). But I would be willing to do something like > numpy.use_realmath() > in my code if it would make numpy more palatable to a wider audience. > I wouldn't like it, however, if I had to do some import thing where I > have to remember forever after that I should type 'numpy.tanh()' but > 'realmath.arctanh()'. > As I mentioned in my other message, the way to do this is to have a different entry point with different behavior. > Anyway it seems like the folks who care about performance are the ones > who will generally be more willing to make tweaks like that. > It's not just about performance though. It's also about correctness, or more accurately, resistance to bugs. > But that's about all I have to say about this, since the status quo > works fine for me. So I'll be quiet. Just it seems like the > non-status-quo'ers here have some good points. I taught intro to > computer science to non-majors one semester. I know I would not want > to have to confront all the issues with numerical computing right off > the bat if I was just trying to teach people how to do some math. > There's probably nothing wrong with having a package like this, it just shouldn't be numpy. It's easy enough to construct such a beast for yourself; it should take just a few lines of Python. Since what a beginner's package should look like probably varies from teacher to teacher, let them construct a few. If they all, or at least most of them, have the same ideas about what constitutes such a package, that might be the time to think about officially supporting a separate entry point that has the modified behaviour. For the moment, things look fine. -tim |
From: Tim H. <tim...@ie...> - 2006-10-13 04:43:26
|
Bill Baxter wrote: > On 10/12/06, Stefan van der Walt <st...@su...> wrote: > >> On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote: > >>> On 10/11/06, Bill Baxter <wb...@gm...> wrote: > >> I tried to explain the argument at > >> > >> http://www.scipy.org/NegativeSquareRoot > >> > > > > The proposed fix for those who want sqrt(-1) to return 1j is: > > from numpy.lib import scimath as SM > SM.sqrt(-1) > > > But that creates a new namespace alias, different from numpy. So I'll > call numpy.array() to create a new array, but SM.sqrt() when I want a > square root. > Am I wrong to want some simple way to change the behavior of > numpy.sqrt itself? > > Seems like you can get that effect via something like: > > for n in numpy.lib.scimath.__all__: > numpy.__dict__[n] = numpy.lib.scimath.__dict__[n] > > If that sort of function were available as "numpy.use_scimath()", then > folks who want numpy to be like scipy can achieve that with just one > line at the top of their files. The import under a different name > doesn't quite achieve the goal of making that behavior numpy's > "default". > > I guess I'm thinking mostly of the educational uses of numpy, where > you may have users who haven't learned much about numerical > computing yet. I can just imagine the instructor starting off by > saying "ok everyone we're going to learn numpy today! First everyone > type this: 'import numpy, from numpy.lib import scimath as SM' -- > Don't worry about all the things there you don't understand." > Whereas "import numpy, numpy.use_scimath()" seems easier to explain > and much less intimidating as your first two lines of numpy to learn. > > Or is that just a bad idea for some reason? > > Isn't that just going to make your students *more* confused later when they run into the standard behavior of numpy? For this sort of thing, I would just make a new module to pull together the function I want and use that instead. It's then easy to explain that this new module bbeconf (Bill Baxter's Excellent Collection Of Numeric Functions) is actually an amalgamation of stuff from multiple sources. # bbeconf.py from numpy import * from numpy.lib.scimath import sqrt # possibly some other stuff to correctly handle subpackages... -tim |
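A possible use of the module sketched above, assuming it is saved as bbeconf.py somewhere on sys.path (bbeconf is, of course, Tim's hypothetical name):

    import bbeconf

    print(bbeconf.sqrt(-1))     # 1j, from the scimath replacement
    print(bbeconf.arange(3))    # everything else is plain numpy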
From: Jay P. <pa...@gm...> - 2006-10-13 03:15:52
|
I hate bumping my own messages, but does no one have any insight into this? On 10/9/06, Jay Parlar <pa...@gm...> wrote: > In the process of finally switching over to Python 2.5, and am trying > to build numpy. Unfortunately, it dies during the build: > > Jay-Computer:~/Desktop/numpy-1.0rc2 jayparlar$ python setup.py build > Running from numpy source directory. > F2PY Version 2_3296 > blas_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec', > '-I/System/Library/Frameworks/vecLib.framework/Headers'] > > lapack_opt_info: > FOUND: > extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] > define_macros = [('NO_ATLAS_INFO', 3)] > extra_compile_args = ['-faltivec'] > > running build > running config_fc > running build_src > building py_modules sources > building extension "numpy.core.multiarray" sources > Generating build/src.macosx-10.3-fat-2.5/numpy/core/config.h > customize NAGFCompiler > customize AbsoftFCompiler > customize IbmFCompiler > Could not locate executable g77 > Could not locate executable f77 > Could not locate executable gfortran > Could not locate executable f95 > customize GnuFCompiler > customize Gnu95FCompiler > customize G95FCompiler > customize GnuFCompiler > customize Gnu95FCompiler > customize NAGFCompiler > customize NAGFCompiler using config > C compiler: gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double > -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 > > compile options: > '-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 > -Inumpy/core/src -Inumpy/core/include > -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 > -c' > gcc: _configtest.c > gcc: cannot specify -o with -c or -S and multiple compilations > gcc: cannot specify -o with -c or -S and multiple compilations > failure. 
> removing: _configtest.c _configtest.o > numpy/core/setup.py:50: DeprecationWarning: raising a string exception > is deprecated > raise "ERROR: Failed to test configuration" > Traceback (most recent call last): > File "setup.py", line 89, in <module> > setup_package() > File "setup.py", line 82, in setup_package > configuration=configuration ) > File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/core.py", > line 174, in setup > return old_setup(**new_attr) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/core.py", > line 151, in setup > dist.run_commands() > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", > line 974, in run_commands > self.run_command(cmd) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", > line 994, in run_command > cmd_obj.run() > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/command/build.py", > line 112, in run > self.run_command(cmd_name) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/cmd.py", > line 333, in run_command > self.distribution.run_command(command) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", > line 994, in run_command > cmd_obj.run() > File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", > line 87, in run > self.build_sources() > File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", > line 106, in build_sources > self.build_extension_sources(ext) > File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", > line 212, in build_extension_sources > sources = self.generate_sources(sources, ext) > File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", > line 270, in generate_sources > source = func(extension, build_dir) > File "numpy/core/setup.py", line 50, in generate_config_h > raise "ERROR: Failed to test configuration" > ERROR: Failed to test configuration > > > This is with the Universal 2.5 binary, and OS X 10.3.9. > > Any ideas? Sorry if this one has been asked before, but I can't seem > to find a solution anywhere. > > Jay P. > |
From: Bill B. <wb...@gm...> - 2006-10-13 03:11:10
|
On 10/13/06, Charles R Harris <cha...@gm...> wrote: > > > On 10/12/06, Bill Baxter <wb...@gm...> wrote: > > On 10/12/06, Stefan van der Walt <st...@su...> wrote: > > > On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote: > > > > On 10/11/06, Bill Baxter < wb...@gm...> wrote: > > > I tried to explain the argument at > > > > > > http://www.scipy.org/NegativeSquareRoot > > > > > > > The proposed fix for those who want sqrt(-1) to return 1j is: > > > > from numpy.lib import scimath as SM > > SM.sqrt(-1) > > > > > > But that creates a new namespace alias, different from numpy. So I'll > > call numpy.array() to create a new array, but SM.sqrt() when I want a > > square root. > > Am I wrong to want some simple way to change the behavior of > > numpy.sqrt itself? > > > > Seems like you can get that effect via something like: > > > > for n in numpy.lib.scimath.__all__: > > numpy.__dict__[n] = numpy.lib.scimath.__dict__[n] > > I don't like either of those ideas, although the second seems preferable. I > think it better to make an efficient way of calling a sqrt routine that > accepts negative floats and returns complex numbers. The behaviour could be > chosen either by keyword or by specially named routines, or maybe even > some global flag, We have the "specially named routines" way already. "numpy.lib.scimath.sqrt" > but I don't think it is asking too much for the students to > learn that sqrt(-1) doesn't exist as a real number and that efficient > computation uses real whenever possible because it is a) smaller, and b) > faster. That way we also avoid having software that only works for scimath > but not for numpy. I think efficiency is not a very good argument for the default behavior here, because -- let's face it -- if efficient execution were high on your priority list, you wouldn't be using Python. And even if you do care about efficiency, one of the top rules of optimization is to first get it working, then get it working fast. Really, I'm just playing the devil's advocate here, because I don't work with complex numbers (I see quaternions more often than complex numbers). But I would be willing to do something like numpy.use_realmath() in my code if it would make numpy more palatable to a wider audience. I wouldn't like it, however, if I had to do some import thing where I have to remember forever after that I should type 'numpy.tanh()' but 'realmath.arctanh()'. Anyway it seems like the folks who care about performance are the ones who will generally be more willing to make tweaks like that. But that's about all I have to say about this, since the status quo works fine for me. So I'll be quiet. Just it seems like the non-status-quo'ers here have some good points. I taught intro to computer science to non-majors one semester. I know I would not want to have to confront all the issues with numerical computing right off the bat if I was just trying to teach people how to do some math. --bb |
From: Greg W. <gre...@gm...> - 2006-10-13 02:30:04
|
On 10/12/06, Charles R Harris <cha...@gm...> wrote: > > And here is the location of the problem in numpy/linalg/linalg.py : > > def lstsq(a, b, rcond=1.e-10): > > The 1e-10 is a bit conservative. On the other hand, I will note that the > condition number of the dot(V^T, V) matrix is somewhere around 1e22, which > means in general terms that you need around 22 digits of accuracy. Inverting > it only works sorta by accident in the current case. Generally, using > Vandermonde matrices and polynomial fits is a bad idea when the dynamic > range of the interval gets large and the degree gets up around 4-5, as it > leads to ill-conditioned sets of equations. When you really need the best, > start with Chebyshev polynomials or, bestest, compute a set of polynomials > orthogonal over the sample points. Anyway, I think rcond should be something > like 1e-12 or 1e-13 by default and be available as a keyword in the polyfit > function. If no one complains I will make this change, although it is just a > bandaid and things will fall apart again as soon as you call polyfit(x,y,4). > > Hey that's great. I'm glad you tracked it down. Pardon my ignorance of polyfit algorithm details. Is there a way of choosing rcond based on N that would give sensible defaults for a variety of N? Greg |
From: Charles R H. <cha...@gm...> - 2006-10-13 02:15:56
|
On 10/12/06, Bill Baxter <wb...@gm...> wrote: > On 10/12/06, Stefan van der Walt <st...@su...> wrote: > > On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote: > > > On 10/11/06, Bill Baxter <wb...@gm...> wrote: > > I tried to explain the argument at > > > > http://www.scipy.org/NegativeSquareRoot > > > > The proposed fix for those who want sqrt(-1) to return 1j is: > > from numpy.lib import scimath as SM > SM.sqrt(-1) > > > But that creates a new namespace alias, different from numpy. So I'll > call numpy.array() to create a new array, but SM.sqrt() when I want a > square root. > Am I wrong to want some simple way to change the behavior of > numpy.sqrt itself? > > Seems like you can get that effect via something like: > > for n in numpy.lib.scimath.__all__: > numpy.__dict__[n] = numpy.lib.scimath.__dict__[n] I don't like either of those ideas, although the second seems preferable. I think it better to make an efficient way of calling a sqrt routine that accepts negative floats and returns complex numbers. The behaviour could be chosen either by keyword or by specially named routines, or maybe even some global flag, but I don't think it is asking too much for the students to learn that sqrt(-1) doesn't exist as a real number and that efficient computation uses real whenever possible because it is a) smaller, and b) faster. That way we also avoid having software that only works for scimath but not for numpy. Chuck. |
From: Bill B. <wb...@gm...> - 2006-10-13 01:49:54
|
On 10/12/06, Stefan van der Walt <st...@su...> wrote: > On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote: > > On 10/11/06, Bill Baxter <wb...@gm...> wrote: > I tried to explain the argument at > > http://www.scipy.org/NegativeSquareRoot > The proposed fix for those who want sqrt(-1) to return 1j is: from numpy.lib import scimath as SM SM.sqrt(-1) But that creates a new namespace alias, different from numpy. So I'll call numpy.array() to create a new array, but SM.sqrt() when I want a square root. Am I wrong to want some simple way to change the behavior of numpy.sqrt itself? Seems like you can get that effect via something like: for n in numpy.lib.scimath.__all__: numpy.__dict__[n] = numpy.lib.scimath.__dict__[n] If that sort of function were available as "numpy.use_scimath()", then folks who want numpy to be like scipy can achieve that with just one line at the top of their files. The import under a different name doesn't quite achieve the goal of making that behavior numpy's "default". I guess I'm thinking mostly of the educational uses of numpy, where you may have users who haven't learned much about numerical computing yet. I can just imagine the instructor starting off by saying "ok everyone we're going to learn numpy today! First everyone type this: 'import numpy, from numpy.lib import scimath as SM' -- Don't worry about all the things there you don't understand." Whereas "import numpy, numpy.use_scimath()" seems easier to explain and much less intimidating as your first two lines of numpy to learn. Or is that just a bad idea for some reason? --bb |
From: David G. <Dav...@no...> - 2006-10-13 01:34:44
|
I find that acceptable for my purposes, but is there some way we can minimize the "surprise(s)" for newbies? (I know some suggestions have been put forward in this thread, but I don't know enough to cast a vote one way or another for any of those, just a vote for "please do it".) And, in closing, I too would like to: thank you very much for all your work - despite my "emotional outburst," I am generally very happy using numpy and regard it as an excellent product. DG Travis Oliphant wrote: > David Goldsmith wrote: > > >> Got it. And if I understand correctly, the import order you specify in >> the little mynumpy example you included in your latest response to >> Fernando will result in any "overlap" between numpy and >> numpy.lib.scimath to call the latter's version of things rather than the >> former's, yes? >> >> >> > > Right. The last import will be used for any common-names (variables get > re-bound to the new functions...) > > -Travis |
From: Charles R H. <cha...@gm...> - 2006-10-13 01:28:32
|
Hi all, I note that polyfit looks like it should work for single and double, real and complex, polynomials. On the other hand, the default rcond needs to depend on the underlying precision. On the other, other hand, all the SVD computations are done with dgelsd or zgelsd, i.e., double precision. Even so, problems can arise from inherent errors of the input data if it is single precision to start with. I also think the final degree of the fit should be available somewhere if wanted, as it is an indication of what is going on. Sooo, any suggestions as to what to do? My initial impulse would be to set rcond=1e-6 for single, 1e-14 for double, make rcond a keyword, and kick the can down the road on returning the actual degree of the fit. Chuck PS, what is the best way of converting arbitrary arrays to the appropriate C data types float and double? |
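On the PS: one conventional answer (a sketch, not the only way) is to ask for an explicit dtype via ascontiguousarray, which converts only when needed and maps directly onto the C float/double expected by the LAPACK wrappers:

    import numpy as np

    def as_c_double(a):
        # no-op if a is already a contiguous float64 array; converts otherwise
        return np.ascontiguousarray(a, dtype=np.float64)

    def as_c_float(a):
        return np.ascontiguousarray(a, dtype=np.float32)

    print(as_c_double([1, 2, 3]).dtype)   # float64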
From: <val...@bl...> - 2006-10-13 01:00:59
|
Wonderful, it works, thanks! Michele --- Discussion of Numerical Python <num...@li... wrote: On 10/12/06, Michele Vallisneri <va...@va...> wrote: > > Does anybody here have experience about offering the array interface > > from a SWIG-wrapped C struct? > > I have. > > > I have tried the following, borrowing code from numpy's arrayobject.c: > > > > %extend real_vec_t { > > PyObject *__array_struct__() { > > /* From numpy/arrayobject.c/array_struct_get */ > > You are extending real_vec_t with a new METHOD, but what numpy > requests is an ATTRIBUTE. So, numpy simply queries your vec like: > > arrstr = vec.__array_struct__ > > and not with a method call like this > > arrstr = vec.__array_struct__() > > So here is what I would do (can fail with some SWIG optimizations) > > %extend Vec { > > PyObject* __array_struct__ () { /* ... */ } > > %pythoncode { > __array_struct__ = property(__array_struct__, > doc='Array protocol') > } > > } > > Hope you got the idea. > > -- > Lisandro Dalcín > --------------- > Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) > Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) > Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) > PTLC - Güemes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 |
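Once __array_struct__ is exposed as a property, NumPy can wrap the SWIG object without copying. A hypothetical usage sketch (vec_module and real_vec_t stand in for whatever the actual SWIG module and constructor are called):

    import numpy
    from vec_module import real_vec_t   # hypothetical SWIG-generated module

    vec = real_vec_t(10)                # hypothetical constructor
    arr = numpy.asarray(vec)            # consumes vec.__array_struct__, no copy
    arr[:] = 0.0                        # writes through to the C buffer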
From: Neal B. <ndb...@gm...> - 2006-10-13 00:19:38
|
Has anyone built numpy-1.0rc2/scipy-0.5.1 on CentOS 4.4? It seems that a Fortran 90+ compiler is required, so I installed gcc4-gfortran-4.1.0-18.EL4 libgfortran-4.1.0-18.EL4 Didn't quite work though. Built numpy+scipy OK, but then: Warning: FAILURE importing tests for <module 'scipy.linalg.blas' from '...packages/scipy/linalg/blas.pyc'> /usr/lib/python2.3/site-packages/scipy/linalg/tests/test_blas.py:22: ImportError: /usr/lib/python2.3/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_ (in ?) Not sure yet what's causing this. I suspect it is because of a mixture of different compilers/libraries? Like, maybe blas was compiled with f77 from gcc3, while fblas was compiled with gfortran from gcc4? |
From: Charles R H. <cha...@gm...> - 2006-10-12 23:04:19
|
On 10/12/06, Charles R Harris <cha...@gm...> wrote: > > > > On 10/12/06, Charles R Harris <cha...@gm...> wrote: > > > > > > > > On 10/12/06, Greg Willden < gre...@gm...> wrote: > > > > > > On 10/12/06, Charles R Harris <cha...@gm... > wrote: > > > > > > > > I'm guessing that the rcond number in the lstsq version (default > > > > 1e-10) is the difference. Generally the lstsq version should work better > > > > than the MPL version because at*a is not as well conditioned and Vandermonde > > > > matrices are notoriously ill-conditioned anyway for higher degree fits. It > > > > would help if you attached the data files in ASCII form unless they happen > > > > to contain thousands of data items. Just the x will suffice, zip it up if > > > > you have to. > > > > > > > > > > Here are the files. > > > > > > Since the two algorithms behave differently and each has its place then > > > can both be included in numpy? > > > i.e. numpy.polyfit(x,y,N, mode='alg1') > > > numpy.polyfit (x,y,N, mode='alg2') > > > > > > replacing alg1 and alg2 with meaningful names. > > > > > > > The polyfit function looks seriously busted. If I do the fits by hand I > > get the same results using the (not so hot) MPL version or lstsq. I don't > > know what the problem is. The docstring is also incorrect for the method. > > Hmmm... > > > > Polyfit seems overly conservative in its choice of rcond. > > In [101]: lin.lstsq(v,y,1e-10)[0] > Out[101]: > array([ 5.84304475e-07, -5.51513630e-03, 1.51465472e+01, > 3.05631361e-02]) > In [107]: polyfit(x,y,3) > Out[108]: > array([ 5.84304475e-07, -5.51513630e-03, 1.51465472e+01, > 3.05631361e-02]) > > Compare > > In [112]: lin.lstsq(v,y,1e-12)[0] > Out[112]: > array([ -5.42970700e-07, 1.61425067e-03, 1.99260667e+00, > 6.51889107e+03]) > > In [113]: dot(lin.inv(vtv),dot(v.T,y)) > Out[113]: > array([ -5.42970700e-07, 1.61425067e-03, 1.99260667e+00, > 6.51889107e+03]) > > So the default needs to be changed somewhere. Probably polyfit should > accept rcond as a keyword. Where the actual problem lies is a bit obscure, as > the normal rcond default for lin.lstsq is 1e-12. Maybe some sort of import > error somewhere down the line. > And here is the location of the problem in numpy/linalg/linalg.py : def lstsq(a, b, rcond=1.e-10): The 1e-10 is a bit conservative. On the other hand, I will note that the condition number of the dot(V^T, V) matrix is somewhere around 1e22, which means in general terms that you need around 22 digits of accuracy. Inverting it only works sorta by accident in the current case. Generally, using Vandermonde matrices and polynomial fits is a bad idea when the dynamic range of the interval gets large and the degree gets up around 4-5, as it leads to ill-conditioned sets of equations. When you really need the best, start with Chebyshev polynomials or, bestest, compute a set of polynomials orthogonal over the sample points. Anyway, I think rcond should be something like 1e-12 or 1e-13 by default and be available as a keyword in the polyfit function. If no one complains I will make this change, although it is just a bandaid and things will fall apart again as soon as you call polyfit(x,y,4). Chuck |
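The ill-conditioning is easy to see numerically; a sketch with illustrative data (the thread's real data will differ):

    import numpy as np

    x = np.linspace(0.0, 4000.0, 50)       # wide-range abscissa
    v_raw = np.vander(x, 11)               # Vandermonde matrix, degree-10 fit
    v_scaled = np.vander(x / x.max(), 11)  # same fit with x scaled to [0, 1]
    print(np.linalg.cond(np.dot(v_raw.T, v_raw)))        # astronomically large
    print(np.linalg.cond(np.dot(v_scaled.T, v_scaled)))  # many orders smaller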