Archived messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | 8 | 49 | 48 | 28 | 37 | 28 | 16 | 16 | 44 | 61 | 31 | 24 |
| 2001 | 56 | 54 | 41 | 71 | 48 | 32 | 53 | 91 | 56 | 33 | 81 | 54 |
| 2002 | 72 | 37 | 126 | 62 | 34 | 124 | 36 | 34 | 60 | 37 | 23 | 104 |
| 2003 | 110 | 73 | 42 | 8 | 76 | 14 | 52 | 26 | 108 | 82 | 89 | 94 |
| 2004 | 117 | 86 | 75 | 55 | 75 | 160 | 152 | 86 | 75 | 134 | 62 | 60 |
| 2005 | 187 | 318 | 296 | 205 | 84 | 63 | 122 | 59 | 66 | 148 | 120 | 70 |
| 2006 | 460 | 683 | 589 | 559 | 445 | 712 | 815 | 663 | 559 | 930 | 373 | |
From: David H. <dav...@gm...> - 2006-07-16 01:38:19
|
2006/7/14, Nick Fotopoulos <nv...@mi...>: > > Any other suggestions? > Hi Nick, I had some success by coding the integrand in fortran and wrapping it with f2py. If your probability density function is standard, you may find it in the flib library of the PyMC module of Chris Fonnesbeck ( a library of likelihood functions coded in f77) and save the trouble. Hope this helps, David |
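A minimal sketch of the approach David describes: code the integrand in Fortran, wrap it with f2py, and hand the compiled routine to scipy.integrate.quad. The file name, subroutine name, and the Gaussian density used here are illustrative placeholders rather than anything from the post, and the import only works after the f2py build step has been run.

```python
# integrand.f90 (hypothetical) could contain a plain scalar density:
#
#   double precision function gauss_pdf(x, mu, sigma)
#     double precision :: x, mu, sigma
#     gauss_pdf = exp(-0.5d0*((x - mu)/sigma)**2) / (sigma*sqrt(8.0d0*atan(1.0d0)))
#   end function gauss_pdf
#
# built once with:  f2py -c -m integrand integrand.f90
from scipy.integrate import quad
import integrand  # the f2py-generated extension module (assumed built as above)

# quad now calls compiled code at every abscissa instead of a Python function.
result, err = quad(lambda x: integrand.gauss_pdf(x, 0.0, 1.0), -5.0, 5.0)
print(result)  # ~1.0: a unit Gaussian integrated over +/- 5 sigma
```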
From: Keith G. <kwg...@gm...> - 2006-07-15 15:14:57
|
On 7/15/06, Steve Lianoglou <lis...@ar...> wrote: > 1) Did you install ubuntu by way of Parallels, or are you running > linux "full steam" Yes, it is a full steam dual boot system. My default boot is Ubuntu. > 2) If you did install ubuntu alongside/over/whatever mac os x and > didn't use parallels .. are there any links you can share w/ a few > tutorials? Is everything (else: like hardware support) working well? > (Maybe off list since it's OT -- but maybe others are interested) Here are the references I used: http://bin-false.org/?p=17 http://www.planetisaac.com/articles/ubuntuinstall.html (The second one, which was an annotated version of the first one, seems to have disappeared.) No, not everything is working. And the installation is not easy (for me). I think it will become easy in a few months. But numpy and scipy were easy to compile! If you run into any problems, just send me a note off list. |
From: Steve L. <lis...@ar...> - 2006-07-15 14:57:34
|
Hi Keith, I don't have any answers for you but ... 1) Did you install ubuntu by way of Parallels, or are you running linux "full steam" 2) If you did install ubuntu alongside/over/whatever mac os x and didn't use parallels .. are there any links you can share w/ a few tutorials? Is everything (else: like hardware support) working well? (Maybe off list since it's OT -- but maybe others are interested) 3) If you still have your Mac OS X partition lying around, the full r- platform installer [1] for the mac comes with an installer for a threaded compile of ATLAS (specidifally for the intel duo's .. somehow) so maybe you'd like to try to use that and save yourself some time ... but perhaps that whole malloc vs dmalloc report [2] might apply to this as well ... which I guess would suggest to use linux for maximal performance (refer to q 2 :-) -steve [1] : http://cran.r-project.org/bin/macosx/ [2] : http://sekhon.berkeley.edu/macosx/intel.html On Jul 14, 2006, at 9:22 PM, Keith Goodman wrote: > Is there much speed to be gained by compiling atlas for a dual core > system? > > I'm running Ubuntu on a Macbook. It's the first time I've had a dual > core system. > > My one line benchmark shows that the Macbook is slow compared to my > (old) desktop. > >>> t1=time.time();x=randn(500,1000);x = > x*x.T;a,b=linalg.eigh(x);t2=time.time();print t2-t1 > 1.31429600716 > > My desktop is less than half of that. > > > ---------------------------------------------------------------------- > --- > Using Tomcat but need to do more? Need to support web services, > security? > Get stuff done quickly with pre-integrated technology to make your > job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > http://sel.as-us.falkag.net/sel? > cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion |
From: Fernando P. <fpe...@gm...> - 2006-07-15 01:36:09
|
On 7/14/06, Bill Baxter <wb...@gm...> wrote: > I believe that's the problem that the indexing PEP from Travis is > supposed to address: > http://www.python.org/dev/peps/pep-0357/ > So I think there's not much anyone can do about it untill the PEP is > accepted and implemented. > > Maybe you can cast to int? > > In [34]: (1,2)[int(a[0]==b)] Yup, that's the workaround I'm using. I was just wondering if comparisons between array scalars shouldn't return /true/ booleans (which can be used as indices) instead of array scalar booleans. I realize that the __index__ support in 2.5 will make this point moot (the PEP you point to), but perhaps this particular one can actually be fixed today, for all users of python pre-2.5. However, I haven't really wrapped my head enough around all the subtleties of array scalars to know whether 'fixing' this particular problem will introduce other, worse ones. Cheers, f |
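A short sketch of the workaround Bill and Fernando settle on: comparing two array scalars yields a numpy bool scalar, which pre-2.5 Python will not accept as a tuple index, but an explicit int() or bool() cast makes it indexable.

```python
import numpy as np

a = np.array([1, 2])
b = 1

flag = a[0] == b           # a numpy bool scalar (np.bool_), not a plain Python bool
print(type(flag))          # <class 'numpy.bool_'>

# The cast discussed in the thread turns it into something any sequence accepts:
print((1, 2)[int(flag)])   # -> 2
print((1, 2)[bool(flag)])  # -> 2, since a real Python bool is a valid index
```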
From: Bill B. <wb...@gm...> - 2006-07-15 01:26:09
|
I believe that's the problem that the indexing PEP from Travis is supposed to address: http://www.python.org/dev/peps/pep-0357/ So I think there's not much anyone can do about it untill the PEP is accepted and implemented. Maybe you can cast to int? > In [34]: (1,2)[int(a[0]==b)] --bb On 7/15/06, Fernando Perez <fpe...@gm...> wrote: > Hi all, > > I just got bit by this problem, and I'm not really sure if this is > something that should be considered a numpy bug or not. It is a bit > unpleasant, at the least, though it can be easily worked around. > Here's a simple demonstration: > > In [32]: a=array([1,2]) > > In [33]: b=1 > > In [34]: (1,2)[a[0]==b] > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /home/fperez/research/code/mwmerge/mwadap-merge/mwadap/test/<ipython console> > > TypeError: tuple indices must be integers > > > Whereas this works: > > In [35]: c=2 > > In [36]: (1,2)[c==b] > Out[36]: 1 > > Basically, it seems that array scalars, upon comparisons, return > something that doesn't 'looke enough like an int' to python for it to > let you use it as an index, it's a 'boolscalar': > > In [38]: a0==b0 > Out[38]: True > > In [39]: type _ > -------> type(_) > Out[39]: <type 'boolscalar'> > > > Advice? Is this a numpy bug? Or should it just be left alone, and > this kind of inconvenience will disappear when 2.5 is out, with > __index__ support? > > Cheers, > > f > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 |
From: Keith G. <kwg...@gm...> - 2006-07-15 01:22:21
|
Is there much speed to be gained by compiling atlas for a dual core system? I'm running Ubuntu on a Macbook. It's the first time I've had a dual core system. My one line benchmark shows that the Macbook is slow compared to my (old) desktop. >> t1=time.time();x=randn(500,1000);x = x*x.T;a,b=linalg.eigh(x);t2=time.time();print t2-t1 1.31429600716 My desktop is less than half of that. |
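For comparing ATLAS builds across machines it helps to keep array construction out of the timed region. Here is a sketch of a comparable measurement with timeit that times only the eigendecomposition of a 500x500 symmetric matrix (formed with a matrix product, which is presumably what the one-liner intends).

```python
import timeit
import numpy as np

x = np.random.randn(500, 1000)
s = x @ x.T                               # 500 x 500 symmetric matrix

# Time only the eigendecomposition, averaged over several calls.
t = timeit.timeit(lambda: np.linalg.eigh(s), number=5) / 5
print(f"eigh on 500x500: {t:.3f} s per call")
```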
From: Fernando P. <fpe...@gm...> - 2006-07-15 00:53:01
|
Hi all, I just got bit by this problem, and I'm not really sure if this is something that should be considered a numpy bug or not. It is a bit unpleasant, at the least, though it can be easily worked around. Here's a simple demonstration: In [32]: a=array([1,2]) In [33]: b=1 In [34]: (1,2)[a[0]==b] --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/fperez/research/code/mwmerge/mwadap-merge/mwadap/test/<ipython console> TypeError: tuple indices must be integers Whereas this works: In [35]: c=2 In [36]: (1,2)[c==b] Out[36]: 1 Basically, it seems that array scalars, upon comparisons, return something that doesn't 'look enough like an int' to python for it to let you use it as an index, it's a 'boolscalar': In [38]: a0==b0 Out[38]: True In [39]: type _ -------> type(_) Out[39]: <type 'boolscalar'> Advice? Is this a numpy bug? Or should it just be left alone, and this kind of inconvenience will disappear when 2.5 is out, with __index__ support? Cheers, f |
From: Robert K. <rob...@gm...> - 2006-07-14 22:00:50
|
Keith Goodman wrote: > I like seeing the bug reports on the list. It is an easy way to get > alerts of what to look out for. There are read-only mailing lists for new and updated tickets: http://www.scipy.org/Mailing_Lists -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Keith G. <kwg...@gm...> - 2006-07-14 21:54:12
|
On 7/14/06, Bill Baxter <wb...@gm...> wrote: > On 7/15/06, Robert Kern <rob...@gm...> wrote: > > Nils Wagner wrote: > > > > > >>> p = linalg.pinv(a) > > > Traceback (most recent call last): > > > File "<stdin>", line 1, in ? > > > File "/usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py", line > > > 426, in pinv > > > if issubclass(a.dtype.dtype, complexfloating): > > > AttributeError: 'numpy.dtype' object has no attribute 'dtype' > > > > Unless if you have something substantive to actually add to the discussion, the > > ticket you submitted is sufficient to report the bug. There are reasons we have > > a bug tracker, and one of them is to keep otherwise contentless bug reports off > > the mailing list. > > If that's really the policy, then this page should probably be changed: > http://www.scipy.org/Developer_Zone > > Quote: > "Bugs should be reported to one of the appropriate Mailing Lists. Do > this first, and open a ticket on the corresponding developer's wiki if > necessary." I like seeing the bug reports on the list. It is an easy way to get alerts of what to look out for. |
From: Sven S. <sve...@gm...> - 2006-07-14 21:16:43
|
Victoria G. Laidler schrieb: > > I understand that for interactive use, short names are more convenient; > but shouldn't they be available aliases to the more general names? Since > numpy is primarily a software library, I wouldn't expect it to sacrifice > a standard best-practice in order to make things more convenient for > interactive use. I don't necessarily agree that numpy should aim to be primarily a library, but I'm with you concerning the alias idea. However, iirc there was some discussion recently on this list about the dual solution (long names as well as short ones in parallel), and some important numpy people had some reservations, although I don't remember exactly what those were -- probably some Python Zen issues ("there should be only one way to get to Rome", was that it? -- just kidding). > > If the concern is for for matlab compatibility, maybe a synonym module > numpy.as_matlab could define all the synonyms, that matlab users could > then use? That would make more sense to me than inflicting obscure > matlab names on the rest of the user community. >From superficially browsing through the numpy guide my subjective impression is that function names are mostly pretty short. So maybe the alias thing should work the other way around, making long names available in a module numpy.long_names_for_typing_addicts (again, a bad joke...) cheers, Sven |
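A sketch of what the alias-module idea would look like in practice; the module name and the particular long names here are hypothetical, not an existing numpy feature.

```python
# long_names.py (hypothetical): descriptive aliases bound to the existing
# short-named functions, so there is still exactly one implementation.
from numpy.linalg import pinv as generalized_inverse
from numpy.linalg import inv as inverse
from numpy.linalg import svd as singular_value_decomposition

# A caller who prefers readable names would then write:
#   from long_names import generalized_inverse
#   B = generalized_inverse(A)
```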
From: David M. C. <co...@ph...> - 2006-07-14 19:21:15
|
On Fri, 14 Jul 2006 12:29:39 -0500 Robert Kern <rob...@gm...> wrote: > Nils Wagner wrote: > > > >>> p = linalg.pinv(a) > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > File "/usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py", line > > 426, in pinv > > if issubclass(a.dtype.dtype, complexfloating): > > AttributeError: 'numpy.dtype' object has no attribute 'dtype' > > Unless if you have something substantive to actually add to the discussion, > the ticket you submitted is sufficient to report the bug. There are reasons > we have a bug tracker, and one of them is to keep otherwise contentless bug > reports off the mailing list. For an obvious typo like this, made on a recent commit (i.e., in the last day), it's probably just enough to email the committer (me, in this case). Or, at least, I don't mind. [this kind of error sneaking by is b/c we don't have enough test cases :-)] -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |co...@ph... |
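For reference, a sketch of what the one-attribute fix amounts to (the actual commit may have differed): a numpy dtype object has no .dtype attribute, but its .type attribute holds the scalar type that the complex check in pinv needs.

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# broken:   issubclass(a.dtype.dtype, np.complexfloating)   -> AttributeError
# working:  use the scalar type stored on the dtype instead.
print(issubclass(a.dtype.type, np.complexfloating))                 # False for a float array
print(issubclass(np.array([1j]).dtype.type, np.complexfloating))    # True for a complex array
```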
From: Robert K. <rob...@gm...> - 2006-07-14 19:12:10
|
Bill Baxter wrote: > On 7/15/06, Robert Kern <rob...@gm...> wrote: >> Nils Wagner wrote: >> >>> >>> p = linalg.pinv(a) >>> Traceback (most recent call last): >>> File "<stdin>", line 1, in ? >>> File "/usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py", line >>> 426, in pinv >>> if issubclass(a.dtype.dtype, complexfloating): >>> AttributeError: 'numpy.dtype' object has no attribute 'dtype' >> Unless if you have something substantive to actually add to the discussion, the >> ticket you submitted is sufficient to report the bug. There are reasons we have >> a bug tracker, and one of them is to keep otherwise contentless bug reports off >> the mailing list. > > If that's really the policy, then this page should probably be changed: > http://www.scipy.org/Developer_Zone > > Quote: > "Bugs should be reported to one of the appropriate Mailing Lists. Do > this first, and open a ticket on the corresponding developer's wiki if > necessary." Yes, that *should* be fixed. In January it was a reasonable policy, but with the ticket mailing lists, it's not useful anymore. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Bill B. <wb...@gm...> - 2006-07-14 19:01:33
|
On 7/15/06, Robert Kern <rob...@gm...> wrote: > Nils Wagner wrote: > > > >>> p = linalg.pinv(a) > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > File "/usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py", line > > 426, in pinv > > if issubclass(a.dtype.dtype, complexfloating): > > AttributeError: 'numpy.dtype' object has no attribute 'dtype' > > Unless if you have something substantive to actually add to the discussion, the > ticket you submitted is sufficient to report the bug. There are reasons we have > a bug tracker, and one of them is to keep otherwise contentless bug reports off > the mailing list. If that's really the policy, then this page should probably be changed: http://www.scipy.org/Developer_Zone Quote: "Bugs should be reported to one of the appropriate Mailing Lists. Do this first, and open a ticket on the corresponding developer's wiki if necessary." --Bill |
From: Nick F. <nv...@MI...> - 2006-07-14 17:39:21
|
On Jul 14, 2006, at 12:56 PM, Tim Hochberg wrote: <snip> > I think I'd try psyco (http://psyco.sourceforge.net/). That's > pretty painless to try and may result in a significant improvement. I've been doing more and more development on my PPC Mac, where psyco is not an option. If the speed issue really gets to me, I can run things with psyco on a linux box. Thanks, Nick |
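For completeness, the usual way psyco was enabled, guarded so the same script still runs on platforms where it is unavailable (such as the PPC Mac mentioned here); this is a sketch of Tim's suggestion, not a claim about how much it helps.

```python
# psyco only supports 32-bit x86 interpreters, so import it defensively.
try:
    import psyco
    psyco.full()       # JIT-specialize everything; psyco.bind(fn) targets one function
except ImportError:
    pass               # plain CPython elsewhere (e.g. PPC), with no code changes needed
```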
From: Robert K. <rob...@gm...> - 2006-07-14 17:29:46
|
Nils Wagner wrote: > >>> p = linalg.pinv(a) > Traceback (most recent call last): > File "<stdin>", line 1, in ? > File "/usr/lib64/python2.4/site-packages/numpy/linalg/linalg.py", line > 426, in pinv > if issubclass(a.dtype.dtype, complexfloating): > AttributeError: 'numpy.dtype' object has no attribute 'dtype' Unless if you have something substantive to actually add to the discussion, the ticket you submitted is sufficient to report the bug. There are reasons we have a bug tracker, and one of them is to keep otherwise contentless bug reports off the mailing list. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Travis N. V. <tr...@en...> - 2006-07-14 17:19:19
|
Greetings, The SciPy 2006 Conference (http://www.scipy.org/SciPy2006) is August 17-18 this year. The deadline for early registration is *today*, July 14, 2006. The registration price will increase from $100 to $150 after today. You can register online at https://www.enthought.com/scipy06 . We invite everyone attending the conference to also attend the Coding Sprints on Monday-Tuesday , August 14-15 and also the Tutorials Wednesday, August 16. There is no additional charge for these sessions. A *tentative* schedule of talks has now been posted. http://www.scipy.org/SciPy2006/Schedule We look forward to seeing you at CalTech in August! Best, Travis |
From: Tim H. <tim...@co...> - 2006-07-14 16:56:48
|
Nick Fotopoulos wrote: > On Jul 13, 2006, at 10:17 PM, Tim Hochberg wrote: > > >> Nick Fotopoulos wrote: >> >>> Dear all, >>> >>> I often make use of numpy.vectorize to make programs read more >>> like the physics equations I write on paper. numpy.vectorize is >>> basically a wrapper for numpy.frompyfunc. Reading Travis's Scipy >>> Book (mine is dated Jan 6 2005) kind of suggests to me that it >>> returns a full- fledged ufunc exactly like built-in ufuncs. >>> >>> First, is this true? >>> >> Well according to type(), the result of frompyfunc is indeed of >> type ufunc, so I would say the answer to that is yes. >> >>> Second, how is the performance? >>> >> A little timing indicates that it's not good (about 30 X slower for >> computing x**2 than doing it using x*x on an array). . That's not >> frompyfunc (or vectorizes) fault though. It's calling a python >> function at each point, so the python function call overhead is >> going to kill you. Not to mention instantiating an actual Python >> object or objects at each point. >> > > That's unfortunate since I tend to nest functions quite deeply and > then scipy.integrate.quad over them, which I'm sure results in a > ridiculous number of function calls. Are anonymous lambdas any > different than named functions in terms of performance? > Sorry, no. Under the covers they're the same. >>> i.e., are my functions performing approximately as fast as they >>> could be or would they still gain a great deal of speed by >>> rewriting it in C or some other compiled python accelerator? >>> >>> >> Can you give examples of what these functions look like? You might >> gain a great deal of speed by rewriting them in numpy in the >> correct way. Or perhaps not, but it's probably worth showing some >> examples so we can offer suggestions or at least admit that we are >> stumped. >> > > This is by far the slowest bit of my code. I cache the results, so > it's not too bad, but any upstream tweak can take a lot of CPU time > to propagate. > > @autovectorized > def dnsratezfunc(z): > """Take coalescence time into account."" > def integrand(zf): > return Pz(z,zf)*NSbirthzfunc(zf) > return quad(integrand,delayedZ(2e5*secperyear+1,z),5)[0] > dnsratez = lambdap*dnsratezfunc(zs) > > where: > > # Neutron star formation rate is a delayed version of star formation > rate > NSbirthzfunc = autovectorized(lambda z: SFRz(delayedZ > (1e8*secperyear,z))) > > def Pz(z_c,z_f): > """Return the probability density per unit redshift of a DNS > coalescence at z_c given a progenitor formation at z_f. """ > return P(t(z_c,z_f))*dtdz(z_c) > > and there are many further nested levels of function calls. If the > act of calling a function is more expensive than actually executing > it and I value speed over readability/code reuse, I can inline Pz's > function calls and inline the unvectorized NSbirthzfunc to reduce the > calling stack a bit. Any other suggestions? > I think I'd try psyco (http://psyco.sourceforge.net/). That's pretty painless to try and may result in a significant improvement. -tim |
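A rough, self-contained way to reproduce the overhead Tim describes, comparing a np.vectorize-wrapped Python function against the equivalent whole-array expression; the exact ratio is machine-dependent, but the order of magnitude is the point.

```python
import timeit
import numpy as np

def square(v):
    return v * v

vsquare = np.vectorize(square)       # thin wrapper around np.frompyfunc
x = np.random.randn(100_000)

t_vec  = timeit.timeit(lambda: vsquare(x), number=20)
t_expr = timeit.timeit(lambda: x * x, number=20)
print(f"vectorize: {t_vec:.3f} s   x*x: {t_expr:.3f} s   ratio ~{t_vec / t_expr:.0f}x")
```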
From: Nick F. <nv...@MI...> - 2006-07-14 16:44:00
|
On Jul 13, 2006, at 10:17 PM, Tim Hochberg wrote: > Nick Fotopoulos wrote: >> Dear all, >> >> I often make use of numpy.vectorize to make programs read more >> like the physics equations I write on paper. numpy.vectorize is >> basically a wrapper for numpy.frompyfunc. Reading Travis's Scipy >> Book (mine is dated Jan 6 2005) kind of suggests to me that it >> returns a full- fledged ufunc exactly like built-in ufuncs. >> >> First, is this true? > Well according to type(), the result of frompyfunc is indeed of > type ufunc, so I would say the answer to that is yes. >> Second, how is the performance? > A little timing indicates that it's not good (about 30 X slower for > computing x**2 than doing it using x*x on an array). . That's not > frompyfunc (or vectorizes) fault though. It's calling a python > function at each point, so the python function call overhead is > going to kill you. Not to mention instantiating an actual Python > object or objects at each point. That's unfortunate since I tend to nest functions quite deeply and then scipy.integrate.quad over them, which I'm sure results in a ridiculous number of function calls. Are anonymous lambdas any different than named functions in terms of performance? > >> i.e., are my functions performing approximately as fast as they >> could be or would they still gain a great deal of speed by >> rewriting it in C or some other compiled python accelerator? >> > Can you give examples of what these functions look like? You might > gain a great deal of speed by rewriting them in numpy in the > correct way. Or perhaps not, but it's probably worth showing some > examples so we can offer suggestions or at least admit that we are > stumped. This is by far the slowest bit of my code. I cache the results, so it's not too bad, but any upstream tweak can take a lot of CPU time to propagate. @autovectorized def dnsratezfunc(z): """Take coalescence time into account."" def integrand(zf): return Pz(z,zf)*NSbirthzfunc(zf) return quad(integrand,delayedZ(2e5*secperyear+1,z),5)[0] dnsratez = lambdap*dnsratezfunc(zs) where: # Neutron star formation rate is a delayed version of star formation rate NSbirthzfunc = autovectorized(lambda z: SFRz(delayedZ (1e8*secperyear,z))) def Pz(z_c,z_f): """Return the probability density per unit redshift of a DNS coalescence at z_c given a progenitor formation at z_f. """ return P(t(z_c,z_f))*dtdz(z_c) and there are many further nested levels of function calls. If the act of calling a function is more expensive than actually executing it and I value speed over readability/code reuse, I can inline Pz's function calls and inline the unvectorized NSbirthzfunc to reduce the calling stack a bit. Any other suggestions? Thanks, Tim. Take care, Nick |
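One further option, beyond inlining and psyco, is to replace the innermost quad call with a fixed-grid integration so the integrand is evaluated with whole-array operations instead of one Python call per abscissa. This is only a sketch: Pz_array and NSbirthz_array stand in for hypothetical array-aware versions of Pz and NSbirthzfunc (the placeholder bodies below are not the real physics), and the grid size has to be checked against quad for accuracy.

```python
import numpy as np

# Placeholder stand-ins for array-aware versions of Pz and NSbirthzfunc.
def Pz_array(z_c, z_f):
    return np.exp(-(z_f - z_c) ** 2)

def NSbirthz_array(z_f):
    return 1.0 / (1.0 + z_f)

def dnsratez_grid(z, z_lo, z_hi=5.0, n=2000):
    """Grid-based stand-in for the quad call in dnsratezfunc (sketch only)."""
    zf = np.linspace(z_lo, z_hi, n)                     # formation-redshift grid
    integrand = Pz_array(z, zf) * NSbirthz_array(zf)    # pure array ops, no per-point calls
    return np.trapz(integrand, zf)

print(dnsratez_grid(0.5, 0.6))
```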
From: Andrew J. <a.h...@gm...> - 2006-07-14 16:35:41
|
David Cournapeau wrote: > Andrew Jaffe wrote: >> Hi All, >> >> I have just switched from RHEL to debian, and all of a sudden I started >> getting floating point exception errors in various contexts. >> >> Apparently, this has to do with some faulty error stuff in glibc, >> specifically related to the sse. I would prefer to fix the actual >> problem, but I can't seem to be able to get the recommended 'apt-get >> source glibc' incantation to work (I'm not root, but I can sudo.) >> > What does not work ? The apt-get source part ? The actual building ? > > Basically, if the sources are OK, you just need to do > - fakeroot dpkg-source -x name_of_dsc_file > - cd name_of_package > - fakeroot dpkg-buildpackage > > And that's it Aha -- I didn't know about fakeroot... Thanks! That was the problem with apt-get source. I'm compiling the patched version now... wow, is it slow! >> I was able to fix some of these issues by simply downgrading ATLAS to >> not use sse instructions anymore. >> >> But f2py still links with sse and sse2 by default. I can't quite >> understand the configuration well enough to work out how to turn it off. >> Can someone give me any guidance? >> > The way it is supposed to work, at least on debian and ubuntu, is that > you never link to the sse/sse2 versions, but to the non-optimized > versions. After, the dynamic loader will get the right one (ie optimized > if available) instead of the one linked (this of course only works for > dynamic linking). You can check which library is picked with a ldconfig > - p | grep lapack (for lapack functions, and so on...) The problem with f2py isn't the atlas/lapack linkage, which it does correctly, but the fact that it automatically appends -sse2 to the g77 compile options; I couldn't figure out how to turn that off! Although now I'm not so sure, since I can never get my self-compiled executable version of my fortran code to give the same error as when it runs within python. But with the patched glibc, I think I'm alright in any event! Thanks! A |
From: Eric F. <ef...@ha...> - 2006-07-14 16:29:10
|
Alan G Isaac wrote: > On Fri, 14 Jul 2006, Sven Schreiber apparently wrote: > >>So maybe that's a feature request, complementing the >>nansum function by a nanaverage? > > > This is not an objection; just an observation. > It has always seemed to me that such descriptive > statistics make more sense as class attributes. > In this case, something like a NanDstat class. Attached is something like that, in case someone finds it useful. It is designed to replace something I wrote a long time ago for matlab. It is only very lightly tested, so use with care. Eric |
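A minimal sketch of the requested nanaverage in the spirit of nansum: average over the non-NaN entries only (later numpy releases ship this as numpy.nanmean).

```python
import numpy as np

def nanaverage(a, axis=None):
    a = np.asarray(a, dtype=float)
    mask = ~np.isnan(a)
    total = np.where(mask, a, 0.0).sum(axis=axis)
    count = mask.sum(axis=axis)
    return total / count            # all-NaN slices divide by zero and warn

x = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan]])
print(nanaverage(x))                # 3.25
print(nanaverage(x, axis=0))        # [2.5  5.  3.]
```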
From: Andrew S. <str...@as...> - 2006-07-14 16:25:52
|
GNU libc version 2.3.2 has a bug[1] "feclearexcept() error on CPUs with SSE" (fixed in 2.3.3) which has been submitted to Debian[2] but not fixed in sarge. See http://www.its.caltech.edu/~astraw/coding.html#id3 for more information and .debs which fix the problem. [1] http://sources.redhat.com/bugzilla/show_bug.cgi?id=10 [2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=279294 If only somebody could show how this was a security issue, we could get Debian to finally release a fix in sarge for this (frequently asked) question. Andrew Jaffe wrote: > Hi All, > > I have just switched from RHEL to debian, and all of a sudden I started > getting floating point exception errors in various contexts. > > Apparently, this has to do with some faulty error stuff in glibc, > specifically related to the sse. I would prefer to fix the actual > problem, but I can't seem to be able to get the recommended 'apt-get > source glibc' incantation to work (I'm not root, but I can sudo.) > > I was able to fix some of these issues by simply downgrading ATLAS to > not use sse instructions anymore. > > But f2py still links with sse and sse2 by default. I can't quite > understand the configuration well enough to work out how to turn it off. > Can someone give me any guidance? > > Thanks, > > Andrew > > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > |
From: Victoria G. L. <la...@st...> - 2006-07-14 16:17:08
|
Jonathan Taylor wrote: > pseudoinverse > > it's the same name matlab uses: > > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html Thanks for the explanation. I'm puzzled by the naming choice, however. Standard best practice in writing software is to give understandable names, to improve readability and code maintenance. Obscure abbreviations like "pinv" pretty much went out with the FORTRAN 9-character limit for variable names. It's very unusual to see them in new software nowadays, and it always looks unprofessional to me. I understand that for interactive use, short names are more convenient; but shouldn't they be available aliases to the more general names? Since numpy is primarily a software library, I wouldn't expect it to sacrifice a standard best-practice in order to make things more convenient for interactive use. If the concern is for for matlab compatibility, maybe a synonym module numpy.as_matlab could define all the synonyms, that matlab users could then use? That would make more sense to me than inflicting obscure matlab names on the rest of the user community. Vicki Laidler > > Victoria G. Laidler wrote: > >> Sven Schreiber wrote: >> >> >> >>> Jon Peirce schrieb: >>> >>> >>> >>> >>>> There used to be a function generalized_inverse in the numpy.linalg >>>> module (certainly in 0.9.2). >>>> >>>> In numpy0.9.8 it seems to have been moved to the numpy.linalg.old >>>> subpackage. Does that mean it's being dropped? Did it have to move? >>>> Now i have to add code to my package to try both locations because >>>> my users might have any version... :-( >>>> >>>> >>>> >>>> >>> >>> Maybe I don't understand, but what's wrong with numpy.linalg.pinv? >>> >>> >>> >> >> Er, what's a pinv? It doesn't sound anything like a generalized_inverse. >> >> Vicki Laidler >> >> >> >> ------------------------------------------------------------------------- >> >> Using Tomcat but need to do more? Need to support web services, >> security? >> Get stuff done quickly with pre-integrated technology to make your >> job easier >> Download IBM WebSphere Application Server v.1.0.1 based on Apache >> Geronimo >> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >> _______________________________________________ >> Numpy-discussion mailing list >> Num...@li... >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> > |
From: Jonathan T. <jon...@st...> - 2006-07-14 15:59:40
|
pseudoinverse it's the same name matlab uses: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html Victoria G. Laidler wrote: >Sven Schreiber wrote: > > > >>Jon Peirce schrieb: >> >> >> >> >>>There used to be a function generalized_inverse in the numpy.linalg >>>module (certainly in 0.9.2). >>> >>>In numpy0.9.8 it seems to have been moved to the numpy.linalg.old >>>subpackage. Does that mean it's being dropped? Did it have to move? Now >>>i have to add code to my package to try both locations because my users >>>might have any version... :-( >>> >>> >>> >>> >>> >>> >>Maybe I don't understand, but what's wrong with numpy.linalg.pinv? >> >> >> >> >Er, what's a pinv? It doesn't sound anything like a generalized_inverse. > >Vicki Laidler > > > >------------------------------------------------------------------------- >Using Tomcat but need to do more? Need to support web services, security? >Get stuff done quickly with pre-integrated technology to make your job easier >Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo >http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 >_______________________________________________ >Numpy-discussion mailing list >Num...@li... >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 |
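A quick check of what pinv delivers, for readers who know it as the generalized (Moore-Penrose) inverse: it satisfies A @ pinv(A) @ A == A even for rectangular matrices, and it reduces to the ordinary inverse when one exists.

```python
import numpy as np

A = np.random.randn(4, 6)                  # rectangular, so no ordinary inverse
Ap = np.linalg.pinv(A)
print(np.allclose(A @ Ap @ A, A))          # True: the defining Moore-Penrose property

B = np.random.randn(3, 3)                  # almost surely full rank
print(np.allclose(np.linalg.pinv(B), np.linalg.inv(B)))   # True: pinv agrees with inv here
```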
From: Victoria G. L. <la...@st...> - 2006-07-14 15:56:51
|
Sven Schreiber wrote: >Jon Peirce schrieb: > > >>There used to be a function generalized_inverse in the numpy.linalg >>module (certainly in 0.9.2). >> >>In numpy0.9.8 it seems to have been moved to the numpy.linalg.old >>subpackage. Does that mean it's being dropped? Did it have to move? Now >>i have to add code to my package to try both locations because my users >>might have any version... :-( >> >> >> >> > >Maybe I don't understand, but what's wrong with numpy.linalg.pinv? > > Er, what's a pinv? It doesn't sound anything like a generalized_inverse. Vicki Laidler |
From: Alan G I. <ai...@am...> - 2006-07-14 13:18:24
|
On Fri, 14 Jul 2006, Sven Schreiber apparently wrote: > So maybe that's a feature request, complementing the > nansum function by a nanaverage? This is not an objection; just an observation. It has always seemed to me that such descriptive statistics make more sense as class attributes. In this case, something like a NanDstat class. fwiw, Alan Isaac |
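A minimal sketch of the kind of class Alan has in mind; the name NanDstat and the attribute set are illustrative only.

```python
import numpy as np

class NanDstat:
    """Descriptive statistics over the non-NaN entries of an array."""
    def __init__(self, a):
        a = np.asarray(a, dtype=float)
        valid = a[~np.isnan(a)]
        self.n = valid.size
        self.sum = valid.sum()
        self.mean = valid.mean() if self.n else np.nan
        self.std = valid.std() if self.n else np.nan
        self.min = valid.min() if self.n else np.nan
        self.max = valid.max() if self.n else np.nan

stats = NanDstat([1.0, np.nan, 3.0, 5.0])
print(stats.n, stats.mean)    # 3 3.0
```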