From: Charles R H. <cha...@gm...> - 2006-10-14 20:28:49
On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> On 13/10/06, Tim Hochberg <tim...@ie...> wrote:
> > Charles R Harris wrote:
<snip>
> > On the other hand, if error handling is set to 'raise', then a
> > FloatingPointError is issued. Is a FloatingPointWarning in order to
> > mirror the FloatingPointError? And if so, would it be appropriate to use
> > for condition number?
>
> I submitted a patch to use warnings for several functions in scipy a
> while ago, and the approach I took was to create a ScipyWarning, from
> which more specific warnings were derived (IntegrationWarning, for
> example). That was perhaps a bit short-sighted.
>
> I'd suggest a FloatingPointWarning as a base class, with
> IllConditionedMatrix as a subclass (it should include the condition
> number, but probably not the matrix itself unless it's small, as
> debugging information).

Let's pin this down a bit. Numpy seems to need its own warning classes so we can control the printing of the warnings. For the polyfit function, I think it should *always* warn by default, because it is likely to be used interactively and one can fool around with the degree of the fit. For instance, I am now scaling the input x vector so norm_inf(x) == 1, and this seems to work pretty well for lots of stuff with rcond=-1 (about 1e-16) and a warning on rank reduction. As long as the rank stays the same, things seem to work OK, up to fits of degree 21 on the test data that started this conversation.

BTW, how does one turn warnings back on? If I do

>>> warnings.simplefilter('always', mywarn)

things work fine. Following this by

>>> warnings.simplefilter('once', mywarn)

does what it's supposed to do. Once again issuing

>>> warnings.simplefilter('always', mywarn)

fails to have any effect. resetwarnings() doesn't help. Hmmm...

Chuck
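What Chuck describes is consistent with how the warnings machinery behaved in the Python of that era: once a warning fires under a 'once' filter it is recorded in the calling module's __warningregistry__, and that registry was consulted *before* the filters, so a later simplefilter('always', ...) never got a chance to match, and resetwarnings() clears only the filters, never the registries. A minimal sketch of the workaround, where MyWarn is a hypothetical stand-in for the mywarn above:

    import warnings

    class MyWarn(UserWarning):
        """Hypothetical stand-in for the 'mywarn' category above."""

    def rank_check():
        warnings.warn("rank reduced", MyWarn)

    warnings.simplefilter('once', MyWarn)
    rank_check()   # printed, and recorded in this module's __warningregistry__
    rank_check()   # suppressed, as expected

    warnings.simplefilter('always', MyWarn)
    rank_check()   # on Python 2.4/2.5: still suppressed, because the registry
                   # is checked before the filters (modern Pythons invalidate
                   # the registries whenever the filters change, so this prints)

    # Clearing the per-module registry re-enables the warning.
    globals().get('__warningregistry__', {}).clear()
    rank_check()   # printed again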
From: Jay P. <pa...@gm...> - 2006-10-14 19:28:54
> Are there any updates to the Developer Tools that you can install for 10.3.9?
> Particularly, is there one which provides gcc 4.0, which I think is the sine qua
> non for building Universal binaries.
>
> Can you build any other extension modules using distutils?
>
> If neither of the above is true, then you may need to upgrade to 10.4 to build
> Universal binaries. You might want to check the available Mac Python
> documentation and pythonmac-sig archives for more information. I have not been
> following the Universal discussion as closely as I could have (and as I'm
> currently on vacation, I'm not about to rectify that now).

Well, I just tried building PIL, and it worked just fine. So it looks like we can't generally say that it's impossible for me to build extension modules. I've posted a message to the distutils-sig, so hopefully we can resolve something there. If not, I'll take it to pythonmac-sig.

Thanks,
Jay P.
From: Robert K. <rob...@gm...> - 2006-10-14 18:12:55
Jay Parlar wrote:
>> Jay Parlar wrote:
>>> In the process of finally switching over to Python 2.5, and am trying
>>> to build numpy. Unfortunately, it dies during the build:
>>>
>>> C compiler: gcc -arch ppc -arch i386 -isysroot
>>> /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double
>>> -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3
>>>
>>> compile options:
>>> '-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
>>> -Inumpy/core/src -Inumpy/core/include
>>> -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
>>> -c'
>>> gcc: _configtest.c
>>> gcc: cannot specify -o with -c or -S and multiple compilations
>>> gcc: cannot specify -o with -c or -S and multiple compilations
>>> failure.
>>
>> This is the problem. Are you sure that you are using the correct version of gcc
>> for making Universal binaries on 10.3.9? If so, then we are not passing the
>> correct flags to it. Unfortunately, I think that the Universal stuff is going to
>> make our lives quite complicated.
>
> Well, my system is up-to-date, with only one gcc on it, so I don't
> know what else I can do. I originally missed the line saying "C
> compiler: gcc ...". It's odd, because I certainly don't have a
> /Developer/SDKs/MacOSX10.4u.sdk on my system. I wonder if that's there
> implicitly because the universal Python 2.5 I downloaded from
> python.org was built on a 10.4 system.

It probably was. I'm not sure what the deal is with building extensions with Universal Python on 10.3.9. It's possible that Universal binaries are only executable on 10.3.9, but not buildable.

Are there any updates to the Developer Tools that you can install for 10.3.9? Particularly, is there one which provides gcc 4.0, which I think is the sine qua non for building Universal binaries.

Can you build any other extension modules using distutils?

If neither of the above is true, then you may need to upgrade to 10.4 to build Universal binaries. You might want to check the available Mac Python documentation and pythonmac-sig archives for more information. I have not been following the Universal discussion as closely as I could have (and as I'm currently on vacation, I'm not about to rectify that now).

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Jay P. <pa...@gm...> - 2006-10-14 17:30:02
> Jay Parlar wrote:
> > In the process of finally switching over to Python 2.5, and am trying
> > to build numpy. Unfortunately, it dies during the build:
> >
> > C compiler: gcc -arch ppc -arch i386 -isysroot
> > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double
> > -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3
> >
> > compile options:
> > '-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
> > -Inumpy/core/src -Inumpy/core/include
> > -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
> > -c'
> > gcc: _configtest.c
> > gcc: cannot specify -o with -c or -S and multiple compilations
> > gcc: cannot specify -o with -c or -S and multiple compilations
> > failure.
>
> This is the problem. Are you sure that you are using the correct version of gcc
> for making Universal binaries on 10.3.9? If so, then we are not passing the
> correct flags to it. Unfortunately, I think that the Universal stuff is going to
> make our lives quite complicated.

Well, my system is up-to-date, with only one gcc on it, so I don't know what else I can do. I originally missed the line saying "C compiler: gcc ...". It's odd, because I certainly don't have a /Developer/SDKs/MacOSX10.4u.sdk on my system. I wonder if that's there implicitly because the universal Python 2.5 I downloaded from python.org was built on a 10.4 system.

Jay P.
From: Bill B. <wb...@gm...> - 2006-10-14 16:59:52
On 10/15/06, A. M. Archibald <per...@gm...> wrote:
> So, well, any suggestion for a name? pylab is already in use by
> matplotlib (for some reason), as is Scientific Python (and numpy,
> Numeric and numarray are obviously already confusing).

I always thought 'pylab' was an odd name, so I always do

    from matplotlib import pylab as plot

I think the matplotlib folks would be willing to change the name of pylab if it were seen as something that would be good for the community. I kind of wonder why they didn't call it 'matplot', personally.

--bb
From: Gael V. <gae...@no...> - 2006-10-14 16:54:50
On Sat, Oct 14, 2006 at 10:50:36AM -0600, Charles R Harris wrote:
> So, well, any suggestion for a name? pylab is already in use by
> matplotlib (for some reason), as is Scientific Python (and numpy,
> Numeric and numarray are obviously already confusing).
>
> supernumpy?

I think we should build upon scipy, rather than numpy, if we want to build upon an existing name. Numpy is low-level array manipulation; scipy is all the fancy math that goes above, so it makes more sense to derive a name from scipy. Besides, scipy.org is already a good rallying point for scientific computing with Python.

How about scipylab! :->

--
Gaël
From: Charles R H. <cha...@gm...> - 2006-10-14 16:50:38
On 10/14/06, A. M. Archibald <per...@gm...> wrote:
> On 14/10/06, Gael Varoquaux <gae...@no...> wrote:
> > On Sat, Oct 14, 2006 at 06:58:45AM -0600, Bill Spotz wrote:
<snip>
> I agree. Moreover, being picked for such integration work would help
> encourage people to converge on one MPI interface (for example).
> There's some discussion of such a package at NumpyProConPage on the
> wiki (and in particular at http://www.scipy.org/PyLabAwaits ). The
> "scipy superpack" (which is only for Windows, as I understand it) is a
> sort of beginning.
>
> So, well, any suggestion for a name? pylab is already in use by
> matplotlib (for some reason), as is Scientific Python (and numpy,
> Numeric and numarray are obviously already confusing).

supernumpy?

Chuck
From: A. M. A. <per...@gm...> - 2006-10-14 16:48:16
On 14/10/06, Gael Varoquaux <gae...@no...> wrote:
> On Sat, Oct 14, 2006 at 06:58:45AM -0600, Bill Spotz wrote:
> > I would like to second the notion of converging on a single MPI
> > interface. My parallel project encapsulates most of the
> > inter-processor communication within higher-level objects because the
> > lower-level communication patterns can usually be determined from
> > higher-level data structures. But still, there are times when a user
> > would like access to the lower-level MPI interface.
>
> I think we are running into the same old problem of scipy/a super-package
> of scientific tools. It is important to keep numpy/scipy modular. People
> may want to install either one without an MPI interface. However, it is
> nice that if somebody just wants "everything" without having to choose,
> he can download that "everything" from scipy.org and that it comes well
> bundled together. So all we need to do is find a name for that
> super-package, put it together with good integration, and add it to
> scipy.org.
>
> My 2 cents,
>
> Gaël

I agree. Moreover, being picked for such integration work would help encourage people to converge on one MPI interface (for example). There's some discussion of such a package at NumpyProConPage on the wiki (and in particular at http://www.scipy.org/PyLabAwaits ). The "scipy superpack" (which is only for Windows, as I understand it) is a sort of beginning.

So, well, any suggestion for a name? pylab is already in use by matplotlib (for some reason), as is Scientific Python (and numpy, Numeric and numarray are obviously already confusing).

A. M. Archibald
From: Robert K. <rob...@gm...> - 2006-10-14 16:33:04
Jay Parlar wrote:
> In the process of finally switching over to Python 2.5, and am trying
> to build numpy. Unfortunately, it dies during the build:
>
> C compiler: gcc -arch ppc -arch i386 -isysroot
> /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double
> -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3
>
> compile options:
> '-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
> -Inumpy/core/src -Inumpy/core/include
> -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
> -c'
> gcc: _configtest.c
> gcc: cannot specify -o with -c or -S and multiple compilations
> gcc: cannot specify -o with -c or -S and multiple compilations
> failure.

This is the problem. Are you sure that you are using the correct version of gcc for making Universal binaries on 10.3.9? If so, then we are not passing the correct flags to it. Unfortunately, I think that the Universal stuff is going to make our lives quite complicated.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Gael V. <gae...@no...> - 2006-10-14 13:34:25
On Sat, Oct 14, 2006 at 06:58:45AM -0600, Bill Spotz wrote:
> I would like to second the notion of converging on a single MPI
> interface. My parallel project encapsulates most of the
> inter-processor communication within higher-level objects because the
> lower-level communication patterns can usually be determined from
> higher-level data structures. But still, there are times when a user
> would like access to the lower-level MPI interface.

I think we are running into the same old problem of scipy/a super-package of scientific tools. It is important to keep numpy/scipy modular. People may want to install either one without an MPI interface. However, it is nice that if somebody just wants "everything" without having to choose, he can download that "everything" from scipy.org and that it comes well bundled together. So all we need to do is find a name for that super-package, put it together with good integration, and add it to scipy.org.

My 2 cents,

Gaël
From: Bill S. <wf...@sa...> - 2006-10-14 12:59:24
On Oct 14, 2006, at 2:58 AM, tha...@bi... wrote:
> On 10/13/06, Lisandro Dalcin <da...@gm...> wrote:
>> This post is surely OT, but I cannot imagine a better place to contact
>> people about this subject. Please, don't blame me.
>>
>> Any people here interested in NumPy/SciPy + MPI?
>
> I've been working on a Dynamic Bayesian Network (DBN) toolkit for some time
> (called Mocapy, freely available from sourceforge
> https://sourceforge.net/projects/mocapy/). The thing is almost entirely
> implemented in Python, and currently uses Numeric and pyMPI. I routinely
> train DBNs from 100,000s of observations on our 240 CPU cluster.
>
> I'm in the process of porting Mocapy to numpy. I assume pyMPI will
> also work with numpy, but I haven't tried it out yet. It would be
> great if scipy came with default MPI support, especially since
> pyMPI does not seem to be actively developed anymore.

I would like to second the notion of converging on a single MPI interface. My parallel project encapsulates most of the inter-processor communication within higher-level objects because the lower-level communication patterns can usually be determined from higher-level data structures. But still, there are times when a user would like access to the lower-level MPI interface.

I haven't tried to make my project compatible with any of the several MPI python interfaces out there, because it isn't clear to me which one is most widely used. If one were to emerge -- and even better, if the various independent projects were to then combine their efforts in an open source environment (the way Numeric and numarray are converging to NumPy) -- then this choice would be easy.

** Bill Spotz                                              **
** Sandia National Laboratories    Voice: (505)845-0170    **
** P.O. Box 5800                   Fax:   (505)284-5451    **
** Albuquerque, NM 87185-0370      Email: wf...@sa...      **
From: <tha...@bi...> - 2006-10-14 08:58:46
On 10/13/06, Lisandro Dalcin <da...@gm...> wrote:
> This post is surely OT, but I cannot imagine a better place to contact
> people about this subject. Please, don't blame me.
>
> Any people here interested in NumPy/SciPy + MPI?

I've been working on a Dynamic Bayesian Network (DBN) toolkit for some time (called Mocapy, freely available from sourceforge https://sourceforge.net/projects/mocapy/). The thing is almost entirely implemented in Python, and currently uses Numeric and pyMPI. I routinely train DBNs from 100,000s of observations on our 240 CPU cluster.

I'm in the process of porting Mocapy to numpy. I assume pyMPI will also work with numpy, but I haven't tried it out yet. It would be great if scipy came with default MPI support, especially since pyMPI does not seem to be actively developed anymore.

Cheers,

-Thomas

----
Thomas Hamelryck, Post-doctoral researcher
Bioinformatics center
Institute of Molecular Biology and Physiology
University of Copenhagen
Universitetsparken 15 - Bygning 10
DK-2100 Copenhagen Ø
Denmark
Homepage: http://www.binf.ku.dk/Protein_structure
From: eric <er...@en...> - 2006-10-13 23:03:01
Brian Granger wrote:
> Just as a data point.
>
> I have used mpi4py before and have built it on many systems ranging
> from my macbook to NERSC supercomputers. In my opinion it is
> currently the best python mpi bindings available. Lisandro has done a
> fantastic job with this. Also, Fernando and I have worked hard to make
> sure that mpi4py works with the new parallel capabilities of IPython.
>
> I would love to see mpi4py hosted in a public repository for others to
> contribute. I think this would really solidify mpi4py as a top notch
> mpi interface. But my only concern is that there might be many folks
> who want to use mpi4py who don't need scipy. I am one of those folks
> - I don't necessarily need scipy on the NERSC supercomputers, but I do
> need mpi4py. Because of this, I would probably still recommend
> keeping mpi4py as a separate project. Is there any chance it could be
> hosted at mpi4py.scipy.org?

Fine from our side...

eric

> I strongly encourage others to try it out. Installation is easy.
>
> Brian Granger
>
> On 10/13/06, Lisandro Dalcin <da...@gm...> wrote:
>> This post is surely OT, but I cannot imagine a better place to contact
>> people about this subject. Please, don't blame me.
>>
>> Any people here interested in NumPy/SciPy + MPI? For some time now,
>> I've been developing mpi4py (first release at SF) and I am really near
>> to releasing a new version.
>>
>> This package exposes an API almost identical to the MPI-2 C++ bindings.
>> Almost all MPI-1 and MPI-2 features (even one-sided communications and
>> parallel I/O) are fully supported for any object exposing the
>> single-segment buffer interface, and only some of them for
>> communication of general Python objects (with the help of
>> pickle/marshal).
>>
>> The possibility of constructing any user-defined MPI datatypes, as well
>> as virtual topologies (especially cartesian), can be really nice for
>> anyone interested in parallel multidimensional array processing.
>>
>> Before the next release, I would like to wait for any comments. You can
>> contact me via private mail to get a tarball with the latest developments,
>> or we can have some discussion here, if many of you consider this a
>> good idea. In the long term, I would like to see mpi4py integrated as
>> a subpackage of SciPy.
>>
>> Regards,
>>
>> --
>> Lisandro Dalcín
>> ---------------
>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>> Tel/Fax: +54-(0)342-451.1594
From: Brian G. <ell...@gm...> - 2006-10-13 22:49:44
Just as a data point.

I have used mpi4py before and have built it on many systems ranging from my macbook to NERSC supercomputers. In my opinion it is currently the best python mpi bindings available. Lisandro has done a fantastic job with this. Also, Fernando and I have worked hard to make sure that mpi4py works with the new parallel capabilities of IPython.

I would love to see mpi4py hosted in a public repository for others to contribute. I think this would really solidify mpi4py as a top notch mpi interface. But my only concern is that there might be many folks who want to use mpi4py who don't need scipy. I am one of those folks - I don't necessarily need scipy on the NERSC supercomputers, but I do need mpi4py. Because of this, I would probably still recommend keeping mpi4py as a separate project. Is there any chance it could be hosted at mpi4py.scipy.org?

I strongly encourage others to try it out. Installation is easy.

Brian Granger

On 10/13/06, Lisandro Dalcin <da...@gm...> wrote:
> This post is surely OT, but I cannot imagine a better place to contact
> people about this subject. Please, don't blame me.
>
> Any people here interested in NumPy/SciPy + MPI? For some time now,
> I've been developing mpi4py (first release at SF) and I am really near
> to releasing a new version.
>
> This package exposes an API almost identical to the MPI-2 C++ bindings.
> Almost all MPI-1 and MPI-2 features (even one-sided communications and
> parallel I/O) are fully supported for any object exposing the
> single-segment buffer interface, and only some of them for
> communication of general Python objects (with the help of
> pickle/marshal).
>
> The possibility of constructing any user-defined MPI datatypes, as well
> as virtual topologies (especially cartesian), can be really nice for
> anyone interested in parallel multidimensional array processing.
>
> Before the next release, I would like to wait for any comments. You can
> contact me via private mail to get a tarball with the latest developments,
> or we can have some discussion here, if many of you consider this a
> good idea. In the long term, I would like to see mpi4py integrated as
> a subpackage of SciPy.
>
> Regards,
>
> --
> Lisandro Dalcín
> ---------------
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> Tel/Fax: +54-(0)342-451.1594
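For the curious, a minimal sketch of what using mpi4py looks like. This is based on mpi4py's present-day documented API, which may differ in detail from the 2006 release; run it under something like mpiexec -n 2 python demo.py:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Lowercase send/recv communicate general Python objects via pickle,
    # as described in Lisandro's announcement.
    if rank == 0:
        comm.send({'greeting': 'hello from rank 0'}, dest=1, tag=1)
    elif rank == 1:
        print(comm.recv(source=0, tag=1))

    # Uppercase Send/Recv use the single-segment buffer interface directly
    # (no pickling), which is what makes numpy arrays cheap to communicate.
    if rank == 0:
        a = np.arange(10, dtype='d')
        comm.Send([a, MPI.DOUBLE], dest=1, tag=2)
    elif rank == 1:
        a = np.empty(10, dtype='d')
        comm.Recv([a, MPI.DOUBLE], source=0, tag=2)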
From: A. M. A. <per...@gm...> - 2006-10-13 22:09:29
On 13/10/06, Tim Hochberg <tim...@ie...> wrote:
> Charles R Harris wrote:
> > That sounds good, but how to do it? Should I raise an exception?
>
> Use the warnings framework:
>
> >>> import warnings
> >>> warnings.warn("condition number is BAD")
> __main__:1: UserWarning: condition number is BAD
>
> The user can turn warnings on or off or turn them into exceptions based
> on a variety of criteria. Look for the warnings filter in the docs.
>
> Which brings up a question: do we want to have a FloatingPointWarning or
> some such? Currently, if you set the error handling to warn using
> seterr, a runtime warning is issued:
>
> >>> np.seterr(all='warn')
> {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}
> >>> np.arange(1) / 0
> __main__:1: RuntimeWarning: divide by zero encountered in divide
>
> On the other hand, if error handling is set to 'raise', then a
> FloatingPointError is issued. Is a FloatingPointWarning in order to
> mirror the FloatingPointError? And if so, would it be appropriate to use
> for condition number?

I submitted a patch to use warnings for several functions in scipy a while ago, and the approach I took was to create a ScipyWarning, from which more specific warnings were derived (IntegrationWarning, for example). That was perhaps a bit short-sighted.

I'd suggest a FloatingPointWarning as a base class, with IllConditionedMatrix as a subclass (it should include the condition number, but probably not the matrix itself unless it's small, as debugging information).

The warnings module is frustratingly non-reentrant, unfortunately, which makes writing tests very awkward.

> > I would also have to modify lstsq so it returns the degree of the fit
> > which would mess up the current interface.
>
> One approach would be to write lstsqcond (or a better name) that returns
> both the fit and the condition number. lstsq could then be just a
> wrapper over that which dumped the condition number. IIRC, the
> condition number is available, but we're not returning it.

This is a very good idea. scipy.integrate.quad returns a pair (result, error_estimate) and every time I use it I trip over that. (Perhaps if I were a fine upstanding numerical analyst I would be checking the error estimate every time, but it is a pain.) Another option would be a "full_output" optional argument.

A. M. Archibald
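A sketch of the hierarchy proposed here, using only the standard-library warnings machinery on modern Python; the class names are Archibald's suggestions, not an actual numpy or scipy API:

    import warnings

    class FloatingPointWarning(UserWarning):
        """Base class for floating-point related warnings."""

    class IllConditionedMatrix(FloatingPointWarning):
        """Carries the condition number, but not the matrix itself,
        as debugging information."""
        def __init__(self, cond):
            self.cond = cond
            super().__init__("ill-conditioned matrix: cond = %g" % cond)

    # Filtering on the base class catches the subclass too, because the
    # warnings machinery matches categories with issubclass().
    warnings.simplefilter('always', FloatingPointWarning)
    warnings.warn(IllConditionedMatrix(1e17))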
From: Tim H. <tim...@ie...> - 2006-10-13 21:12:17
Charles R Harris wrote:
> On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> > On 12/10/06, Charles R Harris <cha...@gm...> wrote:
> > > Hi all,
> > >
> > > I note that polyfit looks like it should work for single and double, real
> > > and complex, polynomials. On the other hand, the default rcond needs to
> > > depend on the underlying precision. On the other, other hand, all the svd
> > > computations are done with dgelsd or zgelsd, i.e., double precision. Even
> > > so, problems can arise from inherent errors of the input data if it is
> > > single precision to start with. I also think the final degree of the fit
> > > should be available somewhere if wanted, as it is an indication of what is
> > > going on. Sooo, any suggestions as to what to do? My initial impulse would
> > > be to set rcond=1e-6 for single, 1e-14 for double, make rcond a keyword,
> > > and kick the can down the road on returning the actual degree of the fit.
> >
> > I'd also be inclined to output a warning (which the user can ignore,
> > read or trap as necessary) if the condition number is too bad or they
> > supplied an rcond that is too small for the precision of their data.
>
> That sounds good, but how to do it? Should I raise an exception?

Use the warnings framework:

>>> import warnings
>>> warnings.warn("condition number is BAD")
__main__:1: UserWarning: condition number is BAD

The user can turn warnings on or off or turn them into exceptions based on a variety of criteria. Look for the warnings filter in the docs.

Which brings up a question: do we want to have a FloatingPointWarning or some such? Currently, if you set the error handling to warn using seterr, a runtime warning is issued:

>>> np.seterr(all='warn')
{'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}
>>> np.arange(1) / 0
__main__:1: RuntimeWarning: divide by zero encountered in divide

On the other hand, if error handling is set to 'raise', then a FloatingPointError is issued. Is a FloatingPointWarning in order to mirror the FloatingPointError? And if so, would it be appropriate to use for condition number?

> I would also have to modify lstsq so it returns the degree of the fit
> which would mess up the current interface.

One approach would be to write lstsqcond (or a better name) that returns both the fit and the condition number. lstsq could then be just a wrapper over that which dumped the condition number. IIRC, the condition number is available, but we're not returning it.

-tim
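A sketch of the "lstsqcond" idea under today's numpy, where lstsq already returns the singular values, so the condition number is just a ratio; the name lstsq_cond is hypothetical:

    import numpy as np

    def lstsq_cond(a, b, rcond=None):
        """Hypothetical wrapper: least-squares solution plus the
        2-norm condition number of the design matrix."""
        x, resids, rank, s = np.linalg.lstsq(a, b, rcond=rcond)
        return x, s[0] / s[-1]    # largest over smallest singular value

    # Example: fit a line y = 2x + 1 through exact points.
    xs = np.linspace(0.0, 1.0, 20)
    a = np.vander(xs, 2)          # design matrix: columns [x, 1]
    coef, cond = lstsq_cond(a, 2.0 * xs + 1.0)
    print(coef, cond)             # coef ~ [2., 1.], cond is modest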
From: Greg W. <gre...@gm...> - 2006-10-13 21:09:49
On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> In any case, all this is outside the purview of numpy (as is polyfit,
> frankly).

Great. Thanks for the ideas of other algorithms/functions to look at.

Greg

--
Linux. Because rebooting is for adding hardware.
From: A. M. A. <per...@gm...> - 2006-10-13 21:07:58
On 13/10/06, Charles R Harris <cha...@gm...> wrote:
> On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> > On 12/10/06, Charles R Harris <cha...@gm...> wrote:
> > > Hi all,
> > >
> > > I note that polyfit looks like it should work for single and double, real
> > > and complex, polynomials. On the other hand, the default rcond needs to
> > > depend on the underlying precision. On the other, other hand, all the svd
> > > computations are done with dgelsd or zgelsd, i.e., double precision. Even
> > > so, problems can arise from inherent errors of the input data if it is
> > > single precision to start with. I also think the final degree of the fit
> > > should be available somewhere if wanted, as it is an indication of what is
> > > going on. Sooo, any suggestions as to what to do? My initial impulse would
> > > be to set rcond=1e-6 for single, 1e-14 for double, make rcond a keyword,
> > > and kick the can down the road on returning the actual degree of the fit.
> >
> > I'd also be inclined to output a warning (which the user can ignore,
> > read or trap as necessary) if the condition number is too bad or they
> > supplied an rcond that is too small for the precision of their data.
>
> That sounds good, but how to do it? Should I raise an exception? I would
> also have to modify lstsq so it returns the degree of the fit which would
> mess up the current interface.

Python's warnings module is a decent solution for providing this information.

Goodness-of-fit worries me less than ill-conditioning - users are going to expect the curve to deviate from their function (and an easy, reliable way to get goodness of fit is sqrt(sum(abs(f(xs)-polynomial(xs))**2)); this is certain to take into account any roundoff errors introduced anywhere). But they may well have no idea they should be worried about the condition number of some matrix they've never heard of.

A. M. Archibald
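A sketch of that goodness-of-fit check with polyfit, using exactly the formula above; the data here is made up for illustration:

    import numpy as np

    xs = np.linspace(-1.0, 1.0, 50)
    ys = np.cos(3.0 * xs)                   # stand-in for the user's data

    coeffs = np.polyfit(xs, ys, 7)
    residual = ys - np.polyval(coeffs, xs)

    # sqrt(sum(abs(f(xs) - polynomial(xs))**2))
    gof = np.sqrt(np.sum(np.abs(residual) ** 2))
    print(gof)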
From: A. M. A. <per...@gm...> - 2006-10-13 21:03:49
On 13/10/06, Greg Willden <gre...@gm...> wrote:
> What about including multiple algorithms each returning a figure of fit?
> Then I could try two or three different algorithms and then use the one
> that works best for my data.

The basic problem is that X^n is rarely a good basis for the functions on [a,b]. So if you want it to return the coefficients of a polynomial, you're basically stuck. If you *don't* want that, there's a whole bestiary of other options.

If you're just looking to put a smooth curve through a bunch of data points (perhaps with known uncertainties), scipy.interpolate includes some nice spline fitting functions.

If you're looking for polynomials, orthogonal polynomials may serve as a better basis for your interval; you can look in scipy.special for them (and leastsq will fit them to your points). Extracting their coefficients is possible but will bring you back to numerical instabilities.

In any case, all this is outside the purview of numpy (as is polyfit, frankly).

A. M. Archibald
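A sketch of the spline route via scipy.interpolate's FITPACK wrappers; the data and smoothing value are made up for illustration:

    import numpy as np
    from scipy.interpolate import splrep, splev

    x = np.linspace(0.0, 10.0, 30)
    y = np.sin(x) + 0.1 * np.random.randn(30)   # noisy samples

    tck = splrep(x, y, s=0.5)        # s controls the smoothing tradeoff
    xnew = np.linspace(0.0, 10.0, 200)
    ynew = splev(xnew, tck)          # smooth curve through the data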
From: Charles R H. <cha...@gm...> - 2006-10-13 20:58:41
On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> On 12/10/06, Charles R Harris <cha...@gm...> wrote:
> > Hi all,
> >
> > I note that polyfit looks like it should work for single and double, real
> > and complex, polynomials. On the other hand, the default rcond needs to
> > depend on the underlying precision. On the other, other hand, all the svd
> > computations are done with dgelsd or zgelsd, i.e., double precision. Even
> > so, problems can arise from inherent errors of the input data if it is
> > single precision to start with. I also think the final degree of the fit
> > should be available somewhere if wanted, as it is an indication of what is
> > going on. Sooo, any suggestions as to what to do? My initial impulse would
> > be to set rcond=1e-6 for single, 1e-14 for double, make rcond a keyword,
> > and kick the can down the road on returning the actual degree of the fit.
>
> I'd also be inclined to output a warning (which the user can ignore,
> read or trap as necessary) if the condition number is too bad or they
> supplied an rcond that is too small for the precision of their data.

That sounds good, but how to do it? Should I raise an exception? I would also have to modify lstsq so it returns the degree of the fit, which would mess up the current interface.

Chuck
From: Tim H. <tim...@ie...> - 2006-10-13 20:46:09
Greg Willden wrote:
> On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> > At this point you might as well use a polynomial class that can
> > accommodate a variety of bases for the space of polynomials - X^n,
> > (X-a)^n, orthogonal polynomials (translated and scaled as needed),
> > what have you.
> >
> > I think I vote for polyfit that is no more clever than it has to be
> > but which warns the user when the fit is bad.
>
> What about including multiple algorithms, each returning a figure of fit?
> Then I could try two or three different algorithms and then use the
> one that works best for my data.

A simple, "stupid" curve-fitting algorithm may be appropriate for numpy, but once you're getting into multiple algorithms it's time to move it to a package in scipy, IMO (and it would be good to find someone who cares, and knows, about curve fitting to adopt it).

-tim
From: Greg W. <gre...@gm...> - 2006-10-13 20:36:53
On 10/13/06, A. M. Archibald <per...@gm...> wrote:
> At this point you might as well use a polynomial class that can
> accommodate a variety of bases for the space of polynomials - X^n,
> (X-a)^n, orthogonal polynomials (translated and scaled as needed),
> what have you.
>
> I think I vote for polyfit that is no more clever than it has to be
> but which warns the user when the fit is bad.

What about including multiple algorithms, each returning a figure of fit? Then I could try two or three different algorithms and then use the one that works best for my data.

Greg

--
Linux. Because rebooting is for adding hardware.
From: Lisandro D. <da...@gm...> - 2006-10-13 20:20:51
On 10/13/06, Francesc Altet <fa...@ca...> wrote:
> Is it possible to test a numpy version directly from the source
> directory without having to install it?

I usually do:

$ python setup.py build
$ python setup.py install --home=/tmp
$ export PYTHONPATH=/tmp/lib/python

and then

$ python -c 'import numpy; numpy.test()'

and finally, if all was right,

$ su -c 'python setup.py install'

or

$ python setup.py install --home=$HOME

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594