From: Charles R H. <cha...@gm...> - 2006-10-16 22:51:54
|
On 10/16/06, Travis Oliphant <oli...@ie...> wrote: > > Charles R Harris wrote: > > It seems to me that since the behaviour when copy=0 is to make a copy > > only if necessary, it should find it necessary and make the downcast. > > After all, array(a, dtype=single, copy=1) does just that without > > complaint. Some common code in linalg could be replaced if array and > > asarray would do that operation. > > > Well, the fact that it makes the copy without raising an error is > different behavior from Numeric and should be considered an unintended > change. > > We definitely should make this consistent for copy=0 or copy=1. The > only question, is do we raise the error in both cases or allow the > conversion in both cases. > > The long-standing behavior is to raise the error on possible-loss > conversion and so my opinion is that we should continue with that > behavior. Tradition is tradition. My own preference would be to perform the downcast on the grounds that if I care enough to specify a data type, then that is what I want. But I'm late to this particular party ;) Chuck |
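For readers following the thread, a small sketch of the conversion being debated (this shows modern NumPy behavior; in the Numeric tradition Travis describes, the possibly lossy conversion raised an error instead):

```python
import numpy as np

# The downcast under discussion: float64 -> float32 loses precision.
a = np.arange(4, dtype=np.float64)

# asarray with an explicit dtype performs the downcast silently,
# copying only because the requested dtype differs from a's.
b = np.asarray(a, dtype=np.float32)

print(b.dtype)  # float32
```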
From: Lisandro D. <da...@gm...> - 2006-10-16 20:52:28
|
On 10/16/06, Charles R Harris <cha...@gm...> wrote: > Travis, > I note that > >>> a = arange(6).reshape(2,3,order='F') > >>> a > array([[0, 1, 2], > [3, 4, 5]]) > > Shouldn't that be 3x2? Or maybe [[0,2,4],[1,3,5]]? Reshape is making a copy, Are you sure? octave:1> a = 0:5 a = 0 1 2 3 4 5 octave:2> reshape(a, 2, 3) ans = 0 2 4 1 3 5 -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 |
From: Charles R H. <cha...@gm...> - 2006-10-16 20:20:11
|
Travis, I note that >>> a = arange(6).reshape(2,3,order='F') >>> a array([[0, 1, 2], [3, 4, 5]]) Shouldn't that be 3x2? Or maybe [[0,2,4],[1,3,5]]? Reshape is making a copy, but flat, flatten, and tostring all show the elements in 'C' order. I ask because I wonder if changing the order can be used to prepare arrays for input into the LAPACK routines. Chuck |
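As an aside for the archive: in current NumPy releases the `order='F'` reshape fills column-first, matching the Octave column-major convention referenced in the thread. A minimal check:

```python
import numpy as np

# order='F' reads the elements in Fortran (column-major) order,
# so columns are filled first -- the same answer Octave's reshape gives.
a = np.arange(6).reshape((2, 3), order='F')
print(a)
# [[0 2 4]
#  [1 3 5]]
```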
From: Travis O. <oli...@ie...> - 2006-10-16 18:32:15
|
Charles R Harris wrote: > It seems to me that since the behaviour when copy=0 is to make a copy > only if necessary, it should find it necessary and make the downcast. > After all, array(a, dtype=single, copy=1) does just that without > complaint. Some common code in linalg could be replaced if array and > asarray would do that operation. > Well, the fact that it makes the copy without raising an error is different behavior from Numeric and should be considered an unintended change. We definitely should make this consistent for copy=0 or copy=1. The only question, is do we raise the error in both cases or allow the conversion in both cases. The long-standing behavior is to raise the error on possible-loss conversion and so my opinion is that we should continue with that behavior. -Travis |
From: Charles R H. <cha...@gm...> - 2006-10-16 18:25:17
|
It seems to me that since the behaviour when copy=0 is to make a copy only if necessary, it should find it necessary and make the downcast. After all, array(a, dtype=single, copy=1) does just that without complaint. Some common code in linalg could be replaced if array and asarray would do that operation. Chuck |
From: Tim H. <tim...@ie...> - 2006-10-16 17:46:11
|
Ivan Vilata i Balaguer wrote: > Tim Hochberg wrote: > > >> Ivan Vilata i Balaguer wrote: >> >>> for i, x in enumerate(args): >>> if isConstant(x): >>> args[i] = ConstantNode(x) >>> elif not isinstance(x, ExpressionNode): >>> raise TypeError( "unsupported object type: %s", >>> type(x) ) >>> >>> Do you think this is OK, or am I wrong or missing something? >>> >> That looks right. I'm not entirely happy with this fix; I believe that >> returning NotImplemented was intentional with the idea that we might >> someday want to use the NotImplemented machinery. That said, I can't >> think of a better fix and I don't see us using the NotImplemented >> machinery anytime soon, so I imagine it should go in. >> > > Maybe placing a comment there should be enough for future reference. By > the way, I noticed that I slipped a bad string interpolation there, it > should be ``"unsupported object type: %s" % type(x)``. > I went ahead and committed this more or less as is for the time being. I have an idea about how to treat stuff like list and tuple literals. The basic idea is to attempt to convert them to arrays using asarray, then to turn them into pseudo variables. It's all still a little vague, but it should be feasible to have evaluate('a < [1,2,3]') work as it does in numpy. It sounds like work though, so I'm putting it off for now... -tim |
From: Ivan V. i B. <iv...@ca...> - 2006-10-16 16:37:31
|
Tim Hochberg wrote: > Ivan Vilata i Balaguer wrote: >> >> for i, x in enumerate(args): >> if isConstant(x): >> args[i] = ConstantNode(x) >> elif not isinstance(x, ExpressionNode): >> raise TypeError( "unsupported object type: %s", >> type(x) ) >> >> Do you think this is OK, or am I wrong or missing something? > That looks right. I'm not entirely happy with this fix; I believe that returning NotImplemented was intentional with the idea that we might someday want to use the NotImplemented machinery. That said, I can't think of a better fix and I don't see us using the NotImplemented machinery anytime soon, so I imagine it should go in. Maybe placing a comment there should be enough for future reference. By the way, I noticed that I slipped a bad string interpolation there, it should be ``"unsupported object type: %s" % type(x)``. Cheers, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" |
From: Tim H. <tim...@ie...> - 2006-10-16 16:26:48
|
Ivan Vilata i Balaguer wrote: > Looking at the ``ophelper()`` decorator in the ``expressions`` module of > Numexpr, I see the following code is used to check/replace arguments of > operators:: > > for i, x in enumerate(args): > if isConstant(x): > args[i] = x = ConstantNode(x) > if not isinstance(x, ExpressionNode): > return NotImplemented > > This looks like operations on unknown kinds of arguments use the default > Python behaviour. However, this yields some strange results: > > >>>> import numpy >>>> a = numpy.array([1,1,1]) >>>> import numexpr >>>> numexpr.evaluate('a < [0,0,0]') >>>> > array(True, dtype=bool) > > This is odd because the comparison was not element-wise, but object-wise > (between a VariableNode and an -unsupported- python list). Since > Numexpr only understands scalar constants, variables and some functions > (the last two are expression nodes), it seems more correct to me to > simply forbid unsupported objects to avoid surprises, so the previous > code may look like this:: > > for i, x in enumerate(args): > if isConstant(x): > args[i] = ConstantNode(x) > elif not isinstance(x, ExpressionNode): > raise TypeError( "unsupported object type: %s", > type(x) ) > > Do you think this is OK, or am I wrong or missing something? That looks right. I'm not entirely happy with this fix; I believe that returning NotImplemented was intentional with the idea that we might someday want to use the NotImplemented machinery. That said, I can't think of a better fix and I don't see us using the NotImplemented machinery anytime soon, so I imagine it should go in. -tim |
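The fix under discussion can be sketched as a standalone function; the `ConstantNode`/`ExpressionNode` classes below are minimal stand-ins for Numexpr's real node classes, included only so the sketch runs:

```python
class ExpressionNode:
    """Minimal stand-in for Numexpr's expression-node base class."""
    pass

class ConstantNode(ExpressionNode):
    """Minimal stand-in wrapping a scalar constant."""
    def __init__(self, value):
        self.value = value

def is_constant(x):
    # Numexpr's isConstant() covers more types; scalars suffice here.
    return isinstance(x, (int, float, complex))

def check_args(args):
    """Wrap scalar constants; reject anything that is not an
    expression node, instead of falling back to Python's default
    (object-wise) comparison behaviour."""
    args = list(args)
    for i, x in enumerate(args):
        if is_constant(x):
            args[i] = ConstantNode(x)
        elif not isinstance(x, ExpressionNode):
            raise TypeError("unsupported object type: %s" % type(x))
    return args
```

With this, a plain list argument such as `[0, 0, 0]` raises `TypeError` up front rather than producing a surprising object-wise comparison.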
From: Andrea G. <and...@gm...> - 2006-10-16 15:44:31
|
Hello NG, I am using the latest Numpy release 1.0rc2 which includes F2PY. I have switched to Python 2.5 so this is the only alternative I have (IIUC). With Python 2.4, I was able to build a very simple fortran extension without problems. My extension contains 4 subroutines that scan a file and do simple operations. Now, attempting to run the second subroutine as: dprops, dwgnames, dunits = readsmspec.readsmspec(smspec, dimens) prompts a ValueError from Python: File "D:\MyProjects\Carolina\MainPanel.py", line 894, in ReadSMSPECFile dprops, dwgnames, dunits = readsmspec.readsmspec(smspec, dimens) ValueError: data type must provide an itemsize ?!? I have never seen anything like that and googling around didn't give me any answer. The function accepts two inputs: - smspec: a filename, maximum 1000 characters long - dimens: an integer and returns 3 arrays of chars, each of them with size (8, dimens). Does anyone know what I may be doing wrong? Thank you very much for every pointer. Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://xoomer.virgilio.it/infinity77/ |
From: Ivan V. i B. <iv...@ca...> - 2006-10-16 10:48:18
|
Looking at the ``ophelper()`` decorator in the ``expressions`` module of Numexpr, I see the following code is used to check/replace arguments of operators:: for i, x in enumerate(args): if isConstant(x): args[i] = x = ConstantNode(x) if not isinstance(x, ExpressionNode): return NotImplemented This looks like operations on unknown kinds of arguments use the default Python behaviour. However, this yields some strange results: >>> import numpy >>> a = numpy.array([1,1,1]) >>> import numexpr >>> numexpr.evaluate('a < [0,0,0]') array(True, dtype=bool) This is odd because the comparison was not element-wise, but object-wise (between a VariableNode and an -unsupported- python list). Since Numexpr only understands scalar constants, variables and some functions (the last two are expression nodes), it seems more correct to me to simply forbid unsupported objects to avoid surprises, so the previous code may look like this:: for i, x in enumerate(args): if isConstant(x): args[i] = ConstantNode(x) elif not isinstance(x, ExpressionNode): raise TypeError( "unsupported object type: %s", type(x) ) Do you think this is OK, or am I wrong or missing something? Cheers, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" |
From: Pierre GM <pgm...@ma...> - 2006-10-16 06:35:05
|
Folks, I just posted on the scipy/developers zone wiki (http://projects.scipy.org/scipy/numpy/wiki/MaskedArray) a reimplementation of the masked_array module, motivated by some problems I ran into while subclassing MaskedArray. The main differences with the initial numpy.core.ma package are that MaskedArray is now a subclass of ndarray and that the _data section can now be any subclass of ndarray (well, it should work in most cases; some tweaking might be required here and there). Apart from a couple of issues listed below, the behavior of the new MaskedArray class reproduces the old one. It is quite likely to be significantly slower, though: I was more interested in a clear organization than in performance, so I tended to use wrappers liberally. I'm sure we can improve that rather easily. The new module, along with a test suite and some utilities, are available here: http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/maskedarray.py http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/masked_testutils.py http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/test_maskedarray.py Please note that it's still a work in progress (even if it seems to work quite OK when I use it). Suggestions, comments, improvements and general feedback are more than welcome ! |
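A minimal sketch (not Pierre's code) of the ndarray-subclassing pattern the rework is built on: ``__array_finalize__`` lets extra state, such as a mask, survive views and slices. The ``Tagged`` class and its ``tag`` attribute are hypothetical, for illustration only:

```python
import numpy as np

class Tagged(np.ndarray):
    """ndarray subclass carrying one extra attribute through views."""

    def __new__(cls, data, tag=None):
        obj = np.asarray(data).view(cls)
        obj.tag = tag
        return obj

    def __array_finalize__(self, obj):
        # Called whenever a view or slice is created; propagate the
        # extra attribute from the parent array (None for fresh arrays).
        self.tag = getattr(obj, 'tag', None)

a = Tagged([1, 2, 3], tag='masked-like')
b = a[1:]   # slicing preserves both the subclass and the tag
```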
From: Lisandro D. <da...@gm...> - 2006-10-15 19:05:07
|
For all of you interested in mpi4py, I've uploaded a tarball to PyPI http://www.python.org/pypi/mpi4py/0.4.0rc1 Make sure you have mpicc in your path and then - If you have setuptools, an try $ easy_intall mpi4py - Download the tarball and next $ python setup.py install [--home=3D$HOME] You should look at 'test/mpi-rev-v1'. I've wrote some tests from the MPI book, chapters 2 and 3 (BibTex reference in README.txt). You should try to run te unittest scripts under '/tests/unittest' Paul: I've tested mpi4py with MPICH1/2, OpenMPI and LAM. Please let me know any issue your have under AIX with your MPI implementation in case you use a vendor MPI. --=20 Lisandro Dalc=EDn --------------- Centro Internacional de M=E9todos Computacionales en Ingenier=EDa (CIMEC) Instituto de Desarrollo Tecnol=F3gico para la Industria Qu=EDmica (INTEC) Consejo Nacional de Investigaciones Cient=EDficas y T=E9cnicas (CONICET) PTLC - G=FCemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 |
From: Charles R H. <cha...@gm...> - 2006-10-15 18:35:55
|
On 10/14/06, A. M. Archibald <per...@gm...> wrote: > > On 14/10/06, Charles R Harris <cha...@gm...> wrote: <snip> I don't get the impression that the warnings module is much tested; I > had similar headaches. Turns out to be a rather simple bug (feature?) in warnings, where it short-circuits the filters by doing a quick look in a registry, which I assume means that the warning has been previously disabled. Commenting out one line makes warnings behave as expected. I also note that clearing the filter doesn't clear the warn-once registry, which is probably some sort of bug also waiting to happen. Looking at the warning code leaves me feeling somewhat disappointed: few comments and little discussion of intended behaviour. Chuck |
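The filter behavior Chuck is poking at can be seen from pure Python: with the 'always' action, the registry shortcut that 'once'/'default' use to swallow repeats is bypassed and every occurrence is reported:

```python
import warnings

def noisy():
    # Same warning text and category, issued repeatedly from one spot.
    warnings.warn("possible precision loss", UserWarning)

with warnings.catch_warnings(record=True) as caught:
    # 'always' skips the registry lookup, so repeats are not swallowed.
    warnings.simplefilter("always")
    noisy()
    noisy()

print(len(caught))  # 2
```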
From: Alan G I. <ai...@am...> - 2006-10-15 15:24:01
|
On Sat, 14 Oct 2006, Nao Suzuki apparently wrote: > Today, I've been trying to take advantage of the filter > packages but none of them work so far. Here is the > attempt to reproduce what's shown in the manual. >>>> from numarray import * >>>> a=[0,0,0,1,0,0,0] >>>> correlate1d(a, [1, 1, 1]) As I understand it (not a numarray user), numarray is superseded by numpy, and this function is just called 'correlate' in numpy. hth, Alan Isaac |
From: David C. <da...@ar...> - 2006-10-15 12:52:08
|
Greg Willden wrote: > Hi, > I just tried to checkout numpy and scipy to another machine and got > the following errors: > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn: REPORT request failed on '/svn/numpy/!svn/vcc/default' > svn: REPORT of '/svn/numpy/!svn/vcc/default': 400 Bad Request > (http://svn.scipy.org) > > $ svn co http://svn.scipy.org/svn/scipy/trunk > <http://svn.scipy.org/svn/scipy/trunk> scipy > svn: REPORT request failed on '/svn/scipy/!svn/vcc/default' > svn: REPORT of '/svn/scipy/!svn/vcc/default': 400 Bad Request > (http://svn.scipy.org) > > Any ideas? > Greg It may be because you are behind a proxy: http://subversion.tigris.org/faq.html#proxy My advice would be to use a ssh tunnel, if your network configuration enables it; even changing the proxy configuration as explained in the SVN FAQ gives me some strange errors sometimes, David |
From: <rba...@fr...> - 2006-10-15 08:18:00
|
On Sunday 15 October 2006 08:28, Nao Suzuki wrote: > Hello there, > > My name is Nao Suzuki and I'm a postdoc at Berkeley Lab. > > I've been using numarray for about a year, and I found it > very useful and I appreciate your excellent work! > > Today, I've been trying to take advantage of the filter > packages but none of them work so far. Here is the > attempt to reproduce what's shown in the manual. > > >>> from numarray import * > >>> a=[0,0,0,1,0,0,0] > >>> correlate1d(a, [1, 1, 1]) > > Traceback (most recent call last): > File "<stdin>", line 1, in ? > NameError: name 'correlate1d' is not defined It may depend on which version of numarray, but try something like import numarray.nd_image as NDI NDI.correlate1d(a, [1, 1, 1]) it works with numarray 1.5 > > Could someone tell me what I'm missing? Do I need to include > something special? Please let me know if you can give me some > hints. I've been stuck here for a long time now. > > Thank you very much! > > --Nao -- René Bastian http://www.musiques-rb.org http://pythoneon.musiques-rb.org |
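What correlate1d computes can be sketched in plain Python (zero padding at the ends for simplicity; the real nd_image routine offers several boundary modes, which make no difference for this zero-edged input):

```python
def correlate1d(a, weights):
    """Pure-Python sketch of a centered 1-D correlation: slide the
    weights window along a, summing the products at each position."""
    half = len(weights) // 2
    padded = [0] * half + list(a) + [0] * half
    return [sum(w * padded[i + j] for j, w in enumerate(weights))
            for i in range(len(a))]

print(correlate1d([0, 0, 0, 1, 0, 0, 0], [1, 1, 1]))
# [0, 0, 1, 1, 1, 0, 0]
```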
From: Nao S. <NS...@lb...> - 2006-10-15 06:28:10
|
Hello there, My name is Nao Suzuki and I'm a postdoc at Berkeley Lab. I've been using numarray for about a year, and I found it very useful and I appreciate your excellent work! Today, I've been trying to take advantage of the filter packages but none of them work so far. Here is my attempt to reproduce what's shown in the manual. >>> from numarray import * >>> a=[0,0,0,1,0,0,0] >>> correlate1d(a, [1, 1, 1]) Traceback (most recent call last): File "<stdin>", line 1, in ? NameError: name 'correlate1d' is not defined Could someone tell me what I'm missing? Do I need to include something special? Please let me know if you can give me some hints. I've been stuck here for a long time now. Thank you very much! --Nao |
From: Charles R H. <cha...@gm...> - 2006-10-15 00:44:28
|
On 10/14/06, Tim Hochberg <tim...@ie...> wrote: > > Charles R Harris wrote: > > > > > > On 10/14/06, *A. M. Archibald* <per...@gm... > > <mailto:per...@gm...>> wrote: > [SNIP] > > > > > > Hmmm, I wonder if we have a dictionary of precisions indexed by dtype > > somewhere? > > Here's some code I stole from somewhere for computing EPS. It would be easy > enough to generate the dictionary you are looking for at startup using > this. I can't recall the pedigree of this code though, so caveat emptor: > > def bits_of_precision(dtype): > one = np.array([1], dtype) > i = 0 > while not np.alltrue(one + (one / 2.**i) == one): > i += 1 > return i - 1 Yep, that works. There is a version in my zeros module in scipy also, it's just that ISTR seeing similar code in numpy somewhere and I was hoping Travis would tell me where ;) Grep is my friend, I guess, ... ah, here it is In [140]: np.MachAr(np.single).eps Out[140]: 1.1920928955078125e-07 In [141]: np.MachAr(np.double).eps Out[141]: 2.2204460492503131e-16 Chuck |
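The same epsilon hunt for the plain Python float (a C double) needs no numpy at all, and can be checked against sys.float_info:

```python
import sys

def bits_of_precision():
    # Largest i such that 1 + 2**-i is still distinguishable from 1.
    i = 0
    while 1.0 + 1.0 / 2.0 ** i != 1.0:
        i += 1
    return i - 1

bits = bits_of_precision()
eps = 2.0 ** -bits
print(bits, eps)  # 52 2.220446049250313e-16 for an IEEE-754 double
```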
From: Tim H. <tim...@ie...> - 2006-10-15 00:26:55
|
Charles R Harris wrote: > > > On 10/14/06, *A. M. Archibald* <per...@gm... > <mailto:per...@gm...>> wrote: [SNIP] > > > Hmmm, I wonder if we have a dictionary of precisions indexed by dtype > somewhere? Here's some code I stole from somewhere for computing EPS. It would be easy enough to generate the dictionary you are looking for at startup using this. I can't recall the pedigree of this code though, so caveat emptor: def bits_of_precision(dtype): one = np.array([1], dtype) i = 0 while not np.alltrue(one + (one / 2.**i) == one): i += 1 return i - 1 EPSS = 1.0 / 2**bits_of_precision(float) * 10 # XXX safety factor It's sorta old and translated from numpy too, so it could probably be rewritten in better style. -tim > [SNIP] |
From: Lisandro D. <da...@gm...> - 2006-10-15 00:24:59
|
On 10/14/06, Bill Spotz <wf...@sa...> wrote: > I would like to second the notion of converging on a single MPI > interface. My parallel project encapsulates most of the inter- > processor communication within higher-level objects because the lower- > level communication patterns can usually be determined from higher- > level data structures. But still, there are times when a user would > like access to the lower-level MPI interface. Using mpi4py, you have access to almost all MPI internals directly from the Python side, with an API really similar to the MPI-2 C++ bindings. This is a feature I've not seen in other Python bindings for MPI. I think this is really important for developers and people learning MPI; you do not need to learn a new API. -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 |
From: Lisandro D. <da...@gm...> - 2006-10-15 00:17:09
|
On 10/13/06, eric <er...@en...> wrote: > Brian Granger wrote: > > keeping mpi4py as a separate project. > > Is there any chance it could be > > hosted at mpi4py.scipy.org? > > > Fine from our side... > > eric > Can anybody help setting up mpi4py.scipy.org? I really do not have experience with SVN. What should I do? -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 |
From: Charles R H. <cha...@gm...> - 2006-10-14 23:25:50
|
On 10/14/06, A. M. Archibald <per...@gm...> wrote: > > On 14/10/06, Charles R Harris <cha...@gm...> wrote: > > > > > > On 10/13/06, A. M. Archibald <per...@gm...> wrote: > > > On 13/10/06, Tim Hochberg <tim...@ie...> wrote: > > > > Charles R Harris wrote: > > <snip> Numerical Recipes (http://www.nrbook.com/a/bookcpdf/c15-4.pdf ) recommend setting rcond to the number of data points times machine epsilon (which of course is different for single/double). We should definitely warn the user if any singular value is below s[0]*rcond (as that means that there is effectively a useless basis function, up to roundoff). Well, that would work. On the other hand, it seems overly pessimistic from my experiments here. What seems to be a better guide is rank reduction. For instance, I can do a perfectly decent fit to Greg's data with single precision, degree 10, and ~1000 data points with rcond = ~5e-7 (effectively single-precision machine epsilon). Degree 11 blows up entirely even though it loses rank. It is also true that rcond=1e-3 fails for degree 11 even though rank is strongly reduced. What looks to be taking place is roundoff error in evaluating the polynomial in single precision in the presence of higher order terms that still belong to the reduced basis functions, and rank reduction is a good indicator of this. Bear in mind that I am now normalizing x by dividing it by its largest element, which futzes with the condition number. The condition number of the unscaled fit doesn't bear thinking about. Hmmm, I wonder if we have a dictionary of precisions indexed by dtype somewhere? > I don't get the impression that the warnings module is much tested; I had similar headaches. I see some folks bitchin' at it when I google. There are remarkably few hits, though. For the moment I am going with the smaller rcond numbers and raising an error on rank reduction. I suspect something similar should be done for pinv. Chuck |
From: A. M. A. <per...@gm...> - 2006-10-14 21:52:52
|
On 14/10/06, Charles R Harris <cha...@gm...> wrote: > > > On 10/13/06, A. M. Archibald <per...@gm...> wrote: > > On 13/10/06, Tim Hochberg <tim...@ie...> wrote: > > > Charles R Harris wrote: > > <snip> > > > > On the other hand if error handling is set to 'raise', then a > > > FloatingPointError is issued. Is a FloatingPointWarning in order to > > > mirror the FloatingPointError? And if so, would it be appropriate to use > > > for condition number? > > > > I submitted a patchto use warnings for several functions in scipy a > > while ago, and the approach I took was to create a ScipyWarning, from > > which more specific warnings were derived (IntegrationWarning, for > > example). That was perhaps a bit short-sighted. > > > > I'd suggest a FloatingPointWarning as a base class, with > > IllConditionedMatrix as a subclass (it should include the condition > > number, but probably not the matrix itself unless it's small, as > > debugging information). > > > > Let's pin this down a bit. Numpy seems to need its own warning classes so we > can control the printing of the warnings. For the polyfit function I think > it should *always* warn by default because it is likely to be used > interactively and one can fool around with the degree of the fit. For > instance, I am now scaling the the input x vector so norm_inf(x) == 1 and > this seems to work pretty well for lots of stuff with rcond=-1 (about 1e-16) > and a warning on rank reduction. As long as the rank stays the same things > seem to work ok, up to fits of degree 21 on the test data that started this > conversation. Numerical Recipes (http://www.nrbook.com/a/bookcpdf/c15-4.pdf ) recommend setting rcond to the number of data points times machine epsilon (which of course is different for single/double). We should definitely warn the user if any singular value is below s[0]*rcond (as that means that there is effectively a useless basis function, up to roundoff). 
I'm not sure how to control the default warnings setting ("once" vs. "always"); it's definitely not possible using the standard API to save the warnings state and restore it later. One might be able to push such a change into the warnings module by including it in a ContextManager. ipython should probably reset all the "show once" warnings every time it shows an interactive prompt. I suppose more accurately, it should do that only for warnings the user hasn't given instructions about. That way you'll get a warning about bad polynomial fits every time you run a command that contains one, but if your function runs thousands of fits you don't drown in warnings. > BTW, how does one turn warnings back on? If I do a > > >>> warnings.simplefilter('always', mywarn) > > things work fine. Following this by > > >>> warnings.simplefilter('once', mywarn) > > does what is supposed to do. Once again issuing > > >>> warnings.simplefilter('always', mywarn) > > fails to have any effect. Resetwarnings doesn't help. Hmmm... I don't get the impression that the warnings module is much tested; I had similar headaches. A. M. Archibald |
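The payoff of the x-scaling Chuck describes is easy to quantify. A hedged sketch (the data here is illustrative, not the thread's): compare the condition number of the Vandermonde design matrix before and after dividing x by its largest element:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 50)
deg = 8

# Raw design matrix: powers up to x**8 = 1e8 leave the columns at
# wildly different scales, inflating the condition number.
cond_raw = np.linalg.cond(np.vander(x, deg + 1))

# Normalizing x to [0, 1] before building the matrix tames the scales.
cond_scaled = np.linalg.cond(np.vander(x / x.max(), deg + 1))

print(cond_raw / cond_scaled)  # improvement by many orders of magnitude
```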