From: Travis O. <oli...@ie...> - 2002-03-06 04:43:50
I have not heard any feedback on my proposal to add a final object to the extended slice syntax in current Numeric to allow unambiguous index-array and mask-array access. As a modification to the proposal, suppose we just check whether the last argument (of at least two) is a 0-d array of type signed byte (currently this is illegal and will raise an error). This number would be a flag indicating how to interpret the previous objects. Of course these numbers would be hidden from the user, who would write:

    a[index_array, _I] = <values>
    b = a[index_array, _I]

or

    a[mask_array, _M] = <values>
    b = a[mask_array, _M]

where _M is a 0-d signed byte array indicating that the mask_array should be interpreted as a mask, while _I is a 0-d signed byte array indicating that the index_array should be interpreted as integers into the flattened version of a.

Other indexing schemes could be envisioned as well:

    a[a1, a2, a3, _X]

could be the cross product of the integer arrays a1, a2, and a3, for example, while

    a[a1, a2, a3, _Z]

could select elements from a by "zipping" the sequences a1, a2, and a3 together to form a list of tuples to grab from a.

Comments?
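(Aside: a minimal sketch of the two proposed behaviors, emulated in today's Numeric. The string flags 'I' and 'M' stand in for the proposed 0-d signed byte arrays purely so the sketch runs; the function name is illustrative, not part of any released API.)

    import Numeric

    def flagged_get(a, arr, flag):
        # Emulate the proposed a[arr, flag] access on the flattened array.
        flat = Numeric.ravel(a)
        if flag == 'I':    # arr holds integer indices into the flat array
            return Numeric.take(flat, arr)
        if flag == 'M':    # arr is a mask selecting elements of a
            return Numeric.compress(Numeric.ravel(arr), flat)
        raise ValueError, "unknown indexing flag"

    a = Numeric.arange(12)
    print flagged_get(a, Numeric.array([0, 5, 11]), 'I')   # [ 0  5 11]
    print flagged_get(a, Numeric.greater(a, 9), 'M')       # [10 11]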
From: Travis O. <oli...@ie...> - 2002-03-06 04:35:18
Recently there has been discussion on the list about the awkwardness of matrix syntax when using Numeric Python. Matrix expressions can be awkward to express in Numeric, which is a negative mark on an otherwise excellent computing environment. Currently part of the problem can be solved by working with Matrix objects explicitly:

    a = Matrix.Matrix("[1 2 3; 4 5 6]")   # Notice the string.

However, most operations return arrays, which have to be recast to matrices using, at best, a one-character name with parentheses:

    M = Matrix.Matrix
    M(sin(a)) * M(cos(a)).T

The suggestion was made to add ".M" as an attribute of arrays which returns a matrix. Thus, the code above can be written:

    sin(a).M * cos(a).M.T

While some aesthetic simplicity is obtained, the big advantage is in consistency. Somebody else may decide that P = Matrix.Matrix is a better choice. But if we establish that .M always returns a matrix for arrays < 2d, then we gain consistency.

I've made this change and am ready to commit it to the Numeric tree, unless there are strong objections. I know some people do not like the proliferation of attributes, but in this case the notational convenience it affords to otherwise overly burdened syntax, and the consistency it allows Numeric in dealing with matrix equations, may be worth it. What do you think?

-Travis Oliphant
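(Aside: a runnable sketch of the before/after, assuming the Matrix class bundled with Numeric, which, per Travis Oliphant's post further down in this archive, overloads * as matrix multiplication and provides .T for transpose.)

    import Numeric, Matrix

    M = Matrix.Matrix    # the one-letter alias the proposal would retire

    a = Numeric.array([[1., 2.], [3., 4.]])
    b = M(Numeric.sin(a)) * M(Numeric.cos(a)).T   # matrix product, not elementwise
    # With the proposed attribute this would read:
    #     Numeric.sin(a).M * Numeric.cos(a).M.T
    print b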
From: <ne...@ns...> - 2002-03-06 01:50:38
Hi,

Does RandomArray have a problem? I use Python 2.0, Numpy 20.3. The simple source code is:

--------------------------------
#!/usr/bin/env python
import math
import Numeric
import RandomArray
import sys

RandomArray.seed(1234,5678)
i = 0L
while 1:
    i = i + 1
    a = RandomArray.randint(0,100)
    if a == 100:
        print 'i=', i, 'a=', a
---------------------------------

and the result is:

---------------------------------
i= 70164640 a= 100
i= 152242967 a= 100
i= 159619195 a= 100
i= 173219763 a= 100
i= 200933959 a= 100
i= 233219191 a= 100
i= 276114822 a= 100
i= 313589319 a= 100
i= 340689813 a= 100
i= 402397265 a= 100
i= 456099215 a= 100
i= 506078935 a= 100
i= 547758957 a= 100
i= 559163554 a= 100
i= 570211180 a= 100
..........
---------------------------------

RandomArray.randint(0,100) should have the range 0 <= RandomArray.randint(0,100) < 100. But the result does not: sometimes a == 100 arises. So I upgraded to Python 2.2 and Numpy 21b3, but I met the same problem. And so I changed the OS from Mandrake 8.0 to Redhat 7.2, but the same problem... I don't know what my mistake is. Please help me...

Kee-Hyoung Joo

------------------------------------------------------------------
I love Jesus Christ who is my savior. He gives me meaning of life.
In Christ, I have become a shepherd and bible teacher.
e-mail : ne...@ki...
home : http://newton.skku.ac.kr/~newton (My old home page)
------------------------------------------------------------------
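(Aside: the report is consistent with an off-by-one in how randint scales its generator, though the cause is not confirmed in this thread. A hedged workaround, assuming the only defect is the occasional leak of the endpoint itself, is to resample:)

    import RandomArray

    def safe_randint(lo, hi):
        # Resample until the value is strictly below hi.
        while 1:
            a = RandomArray.randint(lo, hi)
            if a < hi:
                return a

    RandomArray.seed(1234, 5678)
    print safe_randint(0, 100)   # always in [0, 100)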
From: Tim P. <ti...@co...> - 2002-03-06 00:50:16
[Huaiyu Zhu]
> ...
> 1. The -lieee is indeed the most direct cure.

On the specific platform tried. The libm errno rules changed between C89 and C99, and I'm afraid there's no errno behavior Python can rely on anymore. So I expect more changes will be needed in Python, regardless of how things turn out on this specific platform.

> ...
> 2. Is there a configure option to guarantee -lieee?

If anyone can answer this question, please don't answer it here: it will just get lost. Attach it to Huaiyu's bug report instead:

    <http://sf.net/tracker/?func=detail&aid=525705&group_id=5470&atid=105470>

Thanks.

> ...
> 3. errno 34 (or -lieee) may not be the sole reason.
>
> On a RedHat 6.1 upgraded to 7.1 (both gcc and glibc), errno 34
> is indeed raised in a C program linked without -lieee, and Python is
> indeed compiled without -lieee, but Python does not raise
> OverflowError.

I expect you're missing something. Skip posted the Python code before, and if errno gets set, Python *does* raise OverflowError:

    errno = 0;   /* Skip forgot to post this line, and it's important */
    ...
    ix = pow(iv, iw);
    ...
    if (errno != 0) {
        /* XXX could it be another type of error? */
        PyErr_SetFromErrno(PyExc_OverflowError);
        return NULL;

If you can read C, believe your eyes <wink>. What you may be missing is what an utter mess C is here. How libm behaves may depend on compiler options, linker options, global system flags, and even options set for other libraries you link with.

> ...
> 4. Is there an easier way to debug such problems?

The cause was obvious to the first person (Skip) who stepped into Python to see what the code did on a platform where it failed. It's not going to be obvious to someone who doesn't.

> 5. How is 1e200**2 handled?

It goes through exactly the same code.

> Since both 1e-200**2 and 1e200**2 produce the same errno all the time,
> but Python still raises OverflowError for 1e200**2 when linked with
> -lieee, there must be a separate mechanism at work.

You're speculating from a false base: if platform pow(x, y) sets errno to any non-zero value, Python x**y raises OverflowError. What differs is when platform pow(x, y) does not set errno. In that case, Python synthesizes errno=ERANGE if the pow() result equals +- platform HUGE_VAL.

> What is that and how can I override it?

Sorry, you can't override it.

> I know this is by design, but I think the design is dumb (to put it
> politely). I won't get into an argument here. I'll write
> up my rationale against this when I have some time.

I'm afraid a rationale won't do any good. I'm in favor of supplying full 754 compatibility myself, but:

A) Getting from here to there requires volunteers to design, implement, document, and test the code. Given the atrocious state of C's 754 story, and the subtlety of 754's requirements, this needs volunteers who are both 754 experts and platform C experts. That combination is rare.

B) Python's floating-point users will never agree it's a good thing, so such a change requires careful backward compatibility work too. This isn't likely to get done by someone who characterizes the other side as "dumb (to put it politely)" <0.9 wink>.

Note that the only significant floating-point code ever contributed to the Python core was the fpectl module, and its purpose is to *break* 754 "non-stop" exception semantics in the places Python happens to let them sneak through.

> I do remember there being several discussions in the past, but I don't
> remember any convincing argument for the current decision. Any URL
> would be greatly appreciated, beside the one pointed by Tim.

Which "current decision" do you have in mind? There is no design doc for Python's numerics, if that's what you're looking for. As the text at the URL I gave you said, much of Python's fp behavior is accidental, inherited from platform C quirks.
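(Aside: a quick way to see which regime a given build is in -- a probe script in the Python of the era, not part of Python itself; the output is platform and libm dependent, exactly as discussed above.)

    # Probe how this build reports pow() range errors.
    for expr in ('1e-200 ** 2', '1e200 ** 2'):
        try:
            print expr, '=', eval(expr)
        except OverflowError, detail:
            print expr, 'raised OverflowError:', detail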
From: Andrew P. L. <bs...@al...> - 2002-03-04 21:16:44
Ummmm, it helps if I include the URL. Sorry.

http://www.boost.org/libs/python/doc/

-a

On Mon, 4 Mar 2002, Andrew P. Lentvorski wrote:
> You might want to check out the Boost Python Library. It is peer reviewed
> and seems to get most things correct.
>
> It should make writing wrappers a lot easier.
>
> -a
From: Andrew P. L. <bs...@al...> - 2002-03-04 21:11:44
You might want to check out the Boost Python Library. It is peer reviewed and seems to get most things correct.

It should make writing wrappers a lot easier.

-a
From: Paul F D. <pa...@pf...> - 2002-03-04 19:54:38
Travis fixed the error he accidentally made when improving MLab.py. This incident points out that our test suite does not include any tests for the non-core modules. The recent discussion over the meaning of cov shows this too: I didn't even have a test that showed what the INTENDED answer is.

We need test suites for FFT, LinearAlgebra, MLab, etc. If you have the subject competence to make a test file for us, please volunteer. I'd like them to be like the ones in Test/test.py, but as a separate file, so that I can test the modules separately.
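(Aside: a minimal sketch of what such a separate test file might look like, assuming PyUnit conventions; the file name, test values, and the choice of MLab.mean as the example are illustrative only.)

    # test_mlab.py -- illustrative skeleton for a standalone MLab suite
    import unittest
    import Numeric
    import MLab

    class MLabTestCase(unittest.TestCase):
        def testMean(self):
            x = Numeric.array([1.0, 2.0, 3.0, 4.0])
            self.failUnless(abs(MLab.mean(x) - 2.5) < 1e-12)
        # A real suite would also pin down the INTENDED convention for
        # MLab.cov (N vs. N-1 normalization), per the discussion above.

    if __name__ == '__main__':
        unittest.main()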
From: Paul F D. <pa...@pf...> - 2002-03-04 16:20:56
Your change to Matrix.py has a fatal flaw:

Python 2.2 (#1, Mar 1 2002, 11:11:28)
[GCC 2.96 20000731 (Red Hat Linux 7.1 2.96-81)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import LinearAlgebra
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/home/dubois/linux/lib/python2.2/site-packages/Numeric/LinearAlgebra.py", line 10, in ?
    import MLab
  File "/home/dubois/linux/lib/python2.2/site-packages/Numeric/MLab.py", line 10, in ?
    import Matrix
  File "/home/dubois/linux/lib/python2.2/site-packages/Numeric/Matrix.py", line 8, in ?
    from LinearAlgebra import inverse
ImportError: cannot import name inverse
>>>
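(Aside: the traceback shows an import cycle, LinearAlgebra -> MLab -> Matrix -> LinearAlgebra. One standard fix, sketched here as an assumption about how Matrix.py might be patched rather than what was actually committed, is to defer the import until the name is needed.)

    # Inside Matrix.py: instead of the module-level
    #     from LinearAlgebra import inverse
    # import lazily, so importing Matrix no longer re-enters
    # a half-initialized LinearAlgebra module.
    def _inverse(a):
        import LinearAlgebra
        return LinearAlgebra.inverse(a)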
From: Tim P. <ti...@co...> - 2002-03-04 08:41:16
A lot of this speculation should have been cut short by my first msg. Yes, something changed in 2.2; follow the referenced link:

http://sf.net/tracker/?group_id=5470&atid=105470&func=detail&aid=496104

For the rest of it, it looks like the "1e-200**2 raises OverflowError" glitch is unique to platforms using glibc. What isn't clear is whether it's dependent on which version of glibc, or on whether Python is linked with -lieee, or both. Unfortunately, the C standard (neither one) isn't a lick of help here -- error reporting from C math functions is a x-platform crapshoot.

Can someone who sees this problem confirm or deny that they link with -lieee? If they see this problem and don't link with -lieee, also please try linking with -lieee and see whether the problem goes away then.

On boxes with this problem, I'm also curious what

    import math
    print math.pow(1e-200, 2.0)

does under 2.1.

One probably-relevant thing that changed between 2.1 and 2.2 is that float**int calls the platform pow(float, int) in 2.2. 2.1 did it with repeated multiplication instead, but screwed up endcases. An example under 2.1:

>>> x = -1.
>>> import sys
>>> x**(-sys.maxint-1L)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: negative number cannot be raised to a fractional power
>>>

The same thing under 2.2 returns 1.0, provided your platform pow() isn't braindead. Repeated multiplication is also less accurate than a decent-quality pow().
From: Pearu P. <pe...@ce...> - 2002-03-03 15:10:02
Hi again,

On Sun, 3 Mar 2002, Pearu Peterson wrote:
> So, I would have expected CDOUBLE_to_CDOUBLE to be
>
> static void CDOUBLE_to_CDOUBLE(double *ip, int ipstep,
>                                double *op, int opstep, int n)
> {
>     int i;
>     for (i = 0; i < n; i++, ip += ipstep, op += opstep) {
>         *op = (double)*ip;          /* copy real part */
>         *(op+1) = (double)*(ip+1);  /* copy imaginary part that always
>                                        follows the real part in memory */
>     }
> }

After some testing I found that CDOUBLE_to_CDOUBLE should be

    static void CDOUBLE_to_CDOUBLE(double *ip, int ipstep,
                                   double *op, int opstep, int n)
    {
        int i;
        for (i = 0; i < n; i++, ip += 2*ipstep, op += 2*opstep) {
            *op = (double)*ip;          /* copy real part */
            *(op+1) = (double)*(ip+1);  /* copy imaginary part that always
                                           follows the real part in memory */
        }
    }

In fact, by redefining the CDOUBLE_to_CDOUBLE, CDOUBLE_to_CFLOAT, CFLOAT_to_CDOUBLE, and CFLOAT_to_CFLOAT functions as above, I get everything to work correctly for copy_ND_array. So, what does this indicate? Is copy_ND_array a hack, or is copying Numeric complex arrays broken? I don't know. I could not produce any incorrect results when using the default Numeric functions, though. But I did not try really hard either, because I am not sure whether the above makes sense to you.

Regards,
Pearu
From: Pearu P. <pe...@ce...> - 2002-03-03 12:40:58
Hi!

I am trying to copy a 2-d complex array to another 2-d complex array in an extension module. Both arrays may be noncontiguous. But using the same routine as for real arrays (namely Travis's copy_ND_array; you can find it at the end of this message) seems not to work. After some playing around and reading docs about strides and stuff, I found that the reason might be in how Numeric (20.3) defines the CDOUBLE_to_CDOUBLE function:

    static void CDOUBLE_to_CDOUBLE(double *ip, int ipstep,
                                   double *op, int opstep, int n)
    {
        int i;
        for (i = 0; i < 2*n; i++, ip += ipstep, op += opstep) {
            *op = (double)*ip;
        }
    }

It seems not to take into account that the real and imaginary parts are always contiguous in memory, even if the array itself is not. Actually, I don't understand how this code can work (unless some magic is done in the places where it is used). I would have expected the code for CDOUBLE_to_CDOUBLE to be analogous to the corresponding functions for real data. For example, DOUBLE_to_DOUBLE is defined as

    static void DOUBLE_to_DOUBLE(double *ip, int ipstep,
                                 double *op, int opstep, int n)
    {
        int i;
        for (i = 0; i < n; i++, ip += ipstep, op += opstep) {
            *op = (double)*ip;
        }
    }

So, I would have expected CDOUBLE_to_CDOUBLE to be

    static void CDOUBLE_to_CDOUBLE(double *ip, int ipstep,
                                   double *op, int opstep, int n)
    {
        int i;
        for (i = 0; i < n; i++, ip += ipstep, op += opstep) {
            *op = (double)*ip;          /* copy real part */
            *(op+1) = (double)*(ip+1);  /* copy imaginary part that always
                                           follows the real part in memory */
        }
    }

Could someone explain how Numeric can work with the current CDOUBLE_to_CDOUBLE? I don't understand which one is broken: Numeric's CDOUBLE_to_CDOUBLE (and relative) functions, or my code. The latter may be the case, but to fix it I need some clarification on the issue. Can you help me?

Thanks,
Pearu

/************************* copy_ND_array *******************************/

#define INCREMENT(ret_ind, nd, max_ind) \
{ \
    int k; \
    k = (nd) - 1; \
    if (k < 0) (ret_ind)[0] = (max_ind)[0]; else \
    if (++(ret_ind)[k] >= (max_ind)[k]) { \
        while (k >= 0 && ((ret_ind)[k] >= (max_ind)[k]-1)) \
            (ret_ind)[k--] = 0; \
        if (k >= 0) (ret_ind)[k]++; \
        else (ret_ind)[0] = (max_ind)[0]; \
    } \
}

#define CALCINDEX(indx, nd_index, strides, ndim) \
{ \
    int i; \
    indx = 0; \
    for (i = 0; i < (ndim); i++) \
        indx += nd_index[i]*strides[i]; \
}

extern int copy_ND_array(const PyArrayObject *in, PyArrayObject *out)
{
    /* This routine copies an N-D array in to an N-D array out where both
       can be discontiguous. An appropriate (raw) cast is made on the data.

       It works by using an N-1 length vector to hold the N-1 first indices
       into the array. This counter is looped through, copying (and
       casting) the entire last dimension at a time. */

    int *nd_index, indx1;
    int indx2, last_dim;
    int instep, outstep;

    if (0 == in->nd) {
        in->descr->cast[out->descr->type_num]((void *)in->data, 1,
                                              (void *)out->data, 1, 1);
        return 0;
    }
    if (1 == in->nd) {
        in->descr->cast[out->descr->type_num]((void *)in->data, 1,
                                              (void *)out->data, 1,
                                              in->dimensions[0]);
        return 0;
    }
    nd_index = (int *)calloc(in->nd-1, sizeof(int));
    last_dim = in->nd - 1;
    instep = in->strides[last_dim] / in->descr->elsize;
    outstep = out->strides[last_dim] / out->descr->elsize;
    if (NULL == nd_index) {
        fprintf(stderr, "Could not allocate memory for index array.\n");
        return -1;
    }
    while (nd_index[0] != in->dimensions[0]) {
        CALCINDEX(indx1, nd_index, in->strides, in->nd-1);
        CALCINDEX(indx2, nd_index, out->strides, out->nd-1);
        /* Copy (with an appropriate cast) the last dimension of the array */
        (in->descr->cast[out->descr->type_num])((void *)(in->data+indx1),
                                                instep,
                                                (void *)(out->data+indx2),
                                                outstep,
                                                in->dimensions[last_dim]);
        INCREMENT(nd_index, in->nd-1, in->dimensions);
    }
    free(nd_index);
    return 0;
}
/* EOF copy_ND_array */
From: Andrew M. <an...@bu...> - 2002-03-02 18:16:24
On 2 Mar 2002, Konrad Hinsen wrote:
> Tim Peters <ti...@co...> writes:
>
> > > # Python 2.2
> > >
> > > >>> 1e-200**2
> > > Traceback (most recent call last):
> > >   File "<stdin>", line 1, in ?
> > > OverflowError: (34, 'Numerical result out of range')
> >
> > That one is surprising and definitely not intended: it suggests your
> > platform libm is setting errno to ERANGE for pow(1e-200, 2.0), or that your
> > platform C headers define INFINITY but incorrectly, or that your platform C
> > headers define HUGE_VAL but incorrectly, or that your platform C compiler
> > generates bad code, or optimizes incorrectly, for negating and/or comparing
>
> I just tested and found the same behaviour, on RedHat Linux 7.1
> running on a Pentium machine. Python 2.1, compiled and running on the
> same machine, returns 0. So does the Python 1.5.2 that comes with the
> RedHat installation. Although there might certainly be something wrong
> with the C compiler and/or header files, something has likely changed
> in Python as well in going to 2.2; the only other explanation I see
> would be a compiler optimization bug that didn't have an effect with
> earlier Python releases.

Other examples... FreeBSD 4.4:

Python 2.1.1 (#1, Sep 13 2001, 18:12:15)
[GCC 2.95.3 20010315 (release) [FreeBSD]] on freebsd4
Type "copyright", "credits" or "license" for more information.
>>> 1e-200**2
0.0
>>> 1e200**2
Inf

Python 2.3a0 (#1, Mar 1 2002, 00:00:52)
[GCC 2.95.3 20010315 (release) [FreeBSD]] on freebsd4
Type "help", "copyright", "credits" or "license" for more information.
>>> 1e-200**2
0.0
>>> 1e200**2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: (34, 'Result too large')

Both builds were built with "./configure --with-fpectl", although the optimisation settings for the 2.3a0 (CVS of yesterday) build were tweaked (2.1.1: "-g -O3"; 2.3a0: "-s -m486 -Os"). My 2.2 OS/2 EMX build (which uses gcc 2.8.1 -O2) produces exactly the same result as 2.3a0 on FreeBSD.

--
Andrew I MacIntyre "These thoughts are mine alone..."
E-mail: an...@bu... | Snail: PO Box 370
        an...@pc... |        Belconnen ACT 2616
Web: http://www.andymac.org/ |        Australia
From: Paul F D. <pa...@pf...> - 2002-03-02 15:55:22
I also see the underflow problem on my Linux box (2.4.2-2). This is certainly untenable. However, I am able to catch OverflowError in both cases. I had a user complain about this just yesterday, so I think it is a new behavior in Python 2.2, which I was just rolling out.

A small Fortran test problem did not exhibit the underflow bug, and caught the overflow bug at COMPILE TIME (!).

There are two states for IEEE underflow: one in which the hardware sets the result to zero, and one in which the hardware signals the OS, and you can tell the OS to set it to zero. There is no standard for the interface to this facility that I am aware of. (Usually I have had to figure out how to make sure the underflow was handled in hardware, because the sheer cost of letting it turn into a system call was prohibitive.) I speculate that on machines where the OS call is the default, Python 2.2 is catching the signal when it should let it go by. I have not looked at this lately, so something may have changed.

You can use the kinds package that comes with Numeric to test for maximum and minimum exponents. kinds.default_float_kind.MAX_10_EXP (equal to 308 on my Linux box, for example) tells you how big an exponent a floating-point number can have. MIN_10_EXP (-307 for me) is also there.

Workaround for your convergence test: instead of testing x**2, you might test log10(x) against a constant, or against some expression involving kinds.default_float_kind.MIN_10_EXP.

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Tim Peters
Sent: Friday, March 01, 2002 9:43 PM
To: Huaiyu Zhu; num...@li...
Cc: pyt...@py...
Subject: [Numpy-discussion] RE: Python 2.2 seriously crippled for numerical computation?

[Huaiyu Zhu]
> There appears to be a serious bug in Python 2.2 that severely limits
> its usefulness for numerical computation:
>
> # Python 1.5.2 - 2.1
>
> >>> 1e200**2
> inf

A platform-dependent accident, there.

> >>> 1e-200**2
> 0.0
>
> # Python 2.2
>
> >>> 1e-200**2
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: (34, 'Numerical result out of range')

That one is surprising and definitely not intended: it suggests your platform libm is setting errno to ERANGE for pow(1e-200, 2.0), or that your platform C headers define INFINITY but incorrectly, or that your platform C headers define HUGE_VAL but incorrectly, or that your platform C compiler generates bad code, or optimizes incorrectly, for negating and/or comparing against its definition of HUGE_VAL or INFINITY. Python intends silent underflow to 0 in this case, and I haven't heard of underflows raising OverflowError before. Please file a bug report with full details about which operating system, Python version, compiler and C libraries you're using (then it's going to take a wizard with access to all that stuff to trace into it and determine the true cause).

> >>> 1e200**2
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: (34, 'Numerical result out of range')

That one is intended; see

http://sf.net/tracker/?group_id=5470&atid=105470&func=detail&aid=496104

for discussion.

> This produces the following serious effects: after hours of numerical
> computation, just as the error is converging to zero, the whole thing
> suddenly unravels.

It depends on how you write your code, of course.

> Note that try/except is completely useless for this purpose.

Ditto. If your platform C lets you get away with it, you may still be able to get an infinity out of 1e200 * 1e200.

> I hope this is unintended behavior

Half intended, half both unintended and never before reported.

> and that there is an easy fix.

Sorry, "no" to either.
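(Aside: a short sketch of Paul's suggested workaround, assuming the kinds package imports as shown; the particular threshold expression is illustrative only.)

    import math
    import kinds

    def converged(x):
        # Compare exponents instead of squaring, so the test itself can
        # never underflow: treat x as converged once log10(|x|) is well
        # below half the smallest representable decimal exponent.
        if x == 0.0:
            return 1
        return math.log10(abs(x)) < kinds.default_float_kind.MIN_10_EXP / 2.0

    print converged(1e-200)   # 1 (true), without ever computing x**2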
From: Tim P. <ti...@co...> - 2002-03-02 05:42:56
[Huaiyu Zhu]
> There appears to be a serious bug in Python 2.2 that severely limits its
> usefulness for numerical computation:
>
> # Python 1.5.2 - 2.1
>
> >>> 1e200**2
> inf

A platform-dependent accident, there.

> >>> 1e-200**2
> 0.0
>
> # Python 2.2
>
> >>> 1e-200**2
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: (34, 'Numerical result out of range')

That one is surprising and definitely not intended: it suggests your platform libm is setting errno to ERANGE for pow(1e-200, 2.0), or that your platform C headers define INFINITY but incorrectly, or that your platform C headers define HUGE_VAL but incorrectly, or that your platform C compiler generates bad code, or optimizes incorrectly, for negating and/or comparing against its definition of HUGE_VAL or INFINITY. Python intends silent underflow to 0 in this case, and I haven't heard of underflows raising OverflowError before. Please file a bug report with full details about which operating system, Python version, compiler and C libraries you're using (then it's going to take a wizard with access to all that stuff to trace into it and determine the true cause).

> >>> 1e200**2
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: (34, 'Numerical result out of range')

That one is intended; see

http://sf.net/tracker/?group_id=5470&atid=105470&func=detail&aid=496104

for discussion.

> This produces the following serious effects: after hours of numerical
> computation, just as the error is converging to zero, the whole thing
> suddenly unravels.

It depends on how you write your code, of course.

> Note that try/except is completely useless for this purpose.

Ditto. If your platform C lets you get away with it, you may still be able to get an infinity out of 1e200 * 1e200.

> I hope this is unintended behavior

Half intended, half both unintended and never before reported.

> and that there is an easy fix.

Sorry, "no" to either.
From: Huaiyu Z. <hua...@ya...> - 2002-03-02 05:08:32
There appears to be a serious bug in Python 2.2 that severely limits its usefulness for numerical computation:

# Python 1.5.2 - 2.1

>>> 1e200**2
inf
>>> 1e-200**2
0.0

# Python 2.2

>>> 1e-200**2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: (34, 'Numerical result out of range')
>>> 1e200**2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: (34, 'Numerical result out of range')

This produces the following serious effect: after hours of numerical computation, just as the error is converging to zero, the whole thing suddenly unravels. Note that try/except is completely useless for this purpose. I hope this is unintended behavior and that there is an easy fix. Have any of you experienced this?

Huaiyu
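(Aside: a pragmatic dodge consistent with the replies in this thread -- spell the square as a product, since plain float multiplication underflows silently to 0.0 where the platform pow() sets errno. Whether this suffices is platform dependent, and note that Paul Dubois reports above that the OverflowError is in fact catchable on his build.)

    x = 1e-200
    try:
        y = x ** 2       # raises OverflowError on the affected 2.2 builds
    except OverflowError:
        y = x * x        # float multiply: silent underflow to 0.0
    print y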
From: Warren F. <fo...@SL...> - 2002-03-02 01:52:55
On Thu, 28 Feb 2002, Tim Hochberg wrote:
> > not going to do you any good). The above C' * C actually creates,
> > AFAIK, _3_ versions of C, 2 of them transposed (prior to 20.3;
>
> I think you're a little off track here. The transpose operation doesn't
> normally make a copy, it just creates a new object that points to the same
> data, but with different stride values. So the transpose shouldn't be slow
> or take up more space.

Numeric.transpose quickly returns an object which takes up little space. But in many cases, when you actually use the object returned, a contiguous copy gets created. Glancing over the 21.0b3 sources, it looks like this might not happen as often as it used to, but there are still plenty of calls to PyArray_ContiguousFromObject in there, so transposition is not always as cheap as it might seem. Especially if you say

    Aprime = Numeric.transpose(A)

and then use Aprime several times, you could end up repeatedly creating and discarding temporary transposed copies.

Warren Focke
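(Aside: a small demonstration of the distinction Warren draws, assuming plain Numeric; iscontiguous() shows whether a consumer that needs contiguous data will have to copy.)

    import Numeric

    A = Numeric.ones((1000, 1000), 'd')
    At = Numeric.transpose(A)    # cheap: new header, same data buffer
    print A.iscontiguous()       # 1
    print At.iscontiguous()      # 0 -- any routine that runs At through
                                 # PyArray_ContiguousFromObject will
                                 # silently create (and discard) a copy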
From: <R.M...@ex...> - 2002-03-01 09:41:43
Hi,

On 28 Feb 2002, Travis Oliphant wrote:
> On 28 Feb 2002, A.Schmolck wrote:
>> Two essential matrix operations (matrix-multiplication and
>> transposition (which is what I am mainly using) are both
>> considerably
>>
>> a) less efficient and
>> b) less notationally elegant
>
> You are not alone in your concerns. The developers of SciPy are
> quite concerned about speed, hence the required linking to ATLAS.
> The question of notational elegance is stickier because we just
> can't add new operators.
> The solution I see is to use other classes.

At the moment, I agree this is probably the best solution, although it would be nice if core Python were able to add operators :)

>> The following Matlab fragment
>> M * (C' * C) * V' * u
>
> This becomes (using SciPy, which defines Mat = Matrix.Matrix and
> could later redefine it to use the ATLAS libraries for matrix
> multiplication):
>
> C, V, u, M = apply(Mat, (C, V, u, M))
> M * (C.H * C) * V.H * M

Yes, much better.

> not bad.. and with a Mat class that uses the ATLAS blas (not a
> very hard thing to do now.), this could be made as fast as
> MATLAB.
> Perhaps, as a start, we could look at how you make the current
> Numeric use blas if it is installed to do dot on real and complex
> arrays (I know you can get rid of lapack_lite and use your own
> lapack) but, the dot function is defined in multiarray and would
> have to be modified to use the BLAS instead of its own homegrown
> algorithm.

This is precisely what Alex and I have done. Please see the patch to Numeric and timings at

http://www.dcs.ex.ac.uk/~aschmolc/Numeric/

It's not beautiful, but it is about 40 times faster on 1000 by 1000 matrix multiplies. I'll attempt to provide a similar patch for numarray over the next week or so.

Many thanks for your comments.

Richard.
From: Todd M. <jm...@st...> - 2002-03-01 00:00:35
Hi, I'm Todd Miller and I work at STSCI on Numarray.

Two people have reported problems with compiling Numarray-0.11 or 0.2 with GCC on SPARC. There are two problems:

1. Compiling the _ufuncmodule.c using gcc-2.95 on a SPARC (with the default switches) uses tons of virtual memory and typically fails.

   a. This can be avoided by adding the compilation flags

          EXTRA_COMPILE_ARGS=["-O0", "-Wno-return-type"]

      to your setup.py when compiling *numarray*.

   b. Alternately, you can wait for numarray-0.3, which will partition the ufuncmodule into smaller compilation units. We suspect these will avoid the problem naturally and permit the use of optimization.

   c. Lastly, if you have Sun cc, you might want to try using it instead of gcc. This is what we do at STSCI. You need to recompile Python itself if you want to do this and your Python was already compiled with gcc.

2. Python compiled with gcc generates misaligned storage within buffer objects. Numarray-0.2 is dependent on the problematic variant of the buffer object, so if you want to use Float64 or Complex128 on a SPARC you may experience core dumps.

   a. I have a non-portable patch which worked for me with gcc-2.95 on SPARC. I can e-mail this to anyone interested. Apply this patch and recompile *python*.

   b. You might be able to fix this with gcc compilation switches for Python: try -munaligned-doubles and recompile *python*.

   c. Numarray-0.3 will address this issue by providing its own minimal memory object which features correctly aligned storage. This solution will not require recompiling Python, but won't be available until numarray-0.3.

   d. Python compiled with Sun cc using the default switches doesn't manifest this bug. If you have Sun cc, you may want to recompile *python* using that.

In general, I think the "better part of valor" is probably to wait 3 weeks for numarray-0.3, when both issues should be addressed. If you want to try numarray-0.2 now with GCC on a SPARC, I hope some of these ideas work for you.

Todd
--
Todd Miller jm...@st...
STSCI / SSG (410) 338 4576
From: Travis O. <oli...@ee...> - 2002-02-28 22:31:14
On 28 Feb 2002, A.Schmolck wrote:
> Two essential matrix operations (matrix-multiplication and transposition
> (which is what I am mainly using) are both considerably
>
> a) less efficient and
> b) less notationally elegant

You are not alone in your concerns. The developers of SciPy are quite concerned about speed, hence the required linking to ATLAS. As Pearu mentioned, all of the BLAS will be available (much of it already is). This will enable very efficient algorithms.

The question of notational elegance is stickier because we just can't add new operators. The solution I see is to use other classes. Right now, the Numeric array is an array of numbers (it is not a vector or a matrix), and that is why it has the operations it does. The Matrix class (delivered with Numeric) creates a Matrix object that uses the array of numbers of Numeric arrays. It overloads the * operator and defines .T and .H for transpose and Hermitian transpose respectively. This requires explicitly making your objects matrices (not a bad thing in my book, as not all 2-D arrays fit perfectly in a matrix algebra).

> The following Matlab fragment
>
> M * (C' * C) * V' * u

This becomes (using SciPy, which defines Mat = Matrix.Matrix and could later redefine it to use the ATLAS libraries for matrix multiplication):

    C, V, u, M = apply(Mat, (C, V, u, M))
    M * (C.H * C) * V.H * M

not bad.. and with a Mat class that uses the ATLAS BLAS (not a very hard thing to do now), this could be made as fast as MATLAB.

Perhaps, as a start, we could look at how to make the current Numeric use the BLAS, if it is installed, to do dot on real and complex arrays (I know you can get rid of lapack_lite and use your own LAPACK), but the dot function is defined in multiarray and would have to be modified to use the BLAS instead of its own homegrown algorithm.

-Travis
From: Krishnaswami, N. <ne...@cs...> - 2002-02-28 21:48:47
a.s...@gm... [mailto:a.s...@gm...] wrote:
>
> Numeric is an impressively powerful and in many respects easy and
> comfortable to use package (e.g. its sophisticated slicing
> operations, not to mention the power and elegance of the underlying
> python language) and one would hope that it can one day replace Matlab
> (which is both expensive and a nightmare as a programming language) as
> a standard platform for numerical calculations.

I'm in much the same boat, only with Gauss as the language I want to replace.

> There is however a problem that, for the use to which I want
> to put Numeric, runs deeper and provides me with quite a headache:
>
> Two essential matrix operations (matrix-multiplication and
> transposition (which is what I am mainly using) are both considerably
>
> a) less efficient and
> b) less notationally elegant
>
> under Numeric than under Matlab.

These are my two problems as well. I can live with the clumsy function-call interface to the matrix ops, but the loss of efficiency is a real killer for me. In my code, Gauss is about 8-10x faster than Numpy, which is a killer speed loss. (And Gauss is modestly slower than C, though I don't care about this because the Gauss is fast enough.)

Right now, I have a data-mining program that I prototyped in Numpy and am now rewriting in C. Because Numpy isn't fast enough, I have wasted close to a week on this rewrite. This sounds bitter, but it's not meant to: I have to deploy on VMS, and after we had gotten Numpy working on OpenVMS, I really hoped that the Alpha would be fast enough that I could just use the Python prototype.

--
Neel Krishnaswami ne...@cs...
From: Pearu P. <pe...@ce...> - 2002-02-28 21:27:24
Hi,

On 28 Feb 2002, A.Schmolck wrote:
> So far, I've thought of the following possible solutions:
>
> 0. Do nothing:
>    Just live with the awkward syntax.

Let me add a subsolution here:

0.1 Wait for scipy to mature (or better yet, help it to do that).

SciPy already provides wrappers to both the Fortran and C LAPACK and BLAS routines, though currently they are under revision. With the new wrappers to these routines you can optimize your code fragments as flexibly as if using them from C or Fortran. In principle, one could mix Fortran and C routines (i.e. the corresponding wrappers) so that one avoids all unnecessary transpositions. All matrix operations can be performed in situ if so desired.

Regards,
Pearu
From: Tim H. <tim...@ie...> - 2002-02-28 21:03:16
Hi Alexander,

[SNIP]

> Two essential matrix operations (matrix-multiplication and transposition
> (which is what I am mainly using) are both considerably
>
> a) less efficient and
> b) less notationally elegant

[Interesting stuff about notation and efficiency]

> Or, even worse if one doesn't want to pollute the namespace:
>
> Numeric.dot(Numeric.dot(Numeric.dot(Numeric.M,
>     Numeric.dot(Numeric.transpose(C), C)), Numeric.transpose(v)), u)

I compromise and use np.dot, etc. myself, but that's not really relevant to the issue at hand.

[More snippage]

> 2. Numeric performs unnecessary transpose operations (prior to 20.3, I
> think, more about this later). The transpose operation is really damaging
> with big matrices, because it creates a complete copy, rather than trying
> to do something lazy (if your memory is already almost half filled up with
> (matrix) C, then creating a (in principle superfluous) transposed copy is
> not going to do you any good). The above C' * C actually creates, AFAIK,
> _3_ versions of C, 2 of them transposed (prior to 20.3;

I think you're a little off track here. The transpose operation doesn't normally make a copy, it just creates a new object that points to the same data, but with different stride values. So the transpose shouldn't be slow or take up more space. Numarray may well make a copy on transpose, I haven't looked into that, but I assume that at this point you are still talking about the old Numeric, from the look of the code you posted.

> dot(a,b)
>
> translates into
>
> innerproduct(a, swapaxes(b, -1, -2))
>
> In newer versions of Numeric, this is replaced by
>
> multiarray.matrixproduct(a, b)
>
> which has the considerable advantage that it doesn't create an unnecessary
> copy, and the considerable disadvantage that it seems to be a factor of 3
> or so slower than the old (already not blazingly fast) version for large
> Matrix x Matrix multiplication (see timing results [1]).

Like I said, I don't think either of these should be making an extra copy unless it's happening inside multiarray.innerproduct or multiarray.matrixproduct. I haven't looked at the code for those in a _long_ time, and then only glancingly, so I have no idea about that.

[Faster! with Atlas]

Sounds very cool.

> As I said,
>
> dot(dot(dot(M, dot(transpose(C), C)), transpose(v)), u)
>
> is pretty obscure compared to
>
> M * (C' * C) * V' * u

Of the options that don't require new operators, I'm somewhat fond of defining __call__ to be matrix multiply. If you toss in the .t notation that you mention below, you get:

    (M)( (C.t)(C) ) (V.t)(u)

Not perfect, but not too bad either. Note that I've tossed in some extra parentheses to make the above look better. It could actually be written:

    M( C.t(C) )(V.t)(u)

But I think that's more confusing, as it looks too much like a function call. (Although there is some mathematical precedent for viewing matrix multiplication as a function.)

I'm a little iffy on the '.t' notation, as it could get out of hand. Personally I could use conjugate as much as transpose, and it's a similar operation -- would we also add '.c'? And possibly '.s' and '.h' for skew and Hermitian matrices? That might be a little much. The __call__ idea was not particularly popular last time, but I figured I'd toss it out again as an easy-to-implement possibility.

-tim
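(Aside: a runnable sketch of the __call__ idea as a user-level wrapper, not a change to Numeric itself; the CallMat name and the '.t' spelling are hypothetical.)

    import Numeric

    class CallMat:
        # Hypothetical wrapper: A(B) means the matrix product A*B.
        def __init__(self, data):
            self.a = Numeric.asarray(data)
        def __call__(self, other):
            if isinstance(other, CallMat):
                other = other.a
            return CallMat(Numeric.dot(self.a, other))
        def __getattr__(self, name):
            if name == 't':      # the '.t' transpose notation from above
                return CallMat(Numeric.transpose(self.a))
            raise AttributeError, name

    C = CallMat([[1., 2.], [3., 4.]])
    u = CallMat([[1.], [1.]])
    print (C.t)(C)(u).a          # C' * C * u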
From: Charles G W. <cg...@al...> - 2002-02-28 20:24:15
A.Schmolck writes:
> Two essential matrix operations (matrix-multiplication and transposition
> (which is what I am mainly using) are both considerably
>
> a) less efficient and
> b) less notationally elegant

Your comments about efficiency are well taken. I have (in a previous life) done work on efficient (in terms of virtual memory access / paging behavior) transposes of large arrays (divide and conquer). Anyhow, if there were support for the operations A*B' (and A'*B) at the C level, you wouldn't ever need to actually have a copy of the transposed array in memory -- you would just exchange the roles of "i" and "j" in the computation...

> 3. Wrap: create a DotMatrix class that overloads '*' to be dot and maybe
> self.t to return the transpose -- this also means that all the numerical
> libraries I frequently use need to be wrapped.

I guess you haven't yet stumbled across the Matrix.py that comes with Numeric -- it overrides "*" to be the dot product. Unfortunately I don't see a really easy way to simplify the transpose operator -- at the very least you could do

    T = Numeric.transpose

and then you're just writing T(A) instead of the long-winded version. Interestingly, the "~" operator is available, but it calls the function "__invert__". I guess it would be too weird to have ~A denote the transpose? Right now you get an error -- one could set things up so that ~A was the matrix inverse of A, but we already have the A**-1 notation (among others) for that...
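(Aside: purely to illustrate the "~" musing above -- a tiny UserArray subclass in which ~A yields the transpose. This was floated in the thread, not adopted anywhere; the TArray name is hypothetical.)

    import Numeric
    from UserArray import UserArray

    class TArray(UserArray):
        def __invert__(self):
            # Hijack "~" as transpose, as mused above.
            return TArray(Numeric.transpose(self.array))

    A = TArray([[1, 2], [3, 4]])
    print (~A).array    # [[1 3] [2 4]]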