From: Chris B. <chr...@ho...> - 2001-06-19 16:43:49
Well, this is what I get with Python 2.1, NumPy 20.0.0, Linux on a 450MHz PIII:

[cbarker@waves junk]$ python numtst.py
20.0.0
0.438
0.154
0.148
0.139

So whatever it was may have been fixed (or be a strange platform dependence). Note, you do want to be a bit careful about using time.time, as it measures real time: if you have another process hogging resources, it will not be a fair measure. You can use time.clock instead, although I'm sure it has its issues as well. This is what I get with clock:

[cbarker@waves junk]$ python numtst.py
20.0.0
0.440
0.160
0.140
0.140

There's not much going on on my machine right now, so there is little difference.

-Chris

--
Christopher Barker, Ph.D.
Chr...@ho...
http://members.home.net/barkerlohmann
Oil Spill Modeling / Water Resources Engineering / Coastal and Fluvial Hydrodynamics
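The wall-clock vs CPU-time distinction Chris raises maps onto modern Python as time.perf_counter (real time) and time.process_time (CPU time spent by this process). A minimal sketch; the bench helper and its workload are illustrative, not from the thread:

```python
import time

def bench(fn, repeat=3):
    """Run fn() `repeat` times; return best wall-clock and CPU elapsed times."""
    wall, cpu = [], []
    for _ in range(repeat):
        w0, c0 = time.perf_counter(), time.process_time()
        fn()
        wall.append(time.perf_counter() - w0)
        cpu.append(time.process_time() - c0)
    # report the minimum of each: it is the least disturbed by other processes
    return min(wall), min(cpu)

w, c = bench(lambda: sorted(range(100000)))
```

On a loaded machine the wall-clock number drifts upward while the CPU number stays close to the true cost, which is exactly the unfairness Chris describes with time.time.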
From: Tim H. <tim...@ie...> - 2001-06-19 15:54:20
Hi Berthold,

I tested your code on Win95 using Numeric 20.0.0 and got essentially the same pattern of times, so upgrading your version of Numeric is not likely to help. Curious, I checked out the code for sort and found that it just calls qsort from the C library. I suspect that the problem is related to quicksort having bad worst-case behaviour. Not being a computer scientist, I can't tell you under what situations the bad behaviour is triggered, although I know it doesn't like presorted lists.

Anyway, if you're using 1D arrays, one workaround would be to use list.sort. Python's sorting routine has, I understand, lots of gimmicks to avoid the problems that quicksort sometimes encounters. I tried it, and this:

r = (random((71400,))*7).astype(Int)
l = r.tolist()
l.sort()
p = array(l)

runs about 40 times faster than this:

r = (random((71400,))*7).astype(Int)
p = sort(r)

If you don't need to convert to and from an array, this approach is 60 times faster. Even if you're dealing with a multidimensional array, this approach (in a loop) might be significantly faster, assuming you're sorting along the long axis. It makes one wonder whether using the Python sort rather than qsort for Numeric.sort would be a profitable change. No time to investigate it right now, though.

Hope that's useful...

-tim

> We have a speed problem with Numeric.sort on large arrays with only a
> few different values. Here is my example:
> [example script and timings trimmed; they appear in full in Berthold's
> original message below]
> So the fewer distinct values the array contains, the longer the sort
> takes. Is this also the case with newer versions of Numeric (but this
> is Python 1.5.2)? Why is sorting of these arrays so slow?
>
> Thanks
> Berthold
> --
> email: ho...@Ge...
> tel. : +49 (40) 3 61 49 - 73 74
> These opinions might be mine, but never those of my employer.
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> http://lists.sourceforge.net/lists/listinfo/numpy-discussion
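Tim's list.sort workaround can be sketched with just the standard library. The stdlib array module stands in here for a Numeric 1D array, and the data mimics Berthold's many-duplicates case; the function name is hypothetical:

```python
import random
from array import array

def sort_via_list(arr):
    """Tim's workaround: route sorting through Python's list sort,
    which handles inputs with many duplicate values gracefully."""
    l = arr.tolist()
    l.sort()
    return array(arr.typecode, l)

# 71400 integers drawn from only 7 distinct values, as in the slow case
r = array('l', (random.randrange(7) for _ in range(71400)))
p = sort_via_list(r)
```

(Since Python 2.3 the built-in sort is timsort, and C libraries have mostly moved to duplicate-safe variants such as three-way-partitioning quicksort, so the pathology in the thread is largely historical.)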
From: <ho...@ge...> - 2001-06-19 14:37:51
We have a speed problem with Numeric.sort on large arrays with only a few different values. Here is my example:

-- snip --

>cat numtst.py
import Numeric
print Numeric.__version__

class timer:
    def __init__(self):
        import time
        self.start = time.time()
    def stop(self):
        import time
        print "%.3f" % (time.time() - self.start)

from RandomArray import random
from Numeric import sort, Int

r = random((71400,))
t = timer() ; p = sort(r) ; t.stop()
r = (random((71400,))*70000).astype(Int)
t = timer() ; p = sort(r) ; t.stop()
r = (random((71400,))*70).astype(Int)
t = timer() ; p = sort(r) ; t.stop()
r = (random((71400,))*7).astype(Int)
t = timer() ; p = sort(r) ; t.stop()

16:27 hoel@seeve:hoel 2>python numtst.py
17.3.0
0.185
0.148
2.053
21.668

-- snip --

So the fewer distinct values the array contains, the longer the sort takes. Is this also the case with newer versions of Numeric (but this is Python 1.5.2)? Why is sorting of these arrays so slow?

Thanks

Berthold
--
email: ho...@Ge...
tel. : +49 (40) 3 61 49 - 73 74
These opinions might be mine, but never those of my employer.
From: Paul F. D. <pa...@pf...> - 2001-06-15 23:45:42
Heavy MA users may wish to get the latest from CVS to try. It has numerous improvements, as detailed in changes.txt. I am sending out this announcement because I won't be available to work on it for three weeks; if you find it is broken, sync back to June 14 until I can fix it.
From: Paul F. D. <pa...@pf...> - 2001-06-15 23:42:46
The version of MA.py checked in today supports more of the Numeric API. If you are a heavy MA user, you may wish to check it out. Due to other commitments I will be unavailable for further work and testing on it for about three weeks. I attach the file for your convenience; to try it, drop it into the site-packages/MA directory in place of your current one. I also gave MA a new test routine, very close to the new one for Numeric.
From: Paul F. D. <pa...@pf...> - 2001-06-14 17:11:20
In CVS there is now a file Test/test.py, replacing the previous test_items and test_all.py. test.py uses PyUnit, the new testing framework in Python. Developers should be able to add tests much more easily now.
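PyUnit later became the standard library's unittest module, so the style Paul describes still reads the same today. The test case below is a hypothetical illustration of that style, not taken from Test/test.py:

```python
import unittest

class SortTest(unittest.TestCase):
    """Minimal PyUnit-style test case (hypothetical example)."""

    def test_sort_is_ordered(self):
        data = [3, 1, 2]
        self.assertEqual(sorted(data), [1, 2, 3])

# run the suite programmatically, as a test driver script would
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SortTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```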
From: Chris B. <chr...@ho...> - 2001-06-13 19:10:23
If I read your C++ right (and I may not have; I'm a C++ novice), you allocated the memory for all three arrays and then performed your loop. In the Python version, the result array is allocated when the multiplication is performed, so you are allocating and freeing the result array each time in the loop. That may slow things down a little. In a real application you are less likely to be re-doing the same computation over and over again, so the allocation would happen only once.

You might try something like the alternative below and see if it is any faster (it is more memory efficient). Note also that there is some overhead in function calls in Python, so you may get some speedup if you inline the call to mult_test; you can decide for yourself whether that would still be a fair comparison. (Unfortunately, MA doesn't seem to support the third argument to multiply.)

My version (I don't have TimerUtility, so I used time.clock instead) got these times:

Your code:
completed 1000 in 99.050000 seconds
3.74e+06 checked multiplies/second

My code:
alternative completed 1000 in 80.070000 seconds
4.62e+06 checked multiplies/second

So it did buy you something. Here is the code:

#!/usr/bin/env python2.1
import sys
# test harness for Masked array performance
#from MA import *
from Numeric import *
from time import clock

def mult_test(a1, a2):
    res = a1 * a2

if __name__ == '__main__':
    repeat = 100
    gates = 1000
    beams = 370
    if len(sys.argv) > 1:
        repeat = int(sys.argv[1])
    t1 = ones((beams, gates), Float)
    a1 = t1
    a2 = t1
    # a1 = masked_values(t1, -327.68)
    # a2 = masked_values(t1, -327.68)
    i = 0
    start = clock()
    while (i < repeat):
        i = i + 1
        res = mult_test(a1, a2)
    elapsed = clock() - start
    print 'completed %d in %f seconds' % (repeat, elapsed)
    cntMultiply = repeat*gates*beams
    print '%8.3g checked multiplies/second' % (cntMultiply/elapsed)
    print

    # alternative: preallocate the result and reuse it
    res = zeros(a1.shape, Float)
    i = 0
    start = clock()
    while (i < repeat):
        i = i + 1
        multiply(a1, a2, res)
    elapsed = clock() - start
    print 'alternative completed %d in %f seconds' % (repeat, elapsed)
    cntMultiply = repeat*gates*beams
    print '%8.3g checked multiplies/second' % (cntMultiply/elapsed)
    print

Another note: calling ones with Float as your type gives you a Python float, which is a C double. Use 'f' or Float32 to get a C float. I've found that on Intel hardware doubles are just as fast (the FPU uses doubles anyway), but they do use more memory, so this could make a difference.

-Chris

--
Christopher Barker, Ph.D.
Chr...@ho...
http://members.home.net/barkerlohmann
Oil Spill Modeling / Water Resources Engineering / Coastal and Fluvial Hydrodynamics
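Chris's buffer-reuse point can be illustrated with the standard library alone. The stdlib array module stands in for a Numeric array, and the function names mult_fresh/mult_into are hypothetical; the shape matches the thread's 1000 x 370 case:

```python
from array import array

gates, beams = 1000, 370
n = gates * beams
a1 = array('d', [1.0]) * n
a2 = array('d', [2.0]) * n

def mult_fresh(a1, a2):
    # allocates (and later frees) a brand-new result buffer on every call,
    # like `res = a1 * a2` inside the loop
    return array('d', (x * y for x, y in zip(a1, a2)))

# preallocated once, reused across iterations; bytes(8*n) zero-fills it
res = array('d', bytes(8 * n))

def mult_into(a1, a2, out):
    # writes into the caller-supplied buffer, like multiply(a1, a2, res)
    for i in range(len(a1)):
        out[i] = a1[i] * a2[i]

mult_into(a1, a2, res)
```

The second form does the same arithmetic but touches the allocator once instead of once per iteration, which is exactly where Chris's ~20% came from.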
From: Paul F. D. <pa...@pf...> - 2001-06-13 01:31:21
PS: my test was on double precision; I failed to notice that too.

-----Original Message-----
From: Joe Van Andel
Sent: Tuesday, June 12, 2001 5:20 PM
To: numpy-discussion
Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations.

[Joe's message is quoted in full; see his original post below.]
From: Paul F. D. <pa...@pf...> - 2001-06-13 01:16:54
I have a timing benchmark for MA that computes the ratio MA/Numeric for two cases: (1) there is actually no mask, and (2) there is a mask. For N = 50,000 these ratios are usually around 1.3 and 1.8 respectively. It makes sense in the second case that the number might be around 2, since you have to pass through the mask data as well, even if it is only bytes. In short, there is this much overhead to MA. If you got MA/C++ = 1.67, it would indicate that Numpy and C++ are comparable. The tests Jim did when he first wrote it were about 10% worse than C.

Your C++ uses a special value instead of a mask array, which may mean that you traded space for CPU time, and using large arrays like that may cause some page faults (?). Anyway, you're comparing apples and oranges a little. My point is that this is probably an MA issue rather than a Numpy issue. However, please note that I have not (yet) done any of the normal profiling and testing that one would do to speed MA up, such as putting key parts in C. This is just not an issue for me right now.

-----Original Message-----
From: Joe Van Andel
Sent: Tuesday, June 12, 2001 5:20 PM
To: numpy-discussion
Subject: [Numpy-discussion] performance comparison of C++ vs Numeric (MA) operations.

[Joe's message is quoted in full; see his original post below.]
From: Joe V. A. <van...@at...> - 2001-06-13 00:20:15
I was curious about the relative performance of C++ vs Numeric Python for operations on arrays of roughly 400,000 elements. I built a simple single-precision array multiplication function in C++ that performs an element-by-element multiply, checking whether each element is "valid" or "missing data". Then, for comparison, I wrote a similar multiplication routine using the Masked Array (MA) package of Numeric Python.

I compiled Numeric Python (20.1.0b2) with '-O3', by modifying setup.py to contain lines like:

OPTIMIZE = ['-O3']
ext_modules = . .
    Extension('multiarray', ['Src/multiarraymodule.c'],
              extra_compile_args=OPTIMIZE),

On an 800 MHz dual-processor Dell Linux box, using gcc 2.95.3:

Software              Performance
------------------------------------------------
Numeric Python        5.0e6 multiplies/second
Numeric Python -O3    6.1e6 multiplies/second
C++                   10.3e6 multiplies/second
C++ -O3               10.3e6 multiplies/second

(I tried using "plain" Numeric arrays rather than Masked arrays, and it didn't seem to make much difference.)

Has anyone else benchmarked the relative performance of C/C++ vs Numeric Python? Does anyone know of other optimizations to Numeric Python that could be implemented? I know a more realistic benchmark would include I/O, which might tend to reduce the apparent difference in performance. I've attached the benchmark modules, in case someone would like to examine them.

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...
From: Konrad H. <hi...@cn...> - 2001-06-12 09:57:54
> >>> Numeric.array([2.9e-131])**3
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: math range error
> >>> 2.9e-131**3
> 0.0
> >>> Numeric.array([2.9e-131+0j])**3
> array([ 0.+0.j])
>
> Now I have a quick solution for my problem, but I have the impression that
> this is a bug; at least I don't understand the underlying logic. Can
> somebody explain it to me?

The power calculation routines for float and complex are completely different; I suppose the absence of underflow reporting in the latter is just a side effect.

Konrad.
--
Konrad Hinsen | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2 | Deutsch/Esperanto/English/
France | Nederlands/Francais
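The asymmetry Konrad describes sits on top of ordinary C-double underflow, which plain Python shows directly: both the float and the complex power silently flush to zero (2.9e-131 cubed is about 2.4e-392, far below the smallest subnormal double, roughly 5e-324), so the OverflowError in the session above presumably came from Numeric's own error checking rather than from the arithmetic itself. A quick demonstration:

```python
tiny = 2.9e-131

f = tiny ** 3         # underflows the double range; plain Python returns 0.0
c = (tiny + 0j) ** 3  # the complex path likewise flushes to zero
```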
From: Konrad H. <hi...@cn...> - 2001-06-12 09:56:00
> Does anyone know how to disable underflow exception errors in Numeric?

The problem is not in Numeric; it is the C library that decides whether underflows should be considered errors or not. The Python interpreter has some workarounds, but Numeric has not. I had the same problems you describe under AIX, where I solved them by linking to a different version of the math library. But I don't think there is a platform-independent solution.

Konrad.
From: Joe V. A. <van...@at...> - 2001-06-08 19:59:04
"Paul F. Dubois" wrote:
> Is the array single or double precision? Does it have the spacesaver
> attribute set?

The array is single precision, with the spacesaver attribute set.

> Is it possible the data had bad values in it that were something other than
> the missing value? What was the missing value?

Yes, I finally found that the array contained some NaN values, because of an error in my earlier calculation. So, at this point, MA works for me. Thanks for your help.

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...
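Joe's root cause, stray NaNs from an earlier step, is worth screening for before building a masked array, and NaN's self-inequality makes math.isnan the reliable test. A small standard-library sketch; the data values here are hypothetical:

```python
import math

def find_nans(values):
    """Return the indices of NaN entries. NaN != NaN, so an equality
    scan misses them; math.isnan is the dependable check."""
    return [i for i, v in enumerate(values) if math.isnan(v)]

data = [1.0, float('nan'), -327.68, 2.5, float('nan')]
bad = find_nans(data)
```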
From: Paul F. D. <pa...@pf...> - 2001-06-07 16:36:04
Try this and let me know if it works for you. It implements Hardy's multiquadric. Note the caution on the number of input points. This algorithm usually does a really spiffy job; try the default rsq first.

-- Paul

-----Original Message-----
From: Rob...@dn...
Sent: Wednesday, June 06, 2001 8:59 PM
To: num...@li...
Subject: [Numpy-Discussion] 3d interpolation

[Robert's question is quoted in full; see his original post below.]
From: Pearu P. <pe...@ce...> - 2001-06-07 08:08:30
On Wed, 6 Jun 2001, Karshi Hasanov wrote:
> I want to build a *.vtk structured data file from an array A[i,j,k] which
> has vector attributes. What's the right (or best) way of doing it using
> Python? I do have the VTK User's Guide book, but it didn't tell me much.
> Thanks

Check out PyVTK: http://cens.ioc.ee/projects/pyvtk/

Using PyVTK you can create the data file as follows:

from pyvtk import *
VtkData(StructuredPoints([n1,n2,n3]),
        PointData(Vectors(A))).tofile('arr.vtk')

where A is an n1 x n2 x n3 array of 3-sequences.

Pearu
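For reference, the legacy ASCII layout that a call like Pearu's produces can be sketched by hand with nothing but the standard library. This is a minimal, hedged rendition of the legacy VTK STRUCTURED_POINTS format; the attribute name 'velocity', the origin/spacing values, and the sample data are arbitrary choices, not from the thread:

```python
def write_vtk_vectors(filename, dims, vectors):
    """Write a minimal legacy-ASCII VTK STRUCTURED_POINTS file with one
    3-vector attribute per point. `vectors` is a flat list of (x, y, z),
    in x-fastest point order."""
    n1, n2, n3 = dims
    npoints = n1 * n2 * n3
    assert len(vectors) == npoints
    lines = [
        "# vtk DataFile Version 2.0",
        "vector field",                      # free-form title line
        "ASCII",
        "DATASET STRUCTURED_POINTS",
        "DIMENSIONS %d %d %d" % (n1, n2, n3),
        "ORIGIN 0 0 0",
        "SPACING 1 1 1",
        "POINT_DATA %d" % npoints,
        "VECTORS velocity float",
    ]
    lines += ["%g %g %g" % tuple(v) for v in vectors]
    with open(filename, "w") as f:
        f.write("\n".join(lines) + "\n")

write_vtk_vectors("arr.vtk", (2, 1, 1), [(1, 0, 0), (0, 1, 0)])
```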
From: <Rob...@dn...> - 2001-06-07 03:59:12
I have a series of x, y, z irregular data points. I would like to create a 2-D array (surface) from this data, each cell in the array being an interpolated value based on the nearby z values. I was wondering if anyone had any experience or suggestions under Python? The interpolation algorithm needn't be elaborate (kriging is definitely overkill); I was thinking more along the lines of splines or even inverse distance weighting.

Thanks

Robert Denham
Department of Natural Resources
Queensland, Australia

************************************************************************
The information in this e-mail together with any attachments is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any form of review, disclosure, modification, distribution and/or publication of this e-mail message is prohibited. If you have received this message in error, you are asked to inform the sender as quickly as possible and delete this message and any copies of this message from your computer and/or your computer system network.
************************************************************************
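The inverse distance weighting Robert mentions is only a few lines of plain Python. A sketch of Shepard's method; the power parameter, sample points, and grid resolution below are illustrative assumptions:

```python
def idw(points, xi, yi, power=2.0):
    """Shepard's inverse-distance weighting: estimate z at (xi, yi)
    from irregular (x, y, z) samples. Nearer samples get larger weights."""
    num = den = 0.0
    for x, y, z in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return z                      # exactly on a sample point
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance**power
        num += w * z
        den += w
    return num / den

samples = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 3.0)]
# fill a small regular grid (the "2-d array" surface Robert asks for)
grid = [[idw(samples, x / 2.0, y / 2.0) for x in range(3)] for y in range(3)]
```

IDW is exact at the sample points and bounded by the sample z-range elsewhere, which is often all such a surface needs.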
From: Paul F. D. <pa...@pf...> - 2001-06-06 22:59:54
20.1.0b2 is now available in .tar.gz, .exe, and .zip formats.
From: Karshi H. <kar...@ut...> - 2001-06-06 21:44:32
Hi,

I want to build a *.vtk structured data file from an array A[i,j,k] which has vector attributes. What's the right (or best) way of doing it using Python? I do have the VTK User's Guide book, but it didn't tell me much.

Thanks
From: <co...@ph...> - 2001-06-06 18:57:11
At some point, "David H. Marimont" <mar...@nx...> wrote:
> Thanks, David, that worked perfectly. I can now import lapack_lite
> without any errors.
>
> Now I need to know how to call lapack functions aside from the ones
> that come packaged with lapack_lite. Where do all the other lapack
> functions live? And is there some way for me to determine that
> automatically?

You want PyLapack. See a previous message at
http://www.geocrawler.com/archives/3/1329/2000/4/0/3616954/

--
|>|\/|<
David M. Cooke
co...@mc...
From: David H. M. <mar...@nx...> - 2001-06-06 18:33:38
Thanks, David, that worked perfectly: I can now import lapack_lite without any errors.

Now I need to know how to call lapack functions aside from the ones that come packaged with lapack_lite (dgeev, dgelss, dgesv, dgesvd, dgetrf, dsyev, zgelss, zgesv, zgesvd, zgetrf, and zheev); I found these via inspect.getmembers(lapack_lite). Where do all the other lapack functions live? And is there some way for me to determine that automatically?

Thanks.

David

"David M. Cooke" wrote:
> You have to compile in the g2c library. For RH 7.1, add the path
> '/usr/lib/gcc-lib/i386-redhat-linux/2.96/' to library_dirs_list in
> setup.py, and 'g2c' to libraries_list.
> [The full setup.py changes appear in David's message below.]
>
> You need g2c because lapack and blas were compiled from Fortran using
> g77, and so they depend on routines that implement some of the Fortran
> statements.
From: <co...@ph...> - 2001-06-06 18:06:51
At some point, "David H. Marimont" <mar...@nx...> wrote:
> I just compiled and installed Numeric 20.1.0b1 using lapack
> and blas libraries. When I tried to import lapack_lite (after
> importing Numeric), I got this error:
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe
>
> I'm using Python 2.1 on RH 7.1.

You have to compile in the g2c library. For RH 7.1, add the path '/usr/lib/gcc-lib/i386-redhat-linux/2.96/' to library_dirs_list in setup.py, and 'g2c' to libraries_list.

So the appropriate lines in setup.py will look like:

# delete all but the first one in this list if using your own LAPACK/BLAS
sourcelist = ['Src/lapack_litemodule.c',
              # 'Src/blas_lite.c',
              # 'Src/f2c_lite.c',
              # 'Src/zlapack_lite.c',
              # 'Src/dlapack_lite.c'
              ]
# set these to use your own BLAS
library_dirs_list = ['/usr/local/lib',
                     '/usr/lib/gcc-lib/i386-redhat-linux/2.96/']
libraries_list = ['lapack', 'blas', 'g2c']

If you're compiling on Debian, I don't think you need to add the path (but you need 'g2c').

You need g2c because lapack and blas were compiled from Fortran using g77, and so they depend on routines that implement some of the Fortran statements.

--
|>|\/|<
David M. Cooke
co...@mc...
From: David H. M. <mar...@nx...> - 2001-06-06 17:50:28
I just compiled and installed Numeric 20.1.0b1 using lapack and blas libraries. When I tried to import lapack_lite (after importing Numeric), I got this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: /usr/lib/liblapack.so.3: undefined symbol: e_wsfe

I'm using Python 2.1 on RH 7.1.

I've had this problem before and have even seen postings to this list about related problems. But the solutions posted were over my head, so I've never been able to use the Python interface to the lapack and blas libraries, which I really need. Does anyone have any advice, preferably pitched to someone who has limited compilation skills (i.e. at the "configure, make, make install" level)?

Thanks.

David Marimont
From: Joe V. A. <van...@at...> - 2001-06-06 16:51:01
I'm trying to use the MA package for numeric computations. Unfortunately, attempting to construct a masked array sometimes fails:

masked_values(values, missingValue, savespace=1)
  File "/usr/lib/python2.1/site-packages/MA/MA.py", line 1299, in masked_values
    m = Numeric.less_equal(abs(d-value), atol+rtol*abs(value))
OverflowError: math range error

The odd thing is that the floating-point calculations that produced the input Numeric array didn't cause a math range error, but MA's attempt to find the 'missing' values does cause a range error.

When I switched to Python 2.1, I had to find and fix several overflow problems that didn't cause exceptions under Python 1.5. For example, I had to use a "protected" exponentiation routine to avoid overflow errors:

MIN_EXP = -745
MAX_EXP = 709

def ProtExp(a):
    """Protected exponentiation calculation.
    Avoid overflow errors on large negative or positive arguments."""
    min_a = choose(less(a, MIN_EXP), (a, MIN_EXP))
    return exp(choose(greater(min_a, MAX_EXP), (min_a, MAX_EXP)))

I'm concerned that the math exception handling for Python 2.1 under x86 Linux makes it hard to get my work done. Any ideas on how to fix this error in MA? (I already tried masked_values(values, missingValue, rtol=1e-2, atol=1.e-4, savespace=1), which didn't help.)

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...
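Joe's ProtExp simply clamps exp's argument into the range an IEEE-754 double can represent (exp underflows below about -745 and overflows above about 709). The same idea for scalars, standard library only; the function name here is a hypothetical stand-in for the array version above:

```python
import math

# IEEE-754 double limits for exp(): underflow near -745, overflow near 709
MIN_EXP, MAX_EXP = -745.0, 709.0

def prot_exp(x):
    """exp() with its argument clamped to the representable range,
    mirroring Joe's choose()-based ProtExp for a single value."""
    return math.exp(min(max(x, MIN_EXP), MAX_EXP))
```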
From: Tavis R. <ta...@ca...> - 2001-06-05 18:48:33
Oops, there's a typo in the test script I posted. It should be

b = loadIt('test.pickle')

instead of

> > > This script segfaults
> > > ====================================
> > > # ... same imports and func defs as above
> > > b = loadIt()
> > > print b
> > > ====================================
From: Tavis R. <ta...@ca...> - 2001-06-05 18:31:31
Paul,

I just installed 20.1.0b1 and got the same segfault. I'm using SuSE 6.4. Note that if I dump it and load it from a single process, it works fine; the error only occurs when I try to load it from a separate process.

Tavis

On Tuesday 05 June 2001 11:14, Paul F. Dubois wrote:
> Travis:
> Works for me, using either dump or dumps, load or loads. I used
> Numeric 20.1.0b1 / Python 2.1 / RedHat 6.2
>
> On Tue, 05 Jun 2001, Tavis Rudd wrote:
> > Hi,
> > I've been having difficulty pickling arrays with the
> > type PyObject using Numeric. I haven't tried it with
> > MA but I assume the same problem exists.
> >
> > This script works
> > =====================================
> > from cPickle import dump, load
> > from Numeric import array, PyObject
> >
> > def pickleIt(obj, fileName):
> >     fp = open(fileName, 'w')
> >     dump(obj, fp)
> >     fp.close()
> >
> > def loadIt(fileName):
> >     fp = open(fileName, 'r')
> >     obj = load(fp)
> >     fp.close()
> >     return obj
> >
> > a = array(['abc', 'def', 'ghi'], PyObject)
> > pickleIt(a, 'test.pickle')
> >
> > This script segfaults
> > ====================================
> > # ... same imports and func defs as above
> > b = loadIt()
> > print b
> > ====================================
> >
> > I first noticed this when trying to pickle arrays
> > constructed from lists of mx.DateTime objects.
> >
> > Numeric 19.1.0
> > Python 2.1 final
> > Linux 2.2.18
> >
> > Is this a reproducible bug or something unique to my setup?
> > Tavis
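The crash in this thread was specific to unpickling Numeric PyObject arrays; round-tripping plain Python lists with the standard pickle module is safe. A sketch mirroring Tavis's pickleIt/loadIt helpers, updated for today (note that pickle files must be opened in binary mode, unlike the 'w'/'r' modes in the original script; the path is a hypothetical temp-file location):

```python
import os
import pickle
import tempfile

def pickle_it(obj, filename):
    """Dump obj to filename; 'wb' because pickle data is binary."""
    with open(filename, 'wb') as fp:
        pickle.dump(obj, fp)

def load_it(filename):
    """Load and return the object pickled in filename."""
    with open(filename, 'rb') as fp:
        return pickle.load(fp)

path = os.path.join(tempfile.gettempdir(), 'test.pickle')
pickle_it(['abc', 'def', 'ghi'], path)   # a plain list, not a PyObject array
b = load_it(path)
```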