From: David M. C. <co...@ph...> - 2006-06-12 22:33:50
|
On Mon, 12 Jun 2006 09:02:54 +0200 Nils Wagner <nw...@ia...> wrote:

> matplotlib data path /usr/lib64/python2.4/site-packages/matplotlib/mpl-data
> $HOME=/home/nwagner
> loaded rc file /home/nwagner/matplotlibrc
> matplotlib version 0.87.3
> verbose.level helpful
> interactive is False
> platform is linux2
> numerix numpy 0.9.9.2603
> Traceback (most recent call last):
>   File "cascade.py", line 3, in ?
>     from pylab import plot, show, xlim, ylim, subplot, xlabel, ylabel,
>       title, legend, savefig, clf, scatter
>   File "/usr/lib64/python2.4/site-packages/pylab.py", line 1, in ?
>     from matplotlib.pylab import *
>   File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line 198, in ?
>     import mlab  # so I can override hist, psd, etc...
>   File "/usr/lib64/python2.4/site-packages/matplotlib/mlab.py", line 74, in ?
>     from numerix.fft import fft, inverse_fft
> ImportError: cannot import name inverse_fft

It's a bug in matplotlib: it should use ifft for numpy. We cleaned up the namespace a while back to not have two names for things. (Admittedly, I'm not sure why we went with the short names instead of the self-descriptive long ones. It's in the archives somewhere.)

--
|>|\/|<
|David M. Cooke   http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
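[For anyone hitting this before matplotlib is fixed: in numpy's cleaned-up namespace the inverse transform is spelled `numpy.fft.ifft`. A minimal round-trip sketch, using plain numpy and no matplotlib:]

```python
import numpy as np

# numpy exposes fft/ifft; inverse_fft was the old Numeric-era name
# that matplotlib's mlab.py was still trying to import.
x = np.array([1.0, 2.0, 3.0, 4.0])
spectrum = np.fft.fft(x)
roundtrip = np.fft.ifft(spectrum)

# ifft(fft(x)) recovers x up to floating-point error; the result is
# complex, with a negligible imaginary part for real input.
assert np.allclose(roundtrip.real, x)
```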
From: David M. C. <co...@ph...> - 2006-06-12 22:29:51
|
On Tue, 13 Jun 2006 00:19:54 +0200 Steve Schmerler <el...@gm...> wrote:

> The latest svn build fails.
>
> [snip]
>   File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1224, in calc_info
>     atlas_version = get_atlas_version(**version_info)
>   File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1085, in get_atlas_version
>     library_dirs=config.get('library_dirs', []),
>   File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", line 121, in get_output
>     return exitcode, output
> UnboundLocalError: local variable 'exitcode' referenced before assignment
> ====================================================================================
>
> I removed the old /build dir and even did a complete fresh checkout but
> it still fails to build.
>
> cheers,
> steve

Sorry about that; I noticed and fixed it last night, but forgot to check it in. It should work now.

--
|>|\/|<
|David M. Cooke   http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Steve S. <el...@gm...> - 2006-06-12 22:19:53
|
The latest svn build fails.

====================================================================================
elcorto@ramrod:~/install/python/scipy/svn$ make build
cd numpy; python setup.py build
Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
No module named __svn_version__
F2PY Version 2_2607
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not find in /usr/local/lib
  libraries mkl,vml,guide not find in /usr/lib
  NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not find in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not find in /usr/lib/atlas
  libraries ptf77blas,ptcblas,atlas not find in /usr/lib
  NOT AVAILABLE
atlas_blas_info:
  libraries f77blas,cblas,atlas not find in /usr/local/lib
  libraries f77blas,cblas,atlas not find in /usr/lib/atlas
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib']
    language = c
Could not locate executable gfortran
Could not locate executable f95
customize GnuFCompiler
customize GnuFCompiler
customize GnuFCompiler using config
compiling '_configtest.c':

/* This file is generated from numpy_distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
  ATL_buildinfo();
  return 0;
}

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC

compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/lib -lf77blas -lcblas -latlas -o _configtest
_configtest.o: In function `main':
/home/elcorto/install/python/scipy/svn/numpy/_configtest.c:5: undefined reference to `ATL_buildinfo'
collect2: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o
Traceback (most recent call last):
  File "setup.py", line 84, in ?
    setup_package()
  File "setup.py", line 77, in setup_package
    configuration=configuration )
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/core.py", line 140, in setup
    config = configuration()
  File "setup.py", line 43, in configuration
    config.add_subpackage('numpy')
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage
    caller_level = 2)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage
    caller_level = caller_level + 1)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "./numpy/setup.py", line 9, in configuration
    config.add_subpackage('core')
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 740, in add_subpackage
    caller_level = 2)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 723, in get_subpackage
    caller_level = caller_level + 1)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/misc_util.py", line 670, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "numpy/core/setup.py", line 207, in configuration
    blas_info = get_info('blas_opt',0)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 256, in get_info
    return cl().get_info(notfound_action)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 397, in get_info
    self.calc_info()
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1224, in calc_info
    atlas_version = get_atlas_version(**version_info)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/system_info.py", line 1085, in get_atlas_version
    library_dirs=config.get('library_dirs', []),
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", line 121, in get_output
    return exitcode, output
UnboundLocalError: local variable 'exitcode' referenced before assignment
====================================================================================

I removed the old /build dir and even did a complete fresh checkout but it still fails to build.

cheers,
steve

--
Random number generation is the art of producing pure gibberish as quickly as possible.
From: Sasha <nd...@ma...> - 2006-06-12 22:19:21
|
BTW, here is the relevant explanation from mathmodule.c:

/* ANSI C generally requires libm functions to set ERANGE
 * on overflow, but also generally *allows* them to set
 * ERANGE on underflow too.  There's no consistency about
 * the latter across platforms.
 * Alas, C99 never requires that errno be set.
 * Here we suppress the underflow errors (libm functions
 * should return a zero on underflow, and +- HUGE_VAL on
 * overflow, so testing the result for zero suffices to
 * distinguish the cases).
 */

On 6/12/06, Sasha <nd...@ma...> wrote:
> I don't know about numarray, but the difference between Numeric and
> python math module stems from the fact that the math module ignores
> errno set by C library and only checks for infinity. Numeric relies
> on errno exclusively, numpy ignores errors by default:
>
> >>> import numpy,math,Numeric
> >>> numpy.exp(-760)
> 0.0
> >>> math.exp(-760)
> 0.0
> >>> Numeric.exp(-760)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: math range error
> >>> numpy.exp(760)
> inf
> >>> math.exp(760)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: math range error
> >>> Numeric.exp(760)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> OverflowError: math range error
>
> I would say it's a bug in Numeric, so you are out of luck.
>
> Unfortunalely, even MA.exp(-760) does not work, but this is easy to fix:
>
> >>> exp = MA.masked_unary_operation(Numeric.exp,0.0,MA.domain_check_interval(-100,100))
> >>> exp(-760).filled()
> 0
>
> You would need to replace -100,100 with the bounds appropriate for your system.
>
> On 6/12/06, Sebastian Haase <ha...@ms...> wrote:
> > Hi,
> > I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to
> > do a non linear minimization. It uses the "old" Numeric module.
> > But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down
> > to
> > >>> Numeric.exp(-760.)
> > Traceback (most recent call last):
> >   File "<input>", line 1, in ?
> > OverflowError: math range error
> >
> > >From numarray I'm used to getting this:
> > >>> na.exp(-760)
> > 0.0
> >
> > Mostly I'm confused because my code worked before I upgraded to version 24.2.
> >
> > Thanks for any hints on how I could revive my code...
> > -Sebastian Haase
> >
> > _______________________________________________
> > Numpy-discussion mailing list
> > Num...@li...
> > https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Sasha <nd...@ma...> - 2006-06-12 22:15:17
|
I don't know about numarray, but the difference between Numeric and the python math module stems from the fact that the math module ignores errno set by the C library and only checks for infinity. Numeric relies on errno exclusively; numpy ignores errors by default:

>>> import numpy,math,Numeric
>>> numpy.exp(-760)
0.0
>>> math.exp(-760)
0.0
>>> Numeric.exp(-760)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: math range error
>>> numpy.exp(760)
inf
>>> math.exp(760)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: math range error
>>> Numeric.exp(760)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: math range error

I would say it's a bug in Numeric, so you are out of luck.

Unfortunately, even MA.exp(-760) does not work, but this is easy to fix:

>>> exp = MA.masked_unary_operation(Numeric.exp, 0.0, MA.domain_check_interval(-100,100))
>>> exp(-760).filled()
0

You would need to replace -100,100 with the bounds appropriate for your system.

On 6/12/06, Sebastian Haase <ha...@ms...> wrote:
> Hi,
> I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to
> do a non linear minimization. It uses the "old" Numeric module.
> But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down
> to
> >>> Numeric.exp(-760.)
> Traceback (most recent call last):
>   File "<input>", line 1, in ?
> OverflowError: math range error
>
> >From numarray I'm used to getting this:
> >>> na.exp(-760)
> 0.0
>
> Mostly I'm confused because my code worked before I upgraded to version 24.2.
>
> Thanks for any hints on how I could revive my code...
> -Sebastian Haase
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
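[Worth stressing that numpy ignores these errors only *by default*: the behaviour is configurable per error class. A minimal sketch with `numpy.errstate` (this context manager comes from later numpy releases than the 0.9.x discussed in this thread, where `numpy.seterr` played the same role):]

```python
import numpy as np

# Default behaviour matches the table above: 0.0 on underflow,
# inf on overflow, no exception raised.
with np.errstate(over='ignore', under='ignore'):
    assert np.exp(-760.0) == 0.0
    assert np.isinf(np.exp(760.0))

# The Numeric-style behaviour (raise on overflow) can be opted into:
raised = False
with np.errstate(over='raise'):
    try:
        np.exp(760.0)
    except FloatingPointError:
        raised = True
assert raised
```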
From: Sebastian H. <ha...@ms...> - 2006-06-12 21:34:18
|
Hi,

I'm using Konrad Hinsen's LeastSquares.leastSquaresFit for a convenient way to do a non linear minimization. It uses the "old" Numeric module. But since I upgraded to Numeric 24.2 I get OverflowErrors that I tracked down to

>>> Numeric.exp(-760.)
Traceback (most recent call last):
  File "<input>", line 1, in ?
OverflowError: math range error

From numarray I'm used to getting this:
>>> na.exp(-760)
0.0

Mostly I'm confused because my code worked before I upgraded to version 24.2.

Thanks for any hints on how I could revive my code...
-Sebastian Haase
From: Travis O. <oli...@ee...> - 2006-06-12 20:18:01
|
Robert Hetland wrote:

> I set up a linux machine without BLAS, LAPACK, ATLAS, hoping that
> lapack_lite would take over. For the moment, I am not concerned
> about speed -- I just want something that will work with small
> matricies. I installed numpy, and it passes all of the tests OK, but
> it hangs when doing eig:
>
> u, v = linalg.eig(rand(10,10))
> # ....lots of nothing....
>
> Do you *need* the linear algebra libraries for eig? BTW, inverse
> seems to work fine.
>
> -Rob

From ticket #5:

> Greg Landrum pointed out that it may be a gcc 4.0 related problem and
> proposed a workaround -- to add the option '-ffloat-store' to CFLAGS.
> Works for me!

Are you using gcc 4.0?

-Travis
From: Robert H. <he...@ta...> - 2006-06-12 15:29:42
|
On Jun 8, 2006, at 3:23 PM, David M. Cooke wrote:

> Lapack_lite probably doesn't get much testing from the developers,
> because we probably all have optimized versions of blas and lapack.

This is precisely my suspicion... I tried a variety of random, square matrices (like rand(10, 10), rand(100, 100), etc.), and none work. And it just hangs forever, so there is really no output to debug. It is the most recent svn version of numpy (which, BTW, works on my Mac, with AltiVec there..)

-Rob

-----
Rob Hetland, Assistant Professor
Dept of Oceanography, Texas A&M University
p: 979-458-0096, f: 979-845-6331
e: he...@ta..., w: http://pong.tamu.edu
From: Johannes L. <a.u...@gm...> - 2006-06-12 14:03:17
|
> I've tried to send a message twice to scipy-user since friday without
> success (messages don't come back to me but I don't receive any message
> from scipy-user too and they don't appear in archives).
> Note that since friday there are no new messages from that list.
>
> Is scipy-user working?

Hm, scipy-dev seems to be offline as well.

Johannes
From: Brian B. <bb...@br...> - 2006-06-12 12:57:10
|
Hello,

I am trying to load some .mat files in python, that were saved with octave. I get some weird things with strings, and structs fail altogether. Am I doing something wrong? Python 2.4, Scipy '0.4.9.1906', numpy 0.9.8, octave 2.1.71, running Linux.

thanks,

Brian Blais

here is what I tried:

Numbers are ok:

========OCTAVE==========
>> a=rand(4)
a =
  0.617860  0.884195  0.032998  0.217922
  0.207970  0.753992  0.333966  0.905661
  0.048432  0.290895  0.353919  0.958442
  0.697213  0.616851  0.426595  0.371364
>> save -mat-binary pythonfile.mat a

=========PYTHON===========
In [13]:d=io.loadmat('pythonfile.mat')
In [14]:d
Out[14]:
{'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:23:54 UTC',
 '__version__': '1.0',
 'a': array([[ 0.61785957,  0.88419484,  0.03299807,  0.21792207],
        [ 0.20796989,  0.75399171,  0.33396634,  0.90566095],
        [ 0.04843219,  0.29089527,  0.35391921,  0.95844178],
        [ 0.69721313,  0.61685075,  0.42659485,  0.37136358]])}

Strings are weird (turns to all 1's):

========OCTAVE==========
>> a='hello'
a = hello
>> save -mat-binary pythonfile.mat a

=========PYTHON===========
In [15]:d=io.loadmat('pythonfile.mat')
In [16]:d
Out[16]:
{'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:13 UTC',
 '__version__': '1.0',
 'a': '11111'}

Cell arrays are fine (except for strings):

========OCTAVE==========
>> a={5 [1,2,3] 'this'}
a =
{
  [1,1] = 5
  [1,2] =
    1  2  3
  [1,3] = this
}
>> save -mat-binary pythonfile.mat a

=========PYTHON===========
In [17]:d=io.loadmat('pythonfile.mat')
In [18]:d
Out[18]:
{'__header__': 'MATLAB 5.0 MAT-file, written by Octave 2.1.71, 2006-06-09 14:24:51 UTC',
 '__version__': '1.0',
 'a': array([5.0, [ 1. 2. 3.], 1111], dtype=object)}

Structs crash:

========OCTAVE==========
>> clear a
>> a.hello=5
a =
{
  hello = 5
}
>> a.this=[1,2,3]
a =
{
  hello = 5
  this =
    1  2  3
}
>> save -mat-binary pythonfile.mat a

=========PYTHON===========
In [19]:d=io.loadmat('pythonfile.mat')
---------------------------------------------------------------------------
exceptions.AttributeError    Traceback (most recent call last)

/home/bblais/octave/work/mouse/<console>

/usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename)
    751     if not (0 in test_vals):    # MATLAB version 5 format
    752         fid.rewind()
--> 753         thisdict = _loadv5(fid,basename)
    754         if dict is not None:
    755             dict.update(thisdict)

/usr/lib/python2.4/site-packages/scipy/io/mio.py in _loadv5(fid, basename)
    688     try:
    689         var = var + 1
--> 690         el, varname = _get_element(fid)
    691         if varname is None:
    692             varname = '%s_%04d' % (basename,var)

/usr/lib/python2.4/site-packages/scipy/io/mio.py in _get_element(fid)
    676
    677     # handle miMatrix type
--> 678     el, name = _parse_mimatrix(fid,numbytes)
    679     return el, name
    680

/usr/lib/python2.4/site-packages/scipy/io/mio.py in _parse_mimatrix(fid, bytes)
    597             result[i].__dict__[element] = val
    598         result = squeeze(transpose(reshape(result,tupdims)))
--> 599         if rank(result)==0: result = result.item()
    600
    601     # object is like a structure with but with a class name

AttributeError: mat_struct instance has no attribute 'item'

--
-----------------
bb...@br...
http://web.bryant.edu/~bblais
From: Emanuele O. <oli...@it...> - 2006-06-12 08:07:09
|
Hi, I've tried to send a message twice to scipy-user since friday without success (messages don't come back to me but I don't receive any message from scipy-user too and they don't appear in archives). Note that since friday there are no new messages from that list. Is scipy-user working? TIA Emanuele |
From: Nils W. <nw...@ia...> - 2006-06-12 07:03:18
|
matplotlib data path /usr/lib64/python2.4/site-packages/matplotlib/mpl-data
$HOME=/home/nwagner
loaded rc file /home/nwagner/matplotlibrc
matplotlib version 0.87.3
verbose.level helpful
interactive is False
platform is linux2
numerix numpy 0.9.9.2603
Traceback (most recent call last):
  File "cascade.py", line 3, in ?
    from pylab import plot, show, xlim, ylim, subplot, xlabel, ylabel,
      title, legend, savefig, clf, scatter
  File "/usr/lib64/python2.4/site-packages/pylab.py", line 1, in ?
    from matplotlib.pylab import *
  File "/usr/lib64/python2.4/site-packages/matplotlib/pylab.py", line 198, in ?
    import mlab  # so I can override hist, psd, etc...
  File "/usr/lib64/python2.4/site-packages/matplotlib/mlab.py", line 74, in ?
    from numerix.fft import fft, inverse_fft
ImportError: cannot import name inverse_fft
From: Paulo J. da S. e S. <pjs...@im...> - 2006-06-11 23:06:12
|
On Sat, 2006-06-10 at 15:15 -0700, JJ wrote:

> python
> import numpy
> import scipy
> a = scipy.random.normal(0,1,[10000,2000])
> b = scipy.random.normal(0,1,[10000,2000])
> c = scipy.dot(a,scipy.transpose(b))

Interestingly enough, I may have found "the reason". I am using only numpy (as I don't have scipy compiled and it is not necessary for the code above). The problem is probably memory consumption. Let me explain.

After creating a, ipython reports 160Mb of memory usage. After creating b, 330Mb. But when I run the last line, the memory footprint jumps to 1.2Gb! This is four times the original memory consumption. On my computer the result is swapping, and the calculation would take forever. Why is the memory usage getting so high?

Paulo

Obs: As a side note, if you decrease the matrix sizes (to, for example, 2000x2000), numpy and matlab spend basically the same time. If the transpose imposes some penalty for numpy, it imposes the same penalty for matlab (version 6.5, R13).
From: Rob H. <ro...@ho...> - 2006-06-11 08:31:34
|
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

JJ wrote:

> python
> import numpy
> import scipy
> a = scipy.random.normal(0,1,[10000,2000])
> b = scipy.random.normal(0,1,[10000,2000])
> c = scipy.dot(a,scipy.transpose(b))

Hi,

My experience with the old Numeric tells me that the first thing I would try to speed this up is to copy the transposed b into a fresh array. It might be that the memory access in dot is very inefficient due to the transposed (and hence large-stride) array. Of course I may be completely wrong.

Rob

- --
Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEi9TdH7J/Cv8rb3QRAgXYAJ9EcJtfUeX3H0ZWf22AapOvC3dgTwCgtF5r
QW6si4kqTjCvifCfTc/ShC0=
=uuUY
-----END PGP SIGNATURE-----
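[Rob's suggestion is easy to try: make the transposed operand contiguous before calling dot. A small sketch, with the sizes shrunk from the original 10000x2000 so it runs quickly; whether the copy actually helps on a given BLAS-less build is exactly what the thread is debating:]

```python
import numpy

a = numpy.random.normal(0, 1, [200, 50])
b = numpy.random.normal(0, 1, [200, 50])

# b.transpose() is just a strided view; copying it gives dot a
# contiguous, small-stride array to stream through.
bt = numpy.ascontiguousarray(b.transpose())

c_view = numpy.dot(a, b.transpose())  # large-stride second operand
c_copy = numpy.dot(a, bt)             # contiguous second operand

# Same numbers either way; only the memory-access pattern differs.
assert numpy.allclose(c_view, c_copy)
assert c_copy.shape == (200, 200)
```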
From: Charles R H. <cha...@gm...> - 2006-06-11 04:47:29
|
Hmm, I just tried this and it took so long on my machine (Athlon64, fc5_x86_64) that I ctrl-c'd out of it. Running ldd on lapack_lite.so shows

  libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaaace2000)
  libc.so.6 => /lib64/libc.so.6 (0x00002aaaaadfa000)
  /lib64/ld-linux-x86-64.so.2 (0x0000555555554000)

So apparently the Atlas library present in /usr/lib64/atlas was not linked in. I built numpy from the svn repository two days ago. I expect JJ's version is linked with atlas 'cause mine sure didn't run in 11 seconds.

Chuck

On 6/10/06, Robert Kern <rob...@gm...> wrote:
> JJ wrote:
> > Any ideas on where to look for a speedup? If the
> > problem is that it could not locate the atlas
> > ibraries, how might I assure that numpy finds the
> > atlas libraries. I can recompile and send along the
> > results if it would help.
>
> Run ldd(1) on the file lapack_lite.so . It should show you what dynamic
> libraries it is linked against.
>
> > PS. I first sent this to the scipy mailing list, but
> > it didnt seem to make it there.
>
> That's okay. This is actually the right place. All of the functions you
> used are numpy functions, not scipy.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
>   -- Umberto Eco
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Robert K. <rob...@gm...> - 2006-06-10 22:32:06
|
JJ wrote:

> Any ideas on where to look for a speedup? If the
> problem is that it could not locate the atlas
> ibraries, how might I assure that numpy finds the
> atlas libraries. I can recompile and send along the
> results if it would help.

Run ldd(1) on the file lapack_lite.so . It should show you what dynamic libraries it is linked against.

> PS. I first sent this to the scipy mailing list, but
> it didnt seem to make it there.

That's okay. This is actually the right place. All of the functions you used are numpy functions, not scipy.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From: Tim H. <tim...@co...> - 2006-06-10 22:31:12
|
David M. Cooke wrote:

> On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote:
>
>> I finally got around to cleaning up and checking in fromiter. As Travis
>> suggested, this version does not require that you specify count. From
>> the docstring:
>>
>>   fromiter(...)
>>   fromiter(iterable, dtype, count=-1) returns a new 1d array
>>   initialized from iterable. If count is nonegative, the new array
>>   will have count elements, otherwise it's size is determined by the
>>   generator.
>>
>> If count is specified, it allocates the full array ahead of time. If it
>> is not, it periodically reallocates space for the array, allocating 50%
>> extra space each time and reallocating back to the final size at the end
>> (to give realloc a chance to reclaim any extra space).
>>
>> Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as
>> "array(list(iterable),dtype=dtype)". Omitting count slows things down by
>> about 15%; still much faster than using "array(list(...))". It also is
>> going to chew up more memory than if you include count, at least
>> temporarily, but still should typically use much less than the
>> "array(list(...))" approach.
>
> Can this be integrated into array() so that array(iterable, dtype=dtype)
> does the expected thing?

It gets a little sticky since the expected thing is probably that array([iterable, iterable, iterable], dtype=dtype) work and produce an array of shape [3, N]. That looks like it would be hard to do efficiently.

> Can you try to find the length of the iterable, with PySequence_Size() on
> the original object? This gets a bit iffy, as that might not be correct
> (but it could be used as a hint).

The way the code is set up, a hint could be made use of with little additional complexity. Allegedly, some objects in 2.5 will grow __length_hint__, which could be made use of as well. I'm not very motivated to mess with this at the moment though, as the benefit is relatively small.
> What about iterables that return, say, tuples? Maybe add a shape argument,
> so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements
> from iterable that can be turned into arrays of shape (3,)? That could
> replace count, too.

I expect that this would double (or more) the complexity of the current code (which is nice and simple at present). I'm inclined to leave it as it is and advocate solutions of this type:

>>> import numpy
>>> tupleiter = ((x, x+1, x+2) for x in range(10))  # Just for example
>>> def flatten(x):
...     for y in x:
...         for z in y:
...             yield z
>>> numpy.fromiter(flatten(tupleiter), int).reshape(-1, 3)
array([[ 0,  1,  2],
       [ 1,  2,  3],
       [ 2,  3,  4],
       [ 3,  4,  5],
       [ 4,  5,  6],
       [ 5,  6,  7],
       [ 6,  7,  8],
       [ 7,  8,  9],
       [ 8,  9, 10],
       [ 9, 10, 11]])

[As a side note, I'm quite surprised that there isn't a way to flatten stuff already in itertools, but if there is, I can't find it].

-tim
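[For the record, later Python versions did grow the flattening helper Tim is looking for: itertools.chain.from_iterable, added in Python 2.6 (so after this thread). A sketch of the same trick with it:]

```python
import itertools
import numpy

# Same data as Tim's example: an iterator of 3-tuples.
tupleiter = ((x, x + 1, x + 2) for x in range(10))

# chain.from_iterable replaces the hand-written flatten() generator.
flat = itertools.chain.from_iterable(tupleiter)
arr = numpy.fromiter(flat, int).reshape(-1, 3)

assert arr.shape == (10, 3)
assert list(arr[0]) == [0, 1, 2]
assert list(arr[-1]) == [9, 10, 11]
```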
From: JJ <jos...@ya...> - 2006-06-10 22:15:14
|
Hello. I am a new user to scipy, thinking about crossing over from Matlab. I have a new AMD 64 machine and just installed fedora 5 and scipy. It is a dual boot machine with windows XP.

I did a small test to compare the speed of matlab (in 32 bit windows, Matlab student v14) to the speed of scipy (in fedora, 64 bit). I generated two random matrices of 10,000 by 2,000 elements and then took their dot product. The scipy code was:

python
import numpy
import scipy
a = scipy.random.normal(0,1,[10000,2000])
b = scipy.random.normal(0,1,[10000,2000])
c = scipy.dot(a,scipy.transpose(b))

I timed the last line of the code and compared it to the equivalent code in Matlab. The results were that Matlab took 3.3 minutes and scipy took 11.5 minutes. That's a factor of three. I am surprised by the difference and am wondering if there is anything I can do to speed up scipy.

I installed scipy, blas, atlas, numpy and lapack from source, just as the instructions on the scipy web site suggested (or as close to the instructions as I could). The only thing odd was that when installing numpy, I received messages that the atlas libraries could not be found. However, it did locate the lapack libraries. I don't know why it could not find the atlas libraries, as I told it exactly where to find them. It did not give the message that it was using the slower default libraries. I also tried compiling after an export ATLAS = statement, but that did not make a difference. Wherever I could, I compiled it specifically for the 64 bit machine. I used the current gcc compiler. The ATLAS notes suggested that the speed problems with the 2.9+ compilers had been fixed.

Any ideas on where to look for a speedup? If the problem is that it could not locate the atlas libraries, how might I assure that numpy finds the atlas libraries? I can recompile and send along the results if it would help.

Thanks.
John

PS. I first sent this to the scipy mailing list, but it didn't seem to make it there.
From: Robert K. <rob...@gm...> - 2006-06-10 22:05:45
|
David M. Cooke wrote: > Can this be integrated into array() so that array(iterable, dtype=dtype) > does the expected thing? That was rejected early on because array() is so incredibly overloaded as it is. http://article.gmane.org/gmane.comp.python.numeric.general/5756 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: David M. C. <co...@ph...> - 2006-06-10 21:42:58
|
On Sat, Jun 10, 2006 at 01:18:05PM -0700, Tim Hochberg wrote:

> I finally got around to cleaning up and checking in fromiter. As Travis
> suggested, this version does not require that you specify count. From
> the docstring:
>
>   fromiter(...)
>   fromiter(iterable, dtype, count=-1) returns a new 1d array
>   initialized from iterable. If count is nonegative, the new array
>   will have count elements, otherwise it's size is determined by the
>   generator.
>
> If count is specified, it allocates the full array ahead of time. If it
> is not, it periodically reallocates space for the array, allocating 50%
> extra space each time and reallocating back to the final size at the end
> (to give realloc a chance to reclaim any extra space).
>
> Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as
> "array(list(iterable),dtype=dtype)". Omitting count slows things down by
> about 15%; still much faster than using "array(list(...))". It also is
> going to chew up more memory than if you include count, at least
> temporarily, but still should typically use much less than the
> "array(list(...))" approach.

Can this be integrated into array() so that array(iterable, dtype=dtype) does the expected thing?

Can you try to find the length of the iterable, with PySequence_Size() on the original object? This gets a bit iffy, as that might not be correct (but it could be used as a hint).

What about iterables that return, say, tuples? Maybe add a shape argument, so that fromiter(iterable, dtype, count, shape=(None, 3)) expects elements from iterable that can be turned into arrays of shape (3,)? That could replace count, too.

--
|>|\/|<
|David M. Cooke   http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Andrew S. <str...@as...> - 2006-06-10 21:22:25
|
OK, here's another (semi-crazy) idea: __array_struct__ is the interface. ctypes lets us use it in "pure" Python. We provide a "reference implementation" so that newbies don't get segfaults. |
From: Tim H. <tim...@co...> - 2006-06-10 20:20:31
|
I finally got around to cleaning up and checking in fromiter. As Travis suggested, this version does not require that you specify count. From the docstring:

  fromiter(...)
  fromiter(iterable, dtype, count=-1) returns a new 1d array
  initialized from iterable. If count is nonnegative, the new array
  will have count elements, otherwise its size is determined by the
  generator.

If count is specified, it allocates the full array ahead of time. If it is not, it periodically reallocates space for the array, allocating 50% extra space each time and reallocating back to the final size at the end (to give realloc a chance to reclaim any extra space).

Speedwise, "fromiter(iterable, dtype, count)" is about twice as fast as "array(list(iterable),dtype=dtype)". Omitting count slows things down by about 15%; still much faster than using "array(list(...))". It also is going to chew up more memory than if you include count, at least temporarily, but still should typically use much less than the "array(list(...))" approach.

-tim
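[The two calling styles Tim describes can be sketched like this; numpy.fromiter has kept this signature ever since, so the sketch should run on any modern numpy:]

```python
import numpy

# With count: the full array is allocated up front (the fastest path).
a = numpy.fromiter((x * x for x in range(6)), dtype=float, count=6)
assert list(a) == [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]

# Without count: the array grows by ~50% as needed, then is trimmed
# back to its final size when the generator is exhausted.
b = numpy.fromiter((x * x for x in range(6)), dtype=float)
assert (a == b).all()
```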
From: stephen e. <ste...@gm...> - 2006-06-10 19:40:13
Thanks for all the help! Convolving looks like a great way to do this, and
I think that mean will be just fine for my purposes. That iterator also
looks fantastic and is actually the sort of thing that I was looking for
at first. I haven't tried it yet, though. Any idea how fast it would be?

Stephen

On 6/10/06, Alex Liberzon <ale...@gm...> wrote:
> Not sure, but my Google desktop search of "medfilt" (the name of the
> Matlab function) brought me to:
>
>     info_signal.py - N-dimensional order filter. medfilt - N-dimensional
>     median filter
>
> If it's true, then it is the 2D median filter.
>
> Regarding the neighbouring cells, I found the iterator on 2D ranges on
> the O'Reilly Cookbook by Simon Wittber very useful for my PyPIV
> (Particle Image Velocimetry, which works by correlation of 2D blocks
> of two successive images):
>
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334971
>
> def blocks(size, box=(1,1)):
>     """
>     Iterate over a 2D range in 2D increments.
>     Returns a 4 element tuple of top left and bottom right coordinates.
>     """
>     box = list(box)
>     pos = [0, 0]
>     yield tuple(pos + box)
>     while True:
>         if pos[0] >= size[0] - box[0]:
>             pos[0] = 0
>             pos[1] += box[1]
>             if pos[1] >= size[1]:
>                 raise StopIteration
>         else:
>             pos[0] += box[0]
>         topleft = pos
>         bottomright = [min(x[1] + x[0], x[2]) for x in zip(pos, box, size)]
>         yield tuple(topleft + bottomright)
>
> if __name__ == "__main__":
>     for c in blocks((100, 100), (99, 10)):
>         print c
>     for c in blocks((10, 10)):
>         print c
>
> HIH,
> Alex
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Alex L. <ale...@gm...> - 2006-06-10 17:19:20
Not sure, but my Google desktop search of "medfilt" (the name of the
Matlab function) brought me to:

    info_signal.py - N-dimensional order filter. medfilt - N-dimensional
    median filter

If it's true, then it is the 2D median filter.

Regarding the neighbouring cells, I found the iterator on 2D ranges on the
O'Reilly Cookbook by Simon Wittber very useful for my PyPIV (Particle
Image Velocimetry, which works by correlation of 2D blocks of two
successive images):

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/334971

def blocks(size, box=(1,1)):
    """
    Iterate over a 2D range in 2D increments.
    Returns a 4 element tuple of top left and bottom right coordinates.
    """
    box = list(box)
    pos = [0, 0]
    yield tuple(pos + box)
    while True:
        if pos[0] >= size[0] - box[0]:
            pos[0] = 0
            pos[1] += box[1]
            if pos[1] >= size[1]:
                raise StopIteration
        else:
            pos[0] += box[0]
        topleft = pos
        bottomright = [min(x[1] + x[0], x[2]) for x in zip(pos, box, size)]
        yield tuple(topleft + bottomright)

if __name__ == "__main__":
    for c in blocks((100, 100), (99, 10)):
        print c
    for c in blocks((10, 10)):
        print c

HIH,
Alex
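The recipe above can be driven with numpy to compute a per-tile statistic,
which is roughly what the original block-filtering question needs. A small
sketch — the 6x6 test image and 3x3 tile size are made up for
illustration, and the generator is reproduced here (using `return` in
place of `raise StopIteration`, so it also runs on current Pythons) to
keep the snippet standalone:

```python
import numpy as np

def blocks(size, box=(1, 1)):
    # The recipe's generator, restated so this example is self-contained.
    box = list(box)
    pos = [0, 0]
    yield tuple(pos + box)
    while True:
        if pos[0] >= size[0] - box[0]:
            pos[0] = 0
            pos[1] += box[1]
            if pos[1] >= size[1]:
                return
        else:
            pos[0] += box[0]
        bottomright = [min(p + b, s) for p, b, s in zip(pos, box, size)]
        yield tuple(pos + bottomright)

image = np.arange(36.0).reshape(6, 6)
# Mean of each non-overlapping 3x3 tile, keyed by its top-left corner.
means = {(x0, y0): image[x0:x1, y0:y1].mean()
         for (x0, y0, x1, y1) in blocks(image.shape, (3, 3))}
```

Since each tile is pulled out as a slice, swapping .mean() for a median
(or any other reduction) answers the "medfilt over blocks" variant of the
question directly.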
From: Alan G I. <ai...@am...> - 2006-06-10 13:40:49
On Sat, 10 Jun 2006, stephen emslie apparently wrote:
> I'm just starting with numpy (via scipy) and I'm wanting to perform
> adaptive thresholding
> (http://www.cee.hw.ac.uk/hipr/html/adpthrsh.html) on an image.

The ability to define a function on a neighborhood, where the
neighborhood is defined by relative coordinates, is useful in other
places too. (E.g., agent-based modeling. Here the output should be a new
array of the same dimension, with each element replaced by the value of
the function on its neighborhood.) I am also interested in learning how
people handle this.

Cheers,
Alan Isaac
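For the mean-based case, one way to get such a neighborhood function
without an explicit double loop is a summed-area table. The sketch below
is one rough take on the adaptive thresholding stephen asked about; the
function name and the box/offset parameters are invented for illustration,
not part of numpy or scipy:

```python
import numpy as np

def adaptive_threshold(img, box=5, offset=0.0):
    """Sketch: mark pixels brighter than their local box-x-box mean.

    The local mean is computed with a summed-area table, so each
    neighborhood sum is four lookups rather than a loop over the box.
    """
    img = np.asarray(img, dtype=float)
    pad = box // 2
    padded = np.pad(img, pad, mode='edge')
    # Summed-area table with a leading row/column of zeros, so that
    # sat[i, j] is the sum of padded[:i, :j].
    sat = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    sat[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    # Inclusion-exclusion over the four corners of each box.
    local_sum = (sat[box:box + h, box:box + w]
                 - sat[:h, box:box + w]
                 - sat[box:box + h, :w]
                 + sat[:h, :w])
    local_mean = local_sum / (box * box)
    return img > (local_mean - offset)
```

Replacing the mean with a median (as in the medfilt discussion) loses the
summed-area trick, which is why the general "function on a neighborhood"
problem is harder than this special case.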