From: Travis O. <oli...@ee...> - 2006-06-09 17:50:06
Albert Strasheim wrote:

>> -----Original Message-----
>> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Travis Oliphant
>> Sent: 08 June 2006 22:27
>> To: numpy-discussion
>> Subject: [Numpy-discussion] Array Protocol change for Python 2.6
>>
>> ...
>>
>> I would like to eliminate all the other array protocol attributes before
>> NumPy 1.0 (and re-label those such as __array_data__ that are useful in
>> other contexts --- like ctypes).
>
> Just out of curiosity:
>
> In [1]: x = N.array([])
>
> In [2]: x.__array_data__
> Out[2]: ('0x01C23EE0', False)
>
> Is there a reason why the __array_data__ tuple stores the address as a hex
> string? I would guess that this representation of the address isn't the
> most useful one for most applications.

I suppose we could have stored it as a Python long integer, but storing it as a string was probably inspired by SWIG.

-Travis
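To make the inconvenience Albert points at concrete: a consumer has to parse the hex string back into an integer before it can do anything pointer-like with it. Below is a minimal consumer sketch, assuming a build where __array_data__ still returns the ('0x...', read_only) tuple shown above; it is illustrative only, not any package's actual code.

    import ctypes
    import numpy as N

    x = N.array([1.0, 2.0, 3.0])

    # __array_data__ is assumed to return (hex_address_string, read_only_flag),
    # as in the interpreter session quoted above.
    addr_str, read_only = x.__array_data__

    # The string has to be parsed back into an integer before it is usable;
    # int(..., 16) handles the '0x' prefix.
    addr = int(addr_str, 16)

    # Only now can the address be turned into a typed pointer.
    buf = ctypes.cast(ctypes.c_void_p(addr), ctypes.POINTER(ctypes.c_double))
    print buf[0]   # first element of the array's data

Had the tuple stored a plain integer, the int(addr_str, 16) step would disappear, which is essentially the complaint.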
From: Sasha <nd...@ma...> - 2006-06-09 16:53:23
On 6/9/06, Tim Hochberg <tim...@co...> wrote:
> Shouldn't pure python implementations just provide __array__?

You cannot implement __array__ without importing numpy.
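Sasha's point is that __array__ must construct and return an ndarray, whereas the dict-style interface can be served with plain Python values. A minimal sketch of a stdlib-only provider, where the Vector class, the version-3 dict layout, and the little-endian '<f8' typestr are all illustrative assumptions rather than anything fixed by this thread:

    import array  # stdlib only -- no numpy import needed on the provider side

    class Vector(object):
        """A pure-Python array-like object exposing the array protocol."""
        def __init__(self, values):
            self._data = array.array('d', values)

        def _get_interface(self):
            ptr, length = self._data.buffer_info()
            return {
                'version': 3,
                'shape': (length,),
                'typestr': '<f8',      # little-endian float64; assumes a little-endian host
                'data': (ptr, False),  # (address, read-only flag)
            }
        __array_interface__ = property(_get_interface)

    # Only the *consumer* needs numpy, and it can wrap without copying:
    #   import numpy as N
    #   a = N.asarray(Vector([1.0, 2.0, 3.0]))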
From: Sasha <nd...@ma...> - 2006-06-09 16:50:19
On 6/9/06, Tim Hochberg <tim...@co...> wrote:
> Sasha wrote:
> ...
>> My rule of thumb for choosing between an attribute and a method is
>> that attribute access should not create new objects.
>
> Conceptually at least, couldn't there be a single __array_interface__
> object associated with a given array? In that sense, it doesn't really
> feel like creating a new object.

In my view, conceptually, __array_interface__ creates an adaptor to the array-like object. What are the advantages of it being an attribute? It is never settable, so the most common advantage of packing get/set methods in a single attribute can be ruled out. Saving the typing of '()' cannot be taken seriously when the name contains a pair of double underscores :-). There was a similar issue discussed on the python-3000 mailing list with respect to the __hash__ method <http://mail.python.org/pipermail/python-3000/2006-April/000362.html>.

> ...
>> My problem with __array_struct__ returning either a tuple or a CObject
>> is that the array protocol should really provide both. CObject is
>> useless for interoperability at the python level and a tuple (or dict)
>> is inefficient at the C level. Thus a good array-like object should
>> really provide both __array_struct__ for use by C modules and
>> __array_tuple__ (or whatever) for use by python modules. On the other
>> hand, making both required attributes/methods will put an extra burden
>> on package writers. Moreover, a pure python implementation of an
>> array-like object will not be able to provide __array_struct__ at all.
>> One possible solution would be an array protocol metaclass that adds
>> __array_struct__ to a class with __array_tuple__ and __array_tuple__
>> to a class with __array_struct__ (yet another argument to make both
>> methods).
>
> I don't understand this. I don't see how bringing in a metaclass is
> going to help a pure python type provide a sensible __array_struct__.
> That seems like a hopeless task. Shouldn't pure python implementations
> just provide __array__?

My metaclass idea is very similar to your unpack_interface suggestion. A metaclass can automatically add

    def __array_tuple__(self):
        return unpack_interface(self.__array_interface__())

or

    def __array_interface__(self):
        return pack_interface(self.__array_tuple__())

to a class that implements only one of the two required methods (see the sketch after this message).

> A single attribute seems pretty appealing to me; I don't see much use
> for anything else.

I don't mind just having __array_struct__ that must return a CObject. My main objection was against a method/attribute that may return either a CObject or something else. That felt like shifting the burden from the package writer to the package user.
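For readers trying to picture the metaclass Sasha describes, here is a minimal sketch. The pack_interface/unpack_interface helpers are hypothetical (the thread never pins down their contents, so they are stubbed), and the class name is invented for illustration:

    def unpack_interface(interface):
        # Hypothetical helper: turn the C-level description into plain
        # Python values. Deliberately left unimplemented here.
        raise NotImplementedError

    def pack_interface(tup):
        # Hypothetical inverse of unpack_interface.
        raise NotImplementedError

    class ArrayProtocolMeta(type):
        """Add whichever of the two protocol methods a class is missing."""
        def __init__(cls, name, bases, namespace):
            super(ArrayProtocolMeta, cls).__init__(name, bases, namespace)
            has_iface = '__array_interface__' in namespace
            has_tuple = '__array_tuple__' in namespace
            if has_iface and not has_tuple:
                def __array_tuple__(self):
                    return unpack_interface(self.__array_interface__())
                cls.__array_tuple__ = __array_tuple__
            elif has_tuple and not has_iface:
                def __array_interface__(self):
                    return pack_interface(self.__array_tuple__())
                cls.__array_interface__ = __array_interface__

    # Python 2 usage: a class declares '__metaclass__ = ArrayProtocolMeta'
    # and implements only one of the two methods; the other is synthesized.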
From: Robert K. <rob...@gm...> - 2006-06-09 16:31:05
Francesc Altet wrote:
> A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure:
>> Just out of curiosity:
>>
>> In [1]: x = N.array([])
>>
>> In [2]: x.__array_data__
>> Out[2]: ('0x01C23EE0', False)
>>
>> Is there a reason why the __array_data__ tuple stores the address as a hex
>> string? I would guess that this representation of the address isn't the
>> most useful one for most applications.
>
> Good point. I hit this before and forgot to send a message about this.
> I agree that an integer would be better. Although, now that I think about
> this, I suppose that the issue would be the difference in representation
> of longs on 32-bit and 64-bit platforms, isn't it?

Like how Win64 uses 32-bit longs and 64-bit pointers. And then there's signedness. Please don't use Python ints to encode pointers. Holding arbitrary pointers is the job of CObjects.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
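Robert's Win64 point can be checked from Python itself with ctypes; the short sketch below only prints the relevant sizes. On Win64 one would expect 4 for a C long versus 8 for a pointer, while on most 32-bit platforms both are 4 -- which is exactly why a Python 2.x int (a C long underneath) cannot safely hold a pointer everywhere.

    import ctypes

    # Size of a C long -- the type a Python 2.x int wraps.
    long_size = ctypes.sizeof(ctypes.c_long)
    # Size of a raw pointer.
    ptr_size = ctypes.sizeof(ctypes.c_void_p)

    # If ptr_size > long_size (as on Win64), addresses can't round-trip
    # through a plain int without truncation.
    print long_size, ptr_size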
From: Tim H. <tim...@co...> - 2006-06-09 16:06:31
Sasha wrote:
> On 6/8/06, David M. Cooke <co...@ph...> wrote:
>> ...
>> +0 for name change; I'm happy with it as an attribute.
>
> My rule of thumb for choosing between an attribute and a method is
> that attribute access should not create new objects.

Conceptually at least, couldn't there be a single __array_interface__ object associated with a given array? In that sense, it doesn't really feel like creating a new object.

> In addition, to me __array_interface__ feels like a generalization of
> the __array__ method, so I personally expected it to be a method the
> first time I tried to use it.
>
>> ...
>> The idea behind the array interface was to have 0 external dependencies:
>> any array-like object from any package could add the interface, without
>> requiring a 3rd-party module. That's why the C version uses a CObject.
>> Subclasses of CObject start getting into 3rd-party requirements.
>
> Not necessarily. Different packages don't need to share the subclass,
> but subclassing CObject is probably a bad idea for the reasons I will
> explain below.
>
>> How about a dict instead of a tuple? With keys matching the attributes
>> it's replacing: "shapes", "typestr", "descr", "data", "strides", "mask",
>> and "offset". The problem with a tuple from my point of view is I can
>> never remember which order things go in (this is why in the standard
>> library the results of os.stat() and time.localtime() are now
>> "tuple-like" classes with attributes).
>
> My problem with __array_struct__ returning either a tuple or a CObject
> is that the array protocol should really provide both. CObject is
> useless for interoperability at the python level and a tuple (or dict)
> is inefficient at the C level. Thus a good array-like object should
> really provide both __array_struct__ for use by C modules and
> __array_tuple__ (or whatever) for use by python modules. On the other
> hand, making both required attributes/methods will put an extra burden
> on package writers. Moreover, a pure python implementation of an
> array-like object will not be able to provide __array_struct__ at all.
> One possible solution would be an array protocol metaclass that adds
> __array_struct__ to a class with __array_tuple__ and __array_tuple__
> to a class with __array_struct__ (yet another argument to make both
> methods).

I don't understand this. I don't see how bringing in a metaclass is going to help a pure python type provide a sensible __array_struct__. That seems like a hopeless task. Shouldn't pure python implementations just provide __array__?

A single attribute seems pretty appealing to me; I don't see much use for anything else.

>> We still need __array_descr__, as the C struct doesn't provide all the
>> info that this does.
>
> What do you have in mind?

Is there any prospect of merging this data into the C struct? It would be cleaner if all of the information could be embedded into the C struct, but I can see how that might be a backward-compatibility nightmare.

-tim
From: Francesc A. <fa...@ca...> - 2006-06-09 10:06:53
A Divendres 09 Juny 2006 11:54, Albert Strasheim va escriure:
> Just out of curiosity:
>
> In [1]: x = N.array([])
>
> In [2]: x.__array_data__
> Out[2]: ('0x01C23EE0', False)
>
> Is there a reason why the __array_data__ tuple stores the address as a hex
> string? I would guess that this representation of the address isn't the
> most useful one for most applications.

Good point. I hit this before and forgot to send a message about this. I agree that an integer would be better. Although, now that I think about this, I suppose that the issue would be the difference in representation of longs on 32-bit and 64-bit platforms, isn't it?

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V V     Cárabos Coop. V.   Enjoy Data
 "-"
From: Albert S. <fu...@gm...> - 2006-06-09 10:02:59
Hello all

For my Summer of Code project, I'm adding Support Vector Machine code to SciPy. Underneath, I'm currently using libsvm. Thus far, I've been compiling libsvm as a shared library (a DLL on Windows) using SCons and doing the wrapping with ctypes (roughly as in the sketch after this message).

Now, I would like to integrate my code into the SciPy build. Unfortunately, it doesn't seem as if numpy.distutils or distutils proper knows about building shared libraries. Building shared libraries across multiple platforms is tricky, to say the least, so I don't know if implementing this functionality again is something worth doing. The alternative -- never using shared libraries -- doesn't seem very appealing either.

Is anybody building shared libraries? Any code or comments?

Regards,

Albert
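For readers following along, the ctypes half of a setup like Albert's looks roughly like the sketch below. The library name resolution and the svm_get_svm_type declaration are assumptions for illustration, not SciPy's actual wrapper code:

    import ctypes
    import ctypes.util

    # Locate the shared library in a platform-neutral way; fall back to a
    # plain name that the loader resolves via LD_LIBRARY_PATH / PATH.
    path = ctypes.util.find_library('svm') or 'libsvm.so'
    libsvm = ctypes.CDLL(path)

    # Declare the return type of one entry point (the real libsvm API
    # mostly passes structs around; this only shows the flavour of it).
    libsvm.svm_get_svm_type.restype = ctypes.c_int

The catch Albert describes is the step before this: ctypes.CDLL needs a .so/.dll to exist, and distutils only knows how to build Python extension modules, not plain shared libraries.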
From: Albert S. <fu...@gm...> - 2006-06-09 09:54:29
Hello all

> -----Original Message-----
> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Travis Oliphant
> Sent: 08 June 2006 22:27
> To: numpy-discussion
> Subject: [Numpy-discussion] Array Protocol change for Python 2.6
>
> ...
>
> I would like to eliminate all the other array protocol attributes before
> NumPy 1.0 (and re-label those such as __array_data__ that are useful in
> other contexts --- like ctypes).

Just out of curiosity:

In [1]: x = N.array([])

In [2]: x.__array_data__
Out[2]: ('0x01C23EE0', False)

Is there a reason why the __array_data__ tuple stores the address as a hex string? I would guess that this representation of the address isn't the most useful one for most applications.

Regards,

Albert
From: Stephan T. <st...@si...> - 2006-06-09 08:06:35
> ====================================
> atlas_info:
> ( library_dirs = /usr/local/lib:/usr/lib )
> ( paths: /usr/lib/atlas,/usr/lib/sse2 )
> looking libraries f77blas,cblas,atlas in /usr/local/lib but found None
> looking libraries f77blas,cblas,atlas in /usr/local/lib but found None
> (... more of these ...)

Some of these and similar spurious warnings can be eliminated by replacing the calls to check_libs in system_info.py with calls to check_libs2. Currently these warnings are generated for each file extension that is tested (".so", ".a", ...). Alternatively, the warnings could be made more informative. Many of the other warnings could be eliminated by consolidating the various BLAS/LAPACK options.

If anyone is manipulating the build system, could he please apply the patch from #114 fixing the Windows build?

> I tried to fix it, but the call sequence in that code is convoluted
> enough that after a few 'import traceback;traceback.print_stack()'
> tries I sort of gave up. That code is rather (how can I say this
> nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I
> won't be able to contribute a cleanup here.

Even if you spent enough time to understand the existing code, you probably wouldn't have a chance to clean it up, because any small change could break some obscure platform/compiler/library combination. Moreover, changes could break the builds of scipy and other libraries depending on numpy.distutils. If you really wanted to rewrite the build code, you'd need to specify a minimum set of supported platform and library combinations, have each of them available for testing, and deliberately risk breaking any other platform.

Regards,
Stephan
From: David M. C. <co...@ph...> - 2006-06-09 08:02:48
On Thu, Jun 08, 2006 at 11:28:04PM -0600, Fernando Perez wrote:
> Hi all,
>
> the following warning about strict-prototypes in weave drives me crazy:
>
> longs[~]> python wbuild.py
> <weave: compiling>
> cc1plus: warning: command line option "-Wstrict-prototypes" is valid
> for Ada/C/ObjC but not for C++
>
> since I use weave on auto-generated code, I get it lots of times and I
> find spurious warnings to be very distracting.
>
> Anyone object to this patch against current numpy SVN to get rid of
> this thing? (tracking where the hell that thing was coming from was
> all kinds of fun)

Go ahead. I'm against random messages being printed out anyways -- I'd get rid of the '<weave: compiling>' too. There's a bunch of code in scipy with 'print' statements that I don't think belong in a library. (Now, if we defined a logging framework, that'd be OK with me!)

--
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/    co...@ph...
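A minimal sketch of the kind of logging setup David alludes to, using the stdlib logging module (already available in this era). The logger name 'scipy.weave' and the compile_notice function are assumptions for illustration:

    import logging

    # Library code: emit through a named logger instead of print.
    log = logging.getLogger('scipy.weave')

    def compile_notice():
        # Replaces an unconditional "print '<weave: compiling>'".
        log.info('weave: compiling')

    # The application, not the library, decides what is shown:
    if __name__ == '__main__':
        logging.basicConfig(level=logging.WARNING)  # silences the INFO message
        compile_notice()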
From: Fernando P. <fpe...@gm...> - 2006-06-09 07:08:34
Hi all,

the following warning about strict-prototypes in weave drives me crazy:

longs[~]> python wbuild.py
<weave: compiling>
cc1plus: warning: command line option "-Wstrict-prototypes" is valid
for Ada/C/ObjC but not for C++

since I use weave on auto-generated code, I get it lots of times and I find spurious warnings to be very distracting.

Anyone object to this patch against current numpy SVN to get rid of this thing? (tracking where the hell that thing was coming from was all kinds of fun)

Index: ccompiler.py
===================================================================
--- ccompiler.py        (revision 2588)
+++ ccompiler.py        (working copy)
@@ -191,6 +191,19 @@
         log.info('customize %s' % (self.__class__.__name__))
         customize_compiler(self)
         if need_cxx:
+            # In general, distutils uses -Wstrict-prototypes, but this option is
+            # not valid for C++ code, only for C. Remove it if it's there to
+            # avoid a spurious warning on every compilation. All the default
+            # options used by distutils can be extracted with:
+
+            # from distutils import sysconfig
+            # sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS',
+            #                           'CCSHARED', 'LDSHARED', 'SO')
+            try:
+                self.compiler_so.remove('-Wstrict-prototypes')
+            except ValueError:
+                pass
+
             if hasattr(self,'compiler') and self.compiler[0].find('gcc')>=0:
                 if sys.version[:3]>='2.3':
                     if not self.compiler_cxx:

### EOF

Cheers,

f
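As the patch's comment suggests, the distutils defaults can be inspected directly to see where the offending flag comes from; a quick check might look like this:

    from distutils import sysconfig

    # The compiler command lines distutils will use, as configured when
    # Python itself was built.
    cc, cxx, opt, base = sysconfig.get_config_vars('CC', 'CXX', 'OPT',
                                                   'BASECFLAGS')

    # On builds affected by this warning, -Wstrict-prototypes shows up in
    # one of these C-oriented variables and then leaks into C++ compiles.
    for name, value in [('CC', cc), ('CXX', cxx),
                        ('OPT', opt), ('BASECFLAGS', base)]:
        print name, '->', value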
From: Sasha <nd...@ma...> - 2006-06-09 02:52:55
On 6/8/06, David M. Cooke <co...@ph...> wrote:
> ...
> +0 for name change; I'm happy with it as an attribute.

My rule of thumb for choosing between an attribute and a method is that attribute access should not create new objects. In addition, to me __array_interface__ feels like a generalization of the __array__ method, so I personally expected it to be a method the first time I tried to use it.

> ...
> The idea behind the array interface was to have 0 external dependencies:
> any array-like object from any package could add the interface, without
> requiring a 3rd-party module. That's why the C version uses a CObject.
> Subclasses of CObject start getting into 3rd-party requirements.

Not necessarily. Different packages don't need to share the subclass, but subclassing CObject is probably a bad idea for the reasons I will explain below.

> How about a dict instead of a tuple? With keys matching the attributes
> it's replacing: "shapes", "typestr", "descr", "data", "strides", "mask",
> and "offset". The problem with a tuple from my point of view is I can
> never remember which order things go in (this is why in the standard
> library the results of os.stat() and time.localtime() are now
> "tuple-like" classes with attributes).

My problem with __array_struct__ returning either a tuple or a CObject is that the array protocol should really provide both. CObject is useless for interoperability at the python level, and a tuple (or dict) is inefficient at the C level. Thus a good array-like object should really provide both __array_struct__ for use by C modules and __array_tuple__ (or whatever) for use by python modules. On the other hand, making both required attributes/methods will put an extra burden on package writers. Moreover, a pure python implementation of an array-like object will not be able to provide __array_struct__ at all. One possible solution would be an array protocol metaclass that adds __array_struct__ to a class with __array_tuple__ and __array_tuple__ to a class with __array_struct__ (yet another argument to make both methods).

> We still need __array_descr__, as the C struct doesn't provide all the
> info that this does.

What do you have in mind?
From: Fernando P. <fpe...@gm...> - 2006-06-09 02:26:01
On 6/8/06, Simon Burton <si...@ar...> wrote:
> On Thu, 8 Jun 2006 16:48:27 -0600
> "Fernando Perez" <fpe...@gm...> wrote:
>
>> In summary, I don't really know if this is actually finding what it
>> wants or not, given the two messages.
>
> I just went through this on debian sarge which is similar.
>
> I put this in site.cfg:
>
> [atlas]
> library_dirs = /usr/lib/atlas/
> atlas_libs = lapack, blas
>
> Then I needed to set LD_LIBRARY_PATH to point to /usr/lib/atlas/sse2.
[...]
> But to really test this is working I ran python under gdb and set
> a break point on cblas_dgemm. Then a call to numpy.dot should
> break inside the sse2/liblapack.so.3.0.
>
> (also it's a lot faster with the sse2 dgemm)
>
> $ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 gdb python2.4

OK, thanks a LOT for that gdb trick: it provides a very nice way to understand what's actually going on.

self.note("really, learn better use of gdb")

Using that, though, it would then seem as if the build DID successfully find everything without any further action on my part:

longs[dist]> gdb python
GNU gdb 6.4-debian
...
(gdb) break cblas_dgemm
Function "cblas_dgemm" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (cblas_dgemm) pending.
(gdb) run
Starting program: /usr/bin/python
...
Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(no debugging symbols found)
>>> import numpy
Breakpoint 2 at 0x40429860
Pending breakpoint "cblas_dgemm" resolved
>>> a=numpy.empty((1024,1024),'d')
>>> b=numpy.empty((1024,1024),'d')
>>> numpy.dot(a,b)
[Switching to Thread 1075428416 (LWP 3919)]
Breakpoint 2, 0x40429860 in cblas_dgemm () from /usr/lib/sse2/libcblas.so.3

======================================================

Note that on my system, LD_LIBRARY_PATH does NOT contain that dir:

longs[dist]> env | grep LD_LIB
LD_LIBRARY_PATH=/usr/local/lf9560/lib:/usr/local/intel/mkl/8.0.2/lib/32:/usr/local/intel/compiler90/lib:/home/fperez/usr/lib:/home/fperez/usr/local/lib:

and I built everything with a plain

setup.py install --prefix=~/tmp/local

without /any/ tweaks to site.cfg, no LD_LIBRARY_PATH modifications or anything else. I just installed atlas-sse2* and lapack3*, but NOT refblas3*. Basically it seems that the build process does the right thing out of the box, and the warning is spurious.

Since I was being extra-careful in this build, I didn't want to let any warning of that kind go unchecked. It might still be worth fixing that warning to prevent others from going on a similar wild goose chase, but I'm not comfortable touching that code (I don't know if anyone besides Pearu is).

Thanks for the help!

Cheers,

f
From: Simon B. <si...@ar...> - 2006-06-09 02:09:25
On Thu, 8 Jun 2006 16:48:27 -0600
"Fernando Perez" <fpe...@gm...> wrote:

> In summary, I don't really know if this is actually finding what it
> wants or not, given the two messages.

I just went through this on debian sarge, which is similar.

I put this in site.cfg:

[atlas]
library_dirs = /usr/lib/atlas/
atlas_libs = lapack, blas

Then I needed to set LD_LIBRARY_PATH to point to /usr/lib/atlas/sse2.

$ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 python2.4
Python 2.4.3 (#4, Jun 5 2006, 19:07:06)
[GCC 3.4.1 (Debian 3.4.1-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>>
[1]+  Stopped   env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 python2.4

Look in /proc/PID/maps for the relevant libs:

$ ps -a | grep python
...
16953 pts/64   00:00:00 python2.4
$ grep atlas /proc/16953/maps
b6fa7000-b750e000 r-xp 00000000 00:0c 1185402   /usr/lib/atlas/sse2/libblas.so.3.0
b750e000-b7513000 rwxp 00567000 00:0c 1185402   /usr/lib/atlas/sse2/libblas.so.3.0
b7513000-b7a58000 r-xp 00000000 00:0c 1185401   /usr/lib/atlas/sse2/liblapack.so.3.0
b7a58000-b7a5b000 rwxp 00545000 00:0c 1185401   /usr/lib/atlas/sse2/liblapack.so.3.0
$

But to really test that this is working, I ran python under gdb and set a breakpoint on cblas_dgemm. Then a call to numpy.dot should break inside the sse2/liblapack.so.3.0.

(also, it's a lot faster with the sse2 dgemm)

$ env LD_LIBRARY_PATH=/usr/lib/atlas/sse2 gdb python2.4
GNU gdb 6.1-debian
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-linux"...Using host libthread_db library
"/lib/tls/libthread_db.so.1".
(gdb) break cblas_dgemm
Function "cblas_dgemm" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (cblas_dgemm) pending.
(gdb) run
Starting program: /home/users/simonb/bin/python2.4
[Thread debugging using libthread_db enabled]
[New Thread -1210476000 (LWP 17557)]
Python 2.4.3 (#4, Jun 5 2006, 19:07:06)
[GCC 3.4.1 (Debian 3.4.1-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Breakpoint 2 at 0xb7549db0
Pending breakpoint "cblas_dgemm" resolved    <------- import numpy is in my pythonstartup
>>> a=numpy.empty((1024,1024),'d')
>>> b=numpy.empty((1024,1024),'d')
>>> numpy.dot(a,b)
[Switching to Thread -1210476000 (LWP 17557)]
Breakpoint 2, 0xb7549db0 in cblas_dgemm () from /usr/lib/atlas/sse2/liblapack.so.3
(gdb)

bingo.

Simon.

--
Simon Burton, B.Sc.
Licensed PO Box 8066 ANU Canberra 2601 Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
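Simon's /proc/PID/maps check can also be done from inside the interpreter being tested, which avoids the separate ps/grep dance. A minimal Linux-only sketch (it assumes a /proc filesystem, so it won't work on OS X or Windows):

    import os

    # List the shared libraries mapped into this very process that mention
    # 'atlas' or 'blas', to confirm which ones the loader actually picked.
    maps_path = '/proc/%d/maps' % os.getpid()
    seen = {}
    for line in open(maps_path):
        parts = line.split()
        if parts and '/' in parts[-1]:
            lib = parts[-1]
            if ('atlas' in lib or 'blas' in lib) and lib not in seen:
                seen[lib] = True
                print lib

Run this after "import numpy; numpy.dot(a, b)" so the BLAS libraries have actually been loaded and used.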
From: Fernando P. <fpe...@gm...> - 2006-06-09 01:39:46
On 6/8/06, Dan Christensen <jd...@uw...> wrote:
> I don't know if it's related, but I've found on my Debian system that
> whenever I want to compile something that uses the atlas library, I
> need to put -L/usr/lib/sse2 on the gcc line, even though everything
> seems to indicate that the linker has been told to look there already.
> It could be that Ubuntu has a similar issue, and that it is affecting
> your build.

mmh, given how green I am in the ubuntu world, you may well be right. But my original question came before any linking happens, since I was just posting the messages from numpy's system_info, which doesn't attempt to link anything; it just does a static filesystem analysis. So perhaps there is more than one issue here.

I'm just trying to clarify, from the given messages (which I found a bit confusing), whether all the atlas/sse2 stuff is actually being picked up or not, at least as far as numpy thinks it is.

Cheers,

f
From: Dan C. <jd...@uw...> - 2006-06-09 01:23:22
I don't know if it's related, but I've found on my Debian system that whenever I want to compile something that uses the atlas library, I need to put -L/usr/lib/sse2 on the gcc line, even though everything seems to indicate that the linker has been told to look there already. It could be that Ubuntu has a similar issue, and that it is affecting your build.

Dan
From: <lis...@ma...> - 2006-06-08 23:44:11
Because of complaints of linking errors from some OS X users, I am trying to build and distribute statically-linked versions. To do this, I have taken the important libraries (e.g. freetype, libg2c), put them in a directory called staticlibs, then built numpy by:

python setup.py build_clib build_ext -L../staticlibs build bdist_mpkg

It builds, installs and runs fine. However, when I go to build and run f2py extensions, I now get the following (from my PyMC code):

/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/MCMC.py
     37     _randint = random.randint
     38     rexponential = random.exponential
---> 39     from flib import categor as _categorical
            global flib = undefined
            global categor = undefined
            global as = undefined
            _categorical = undefined
     40     from flib import rcat as rcategorical
     41     from flib import binomial as _binomial

ImportError: Loaded module does not contain symbol _initflib

Here, flib is the f2py extension that is built in the PyMC setup file according to:

from numpy.distutils.core import setup, Extension

flib = Extension(name='PyMC.flib', sources=['PyMC/flib.f'])

version = "1.0"

distrib = setup(
    version=version,
    author="Chris Fonnesbeck",
    author_email="fon...@ma...",
    description="Version %s of PyMC" % version,
    license="Academic Free License",
    name="PyMC",
    url="pymc.sourceforge.net",
    packages=["PyMC"],
    ext_modules=[flib],
)

This worked fine before my attempts to statically link numpy. Any ideas regarding a solution?

Thanks,
Chris

--
Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com
+ Contact me on AOL IM using email address
From: Darren D. <dd...@co...> - 2006-06-08 23:26:05
On Thursday 01 June 2006 12:46, Robert Kern wrote:
> Nadav Horesh wrote:
>> I recently upgraded to gcc 4.1.1. When I tried to compile scipy from
>> today's svn repository it halts with the following message:
>>
>> Traceback (most recent call last):
>>   File "setup.py", line 50, in ?
>>     setup_package()
>>   File "setup.py", line 42, in setup_package
>>     configuration=configuration )
>>   File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 170, in setup
>>     return old_setup(**new_attr)
>>   File "/usr/lib/python2.4/distutils/core.py", line 149, in setup
>>     dist.run_commands()
>>   File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands
>>     self.run_command(cmd)
>>   File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
>>     cmd_obj.run()
>>   File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run
>>     self.run_command(cmd_name)
>>   File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command
>>     self.distribution.run_command(command)
>>   File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command
>>     cmd_obj.run()
>>   File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run
>>     self.build_extensions()
>>   File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions
>>     self.build_extension(ext)
>>   File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension
>>     link = self.fcompiler.link_shared_object
>> AttributeError: 'NoneType' object has no attribute 'link_shared_object'
>>
>> ----
>>
>> The output of gfortran --version:
>>
>> GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)
>
> Hmm. The usual suspect (not finding the version) doesn't seem to be the
> problem here.
>
> >>> from numpy.distutils.ccompiler import simple_version_match
> >>> m = simple_version_match(start='GNU Fortran 95')
> >>> m(None, 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)')
> '4.1.1'
>
>> I also have the old g77 compiler installed (g77-3.4.6). Is there a way
>> to force numpy/scipy to use it?
>
> Sure.
>
> python setup.py config_fc --fcompiler=gnu build_src build_clib build_ext build

I am able to build numpy/scipy on a 64-bit Athlon with gentoo and gcc-4.1.1. I get one error with scipy 0.5.0.1940:

==============================================
FAIL: check_random_complex_overdet (scipy.linalg.tests.test_basic.test_lstsq)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 413, in check_random_complex_overdet
    assert_array_almost_equal(x,direct_lstsq(a,b),3)
  File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 233, in assert_array_almost_equal
    assert cond,\
AssertionError:
Arrays are not almost equal (mismatch 77.7777777778%):
        Array 1: [[-0.0137+0.0173j  0.0037-0.0173j -0.0114+0.0119j]
                  [ 0.0029-0.0356j  0.0086-0.034j   0.033 -0.0879j]
                  [ 0.0041-0.0097j ...
        Array 2: [[-0.016 +0.0162j  0.003 -0.0171j -0.0148+0.009j ]
                  [-0.0017-0.0405j  0.003 -0.036j   0.0256-0.0977j]
                  [ 0.0038-0.0112j ...
----------------------------------------------------------------------

Also, there may be a minor bug in numpy/distutils. I get error messages during the build:

customize GnuFCompiler
Couldn't match compiler version for 'GNU Fortran 95 (GCC) 4.1.1 (Gentoo 4.1.1)\nCopyright (C) 2006 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n'
customize CompaqFCompiler
customize IntelItaniumFCompiler
customize IntelEM64TFCompiler
customize Gnu95FCompiler
customize Gnu95FCompiler

This error message is returned because the fc_exe executable defined in GnuFCompiler returns a successful exit status to GnuFCompiler.get_version, but GnuFCompiler explicitly forbids identifying Fortran 95. I only bring it up because the build yields an error message that might confuse people.

Darren
From: Fernando P. <fpe...@gm...> - 2006-06-08 23:12:01
On 6/8/06, David M. Cooke <co...@ph...> wrote:
> Agree; I'll look at it.

Many thanks. I'm sorry not to help, but I have a really big fish to fry right now and can't commit to the diversion this would mean.

Cheers,

f
From: David M. C. <co...@ph...> - 2006-06-08 23:06:48
On Thu, 8 Jun 2006 16:48:27 -0600
"Fernando Perez" <fpe...@gm...> wrote:

[snip]

> I tried to fix it, but the call sequence in that code is convoluted
> enough that after a few 'import traceback;traceback.print_stack()'
> tries I sort of gave up. That code is rather (how can I say this
> nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I
> won't be able to contribute a cleanup here.

I think the whole numpy.distutils could use a good cleanup ...

> I think this tool should run by default in a mode with NO attempt to
> fire a command-line subsystem of its own, so users can simply run
>
> python /path/to/system_info > system_info.log
>
> for further analysis.

Agree; I'll look at it.

--
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/    co...@ph...
From: Fernando P. <fpe...@gm...> - 2006-06-08 22:55:24
Hi all,

I'm starting the transition of a large code from Numeric to numpy, so I am now doing a fresh build with a lot more care than before, actually reading all the intermediate messages. I am a bit puzzled and could use some help.

This is all on an ubuntu dapper box with the atlas-sse2 packages (and everything else recommended installed). By running, as suggested in the scipy readme:

python ~/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py

I get the following message at some point:

====================================
atlas_info:
( library_dirs = /usr/local/lib:/usr/lib )
( paths: /usr/lib/atlas,/usr/lib/sse2 )
looking libraries f77blas,cblas,atlas in /usr/local/lib but found None
looking libraries f77blas,cblas,atlas in /usr/local/lib but found None
looking libraries lapack_atlas in /usr/local/lib but found None
looking libraries lapack_atlas in /usr/local/lib but found None
looking libraries f77blas,cblas,atlas in /usr/lib/atlas but found None
looking libraries f77blas,cblas,atlas in /usr/lib/atlas but found None
looking libraries lapack_atlas in /usr/lib/atlas but found None
looking libraries lapack_atlas in /usr/lib/atlas but found None
( paths: /usr/lib/sse2/libf77blas.so )
( paths: /usr/lib/sse2/libcblas.so )
( paths: /usr/lib/sse2/libatlas.so )
( paths: /usr/lib/sse2/liblapack_atlas.so )
looking libraries lapack in /usr/lib/sse2 but found None
looking libraries lapack in /usr/lib/sse2 but found None
looking libraries f77blas,cblas,atlas in /usr/lib but found None
looking libraries f77blas,cblas,atlas in /usr/lib but found None
looking libraries lapack_atlas in /usr/lib but found None
looking libraries lapack_atlas in /usr/lib but found None
system_info.atlas_info
( include_dirs = /usr/local/include:/usr/include )
( paths: /usr/include/atlas_misc.h,/usr/include/atlas_enum.h,/usr/include/atlas_aux.h,/usr/include/atlas_type.h )
/usr/local/installers/src/scipy/numpy/numpy/distutils/system_info.py:870: UserWarning:
*********************************************************************
    Could not find lapack library within the ATLAS installation.
*********************************************************************
  warnings.warn(message)
( library_dirs = /usr/local/lib:/usr/lib )
( paths: /usr/lib/atlas,/usr/lib/sse2 )
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2']
    language = c
    define_macros = [('ATLAS_WITHOUT_LAPACK', None)]
====================================

What I find very puzzling here is that later on, the following goes by:

lapack_atlas_info:
( library_dirs = /usr/local/lib:/usr/lib )
( paths: /usr/lib/atlas,/usr/lib/sse2 )
looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/local/lib but found None
looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/local/lib but found None
looking libraries lapack_atlas in /usr/local/lib but found None
looking libraries lapack_atlas in /usr/local/lib but found None
looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib/atlas but found None
looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib/atlas but found None
looking libraries lapack_atlas in /usr/lib/atlas but found None
looking libraries lapack_atlas in /usr/lib/atlas but found None
( paths: /usr/lib/sse2/liblapack_atlas.so )
( paths: /usr/lib/sse2/libf77blas.so )
( paths: /usr/lib/sse2/libcblas.so )
( paths: /usr/lib/sse2/libatlas.so )
( paths: /usr/lib/sse2/liblapack_atlas.so )
looking libraries lapack in /usr/lib/sse2 but found None
looking libraries lapack in /usr/lib/sse2 but found None
looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib but found None
looking libraries lapack_atlas,f77blas,cblas,atlas in /usr/lib but found None
looking libraries lapack_atlas in /usr/lib but found None
looking libraries lapack_atlas in /usr/lib but found None
system_info.lapack_atlas_info
( include_dirs = /usr/local/include:/usr/include )
( paths: /usr/include/atlas_misc.h,/usr/include/atlas_enum.h,/usr/include/atlas_aux.h,/usr/include/atlas_type.h )
( library_dirs = /usr/local/lib:/usr/lib )
( paths: /usr/lib/atlas,/usr/lib/sse2 )
  FOUND:
    libraries = ['lapack_atlas', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2']
    language = c
    define_macros = [('ATLAS_WITH_LAPACK_ATLAS', None)]
==============================================

Does the second mean that it /is/ finding the right libraries? Since the first search in atlas_info is also printing

( paths: /usr/lib/sse2/liblapack_atlas.so )

I don't quite understand why it then reports the warning. For reference, here's the content of the relevant directories on my system:

==============================================
longs[sse2]> ls /usr/lib/sse2
libatlas.a       libcblas.a       libf77blas.a       liblapack_atlas.a
libatlas.so@     libcblas.so@     libf77blas.so@     liblapack_atlas.so@
libatlas.so.3@   libcblas.so.3@   libf77blas.so.3@   liblapack_atlas.so.3@
libatlas.so.3.0  libcblas.so.3.0  libf77blas.so.3.0  liblapack_atlas.so.3.0

longs[sse2]> ls /usr/lib/atlas/sse2/
libblas.a     libblas.so.3@   liblapack.a     liblapack.so.3@
libblas.so@   libblas.so.3.0  liblapack.so@   liblapack.so.3.0
==============================================

In summary, I don't really know if this is actually finding what it wants or not, given the two messages.

Cheers,

f

ps - it's worth mentioning that the sequence

python ~/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py

gets itself into a nasty recursion where it fires the interactive session 3 times in a row. And in doing so, it splits its own output in a funny way:

[...]
blas_opt_info:
========================================================================
Starting interactive session
------------------------------------------------------------------------
Tasks:
    i         - Show python/platform/machine information
    ie        - Show environment information
    c         - Show C compilers information
    c<name>   - Set C compiler (current:None)
    f         - Show Fortran compilers information
    f<name>   - Set Fortran compiler (current:None)
    e         - Edit proposed sys.argv[1:].
Task aliases:
    0         - Configure
    1         - Build
    2         - Install
    2<prefix> - Install with prefix.
    3         - Inplace build
    4         - Source distribution
    5         - Binary distribution
Proposed sys.argv = ['/home/fperez/tmp/local/lib/python2.4/site-packages/numpy/distutils/system_info.py']
Choose a task (^D to quit, Enter to continue with setup):
##### msg: ( library_dirs = /usr/local/lib:/usr/lib )
  FOUND:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2']
    language = c
    define_macros = [('NO_ATLAS_INFO', 2)]
=================

I tried to fix it, but the call sequence in that code is convoluted enough that after a few 'import traceback;traceback.print_stack()' tries I sort of gave up. That code is rather (how can I say this nicely) pasta-like :), and thoroughly uncommented, so I'm afraid I won't be able to contribute a cleanup here.

I think this tool should run by default in a mode with NO attempt to fire a command-line subsystem of its own, so users can simply run

python /path/to/system_info > system_info.log

for further analysis.
From: Travis O. <oli...@ee...> - 2006-06-08 22:23:01
Thanks for the continuing discussion on the array interface.

I'm thinking about this right now, because I just spent several hours trying to figure out if it is possible to add additional "object-behavior" pointers to a type by creating a metatype that sub-types from the Python PyType_Type (this is the object that has all the function pointers to implement mapping behavior, buffer behavior, etc.). I found some emails from 2002 where Guido indicates that it is not possible to sub-type the PyType_Type object and add new function pointers at the end without major re-writing of Python. The suggested mechanism is to add a CObject to the tp_dict of the type object itself. As far as I can tell, this is equivalent to what we are doing with adding the array interface as an attribute look-up.

In trying to sell the array interface to the wider Python community (and get it into Python 2.6), we need to simplify the interface, though. I no longer think having all of these attributes off the object itself is a good idea (I think this is a case where flat *is not* better than nested).

It turns out that the __array_struct__ interface is the really important one (it's the one that numarray, NumPy, and Numeric are all using). So, one approach is to simply toss out support for the other part of the interface in NumPy and "let it die." Is this what the people opposing the use of the __array_struct__ attribute in a dualistic way are suggesting? Clearly some of the attributes will need to survive (like __array_descr__, which gives information that __array_struct__ doesn't even provide).

A big part of the push for multidimensional arrays in Python is the addition of the PyArray_Descr * object into Python (or something similar). This would allow a way to describe data in a generic way and could change the use of __array_descr__. But currently the __array_struct__ attribute approach does not support field descriptions, so __array_descr__ is the only way.

Please continue offering your suggestions...

-Travis
From: Andrew S. <str...@as...> - 2006-06-08 22:19:39
Andrew Straw wrote:
> I've put together some .debs for numpy-0.9.8. There are binaries
> compiled for amd64 and i386 architectures of Ubuntu Dapper, and I
> suspect these will build from source for just about any Debian-based
> distro and architecture.

As usually happens when I try to release packages in the middle of the night, the cold light of morning brings some glaring problems. The biggest one is that the .diff.gz that was generated wasn't showing the changes against numpy that I had to make. I'm surprised that my own tests with apt-get source showed that it still built from source. I've uploaded a new version, 0.9.8-0ads2 (note the 2 at the end). You can check your installed version by doing the following:

dpkg-query -l *numpy*

Anyhow, here's the debian/changelog for 0.9.8-0ads2:

  * Fixed .orig.tar.gz so that .diff.gz includes modifications made to source.
  * Relax build-depend on setuptools to work with any version.
  * Don't import setuptools in numpy.distutils.command.install unless it's
    already in sys.modules.

I would like to merge with the package in debian experimental by Jose Fonseca and Marco Presi, but their package uses a lot of makefile wizardry that bombs out on me without any apparently informative error message. (I will be the first to admit that I know very little about Makefiles.) On the other hand, the main advantages their package currently has are the installation of manpages for f2py, installation of the existing free documentation, and tweaks to script (f2py) permissions and naming. The last of these issues seems to be solved by the build-dependency on setuptools, which is smart about installing scripts with the right permissions and names (it appends "2.4" to the python2.4 version of f2py, and so on).

There have been a couple of offers of help from Ed and Ryan. I think in the long run, the best thing to do would be to invest these efforts in communicating with the debian developers and getting a more up-to-date version into their repository. (My repository will only ever be an unofficial one, with the primary purpose of serving our needs at work, which hopefully overlap substantially with usefulness to others.) This should have a trickle-down effect on the mainline Ubuntu repository, too. I doubt that the debian developers will want to start their python-numpy package from scratch, so I suggest trying to submit patches to their system. You can check out their source at svn://svn.debian.org/deb-scipy . Unfortunately, that's about the only guidance I can provide because, like I said above, I can't get their Makefile wizardry to work on a newer version of numpy.

Arnd, I would like to get to the bottom of these atlas issues myself, and I've followed a similar chain of logic as you. It's possible that the svd routine (dgesdd, IIRC) is somehow just a bad one to benchmark on. It is a real workhorse for me, so it's really the one that counts for me. I'll put together a few timeit routines that test svd() and dot() and do some more experimentation, although I can't promise when. Let's keep everyone informed of any progress we make.

Cheers!
Andrew
From: Gennan C. <gn...@co...> - 2006-06-08 22:18:46
Got you. BTW, I did manage to compile the ATLAS 3.7 version into a .a. Any chance I can use that? Or can only shared objects be used?

Gen

On Jun 8, 2006, at 3:11 PM, David M. Cooke wrote:
> On Thu, 8 Jun 2006 14:57:02 -0700 Gennan Chen <gn...@co...> wrote:
>
>> [... original report, with the full numpy.test() output, quoted in the
>> message below ...]
>>
>> Anyone has any idea? Or has anyone ever successfully done that?
>
> It's new; something's missing in the new power code I added for the
> scalartypes. It'll get fixed when I get around to it :-)
From: David M. C. <co...@ph...> - 2006-06-08 22:12:02
On Thu, 8 Jun 2006 14:57:02 -0700 Gennan Chen <gn...@co...> wrote:
> Hi!
>
> I just got a MacBook Pro and tried to install numpy+scipy on that.
> I successfully installed ipython+matplotlib+python 2.4 through darwinports.
> Then I svn co'd a copy of numpy+scipy. Compilation (gcc 4.0.1 + gfortran)
> seems to work fine for numpy. After I installed it and ran numpy.test()
> in ipython, it failed. The error is:
>
> In [4]: numpy.test()
> Found 3 tests for numpy.lib.getlimits
> Found 30 tests for numpy.core.numerictypes
> Found 13 tests for numpy.core.umath
> Found 3 tests for numpy.core.scalarmath
> Found 8 tests for numpy.lib.arraysetops
> Found 42 tests for numpy.lib.type_check
> Found 95 tests for numpy.core.multiarray
> Found 3 tests for numpy.dft.helper
> Found 36 tests for numpy.core.ma
> Found 2 tests for numpy.core.oldnumeric
> Found 9 tests for numpy.lib.twodim_base
> Found 9 tests for numpy.core.defmatrix
> Found 1 tests for numpy.lib.ufunclike
> Found 35 tests for numpy.lib.function_base
> Found 1 tests for numpy.lib.polynomial
> Found 6 tests for numpy.core.records
> Found 19 tests for numpy.core.numeric
> Found 5 tests for numpy.distutils.misc_util
> Found 4 tests for numpy.lib.index_tricks
> Found 46 tests for numpy.lib.shape_base
> Found 0 tests for __main__
> ..............................................F........................
> [... more dots ...]
> ======================================================================
> FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 42, in check_large_types
>     assert b == 6765201, "error with %r: got %r" % (t,b)
> AssertionError: error with <type 'float128scalar'>: got 0.0
>
> ----------------------------------------------------------------------
> Ran 370 tests in 0.510s
>
> FAILED (failures=1)
> Out[4]: <unittest.TextTestRunner object at 0x1581fd0>
>
> Anyone has any idea? Or has anyone ever successfully done that?

It's new; something's missing in the new power code I added for the scalartypes. It'll get fixed when I get around to it :-)

--
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/    co...@ph...