Numpy-discussion archive, message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2000 | 8   | 49  | 48  | 28  | 37  | 28  | 16  | 16  | 44  | 61  | 31  | 24  |
| 2001 | 56  | 54  | 41  | 71  | 48  | 32  | 53  | 91  | 56  | 33  | 81  | 54  |
| 2002 | 72  | 37  | 126 | 62  | 34  | 124 | 36  | 34  | 60  | 37  | 23  | 104 |
| 2003 | 110 | 73  | 42  | 8   | 76  | 14  | 52  | 26  | 108 | 82  | 89  | 94  |
| 2004 | 117 | 86  | 75  | 55  | 75  | 160 | 152 | 86  | 75  | 134 | 62  | 60  |
| 2005 | 187 | 318 | 296 | 205 | 84  | 63  | 122 | 59  | 66  | 148 | 120 | 70  |
| 2006 | 460 | 683 | 589 | 559 | 445 | 712 | 815 | 663 | 559 | 930 | 373 |     |
From: Albert S. <fu...@gm...> - 2006-05-11 03:33:06
|
Hello all

> -----Original Message-----
> From: Tim Hochberg [mailto:tim...@co...]
> Sent: 11 May 2006 05:20
> To: Albert Strasheim
> Cc: 'numpy-discussion'
> Subject: Re: [Numpy-discussion] Building on Windows
>
> Albert Strasheim wrote:
> >Hello all,
> >
> >It seems that many people are building on Windows without problems,
> >except for Stephan and myself.
> >
> >Let me start by saying that yes, the default build on Windows with MinGW
> >and Visual Studio works nicely.
> >
> >However, is anybody building with ATLAS and finding that experience to be
> >equally painless? If so, *please* can you tell me how you've organized
> >your libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about
> >ATLAS's LAPACK functions? What about building ATLAS as DLL?). Also, I'd
> >be very interested in the contents of your site.cfg. I've been trying
> >for many weeks to do some small subset of the above without hacking
> >into the core of numpy.distutils. So far, no luck.
>
> Sorry, no help here. I'm just doing vanilla builds.

I like vanilla, but I'd love to try one of the other hundred flavors! ;-)

> >Does anybody do debug builds on Windows? Again, please tell me how you do
> >this, because I would really like to be able to build a debug version of
> >NumPy for debugging with the MSVS compiler.
>
> Again just vanilla builds, although this is something I'd like to try
> one of these days. (Is that MSVC compiler, or is that yet another
> compiler for windows).

MSVS (MS Visual Studio) and MSVC can probably be considered to be the same thing. However, you have many flavors (argh!). The Microsoft Visual C++ Toolkit 2003 only includes MSVC, while Visual C++ Express Edition 2005 and all the "pay-to-play" editions include MSVC and the MSV[SC] debugger. I think there's also another debugger called WinDbg which is included with the Platform SDK.

> >As for compiler warnings, last time I checked, distutils seems to be
> >suppressing the output from the compiler, except when the build actually
> >fails. Or am I mistaken?
>
> Hmm. I hadn't thought about that. It certainly spits out plenty of
> warnings when the build fails, so I assumed that it was always spitting
> out warnings. [Fiddle] Ouch! It does indeed seem to suppress warnings on
> a successful compilation. Anyone know a way to stop that off the top of
> their head?

See the following URL for the kind of pain this causes:

http://article.gmane.org/gmane.comp.python.numeric.general/5219

> >Eagerly awaiting Windows build nirvana,
>
> Heh!

Thanks for your feedback. Not nirvana yet, but vanilla will do for now.

Cheers, Albert
From: Robert K. <rob...@gm...> - 2006-05-11 03:27:58
|
Ryan Krauss wrote: > Is it possible with numpy to create arrays of arbitrary objects? > Specifically, I have defined a symbolic string class with operator > overloading for most simple math operations: 'a'*'b' ==> 'a*b' > > Can I create two matrices of these symbolic string objects and > multiply those matrices together? > > (simply doing array([[a,b],[c,d]]) did not work. the symbolic strings > got cast to regular strings) Use dtype=object . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
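For completeness, Robert's suggestion in runnable form. This is a minimal sketch only; the Sym class below is a hypothetical stand-in for Ryan's symbolic string type, not his actual code.

```python
import numpy as np

class Sym(str):
    """Hypothetical stand-in for a symbolic string with an overloaded *."""
    def __mul__(self, other):
        return Sym('%s*%s' % (self, other))

a, b, c, d = map(Sym, 'abcd')

# Without dtype=object the Sym instances are cast to a plain string dtype;
# with it, the array holds the objects themselves and * dispatches to Sym.__mul__.
m = np.array([[a, b], [c, d]], dtype=object)
print(m * m)    # [['a*a' 'b*b'] ['c*c' 'd*d']], each entry still a Sym
```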
From: Sasha <nd...@ma...> - 2006-05-11 03:27:23
|
You have to specify dtype to be object:

>>> from numpy import *
>>> class X(str): pass
...
>>> a,b,c,d = map(X, 'abcd')
>>> array([[a,b],[c,d]],'O')
array([[a, b], [c, d]], dtype=object)
>>> _ * 2
array([[aa, bb], [cc, dd]], dtype=object)

On 5/10/06, Ryan Krauss <rya...@gm...> wrote:
> Is it possible with numpy to create arrays of arbitrary objects?
> Specifically, I have defined a symbolic string class with operator
> overloading for most simple math operations: 'a'*'b' ==> 'a*b'
>
> Can I create two matrices of these symbolic string objects and
> multiply those matrices together?
>
> (simply doing array([[a,b],[c,d]]) did not work. the symbolic strings
> got cast to regular strings)
>
> Thanks,
>
> Ryan
From: Ryan K. <rya...@gm...> - 2006-05-11 03:20:59
|
Is it possible with numpy to create arrays of arbitrary objects? Specifically, I have defined a symbolic string class with operator overloading for most simple math operations: 'a'*'b' ==> 'a*b'

Can I create two matrices of these symbolic string objects and multiply those matrices together?

(simply doing array([[a,b],[c,d]]) did not work. the symbolic strings got cast to regular strings)

Thanks,

Ryan
From: Tim H. <tim...@co...> - 2006-05-11 03:19:53
|
Albert Strasheim wrote:

>Hello all,
>
>It seems that many people are building on Windows without problems, except
>for Stephan and myself.
>
>Let me start by saying that yes, the default build on Windows with MinGW
>and Visual Studio works nicely.
>
>However, is anybody building with ATLAS and finding that experience to be
>equally painless? If so, *please* can you tell me how you've organized your
>libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's
>LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very
>interested in the contents of your site.cfg. I've been trying for many
>weeks to do some small subset of the above without hacking into the core of
>numpy.distutils. So far, no luck.

Sorry, no help here. I'm just doing vanilla builds.

>Does anybody do debug builds on Windows? Again, please tell me how you do
>this, because I would really like to be able to build a debug version of
>NumPy for debugging with the MSVS compiler.

Again just vanilla builds, although this is something I'd like to try one of these days. (Is that MSVC compiler, or is that yet another compiler for windows).

>As for compiler warnings, last time I checked, distutils seems to be
>suppressing the output from the compiler, except when the build actually
>fails. Or am I mistaken?

Hmm. I hadn't thought about that. It certainly spits out plenty of warnings when the build fails, so I assumed that it was always spitting out warnings. [Fiddle] Ouch! It does indeed seem to suppress warnings on a successful compilation. Anyone know a way to stop that off the top of their head?

>Eagerly awaiting Windows build nirvana,

Heh!

Regards,

-tim
From: Stephan T. <st...@si...> - 2006-05-10 23:48:26
|
Hi Albert,

in the following you find an abridged preview version of my MSVC+ATLAS+Lapack build tutorial ;-) You probably already know most of it, but maybe it helps. You'll need a current Cygwin with g77 and MinGW-libraries installed.

Atlas:
======
Download and extract the latest development ATLAS (3.7.11). Comment out line 77 in Atlas/CONFIG/probe_SSE3.c. Run "make" and choose the appropriate options for your system. Don't activate posix threads (for now). Overwrite the compiler and linker flags with flags that include "-mno-cygwin". Use the default architecture settings. Atlas and the test suite hopefully compile without an error now.

Lapack:
=======
Download and extract www.netlib.org/lapack/lapack.tgz and apply the most current patch from www.netlib.org/lapack-dev/. Replace lapack/make.inc with lapack/INSTALL/make.inc.LINUX. Append "-mno-cygwin" to OPTS, NOOPT and LOADOPTS in make.inc. Add ".PHONY: install testing timing" as the last line to lapack/Makefile. Run "make install lib" in the lapack root directory in Cygwin. ("make testing timing" should also work now, but you probably want to use your optimised BLAS for that. Some errors in the tests are to be expected.)

Atlas + Lapack:
===============
Copy the generated lapack_LINUX.a together with "libatlas.a", "libcblas.a", "libf77blas.a", "liblapack.a" into a convenient directory. In Cygwin execute the following command sequence in that directory to get an ATLAS-optimized LAPACK library:

ar x liblapack.a
ar r lapack_LINUX.a *.o
rm *.o
mv lapack_LINUX.a liblapack.a

Now make a copy of all lib*.a's to *.lib's, i.e. duplicate libatlas.a to atlas.lib, in order to allow distutils to recognize the libs and at the same time provide the correct versions for MSVC. Copy libg2c.a and libgcc.a from cygwin/lib/gcc/i686-pc-mingw32/3.4.4 to this directory and again make .lib copies.

Compile and install numpy:
==========================
Put

[atlas]
library_dirs = d:\path\to\your\BlasDirectory
atlas_libs = lapack,f77blas,cblas,atlas,g2c,gcc

into your site.cfg in the numpy root directory. Open a Visual Studio 2003 command prompt and run "Path\To\Python.exe setup.py config --compiler=msvc build --compiler=msvc bdist_wininst". Use the resulting dist/numpy-VERSION.exe installer to install Numpy.

Testing:
In a Python console run

import numpy.testing
numpy.testing.NumpyTest(numpy).run()

... hopefully without an error. Test your code base.

I'll wikify an extended version in the next few days.

Stephan

Albert Strasheim wrote:
> Hello all,
>
> It seems that many people are building on Windows without problems, except
> for Stephan and myself.
>
> Let me start by saying that yes, the default build on Windows with MinGW
> and Visual Studio works nicely.
>
> However, is anybody building with ATLAS and finding that experience to be
> equally painless? If so, *please* can you tell me how you've organized your
> libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's
> LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very
> interested in the contents of your site.cfg. I've been trying for many
> weeks to do some small subset of the above without hacking into the core of
> numpy.distutils. So far, no luck.
>
> Does anybody do debug builds on Windows? Again, please tell me how you do
> this, because I would really like to be able to build a debug version of
> NumPy for debugging with the MSVS compiler.
>
> As for compiler warnings, last time I checked, distutils seems to be
> suppressing the output from the compiler, except when the build actually
> fails. Or am I mistaken?
>
> Eagerly awaiting Windows build nirvana,
>
> Albert
>
>> -----Original Message-----
>> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Tim Hochberg
>> Sent: 10 May 2006 23:49
>> To: Travis Oliphant
>> Cc: Stephan Tolksdorf; numpy-discussion
>> Subject: Re: [Numpy-discussion] Building on Windows
>>
>> Travis Oliphant wrote:
>>> Stephan Tolksdorf wrote:
>>>> Hi,
>>>>
>>>> there are still some (mostly minor) problems with the Windows build
>>>> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and
>>>> documentation enhancements, but before doing so I'd like to know if
>>>> there's interest from one of the core developers to review/commit
>>>> these patches afterwards. I'm asking because in the past questions
>>>> and suggestions regarding the building process of Numpy (especially
>>>> on Windows) often remained unanswered on this list. I realise that
>>>> many developers don't use Windows and that the distutils build is a
>>>> complex beast, but the current situation seems a bit unsatisfactory
>>>> - and I would like to help.
>>>
>>> I think your assessment is a bit harsh. I regularly build on MinGW
>>> so I know it works there (at least at release time). I also have
>>> applied several patches with the express purpose of getting the build
>>> working on MSVC and Cygwin.
>>>
>>> So, go ahead and let us know what problems you are having. You are
>>> correct that my main build platform is not Windows, but I think
>>> several other people do use Windows regularly and we definitely want
>>> to support it.
>>
>> Indeed. I build from SVN at least once a week using MSVC and it's been
>> compiling warning-free and passing all tests for me for some time.
>>
>> -tim
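Once an ATLAS-enabled build is installed, it can be sanity-checked from Python roughly as follows. This is a sketch: numpy.show_config() and numpy.test() are the later spellings of what the tutorial above does with numpy.testing.NumpyTest(numpy).run().

```python
import numpy

# Report which BLAS/LAPACK libraries the build was configured against;
# the atlas/lapack entries from site.cfg should appear here if linking worked.
numpy.show_config()

# Run the bundled test suite (NumpyTest(numpy).run() was the NumPy 1.0-era
# equivalent of this call).
numpy.test()
```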
From: Albert S. <fu...@gm...> - 2006-05-10 22:39:29
|
Hello all,

It seems that many people are building on Windows without problems, except for Stephan and myself.

Let me start by saying that yes, the default build on Windows with MinGW and Visual Studio works nicely.

However, is anybody building with ATLAS and finding that experience to be equally painless? If so, *please* can you tell me how you've organized your libraries (which libraries? CLAPACK? FLAPACK? .a? .lib? What about ATLAS's LAPACK functions? What about building ATLAS as DLL?). Also, I'd be very interested in the contents of your site.cfg. I've been trying for many weeks to do some small subset of the above without hacking into the core of numpy.distutils. So far, no luck.

Does anybody do debug builds on Windows? Again, please tell me how you do this, because I would really like to be able to build a debug version of NumPy for debugging with the MSVS compiler.

As for compiler warnings, last time I checked, distutils seems to be suppressing the output from the compiler, except when the build actually fails. Or am I mistaken?

Eagerly awaiting Windows build nirvana,

Albert

> -----Original Message-----
> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Tim Hochberg
> Sent: 10 May 2006 23:49
> To: Travis Oliphant
> Cc: Stephan Tolksdorf; numpy-discussion
> Subject: Re: [Numpy-discussion] Building on Windows
>
> Travis Oliphant wrote:
> > Stephan Tolksdorf wrote:
> >> Hi,
> >>
> >> there are still some (mostly minor) problems with the Windows build
> >> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and
> >> documentation enhancements, but before doing so I'd like to know if
> >> there's interest from one of the core developers to review/commit
> >> these patches afterwards. I'm asking because in the past questions
> >> and suggestions regarding the building process of Numpy (especially
> >> on Windows) often remained unanswered on this list. I realise that
> >> many developers don't use Windows and that the distutils build is a
> >> complex beast, but the current situation seems a bit unsatisfactory
> >> - and I would like to help.
> >
> > I think your assessment is a bit harsh. I regularly build on MinGW
> > so I know it works there (at least at release time). I also have
> > applied several patches with the express purpose of getting the build
> > working on MSVC and Cygwin.
> >
> > So, go ahead and let us know what problems you are having. You are
> > correct that my main build platform is not Windows, but I think
> > several other people do use Windows regularly and we definitely want
> > to support it.
>
> Indeed. I build from SVN at least once a week using MSVC and it's been
> compiling warning-free and passing all tests for me for some time.
>
> -tim
From: Travis O. <oli...@ie...> - 2006-05-10 22:37:12
|
Tim Hochberg wrote: >> Is adding basic Numeric-like indexing something you see as useful to >> basearray? > > Yes! No! Maybe ;-) Got ya, loud and clear :-) I understand the confusion. I think we should do the following (for release 1.0) 1) Implement a base-array with no getitem method nor setitem method at all 2) Implement a sub-class that supports only creation of data-types corresponding to existing Python scalars (Boolean, Long-based integers, Double-based floats, complex and object types). Then, all array accesses should return the underlying Python objects. This sub-class should also only do view-based indexing (basically it's old Numeric behavior inside of NumPy). 3) Implement the ndarray as a sub-class of #2 that does fancy indexing and returns array-scalars Item 1) should be pushed for inclusion in 2.6 and possibly even something like 2) -Travis |
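In rough Python pseudocode, the three-level proposal would look something like the sketch below. The class names other than basearray and ndarray are made up for illustration, and the real implementation would of course live in C.

```python
class basearray(object):
    """Level 1: memory, shape, strides and dtype only.
    No __getitem__ or __setitem__ at all."""

class simplearray(basearray):
    """Level 2: only dtypes corresponding to Python scalars (bool, int,
    float, complex, object); view-based (old Numeric-style) indexing;
    item access returns plain Python objects."""
    def __getitem__(self, index):
        ...  # slices return views, full indices return Python scalars

class ndarray(simplearray):
    """Level 3: adds fancy indexing and returns array scalars."""
    def __getitem__(self, index):
        ...  # try simple (view-based) indexing first, then fancy indexing
```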
From: Tim H. <tim...@co...> - 2006-05-10 22:06:55
|
Travis Oliphant wrote: > Tim Hochberg wrote: > >>> Is adding basic Numeric-like indexing something you see as useful to >>> basearray? >> >> >> Yes! No! Maybe ;-) > > > Got ya, loud and clear :-) > > I understand the confusion. > > I think we should do the following (for release 1.0) > > > 1) Implement a base-array with no getitem method nor setitem method at > all > > 2) Implement a sub-class that supports only creation of data-types > corresponding to existing Python scalars (Boolean, Long-based > integers, Double-based floats, complex and object types). Then, all > array accesses should return the underlying Python objects. > This sub-class should also only do view-based indexing (basically it's > old Numeric behavior inside of NumPy). > > 3) Implement the ndarray as a sub-class of #2 that does fancy indexing > and returns array-scalars > > > Item 1) should be pushed for inclusion in 2.6 and possibly even > something like 2) +1 Let me point out an interesting possibility. If ndarray inherits from basearray, only one of them needs to have the current __new__ method. That means that we could do the following rearrangement, if we felt like it: 1. Remove 'array' 2. Rename 'ndarray' to 'array' 3. Put the old functionality of array into array.__new__ The current functionality of ndarray.__new__ would still be available as basearray.__new__. I mention this partly because I can think of things to do with the name ndarray: for example, use it as the name of the subclass in (2). -tim |
From: Tim H. <tim...@co...> - 2006-05-10 21:52:19
|
Sasha wrote:

> On 5/10/06, Travis Oliphant <oli...@ie...> wrote:
>> ...
>> I'm thinking that fancy-indexing should be re-factored a bit so that
>> view-based indexing is tried first and then on error, fancy-indexing is
>> tried. Right now, it goes through the fancy-indexing check and that
>> seems to slow things down more than it needs to for simple indexing
>> operations.
>
> Is it too late to reconsider the decision to further overload [] to
> support fancy indexing? It would be nice to restrict [] to view
> based indexing and require a function call for copy-based. If that is
> not an option, I would like to propose to have no __getitem__ in the
> basearray and instead have a rich collection of various functions such
> as "take" which can be used by the derived classes to create their own
> __getitem__. This is exactly the approach taken by arraykit.
>
> Independent of the fate of the [] operator, I would like to have means
> to specify exactly what I want without having to rely on the smartness
> of the fancy-indexing check. For example, in the current version, I
> can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do
> x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I
> cannot find an equivalent of x[[3,2,1],[1,2,3]].
>
> I think [] syntax is preferable in the interactive setting, where it
> allows to get the result with a few keystrokes. In addition [] has
> special syntactic properties in python (special meaning of : and ...
> within []) that allows some nifty looking syntax not available for
> member functions.

This is why I was considering using pseudo attributes, similar to flat, for my basearray subclass. I could hang [] off of them and use all of the normal array indexing syntax. I haven't come up with ideal names yet, but it could look something like:

x[:3, 5:]                 # normal view indexing
x.at[[3,2,1], [1,2,3]]    # integer array indexing
x.iff[[1,0,1], [2,1,0]]   # boolean array indexing

> On the other hand in programming, and especially in
> writing reusable code, specialized member functions such as "take" are
> more appropriate for several reasons. (1) Robustness: x.take(i) will do
> the same thing if i is a tuple, list, or array of any integer type,
> while with x[i] it is anybody's guess and the results may change with
> the changes in numpy. (2) Performance: the fancy-indexing check is
> expensive. (3) Code readability: in the interactive session when you
> type x[i], i is either supplied literally or is defined on the same
> screen, but if i comes as an argument to the function, it may be hard
> to figure out whether i is expected to be an integer or whether a list of
> integers is also ok.

Sounds reasonable to me.

-tim
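For concreteness, here is how the indexing forms being compared behave on a small array. This is a sketch of current numpy behaviour, not a proposal.

```python
import numpy as np

x = np.arange(16).reshape(4, 4)

x[[1, 2, 3]]              # fancy (copy-based): rows 1-3, like x.take([1, 2, 3], axis=0)
x[:, [1, 2, 3]]           # columns 1-3, like x.take([1, 2, 3], axis=1)

# Paired index arrays pick individual elements x[3,1], x[2,2], x[1,3];
# this is the case Sasha notes has no direct take() equivalent.
x[[3, 2, 1], [1, 2, 3]]

x[x % 2 == 0]             # boolean-mask indexing, the other fancy-indexing flavor

x[:3, 1:]                 # ordinary slicing: a view, no copy is made
```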
From: Tim H. <tim...@co...> - 2006-05-10 21:48:57
|
Travis Oliphant wrote:

> Stephan Tolksdorf wrote:
>> Hi,
>>
>> there are still some (mostly minor) problems with the Windows build
>> of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and
>> documentation enhancements, but before doing so I'd like to know if
>> there's interest from one of the core developers to review/commit
>> these patches afterwards. I'm asking because in the past questions
>> and suggestions regarding the building process of Numpy (especially
>> on Windows) often remained unanswered on this list. I realise that
>> many developers don't use Windows and that the distutils build is a
>> complex beast, but the current situation seems a bit unsatisfactory
>> - and I would like to help.
>
> I think your assessment is a bit harsh. I regularly build on MinGW
> so I know it works there (at least at release time). I also have
> applied several patches with the express purpose of getting the build
> working on MSVC and Cygwin.
>
> So, go ahead and let us know what problems you are having. You are
> correct that my main build platform is not Windows, but I think
> several other people do use Windows regularly and we definitely want
> to support it.

Indeed. I build from SVN at least once a week using MSVC and it's been compiling warning-free and passing all tests for me for some time.

-tim
From: Tim H. <tim...@co...> - 2006-05-10 21:46:24
|
Sasha wrote:

> On 5/8/06, Tim Hochberg <tim...@co...> wrote:
>> [...] Here's a brief example;
>> this is what a custom array class that just supported indexing and
>> shape would look like using arraykit:
>>
>> import numpy.arraykit as _kit
>>
>> class customarray(_kit.basearray):
>>     __new__ = _kit.fromobj
>>     __getitem__ = _kit.getitem
>>     __setitem__ = _kit.setitem
>>     shape = property(_kit.getshape, _kit.setshape)
>
> I see the following problem with your approach: customarray.__new__ is
> supposed to return an instance of customarray, but in your example it
> returns a basearray.

Actually, it doesn't. The signature of fromobj is: fromobj(subtype, obj, dtype=None, order="C"). It returns an object of type subtype (as long as subtype is derived from basearray). At present, fromobj is implemented in Python as:

def fromobj(subtype, obj, dtype=None, order="C"):
    if order not in ["C", "FORTRAN"]:
        raise ValueError("Order must be either 'C' or 'FORTRAN', not %r" % order)
    nda = _numpy.array(obj, dtype, order=order)
    return basearray.__new__(subtype, nda.shape, nda.dtype, nda.data, order=order)

That's kind of kludgy, and I plan to remove the dependence on numpy.array at some point, but it seems to work OK.

> You may like an approach that I took in writing
> r.py <https://svn.sourceforge.net/svnroot/rpy/trunk/sandbox/r.py>. In
> the context of your example, I would make fromobj a classmethod of
> _kit.basearray and use the type argument to allocate the new object
> (type->tp_alloc(type, 0);). This way customarray(...) will return
> customarray as expected.
>
> All _kit methods that return arrays can take the same approach and
> become classmethods of _kit.basearray. The drawback is the pollution
> of the base class namespace, but this may be acceptable if you name
> the baseclass methods with a leading underscore.

I'd rather avoid that since one of my goals is to remove name pollution. I'll keep it in mind though if I run into problems with the above approach.

-tim
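For reference, Sasha's classmethod variant would look roughly like this in pure Python. This is a sketch only: in arraykit the allocation would happen in C via tp_alloc, and the attribute names used below are invented for illustration.

```python
import numpy as _numpy

class basearray(object):
    @classmethod
    def fromobj(cls, obj, dtype=None, order="C"):
        # cls is whichever subclass the call came through, so
        # customarray.fromobj(...) returns a customarray directly.
        nda = _numpy.array(obj, dtype, order=order)
        self = object.__new__(cls)
        self._shape, self._dtype, self._data = nda.shape, nda.dtype, nda.data
        return self

class customarray(basearray):
    pass

c = customarray.fromobj([[1, 2], [3, 4]])
print(type(c))    # <class '__main__.customarray'>
```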
From: Travis O. <oli...@ie...> - 2006-05-10 21:41:37
|
Stephan Tolksdorf wrote: > Hi, > > there are still some (mostly minor) problems with the Windows build of > Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and > documentation enhancements, but before doing so I'd like to know if > there's interest from one of the core developers to review/commit > these patches afterwards. I'm asking because in the past questions and > suggestions regarding the building process of Numpy (especially on > Windows) often remained unanswered on this list. I realise that many > developers don't use Windows and that the distutils build is a complex > beast, but the current situation seems a bit unsatisfactory - and I > would like to help. I think your assessment is a bit harsh. I regularly build on MinGW so I know it works there (at least at release time). I also have applied several patches with the express purpose of getting the build working on MSVC and Cygwin. So, go ahead and let us know what problems you are having. You are correct that my main build platform is not Windows, but I think several other people do use Windows regularly and we definitely want to support it. -Travis |
From: Pearu P. <pe...@sc...> - 2006-05-10 21:40:10
|
On Wed, 10 May 2006, Stephan Tolksdorf wrote:

> Hi,
>
> there are still some (mostly minor) problems with the Windows build of Numpy
> (MinGW/Cygwin/MSVC). I'd be happy to produce patches and documentation
> enhancements, but before doing so I'd like to know if there's interest from
> one of the core developers to review/commit these patches afterwards. I'm
> asking because in the past questions and suggestions regarding the building
> process of Numpy (especially on Windows) often remained unanswered on this
> list. I realise that many developers don't use Windows and that the distutils
> build is a complex beast, but the current situation seems a bit
> unsatisfactory - and I would like to help.
>
> Would there be any interest for further refactoring of the build code over
> and above patching errors?

Yes. Note that patches should not break other platforms.

I am currently successfully using the mingw32 compiler to build numpy. Python is Enthon23 and Enthon24 that conveniently contain all compiler tools.

Pearu
From: Stephan T. <st...@si...> - 2006-05-10 21:32:27
|
Hi, there are still some (mostly minor) problems with the Windows build of Numpy (MinGW/Cygwin/MSVC). I'd be happy to produce patches and documentation enhancements, but before doing so I'd like to know if there's interest from one of the core developers to review/commit these patches afterwards. I'm asking because in the past questions and suggestions regarding the building process of Numpy (especially on Windows) often remained unanswered on this list. I realise that many developers don't use Windows and that the distutils build is a complex beast, but the current situation seems a bit unsatisfactory - and I would like to help. Would there be any interest for further refactoring of the build code over and above patching errors? Stephan |
From: Travis O. <oli...@ie...> - 2006-05-10 21:25:44
|
Thanks for the discussion on Ticket #83 (whether or not to return scalars from matrix methods). I like many of the comments and generally agree with Tim and Sasha's question of why bother replacing one inconsistency with another. However, I've been swayed by two facts:

1) People that use matrices more seem to like having the arguments be returned as scalars.

2) Multiplication by a 1x1 matrix won't work on most matrices, but multiplication by a scalar will.

These two facts lean towards accepting the patch. Therefore, the Ticket #83 patch will be applied.

Best regards,

-Travis
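Fact 2 is easy to demonstrate with a quick sketch of the behaviour being referred to:

```python
import numpy as np

A = np.matrix([[1., 2.], [3., 4.]])
scalar = 2.0
one_by_one = np.matrix([[2.0]])

A * scalar          # works: every element is scaled by 2

try:
    # matrix multiplication of a (2,2) by a (1,1): shapes are not aligned
    A * one_by_one
except ValueError as err:
    print("1x1 matrix fails:", err)
```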
From: Sasha <nd...@ma...> - 2006-05-10 21:04:45
|
On 5/8/06, Tim Hochberg <tim...@co...> wrote:

> [...] Here's a brief example;
> this is what a custom array class that just supported indexing and
> shape would look like using arraykit:
>
> import numpy.arraykit as _kit
>
> class customarray(_kit.basearray):
>     __new__ = _kit.fromobj
>     __getitem__ = _kit.getitem
>     __setitem__ = _kit.setitem
>     shape = property(_kit.getshape, _kit.setshape)

I see the following problem with your approach: customarray.__new__ is supposed to return an instance of customarray, but in your example it returns a basearray.

You may like an approach that I took in writing r.py <https://svn.sourceforge.net/svnroot/rpy/trunk/sandbox/r.py>. In the context of your example, I would make fromobj a classmethod of _kit.basearray and use the type argument to allocate the new object (type->tp_alloc(type, 0);). This way customarray(...) will return customarray as expected.

All _kit methods that return arrays can take the same approach and become classmethods of _kit.basearray. The drawback is the pollution of the base class namespace, but this may be acceptable if you name the baseclass methods with a leading underscore.
From: Travis O. <oli...@ie...> - 2006-05-10 20:57:32
|
Sasha wrote: > On 5/10/06, Travis Oliphant <oli...@ie...> wrote: >> ... >> I'm thinking that fancy-indexing should be re-factored a bit so that >> view-based indexing is tried first and then on error, fancy-indexing is >> tried. Right now, it goes through the fancy-indexing check and that >> seems to slow things down more than it needs to for simple indexing >> operations. > > Is it too late to reconsider the decision to further overload [] to > support fancy indexing? It would be nice to restrict [] to view > based indexing and require a function call for copy-based. If that is > not an option, I would like to propose to have no __getitem__ in the > basearray and instead have rich collection of various functions such > as "take" which can be used by the derived classes to create their own > __getitem__ . It may be too late since the fancy-indexing was actually introduced by numarray. It does seem to be a feature that people like. > > Independent of the fate of the [] operator, I would like to have means > to specify exactly what I want without having to rely on the smartness > of the fancy-indexing check. For example, in the current version, I > can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do > x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I > cannot find an equivalent of x[[3,2,1],[1,2,3]]. It probably isn't there. Perhaps it should be. As you've guessed, a lot of the overloading of [] is because inside it you can use simplified syntax to generate slices. I would like to see the slice syntax extended so it could be used inside function calls as well as a means to generate slice objects on-the-fly. Perhaps for Python 3.0 this could be suggested. -Travis |
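Until such a syntax extension exists, numpy's s_ / index_exp helpers (in numpy.lib.index_tricks) already let you build slice objects with [] syntax and hand them to functions. A short sketch:

```python
import numpy as np

idx = np.s_[1:5, ::2]          # slice syntax outside of [], via an IndexExpression
print(idx)                     # (slice(1, 5, None), slice(None, None, 2))

x = np.arange(36).reshape(6, 6)
assert (x[idx] == x[1:5, ::2]).all()

def apply_index(arr, index):
    """Functions can accept a pre-built slice expression like any other argument."""
    return arr[index]

apply_index(x, np.s_[::2, 1:3])
```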
From: Travis O. <oli...@ie...> - 2006-05-10 20:54:01
|
Tim Hochberg wrote:

> Since I'm actually messing with trying to untangle arraykit from
> multiarray right now, let me ask you a question: there are several
> functions in arrayobject.c that look like they should be part of the
> API. Notably:
>
> PyArray_CopyObject
> PyArray_MapIterNew
> PyArray_MapIterBind
> PyArray_GetMap
> PyArray_MapIterReset
> PyArray_MapIterNext
> PyArray_SetMap
> PyArray_IntTupleFromIntp
>
> However, they don't appear to show up. They also aren't in
> *_api_order.txt, where I presume the list of all exported functions
> lives. Is this on purpose, or is it an oversight?

Some of these are an oversight. The Mapping-related ones require a little more explanation, though. Initially I had thought to allow mapping iterators to live independently of array indexing. But, it never worked out that way. I think they could be made part of the API, but they need to be used correctly together. In particular, you can't really "re-bind" another array to a mapping iterator. You have to create a new one, so it may be a little confusing. But, I see no reason to not let these things out on their own.

-Travis
From: Sasha <nd...@ma...> - 2006-05-10 20:42:33
|
On 5/10/06, Travis Oliphant <oli...@ie...> wrote:

> ...
> I'm thinking that fancy-indexing should be re-factored a bit so that
> view-based indexing is tried first and then on error, fancy-indexing is
> tried. Right now, it goes through the fancy-indexing check and that
> seems to slow things down more than it needs to for simple indexing
> operations.

Is it too late to reconsider the decision to further overload [] to support fancy indexing? It would be nice to restrict [] to view based indexing and require a function call for copy-based. If that is not an option, I would like to propose to have no __getitem__ in the basearray and instead have a rich collection of various functions such as "take" which can be used by the derived classes to create their own __getitem__. This is exactly the approach taken by arraykit.

Independent of the fate of the [] operator, I would like to have means to specify exactly what I want without having to rely on the smartness of the fancy-indexing check. For example, in the current version, I can either do x[[1,2,3]] or x.take([1,2,3]). For a 2d x I can do x.take([1,2,3], axis=1) as an alternative to x[:,[1,2,3]], but I cannot find an equivalent of x[[3,2,1],[1,2,3]].

I think [] syntax is preferable in the interactive setting, where it allows to get the result with a few keystrokes. In addition [] has special syntactic properties in python (special meaning of : and ... within []) that allows some nifty looking syntax not available for member functions. On the other hand in programming, and especially in writing reusable code, specialized member functions such as "take" are more appropriate for several reasons. (1) Robustness: x.take(i) will do the same thing if i is a tuple, list, or array of any integer type, while with x[i] it is anybody's guess and the results may change with the changes in numpy. (2) Performance: the fancy-indexing check is expensive. (3) Code readability: in the interactive session when you type x[i], i is either supplied literally or is defined on the same screen, but if i comes as an argument to the function, it may be hard to figure out whether i is expected to be an integer or whether a list of integers is also ok.
From: Tim H. <tim...@co...> - 2006-05-10 20:29:41
|
Travis Oliphant wrote:

> Tim Hochberg wrote:
>> I created a branch to work on basearray and arraykit:
>>
>> http://svn.scipy.org/svn/numpy/branches/arraykit
>>
>> Basearray, as most of you probably know by now, is the array
>> superclass that Travis, Sasha and I have all talked about at various
>> times with slightly different emphasis.
>
> I'm thinking that fancy-indexing should be re-factored a bit so that
> view-based indexing is tried first and then on error, fancy-indexing
> is tried. Right now, it goes through the fancy-indexing check and
> that seems to slow things down more than it needs to for simple
> indexing operations.

That sounds like a good idea. I would like to see fancy indexing broken out for arraykit if not necessarily for basearray.

> Perhaps it would make sense for basearray to implement simple indexing
> while the ndarray would augment the basic indexing.
>
> Is adding basic Numeric-like indexing something you see as useful to
> basearray?

Yes! No! Maybe ;-)

Each time I think this over I come to a slightly different conclusion. At one point I was thinking that basearray should support shape, __getitem__ and __setitem__ (and had I thought about it at the time, I would have preferred basic indexing here). However at present I'm thinking that basearray should really just support your basic array protocol and nothing else. If we added the above three methods then that makes life harder for someone who wants to create an array subclass that is either immutable or has a fixed shape. Sure shape and/or __setitem__ can be overridden with something that raises some sort of exception, but it's exactly that sort of stuff that I was interested in getting away from with basearray (although admittedly this would be on a much smaller scale). I can't think of a real problem with supplying just a read-only version of shape and getitem, but it also doesn't seem very useful. So, as I said, I lean towards the simplest, thinnest interface possible.

However, it may be a good idea to put together another subclass of basearray that supports shape, __getitem__, __setitem__ [in their basic forms], __repr__ and __str__. This could be part of the proposal to add basearray to the core. That way the basearray module could export something that's directly useful to people in addition to basearray, which is really only useful as a basis for other stuff. Also, like I said, I would use this for arraykit if it were available (and might even be willing to do the work myself if I find the time).

I have considered splitting out fancy indexing in my simplified array class using some sort of pseudo attribute (similar to flat). If I was doing that, I'd actually prefer to split out the two different types of fancy indexing (boolean versus integer) so that they could be applied separately.

-tim
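The pseudo-attribute idea can be prototyped in pure Python with a tiny helper whose __getitem__ forwards to the wrapped array. This is only a sketch of the forwarding mechanism: "at" and "iff" are the placeholder names from the earlier message, and no attempt is made here to reject the "wrong" kind of index.

```python
import numpy as np

class _Indexer(object):
    """Small helper so that obj.at[...] and obj.iff[...] work."""
    def __init__(self, arr, prepare):
        self._arr = arr
        self._prepare = prepare
    def __getitem__(self, index):
        return self._arr[self._prepare(index)]

class customarray(np.ndarray):
    @property
    def at(self):
        # entry point intended for integer-array ("fancy") indexing
        return _Indexer(np.asarray(self), lambda index: index)
    @property
    def iff(self):
        # entry point for boolean-mask indexing; coerce the mask to bool
        return _Indexer(np.asarray(self), lambda mask: np.asarray(mask, dtype=bool))

x = np.arange(16).reshape(4, 4).view(customarray)
x.at[[3, 2, 1], [1, 2, 3]]   # integer fancy indexing
x.iff[x % 2 == 0]            # boolean fancy indexing
```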
From: Travis O. <oli...@ie...> - 2006-05-10 19:58:06
|
Tim Hochberg wrote: > > I created a branch to work on basearray and arraykit: > > http://svn.scipy.org/svn/numpy/branches/arraykit > > Basearray, as most of you probably know by now is the array superclass > that Travis, Sasha and I have > all talked about at various times with slightly different emphasis. I'm thinking that fancy-indexing should be re-factored a bit so that view-based indexing is tried first and then on error, fancy-indexing is tried. Right now, it goes through the fancy-indexing check and that seems to slow things down more than it needs to for simple indexing operations. Perhaps it would make sense for basearray to implement simple indexing while the ndarray would augment the basic indexing. Is adding basic Numeric-like indexing something you see as useful to basearray? -Travis |
From: Sasha <nd...@ma...> - 2006-05-10 18:46:34
|
On 5/10/06, Christopher Barker <Chr...@no...> wrote:

> ...
> Is there even a direct way to construct a numpy scalar?

Yes,

>>> type(int_(0))
<type 'int32scalar'>
>>> type(float_(0))
<type 'float64scalar'>

RTFM :-)

> > Actually I thought that Sasha's position was that both scalars and
> > *rank-0* [aka shape=()] arrays were useful in different circumstances
> > and that we shouldn't completely annihilate rank-0 arrays in favor of
> > scalars.
>
> What is the difference? except that rank-0 arrays are mutable, and I do
> think a mutable scalar is a good thing to have. Why not make numpy
> scalars mutable, and then would there be a difference?

Mutable objects cannot have a value-based hash, which practically means they cannot be used as keys in python dictionaries. This may change in python 3.0, but meanwhile mutable scalars are not an option.
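A short sketch of the distinction being discussed: numpy scalars are immutable and hashable, while rank-0 arrays have the same shape but are mutable and therefore unusable as dictionary keys.

```python
import numpy as np

s = np.int32(0)          # a numpy scalar: immutable and hashable
r = np.array(0)          # a rank-0 array: shape (), but mutable

print(s.shape, r.shape)  # () ()

r[...] = 7               # rank-0 arrays can be changed in place
# there is no corresponding way to change s

d = {s: "scalars can be dict keys"}
try:
    d[r] = "but arrays cannot"
except TypeError as err:
    print(err)           # unhashable type: 'numpy.ndarray'
```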
From: Travis O. <oli...@ie...> - 2006-05-10 18:42:47
|
Sebastian Haase wrote: > Hi, > I'm a long time numarray user. > I have some SWIG typemaps that I'm using for quite some time. > They are C++ oriented and support creating template'd function. > I only cover the case of contiguous "input" arrays of 1D,2D and 3D. > (I would like to ensure that NO implicit type conversions are made so > that I can use the same scheme to have arrays changed on the C/C++ > side and can later see/access those in python.) > > (as I just added some text to the scipy wiki: > www.scipy.org/Converting_from_numarray) > I use something like: > > PyArrayObject *NAarr = NA_InputArray(inputPyObject, PYARR_TC, > NUM_C_ARRAY); > arr = (double *) NA_OFFSETDATA(NAarr); > > > What is new numpy's equivalent of NA_InputArray In the scipy/Lib/ndimage package is a numcompat.c and numcompat.h file that implements several of the equivalents. I'd like to see a module like this get formalized and placed into numpy itself so that most numarray extensions simply have to be re-compiled to work with NumPy. Here is the relevant information (although I don't know what PYARR_TC is...) typedef enum { tAny, tBool=PyArray_BOOL, tInt8=PyArray_INT8, tUInt8=PyArray_UINT8, tInt16=PyArray_INT16, tUInt16=PyArray_UINT16, tInt32=PyArray_INT32, tUInt32=PyArray_UINT32, tInt64=PyArray_INT64, tUInt64=PyArray_UINT64, tFloat32=PyArray_FLOAT32, tFloat64=PyArray_FLOAT64, tComplex32=PyArray_COMPLEX64, tComplex64=PyArray_COMPLEX128, tObject=PyArray_OBJECT, /* placeholder... does nothing */ tDefault = tFloat64, #if BITSOF_LONG == 64 tLong = tInt64, #else tLong = tInt32, #endif tMaxType } NumarrayType; typedef enum { NUM_CONTIGUOUS=CONTIGUOUS, NUM_NOTSWAPPED=NOTSWAPPED, NUM_ALIGNED=ALIGNED, NUM_WRITABLE=WRITEABLE, NUM_COPY=ENSURECOPY, NUM_C_ARRAY = (NUM_CONTIGUOUS | NUM_ALIGNED | NUM_NOTSWAPPED), NUM_UNCONVERTED = 0 } NumRequirements; #define _NAtype_toDescr(type) (((type)==tAny) ? NULL : \ PyArray_DescrFromType(type)) #define NA_InputArray(obj, type, flags) \ (PyArrayObject *)\ PyArray_FromAny(obj, _NAtype_toDescr(type), 0, 0, flags, NULL) #define NA_OFFSETDATA(a) ((void *) PyArray_DATA(a)) |
From: Travis O. <oli...@ie...> - 2006-05-10 18:34:57
|
Sebastian Haase wrote:

> On Wednesday 10 May 2006 10:05, Travis Oliphant wrote:
>> Sebastian Haase wrote:
>>> One additional question:
>>> is PyArray_FromDimsAndData creating a copy ?
>>> I have very large image data and cannot afford copies :-(
>>
>> No, it uses the data as the memory space for the array (but you have to
>> either manage that memory area yourself or reset the OWNDATA flag to get
>> NumPy to delete it for you on array deletion).
>>
>> -Travis
>
> Thanks for the reply.
> Regarding "setting the OWNDATA flag":
> How does NumPy know if it should call free (C code) or delete [] (C++ code) ?

It doesn't. It always uses _pya_free which is a macro that is defined to either system free or Python's memory-manager equivalent. It should always be paired with _pya_malloc. Yes, you can have serious problems by mixing memory allocators. In other words, unless you know what you are doing it is unwise to set the OWNDATA flag for data that was defined elsewhere.

My favorite method is to simply let NumPy create the memory for you (e.g. use PyArray_SimpleNew). Then, you won't have trouble. If that method is not possible, then the next best thing to do is to define a simple Python Object that uses reference counting to manage the memory for you. Then, you point array->base to that object so that its reference count gets decremented when the array disappears. The simple Python Object defines its tp_deallocate function to call the appropriate free.

-Travis
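The same ownership rules can be observed from the Python side, which may help when debugging the C-level behaviour. A sketch:

```python
import numpy as np

owned = np.zeros(10)               # NumPy allocated this buffer itself
print(owned.flags.owndata)         # True: NumPy frees it when the array is deleted

buf = bytearray(10 * 8)
borrowed = np.frombuffer(buf, dtype=np.float64)
print(borrowed.flags.owndata)      # False: the memory belongs to `buf`
print(borrowed.base is not None)   # True: the array keeps its provider alive

view = owned[2:5]
print(view.flags.owndata, view.base is owned)   # False True
```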