From: Robert Kern <robert.kern@gm...> - 2006-02-22 22:10:29

Colin J. Williams wrote:

> Robert,
>
> Many thanks for this. You have described the standard Python approach to
> constructing an instance. As I understand it, numpy uses the __new__
> method, but not __init__, in most cases.
>
> My interest is in "any (positional as well as keyword) arguments".
> What should the user feed the constructor? This isn't clear from the
> online documentation.

Look in the code. The PyArrayDescr_Type method table gives arraydescr_new() as the implementation of the tp_new slot (the C name for __new__). You can read the implementation for information. Patches for documentation will be gratefully accepted.

That said:

In [16]: a = arange(10)

In [17]: a.dtype
Out[17]: dtype('>i4')

In [18]: dtype('>i4')
Out[18]: dtype('>i4')

If you want complete documentation on data-type descriptors, it's in Chapter 7 of Travis's book.

> From a Python user's point of view, the module holding the dtype class
> appears to be multiarray.
>
> The standard Python approach is to put the information in a __module__
> attribute so that one doesn't have to go hunting around. Please see below.

<shrug> dtype.__module__ (== 'numpy') tells you the canonical place to access it from Python code. It will never be able to tell you what C source file to look in. You'll have to break out grep no matter what.

> While on the subject of the standard Python approach, class names usually
> start with an upper case letter and the builtins have their own style,
> ListType etc. numpy equates ArrayType to ndarray but ArrayType is
> deprecated.

ListType, TupleType et al. are also deprecated in favor of list and tuple, etc. But yes, we do use all-lowercase names for classes. This is a conscious decision. It's just a style convention, just like PEP 8 is just a style convention for the standard library.

--
Robert Kern
robert.kern@...

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter
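[To make the constructor forms concrete: a short sketch using modern numpy of the arguments dtype's __new__ accepts; the exact set accepted by the 2006 arraydescr_new may differ slightly.]

```python
import numpy as np

# A few ways to call the dtype constructor (its __new__ / tp_new slot):
dt1 = np.dtype('>i4')                         # from a typestring: big-endian 4-byte int
dt2 = np.dtype(np.float64)                    # from a scalar type object
dt3 = np.dtype([('x', '<f8'), ('y', '<i4')])  # structured: list of (name, format) pairs

print(dt1.byteorder, dt1.itemsize)  # '>' 4
print(dt2.kind)                     # 'f'
print(dt3.names)                    # ('x', 'y')
```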
From: Colin J. Williams <cjw@sy...> - 2006-02-22 21:18:18

Robert Kern wrote:

> Colin J. Williams wrote:
>
>> I've been trying to gain some understanding of dtype from the built-in
>> documentation and would appreciate advice.
>>
>> I don't find anything in http://projects.scipy.org/scipy/numpy or
>> http://wiki.python.org/moin/NumPy
>>
>> Chapter 2.1 of the book has a good overview, but little reference material.
>>
>> In the following, dt = numpy.dtype
>>
>> Some specific problems are flagged ** below.
>>
>> Colin W.
>> [snip]
>>
>>  |  Data and other attributes defined here:
>>  |
>>  |  __new__ = <built-in method __new__ of type object>
>>  |      T.__new__(S, ...) -> a new object with type S, a subtype of T
>>
>> ** What are the parameters? In other words, what does ... stand for? **
>
> http://www.python.org/2.2.3/descrintro.html#__new__
>
> """Recall that you create class instances by calling the class. When the class
> is a new-style class, the following happens when it is called. First, the
> class's __new__ method is called, passing the class itself as first argument,
> followed by any (positional as well as keyword) arguments received by the
> original call. This returns a new instance. Then that instance's __init__ method
> is called to further initialize it. (This is all controlled by the __call__
> method of the metaclass, by the way.)
> """
>
>> ** There is no __module__ attribute. How does one identify the modules
>> holding the code? **
>
> It's an extension type PyArray_Descr* in numpy/core/src/arrayobject.c .

Robert,

Many thanks for this. You have described the standard Python approach to constructing an instance. As I understand it, numpy uses the __new__ method, but not __init__, in most cases.

My interest is in "any (positional as well as keyword) arguments". What should the user feed the constructor? This isn't clear from the online documentation.

From a Python user's point of view, the module holding the dtype class appears to be multiarray. The standard Python approach is to put the information in a __module__ attribute so that one doesn't have to go hunting around. Please see below.

While on the subject of the standard Python approach, class names usually start with an upper case letter and the builtins have their own style, ListType etc. numpy equates ArrayType to ndarray but ArrayType is deprecated.

Colin W.

C:\>python
Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core.multiarray as mu
>>> dir(mu)
['_ARRAY_API', '__doc__', '__file__', '__name__', '__version__',
'_fastCopyAndTranspose', '_flagdict', '_get_ndarray_c_version', 'arange',
'array', 'bigndarray', 'broadcast', 'can_cast', 'concatenate', 'correlate',
'dot', 'dtype', 'empty', 'error', 'flatiter', 'frombuffer', 'fromfile',
'fromstring', 'getbuffer', 'inner', 'lexsort', 'ndarray', 'newbuffer',
'register_dtype', 'scalar', 'set_numeric_ops', 'set_string_function',
'set_typeDict', 'typeinfo', 'where', 'zeros']
>>>
From: Christopher Barker <Chris.Barker@no...> - 2006-02-22 20:26:14

Sven Schreiber wrote:

> - If I have >1 variable then everything is fine (provided I use your
> advice of slicing instead of indexing afterwards) and the variables are
> in the _columns_ of the 2d-array.
> - But if there's just one data _column_ in the file, then pylab/numpy
> gives me a 1d-array that sometimes works as a _row_ (and as you noted,
> sometimes not), but never works as a column.
>
> Imho that's bad, because as a consequence I must use overhead code to
> distinguish between these cases.

I'd do that on load. You must have a way of knowing how many variables you're loading, so when it is one you can add this line:

a.shape = (-1, 1)

and then proceed the same way after that.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT          (206) 526-6959 voice
7600 Sand Point Way NE    (206) 526-6329 fax
Seattle, WA 98115         (206) 526-6317 main reception
Chris.Barker@...
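[A runnable sketch of Chris's suggestion, using a shape of (-1, 1) so it forces a column for any number of observations; the data values here are made up.]

```python
import numpy as np

# Made-up stand-in for a one-variable data set loaded as a 1-d array.
a = np.array([1.0, 2.0, 3.0])

a = a.reshape(-1, 1)   # force a column: shape (n, 1)
print(a.shape)         # (3, 1)
print(a.T.shape)       # (1, 3): now transposing behaves as expected
```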
From: Zachary Pincus <zpincus@st...> - 2006-02-22 20:25:57

Here is my eventual solution. I'm not sure it's speed-optimal for even a python implementation, but it is terse. I agree that it might be nice to have this fast, and/or in C (I'm using it for finite differences and related things).

def cshift(l, offset):
    offset %= len(l)
    return numpy.concatenate((l[offset:], l[:offset]))

Zach

On Feb 22, 2006, at 11:40 AM, Mads Ipsen wrote:

> On Wed, 22 Feb 2006, Alan G Isaac wrote:
>
>> On Wed, 22 Feb 2006, Zachary Pincus apparently wrote:
>>> Does numpy have a builtin mechanism to shift elements along some
>>> axis in an array? (e.g. to "roll" [0,1,2,3] by some offset, here 2,
>>> to make [2,3,0,1])
>>
>> This sounds like the rotater command in GAUSS.
>> As far as I know there is no equivalent in numpy.
>> Please post your ultimate solution.
>>
>> Cheers,
>> Alan Isaac
>
> Similar to cshift() (cyclic shift) in F90. Very nice for calculating
> finite differences, such as
>
>     x' = ( cshift(x,+1) - cshift(x,-1) ) / dx
>
> This would be a very handy feature indeed.
>
> // Mads
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through
> log files for problems? Stop! Download the new AJAX search engine that
> makes searching your log files as easy as surfing the web. DOWNLOAD
> SPLUNK!
> http://sel.asus.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
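[Zach's cshift, restated so it can be run directly. As an aside, later numpy versions ship numpy.roll, whose positive offset shifts in the opposite direction, so np.roll(x, -2) matches cshift(x, 2).]

```python
import numpy as np

def cshift(l, offset):
    # Cyclic shift: a positive offset moves the leading elements to the back.
    offset %= len(l)
    return np.concatenate((l[offset:], l[:offset]))

x = np.array([0, 1, 2, 3])
print(cshift(x, 2))     # [2 3 0 1]
print(cshift(x, -1))    # [3 0 1 2]
print(np.roll(x, -2))   # [2 3 0 1]: same result in later numpy
```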
From: Charlie Moad <cwmoad@gm...> - 2006-02-22 20:01:13

Since no one has answered this, I am going to take a whack at it. Experts feel free to shoot me down.

Here is a sample showing multiple inheritance with a mix of old-style and new-style classes. I don't claim there is any logic to the code, but it is just for demo purposes.

---------------------------
from numpy import *

class actImage:
    def __init__(self, colorOrder='RGBA'):
        self.colorOrder = colorOrder

class Image(actImage, ndarray):
    def __new__(cls, shape=(1024,768), dtype=float32):
        return ndarray.__new__(cls, shape=shape, dtype=dtype)

x = Image()
assert isinstance(x[0,1], float32)
assert x.colorOrder == 'RGBA'
---------------------------

Running "help(ndarray)" has some useful info as well.

- Charlie

On 2/19/06, Robert Lupton <rhl@...> wrote:

> I have a swig extension that defines a class that inherits from
> both a personal C-coded image struct (actImage), and also from
> Numeric's UserArray. This works very nicely, but I thought that
> it was about time to upgrade to numpy.
>
> The code looks like:
>
> from UserArray import *
>
> class Image(UserArray, actImage):
>     def __init__(self, *args):
>         actImage.__init__(self, *args)
>         UserArray.__init__(self, self.getArray(), 'd', copy=False,
>                            savespace=False)
>
> I can't figure out how to convert this to use ndarray, as ndarray
> doesn't seem to have an __init__ method, merely a __new__.
>
> So what's the approved numpy way to handle multiple inheritance?
> I've a nasty idea that this is a python question that I should know
> the answer to, but I'm afraid that I don't...
>
> R
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
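[Charlie's demo distilled into a directly runnable sketch; ActImage stands in for the C image struct. The key point is that ndarray allocates its memory in __new__, so the subclass overrides __new__, while the plain mix-in's __init__ still runs afterwards. For state that must survive views and copies, numpy's __array_finalize__ hook is the fuller answer; this is only the minimal case.]

```python
import numpy as np

class ActImage:
    # Plain attribute holder, standing in for the C-coded image struct.
    def __init__(self, colorOrder='RGBA'):
        self.colorOrder = colorOrder

class Image(ActImage, np.ndarray):
    # ndarray allocates in __new__, not __init__, so a subclass must
    # override __new__; ActImage.__init__ is then called on the result.
    def __new__(cls, shape=(4, 3), dtype=np.float32):
        return np.ndarray.__new__(cls, shape=shape, dtype=dtype)

x = Image()
print(type(x[0, 1]).__name__, x.colorOrder)   # float32 RGBA
```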
From: Mads Ipsen <mpi@os...> - 2006-02-22 19:40:52

On Wed, 22 Feb 2006, Alan G Isaac wrote:

> On Wed, 22 Feb 2006, Zachary Pincus apparently wrote:
>> Does numpy have a builtin mechanism to shift elements along some
>> axis in an array? (e.g. to "roll" [0,1,2,3] by some offset, here 2,
>> to make [2,3,0,1])
>
> This sounds like the rotater command in GAUSS.
> As far as I know there is no equivalent in numpy.
> Please post your ultimate solution.
>
> Cheers,
> Alan Isaac

Similar to cshift() (cyclic shift) in F90. Very nice for calculating finite differences, such as

    x' = ( cshift(x,+1) - cshift(x,-1) ) / dx

This would be a very handy feature indeed.

// Mads
From: Tim Hochberg <tim.hochberg@co...> - 2006-02-22 19:29:31

Alan G Isaac wrote:

> On Wed, 22 Feb 2006, Zachary Pincus apparently wrote:
>> Does numpy have a builtin mechanism to shift elements along some
>> axis in an array? (e.g. to "roll" [0,1,2,3] by some offset, here 2,
>> to make [2,3,0,1])
>
> This sounds like the rotater command in GAUSS.
> As far as I know there is no equivalent in numpy.
> Please post your ultimate solution.

If you need to roll just a few elements the following should work fairly efficiently. If you don't want to roll in place, you could instead copy A on the way in and return the modified copy. However, in that case, concatenating slices might be better.

import numpy

def roll(A, n):
    "Roll the array A in place. Positive n -> roll right, negative n -> roll left"
    if n > 0:
        n = abs(n)
        temp = A[-n:]
        A[n:] = A[:-n]
        A[:n] = temp
    elif n < 0:
        n = abs(n)
        temp = A[:n]
        A[:-n] = A[n:]
        A[-n:] = temp
    else:
        pass

A = numpy.arange(10)
print A
roll(A, 3)
print A
roll(A, -3)
print A
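[A version of Tim's in-place roll with explicit copies added: the temp slice is a view into A, and the middle assignment can otherwise overwrite the data it refers to before it is read back. The copies are my addition, not part of the original post.]

```python
import numpy as np

def roll_inplace(A, n):
    """Roll A in place. Positive n rolls right, negative n rolls left."""
    if n > 0:
        temp = A[-n:].copy()     # copy: the slice is only a view into A
        A[n:] = A[:-n].copy()    # copy again: source and target overlap
        A[:n] = temp
    elif n < 0:
        n = abs(n)
        temp = A[:n].copy()
        A[:-n] = A[n:].copy()
        A[-n:] = temp

A = np.arange(10)
roll_inplace(A, 3)
print(A)   # [7 8 9 0 1 2 3 4 5 6]
roll_inplace(A, -3)
print(A)   # [0 1 2 3 4 5 6 7 8 9]
```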
From: Alan G Isaac <aisaac@am...> - 2006-02-22 19:08:27

On Wed, 22 Feb 2006, Zachary Pincus apparently wrote:

> Does numpy have a builtin mechanism to shift elements along some
> axis in an array? (e.g. to "roll" [0,1,2,3] by some offset, here 2,
> to make [2,3,0,1])

This sounds like the rotater command in GAUSS. As far as I know there is no equivalent in numpy. Please post your ultimate solution.

Cheers,
Alan Isaac
From: Robert Kern <robert.kern@gm...> - 2006-02-22 17:58:50

Colin J. Williams wrote:

> I've been trying to gain some understanding of dtype from the built-in
> documentation and would appreciate advice.
>
> I don't find anything in http://projects.scipy.org/scipy/numpy or
> http://wiki.python.org/moin/NumPy
>
> Chapter 2.1 of the book has a good overview, but little reference material.
>
> In the following, dt = numpy.dtype
>
> Some specific problems are flagged ** below.
>
> Colin W.
>
> [Dbg]>>> h(dt)
> Help on class dtype in module numpy:
>
> class dtype(__builtin__.object)
>  |  Methods defined here:
>  |
>  |  __cmp__(...)
>  |      x.__cmp__(y) <==> cmp(x,y)
>  |
>  |  __getitem__(...)
>  |      x.__getitem__(y) <==> x[y]
>  |
>  |  __len__(...)
>  |      x.__len__() <==> len(x)
>  |
>  |  __reduce__(...)
>  |      self.__reduce__() for pickling.
>  |
>  |  __repr__(...)
>  |      x.__repr__() <==> repr(x)
>  |
>  |  __setstate__(...)
>  |      self.__setstate__() for pickling.
>  |
>  |  __str__(...)
>  |      x.__str__() <==> str(x)
>  |
>  |  newbyteorder(...)
>  |      self.newbyteorder(<endian>) returns a copy of the dtype object
>  |      with altered byteorders. If <endian> is not given all byteorders
>  |      are swapped. Otherwise endian can be '>', '<', or '=' to force
>  |      a byteorder. Descriptors in all fields are also updated in the
>  |      new dtype object.
>  |
>  |  ----------------------------------------------------------------------
>  |  Data and other attributes defined here:
>  |
>  |  __new__ = <built-in method __new__ of type object>
>  |      T.__new__(S, ...) -> a new object with type S, a subtype of T
>
> ** What are the parameters? In other words, what does ... stand for? **

http://www.python.org/2.2.3/descrintro.html#__new__

"""Recall that you create class instances by calling the class. When the class is a new-style class, the following happens when it is called. First, the class's __new__ method is called, passing the class itself as first argument, followed by any (positional as well as keyword) arguments received by the original call. This returns a new instance. Then that instance's __init__ method is called to further initialize it. (This is all controlled by the __call__ method of the metaclass, by the way.)
"""

> ** There is no __module__ attribute. How does one identify the modules
> holding the code? **

It's an extension type PyArray_Descr* in numpy/core/src/arrayobject.c .

--
Robert Kern
robert.kern@...

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter
From: <mfmorss@ae...> - 2006-02-22 16:33:40

Thanks for this observation. I will modify ufuncobject.h as you suggested, instead. The other problem still results in a complaint, but not an error; it does not prevent compilation. I have another little problem but I expect to be able to solve it. I will report when and if I have Numpy installed.

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power

Travis Oliphant <oliphant.travis@ieee.org> wrote on 02/22/2006 11:29 AM, re: [Numpy-discussion] Trouble installing Numpy on AIX 5.2:

> mfmorss@... wrote:
>
>> This problem was solved by adding "#include <fenv.h>" to
>> ...numpy-0.9.5/numpy/core/src/umathmodule.c.src
>
> I suspect this allowed compilation, but I'm not sure if it "solved the
> problem." It depends on whether or not the FE_OVERFLOW defined in fenv.h
> is the same as FP_OVERFLOW on the _AIX (it might be...). The better
> solution is to change the constant to what it should be...
>
> Did the long double *, double * problem also resolve itself? This seems
> to be an error with the modfl function you are picking up, since the AIX
> docs say that modfl should take and receive long double arguments.
>
> Best,
>
> Travis
From: Travis Oliphant <oliphant.travis@ie...> - 2006-02-22 16:30:04

mfmorss@... wrote:

> This problem was solved by adding "#include <fenv.h>" to
> ...numpy-0.9.5/numpy/core/src/umathmodule.c.src

I suspect this allowed compilation, but I'm not sure if it "solved the problem." It depends on whether or not the FE_OVERFLOW defined in fenv.h is the same as FP_OVERFLOW on the _AIX (it might be...). The better solution is to change the constant to what it should be...

Did the long double *, double * problem also resolve itself? This seems to be an error with the modfl function you are picking up, since the AIX docs say that modfl should take and receive long double arguments.

Best,

Travis
From: Travis Oliphant <oliphant.travis@ie...> - 2006-02-22 16:19:38

mfmorss@... wrote:

> I built Python successfully on our AIX 5.2 server using "./configure
> --without-cxx --disable-ipv6". (This uses the native IBM C compiler,
> invoking it as "cc_r". We have no C++ compiler.)
>
> But I have been unable to install Numpy-0.9.5 using the same compiler.
> After "python setup.py install," the relevant section of the output was:
>
> compile options: '-Ibuild/src/numpy/core/src -Inumpy/core/include
> -Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include
> -I/pydirectory/include/python2.4 -c'
> cc_r: build/src/numpy/core/src/umathmodule.c
> "build/src/numpy/core/src/umathmodule.c", line 2734.25: 1506-045 (S)
> Undeclared identifier FE_OVERFLOW.

Thanks for this check. This is an error in the _AIX section of the header. Change line 304 in ufuncobject.h from FE_OVERFLOW to FP_OVERFLOW.

> "build/src/numpy/core/src/umathmodule.c", line 9307.32: 1506-280 (W)
> Function argument assignment between types "long double*" and "double*"
> is not allowed.

I'm not sure where this error comes from. It seems to appear when modfl is used. What is the content of config.h (in your <python-site-packages>/numpy/core/include/numpy directory)? Can you find out if modfl is defined on your platform already?

> A closely related question is, how can I modify the Numpy setup.py and/or
> distutils files to enable me to control the options with which cc_r is
> invoked? I inspected these files, but not being very expert in Python, I
> could not figure this out.

The default CFLAGS are those you used to build Python with. I think you can set the CFLAGS environment variable in order to change this.

Thank you for your test. I don't have access to an _AIX platform and so I appreciate your feedback.

Travis
From: <mfmorss@ae...> - 2006-02-22 16:15:26

This problem was solved by adding "#include <fenv.h>" to ...numpy-0.9.5/numpy/core/src/umathmodule.c.src

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power

My original report, sent to numpy-discussion on 02/22/2006 09:06 AM as [Numpy-discussion] Trouble installing Numpy on AIX 5.2, follows:

I built Python successfully on our AIX 5.2 server using "./configure --without-cxx --disable-ipv6". (This uses the native IBM C compiler, invoking it as "cc_r". We have no C++ compiler.)

But I have been unable to install Numpy-0.9.5 using the same compiler. After "python setup.py install," the relevant section of the output was:

compile options: '-Ibuild/src/numpy/core/src -Inumpy/core/include
-Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include
-I/pydirectory/include/python2.4 -c'
cc_r: build/src/numpy/core/src/umathmodule.c
"build/src/numpy/core/src/umathmodule.c", line 2566.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
[the same 1506-045 (S) FE_OVERFLOW error repeats, twice through, for lines 2584, 2602, 2620, 2638, 2654, 2674, 2694, 2714 and 2734]
"build/src/numpy/core/src/umathmodule.c", line 9307.32: 1506-280 (W) Function argument assignment between types "long double*" and "double*" is not allowed.
error: Command "cc_r -DNDEBUG -O -Ibuild/src/numpy/core/src -Inumpy/core/include -Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include -I/app/sandbox/s625662/installed/include/python2.4 -c build/src/numpy/core/src/umathmodule.c -o build/temp.aix-5.2-2.4/build/src/numpy/core/src/umathmodule.o" failed with exit status 1

A closely related question is, how can I modify the Numpy setup.py and/or distutils files to enable me to control the options with which cc_r is invoked? I inspected these files, but not being very expert in Python, I could not figure this out.

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power
From: Colin J. Williams <cjw@sy...> - 2006-02-22 15:29:00

I've been trying to gain some understanding of dtype from the built-in documentation and would appreciate advice.

I don't find anything in http://projects.scipy.org/scipy/numpy or http://wiki.python.org/moin/NumPy

Chapter 2.1 of the book has a good overview, but little reference material.

In the following, dt = numpy.dtype

Some specific problems are flagged ** below.

Colin W.

[Dbg]>>> h(dt)
Help on class dtype in module numpy:

class dtype(__builtin__.object)
 |  Methods defined here:
 |
 |  __cmp__(...)
 |      x.__cmp__(y) <==> cmp(x,y)
 |
 |  __getitem__(...)
 |      x.__getitem__(y) <==> x[y]
 |
 |  __len__(...)
 |      x.__len__() <==> len(x)
 |
 |  __reduce__(...)
 |      self.__reduce__() for pickling.
 |
 |  __repr__(...)
 |      x.__repr__() <==> repr(x)
 |
 |  __setstate__(...)
 |      self.__setstate__() for pickling.
 |
 |  __str__(...)
 |      x.__str__() <==> str(x)
 |
 |  newbyteorder(...)
 |      self.newbyteorder(<endian>) returns a copy of the dtype object
 |      with altered byteorders. If <endian> is not given all byteorders
 |      are swapped. Otherwise endian can be '>', '<', or '=' to force
 |      a byteorder. Descriptors in all fields are also updated in the
 |      new dtype object.
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __new__ = <built-in method __new__ of type object>
 |      T.__new__(S, ...) -> a new object with type S, a subtype of T

** What are the parameters? In other words, what does ... stand for? **

 |  alignment = <member 'alignment' of 'numpy.dtype' objects>
 |
 |  base = <attribute 'base' of 'numpy.dtype' objects>
 |      The base data-type or self if no subdtype
 |
 |  byteorder = <member 'byteorder' of 'numpy.dtype' objects>
 |
 |  char = <member 'char' of 'numpy.dtype' objects>
 |
 |  descr = <attribute 'descr' of 'numpy.dtype' objects>
 |      The array_protocol type descriptor.
 |
 |  fields = <attribute 'fields' of 'numpy.dtype' objects>
 |
 |  hasobject = <member 'hasobject' of 'numpy.dtype' objects>
 |
 |  isbuiltin = <attribute 'isbuiltin' of 'numpy.dtype' objects>
 |      Is this a built-in data-type descriptor?
 |
 |  isnative = <attribute 'isnative' of 'numpy.dtype' objects>
 |      Is the byteorder of this descriptor native?
 |
 |  itemsize = <member 'itemsize' of 'numpy.dtype' objects>
 |
 |  kind = <member 'kind' of 'numpy.dtype' objects>
 |
 |  name = <attribute 'name' of 'numpy.dtype' objects>
 |      The name of the true data-type
 |
 |  num = <member 'num' of 'numpy.dtype' objects>
 |
 |  shape = <attribute 'shape' of 'numpy.dtype' objects>
 |      The shape of the sub-dtype or (1,)
 |
 |  str = <attribute 'str' of 'numpy.dtype' objects>
 |      The array_protocol typestring.
 |
 |  subdtype = <attribute 'subdtype' of 'numpy.dtype' objects>
 |      A tuple of (descr, shape) or None.
 |
 |  type = <member 'type' of 'numpy.dtype' objects>

[Dbg]>>> dt.num.__doc__

** no doc string **

[Dbg]>>> help(dt.num)
Help on member_descriptor object:

num = class member_descriptor(object)
 |  Methods defined here:
 |
 |  __delete__(...)
 |      descr.__delete__(obj)
 |
 |  __get__(...)
 |      descr.__get__(obj[, type]) -> value
 |
 |  __getattribute__(...)
 |      x.__getattribute__('name') <==> x.name
 |
 |  __repr__(...)
 |      x.__repr__() <==> repr(x)
 |
 |  __set__(...)
 |      descr.__set__(obj, value)
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __objclass__ = <member '__objclass__' of 'member_descriptor' objects>

[Dbg]>>> help(dt.num.__objclass__)
Help on class dtype in module numpy:

class dtype(__builtin__.object)
 |  (same method listing and attributes as above, up to:)
 |
 |  name = <attribute 'name' of 'numpy.dtype' objects>
 |      The name of the true data-type

** How does this differ from what, in common Python usage, is a class.__name__? **

 |  num = <member 'num' of 'numpy.dtype' objects>

** What does this mean? **

 |  shape = <attribute 'shape' of 'numpy.dtype' objects>
 |      The shape of the sub-dtype or (1,)
 |
 |  str = <attribute 'str' of 'numpy.dtype' objects>
 |      The array_protocol typestring.
 |
 |  subdtype = <attribute 'subdtype' of 'numpy.dtype' objects>
 |      A tuple of (descr, shape) or None.
 |
 |  type = <member 'type' of 'numpy.dtype' objects>

** There is no __module__ attribute. How does one identify the modules holding the code? **
From: Sven Schreiber <svetosch@gm...> - 2006-02-22 14:47:23

Christopher Barker schrieb:

> Sven Schreiber wrote:
>> I guess I'd rather follow the advice and just remember to treat 1d as
>> a row.
>
> Except that it's not, universally. For instance, it won't transpose.
>
> It's very helpful to remember that indexing reduces rank, and slicing
> keeps the rank the same. It will serve you well to use that in the
> future anyway.

Anyway, the problem is really about interaction with pylab/matplotlib (so slightly OT here, sorry); when getting data from a text file with pylab.load you can't be sure if the result is 1d or 2d. This means that:

- If I have >1 variable then everything is fine (provided I use your advice of slicing instead of indexing afterwards) and the variables are in the _columns_ of the 2d-array.
- But if there's just one data _column_ in the file, then pylab/numpy gives me a 1d-array that sometimes works as a _row_ (and as you noted, sometimes not), but never works as a column.

Imho that's bad, because as a consequence I must use overhead code to distinguish between these cases. To me it seems more like pylab's bug instead of numpy's, so please excuse this OT twist, but since there seems to be overlap between the pylab/matplotlib and numpy folks, maybe it's not so bad.

Thanks for your patience and helpful input,
Sven
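[The overhead code Sven mentions can be a one-line normalizer; as_columns is a hypothetical helper name, and whether the loader should do this itself is exactly his complaint.]

```python
import numpy as np

def as_columns(a):
    """Hypothetical helper: return loaded data as 2-d, variables in columns."""
    a = np.asarray(a)
    if a.ndim == 1:
        a = a.reshape(-1, 1)   # one variable -> one column
    return a

print(as_columns([1.0, 2.0, 3.0]).shape)            # (3, 1)
print(as_columns([[1.0, 2.0], [3.0, 4.0]]).shape)   # (2, 2)
```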
From: Bruce Southey <bsouthey@gm...> - 2006-02-22 14:24:18

Hi,

Actually it makes it slightly worse: given the responses on another thread it is
probably due to not pushing enough into C code. Obviously use of blas etc. will
be faster, but it doesn't change the fact that removing the inner loop would be
faster still.

Bruce

On 2/22/06, Nadav Horesh <nadavh@...> wrote:
> You may get a significant boost by replacing the line:
>     w = w + eta * (y*x - y**2*w)
> with
>     w *= 1.0 - eta*y*y
>     w += eta*y*x
>
> I ran a test on a similar expression and got 5 fold speed increase.
> The dot() function runs faster if you compile with dotblas.
>
>   Nadav.
>
> -----Original Message-----
> From: numpy-discussion-admin@... on behalf of Bruce Southey
> Sent: Tue 21-Feb-06 17:15
> To: Brian Blais
> Cc: python-list@...; numpy-discussion@...; scipy-user@...
> Subject: Re: [Numpy-discussion] algorithm, optimization, or other problem?
>
> Hi,
> In the current version, note that Y is scalar so replace the squaring
> (Y**2) with Y*Y as you do in the dohebb function. On my system
> without blas etc. removing the squaring removes a few seconds (16.28 to
> 12.4). It did not seem to help factorizing Y.
>
> Also, eta and tau are constants so define them only once as scalars
> outside the loops and do the division outside the loop. It only saves
> about 0.2 seconds but these add up.
>
> The inner loop probably can be vectorized because it is just vector
> operations on a matrix. You are just computing over the ith dimension
> of X. I think that you could be able to find the matrix version on
> the net.
>
> Regards
> Bruce
>
> On 2/21/06, Brian Blais <bblais@...> wrote:
> > Hello,
> >
> > I am trying to translate some Matlab/mex code to Python, for doing neural
> > simulations. This application is definitely computing-time limited, and I need to
> > optimize at least one inner loop of the code, or perhaps even rethink the algorithm.
> > The procedure is very simple, after initializing any variables:
> >
> > 1) select a random input vector, which I will call "x". right now I have it as an
> > array, and I choose columns from that array randomly. in other cases, I may need to
> > take an image, select a patch, and then make that a column vector.
> >
> > 2) calculate an output value, which is the dot product of the "x" and a weight
> > vector, "w", so
> >
> >     y = dot(x,w)
> >
> > 3) modify the weight vector based on a matrix equation, like:
> >
> >     w = w + eta * (y*x - y**2*w)
> >                ^
> >                |
> >                +--- learning rate constant
> >
> > 4) repeat steps 1-3 many times
> >
> > I've organized it like:
> >
> >     for e in 100:    # outer loop
> >         for i in 1000:   # inner loop
> >             (steps 1-3)
> >         display things.
> >
> > so that the bulk of the computation is in the inner loop, and is amenable to
> > converting to a faster language. This is my issue:
> >
> > straight python, in the example posted below for 250000 inner-loop steps, takes 20
> > seconds for each outer-loop step. I tried Pyrex, which should work very fast on such
> > a problem, takes about 8.5 seconds per outer-loop step. The same code as a C-mex
> > file in matlab takes 1.5 seconds per outer-loop step.
> >
> > Given the huge difference between the Pyrex and the Mex, I feel that there is
> > something I am doing wrong, because the C code for both should run comparably.
> > Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind
> > coding some in C, but the Python API seemed a bit challenging to me.
> >
> > One note: I am using the Numeric package, not numpy, only because I want to be able
> > to use the Enthought version for Windows. I develop on Linux, and haven't had a
> > chance to see if I can compile numpy using the Enthought Python for Windows.
> >
> > If there is anything else anyone needs to know, I'll post it. I put the main script,
> > and a dohebb.pyx code below.
> >
> > thanks!
> >
> > Brian Blais
> >
> > --
> > bblais@...
> > http://web.bryant.edu/~bblais
> >
> > # Main script:
> >
> > from dohebb import *
> > import pylab as p
> > from Numeric import *
> > from RandomArray import *
> > import time
> >
> > x = random((100,1000))   # 1000 input vectors
> >
> > numpats = x.shape[0]
> > w = random((numpats,1));
> >
> > th = random((1,1))
> >
> > params = {}
> > params['eta'] = 0.001;
> > params['tau'] = 100.0;
> > old_mx = 0;
> > for e in range(100):
> >
> >     rnd = randint(0,numpats,250000)
> >     t1 = time.time()
> >     if 0:  # straight python
> >         for i in range(len(rnd)):
> >             pat = rnd[i]
> >             xx = reshape(x[:,pat],(1,-1))
> >             y = matrixmultiply(xx,w)
> >             w = w + params['eta']*(y*transpose(xx) - y**2*w);
> >             th = th + (1.0/params['tau'])*(y**2 - th);
> >     else:  # pyrex
> >         dohebb(params,w,th,x,rnd)
> >     print time.time() - t1
> >
> > p.plot(w,'o')
> > p.xlabel('weights')
> > p.show()
> >
> > #=============================================
> >
> > # dohebb.pyx
> >
> > cdef extern from "Numeric/arrayobject.h":
> >
> >     struct PyArray_Descr:
> >         int type_num, elsize
> >         char type
> >
> >     ctypedef class Numeric.ArrayType [object PyArrayObject]:
> >         cdef char *data
> >         cdef int nd
> >         cdef int *dimensions, *strides
> >         cdef object base
> >         cdef PyArray_Descr *descr
> >         cdef int flags
> >
> > def dohebb(params, ArrayType w, ArrayType th, ArrayType X, ArrayType rnd):
> >
> >     cdef int num_iterations
> >     cdef int num_inputs
> >     cdef int offset
> >     cdef double *wp, *xp, *thp
> >     cdef int *rndp
> >     cdef double eta, tau
> >
> >     eta = params['eta']   # learning rate
> >     tau = params['tau']   # used for variance estimate
> >
> >     cdef double y
> >     num_iterations = rnd.dimensions[0]
> >     num_inputs = w.dimensions[0]
> >
> >     # get the pointers
> >     wp = <double *>w.data
> >     xp = <double *>X.data
> >     rndp = <int *>rnd.data
> >     thp = <double *>th.data
> >
> >     for it from 0 <= it < num_iterations:
> >
> >         offset = rndp[it]*num_inputs
> >
> >         # calculate the output
> >         y = 0.0
> >         for i from 0 <= i < num_inputs:
> >             y = y + wp[i]*xp[i+offset]
> >
> >         # change in the weights
> >         for i from 0 <= i < num_inputs:
> >             wp[i] = wp[i] + eta*(y*xp[i+offset] - y*y*wp[i])
> >
> >         # estimate the variance
> >         thp[0] = thp[0] + (1.0/tau)*(y**2 - thp[0])
> >
> > -------------------------------------------------------
> > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> > for problems? Stop! Download the new AJAX search engine that makes
> > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> > http://sel.asus.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
> > _______________________________________________
> > Numpy-discussion mailing list
> > Numpy-discussion@...
> > https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
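Nadav's suggested rewrite of the weight update quoted above is algebraically the same step written with in-place operations, which avoids the temporaries the one-line form allocates. A minimal numpy sketch verifying the equivalence (the thread itself uses Numeric, and the array sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.random(n)      # one input vector
w = rng.random(n)      # weight vector
eta = 0.001            # learning rate

y = np.dot(x, w)       # scalar output

# one-line form: allocates y*x, y*y*w, their difference, and the sum
w_ref = w + eta * (y * x - y * y * w)

# in-place form: w*(1 - eta*y^2) + eta*y*x, no intermediate arrays kept
w2 = w.copy()
w2 *= 1.0 - eta * y * y
w2 += eta * y * x

assert np.allclose(w_ref, w2)
```

The speedup comes purely from fewer allocations and passes over memory; the arithmetic is identical.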
From: <mfmorss@ae...> - 2006-02-22 14:06:24

I built Python successfully on our AIX 5.2 server using "./configure
--without-cxx --disable-ipv6". (This uses the native IBM C compiler, invoking
it as "cc_r". We have no C++ compiler.) But I have been unable to install
numpy-0.9.5 using the same compiler. After "python setup.py install", the
relevant section of the output was:

compile options: '-Ibuild/src/numpy/core/src -Inumpy/core/include
-Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include
-I/pydirectory/include/python2.4 -c'
cc_r: build/src/numpy/core/src/umathmodule.c
"build/src/numpy/core/src/umathmodule.c", line 2566.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2584.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2602.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2620.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2638.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2654.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2674.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2694.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2714.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 2734.25: 1506-045 (S) Undeclared identifier FE_OVERFLOW.
"build/src/numpy/core/src/umathmodule.c", line 9307.32: 1506-280 (W) Function argument assignment between types "long double*" and "double*" is not allowed.
error: Command "cc_r -DNDEBUG -O -Ibuild/src/numpy/core/src
-Inumpy/core/include -Ibuild/src/numpy/core -Inumpy/core/src
-Inumpy/core/include -I/app/sandbox/s625662/installed/include/python2.4 -c
build/src/numpy/core/src/umathmodule.c -o
build/temp.aix-5.2-2.4/build/src/numpy/core/src/umathmodule.o" failed with
exit status 1

A closely related question is: how can I modify the numpy setup.py and/or
distutils files to control the options with which cc_r is invoked? I
inspected these files, but not being very expert in Python, I could not
figure this out.

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power
From: Mads Ipsen <mpi@os...> - 2006-02-22 11:55:31

On Tue, 21 Feb 2006, Tim Hochberg wrote:

> Mads Ipsen wrote:
>
> >On Tue, 21 Feb 2006, Tim Hochberg wrote:
> >
> >>This all makes perfect sense, but what happened to box? In your
> >>original code there was a step where you did some mumbo jumbo with box
> >>and rint. Namely:
> >
> >It's a minor detail, but the reason for this is the following.
> >
> >Suppose you have a line with length box = 10 with periodic boundary
> >conditions (basically this is a circle). Now consider two points x0 = 1
> >and x1 = 9 on this line. The shortest distance dx between the points x0
> >and x1 is dx = -2 and not 8. The calculation
> >
> >    dx = x1 - x0             ( = +8)
> >    dx -= box*rint(dx/box)   ( = -2)
> >
> >will give you the desired result, namely dx = -2. Hope this makes better
> >sense. Note that fmod() won't work, since
> >
> >    fmod(dx,box) = 8
>
> I think you could use some variation like "fmod(dx+box/2, box) - box/2",
> but rint seems better.
>
> >Part of my original post was concerned with the fact that I initially was
> >using around() from numpy for this step. This was terribly slow, so I made
> >some custom changes and added rint() from the C math library to the numpy
> >module, giving a speedup factor of about 4 for this particular line in the
> >code.
> >
> >Best regards // Mads
>
> OK, that all makes sense. You might want to try the following, which
> factors out all the divisions and half the multiplies by box and
> produces several fewer temporaries. Note I replaced x**2 with x*x,
> which for the moment is much faster (I don't know if you've been
> following the endless yacking about optimizing x**n, but x**2 will get
> fast eventually). Depending on what you're doing with r2, you may be
> able to avoid the last multiply by box as well.
>
>     # Loop over all particles
>     xbox = x/box
>     ybox = y/box
>     for i in range(n-1):
>         dx = xbox[i+1:] - xbox[i]
>         dy = ybox[i+1:] - ybox[i]
>         dx -= rint(dx)
>         dy -= rint(dy)
>         r2 = (dx*dx + dy*dy)
>         r2 *= box*box
>
> Regards,
>
> tim

Thanks Tim,

I am only a factor 2.5 slower than the C loop now, thanks to your
suggestions.

// Mads
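The minimum-image calculation discussed above can be sketched directly in numpy; np.rint stands in for the C rint() that Mads wired in, and the numbers are the ones from the post:

```python
import numpy as np

box = 10.0          # periodic box length
x0, x1 = 1.0, 9.0   # two points on the periodic line

# naive separation, then fold into [-box/2, box/2) (minimum image)
dx = x1 - x0                     # +8
dx -= box * np.rint(dx / box)    # -> -2.0, the shortest signed separation

# the same trick, vectorized over all pairwise separations at once
x = np.array([1.0, 9.0, 4.0])
d = x[:, None] - x[None, :]
d -= box * np.rint(d / box)
```

After the fold, |dx| can never exceed box/2, which is exactly the "shortest distance on a circle" the post describes.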
From: Ed Schofield <schofield@ft...> - 2006-02-22 10:47:55

Sasha wrote:

>I propose to deprecate around and implement a new "round" member
>function in C that will only accept scalar "decimals" and will behave
>like a properly vectorized builtin round. I will do the coding if
>there is interest.
>
>In any case, something has to be done here. I don't think the
>following timings are acceptable:

This sounds great to me :)

-- Ed
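Timings like the ones Sasha refers to can be gathered with the standard timeit module. This is only a sketch of the measurement, not the exact benchmark from the thread (the statement strings and repeat count here are made up):

```python
import timeit

# compare np.around against the astype() rounding trick on a small array
setup = "import numpy as np; x = np.array([1.5] * 1000)"

t_around = timeit.timeit("np.around(x)", setup=setup, number=10000)
t_trick = timeit.timeit("(x + 0.5).astype(int).astype(float)",
                        setup=setup, number=10000)

print("around:", t_around, "astype trick:", t_trick)
```

Absolute numbers depend entirely on the machine and numpy version; the point of the thread is the relative gap between the two statements.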
From: Mads Ipsen <mpi@os...> - 2006-02-22 10:26:34

On Tue, 21 Feb 2006, Sasha wrote:

> > python -m timeit -s "from numpy import array; x = array([1.5]*1000)" "(x+0.5).astype(int).astype(float)"
> 100000 loops, best of 3: 18.8 usec per loop
>
> > python -m timeit -s

just want to point out that the function

    foo(x) = (x+0.5).astype(int).astype(float)

is different from around. For x = array([1.2, 1.8]) it works, but for
x = array([-1.2, -1.8]) you get

    around(x) = array([-1., -2.])

whereas foo(x) gives

    foo(x) = array([ 0., -1.])

Using

    foo(x) = where(greater(x,0), x+0.5, x-0.5).astype(int).astype(float)

will work.

// Mads
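Mads's point is easy to check: astype(int) truncates toward zero, so the +0.5 shift only rounds correctly on the positive side. A small sketch with numpy's where (the post uses Numeric's where/greater, but the behavior is the same):

```python
import numpy as np

def foo(x):
    # truncation-based "round": wrong for x < 0
    return (x + 0.5).astype(int).astype(float)

def round_half_away(x):
    # shift by -0.5 on the negative side before truncating
    return np.where(x > 0, x + 0.5, x - 0.5).astype(int).astype(float)

x = np.array([-1.2, -1.8])
print(foo(x))               # [ 0. -1.]  -- not a proper round
print(round_half_away(x))   # [-1. -2.]  -- matches around() for these values
```

Note that round_half_away rounds halves away from zero, while numpy's around rounds halves to even; for values that are not exact halves, as here, the two agree.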
From: <hot_night@ob...> - 2006-02-22 10:19:19

[Translated from Japanese; unsolicited advertisement] How about meeting carefully selected, well-off women? We introduce eager women completely free of charge, anytime, anywhere. Members get a full range of services such as neighborhood search and photo-mail search. Take your time finding a partner you are happy with. Flight attendants, nurses, models, office workers, housewives and more are registered in large numbers. Find your ideal partner and enjoy a hot night. http://www.covcov.net?num=112 A direct-mail service is running right now! Just register for free and replies from women will arrive directly in your mailbox. To opt out: refuse@...
From: Zachary Pincus <zpincus@st...> - 2006-02-22 08:49:32

Hello folks,

Does numpy have a built-in mechanism to shift elements along some axis
in an array? (e.g. to "roll" [0,1,2,3] by some offset, here 2, to make
[2,3,0,1]) If not, what would be the fastest way to implement this in
python? Using take? Using slicing and concatenation?

Zach
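For what it's worth, a slicing-and-concatenation implementation along the lines Zach suggests might look like the sketch below. The helper name roll and its axis handling are mine (modern numpy ships numpy.roll for exactly this, but the same slicing idea works in any array package):

```python
import numpy as np

def roll(a, shift, axis=0):
    """Shift elements of `a` along `axis`, wrapping around at the ends."""
    n = a.shape[axis]
    shift %= n          # normalize so 0 <= shift < n (handles negatives too)
    if shift == 0:
        return a.copy()
    # take the last `shift` elements along `axis` and put them in front
    tail = [slice(None)] * a.ndim
    head = [slice(None)] * a.ndim
    tail[axis] = slice(n - shift, None)
    head[axis] = slice(None, n - shift)
    return np.concatenate((a[tuple(tail)], a[tuple(head)]), axis=axis)

print(roll(np.array([0, 1, 2, 3]), 2))   # -> [2 3 0 1]
```

A take()-based version (indexing with (arange(n) - shift) % n) gives the same result; which is faster is worth measuring, since concatenate does a single pair of block copies while take does a gather.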
From: Nadav Horesh <nadavh@vi...> - 2006-02-22 06:58:05

You may get a significant boost by replacing the line:

    w = w + eta * (y*x - y**2*w)

with

    w *= 1.0 - eta*y*y
    w += eta*y*x

I ran a test on a similar expression and got 5 fold speed increase.
The dot() function runs faster if you compile with dotblas.

  Nadav.

-----Original Message-----
From: numpy-discussion-admin@... on behalf of Bruce Southey
Sent: Tue 21-Feb-06 17:15
To: Brian Blais
Cc: python-list@...; numpy-discussion@...; scipy-user@...
Subject: Re: [Numpy-discussion] algorithm, optimization, or other problem?

Hi,
In the current version, note that Y is scalar so replace the squaring
(Y**2) with Y*Y as you do in the dohebb function. On my system
without blas etc. removing the squaring removes a few seconds (16.28 to
12.4). It did not seem to help factorizing Y.

Also, eta and tau are constants so define them only once as scalars
outside the loops and do the division outside the loop. It only saves
about 0.2 seconds but these add up.

The inner loop probably can be vectorized because it is just vector
operations on a matrix. You are just computing over the ith dimension
of X. I think that you could be able to find the matrix version on
the net.

Regards
Bruce

On 2/21/06, Brian Blais <bblais@...> wrote:
> Hello,
>
> I am trying to translate some Matlab/mex code to Python, for doing neural
> simulations. This application is definitely computing-time limited, and I need to
> optimize at least one inner loop of the code, or perhaps even rethink the algorithm.
> The procedure is very simple, after initializing any variables:
>
> 1) select a random input vector, which I will call "x". right now I have it as an
> array, and I choose columns from that array randomly. in other cases, I may need to
> take an image, select a patch, and then make that a column vector.
>
> 2) calculate an output value, which is the dot product of the "x" and a weight
> vector, "w", so
>
>     y = dot(x,w)
>
> 3) modify the weight vector based on a matrix equation, like:
>
>     w = w + eta * (y*x - y**2*w)
>                ^
>                |
>                +--- learning rate constant
>
> 4) repeat steps 1-3 many times
>
> I've organized it like:
>
>     for e in 100:    # outer loop
>         for i in 1000:   # inner loop
>             (steps 1-3)
>         display things.
>
> so that the bulk of the computation is in the inner loop, and is amenable to
> converting to a faster language. This is my issue:
>
> straight python, in the example posted below for 250000 inner-loop steps, takes 20
> seconds for each outer-loop step. I tried Pyrex, which should work very fast on such
> a problem, takes about 8.5 seconds per outer-loop step. The same code as a C-mex
> file in matlab takes 1.5 seconds per outer-loop step.
>
> Given the huge difference between the Pyrex and the Mex, I feel that there is
> something I am doing wrong, because the C code for both should run comparably.
> Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind
> coding some in C, but the Python API seemed a bit challenging to me.
>
> One note: I am using the Numeric package, not numpy, only because I want to be able
> to use the Enthought version for Windows. I develop on Linux, and haven't had a
> chance to see if I can compile numpy using the Enthought Python for Windows.
>
> If there is anything else anyone needs to know, I'll post it. I put the main script,
> and a dohebb.pyx code below.
>
> thanks!
>
> Brian Blais
>
> --
> bblais@...
> http://web.bryant.edu/~bblais
>
> # Main script:
>
> from dohebb import *
> import pylab as p
> from Numeric import *
> from RandomArray import *
> import time
>
> x = random((100,1000))   # 1000 input vectors
>
> numpats = x.shape[0]
> w = random((numpats,1));
>
> th = random((1,1))
>
> params = {}
> params['eta'] = 0.001;
> params['tau'] = 100.0;
> old_mx = 0;
> for e in range(100):
>
>     rnd = randint(0,numpats,250000)
>     t1 = time.time()
>     if 0:  # straight python
>         for i in range(len(rnd)):
>             pat = rnd[i]
>             xx = reshape(x[:,pat],(1,-1))
>             y = matrixmultiply(xx,w)
>             w = w + params['eta']*(y*transpose(xx) - y**2*w);
>             th = th + (1.0/params['tau'])*(y**2 - th);
>     else:  # pyrex
>         dohebb(params,w,th,x,rnd)
>     print time.time() - t1
>
> p.plot(w,'o')
> p.xlabel('weights')
> p.show()
>
> #=============================================
>
> # dohebb.pyx
>
> cdef extern from "Numeric/arrayobject.h":
>
>     struct PyArray_Descr:
>         int type_num, elsize
>         char type
>
>     ctypedef class Numeric.ArrayType [object PyArrayObject]:
>         cdef char *data
>         cdef int nd
>         cdef int *dimensions, *strides
>         cdef object base
>         cdef PyArray_Descr *descr
>         cdef int flags
>
> def dohebb(params, ArrayType w, ArrayType th, ArrayType X, ArrayType rnd):
>
>     cdef int num_iterations
>     cdef int num_inputs
>     cdef int offset
>     cdef double *wp, *xp, *thp
>     cdef int *rndp
>     cdef double eta, tau
>
>     eta = params['eta']   # learning rate
>     tau = params['tau']   # used for variance estimate
>
>     cdef double y
>     num_iterations = rnd.dimensions[0]
>     num_inputs = w.dimensions[0]
>
>     # get the pointers
>     wp = <double *>w.data
>     xp = <double *>X.data
>     rndp = <int *>rnd.data
>     thp = <double *>th.data
>
>     for it from 0 <= it < num_iterations:
>
>         offset = rndp[it]*num_inputs
>
>         # calculate the output
>         y = 0.0
>         for i from 0 <= i < num_inputs:
>             y = y + wp[i]*xp[i+offset]
>
>         # change in the weights
>         for i from 0 <= i < num_inputs:
>             wp[i] = wp[i] + eta*(y*xp[i+offset] - y*y*wp[i])
>
>         # estimate the variance
>         thp[0] = thp[0] + (1.0/tau)*(y**2 - thp[0])
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://sel.asus.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@...
https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: <skip@po...> - 2006-02-22 03:58:08

Robert> Google suggests that it does matter. E.g.
Robert> http://mail.python.org/pipermail/python-dev/2001-March/013510.html
Robert> http://bugs.mysql.com/bug.php?id=14202
Robert> http://mail.python.org/pipermail/image-sig/2002-June/001884.html

*sigh* Thanks. You'd think that Solaris was a common enough platform that
the ATLAS folks would get this right...

Skip
From: Robert Kern <robert.kern@gm...> - 2006-02-22 03:18:07

skip@... wrote:

> Robert> Hmm. Was ATLAS compiled -fPIC?
>
> I'm not certain, but I doubt it should matter since only .a files were
> generated. There's nothing to relocate:
>
> $ ls -ltr
> total 9190
> lrwxrwxrwx   1 skipm    develop       41 Feb  9 14:51 Make.inc -> /home/ink/skipm/src/ATLAS/Make.SunOS_Babe
> -rw-r--r--   1 skipm    develop     1529 Feb  9 14:51 Makefile
> -rw-r--r--   1 skipm    develop   236004 Feb  9 14:57 libtstatlas.a
> -rw-r--r--   1 skipm    develop   241352 Feb  9 16:28 libcblas.a
> -rw-r--r--   1 skipm    develop   280464 Feb  9 16:33 libf77blas.a
> -rw-r--r--   1 skipm    develop   278616 Feb  9 16:34 liblapack.a
> -rw-r--r--   1 skipm    develop  3603644 Feb  9 16:36 libatlas.a

Google suggests that it does matter. E.g.

http://mail.python.org/pipermail/python-dev/2001-March/013510.html
http://bugs.mysql.com/bug.php?id=14202
http://mail.python.org/pipermail/image-sig/2002-June/001884.html

--
Robert Kern
robert.kern@...

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter