 numpy-discussion
 Re: [Numpy-discussion] Numpy and PEP 343 From: - 2006-03-02 18:13
```
eric jones writes:
> Travis Oliphant wrote:
>
>> Weave can still help with the "auto-compilation" of the specific
>> library for your type. Ultimately such code will be faster than
>> NumPy can ever be.
>
> Yes. weave.blitz() can be used to do the equivalent of this lazy
> evaluation for you in many cases without much effort. For example:
>
> import weave
> from scipy import arange
>
> a = arange(1e7)
> b = arange(1e7)
> c = 2.0*a + 3.0*b
>
> # or with weave
> weave.blitz("c=2.0*a+3.0*b")
>
> As Paul D. mentioned, what Tim outlined is essentially template
> expressions in C++. blitz++ (http://www.oonumerics.org/blitz/) is a
> C++ template expression library for array operations, and weave.blitz
> translates a Numeric expression into C++ blitz code. For the example
> above, you get about a factor of 4 speed-up on large arrays. (Note
> that the first time you run the example it will be much slower because
> of compile time; use timings from subsequent runs.)
>
> C:\temp>weave_time.py
> Expression: c=2.0*a+3.0*b
> Numeric: 0.678311899322
> Weave: 0.162177084984
> Speed-up: 4.18253848494
>
> All this to say, I think weave basically accomplishes what Tim wants
> with a different mechanism (letting C++ compilers do the optimization
> instead of writing this optimization at the Python level). It does
> require a compiler on client machines in its current form (even that
> can be fixed...), but I think it might prove faster than
> re-implementing a numeric expression compiler at the Python level
> (though that sounds fun as well).

I couldn't leave it at that :-), so I wrote my bytecode idea up.

You can grab it at http://arbutus.mcmaster.ca/dmc/software/numexpr-0.1.tar.gz

Here are the performance numbers (note I use 1e6 elements instead of 1e7):

cookedm@...$ py -c 'import numexpr.timing; numexpr.timing.compare()'
Expression: b*c+d*e
numpy: 0.0934900999069
Weave: 0.0459051132202
Speed-up of weave over numpy: 2.03659447388
numexpr: 0.0467489004135
Speed-up of numexpr over numpy: 1.99983527056

Expression: 2*a+3*b
numpy: 0.0784019947052
Weave: 0.0292909860611
Speed-up of weave over numpy: 2.67665945222
numexpr: 0.0323888063431
Speed-up of numexpr over numpy: 2.42065094572

Wow. On par with weave.blitz, and no C++ compiler!!! :-)

You use it like this:

from numexpr import numexpr

func = numexpr("2*a+3*b")

a = arange(1e6)
b = arange(1e6)
c = func(a, b)

Alternatively, you can use it like weave.blitz, like this:

from numexpr import evaluate

a = arange(1e6)
b = arange(1e6)
c = evaluate("2*a+3*b")

How it works
============

First of all, it only works with the basic operations (+, -, *, and /),
and only on real constants and arrays of doubles. (These restrictions
are mainly for lack of time, of course.) Given an expression, it first
compiles it to a program written in a small bytecode. Here's what it
looks like:

In [1]: from numexpr import numexpr
In [2]: numexpr("2*a+3*b", precompiled=True)
# precompiled=True just returns the program before it's made into a
# bytecode object, so you can see what it looks like
Out[2]:
[('mul_c', Register(0), Register(1, a), Constant(0, 2)),
 ('mul_c', Temporary(3), Register(2, b), Constant(1, 3)),
 ('add', Register(0), Register(0), Temporary(3))]
In [3]: c = numexpr("2*a+3*b")
In [4]: c.program
Out[4]: '\x08\x00\x01\x00\x08\x03\x02\x01\x02\x00\x00\x03'
In [5]: c.constants
Out[5]: array([ 2., 3.])
In [6]: c.n_temps
Out[6]: 1
In [7]: type(c)
Out[7]:

The bytecode program is a string of 4-character groups, where the first
character of each group is the opcode, the second is the "register" to
store in, and the third and fourth are the register numbers to use as
arguments. Inputs and outputs are assigned to registers (so in the
example above, the output is Register(0), and the two inputs are
Register(1) and Register(2)), and temporaries go in the remaining
registers. The group of registers is actually an array of double*.

The secret behind the speed is that the bytecode program is run in
blocks: it's run repeatedly, operating on 128 elements of each input
array at a time (then 8 elements, then 1 element for the leftovers).
This trades off cache misses (the arguments for each pass through the
program will most likely fit in the cache each time) against the
overhead of branching while evaluating the program. It's like doing
this in numpy:

c[0:128] = 2*a[0:128] + 3*b[0:128]
c[128:256] = 2*a[128:256] + 3*b[128:256]
etc.

I could see this getting a nice speedup from using a CPU's vector
instructions. For instance, it should be pretty easy to use liboil for
the intermediate vector evaluations.

If people think it's useful enough, I can check it into scipy's sandbox.

--
|>|\/|<
|David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/
|cookedm@...
```
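The blocked evaluation described above can be sketched in plain numpy. This is only an illustration of the idea, not numexpr's actual C inner loop, and the function name is invented:

```python
import numpy as np

def blocked_evaluate(a, b, block=128):
    """Evaluate c = 2*a + 3*b in chunks of `block` elements, so the
    temporaries produced for each chunk stay roughly cache-sized."""
    c = np.empty_like(a)
    n = len(a)
    for start in range(0, n, block):
        stop = min(start + block, n)
        # The temporaries 2*a[start:stop] and 3*b[start:stop] are only
        # `block` elements long, instead of full-array size.
        c[start:stop] = 2.0 * a[start:stop] + 3.0 * b[start:stop]
    return c

a = np.arange(1e6)
b = np.arange(1e6)
c = blocked_evaluate(a, b)
```

The per-block loop overhead is what the real implementation amortizes by running the bytecode program over each block in C.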

 Re: [Numpy-discussion] Numpy and PEP 343 From: Christopher Barker - 2006-03-02 19:40
```
eric jones wrote:
> All this to say, I think weave basically accomplishes what Tim wants
> with a different mechanism (letting C++ compilers do the optimization
> instead of writing this optimization at the python level). It does
> require a compiler on client machines in its current form (even that
> can be fixed...)

Perhaps by teaching Psyco about nd-arrays?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT
(206) 526-6959 voice, (206) 526-6329 fax, (206) 526-6317 main reception
7600 Sand Point Way NE
Seattle, WA 98115
Chris.Barker@...
```

 Re: [Numpy-discussion] Numpy and PEP 343 From: Travis Oliphant - 2006-03-02 21:12
```
David M. Cooke wrote:
>I couldn't leave it at that :-), so I wrote my bytecode idea up.
>
>You can grab it at http://arbutus.mcmaster.ca/dmc/software/numexpr-0.1.tar.gz

Awesome. Please do check it in to the sandbox.

-Travis
```

 Re: [Numpy-discussion] Numpy and PEP 343 From: Ed Schofield - 2006-03-02 21:24
```
On 02/03/2006, at 7:12 PM, David M. Cooke wrote:
> I couldn't leave it at that :-), so I wrote my bytecode idea up.
>
> You can grab it at http://arbutus.mcmaster.ca/dmc/software/numexpr-0.1.tar.gz

I've had a brief look at the code, and I think it's great! I
particularly like the power available with the
"numexpr.evaluate(string_expression)" syntax. I'd fully support your
adding it to the scipy sandbox. I'm sure we could fill it out with more
operators and data types quite fast.

Great work!

-- Ed
```

 Re: [Numpy-discussion] Numpy and PEP 343 From: Tim Hochberg - 2006-03-02 23:20
```
Christopher Barker wrote:
> eric jones wrote:
>
>> All this to say, I think weave basically accomplishes what Tim wants
>> with a different mechanism (letting C++ compilers do the optimization
>> instead of writing this optimization at the python level). It does
>> require a compiler on client machines in its current form (even that
>> can be fixed...)
>
> Perhaps by teaching Psyco about nd-arrays?

Psyco is unlikely to ever be much good for floating point stuff. This is
because it only knows about longs/pointers and arrays of longs/pointers.
The floating point support it has at present involves breaking a double
into two long-sized chunks; then, whenever you need to manipulate the
float, you have to find the pieces and put them back together[1]. All of
this means that float support is much poorer than integer support in
Psyco.

In theory this could be fixed, but since Armin's no longer actively
working on Psyco, just maintaining it, I don't see this changing.
Perhaps when Psyco-like technology gets incorporated into PyPy it will
include better support for floating point.

Regards,

-tim

[1] This has all gotten rather fuzzy to me now; it's been a while since
I looked at it, and my understanding was at best fragile anyway.
```

 Re: [Numpy-discussion] Numpy and PEP 343 From: Tim Hochberg - 2006-03-02 23:48
```
David M. Cooke wrote:
>eric jones writes:
>
[snip quoted message]
>
>I couldn't leave it at that :-), so I wrote my bytecode idea up.
>
>You can grab it at http://arbutus.mcmaster.ca/dmc/software/numexpr-0.1.tar.gz
>
>Here are the performance numbers (note I use 1e6 elements instead of 1e7):
>
>cookedm@...$ py -c 'import numexpr.timing; numexpr.timing.compare()'
>Expression: b*c+d*e
>numpy: 0.0934900999069
>Weave: 0.0459051132202
>Speed-up of weave over numpy: 2.03659447388
>numexpr: 0.0467489004135
>Speed-up of numexpr over numpy: 1.99983527056
>
>Expression: 2*a+3*b
>numpy: 0.0784019947052
>Weave: 0.0292909860611
>Speed-up of weave over numpy: 2.67665945222
>numexpr: 0.0323888063431
>Speed-up of numexpr over numpy: 2.42065094572
>
>Wow. On par with weave.blitz, and no C++ compiler!!! :-)

That's awesome! I was also tempted by this, but I never got beyond
prototyping some stuff in Python.

>You use it like this:
>
>from numexpr import numexpr
>
>func = numexpr("2*a+3*b")
>
>a = arange(1e6)
>b = arange(1e6)
>c = func(a, b)

Does this just use the order in which variables first appear in the
expression to determine the input order? I'm not sure I like that. It is
very convenient for simple expressions, though.

>Alternatively, you can use it like weave.blitz, like this:
>
>from numexpr import evaluate
>
>a = arange(1e6)
>b = arange(1e6)
>c = evaluate("2*a+3*b")

That's pretty sweet. And I see that you cache the expressions, so it
should be pretty fast if you need to loop.

[snip details]

>If people think it's useful enough, I can check it into scipy's sandbox.

Definitely check it in. It won't compile here with VC7, but I'll see if
I can figure out why.

This is probably thinking too far ahead, but an interesting thing to try
would be adding conditional expressions:

c = evaluate("(2*a + b) if (a > b) else (2*b + a)")

If that could be made to work, and work fast, it would save both memory
and time in those cases where you have to vary the computation based on
the value. At present I end up computing the full arrays for each case
and then choosing which result to use based on a mask, so it takes three
times as much space as doing it element by element.

-tim
```
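For reference, the mask-based workaround Tim describes (evaluate both branches in full, then pick per element) can be written with numpy's where. This is a hedged sketch of that workaround, not code from the thread:

```python
import numpy as np

a = np.array([1.0, 5.0, 2.0, 7.0])
b = np.array([4.0, 3.0, 6.0, 1.0])

# Both branch arrays are computed in full, then a boolean mask selects
# which element to keep -- this is the roughly three-arrays-worth of
# storage the post complains about.
mask = a > b
c = np.where(mask, 2.0 * a + b, 2.0 * b + a)
# c -> [ 9., 13., 14., 15.]
```

A bytecode interpreter with a conditional opcode could instead branch per element inside each block, touching only one branch's worth of memory.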

 [Numpy-discussion] Numpy licenses From: Jeffery D. Collins - 2006-03-02 17:00
```
The SF site claims the following for Numeric/Numpy:

License: OSI-Approved Open Source, GNU General Public License (GPL)

Can this be changed to reflect LICENSE.TXT? Numpy (and the older
Numeric-*) has never been GPL'd, correct?

Also, a previous thread mentioned that the licensing terms for f2py will
be changed from LGPL. In the svn trunk, some of its documentation still
refers to the LGPL, particularly f2py2e.py. Will this be changed before
the next release?

Thanks!

Jeff
```

 Re: [Numpy-discussion] Numpy licenses From: Paul Dubois - 2006-03-02 17:06 Attachments: Message as HTML
```
Numeric and numpy are not related from a licensing point of view.
Numeric was released under an LLNL-written open source license. Travis
wrote numpy, and he can put any license he wants on that.

On 02 Mar 2006 09:03:12 -0800, Jeffery D. Collins <jcollins_boulder@...> wrote:
>
> The SF site claims the following for Numeric/Numpy:
>
> License: OSI-Approved Open Source, GNU General Public License (GPL)
>
> Can this be changed to reflect LICENSE.TXT? Numpy (and the older
> Numeric-*) has never been GPL'd, correct?
>
> Also, a previous thread mentioned that the licensing terms for f2py will
> be changed from LGPL. In the svn trunk, some of its documentation still
> refers to the LGPL, particularly f2py2e.py. Will this be changed before
> the next release?
>
> Thanks!
>
> Jeff
>
> -------------------------------------------------------
> This SF.Net email is sponsored by xPML, a groundbreaking scripting
> language that extends applications into web and mobile media. Attend
> the live webcast and join the prime developer group breaking into this
> new coding territory!
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
```

 [Numpy-discussion] Re: Numpy licenses From: Robert Kern - 2006-03-02 17:25
```
Jeffery D. Collins wrote:
> The SF site claims the following for Numeric/Numpy:
>
> License: OSI-Approved Open Source, GNU General Public License (GPL)
>
> Can this be changed to reflect LICENSE.TXT? Numpy (and the older
> Numeric-*) has never been GPL'd, correct?

It's possible that there used to be some code checked into the
repository that was GPLed. It was certainly never a part of Numeric,
though. What worries me more is that someone thought Numeric et al.
belong to the Multi-User Dungeons topic.

I would like to stress, though, that Sourceforge is *not* the place for
information on the new NumPy. The only remaining use for it is the
mailing list and downloads, and I hope that we can start using PyPI
instead for placing our distributions.

> Also, a previous thread mentioned that the licensing terms for f2py will
> be changed from LGPL. In the svn trunk, some of its documentation still
> refers to the LGPL, particularly f2py2e.py. Will this be changed before
> the next release?

Done.

--
Robert Kern
robert.kern@...

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter
```

 Re: [Numpy-discussion] problems installing numpy on AIX From: - 2006-03-02 17:19
```
Minor correction.

Mark F. Morss
Principal Analyst, Market Risk
614-583-6757, Audinet 220-6757
mfmorss@...

Subject: [Numpy-discussion] problems installing numpy on AIX

I returned to this task. After some minor tweaks discussed here earlier,
the build appears to succeed. There are two issues. Issue 1 is, to me, of
unknown significance; it does not prevent the build. Issue 2 is that
numpy can't be imported: there are problems with newdocs, type_check and
numerictypes. The symptoms are shown below.

Issue 1.

creating build/temp.aix-5.2-2.4/build/src/numpy/core/src
compile options: '-Ibuild/src/numpy/core/src -Inumpy/core/include
-Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include
-I/mydirectory/include/python2.4 -c'
cc_r: build/src/numpy/core/src/umathmodule.c
"build/src/numpy/core/src/umathmodule.c", line 9307.32: 1506-280 (W)
Function argument assignment between types "long double*" and "double*"
is not allowed.

Issue 2.

$ python
Python 2.4.2 (#2, Feb 22 2006, 08:38:08) [C] on aix5
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
import core -> failed:
import random -> failed: 'module' object has no attribute 'dtype'
import lib -> failed:
import linalg -> failed:
import dft -> failed:
Traceback (most recent call last):
  File "", line 1, in ?
  File "/mydirectory/lib/python2.4/site-packages/numpy/__init__.py", line 45, in ?
    import add_newdocs
  File "/mydirectory/lib/python2.4/site-packages/numpy/add_newdocs.py", line 2, in ?
    from lib import add_newdoc
  File "/mydirectory/lib/python2.4/site-packages/numpy/lib/__init__.py", line 5, in ?
    from type_check import *
  File "/mydirectory/lib/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ?
    import numpy.core.numeric as _nx
  File "/mydirectory/lib/python2.4/site-packages/numpy/core/__init__.py", line 7, in ?
    import numerictypes as nt
  File "/mydirectory/lib/python2.4/site-packages/numpy/core/numerictypes.py", line 371, in ?
    _unicodesize = array('u','U').itemsize
MemoryError
>>>

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power
```

 [Numpy-discussion] problems installing numpy on AIX From: - 2006-03-02 17:02
```
I returned to this task. After some minor tweaks discussed here earlier,
the build appears to succeed. There are two issues. Issue 1 is, to me, of
unknown significance; it does not prevent the build. Issue 2 is that
numpy can't be imported: there are problems with newdocs, type_check and
numerictypes. The symptoms are shown below.

Issue 1.

creating build/temp.aix-5.2-2.4/build/src/numpy/core/src
compile options: '-Ibuild/src/numpy/core/src -Inumpy/core/include
-Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include
-I/app/sandbox/s625662/installed/include/python2.4 -c'
cc_r: build/src/numpy/core/src/umathmodule.c
"build/src/numpy/core/src/umathmodule.c", line 9307.32: 1506-280 (W)
Function argument assignment between types "long double*" and "double*"
is not allowed.

Issue 2.

$ python
Python 2.4.2 (#2, Feb 22 2006, 08:38:08) [C] on aix5
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
import core -> failed:
import random -> failed: 'module' object has no attribute 'dtype'
import lib -> failed:
import linalg -> failed:
import dft -> failed:
Traceback (most recent call last):
  File "", line 1, in ?
  File "/mydirectory/lib/python2.4/site-packages/numpy/__init__.py", line 45, in ?
    import add_newdocs
  File "/mydirectory/lib/python2.4/site-packages/numpy/add_newdocs.py", line 2, in ?
    from lib import add_newdoc
  File "/mydirectory/lib/python2.4/site-packages/numpy/lib/__init__.py", line 5, in ?
    from type_check import *
  File "/mydirectory/lib/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ?
    import numpy.core.numeric as _nx
  File "/mydirectory/lib/python2.4/site-packages/numpy/core/__init__.py", line 7, in ?
    import numerictypes as nt
  File "/mydirectory/lib/python2.4/site-packages/numpy/core/numerictypes.py", line 371, in ?
    _unicodesize = array('u','U').itemsize
MemoryError
>>>

Mark F. Morss
Principal Analyst, Market Risk
American Electric Power
```

 Re: [Numpy-discussion] setting package path From: Stefan van der Walt - 2006-03-02 16:13
```
On Tue, Feb 28, 2006 at 11:34:43PM -0600, Pearu Peterson wrote:
> >which calculates "somepath". Which is wrong: the docstring, the code
> >or my interpretation?
>
> You have to read also the following code:
>
>   d1 = os.path.join(d,'build','lib.%s-%s'%(get_platform(),sys.version[:3]))
>   if not os.path.isdir(d1):
>       d1 = os.path.dirname(d)  # <- here we get "somepath/.."
>   sys.path.insert(0,d1)

Pearu,

Thanks for pointing this out. I was trying to figure out why my module's
tests didn't run when I did, for example:

$ python test_geom.py

But now I realise it is because the typical package layout is
"somepath/some_dir/tests/test_file.py", not
"somepath/some_dir/test_file.py". For example, we have
"numpy/core/tests/test_umath.py".

As a temporary workaround, set_local_path("../../..") works for me.

Regards
Stéfan
```

 [Numpy-discussion] numarray: need Float32 abs from array of type na.Complex64 or na.Complex32 From: Sebastian Haase - 2006-03-02 02:13
```
Hi,
I was just hunting a very strange bug: I got weird results ... anyway.

I found that I was using this line of code:

if img.type() in (na.Complex64, na.Complex32):
    img = abs(na.asarray(img, na.Float32))

Now I don't understand how this code worked at all? But I only saw those
"weird results" after looking at the "dimmest corners" in my image;
otherwise all images looked O.K.

The goal here was to get a single-precision absolute value for either a
Complex64- or Complex32-valued image array WITHOUT creating a temporary
array. My thinking was that:

img = na.abs( na.asarray(img, na.Complex32) )

would create a complete temporary version of img, which starts out being
Complex64. Is this really the case?

Thanks for any hints,
Sebastian Haase
```

 Re: [Numpy-discussion] numarray: need Float32 abs from array of type na.Complex64 or na.Complex32 From: Todd Miller - 2006-03-02 15:13
```
Sebastian Haase wrote:
> Hi,
> I was just hunting a very strange bug: I got weird results ... anyway.
>
> I found that I was using this line of code:
> if img.type() in (na.Complex64, na.Complex32):
>     img = abs(na.asarray(img, na.Float32))
>
> Now I don't understand how this code worked at all? But I only saw those
> "weird results" after looking at the "dimmest corners" in my image and
> otherwise all images looked O.K.
>
> The goal here was to get a single precision absolute for either a
> complex64 or complex32 valued image array WITHOUT creating a temporary
> array.

This idea sounds inconsistent to me. If the array is Complex64 and you
pass it into asarray(,Float32), you're going to get a temporary. AND
you're also truncating the imaginary part as you downcast to Float32...
you're not getting a complex magnitude.

Since abs() transforms from complex to real, you're going to get a
temporary unless you get a little fancy, maybe like this:

na.abs(img, img.real)  # stores the abs() into the real component of the
                       # original array. img.real is a view, not a copy.

img.imag = 0   # optional step which makes the complex img array a
               # real-valued array with complex storage.

img = img.real # just forget that img is using complex storage. Note that
               # img is now discontiguous since no copy was made and there
               # are still interleaved 0-valued imaginary components.

So, I think there are two points to avoiding a copy here: (1) use the
optional ufunc output parameter, and (2) store the abs() result into the
.real view of the original complex array.

> My thinking was that:
> img = na.abs( na.asarray(img, na.Complex32) )
> would create a complete temporary version of img, which starts out
> being Complex64. Is this really the case?

Yes.
```
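Todd's numarray recipe, translated to today's numpy as an assumption (numarray's output-argument form na.abs(img, img.real) becomes the out= keyword; I'm relying on numpy's ufunc overlap handling to make the aliased output safe):

```python
import numpy as np

img = np.array([3 + 4j, 0 + 1j, 1 + 0j], dtype=np.complex64)

# img.real is a view into the real components, not a copy, so the
# magnitudes are written over the real parts in place -- no full-size
# temporary is kept for the result.
np.abs(img, out=img.real)
img.imag = 0          # optional: real-valued data in complex storage
result = img.real     # discontiguous float32 view; still no copy
```

After this, result holds the single-precision magnitudes while sharing memory with the original complex buffer.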

 Re: [Numpy-discussion] The idiom for supporting matrices and arrays in a function From: Travis Oliphant - 2006-03-02 00:45
```
Bill Baxter wrote:
> The NumPy for Matlab Users wiki is currently pleading to have someone
> fill in "the idiom for supporting both matrices and arrays in a
> function". Does anyone know what this is?

I'm not sure what they want, exactly.

Since a matrix is a subclass of an array, you can treat them the same
for the most part. The * and ** operators are the two that act
differently on arrays and matrices, so if you want to be sure then you
use the functions dot and umath.multiply instead of the infix notation.

If you want to convert arbitrary objects to "some kind of array", then
asanyarray(...) allows sub-classes of the array to pass through to your
function. To be perfectly careful, however, you will need to use
explicit functions to perform any operation you may need.

The other approach is to store the __array_wrap__ method of the input
object (if there is any), convert everything to arrays using asarray(),
and then wrap the final result --- this is what ufuncs do internally.

Again, I'm not sure what they mean?

-Travis
```

 [Numpy-discussion] Re: The idiom for supporting matrices and arrays in a function From: Robert Kern - 2006-03-02 00:51
```
Bill Baxter wrote:
> The NumPy for Matlab Users wiki is currently pleading to have someone
> fill in "the idiom for supporting both matrices and arrays in a
> function". Does anyone know what this is?

The subclassing functionality is rather new, so I don't know if the
proper idioms have really been discovered. I would suggest converting to
arrays using asarray() right at the beginning of the function. If you
want to spit matrices/what-have-you out again, then you will need to do
some more work. I would suggest looking at the procedure that ufuncs
follow with __array_finalize__ and __array_priority__ and creating a
pure Python reference implementation that others could use. It's
possible that you would be able to turn it into a decorator.

--
Robert Kern
robert.kern@...

"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
  -- Richard Harter
```

 Re: [Numpy-discussion] Re: The idiom for supporting matrices and arrays in a function From: Travis Oliphant - 2006-03-02 01:09
```
Robert Kern wrote:
>Bill Baxter wrote:
>>The NumPy for Matlab Users wiki is currently pleading to have someone
>>fill in "the idiom for supporting both matrices and arrays in a
>>function". Does anyone know what this is?
>
>The subclassing functionality is rather new, so I don't know if the
>proper idioms have really been discovered. I would suggest converting to
>arrays using asarray() right at the beginning of the function. If you
>want to spit matrices/what-have-you out again, then you will need to do
>some more work. I would suggest looking at the procedure that ufuncs
>follow with __array_finalize__ and __array_priority__ and creating a
>pure Python reference implementation that others could use. It's
>possible that you would be able to turn it into a decorator.

This is kind of emerging as the right strategy (although it's the
__array_wrap__ and __array_priority__ methods that are actually relevant
here. But, it is all rather new, so we can forgive Robert this time ;-) )

The __array_priority__ is important if you've got "competing" wrappers
(i.e. a 2-input, 1-output function) and don't know which __array_wrap__
to use for the output.

In Matlab you have one basic type of object. In NumPy you have a basic
ndarray object that can be sub-classed, which gives rise to all kinds of
possibilities. I think we are still figuring out what the best thing to
do is.

Ufuncs convert everything to base-class arrays (what asarray does) and
then use the __array_wrap__ of the input with the highest
__array_priority__ to wrap all of the outputs (except that output arrays
passed in now use their own __array_wrap__). So, I suppose following
their example is a reasonable approach. Look in numpy/linalg/linalg.py
for examples of using the asarray/__array_wrap__ strategy.

The other approach is to let sub-classes through the array conversion
(asanyarray), but this could give rise to later errors if the sub-class
redefines an operation you didn't expect it to, so it is probably not
safe unless all of your operations are functions that can handle any
array subclass.

And yes, I think a decorator could be written that would manage this.
I'll leave that for the motivated...

-Travis
```
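A rough sketch of such a decorator, following the ufunc rule Travis describes (convert inputs with asarray, then re-wrap with the __array_wrap__ of the highest-priority input). The names and details here are mine, not from the thread:

```python
import functools
import numpy as np

def wraps_like_inputs(func):
    """Hypothetical decorator: run func on base-class ndarrays, then
    re-wrap the result using the __array_wrap__ of the input with the
    highest __array_priority__ (the rule ufuncs use internally)."""
    @functools.wraps(func)
    def wrapper(*args):
        # Remember candidate wrapper objects before stripping subclasses.
        candidates = [a for a in args if hasattr(a, '__array_wrap__')]
        plain = [np.asarray(a) for a in args]
        result = np.asarray(func(*plain))
        if candidates:
            best = max(candidates,
                       key=lambda a: getattr(a, '__array_priority__', 0))
            result = best.__array_wrap__(result)
        return result
    return wrapper

@wraps_like_inputs
def axpy(a, x, y):
    # Body sees only base-class ndarrays, so * is elementwise even
    # when the caller passed matrices.
    return a * x + y
```

With np.matrix inputs, axpy should hand back a matrix; with plain ndarrays it returns a plain ndarray.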

 Re: [Numpy-discussion] The idiom for supporting matrices and arrays in a function From: Colin J. Williams - 2006-03-02 13:38
```
Travis Oliphant wrote:
> Bill Baxter wrote:
>
>> The NumPy for Matlab Users wiki is currently pleading to have
>> someone fill in "the idiom for supporting both matrices and arrays
>> in a function". Does anyone know what this is?
>
> I'm not sure what they want, exactly.
>
> Since a matrix is a subclass of an array, you can treat them the same
> for the most part. The * and ** operators are the two that act
> differently on arrays and matrices, so if you want to be sure then you
> use the functions dot and umath.multiply instead of the infix notation.
>
> If you want to convert arbitrary objects to "some kind of array", then
>
> asanyarray(...)

How does asanyarray() differ from asarray()?

Colin W.

> allows sub-classes of the array to pass to your function. To be
> perfectly careful, however, you will need to use explicit functions
> to perform any operation you may need.
>
> The other approach is to store the __array_wrap__ method of the input
> object (if there is any), convert everything to arrays using asarray()
> and then wrap the final result --- this is what ufuncs do internally.
>
> Again, I'm not sure what they mean?
>
> -Travis
```

 Re[2]: [Numpy-discussion] The idiom for supporting matrices and arrays in a function From: Alan G Isaac - 2006-03-02 14:58
```
On Thu, 02 Mar 2006, "Colin J. Williams" apparently wrote:
> How does asanyarray() differ from asarray()?

It leaves subclasses of ndarray alone. See the example below.

Cheers,
Alan Isaac

Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as N
>>> y = N.mat([[1,2],[3,4]])
>>> z1 = N.asarray(y)
>>> z2 = N.asanyarray(y)
>>> z1
array([[1, 2],
       [3, 4]])
>>> z2
matrix([[1, 2],
        [3, 4]])
>>>
```

 [Numpy-discussion] Opinions on the book H.P. Langtangen: Python Scripting for Computational Science, 2nd ed. From: - 2006-03-02 11:22
```
Hello,

Can anybody recommend the book:

Python Scripting for Computational Science
Series: Texts in Computational Science and Engineering, Vol. 3
Langtangen, Hans Petter
2nd ed., 2006, XXIV, 736 p. 33 illus., Hardcover
ISBN: 3-540-29415-5

I am just a beginner in Python programming (but I master C, Matlab). I do
research in applied numerical computing (oriented at control design). Is
this a book for me? It is hard to guess just from the table of contents. I
especially doubt whether the information on the Numpy package is relevant
at all, given the wild development witnessed in the past recent months.
However, peeling off these "numerical" parts, isn't buying some
comprehensive Python book a better buy?

Thanks for your tips,
Zdenek Hurak
```

 Re: [Numpy-discussion] Opinions on the book H.P. Langtangen: Python Scripting for Computational Science, 2nd ed. From: Vinicius Lobosco - 2006-03-02 11:46 ```Hi Zdenek! I have this book, and I like it a lot, as I do Langtangen's other book about Diffpack. I think he is very good at collecting in one place most of the aspects one would want within this subject. So if you are thinking about implementing your software as a web service, implementing a GUI, or gluing your C, C++ or Fortran software to Python or something else, this is the right book. And it is very well written. On the other hand, if you're just looking for information regarding NumPy (from a Matlab user perspective), it would be better to borrow it from the library and copy the one chapter about it. There is, however, no book alternative. The best material is available on-line, to my knowledge. Second, you're also right that some of the material is not updated anymore. Best regards, Vinicius On Thursday 02 March 2006 12.21, Zdeněk Hurák wrote: > Hello, > > Can anybody recommend the book: > > Python Scripting for Computational Science > Series: Texts in Computational Science and Engineering, Vol. 3 > Langtangen, Hans Petter > 2nd ed., 2006, XXIV, 736 p. 33 illus., Hardcover > ISBN: 3-540-29415-5 > > I am just a beginner in Python programming (but I master C, Matlab). I do > research in applied numerical computing (oriented at control design). Is > this a book for me? It is hard to guess just from the table of contents. I > especially doubt if the information on the NumPy package is relevant at all > with the wild development witnessed in recent months. However, > peeling off these "numerical" parts, isn't buying some comprehensive Python > book a better buy? > > Thanks for your tips, > Zdenek Hurak ------------------------ Vinicius Lobosco, PhD Process Intelligence http://www.paperplat.com +46 8 612 7803 +46 73 925 8476 (cell phone) Björnnäsvägen 21 SE-113 47 Stockholm, Sweden ```

 Re: [Numpy-discussion] Opinions on the book H.P. Langtangen: Python Scripting for Computational Science, 2nd ed. From: Bruce Southey - 2006-03-02 13:49 ```Hi, I have the first edition, which is very comprehensive on Python, numarray and Numeric - note that by NumPy he means only Numeric and numarray, not the new numpy. If you have the basics of Python and numerical computing then it is a great book, as it covers many different topics. For example, there is considerable detail on using Fortran and C/C++ with Numeric and numarray, including examples. The majority of the material in the book is still valid for the new numpy. He does cover a number of advanced topics in Python, numarray and Numeric that are hard to find elsewhere. If you are going to do extensive numerical work then it is really worth it. Bruce ```

 Re: [Numpy-discussion] fold distutils back into Python distro? From: - 2006-03-02 01:13 ```Travis> And just for clarification. As far as I understand it, it's not Travis> technically a fork of distutils. It is based on distutils but Travis> just extends it in many ways (i.e. new subclasses that extend Travis> from the Distutils classes). Ah, okay. That's different. I was thinking it was a fork. Skip ```

 Re: [Numpy-discussion] Re: fold distutils back into Python distro? From: - 2006-03-02 01:15 ```Robert> numpy.distutils (nee scipy_distutils) is not really a fork. It's Robert> just a set of extensions to distutils. One of the primary Robert> features is handling extensions with FORTRAN code. If you can Robert> convince Guido that standard distutils ought to handle FORTRAN, Robert> then we might have a deal. I see no reason why that wouldn't be possible in principle. After all, the bdist_* stuff keeps growing to handle a number of different package formats. Robert> Some of the other features might be rolled into the standard Robert> library, but probably not on the 2.5 time frame. The useful ones Robert> might be the system_info stuff (which you have recently become Robert> far too familiar with :-)), the build_src capabilities, colored Robert> output, and the Configuration object. Any idea whether patches could be created fairly easily for each of these? Skip ```
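For reference, the Configuration object and Fortran handling mentioned above are typically used in a setup.py along these lines — a sketch only, with made-up package and source names:

```python
# setup.py -- illustrative sketch; 'mypkg' and the flib sources are invented.
from numpy.distutils.misc_util import Configuration
from numpy.distutils.core import setup

def configuration(parent_package='', top_path=None):
    config = Configuration('mypkg', parent_package, top_path)
    # Listing Fortran sources under an extension "just works":
    # numpy.distutils (via build_src and f2py) compiles them and
    # generates the Python wrapper, which standard distutils cannot do.
    config.add_extension('flib', sources=['flib.pyf', 'flib.f'])
    return config

if __name__ == '__main__':
    setup(configuration=configuration)
```

system_info lookups (e.g. locating BLAS/LAPACK) would plug into the same Configuration before add_extension is called.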

 Re: [Numpy-discussion] Re: fold distutils back into Python distro? From: Gerard Vermeulen - 2006-03-02 08:29 ```On Wed, 01 Mar 2006 18:21:41 -0600 Robert Kern wrote: > Some of the other features might be rolled into the standard library, but > probably not on the 2.5 time frame. The useful ones might be the system_info > stuff (which you have recently become far too familiar with :-)), the build_src > capabilities, colored output, and the Configuration object. IMO, the colored output should be an optional feature. My emacs buffers do not understand it, and the only way to disable it, as far as I remember, is to modify the source. Gerard ```