From: Sasha <nd...@ma...> - 2006-07-24 21:49:38
On 7/24/06, Travis Oliphant <oli...@ie...> wrote:
> Andrew Straw has emphasized that the current strategy of appending the
> SVN version number to development versions of the SVN tree makes it
> hard to do version sorting.

I am not sure what the problem is, but if the concern is that

>>> '0.9.9.2803' > '0.9.9'
True

I suggest fixing that by appending '.final' to the release version string:

>>> '0.9.9.2803' > '0.9.9.final'
False

If we are going to make any changes in this area, I would suggest following python sys and defining three variables: numpy.version (a human-readable string including compiler info), numpy.version_info (a sortable tuple), and numpy.hexversion (a sortable integer that can be used by the C preprocessor). I am not sure whether python sys includes the svn revision number in sys.version (I don't have an svn version of python around), but it does include it in the beta sys.version:

>>> print sys.version
2.5b2 (r25b2:50512, Jul 18 2006, 15:22:50)
[GCC 3.4.4 20050721 (Red Hat 3.4.4-2)]
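[A minimal sketch of the tuple-based scheme Sasha proposes, modeled on Python's own sys.version_info and sys.hexversion; the field layout and values below are illustrative, not actual numpy attributes:]

    # Sortable version tuples compare correctly where raw strings do not.
    version_info_dev = (0, 9, 9, 'dev', 2803)    # development snapshot
    version_info_final = (1, 0, 0, 'final', 0)   # release

    assert version_info_dev < version_info_final  # tuples sort sanely
    assert '0.9.9.2803' > '0.9.9'                 # plain strings do not

    # A hexversion-style integer, packed like Python's sys.hexversion,
    # is trivially comparable from the C preprocessor as well:
    hexversion = (1 << 24) | (0 << 16) | (0 << 8)  # 1.0.0 -> 0x01000000
    assert hexversion == 0x01000000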
From: David M. C. <co...@ph...> - 2006-07-24 21:36:00
On Mon, 24 Jul 2006 15:06:57 -0600 Travis Oliphant <oli...@ie...> wrote:
> Andrew Straw has emphasized that the current strategy of appending the
> SVN version number to development versions of the SVN tree makes it
> hard to do version sorting.
>
> His proposal is to not change the version number until the first beta
> comes out.
>
> In other words, the trunk should not be 1.1 but 1.0dev?
>
> Are there any opposing opinions to this change? I personally think that
> is a confusing numbering scheme, because 1.0dev seems to imply it's the
> development for version 1.0 instead of 1.1. But, if several others
> think it's a good idea to support easy sorting of version numbers, then
> I will concede.
>
> Perhaps we can have a version 'number' such as 1.0plus?

What if we use even numbers for stable releases and odd for development releases, like the Linux kernels? E.g., make 1.1 the dev branch (current trunk), and make 1.2 the next stable release after 1.0.

Or, make the trunk 1.0.99.<svn version>. The .99 usually makes it pretty clear that it's the development branch.

-- 
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/    co...@ph...
From: David M. C. <co...@ph...> - 2006-07-24 21:24:23
On Sat, 22 Jul 2006 17:50:13 -0400 Steve Lianoglou <lis...@ar...> wrote:
> Hi folks,
>
> Since the 1.0 release is imminent, I just wanted to draw attention to
> two failures I get when I run numpy.test(1).
>
> I've never been able to get numpy to pass all test cases, but now it
> fails a second one, so I'm pasting it below. Please let me know if
> these are non-consequential.

It fails the second one because I added it, because the failure in the first one wasn't clear enough :-)

> System info:
> + Intel Mac (MacBook Pro)
> + OS X.4.7
> + numpy version: 1.0.2881
>
> Test failures:
>
> FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 47, in check_large_types
>     assert b == 6765201, "error with %r: got %r" % (t, b)
> AssertionError: error with <type 'float128scalar'>: got 0.0
>
> ======================================================================
> FAIL: check_types (numpy.core.tests.test_scalarmath.test_types)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/opt/local/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 20, in check_types
>     assert a == 1, "error with %r: got %r" % (atype, a)
> AssertionError: error with <type 'float128scalar'>: got 1.02604810941480982577e-4936
>
> ----------------------------------------------------------------------
> Ran 468 tests in 1.157s
>
> FAILED (failures=2)
> Out[2]: <unittest.TextTestRunner object at 0x15e3510>

I'm aware of this (http://projects.scipy.org/scipy/numpy/ticket/183). It's a problem on PPC Macs also. Travis thinks it may be a compiler problem. I've had a look, and can't see anything obvious. It *could* be that somewhere there's a typo in the code where things are set when sizeof(long double) == 128.

-- 
|>|\/|<
David M. Cooke    http://arbutus.physics.mcmaster.ca/dmc/    co...@ph...
From: Robert K. <rob...@gm...> - 2006-07-24 21:21:33
Sebastian Haase wrote:
> Hi,
> Essentially I'm looking for the equivalent of what was in numarray:
>     from numarray import random_array
>     random_array.poisson(arr)
>
> That is: if, for example, arr is a 256x256 array of positive integers,
> then this returns a new array of random numbers that are drawn according
> to Poisson statistics, where arr's value at coordinate y,x determines
> the mean of the Poisson distribution used to generate a new value
> for y,x.

I'm afraid that at this point in time, the distributions only accept scalar values for the parameters. I've thought about reimplementing the distribution functions as ufuncs, but that's a hefty chunk of work that won't happen for 1.0. I'm afraid that, for now, you're stuck with iterating over the values.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
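[A minimal sketch of the element-wise iteration Robert describes; the helper name is hypothetical, and later NumPy releases accept array-valued means directly, so the loop is only needed under the scalar-only constraint:]

    import numpy as np

    def poisson_from_means(means):
        # Draw one Poisson sample per element, using each element as
        # the mean of the distribution for that coordinate.
        means = np.asarray(means)
        out = np.empty(means.shape, dtype=int)
        for idx in np.ndindex(means.shape):   # iterate over coordinates
            out[idx] = np.random.poisson(means[idx])
        return out

    arr = np.random.randint(1, 100, size=(256, 256))  # per-pixel means
    noisy = poisson_from_means(arr)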
From: Fernando P. <fpe...@gm...> - 2006-07-24 21:15:57
On 7/24/06, Travis Oliphant <oli...@ie...> wrote:
> Andrew Straw has emphasized that the current strategy of appending the
> SVN version number to development versions of the SVN tree makes it
> hard to do version sorting.
>
> His proposal is to not change the version number until the first beta
> comes out.

I have to say that being able to quickly see the actual SVN revision number in the version helps a LOT in tracking down problems. Just look at how many posts on the list start with the canonical

In [4]: import numpy, scipy
In [5]: numpy.__version__
Out[5]: '0.9.9.2803'
In [6]: scipy.__version__
Out[6]: '0.5.0.2079'

printouts before discussing the problem. I don't really feel strongly about the issue, but if you change this, then please add a __revision__ attribute as well, so that this information can be quickly asked for in pure python (we don't want to bother newbies with obscure SVN lingo, especially if they got their install from someone else).

Cheers,
f
From: Travis O. <oli...@ie...> - 2006-07-24 21:06:59
Andrew Straw has emphasized that the current strategy of appending the SVN version number to development versions of the SVN tree makes it hard to do version sorting.

His proposal is to not change the version number until the first beta comes out. In other words, the trunk should not be 1.1 but 1.0dev?

Are there any opposing opinions to this change? I personally think that is a confusing numbering scheme, because 1.0dev seems to imply it's the development for version 1.0 instead of 1.1. But, if several others think it's a good idea to support easy sorting of version numbers, then I will concede.

Perhaps we can have a version 'number' such as 1.0plus?

-Travis
From: Travis O. <oli...@ie...> - 2006-07-24 21:01:26
For the next several months, until 1.0 comes out, please make changes that belong in 1.0 on the ver1.0 branch of the NumPy SVN tree. Then, periodically, we can merge those changes back to the trunk for version 1.1.

-Travis
From: Sebastian H. <ha...@ms...> - 2006-07-24 20:28:15
On Monday 24 July 2006 12:23, Travis Oliphant wrote:
> Sebastian Haase wrote:
> > Hi,
> > I have a (medical) image file.
> > I wrote a nice interface based on memmap using numarray.
> > The class design I used was essentially to return a numarray array
> > object with a new "custom" attribute giving access to special
> > information about the base file.
> >
> > Now with numpy I noticed that a numpy object does not allow adding
> > new attributes!! (How is this? Why?)
> >
> > Travis already suggested (replying to one of my last postings)
> > creating a new subclass of numpy.ndarray.
> >
> > But how do I initialize an object of my new class to be "basically
> > identical to" an existing ndarray object? Normally I could do:
> >     class B(N.ndarray):
> >         pass
> >     a = N.arange(10)
> >     a.__class__ = B
> >
> > BUT I get this error:
> >     >>> a.__class__ = B
> >     Traceback (most recent call last):
> >       File "<input>", line 1, in ?
> >     TypeError: __class__ assignment: only for heap types
> >
> > What is a "heap type"? Why? How can I do what I want?
>
> A heap type is one created in Python (i.e. not builtin with C).
>
> You should be able to do
>
>     a = a.view(B)
>
> -Travis

Thanks - (I'm just replying because I assume your helpful answer was mistakenly sent to me only and did not make it to the list)
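[A minimal sketch of the a.view(B) idiom Travis suggests; the class name follows the message, and the metadata attribute is illustrative:]

    import numpy as N

    class B(N.ndarray):
        # An ndarray subclass is a heap type, so its instances can
        # carry arbitrary new attributes.
        pass

    a = N.arange(10)
    b = a.view(B)                      # reinterpret as B; no data copy
    b.custom = {'file': 'image.dat'}   # hypothetical metadata attribute

    print(type(b))    # <class '__main__.B'>
    print(b.custom)   # {'file': 'image.dat'}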
From: Sven S. <sve...@gm...> - 2006-07-24 20:21:24
Travis Oliphant schrieb:
> Sven Schreiber wrote:
> >
> The change was trying to fix up some cases but did break this one. The
> problem is that figuring out whether or not to transpose the result is
> a bit tricky. I've obviously still got it wrong.

Ok, this is obviously one of the places where an automated test would be helpful to avoid regressions. In the beta months ahead, I would like to help with contributing such tests. However, since I'm not a developer, I have little to no experience with that.

Is there any numpy-specific documentation or guide on how to add tests? (I know there's plenty of guides on the general python testing framework.) Or any brilliant example in the distribution that should be followed?

Regards,
Sven

p.s.: Is there going to be a new beta soon? Or is there a quick fix to this slicing problem in the meantime? Thanks.
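[A minimal sketch of the kind of regression test Sven has in mind, written against numpy.testing; the test name and the expected slice shape are illustrative, not taken from the numpy test suite:]

    import numpy as np
    from numpy.testing import assert_array_equal

    def test_matrix_column_slice_keeps_shape():
        # A column slice of a matrix should stay a (2, 1) column,
        # not come back transposed as a row.
        m = np.matrix([[1, 2], [3, 4]])
        col = m[:, 0]
        assert col.shape == (2, 1)
        assert_array_equal(col, np.matrix([[1], [3]]))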
From: Travis O. <oli...@ie...> - 2006-07-24 19:55:34
Sebastian Haase wrote:
> Hi,
> if I have a numpy array 'a' and say:
>     a.dtype == numpy.float32
> Is the result independent of a's byteorder?

The byteorder is a property of the data-type (not of the array) --- this is different from numarray, where byteorder is a property of the array.

a.dtype = numpy.float32 will always set the data-type to a machine-native float32 data-type.

-Travis
From: Travis O. <oli...@ie...> - 2006-07-24 19:55:34
Sebastian Haase wrote:
> Hi,
> if I have a numpy array 'a' and say:
>     a.dtype == numpy.float32
> Is the result independent of a's byteorder?
> (That's what I would expect! Just checking!)

I think I misread the question and saw "==" as "=". But the answer I gave should still help: the byteorder is a property of the data-type. There is no such thing as "a's" byteorder.

Thus, numpy.float32 (which is actually an array-scalar and not a true data-type) is interpreted as a machine-byte-order IEEE floating-point data-type with 32 bits. Thus, the result will depend on whether or not a.dtype is machine-order.

-Travis
From: Travis O. <oli...@ie...> - 2006-07-24 19:55:33
Sven Schreiber wrote:
> Thanks for helping out on matrix stuff, Bill!
>
> Hm, I don't know -- if you don't mind I'd like to get a second opinion
> before I mess around there. It's funny though that the changeset has
> the title "fixing up matrix slicing" or something like that...

The change was trying to fix up some cases but did break this one. The problem is that figuring out whether or not to transpose the result is a bit tricky. I've obviously still got it wrong.

-Travis
From: Travis O. <oli...@ie...> - 2006-07-24 19:55:32
Sebastian Haase wrote:
> Hi,
> I'm converting SWIG typemap'ed C extensions from numarray to numpy.
> I studied (and use parts of) numpy.i from the doc directory.
> I noticed that there is no decref for the TYPEMAP_INPLACE2 typemap.
> This uses a function obj_to_array_no_conversion() which in turn just
> returns the original PyObject* (cast to a PyArrayObject* after some
> sanity checks). It looks to me that in this case there should be an
> explicit Py_INCREF() - in case the function is threaded (releases the
> Python GIL) - since it holds a pointer to that object's data.

Probably true. The numpy.i typemaps are not thoroughly reference-count checked.

> (Alternatively) Travis suggested (at the
> http://www.scipy.org/Converting_from_numarray wiki page) using
> PyArray_FromAny - is this incrementing the ref.count (implicitly)?
> The numarray equivalent (NA_InputArray) IS incrementing the ref.count
> (as far as I know...).

Yes, you get back a new reference from PyArray_FromAny.

> Furthermore, on that same wiki page, PyArray_FromAny() is called
> together with PyArray_DescrFromType(<type>). After searching through
> the numpy source I found that in blasdot/_dotblas.c (in
> dotblas_matrixproduct()) there is an explicit Py_INCREF even on the
> dtype returned from PyArray_DescrFromType.

PyArray_FromAny consumes a reference to the PyArray_Descr * object (which is a Python object). Thus, because PyArray_FromAny is called twice with the same data-type object, there is a need to increment its reference count.

-Travis
From: Sebastian H. <ha...@ms...> - 2006-07-24 19:48:41
Hi,
Essentially I'm looking for the equivalent of what was in numarray:

    from numarray import random_array
    random_array.poisson(arr)

That is: if, for example, arr is a 256x256 array of positive integers, then this returns a new array of random numbers that are drawn according to Poisson statistics, where arr's value at coordinate y,x determines the mean of the Poisson distribution used to generate a new value for y,x.

[[This is needed e.g. to simulate quantum noise in CCD images. Each pixel has a different amount of noise depending on what its (noise-free) "input" value was.]]

Thanks,
Sebastian Haase
From: Travis O. <oli...@ie...> - 2006-07-24 19:24:53
Sebastian Haase wrote:
> Hi!
> I'm trying to convert my numarray records code to numpy.
>
> >>> type(m.hdrArray)
> <class 'numpy.core.records.recarray'>
> >>> m.hdrArray.d
> [(array([ 1.,  1.,  1.], dtype=float32),)]
>
> but I get:
>
> >>> m.hdrArray[0].getfield('d')
> 5.43230922614e-312
>
> Am I missing something or is this a bug?

Probably a bug. The getfield method needs more testing. File a ticket.

-Travis
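[A minimal workaround sketch, assuming a record array shaped like the one in the message (the dtype and values are reconstructed for illustration): index the field by name instead of going through getfield().]

    import numpy as np

    # Reconstruct a recarray like m.hdrArray: one record with a
    # field 'd' holding three float32 values.
    hdr = np.rec.array([(np.array([1., 1., 1.], dtype=np.float32),)],
                       dtype=[('d', np.float32, (3,))])

    print(hdr[0]['d'])   # field access by name: [ 1.  1.  1.]
    print(hdr['d'][0])   # equivalent: select the field, then the record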
From: Travis O. <oli...@ie...> - 2006-07-24 19:18:50
Paul Barrett wrote:
> I'm having a problem converting a C extension module that was
> originally written for numarray to use numpy. I'm using swig to create
> a wrapper file for the C code. I have added the
> numpy.get_numarray_include() method to my setup.py file and have
> changed the numarray/libnumarray.h to numpy/libnumarray.h. The
> extension appears to compile fine (with the exception of some warning
> messages). However, when I import the module, I get a segfault. Do I
> need to add anything else to the shared library's initialization step
> other than import_libnumarray()?

No, that should be enough. The numarray C-API has only been tested on a few extension modules. It's very possible some of the calls have problems.

It's also possible you have an older version of numpy lying around somewhere. Do you get any kind of error message on import?

-Travis
From: Sebastian H. <ha...@ms...> - 2006-07-24 18:41:30
Hi,
Are numpy.product() and numpy.prod() doing the exact same thing? If yes, why are they pointing to two different functions?

>>> N.prod
<function prod at 0x43cef56c>
>>> N.product
<function product at 0x43cef304>

Thanks,
Sebastian Haase
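[A quick check, assuming a NumPy of that era where product() still exists (it was a long-standing alias of prod() and has since been removed): compare the results rather than the function objects.]

    import numpy as np

    a = np.arange(1, 5)    # [1, 2, 3, 4]
    print(np.prod(a))      # 24
    print(np.product(a))   # 24 - same computation; product() was kept
                           # as an alias for backward compatibility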
From: Karol L. <kar...@kn...> - 2006-07-24 18:37:33
On Monday 24 July 2006 20:10, Sebastian Haase wrote:
> Hi!
> Thanks for the reply.
> Did you actually run this? I get:
> >>> a = N.arange(10, dtype=N.float32)
> >>> a.dtype == N.float32
> True
> >>> N.__version__
> '0.9.9.2823'

Hi,
Looks like I need to upgrade my working version or something...

>>> import numpy as N
>>> N.__version__
'0.9.8'
>>> a = N.arange(10, dtype=N.float32)
>>> a.dtype == N.float32
False

Cheers,
Karol

-- 
written by Karol Langner
Mon Jul 24 20:36:05 CEST 2006
From: Sebastian H. <ha...@ms...> - 2006-07-24 18:10:09
On Monday 24 July 2006 03:18, Karol Langner wrote:
> On Monday 24 July 2006 06:47, Sebastian Haase wrote:
> > Hi,
> > if I have a numpy array 'a' and say:
> >     a.dtype == numpy.float32
> > Is the result independent of a's byteorder?
> > (That's what I would expect! Just checking!)
> >
> > Thanks,
> > Sebastian Haase
>
> The condition will always be False, because you're comparing the wrong
> things here. numpy.float32 is a scalar type, not a dtype.

Hi!
Thanks for the reply. Did you actually run this? I get:

>>> a = N.arange(10, dtype=N.float32)
>>> a.dtype == N.float32
True
>>> N.__version__
'0.9.9.2823'

> >>> numpy.float32
> <type 'float32scalar'>
> >>> type(numpy.dtype('>f4'))
> <type 'numpy.dtype'>
>
> And I think byteorder matters when comparing dtypes:
> >>> numpy.dtype('>f4') == numpy.dtype('<f4')
> False

OK - I did a test now:

>>> b = a.copy()
>>> b = b.newbyteorder('big')
>>> a.dtype == b.dtype
False
>>> a.dtype
'<f4'
>>> b.dtype
'>f4'

How can I do a comparison showing that both a and b are float32?

Thanks,
Sebastian Haase
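[A sketch of two byteorder-insensitive checks (b is reconstructed here as a big-endian copy of a; the comments assume a little-endian machine): compare the scalar type behind each dtype, or normalize the byte order before comparing.]

    import numpy as np

    a = np.arange(10, dtype=np.float32)       # native byte order
    b = a.astype(a.dtype.newbyteorder('>'))   # big-endian copy

    print(a.dtype == b.dtype)                 # False: byte order differs
    print(a.dtype.type is b.dtype.type)       # True: both are float32
    print(a.dtype.newbyteorder('=') ==
          b.dtype.newbyteorder('='))          # True after normalizing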
From: Travis O. <oli...@ie...> - 2006-07-24 17:05:45
Graham Cummins wrote:
> Greetings,
>
> I just downloaded numpy 1.0b1. I see a lot of changes from 0.9.8, and
> I'm curious as to whether these changes will be a lasting property of
> numpy 1.0 and later.
>
> Most of the changes relate to nomenclature for type constants (e.g.
> int32, complex128, newaxis) and functions (e.g. inverse_real_fft ->
> ifft). Although it takes some time to comb through code for all of the
> possible name changes (there are lots!), it's easy enough to do.

Release notes will be forthcoming. These changes will be lasting... What changed is that the "old" names were placed only in a compatibility module (numpy.oldnumeric). Import from there if you want the old names (convertcode was also changed to alter Numeric --> numpy.oldnumeric). This was done to make it clearer what is "old" and for compatibility purposes only, and what new code should be written with.

> The thing that is taking me longer is (as usual) converting c
> extensions. Float32 and PyArray_Float32 used to be defined in 0.9.8,
> and are now not. AFAICT, npy_float works in the same way Float32 used
> to work, but I haven't yet figured out what to use in place of
> PyArray_Float32 in, for example, "PyArray_FROM_OTF(data, ??, ALIGNED |
> CONTIGUOUS);"

Here, we assigned the prefixes NPY_ and npy_ to all the old CAPITALIZED and uncapitalized names, respectively, to avoid the name clashes that occur commonly when using NumPy to wrap another library. The un-prefixed names are still available when you use #include "numpy/noprefix.h" (which is what NumPy itself uses).

> On another topic, when I install numpy (version 0.9.8 or 1.0b1) using
> "setup.py install", the headers needed to build extensions don't get
> moved into my python distribution directory tree. I've been moving
> these files by hand, and that seems to work, but could I pass some
> argument to distutils that would do this automatically?

To support multiple-version installations of NumPy (like eggs allow), it's important to put the headers in their own location and not in a system-wide directory. If you want to place them system-wide, you currently need to copy them by hand, but that's not recommended. Just append the output of numpy.get_include() to the list of include directories.

-Travis
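[A minimal setup.py sketch of the recommended approach; the module and source file names are hypothetical:]

    from distutils.core import setup, Extension
    import numpy

    # Append NumPy's header location rather than copying headers into a
    # system-wide include directory.
    ext = Extension('myext',
                    sources=['myext.c'],
                    include_dirs=[numpy.get_include()])

    setup(name='myext', version='0.1', ext_modules=[ext])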
From: Pierre B. de R. <Pie...@in...> - 2006-07-24 16:40:16
Ivan Vilata i Balaguer wrote:
> Pierre Barbier de Reuille wrote::
>
> >>> import numpy
> >>> numpy.__version__
> '0.9.9.2852'
> >>> numpy.bool_
> <type 'boolscalar'>
>
> Sorry if I didn't make my question clear. What I find lacking is a
> ``numpy.boolean`` type which is to ``numpy.bool_`` the same as
> ``numpy.string`` is now to ``numpy.str_`` (i.e. a pure reference with
> a prettier name). Otherwise I'm not getting what you're meaning! C:)

Ok, so maybe it is because it is called "bool8"? So that it is clear that it takes 8 bits?

Pierre
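[A quick check of the alias Pierre mentions, assuming a pre-2.0 NumPy where bool8 still exists; it names the same scalar type as bool_, with the "8" spelling out the width:]

    import numpy as np

    print(np.bool8 is np.bool_)         # True: bool8 is a plain alias
    print(np.dtype(np.bool_).itemsize)  # 1 byte, i.e. 8 bits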
From: Ivan V. i B. <iv...@ca...> - 2006-07-24 16:29:50
Pierre Barbier de Reuille wrote::

> >>> import numpy
> >>> numpy.__version__
> '0.9.9.2852'
> >>> numpy.bool_
> <type 'boolscalar'>

Sorry if I didn't make my question clear. What I find lacking is a ``numpy.boolean`` type which is to ``numpy.bool_`` the same as ``numpy.string`` is now to ``numpy.str_`` (i.e. a pure reference with a prettier name). Otherwise I'm not getting what you're meaning! C:)

::

Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.          V V   Enjoy Data
                           ""
From: Pierre B. de R. <Pie...@in...> - 2006-07-24 15:16:59
Ivan Vilata i Balaguer wrote:
> Hi all,
>
> Since there is a "string" type which is the same as "str_", how come
> there is no "boolean" type which is the same as "bool_"? Did I miss
> some design decision about naming? You know, just for completeness,
> not that it is some kind of problem at all! ;)

Well ...

>>> import numpy
>>> numpy.__version__
'0.9.9.2852'
>>> numpy.bool_
<type 'boolscalar'>

Pierre
From: Andrew J. <a.h...@ba...> - 2006-07-24 14:15:23
Hi All,

I'm finding myself dealing with n-dimensional grids quite a lot, and trying to do some 'tricky' index manipulation. The main problem is manipulating arrays when I don't know a priori the number of dimensions; in essence I need to be able to iterate across dimensions.

First, I've got three arrays of lower bounds (param_min), upper bounds (param_max), and numbers of steps (nstep). I'm using the following incantations to produce the associated mgrid and ogrid arrays:

    args = tuple(slice(p1, p2, n*1j)
                 for p1, p2, n in zip(param_min, param_max, nstep))
    param_grid = N.mgrid.__getitem__(args)

Is this the easiest way to do this?

Second, from the mgrid object, param_grid, I want to recover the step sizes (assume I've thrown away the args so I can't make an ogrid object). This seems to work:

    deltas = numpy.empty(npar)
    for i in xrange(npar):
        idxtup = (i,) + (0,)*i + (1,) + (0,)*(npar-1-i)
        deltas[i] = param_grid[idxtup] - param_grid[(i,) + (0,)*npar]

(Or I could compress this into a single, somewhat complicated list comprehension.) Again, this seems a bit overly complicated. Any ideas for simplifying it?

But at least I can work out how to do these things. Finally, however, I need to reconstruct the individual param_min:param_max:(nstep*1j) 1-d arrays (i.e., the flattened versions of the ogrid output). These are effectively param_grid[i, 0, ..., 0, :, 0, ..., 0] where ':' is in slot i. But I don't know how to emulate ':' in either a slice object or tuple-indexing notation. Obviously I could just do a more complicated version of the deltas[] calculation, or direct manipulation on param_grid.flat, but both seem like too much work on my part...

Thanks in advance!

Andrew

p.s. Thanks to Travis for all his hard work, especially in the run-up to 1.0b (although test() crashes on my PPC Mac... more on that later when I've had time to play).
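[A sketch addressing all three questions; the bounds and step counts are illustrative. N.mgrid can be indexed with the tuple directly, the step sizes follow from the bounds, and slice(None) is the programmatic equivalent of ':'.]

    import numpy as np

    param_min = np.array([0.0, 10.0])
    param_max = np.array([1.0, 20.0])
    nstep = np.array([5, 3])
    npar = len(nstep)

    # 1. Indexing mgrid with a tuple of slices avoids the explicit
    #    __getitem__ call.
    args = tuple(slice(p1, p2, n * 1j)
                 for p1, p2, n in zip(param_min, param_max, nstep))
    param_grid = np.mgrid[args]        # shape (npar,) + tuple(nstep)

    # 2. Step sizes, without any index gymnastics.
    deltas = (param_max - param_min) / (nstep - 1.0)

    # 3. slice(None) plays the role of ':' inside a constructed index
    #    tuple, recovering the 1-d axis arrays.
    axes = [param_grid[(i,) + (0,)*i + (slice(None),) + (0,)*(npar-1-i)]
            for i in range(npar)]
    print(axes[0])   # [ 0.    0.25  0.5   0.75  1.  ]
    print(axes[1])   # [ 10.  15.  20.]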