From: Phlip <ppl...@om...> - 2001-01-11 19:49:31
|
Proclaimed Chris Barker from the mountaintops:

> I waited a little while before answering this, because there are
> certainly people more qualified to do so than me. I am only on the NumPy
> list, so it may have been answered on a different list.

The irritation is, without a CXX list server, I'm having to molest the
off-topic fora where Dubois et al are reputed to hang out.

> The short answer is yes, you will have to generate a new array and
> copy the old one into the new. MultiArray objects were created to
> provide efficient storage of lots of numbers (among other things).
> Because of this requirement, the numbers are stored as a large single
> array, and so they cannot be re-sized without re-creating that array.
> You may be able to change just the data array itself (and a few
> properties), rather than creating a new structure entirely, but it
> probably wouldn't be worth it.

Here's the state of the system:

static void copyData
    (
    Py::Array & ma,
    vector< vector<string> > & database,
    int maxFields
    )
{
#if 1
    Py::Sequence shape (Py::Int (2));  // <-- pow
    shape[0] = Py::Int (int (database.size()));
    shape[1] = Py::Int (maxFields);
    PyArray_Reshape ((PyArrayObject*)ma.ptr(), shape.ptr());
#else
    int zone[] = { database.size(), maxFields };
    Py::Object mo ((PyObject*)PyArray_FromDims (2, zone, PyArray_OBJECT));
    ma = mo;
    assert (ma.dimension(1) == database.size());
    assert (ma.dimension(2) == maxFields);

    for (int idxRow (0); idxRow < maxRows; ++idxRow)
    {
        Py::Array row (ma[idxRow]);

        for (int idxCol (0); idxCol < maxFields; ++idxCol)
        {
            string const & str (database[idxRow][idxCol]);
            Py::String pyStr (str.c_str());
            Py::Object obj (pyStr);
            row[idxCol] = obj;  // <-- pow
        }
    }
#endif
}

Both versions crash on the line marked "pow". The top one crashes when I
think I'm trying to do the equivalent of the Python

    array.shape = (2, 4)

The bottom one crashes after creating a new array, right when I try to copy
in an element. The Pythonic equivalent of

    matrix[row][col] = "8"

Everyone remember I'm not trying to preserve the old contents of the
array - just return from the extension a new array full of stuff.

> By the way, I'd like to hear how this all works out. Being able to use
> NumPy Arrays in extensions more easily would be great!

Our Chief Drive-by Architect has ordered me to use them like an in-memory
database. >Sigh<

--Phlip
"...fanatical devotion to the Pope, and cute red uniforms..." |
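Phlip's goal above, returning a fresh 2-D array of strings from the extension, can be sketched at the Python level with today's NumPy (the `database_to_array` name and the None-padding of short rows are my own illustration, not the CXX code under discussion):

```python
import numpy as np

def database_to_array(database):
    """Build a 2-D object array from a list of rows of strings,
    padding short rows with None (the in-memory-database idea)."""
    rows = len(database)
    cols = max(len(row) for row in database)
    # a fresh array every time; nothing is resized in place
    arr = np.empty((rows, cols), dtype=object)
    for i, row in enumerate(database):
        for j, value in enumerate(row):
            arr[i, j] = value  # element-wise copy, like the inner C++ loop
    return arr

table = database_to_array([["a", "b"], ["c"]])
print(table.shape)   # (2, 2)
print(table[1, 0])   # c
```

Object arrays from `np.empty` start out filled with None, which is why the padding comes for free.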
From: Chris B. <cb...@jp...> - 2001-01-09 18:48:58
|
Phlip TheProgrammer wrote:

> And using CXX, we wrap these objects in high-level C++ methods. Not the
> low-level fragile C. The effect compares favorably to ATL for VC++ and
> ActiveX.
>
> If we pass a multiarray into a function expecting a PyArrayObject, how
> then do we add new elements to it? I tried things like 'push_back()' and
> 'setItem()', but they either did not exist or did not extend the array.
>
> Am I going to have to generate a new array and copy the old one into the
> new?

I waited a little while before answering this, because there are certainly
people more qualified to do so than me. I am only on the NumPy list, so it
may have been answered on a different list.

The short answer is yes, you will have to generate a new array and copy the
old one into the new. MultiArray objects were created to provide efficient
storage of lots of numbers (among other things). Because of this
requirement, the numbers are stored as a large single array, and so they
cannot be re-sized without re-creating that array. You may be able to
change just the data array itself (and a few properties), rather than
creating a new structure entirely, but it probably wouldn't be worth it.

By the way, I'd like to hear how this all works out. Being able to use
NumPy Arrays in extensions more easily would be great!

-Chris

--
Christopher Barker, Ph.D.
cb...@jp...
http://www.jps.net/cbarker
Water Resources Engineering
Coastal and Fluvial Hydrodynamics |
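Chris's "generate a new array and copy the old one into the new" advice looks like this in Python (a minimal sketch using today's numpy names; `append_row` is a hypothetical helper, and modern `np.append`/`np.resize` perform the same allocate-and-copy internally):

```python
import numpy as np

def append_row(arr, row):
    """Numeric-style arrays cannot grow in place: allocate a bigger
    array, copy the old contents over, then write the new row."""
    new = np.empty((arr.shape[0] + 1, arr.shape[1]), dtype=arr.dtype)
    new[:-1] = arr   # copy the old block
    new[-1] = row    # write the appended row
    return new       # the original array is left untouched

a = np.zeros((2, 3))
b = append_row(a, [1.0, 2.0, 3.0])
print(b.shape)  # (3, 3)
```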
From: Guido v. R. <gu...@py...> - 2001-01-09 13:57:16
|
It appears that all the links to NumPy for Download and Documentation on
www.python.org are broken. This looks bad for www.python.org and for
Numerical Python! In particular the links on these pages are all broken:

http://www.python.org/topics/scicomp/numpy_download.html
    link to ftp://ftp-icf.llnl.gov/pub/python/README.html

http://www.python.org/topics/scicomp/documentation.html
    links to ftp://ftp-icf.llnl.gov/pub/python/numericalpython.pdf,
    ftp://ftp-icf.llnl.gov/pub/python/NumTut.tgz, and the FAQ
    (http://www.python.org/cgi-bin/numpy-faq)

Can anybody "in the know" mail me updated links? If you have updated HTML
for the pages containing the links that would be even better! Please mail
directly to gu...@py...; I'm not subscribed to this list.

--Guido van Rossum (home page: http://www.python.org/~guido/) |
From: Paul F. D. <pau...@ho...> - 2001-01-08 17:27:46
|
The helpful person who submitted this report asks for a reply but does not
give their name. Thank you for finding this problem.

It turns out that this is and this isn't a problem. On the whole, it is. I
have a fix but need to warn people that it will break any existing code
that takes advantage of a feature I did not advertise. (:->

Here's the story: class MA inherits from a class ActiveAttributes, supplied
with MA as an announced module in the package. ActiveAttributes is designed
to encapsulate the behavior that Numeric has, of having attributes that are
not really attributes but are trios of functions. For an example of what I
mean by this, consider the .shape attribute.

    print x.shape
    x.shape = (3, 9)
    del x.shape

shape appears to be the name of an attribute but in fact it is not. There
are actually a trio of handlers, so that the above actually execute more
like they were

    print x.__getshape()
    x.__setshape((3, 9))
    x.__delattr('shape')

Now for the problem. In implementing ActiveAttributes, I set up an indirect
system of handlers for each "active" attribute like 'shape', and in
remembering what handlers to use I carelessly used *bound* methods rather
than *unbound* methods. This meant that each instance contained a reference
cycle. However, Python 2.0 has a cyclic garbage collector that runs
periodically, so over the course of a long routine like my test routine the
garbage was in fact being collected. The bug-poster's patch reveals the
problem by doing a lot of memory operations in very few statements.

The fix is to change ActiveAttributes to save unbound rather than bound
methods. This works and prevents the observed growth. I will check it in to
CVS with a new release number 4.0 when I have finished testing another idea
that may also be an improvement. Current users may wish to invoke the
facilities of the "gc" module in 2.0 to increase the frequency of
collection if they are currently experiencing a problem.

To anyone out there who had noticed the activeattr.py module and used it,
this change will require an incompatible change to your code when you
switch to the new version.

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of no...@so...
Sent: Monday, January 08, 2001 4:27 AM
To: no...@so...; no...@so...; num...@so...
Subject: [Numpy-developers] [Bug #128025] MA seems to leak memory

Bug #128025, was updated on 2001-Jan-08 04:26
Here is a current snapshot of the bug.

Project: Numerical Python
Category: Fatal Error
Status: Open
Resolution: None
Bug Group: Robustness lacking
Priority: 5
Submitted by: nobody
Assigned to: nobody
Summary: MA seems to leak memory

Details: I executed the following code in the interpreter:

    >>> import MA
    >>> a = MA.arange(10000000)
    >>> b = a[:5000000]
    >>> b = a[:5000000]
    >>> b = a[:5000000]
    >>> b = a[:5000000]
    >>> b = a[:5000000]
    >>> b = a[:5000000]
    >>> b = a[:5000000]
    >>> b = a[:5000000]

After array a was created, memory consumption was around 40 MB. After the b
slices, it's now 190 MB. Even if MA does make a copy-on-slice, it should
IMHO free the slice after there are no more references to it. (I am not a
Sourceforge user or anything, so I would appreciate if someone would at
least let me know if this is a known problem.)

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=128025&group_id=1369

_______________________________________________
Numpy-developers mailing list
Num...@li...
http://lists.sourceforge.net/mailman/listinfo/numpy-developers |
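The bound-method cycle Paul describes is easy to reproduce. A sketch in modern Python spelling (in Python 2.0 the bound method's back-reference was `im_self` rather than `__self__`; the class names here are mine):

```python
import gc

class Leaky:
    """The bug: remembering a *bound* method makes each instance
    reference itself through the stored method object."""
    def __init__(self):
        # self.handler -> bound method -> self : a reference cycle
        self.handler = self._get_shape

    def _get_shape(self):
        return (3, 9)

obj = Leaky()
assert obj.handler.__self__ is obj   # the cycle, made explicit
assert obj.handler() == (3, 9)

del obj                   # reference counting alone cannot free the cycle,
reclaimed = gc.collect()  # but the cyclic collector (new in 2.0) can
assert reclaimed >= 1
```

Saving the *unbound* function on the class and passing the instance explicitly, as in the fix, leaves no cycle; `gc.set_threshold` is the knob Paul suggests for collecting such garbage more often in the meantime.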
From: phlip <phl...@my...> - 2001-01-07 05:44:14
|
On Friday 05 January 2001 06:24, pf...@mm... wrote:
> Be sure to look at the CXX on sourceforge.

Ja. This is a sweet combination. The Joy of Swig is we can declare some
parameters as basic types, but others as raw PyObjects. Then we can have
our way with these. And using CXX, we wrap these objects in high-level C++
methods. Not the low-level fragile C. The effect compares favorably to ATL
for VC++ and ActiveX.

But there's just one little question here. (Hence the buckshot of list
servers.) If we pass a multiarray into a function expecting a
PyArrayObject, how then do we add new elements to it? I tried things like
'push_back()' and 'setItem()', but they either did not exist or did not
extend the array.

Am I going to have to generate a new array and copy the old one into the
new?

--Phlip |
From: Phlip T. <phl...@my...> - 2001-01-07 01:58:26
|
On Friday 05 January 2001 06:24, pf...@mm... wrote:
> Be sure to look at the CXX on sourceforge.

Ja! This is a sweet combination. The Joy of Swig is we can declare some
parameters as basic types, but others as raw PyObjects. Then we can have
our way with these. And using CXX, we wrap these objects in high-level C++
methods. Not the low-level fragile C. The effect compares favorably to ATL
for VC++ and ActiveX.

But there's just one little question here. (Hence the buckshot of list
servers.) If we pass a multiarray into a function expecting a
PyArrayObject, how then do we add new elements to it? I tried things like
'push_back()' and 'setItem()', but they either did not exist or did not
extend the array.

Am I going to have to generate a new array and copy the old one into the
new?

--Phlip

--
Phlip
======= http://users.deltanet.com/~tegan/home.html =======

------------------------------------------------------------
--== Sent via Deja.com ==--
http://www.deja.com/ |
From: Ryszard M. <ma...@us...> - 2001-01-06 19:17:25
|
> Nils Wagner writes:
> > I am also interested in ODE (ordinary differential equation) solvers.

Hi,

I have put some examples of solving ODEs using Chebyshev polynomials on my
web home page:

http://uranos.cto.us.edu.pl/~manka/math/newmath.html

numpy, pyfort and gist are needed.

--
Ryszard Manka
ma...@us... |
From: Paul P. <pa...@Ac...> - 2001-01-05 17:10:08
|
"Paul F. Dubois" wrote:
> Explanation: we made these packages optional partly because they ought to
> be optional but partly because one of them depends on LAPACK and many
> people wish to configure the packages to use a different LAPACK than the
> limited "lite" one supplied.

Wow, your "lite" module is half a meg. I'd hate to see the heavy
version. :)

I may not understand the extension so please tell me if I'm off-base: would
it be simpler to have a single setup.py and a set of flags to turn on and
off each of the "secondary" extensions?

> You are welcome to add a script which runs the bdist functions on all the
> optional packages, in much the same way setup_all.py works.

If I did this, I would consider a different strategy. I would suggest that
each of the setup.py's could move their "setup" function into an
if __name__ == "__main__" block. Then setup_all.py could import each one (a
little trickery needed there) and then combine the symbols like sourcelist,
headers, ext_modules and so forth.

> You do need to face the issue of making a bdist for the public of which
> LAPACK you use on which platform.

I would expect to use lapack_lite.pyd. It's easy enough to override it by
copying a custom lapack on top.

Paul Prescod |
From: Nils W. <nw...@is...> - 2001-01-05 14:49:27
|
Hi,

I am looking for some functions for the computation of

Matrix functions:
    matrix exponential
    matrix square root
    matrix logarithm

Eigenvalue problems:
    generalized eigenvalue problems
    polynomial eigenvalue problems

I am also interested in ODE (ordinary differential equation) solvers.

Is there any progress?

Nils |
From: Roger H. <ro...@if...> - 2001-01-05 14:48:14
|
* Roy Dragseth
[snip Numeric and swig questions about how to give a numpy array to a
swigged c function]
> Any hints is greatly appreciated.

I've done this a few times. What you need to do is help swig understand
that a numpy array is input and how to treat this as a C array. With swig
you can do this with a typemap. Here's an example: we can create a swig
interface file like

%module exPyC
%{
#include <Python.h>
#include <arrayobject.h>  /* Remember this one */
#include <math.h>
%}

%init %{
import_array()  /* Initial function for NumPy */
%}

%typemap(python,in) double * {
    PyArrayObject *py_arr;

    /* first check if it is a NumPy array */
    if (!PyArray_Check($source)) {
        PyErr_SetString(PyExc_TypeError, "Not a NumPy array");
        return NULL;
    }
    if (PyArray_ObjectType($source,0) != PyArray_DOUBLE) {
        PyErr_SetString(PyExc_ValueError, "Array must be of type double");
        return NULL;
    }

    /* check that array is 1D */
    py_arr = PyArray_ContiguousFromObject($source, PyArray_DOUBLE, 1, 1);

    /* set a double pointer to the NumPy allocated memory */
    $target = py_arr->data;
} /* end of typemap */

%inline %{
void arrayArg(double *arr, int n) {
    int i;
    for (i = 0; i < n; i++) {
        arr[i] = f(i*1.0/n);
    }
}
%}

Now, compile your extension, and test:

>>> import exPyC
>>> import Numeric
>>> a = Numeric.zeros(10,'d')
>>> a
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> exPyC.arrayArg(a,10)
>>> a
array([ 0. , 0.00996671, 0.0394695 , 0.08733219, 0.15164665,
        0.22984885, 0.31882112, 0.41501643, 0.51459976, 0.61360105])

You can also get rid of the length argument with a typemap(python,ignore),
see the swig docs. For more info about the safety checks in the typemap
check the numpy docs.

Good luck! :-)

HTH,
Roger |
From: Roy D. <Roy...@cc...> - 2001-01-05 13:53:48
|
Hello all.

I'm trying to figure out a (simple) way to wrap extension functions using
Numeric and swig. I want to use swig because I find the method described in
ch. 12 of the user manual a bit too elaborate.

The problem I'm facing with swig is that I cannot figure out a way to pass
arrays to the functions expecting double pointers as input. Say, for
example, that I want to wrap daxpy:

1. I put the function call interface into a swig file:

   void daxpy(int* n, double* a, double* x, int* incx, double* y, int* incy)

2. I want the function call visible to python to be something like:

   x = Numeric.arange(0.0, 10.0)
   y = x[:]
   a = 1.0
   daxpy(a, x, y)

3. To achieve 2. I make a python function that does the real call to daxpy:

   def daxpy(a, x, y):
       <Do consistency checking between x and y>
       n = len(x)
       d_x = GetDataPointer(x)
       inc_x = <x's increment>
       .
       .
       .
       daxpy_c(n, a, d_x, inc_x, d_y, inc_y)

   (daxpy_c is the real daxpy)

The problem I'm facing is that I cannot grab the internal data pointer from
Numeric arrays and pass it to a function. Is there a simple way to do that
without having to write a wrapper function in c for every function I want
to use? How should I write the function GetDataPointer()?

Any hints is greatly appreciated.

Best regards,
Roy.

The Computer Center, University of Tromsø, N-9037 TROMSØ, Norway.
phone: +47 77 64 41 07, fax: +47 77 64 41 00
Roy Dragseth, High Performance Computing System Administrator
Direct call: +47 77 64 62 56. email: ro...@cc... |
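For reference, the operation Roy is wrapping, BLAS daxpy, computes y <- a*x + y. A Python-level sketch of those semantics with today's numpy (this pure-Python `daxpy` is illustrative only; it sidesteps the data-pointer problem rather than solving it):

```python
import numpy as np

def daxpy(a, x, y):
    """What BLAS daxpy computes: y <- a*x + y, element-wise, in place."""
    if x.shape != y.shape:
        raise ValueError("x and y must have the same length")
    y += a * x   # updates y's buffer directly, like the Fortran routine
    return y

x = np.arange(0.0, 10.0)
y = x.copy()
daxpy(1.0, x, y)
print(y[3])  # 6.0
```

The swig-typemap answer in Roger Hansen's reply is what actually hands the array's data pointer to C.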
From: Paul F. D. <pau...@ho...> - 2001-01-05 05:31:54
|
Explanation: we made these packages optional partly because they ought to
be optional but partly because one of them depends on LAPACK and many
people wish to configure the packages to use a different LAPACK than the
limited "lite" one supplied.

It would be correct in the spirit of SourceForge and Python packages to
make every single one of these optional packages a separate "project" or at
least a separate download. That would raise the overhead. Technically the
manual should be split up into pieces too. I don't think all that trouble
is worth undergoing in order to "solve" this problem.

So, you could just build separate rpms for each of the optional packages.
They are NOT subpackages in the true sense of the word, except that a
couple of them install into Numeric's directory for backward compatibility.
Numeric is not a true package either. Again, people have argued (and I
agree) that purity of thought is trumped by backward compatibility and that
we should leave well enough alone.

You are welcome to add a script which runs the bdist functions on all the
optional packages, in much the same way setup_all.py works. You do need to
face the issue of making a bdist for the public of which LAPACK you use on
which platform.

I believe I was one of the earliest and hardest pushers for Distutils, so I
have no trouble agreeing with your goals.

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Paul Prescod
Sent: Thursday, January 04, 2001 5:22 PM
To: dis...@py...
Cc: aku...@me...; num...@li...
Subject: [Numpy-discussion] Multi-distribution distributions

I'm somewhat surprised by the fact that some distributions (e.g. Numpy,
Zodb) have multiple setup.py programs. As far as I know, these setup.py's
do not share information so there is no way to do a bdist_wininst or
bdist_rpm that builds a single distribution for these multiple
sub-packages. I see this as a fairly large problem! The bdist_ functions
are an important part of Distutils functionality.

Paul Prescod

_______________________________________________
Numpy-discussion mailing list
Num...@li...
http://lists.sourceforge.net/mailman/listinfo/numpy-discussion |
From: Paul P. <pa...@Ac...> - 2001-01-05 01:21:28
|
I'm somewhat surprised by the fact that some distributions (e.g. Numpy, Zodb) have multiple setup.py programs. As far as I know, these setup.py's do not share information so there is no way to do a bdist_wininst or bdist_rpm that builds a single distribution for these multiple sub-packages. I see this as a fairly large problem! The bdist_ functions are an important part of Distutils functionality. Paul Prescod |
From: Phlip <ppl...@om...> - 2001-01-04 18:44:46
|
Nummies Where I work, our official pipe dreamer has decided to wrap a multiarray up as an STL container. This means we can write convenient high-level code in both C++ and Python, and use this wrapper in the bridge between them. E-searches for a pre-existing cut of this yield negative. Has anyone done this yet? Or am I (gulp) the first? --Phlip http://c2.com/cgi/wiki?PhlIp |
From: Paul B. <Ba...@st...> - 2001-01-03 23:05:01
|
Travis Oliphant writes:

[snip snip]

> > I have therefore come to the conclusion that we have been barking up
> > the wrong tree. There might be a few cases where inheritance would buy
> > you something, but essentially Numeric and MA are useless as parents.
> > Instead, what would be good to have is a python class Numeric built
> > upon a suite of C routines that did all the real work, such as adding,
> > transposing, iterating over elements, applying a function to each
> > element, etc.
>
> I think this is what I've been trying to do in the "rewrite." Paul
> Barrett has made some excellent progress here.

I am currently writing the PEP 209: Multidimensional Arrays documentation
and hope to submit the initial draft by the end of the week for comments.
The proposed design is along the lines Paul Dubois has suggested.

> > Since it is now possible to build a Python class with C methods, which
> > it was not when Numeric was created, we ought to think about it.
>
> What does this mean? What feature gives this ability? I'm not sure I see
> when this changed?

I'd also like to know what Paul Dubois means by this.

> > Such an API could be used to make other classes with good performance.
> > We could lose the artificial layer that is in there now that makes it
> > so tedious to add a function. (I counted something like five or six
> > places I had to modify when I added "put".)
>
> I'd love to talk more with you about this.

Ditto!

--
Dr. Paul Barrett              Space Telescope Science Institute
Phone: 410-338-4475           ESS/Science Software Group
FAX: 410-338-4767             Baltimore, MD 21218 |
From: Travis O. <Oli...@ma...> - 2001-01-03 22:32:58
|
> A millennium-end report from the Head Nummie (this name is a joke; see
> the DEVELOPERS file):

Our nummie-ears are listening....

> There have been a steady set of messages on the subject of I should do
> this or that to make it easier to make RPMs. It is impossible for me to
> act on these: I don't know much about RPMs, and if I did, I don't know
> if making the change suggested is good or bad for someone doing
> something else, like making Windows installers. Therefore my policy is
> to rely on the Distutils people to work this out. Those who wish to make
> it easier to make a binary installer for platform xyz should figure out
> what would be required by the Distutils bdist family of commands.

Good idea to go with the distutils for doing this. I've delayed RPM's for
this reason until I figure out how to interact with the distutils better
(I haven't spent much time doing it yet).

> That is not to say that I don't appreciate people trying to help. I'm
> grateful for all the support I get from the community. I think that
> relying on division of labor in this case is the right thing to do, so
> that we take advantage of the Distutils effort. If I'm wrong, I'll
> listen.
>
> There are a number of bug reports on the sourceforge site. I would be
> grateful for patches. In particular there are two reports dealing with
> FFTs. I lack the expertise to deal with these.

I'll look into these unless somebody has them done.

> The masked array package MA has been getting more of a workout as I put
> it into production at my site. I believe that it fills not only the
> immediate need for dealing with missing values, but can serve as a model
> for how to make a "Numeric-workalike" with extra properties, since we
> can't inherit from Numeric's array. Since MA is improved fairly often, I
> recommend keeping up via cvs if you are a user.
>
> I have new manuals but have had difficulty with the transport up to
> SourceForge. Anybody else having such a problem? I used scp from a Linux
> box and it sat there and didn't do anything.
>
> The rest of this is for developers.
>
> Actually, once you get into it, it isn't all that clear that inheritance
> would help very much. For example, suppose you want an array class F
> that has masked values but also has a special behavior f() controlled by
> a parameter set at creation, beta. Suppose therefore you decide to
> inherit from class MA. Thus the constructor of your new class must take
> the same arguments as an MA array but add a beta=somedefault. OK, we do
> that. Now we can construct a couple of F's:
>     f1 = F([1.,2.,3.], beta=1.)
>     f2 = F([4.,2.,1.], beta=2.)
> Great. Now we can do f1.f(), f2.f(). Maybe we redefine __str__ so we can
> print beta when we print f1.
>
> Now try to do something. Anything. Say,
>     f3 = f1 + f2
>
> Oops. f3 is an MA, not an F. We might have written __add__ in MA so that
> it used self.__class__ to construct the answer. But since the
> constructor now needs a new parameter, there is no way MA.__add__ can
> make an F. It doesn't know how. Doesn't know how to call F(), doesn't
> know what value to use for beta anyway.
>
> So now we redefine all the methods in MA. Besides muttering that maybe
> inheriting didn't buy me a lot, I am still nowhere, for the next thing I
> realize is that every function f(a) that takes an MA as an argument and
> returns an MA, still returns an MA. If any of these make sense for an
> instance of F, I have to replace them, just as MA replaced sin, cos,
> sqrt, take, etc. from Numeric.
>
> I have therefore come to the conclusion that we have been barking up the
> wrong tree. There might be a few cases where inheritance would buy you
> something, but essentially Numeric and MA are useless as parents.
> Instead, what would be good to have is a python class Numeric built upon
> a suite of C routines that did all the real work, such as adding,
> transposing, iterating over elements, applying a function to each
> element, etc.

I think this is what I've been trying to do in the "rewrite." Paul Barrett
has made some excellent progress here.

> Since it is now possible to build a Python class with C methods, which
> it was not when Numeric was created, we ought to think about it.

What does this mean? What feature gives this ability? I'm not sure I see
when this changed?

> Such an API could be used to make other classes with good performance.
> We could lose the artificial layer that is in there now that makes it so
> tedious to add a function. (I counted something like five or six places
> I had to modify when I added "put".)

I'd love to talk more with you about this.

I'm now at my new place for anyone wishing to contact me.

Travis Oliphant
437 CB
Brigham Young University
Provo, UT 84602
oli...@ee...
(801) 378-3108

Thanks for your great efforts Paul. |
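Paul's `f3 = f1 + f2` trap can be reproduced with a modern `numpy.ndarray` subclass (a sketch, not the MA code under discussion; numpy's eventual answer is the `__array_finalize__` hook, which lets instances created by the arithmetic machinery copy attributes like `beta`):

```python
import numpy as np

class F(np.ndarray):
    """Subclass with an extra constructor parameter, as in Paul's example."""
    def __new__(cls, data, beta=0.0):
        obj = np.asarray(data, dtype=float).view(cls)
        obj.beta = beta   # set only when *we* call F(...)
        return obj

f1 = F([1.0, 2.0, 3.0], beta=1.0)
f2 = F([4.0, 2.0, 1.0], beta=2.0)
f3 = f1 + f2   # built by numpy's machinery, which never calls F.__new__

print(type(f3).__name__)              # F
print(getattr(f3, "beta", "missing")) # missing
```

The result keeps the subclass type, but `beta` is gone: exactly Paul's point that generic arithmetic cannot know how to call `F()` or what value of `beta` to use.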
From: Tony S. <ant...@ie...> - 2001-01-03 17:47:13
|
On Wed, 3 Jan 2001, Tony Seward wrote:
> On Wed, 3 Jan 2001, Konrad Hinsen wrote:
> <snip>
>
> I have attached a patch that implements your solution. So far it is
> working for me. When I've finished with the RPM spec file I will post
> that to the list as well.
>
> Tony

oops. The patch is attached to this message. No really. |
From: Tony S. <ant...@ie...> - 2001-01-03 17:39:23
|
On Wed, 3 Jan 2001, Konrad Hinsen wrote:
> > I see two solutions:
> > 1) Have the setup script make a symbolic link in the package's include
> > directory to the include directory of the numpy core. Call the
> > symbolic link 'Numeric.'
> >
> > 2) Move the include files for the core to a subdirectory called
> > 'Numeric.'
>
> There's a third one:
>
> 3) Have the "build" stage of the packages copy the core header files
> into their private include directories.
>
> This doesn't require links at the cost of wasting (temporarily) a tiny
> amount of disk space.
>
> Konrad.

I have attached a patch that implements your solution. So far it is working
for me. When I've finished with the RPM spec file I will post that to the
list as well.

Tony |
From: Konrad H. <hi...@cn...> - 2001-01-03 12:53:57
|
> I see two solutions:
> 1) Have the setup script make a symbolic link in the package's include
> directory to the include directory of the numpy core. Call the symbolic
> link 'Numeric.'
>
> 2) Move the include files for the core to a subdirectory called
> 'Numeric.'

There's a third one:

3) Have the "build" stage of the packages copy the core header files into
their private include directories.

This doesn't require links at the cost of wasting (temporarily) a tiny
amount of disk space.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
------------------------------------------------------------------------------- |
From: Raymond B. <be...@hp...> - 2001-01-02 16:55:39
|
I use Windows NT/2000 (at gunpoint), and I've gotten much better
performance than NumPy's Lite code by using the Intel MKL. I keep meaning
to post these results to an external HP Labs web page, but we're currently
engaged in an internal snit about what those pages are allowed to look
like. Essentially, with NumPy 17 and Python 1.5, I've been able to:

(1) Replace NumPy BLAS with Intel's MKL BLAS, after a number of routine
renaming edits in the NumPy code. (Apparently "cblas" is an irresistible
prefix.) This turns out to be pretty easy.

(2) Replace portions of LAPACK Lite with the corresponding routines from
Intel's MKL. I've been hacking away at this on a piecemeal basis, because
the MKL uses Fortran calling conventions and array-ordering. In most of
the MKL routines, the usual LAPACK flags for transposition are available
to compensate.

(3) Add new LAPACK functionality using LAPACK routines from the MKL that
are not included in LAPACK Lite.

I've been meaning to pick this project up again, since I want to use the
Intel FFT routines in some new code that I'm writing. The chief benefit of
the MKL libraries is the ability to easily use multiple processors and
threads simply by setting a couple of environment variables. Until
recently (when MATLAB finally started using LAPACK), I was able to gain
significantly better performance than MATLAB using the Intel BLAS/LAPACK,
even without multiple processors. Now the Intel libraries are only about
10-25% faster, if my watch can be believed. I have _no idea_ how the Intel
MKL performs relative to the native Linux BLAS/LAPACK distributions, but I
know that (for example) the MKL is about 50% faster than the linear
algebra routines shipped by NAG.

Although I haven't figured out the standard NumPy distribution process yet
(I still use that pesky Visual Studio IDE), I'll elevate the development
of a general-purpose integration of the Intel MKL into NumPy to an
Official New Year's Resolution. I made my last ONYR about ten years ago,
and was able to give up channel-surfing using the TV remote control. So, I
have demonstrated an ability to deliver in this department; the
realization that we've just started a new millennium only adds to the
pressure. Presumably, when I get this finished (I'll target the end of the
month), I'll be able to find a place to post it.

============================
Ray Beausoleil
Hewlett-Packard Laboratories
mailto:be...@hp...
425-883-6648 Office
425-957-4951 Telnet
425-941-2566 Mobile
============================

At 05:56 PM 1/2/2001 +1100, Shuetrim, Geoff wrote:
>Apologies for asking an overly vague question like this but:
>on an Intel/win32 platform (I only have a windows version of
>Gauss), I am comparing Numpy matrix inversion with that
>of Gauss (very much the same type of software as Matlab
>which at least some of you used to work with). As the size
>of the matrix to be inverted increases, the speed of Numpy
>appears to asymptote (on my machine) to about half that of
>Gauss. For small matrices, it is much worse than that
>because of the various overheads that are dealt with by Numpy.
>
>Would this speed differential be largely eliminated if I was not
>using LAPACK-LITE? If so I will try to figure my way through
>hooking into Intel's MKL - anyone got hints on doing this - I saw
>mention of it in the mailing list archives. Would I be better off,
>speed-wise, eschewing win32 altogether and using native LAPACK
>and BLAS libraries on my Linux box?
>
>This is relevant to me in the context of a multivariate Kalman filtering
>module that I am working up to replace one I have been using on
>the Gauss platform for years. The Numpy version of my filter has a
>very similar logic structure to that of Gauss but is drastically slower.
>I have only been working with Numpy for a month or so which may
>mean that my code is relatively inefficient. I have been assuming
>that Numpy - as an interpreted language is mainly slowed by
>looping structures.
>
>Thanks in advance,
>
>Geoff Shuetrim
>
>____________________________________________________________________________
>
>A simple version of the filter is given below:
>(Note that I have modified Matrix.py in my installation to include a
>transpose method for the Matrix class, T()).
>
># ***************************************************************
># kalman.py module by Geoff Shuetrim
>#
># Please note - this code is thoroughly untested at this stage
>#
># You may copy and use this module as you see fit with no
># guarantee implied provided you keep this notice in all copies.
># ***************************************************************
>
>"""kalman.py
>
>Routines to implement Kalman filtering and smoothing
>for multivariate linear state space representations
>of time-series models.
>
>Notation follows that contained in Harvey, A.C. (1989)
>"Forecasting Structural Time Series Models and the Kalman Filter".
>
>Filter --- Filter - condition on data to date
>
>"""
>
># Import the necessary modules to use NumPy
>import math
>from Numeric import *
>from LinearAlgebra import *
>from RandomArray import *
>import MLab
>from Matrix import *
>
># Initialise module constants
>Num = Numeric
>max = MLab.max
>min = MLab.min
>abs = Num.absolute
>__version__ = "0.0.1"
>
># filtration constants
>_obs = 100
>_k = 1
>_n = 1
>_s = 1
>
># Filtration global arrays
>_y = Matrix(cumsum(standard_normal((_obs,1,_n))))
>_v = Matrix(zeros((_obs,1,_n),Float64))
>_Z = Matrix(ones((_obs,_k,_n),Float64)) + 1.0
>_d = Matrix(zeros((_obs,1,_n),Float64))
>_H = Matrix(zeros((_obs,_n,_n),Float64)) + 1.0
>_T = Matrix(zeros((_obs,_k,_k),Float64)) + 1.0
>_c = Matrix(zeros((_obs,1,_k),Float64))
>_R = Matrix(zeros((_obs,_k,_s),Float64)) + 1.0
>_Q = Matrix(zeros((_obs,_s,_s),Float64)) + 1.0
>_a = Matrix(zeros((_obs,1,_k),Float64))
>_a0 = Matrix(zeros((_k,1),Float64))
>_ap = _a
>_as = _a
>_P = Matrix(zeros((_obs,_k,_k),Float64))
>_P0 = Matrix(zeros((_k,_k),Float64))
>_Pp = _P
>_Ps = _P
>_LL = Matrix(zeros((_obs,1,1),Float64))
>
>def Filter():  # Kalman filtering routine
>
>    _ap[0] = _T[0] * _a0 + _c[0]
>    _Pp[0] = _T[0] * _P0 * _T[0].T() + _R[0] * _Q[0] * _R[0].T()
>
>    for t in range(1,_obs-1):
>
>        _ap[t] = _T[t] * _a[t-1] + _c[t]
>        _Pp[t] = _T[t] * _P0 * _T[t].T() + _R[t] * _Q[t] * _R[t].T()
>
>        Ft = _Z[t] * _Pp[t] * _Z[t].T() + _H[t]
>        Ft_inverse = inverse(Ft)
>        _v[t] = _y[t] - _Z[t] * _ap[t] - _d[t]
>
>        _a[t] = _ap[t] + _Pp[t] * _Z[t].T() * Ft_inverse * _v[t].T()
>        _P[t] = _Pp[t] - _Pp[t].T() * _Z[t].T() * Ft_inverse * _Z[t] * _Pp[t]
>        _LL[t] = -0.5 * (log(2*pi) + log(determinant(Ft)) + _v[t] * Ft_inverse * _v[t].T())
>
>Filter()
confidential and >privileged. If you are not the intended recipient, you are hereby >notified that any dissemination, distribution or copying of this >Email is strictly prohibited. When addressed to our clients, any >opinions or advice contained in this Email are subject to the >terms and conditions expressed in the governing KPMG client >engagement letter. If you have received this Email in error, please >notify us immediately by return email or telephone +61 2 93357000 >and destroy the original message. Thank You. " >**********************************************************************... > >_______________________________________________ >Numpy-discussion mailing list >Num...@li... >http://lists.sourceforge.net/mailman/listinfo/numpy-discussion |

From: Tony S. <ant...@ie...> - 2001-01-02 16:45:59
|
On Thu, 28 Dec 2000, Paul F. Dubois wrote: > A millenium-end report from the Head Nummie (this name is a joke; see the > DEVELOPERS file): > > There have been a steady set of messages on the subject of I should do this > or that to make it easier to make RPMs. It is impossible for me to act on > these: I don't know much about RPMs, and if I did, I don't know if making > the change suggested is good or bad for someone doing something else, like > making Windows installers. Therefore my policy is to rely on the Distutils > people to work this out. Those who wish to make it easier to make a binary > installer for platform xyz should figure out what would be required by the > Distutils bdist family of commands. > > That is not to say that I don't appreciate people trying to help. I'm > grateful for all the support I get from the community. I think that relying > on division of labor in this case is the right thing to do, so that we take > advantage of the Distutils effort. If I'm wrong, I'll listen. > The problem that I pointed out is not a problem with building a binary package. Invoking './setup_all.py build' on a clean machine does not work. The numpy core is built, but the packages are not. The reason is that all of the packages are looking for 'Numeric/arrayobject.h' which does not exist until the numpy core has been installed at least once. Even then, the packages will use an old version of arrayobject.h. I see two solutions: 1) Have the setup script make a symbolic link in the package's include directory to the include directory of the numpy core. Call the symbolic link 'Numeric.' 2) Move the include files for the core to a subdirectory called 'Numeric.' I would prefer the first solution, but I'm not aware of a way for the non-unix versions of python to create a link to a directory. <snip> Tony |
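Tony's solution (1) can be sketched as follows; the directory names here are invented for illustration and do not match the real Numeric source layout:

```shell
# Hypothetical sketch of solution (1): make 'Numeric/arrayobject.h' resolve
# inside a package directory, before the numpy core has been installed, by
# symlinking a directory named 'Numeric' at the core's include directory.
rm -rf /tmp/numpy-demo
mkdir -p /tmp/numpy-demo/core-include /tmp/numpy-demo/package
touch /tmp/numpy-demo/core-include/arrayobject.h   # stand-in for the real header
cd /tmp/numpy-demo/package
ln -s ../core-include Numeric   # the link is named 'Numeric', as Tony suggests
ls Numeric/arrayobject.h        # the packages' include path now works
```

As Tony notes, the catch is that symbolic links are not available on all of the non-unix platforms Python supports, which is the argument for solution (2).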
From: Janko H. <jh...@if...> - 2001-01-02 15:21:22
|
First, have you tested the execution time of inverse() alone in NumPy and Gauss, or only with the appended function? There are some ways to speed up the function without using a specialized library. The function also has a possible error (see comments in the function). In general the penalty for array creation (memory allocation) is rather big in Numpy, especially if you use long expressions, as the result of every term is stored in a newly created array. To speed up these kinds of operations use the inplace operations with some workspace arrays, like: >>> # Translating _ap[0] = _T[0] * _a0 + _c[0] >>> _ap[0]=add(multiply(_T[0],_a0,_ap[0]),_c[0],_ap[0]) Then it should be possible to code this without a loop at all. This would also speed up the function. HTH, __Janko Shuetrim, Geoff writes: > ________ > > A simple version of the filter is given below: > (Note that I have modified Matrix.py in my installation to include a > transpose method for the Matrix class, T()). > > # *************************************************************** > # kalman.py module by Geoff Shuetrim > # > # Please note - this code is thoroughly untested at this stage > # > # You may copy and use this module as you see fit with no > # guarantee implied provided you keep this notice in all copies. > # *************************************************************** > > # Minimization routines > """kalman.py > > Routines to implement Kalman filtering and smoothing > for multivariate linear state space representations > of time-series models. > > Notation follows that contained in Harvey, A.C. (1989) > "Forecasting Structural Time Series Models and the Kalman Filter". 
> > Filter --- Filter - condition on data to date > > """ > > # Import the necessary modules to use NumPy > import math > from Numeric import * > from LinearAlgebra import * > from RandomArray import * > import MLab > from Matrix import * > > # Initialise module constants > Num = Numeric > max = MLab.max > min = MLab.min > abs = Num.absolute > __version__="0.0.1" > > # filtration constants > _obs = 100 > _k = 1 > _n = 1 > _s = 1 > > # Filtration global arrays > _y = Matrix(cumsum(standard_normal((_obs,1,_n)))) > _v = Matrix(zeros((_obs,1,_n),Float64)) > _Z = Matrix(ones((_obs,_k,_n),Float64)) + 1.0 > _d = Matrix(zeros((_obs,1,_n),Float64)) > _H = Matrix(zeros((_obs,_n,_n),Float64)) + 1.0 > _T = Matrix(zeros((_obs,_k,_k),Float64)) + 1.0 > _c = Matrix(zeros((_obs,1,_k),Float64)) > _R = Matrix(zeros((_obs,_k,_s),Float64)) + 1.0 > _Q = Matrix(zeros((_obs,_s,_s),Float64)) + 1.0 > _a = Matrix(zeros((_obs,1,_k),Float64)) > _a0 = Matrix(zeros((_k,1),Float64)) > _ap = _a !!! Are you sure? This does not copy, but only makes a new reference > _as = _a !!! Same here > _P = Matrix(zeros((_obs,_k,_k),Float64)) > _P0 = Matrix(zeros((_k,_k),Float64)) > _Pp = _P > _Ps = _P > _LL = Matrix(zeros((_obs,1,1),Float64)) > > def Filter(): # Kalman filtering routine > > _ap[0] = _T[0] * _a0 + _c[0] > _Pp[0] = _T[0] * _P0 * _T[0].T() + _R[0] * _Q[0] * _R[0].T() > > for t in range(1,_obs-1): > > _ap[t] = _T[t] * _a[t-1] + _c[t] !!! 
You are changing _a and _as and _ap at the same time > _Pp[t] = _T[t] * _P0 * _T[t].T() + _R[t] * _Q[t] * _R[t].T() > > Ft = _Z[t] * _Pp[t] * _Z[t].T() + _H[t] > Ft_inverse = inverse(Ft) > _v[t] = _y[t] - _Z[t] * _ap[t] - _d[t] > > _a[t] = _ap[t] + _Pp[t] * _Z[t].T() * Ft_inverse * _v[t].T() > _P[t] = _Pp[t] - _Pp[t].T() * _Z[t].T() * Ft_inverse * _Z[t] * > _Pp[t] > _LL[t] = -0.5 * (log(2*pi) + log(determinant(Ft)) + _v[t] * > Ft_inverse * _v[t].T()) > > Filter() > ____________________________________________________________________________ > ________ > > > > ********************************************************************** > " This email is intended only for the use of the individual or entity > named above and may contain information that is confidential and > privileged. If you are not the intended recipient, you are hereby > notified that any dissemination, distribution or copying of this > Email is strictly prohibited. When addressed to our clients, any > opinions or advice contained in this Email are subject to the > terms and conditions expressed in the governing KPMG client > engagement letter. If you have received this Email in error, please > notify us immediately by return email or telephone +61 2 93357000 > and destroy the original message. Thank You. " > **********************************************************************... > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion |
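Both of Janko's points - the `_ap = _a` aliasing trap and the cost of temporary arrays - can be demonstrated in a few lines. This sketch uses today's numpy spellings rather than the original Numeric module, but the three-argument (output) form of the ufuncs works the same way:

```python
import numpy as np

a = np.zeros((3, 2))
ap = a                    # no copy: ap is just another name for a
ap[0] = 1.0
assert a[0, 0] == 1.0     # a changed too -- this is Janko's warning

ap = a.copy()             # an independent copy is what the filter needs
ap[1] = 2.0
assert a[1, 0] == 0.0     # a is untouched this time

# Avoiding temporaries: r = T*a0 + c allocates two intermediate arrays;
# the three-argument (output) form reuses one workspace array instead.
T, a0, c = np.full(4, 2.0), np.full(4, 3.0), np.full(4, 1.0)
ws = np.empty(4)
np.add(np.multiply(T, a0, ws), c, ws)    # in-place version of T*a0 + c
assert np.allclose(ws, T * a0 + c)
```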
From: David H. M. <mar...@nx...> - 2001-01-02 15:14:36
|
I have a problem that seems related to the one below. I'm trying to build and install Numeric 17.2.0 using the lapack and blas libraries under Python 2.0 on Red Hat 6.2. The build and install go fine, but when I run python and import LinearAlgebra, I get the following message: Python 2.0 (#1, Nov 13 2000, 14:15:52) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> from Numeric import * >>> from LinearAlgebra import * Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/local/lib/python2.0/site-packages/Numeric/LinearAlgebra.py", line 8, in ? import lapack_lite ImportError: /usr/local/lib/python2.0/site-packages/Numeric/lapack_lite.so: undefined symbol: dgesvd_ >>> I did try relinking with g77, as Scott's earlier message suggested: [root@harmony LALITE]# g77 -shared build/temp.linux-i686-2.0/Src/lapack_litemodule.o -L/usr/lib/lapack -o build/lib.linux-i686-2.0/lapack_lite.so ld: cannot open crtbeginS.o: No such file or directory [root@harmony LALITE]# But I don't know much about g77 (or gcc, for that matter), so I didn't know how to diagnose the error. I'm also not sure I'd know what to do next if the link step had worked! I'd sure appreciate some help with this... Thanks! David "Scott M. Ransom" wrote: > > Frank Horowitz wrote: > > > > However, when I > > coerced the distutils system to get around that bug (by specifying > > "/usr/lib " with a trailing blank for the BLASLIBDIR and LAPACKLIBDIR > > variables in setup.py) the same problem (i.e. an "ImportError: > > /usr/lib/liblapack.so.3: undefined symbol: e_wsfe" in importing > > lapack_lite) ultimately manifested itself. > > This problem is easily fixed (at least on linux) by performing the link > of lapack_lite.so with g77 instead of gcc (this is required because the > lapack and/or blas libraries are based on fortran object files...). 
> > For instance the out-of-box link command on my machine (Debian 2.2) is: > > gcc -shared build/temp.linux2/Src/lapack_litemodule.o -L/usr/local/lib > -L/usr/lib -llapack -lblas -o build/lib.linux2/lapack_lite.so > > Simply change the 'gcc' to 'g77' and everything works nicely. > > Not sure if this is specific to Linux or not... > > Scott > > -- > Scott M. Ransom Address: Harvard-Smithsonian CfA > Phone: (617) 495-4142 60 Garden St. MS 10 > email: ra...@cf... Cambridge, MA 02138 > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/mailman/listinfo/numpy-discussion |
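A hedged sketch of the diagnosis and the fix discussed in this thread (paths and file names are illustrative; `-lg2c` is the g77 runtime library of that era, which is what letting g77 drive the link pulls in implicitly):

```shell
# Symbols like dgesvd_ or e_wsfe left undefined in lapack_lite.so come from
# the Fortran side; listing the extension's undefined symbols shows what the
# link is missing. (Illustrative -- run against your own build output.)
nm -D --undefined-only lapack_lite.so | grep -i 'wsfe\|gesvd'

# Equivalent of Scott's fix without switching compilers: keep gcc as the
# link driver but append the Fortran runtime explicitly.
gcc -shared lapack_litemodule.o -L/usr/lib -llapack -lblas -lg2c \
    -o lapack_lite.so
```

David's `crtbeginS.o` error is a different problem: his g77 installation cannot find gcc's startup objects, which usually means g77 and gcc come from mismatched compiler installations.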
From: Shuetrim, G. <gsh...@kp...> - 2001-01-02 06:56:34
|
Apologies for asking an overly vague question like this but: on an Intel/win32 platform (I only have a windows version of Gauss), I am comparing Numpy matrix inversion with that of Gauss (very much the same type of software as Matlab which at least some of you used to work with). As the size of the matrix to be inverted increases, the speed of Numpy appears to asymptote (on my machine) to about half that of Gauss. For small matrices, it is much worse than that because of the various overheads that are dealt with by Numpy. Would this speed differential be largely eliminated if I was not using LAPACK-LITE? If so I will try to figure my way through hooking into Intel's MKL - anyone got hints on doing this - I saw mention of it in the mailing list archives. Would I be better off, speed-wise, eschewing win32 altogether and using native LAPACK and BLAS libraries on my Linux box? This is relevant to me in the context of a multivariate Kalman filtering module that I am working up to replace one I have been using on the Gauss platform for years. The Numpy version of my filter has a very similar logic structure to that of Gauss but is drastically slower. I have only been working with Numpy for a month or so which may mean that my code is relatively inefficient. I have been assuming that Numpy - as an interpreted language - is mainly slowed by looping structures. Thanks in advance, Geoff Shuetrim ____________________________________________________________________________ ________ A simple version of the filter is given below: (Note that I have modified Matrix.py in my installation to include a transpose method for the Matrix class, T()). # *************************************************************** # kalman.py module by Geoff Shuetrim # # Please note - this code is thoroughly untested at this stage # # You may copy and use this module as you see fit with no # guarantee implied provided you keep this notice in all copies. 
# *************************************************************** # Minimization routines """kalman.py Routines to implement Kalman filtering and smoothing for multivariate linear state space representations of time-series models. Notation follows that contained in Harvey, A.C. (1989) "Forecasting Structural Time Series Models and the Kalman Filter". Filter --- Filter - condition on data to date """ # Import the necessary modules to use NumPy import math from Numeric import * from LinearAlgebra import * from RandomArray import * import MLab from Matrix import * # Initialise module constants Num = Numeric max = MLab.max min = MLab.min abs = Num.absolute __version__="0.0.1" # filtration constants _obs = 100 _k = 1 _n = 1 _s = 1 # Filtration global arrays _y = Matrix(cumsum(standard_normal((_obs,1,_n)))) _v = Matrix(zeros((_obs,1,_n),Float64)) _Z = Matrix(ones((_obs,_k,_n),Float64)) + 1.0 _d = Matrix(zeros((_obs,1,_n),Float64)) _H = Matrix(zeros((_obs,_n,_n),Float64)) + 1.0 _T = Matrix(zeros((_obs,_k,_k),Float64)) + 1.0 _c = Matrix(zeros((_obs,1,_k),Float64)) _R = Matrix(zeros((_obs,_k,_s),Float64)) + 1.0 _Q = Matrix(zeros((_obs,_s,_s),Float64)) + 1.0 _a = Matrix(zeros((_obs,1,_k),Float64)) _a0 = Matrix(zeros((_k,1),Float64)) _ap = _a _as = _a _P = Matrix(zeros((_obs,_k,_k),Float64)) _P0 = Matrix(zeros((_k,_k),Float64)) _Pp = _P _Ps = _P _LL = Matrix(zeros((_obs,1,1),Float64)) def Filter(): # Kalman filtering routine _ap[0] = _T[0] * _a0 + _c[0] _Pp[0] = _T[0] * _P0 * _T[0].T() + _R[0] * _Q[0] * _R[0].T() for t in range(1,_obs-1): _ap[t] = _T[t] * _a[t-1] + _c[t] _Pp[t] = _T[t] * _P0 * _T[t].T() + _R[t] * _Q[t] * _R[t].T() Ft = _Z[t] * _Pp[t] * _Z[t].T() + _H[t] Ft_inverse = inverse(Ft) _v[t] = _y[t] - _Z[t] * _ap[t] - _d[t] _a[t] = _ap[t] + _Pp[t] * _Z[t].T() * Ft_inverse * _v[t].T() _P[t] = _Pp[t] - _Pp[t].T() * _Z[t].T() * Ft_inverse * _Z[t] * _Pp[t] _LL[t] = -0.5 * (log(2*pi) + log(determinant(Ft)) + _v[t] * Ft_inverse * _v[t].T()) Filter() 
____________________________________________________________________________ ________ |
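For readers reconstructing the filter today, one time step of the loop above translates to the following sketch in modern numpy (`@` stands in for the Matrix class's `*`, `.T` for the hand-added `T()` method; the covariance update is written in the standard Harvey form, which for a symmetric `Pp` matches the code above):

```python
import numpy as np

# Tiny state space (k = n = 1) with illustrative, not estimated, matrices.
k = n = 1
T, c = np.eye(k), np.zeros((k, 1))
Z, d, H = 2.0 * np.ones((n, k)), np.zeros((n, 1)), np.ones((n, n))
R = Q = np.ones((k, k))
a, P = np.zeros((k, 1)), np.eye(k)
y = np.array([[1.0]])                     # one observation

# Prediction step: a_p = T a + c,  P_p = T P T' + R Q R'
ap = T @ a + c
Pp = T @ P @ T.T + R @ Q @ R.T

# Measurement update
F = Z @ Pp @ Z.T + H                      # innovation covariance
v = y - Z @ ap - d                        # innovation
K = Pp @ Z.T @ np.linalg.inv(F)           # gain
a = ap + K @ v                            # filtered state
P = Pp - K @ Z @ Pp                       # filtered covariance
```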
From: Paul F. D. <pau...@ho...> - 2000-12-29 22:36:50
|
Sorry this is in HTML, but the documentation below requires it. The latest source release is 17.2.0. Revised documentation in HTML and PDF is available at http://numpy.sourceforge.net. Package MA contains a number of new constructors. The complete list is:

Constructing masked arrays

1. array (data, typecode = None, copy = 1, savespace = 0, mask = None, fill_value = None) creates a masked array with the given data and mask. The name array is simply an alias for the class name, MA. This constructor sets the data area of the resulting masked array to filled (data, value = fill_value, copy = copy, savespace = savespace), the mask to make_mask (mask, savespace), and the fill value to fill_value. The class name MA may also be used instead of the name array.

2. masked_array (data, mask = None, fill_value = None) is an easier-to-use version of array, for the common case of typecode = None, copy = 0. When data is newly created, this function can be used to make it a masked array without copying the data if data is already a Numeric array.

3. masked_values (data, value, rtol=1.e-5, atol=1.e-8, typecode = None, copy = 1, savespace = 0) constructs a masked array whose mask is set at those places where abs (data - value) < atol + rtol * abs (data). That is a careful way of saying that those elements of the data that are equal to value (to within a tolerance) are to be treated as invalid.

4. masked_object (data, value, copy=1, savespace=0) creates a masked array with those entries marked invalid that are equal to value. Again, copy and savespace are passed on to the Numeric array constructor.

5. masked_where (condition, data) creates a masked array whose shape is that of condition, whose values are those of data, and which is masked where elements of condition are true.

6. masked_greater (data, value) is equivalent to masked_where (greater (data, value), data). Similarly, masked_greater_equal, masked_equal, masked_not_equal, masked_less, and masked_less_equal are called in the same way, with the obvious meanings. Note that for floating-point data, masked_values is preferable to masked_equal in most cases.

On entry to any of these constructors, data must be any object which the Numeric package can accept to create an array (with the desired typecode, if specified). The mask, if given, must be None or any object that can be turned into a Numeric array of integer type (it will be converted to typecode MaskType, if necessary), have the same shape as data, and contain only values of 0 or 1. If the mask is not None but its shape does not match that of data, an exception will be thrown, unless one of the two is of length 1, in which case the scalar will be resized (using Numeric.resize) to match the other. |
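Most of these constructors survive nearly unchanged in the present-day numpy.ma module; a short sketch of the two most commonly used ones (modern argument spellings, without the typecode/savespace parameters described above):

```python
import numpy as np
import numpy.ma as ma

data = np.array([1.0, -999.0, 3.0, -999.0])

# masked_values: mask entries (approximately) equal to a sentinel value
x = ma.masked_values(data, -999.0)
assert x.mask.tolist() == [False, True, False, True]
assert x.sum() == 4.0                 # masked entries are ignored

# masked_where: mask wherever a boolean condition holds
y = ma.masked_where(data < 0, data)
assert y.count() == 2                 # two valid entries remain
```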