From: Travis O. <oli...@ie...> - 2006-10-23 22:03:02
Albert Strasheim wrote:
> Hello all
>
> I'm trying to sort an array with two fields, but I'm getting a result that
> doesn't seem to make sense.
>
> What I tried (first attempt): I have two 2-D arrays. I would like to sort
> one based on the sort of the other. I managed to do this with argsort.
> However, the fancy indexing required to get the sorted array using what
> argsort returned was very slow. I followed this example:
>
> http://www.scipy.org/Numpy_Example_List#head-9f8656795227e3c43e849c6c0435eeeb32afd722
>
> What I tried (second attempt): I created an array with two fields. I
> think/hope/expected that sorting the array would sort it on the first field
> in the dtype and then on the second. This is *much* faster than the fancy
> indexing approach.

O.K. I lied. I realized that my comments in the VOID_compare code were silly (about being unable to define > or <). It makes sense to just define them based on the first field, then the second field (if the first field is equal), and so forth. Obviously this is not the only way one could define sorting (any field could be used as "the first field", and so forth), but it is a fairly obvious default to use the first field. It turns out it was not so difficult to implement, and it is now in SVN. So, VOID_compare now does something a little more intelligent when fields are defined, which means that record arrays can now be lexicographically sorted more easily than by using lexsort and take (as long as the fields are ordered according to how you want the sort to proceed).

-Travis
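A minimal sketch of the behaviour described above, in current numpy/Python syntax; the dtype, field names, and values here are invented for illustration:

import numpy as np

# Records compare field by field: the first field decides the order,
# and ties fall through to the second field.
dt = np.dtype([('first', np.float64), ('second', np.uint8)])
x = np.array([(2.0, 1), (1.0, 2), (1.0, 1)], dtype=dt)
x.sort()
print(x)  # expected order: (1.0, 1), (1.0, 2), (2.0, 1)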
From: Fernando P. <fpe...@gm...> - 2006-10-23 21:39:53
Hi all, two colleagues have been seeing occasional crashes from very long-running code which uses numpy. We've now gotten a backtrace from one such crash; unfortunately it uses a build from a few days ago:

In [3]: numpy.__version__
Out[3]: '1.0b5.dev3097'
In [4]: scipy.__version__
Out[4]: '0.5.0.2180'

Because it takes so long to get the code to crash (several days of 100% CPU usage), I can't make a new one right now, but I'll be happy to restart the same run with a current SVN build if necessary, and post the results in a few days. In the meantime, here's a gdb backtrace we were able to get by setting MALLOC_CHECK_ to 2 and running the python process from within gdb:

Program received signal SIGABRT, Aborted.
[Switching to Thread 1073880896 (LWP 26280)]
0x40000402 in __kernel_vsyscall ()
(gdb) bt
#0 0x40000402 in __kernel_vsyscall ()
#1 0x0042c7d5 in raise () from /lib/tls/libc.so.6
#2 0x0042e149 in abort () from /lib/tls/libc.so.6
#3 0x0046b665 in free_check () from /lib/tls/libc.so.6
#4 0x00466e65 in free () from /lib/tls/libc.so.6
#5 0x005a4ab7 in PyObject_Free () from /usr/lib/libpython2.3.so.1.0
#6 0x403f6336 in arraydescr_dealloc (self=0x40424020) at arrayobject.c:10455
#7 0x403fab3e in PyArray_FromArray (arr=0xe081cb0, newtype=0x40424020, flags=0) at arrayobject.c:7725
#8 0x403facc3 in PyArray_FromAny (op=0xe081cb0, newtype=0x0, min_depth=0, max_depth=0, flags=0, context=0x0) at arrayobject.c:8178
#9 0x4043bc45 in PyUFunc_GenericFunction (self=0x943a660, args=0xa9dbf2c, mps=0xbfc83730) at ufuncobject.c:906
#10 0x40440a04 in ufunc_generic_call (self=0x943a660, args=0xa9dbf2c) at ufuncobject.c:2742
#11 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
#12 0x0057d6d4 in PyObject_CallFunction () from /usr/lib/libpython2.3.so.1.0
#13 0x403eabb6 in PyArray_GenericBinaryFunction (m1=Variable "m1" is not available.
) at arrayobject.c:3296
#14 0x0057b7e1 in PyNumber_Check () from /usr/lib/libpython2.3.so.1.0
#15 0x0057c1e0 in PyNumber_Multiply () from /usr/lib/libpython2.3.so.1.0
#16 0x005d16a3 in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#17 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
#18 0x005d3d8f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#19 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
#20 0x00590e2e in PyFunction_SetClosure () from /usr/lib/libpython2.3.so.1.0
#21 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
#22 0x00584d98 in PyMethod_New () from /usr/lib/libpython2.3.so.1.0
#23 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
#24 0x005b584c in _PyObject_SlotCompare () from /usr/lib/libpython2.3.so.1.0
#25 0x005aec2c in PyType_IsSubtype () from /usr/lib/libpython2.3.so.1.0
#26 0x0057d607 in PyObject_Call () from /usr/lib/libpython2.3.so.1.0
#27 0x005d2b7f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#28 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
#29 0x005d3d8f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#30 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
#31 0x005d3d8f in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#32 0x005d497b in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#33 0x005d497b in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#34 0x005d497b in _PyEval_SliceIndex () from /usr/lib/libpython2.3.so.1.0
#35 0x005d509e in PyEval_EvalCodeEx () from /usr/lib/libpython2.3.so.1.0
#36 0x005d5362 in PyEval_EvalCode () from /usr/lib/libpython2.3.so.1.0
#37 0x005ee817 in PyErr_Display () from /usr/lib/libpython2.3.so.1.0
#38 0x005ef942 in PyRun_SimpleFileExFlags () from /usr/lib/libpython2.3.so.1.0
#39 0x005f0994 in PyRun_AnyFileExFlags () from /usr/lib/libpython2.3.so.1.0
#40 0x005f568e in Py_Main () from /usr/lib/libpython2.3.so.1.0
#41 0x080485b2 in main ()
# End of BT.

This code is running on a Fedora Core 3 box, with python 2.3.4 and numpy/scipy compiled using gcc 3.4.4. I realize that it's extremely difficult to help with so little information, but unfortunately we have no small test that can reproduce the problem. Only our large research codes, when running for multiple days on a single run, cause this. Even very intensive uses of the same code which last only a few hours never show this. This code is a long-running iterative algorithm, so it's basically applying the same (complex) loop over and over until convergence, using numpy and scipy pretty extensively throughout. If super Travis (or anyone else) can have a Eureka moment from the above backtrace, that would be fantastic. If there's any other information you think I may be able to provide, I'll be happy to do my best.

Cheers, f
From: Tom L. <lo...@as...> - 2006-10-23 21:32:18
Hi Sebastian,

I'm still unclear about the problem. From your last description, it sounds like the problem is *not* 2-D; you are trying to model a 1-D function of a 1-D argument (unless B is not the scalar field strength, but rather \vec B, the 3-D field vector). Also, your problem appears to be a regression (i.e., curve/surface fitting) problem, not a density estimation problem (comparing samples drawn from two distributions), so the Numerical Recipes suggestion (which addresses comparing densities) is probably not relevant.

It sounds like you have some observed data that perhaps can be modeled as

y_i = f(x_i; theta) + e_i

where e_i represent measurement error, f(x; theta) is the "true model" that gives the intensity as a function of the field strength, which depends on some parameters theta, and y_i are the measured intensities. I presume you know something about the y_i measurement errors (like a std error for all or for each of them). You also have a computational model that gives you simulation data that can be modeled as

Y_i = g(X_i; theta)

with (presumably) no Y_i "measurement error" (though perhaps there is an equivalent if you use Monte Carlo in the calculation or have other quantifiable sources of error). As you phrased the problem, it appears you know theta exactly for both the observational and simulation data (an unusual situation!). You just want to ascertain whether g is a good approximation to f. Is this correct?

There is a large literature on problems like this where theta is *unknown* and one wants to either calibrate the simulation or infer theta for the observations from a sparse set of runs of the simulation at various theta. But no papers come immediately to mind for the known-theta case, though the "validation" stage of some of the available papers addresses it.

For the unknown-theta case, the relevant literature is fairly new and goes under various names (DACE: Design & Analysis of Computer Experiments; BACCO: Bayesian Analysis of Computer Code Output; MUCM: Managing Uncertainty in Complex Models). The main tools are interpolators that *quantify uncertainty* in the interpolation and machinery to propagate uncertainty through subsequent analyses. Gaussian processes (which include things like splines, kriging and random walks as special cases, but with "error propagation") are used to build an "emulator" for the simulation (an emulator is an interpolator that also gives you measures of uncertainty in the interpolation). There are free codes available for Matlab and R implementing this paradigm, but it's a tricky business (as I am slowly discovering, having just started to delve into it).

It is unclear from your description why the X_i cannot be made equal to the x_i. If either or both are uncontrollable (esp. if you cannot set x_i but have to rely on "nature" providing you x_i values that would differ from one set of observations to the next), this adds a layer of complexity that is not trivial. It becomes a "measurement error problem" (aka "errors-in-the-variables problem"), with subtle aspects (you can easily go astray by simply ignoring measurement errors on the x's; they do *not* "average out" as you get more sample points). This can be (and has been) incorporated into the emulator framework, though I don't know if any of the free code does this.

Part of the MUCM/BACCO approach is estimation of a "bias" or "misfit" term between the simulation and calibration data (here it would be delta(x) = g(x) - f(x)). Perhaps your problem can be phrased in terms of whether the bias function is significantly nonzero anywhere, and if so, where.

There are various places to go to learn about this stuff if it interests you; here are a few links. The first two are to the home pages of two working groups at SAMSI (an interdisciplinary statistics/applied math institute), which is devoting this year to research on methods and applications in analysis of computer models.

http://www.stat.duke.edu/~fei/samsi/index.html
http://www4.stat.ncsu.edu/~gawhite/SAMSIweb/
http://mucm.group.shef.ac.uk/
http://cran.r-project.org/src/contrib/Descriptions/BACCO.html
http://www2.imm.dtu.dk/~hbn/dace/

A particularly accessible paper is by the Los Alamos statistics group:

http://portal.acm.org/citation.cfm?id=1039891.1039922

Again, I don't think this literature directly addresses your problem, but it may provide the "right" way to approach it, if you are willing to do some work to connect your problem to this framework. The short answer is that I think you will be hard pressed to get a good answer to your question using off-the-shelf SciPy tools.

Good luck,
Tom Loredo
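A rough numpy sketch of the emulator idea, assuming a squared-exponential covariance and noise-free training runs; the kernel choice, length scale, and data here are illustrative assumptions, not code from the packages above:

import numpy as np

# Gaussian-process interpolation: predictive mean plus uncertainty.
def sqexp(a, b, scale=1.0):
    # squared-exponential covariance between 1-D input arrays a and b
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / scale) ** 2)

X = np.array([0.0, 1.0, 2.0, 3.0])         # simulator inputs (training runs)
Y = np.sin(X)                              # simulator outputs (stand-in for g)
xs = np.linspace(0.0, 3.0, 7)              # prediction points

K = sqexp(X, X) + 1e-10 * np.eye(len(X))   # jitter for numerical stability
Ks = sqexp(xs, X)
mean = Ks @ np.linalg.solve(K, Y)          # emulator prediction at xs
cov = sqexp(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # interpolation uncertainty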
From: Travis O. <oli...@ie...> - 2006-10-23 21:29:17
Albert Strasheim wrote:
> Hello all
>
> I'm trying to sort an array with two fields, but I'm getting a result that
> doesn't seem to make sense.
>
> What I tried (first attempt): I have two 2-D arrays. I would like to sort
> one based on the sort of the other. I managed to do this with argsort.
> However, the fancy indexing required to get the sorted array using what
> argsort returned was very slow. I followed this example:
>
> http://www.scipy.org/Numpy_Example_List#head-9f8656795227e3c43e849c6c0435eeeb32afd722
>
> What I tried (second attempt): I created an array with two fields. I
> think/hope/expected that sorting the array would sort it on the first field
> in the dtype and then on the second. This is *much* faster than the fancy
> indexing approach.

That kind of sorting is what lexsort does (although with indexes) and is more complicated than what the sort routine does in NumPy. The sorting routines in NumPy use the output of the comparison operator for the type to compute the result. An array with fields is of type void. Right now, the VOID_compare routine is equivalent to the STRING_compare routine (i.e. raw bytes are compared). I doubt this will do what you want in most cases. It would be possible to adapt this compare routine when fields are present and do something like compare the first field first and the second field only if the first is equal. But this would require a bit of work and is probably best left for 1.0.1 or later.

-Travis
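For comparison, a small sketch of the lexsort route mentioned above (current numpy; the data are made up). Note that lexsort takes its keys with the *last* key primary and returns indices rather than sorted values:

import numpy as np

primary = np.array([1, 1, 0, 2])
secondary = np.array([9, 3, 5, 7])
order = np.lexsort((secondary, primary))  # sort by primary, ties by secondary
print(primary[order])    # [0 1 1 2]
print(secondary[order])  # [5 3 9 7]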
From: Mathew Y. <my...@jp...> - 2006-10-23 21:22:12
Thank you!

Travis Oliphant wrote:
> Mathew Yeates wrote:
>
>> Hi
>> Is there any support for combinatorics in numpy or scipy? Actually, all
>> I need is to evaluate is (n/m) i.e. n choose m.
>
> In [3]: scipy.comb(109,54,exact=1)
> Out[3]: 49263609265046928387789436527216L
>
> 109 choose 54
>
> -Travis
From: Travis O. <oli...@ie...> - 2006-10-23 20:56:12
Mathew Yeates wrote:
> Hi
> Is there any support for combinatorics in numpy or scipy? Actually, all
> I need is to evaluate is (n/m) i.e. n choose m.

In [3]: scipy.comb(109,54,exact=1)
Out[3]: 49263609265046928387789436527216L

109 choose 54

-Travis
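For readers without scipy, a pure-Python sketch of the same exact computation (the function name here is our own, not a library call); the multiplicative formula keeps every intermediate value an exact integer:

def choose(n, m):
    # C(n, m) built up as C(n, i) = C(n, i-1) * (n - i + 1) // i
    result = 1
    for i in range(1, m + 1):
        result = result * (n - i + 1) // i
    return result

print(choose(109, 54))  # 49263609265046928387789436527216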
From: Mathew Y. <my...@jp...> - 2006-10-23 20:50:59
Hi

Is there any support for combinatorics in numpy or scipy? Actually, all I need to evaluate is (n/m), i.e. n choose m. If numpy doesn't have it, anyone know of a good public domain prog?

Mathew
From: Colin J. W. <cj...@sy...> - 2006-10-23 18:22:53
-------- Original Message --------
Subject: numpy error
Date: Mon, 23 Oct 2006 20:01:33 +0200
From: Juergen Kareta <ka...@we...>
Newsgroups: gmane.comp.python.general

Hello,

this is my first try to get wxmpl-1.2.8 running. Therefore I installed:

python 2.5
matplotlib-0.87.6.win32-py2.5.exe
numpy-1.0rc3.win32-py2.5.exe

on WinXP SP2. The result is a version mismatch (see below). Numpy version 1000002 seems to be numpy-1.0b5, which is not downloadable anymore. Any hints? Thanks in advance.

Jürgen

traceback:

from pylab import *

RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009

The import of the numpy version of the nxutils module, _nsnxutils, failed. This is either because numpy was unavailable when matplotlib was compiled, because a dependency of _nsnxutils could not be satisfied, or because the build flag for this module was turned off in setup.py. If it appears that _nsnxutils was not built, make sure you have a working copy of numpy and then re-install matplotlib. Otherwise, the following traceback gives more details:

Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "C:\Python25\Lib\site-packages\pylab.py", line 1, in <module>
from matplotlib.pylab import *
File "C:\Python25\Lib\site-packages\matplotlib\pylab.py", line 199, in <module>
import mlab #so I can override hist, psd, etc...
File "C:\Python25\Lib\site-packages\matplotlib\mlab.py", line 64, in <module>
import nxutils
File "C:\Python25\Lib\site-packages\matplotlib\nxutils.py", line 17, in <module>
from matplotlib._ns_nxutils import *
ImportError: numpy.core.multiarray failed to import
From: Stefan v. d. W. <st...@su...> - 2006-10-23 16:33:58
On Mon, Oct 23, 2006 at 11:57:57AM -0400, Scott Ransom wrote:
> I believe that ix_() has recently begun modifying the shapes of its
> input arrays. For instance: [...]

This should be fixed in SVN.

Cheers
Stéfan
From: Scott R. <sr...@nr...> - 2006-10-23 15:58:35
I believe that ix_() has recently begun modifying the shapes of its input arrays. For instance:

Python 2.4.4c0 (#2, Jul 30 2006, 18:20:12)
[GCC 4.1.2 20060715 (prerelease) (Debian 4.1.1-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as N
>>> a = N.array([1,2,3])
>>> b = N.array([7,6,5,4])
>>> ax, bx = N.ix_(a, b)
>>> a
array([[1],
       [2],
       [3]])
>>> b
array([[7, 6, 5, 4]])
>>> N.__version__
'1.0.dev3379'

Is this intended behaviour?

Thanks,
Scott

--
Scott M. Ransom    Address: NRAO, 520 Edgemont Rd., Charlottesville, VA 22903 USA
Phone: (434) 296-0320    email: sr...@nr...
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
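For context, a short sketch of what ix_ is meant to do (current numpy; data invented): build an open mesh so a submatrix can be picked out by row and column index lists, leaving the inputs untouched:

import numpy as np

a = np.array([0, 2])
b = np.array([1, 3])
m = np.arange(16).reshape(4, 4)
print(m[np.ix_(a, b)])   # rows 0 and 2, columns 1 and 3
print(a.shape, b.shape)  # should still be (2,) and (2,)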
From: Pierre GM <pgm...@ma...> - 2006-10-23 12:52:28
Folks,

I updated the alternative implementation of MaskedArray on the wiki, mainly to correct a couple of bugs. (http://projects.scipy.org/scipy/numpy/wiki/MaskedArray)

In addition, I attached another file, maskedrecordarray, which introduces a new class, MaskedRecord, as a subclass of recarray and MaskedArray. An instance of this class accepts a recarray as data, and uses two masks: the 'recordmask' has as many entries as records in the array, each entry with the same fields as a record, but of boolean types, indicating whether a field is masked or not; an entry is flagged as masked in the 'mask' array if at least one field is masked. The 'mask' object is introduced mostly for compatibility with MaskedArray; only 'recordmask' is really useful. A few examples in the file should give you an idea of what can be done. In particular, you can define a new maskedrecord array as simply as:

a = masked_record([('Alan',29,200.), ('Bill',31,260.0)],
                  dtype=[('name','S30'),('age',int_),('weight',float_)],
                  mask=[(1,0,0), (0,0,0)])

Note that maskedrecordarray is still quite experimental. As I'm not a regular user of records, I don't really know what should be implemented... The file can be accessed at http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/maskedrecordarray.py

Once again, I need your comments and suggestions! Thanks.

Pierre
From: Mark H. <ma...@hy...> - 2006-10-23 11:00:36
On Thu, 19 Oct 2006 at 08:29:26AM -0600, Travis Oliphant spoke thus..
> Actually, you shouldn't be getting an INF at all. This is what the
> test is designed to test for (so I guess it's working). The test was
> actually written wrong and was never failing because previously keyword
> arguments to ufuncs were ignored.
>
> Can you show us what 'a' is on your platform.

Hi, I've just done a Mac OS X PPC build of the SVN trunk and am getting this failure too.

nidesk046:~/scratch/upstream/scipy mark$ python
Python 2.4.1 (#2, Mar 31 2005, 00:05:10)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as N
>>> N.__version__
'1.0.dev3378'
>>> N.array([1000],dtype=N.float).dtype
dtype('float64')
>>> N.array([1000],dtype=N.longfloat).dtype
dtype('float128')
>>> N.test()
...snip...
FAIL: Ticket #112
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 220, in check_longfloat_repr
assert(str(a)[1:9] == str(a[0])[:8])
AssertionError

>>> a = N.exp(N.array([1000],dtype=N.longfloat))
>>> str(a)
'[inf]'

Any ideas about this?

Mark

--
Mark Hymers <mark at hymers dot org dot uk>
"I once absent-mindedly ordered Three Mile Island dressing in a restaurant and, with great presence of mind, they brought Thousand Island Dressing and a bottle of chili sauce." Terry Pratchett, alt.fan.pratchett
From: Andrew S. <str...@as...> - 2006-10-23 07:41:06
It sounds like your hardware drivers may be buggy -- you should only get segfaults, not (the Windows equivalent of) kernel panics, when your userspace code accesses wrong memory. But if you have buggy hardware drivers, I suppose it's possible that locking the memory will help. This wouldn't be the domain of numpy, however. In Linux, this is achieved with a system call such as mlock() or mlockall(). You'll have to figure out the appropriate call in Windows. Thinking about it, if your hardware drivers require the memory to be locked, they should do it themselves.

However, I'm not convinced this is the real issue. It seems at least equally likely that your hardware drivers were developed with a particular pattern of timing when accessing the buffers, but now you may be attempting to hold a buffer longer (preventing the driver from writing to it) than the developer ever tested. It shouldn't blue-screen, but it does... I think it quite likely that you have some buggy hardware drivers. What hardware is it?

-Andrew

Lars Friedrich wrote:
> Hello all,
>
> to interact with some hardware (data retrieval), I use the following
> scheme (Windows, Python 2.4):
>
> * in Python, I create a numpy array as a buffer
> * I pass this array to a self-written .dll using ctypes
> * the C code in the .dll passes the pointer to the buffer to the API of
> the hardware; then the API starts writing data to my buffer
> * from Python I can use a helper function of the .dll to know which part
> of the buffer is safe to be read at the moment; so I can copy this part
> to a different numpy array and work with the data
>
> This works quite well, but today I got a blue screen, reporting some
> paging problem. It occurred after doing some other stuff like moving some
> windows on the screen and importing pylab, and I heard the harddisk
> working hard, so I assume that the part of memory my buffer is in was
> paged to the harddisk. This is a problem, since the hardware driver will
> continuously try to write to this specific memory location I gave to it.
>
> My primary question is how to avoid a numpy array being paged. In the
> ideal case there would be a flag to set, to make sure that this array
> always stays at this position in physical memory.
>
> Of course I am also interested in other people's work on hardware
> access. Do you think the above described way is a good one? To me it was
> the best way because I could do as much as possible in Python and keep
> my C-coded .dll very small. Is anyone doing similar things?
>
> Thanks for every comment
>
> Lars
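A hedged, Linux-only sketch of the mlock() route mentioned above, assuming glibc is loadable as libc.so.6 and the process is allowed to lock this much memory; the Windows equivalent (VirtualLock) is not shown:

import ctypes
import numpy as np

libc = ctypes.CDLL("libc.so.6", use_errno=True)

buf = np.zeros(1024 * 1024, dtype=np.uint8)   # the acquisition buffer
addr = ctypes.c_void_p(buf.ctypes.data)
size = ctypes.c_size_t(buf.nbytes)
if libc.mlock(addr, size) != 0:               # pin the pages in RAM
    raise OSError(ctypes.get_errno(), "mlock failed")
# ... hand buf to the hardware API, read data out ...
libc.munlock(addr, size)                      # release when done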
From: Lars F. <lfr...@im...> - 2006-10-23 06:25:29
Hello all,

to interact with some hardware (data retrieval), I use the following scheme (Windows, Python 2.4):

* in Python, I create a numpy array as a buffer
* I pass this array to a self-written .dll using ctypes
* the C code in the .dll passes the pointer to the buffer to the API of the hardware; then the API starts writing data to my buffer
* from Python I can use a helper function of the .dll to know which part of the buffer is safe to be read at the moment; so I can copy this part to a different numpy array and work with the data

This works quite well, but today I got a blue screen, reporting some paging problem. It occurred after doing some other stuff like moving some windows on the screen and importing pylab, and I heard the harddisk working hard, so I assume that the part of memory my buffer is in was paged to the harddisk. This is a problem, since the hardware driver will continuously try to write to this specific memory location I gave to it.

My primary question is how to avoid a numpy array being paged. In the ideal case there would be a flag to set, to make sure that this array always stays at this position in physical memory.

Of course I am also interested in other people's work on hardware access. Do you think the above described way is a good one? To me it was the best way because I could do as much as possible in Python and keep my C-coded .dll very small. Is anyone doing similar things?

Thanks for every comment

Lars

--
Dipl.-Ing. Lars Friedrich
Optical Measurement Technology, Department of Microsystems Engineering -- IMTEK, University of Freiburg
Georges-Köhler-Allee 102, D-79110 Freiburg, Germany
phone: +49-761-203-7531  fax: +49-761-203-7537  room: 01 088  email: lfr...@im...
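A minimal sketch of the scheme described above, with a made-up DLL name and function signatures standing in for the real hardware API:

import ctypes
import numpy as np

lib = ctypes.CDLL("acquire.dll")              # hypothetical vendor DLL

buf = np.empty(2 ** 20, dtype=np.uint16)      # buffer the driver writes into
lib.start_acquisition(ctypes.c_void_p(buf.ctypes.data),
                      ctypes.c_size_t(buf.size))
ready = lib.samples_ready()                   # how many samples are safe to read
chunk = buf[:ready].copy()                    # copy out before the driver reuses it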
From: Charles R H. <cha...@gm...> - 2006-10-23 03:22:24
On 10/22/06, Albert Strasheim <fu...@gm...> wrote:
>
> Hello all
>
> I'm trying to sort an array with two fields, but I'm getting a result that
> doesn't seem to make sense.
>
> What I tried (first attempt): I have two 2-D arrays. I would like to sort
> one based on the sort of the other. I managed to do this with argsort.
> However, the fancy indexing required to get the sorted array using what
> argsort returned was very slow. I followed this example:
>
> http://www.scipy.org/Numpy_Example_List#head-9f8656795227e3c43e849c6c0435eeeb32afd722

It is certainly awkward. I am going to add a function to do this after 1.0 comes out. Sounds like you need it now.

> What I tried (second attempt): I created an array with two fields. I
> think/hope/expected that sorting the array would sort it on the first field
> in the dtype and then on the second. This is *much* faster than the fancy
> indexing approach.

I believe it sorts on the two fields together as one big binary blob.

<snip>

> Output on my system:
>
> 1.0.dev3376
>
> before sort:
> [[(1.0, 1) (2.0, 2)]
> [(3.0, 3) (4.0, 4)]]
>
> after sort:
> [[(2.0, 2) (1.0, 1)]
> [(4.0, 4) (3.0, 3)]]
>
> The already sorted array has been unsorted in some way. Any thoughts?

I suspect something to do with the binary representation of the floats. Probably depends on big/little endian also. The floats 1.0, 2.0, and 4.0 all have zero mantissas while 3.0 has a one. The sort behaves differently if uint8 is used for both fields.

In [3]: dt = dtype([('aaa', uint8), ('bbb', uint8)])
In [4]: x = empty((2, 2), dt)
In [5]: xa = x[dt.names[0]]
In [6]: xa[:] = [[1,2], [3,4]]
In [7]: xb = x[dt.names[1]]
In [8]: xb[:] = [[1,2], [3,4]]
In [9]: x.sort()
In [11]: x
Out[11]:
array([[(1, 1), (2, 2)],
       [(3, 3), (4, 4)]],
      dtype=[('aaa', '|u1'), ('bbb', '|u1')])

Chuck
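The byte-level effect is easy to see on a little-endian machine (a sketch in current numpy, with values chosen to match the report above): compared as raw bytes, 1.0 sorts *after* 2.0, 3.0 and 4.0, because its significand byte 0xf0 is the first byte that differs:

import numpy as np

vals = [1.0, 2.0, 3.0, 4.0]
# emulate a byte-wise "binary blob" comparison of float64 values
by_bytes = sorted(vals, key=lambda v: np.float64(v).tobytes())
print(by_bytes)  # [2.0, 3.0, 4.0, 1.0] on little-endian hardware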
From: Robert K. <rob...@gm...> - 2006-10-23 02:44:20
Jeremy R. Fishman wrote:
> Hi, I was wondering if anyone can give me some advice on how to go about
> cross-compiling NumPy. I have been searching around and can't find any
> support in distutils for cross compilation. Is there some way I can
> still compile Numerical Python using a mipsel-linux compiler, on say a
> Cygwin host?

I'm afraid that distutils really does not support cross-compilation. numpy adds some more complications in that it tries to configure itself by compiling and running small programs.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From: Jeremy R. F. <jfi...@bo...> - 2006-10-23 02:06:00
Hi, I was wondering if anyone can give me some advice on how to go about cross-compiling NumPy. I have been searching around and can't find any support in distutils for cross compilation. Is there some way I can still compile Numerical Python using a mipsel-linux compiler on, say, a Cygwin host? The end result would be to use NumPy with an embedded Python interpreter running on a mipsel host, specifically the OPEN-R system by Sony for use on Aibo robotic dogs. I need to figure out how to get setup to use the mipsel compiler, not how to set up the compiler.

Thanks,
Jeremy Fishman
From: Albert S. <fu...@gm...> - 2006-10-23 00:48:04
Hello all

I'm trying to sort an array with two fields, but I'm getting a result that doesn't seem to make sense.

What I tried (first attempt): I have two 2-D arrays. I would like to sort one based on the sort of the other. I managed to do this with argsort. However, the fancy indexing required to get the sorted array using what argsort returned was very slow. I followed this example:

http://www.scipy.org/Numpy_Example_List#head-9f8656795227e3c43e849c6c0435eeeb32afd722

What I tried (second attempt): I created an array with two fields. I think/hope/expected that sorting the array would sort it on the first field in the dtype and then on the second. This is *much* faster than the fancy indexing approach.

Code:

import numpy as N
print N.__version__
print
dt = N.dtype([('aaa', N.float64), ('bbb', N.uint8)])
x = N.empty((2, 2), dt)
xa = x[dt.names[0]]
xa[:] = [[1.,2], [3.,4.]]
xb = x[dt.names[1]]
xb[:] = [[1, 2], [3, 4]]
print 'before sort:'
print x
print
x.sort(kind='quicksort') # other kinds not supported
print 'after sort:'
print x

Output on my system:

1.0.dev3376

before sort:
[[(1.0, 1) (2.0, 2)]
[(3.0, 3) (4.0, 4)]]

after sort:
[[(2.0, 2) (1.0, 1)]
[(4.0, 4) (3.0, 3)]]

The already sorted array has been unsorted in some way. Any thoughts? Thanks!

Regards, Albert
From: Robert K. <rob...@gm...> - 2006-10-22 23:13:19
hu...@ya... wrote:
> Hello,
>
> the docstring for compress in numpy gives this:
>
> help(numpy.compress)
>
> compress(condition, m, axis=None, out=None)
> compress(condition, x, axis=None) = those elements of x corresponding
> to those elements of condition that are "true". condition must be the
> same size as the given dimension of x.
>
> So (but perhaps I misunderstand the help due to my English) I don't
> understand the following error; to me the a and c arrays have the same
> dimension and size. Can someone explain the result please?

The docstring is a bit underspecified. The condition array *must* be a 1-D array with the same size *as the given axis* of the other array (using the convention that axis=None implies operating over the flattened array). There's simply no valid interpretation of this, for example:

compress(array([[1, 0, 0], [1, 1, 0]]), arange(6).reshape(2,3))

since numpy arrays cannot be "ragged".

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
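To make the rule concrete, a short sketch (current numpy, invented data) of calls that do satisfy it:

import numpy as np

a = np.arange(9).reshape(3, 3)
print(np.compress([1, 0, 1], a, axis=0))  # condition matches axis 0: keep rows 0 and 2
print(np.compress(np.ones(9), a))         # axis=None: condition matches the 9 flattened elements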
From: <hu...@ya...> - 2006-10-22 22:45:51
Hello,

the docstring for compress in numpy gives this:

help(numpy.compress)

compress(condition, m, axis=None, out=None)
compress(condition, x, axis=None) = those elements of x corresponding to those elements of condition that are "true". condition must be the same size as the given dimension of x.

So (but perhaps I misunderstand the help due to my English) I don't understand the following error; to me the a and c arrays have the same dimension and size. Can someone explain the result please?

Thanks,
N.

In [86]: a = numpy.arange(9)
In [87]: a = numpy.arange(9).reshape(3,3)
In [88]: c = numpy.ones(9).reshape(3,3)
In [89]: numpy.compress(c,a)
---------------------------------------------------------------------------
exceptions.ValueError    Traceback (most recent call last)
/home/gruel/tmp/Astro/FATBOY/<ipython console>
/home/gruel/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py in compress(condition, m, axis, out)
    353     except AttributeError:
    354         return _wrapit(m, 'compress', condition, axis, out)
--> 355     return compress(condition, axis, out)
    356
    357 def clip(m, m_min, m_max):
ValueError: condition must be 1-d array
From: Brian G. <ell...@gm...> - 2006-10-22 22:19:58
So, I have figured out the problem and have a solution. I have submitted a ticket for this:

http://projects.scipy.org/scipy/numpy/ticket/362

That describes the problem and solution. Thanks for everyone's ideas on this.

Brian

On 10/21/06, Travis Oliphant <oli...@ie...> wrote:
> Brian Granger wrote:
> > Hi,
> >
> > i am running numpy on aix compiling with xlc. Revision 1.0rc2 works
> > fine and passes all tests. But 1.0rc3 and more recent give the
> > following on import:
>
> Most likely the error-detection code is not working on your platform.
> The platform dependent stuff is not that difficult. I tried to
> implement something for AIX, but very likely got it wrong (and don't
> have a platform to test it on). It is the UFUNC_CHECK_STATUS that must
> be implemented. Perhaps, we can do a simple check and disable the
> error modes:
>
> seterr(all='ignore')
>
> will work and "turn-off" error-detection on your platform.
>
> -Travis
From: Brian G. <ell...@gm...> - 2006-10-22 21:37:48
> Most likely the error-detection code is not working on your platform.
> The platform dependent stuff is not that difficult. I tried to
> implement something for AIX, but very likely got it wrong (and don't
> have a platform to test it on). It is the UFUNC_CHECK_STATUS that must
> be implemented. Perhaps, we can do a simple check and disable the
> error modes:

I looked at the UFUNC_CHECK_STATUS implementation for AIX and put in some print statements. The fpstatus returned to fp_read_flag is always indicating an FP_INVALID, so UFUNC_CHECK_STATUS returns 8. But the code looks fine to me. It seems a little odd that it would always indicate this flag. Where else can I look to try to see what is going on? Do you think this is worth worrying about, or should we just use seterr(all='ignore')? I am willing to try to hunt this down, but I don't know much about the internals of numpy.

Thanks

Brian

> seterr(all='ignore')
>
> will work and "turn-off" error-detection on your platform.
>
> -Travis
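The suggested workaround in sketch form (written against current numpy, where np.errstate is the scoped variant):

import numpy as np

old = np.seterr(all='ignore')   # returns the previous settings
# ... run the code that trips the spurious FP_INVALID flag ...
np.seterr(**old)                # restore error handling

with np.errstate(all='ignore'): # scoped alternative
    np.log(np.array([0.0]))     # would otherwise warn about divide-by-zero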
From: Simon B. <si...@ar...> - 2006-10-22 21:36:20
Apologies if this is too off-topic for these lists, but I hope some people here find this interesting!

PyDX - first public announcement
================================

Overview
--------

PyDX is a package for working with calculus (differential geometry), arbitrary precision arithmetic (using gmpy), and interval arithmetic. It provides:

* multivariate automatic differentiation (to arbitrary order)
* Tensor objects for computations in differential geometry
* Interval scalars (based on libMPFI) for calculating rigorous bounds on arithmetic operations (validated numerics)
* an arbitrary order validated ODE solver

PyDX uses lazy computation techniques to greatly enhance performance of the resulting functions. This code grew out of a research project at The Australian National University, Department of Physics, which involved computing bounds on geodesics in relativistic space-times.

Documentation
-------------

http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html
http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/api/index.html

Subversion Repository
---------------------

http://gr.anu.edu.au/svn/people/sdburton/pydx/

Simon Burton
October 22 2006
The Australian National University
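As a toy illustration of the first feature above: forward-mode automatic differentiation to first order can be done with dual numbers in a few lines. This is a generic sketch, not PyDX's API; PyDX generalizes the idea to arbitrary order and several variables:

class Dual:
    # value and first derivative carried together: x + x' eps
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)        # seed dx/dx = 1
y = x * x + 2 * x + 1     # f(x) = x^2 + 2x + 1
print(y.val, y.der)       # 16.0 8.0, i.e. f(3) and f'(3)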
From: Bill B. <wb...@gm...> - 2006-10-22 17:10:57
Don't know if this is any use, but to me (not knowing Fortran nearly so well as C++) this looks pretty useful:

http://www.caam.rice.edu/software/ARPACK/arpack++.html
http://www.ime.unicamp.br/~chico/arpack++/

It provides a nice high-level interface on top of ARPACK. I could see it being useful on a number of levels:

1) actually use its wrappers directly instead of calling the f2py'ed fortran code - at least as a stop-gap measure to cover the holes ("shifted modes" etc.) - this could potentially even be faster, since the reverse-communication interface requires some iteration, and this way the loops would run in compiled C++ instead of python.
2) use its code just as a reference to help write more wrappers to the fortran code
3) if nothing else, it has pretty decent documentation. For instance, it includes a nice table of the amount of storage you need to reserve for various different calling modes. (It seems ARPACK's main documentation is not freely available, so I'm not sure what's in it, but if you have that then ARPACK++'s docs may not be much help.)

As far as I can tell, it must be under the same licensing terms as ARPACK itself. It doesn't actually specify anything as far as I could see. But it seems to be linked prominently from the ARPACK website and appears as an offshoot project.

--bb

On 10/22/06, Aric Hagberg <ha...@la...> wrote:
> On Sat, Oct 21, 2006 at 02:05:42PM -0700, Keith Goodman wrote:
> > Did you, or anybody else on the list, have any luck making a numpy
> > version of eigs?
>
> I made a start at an ARPACK wrapper, see
> http://projects.scipy.org/scipy/scipy/ticket/231
> and the short thread at scipy-dev
> http://thread.gmane.org/gmane.comp.python.scientific.devel/5166/focus=5175
>
> In addition to the wrapper there is a Python interface (and some tests).
> I don't know if the interface is like "eigs" - I don't use Matlab.
>
> It will give you a few eigenvalues and eigenvectors for the standard
> eigenproblem (Ax=lx) for any type of A (symmetric/nonsymmetric, real/complex,
> single/double, sparse/nonsparse).
>
> The generalized and shifted modes are not implemented.
> I need to find some time (or some help) to get it finished.
>
> Regards,
> Aric