You can subscribe to this list here.

Archive (messages per month):

Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|----
2000 |   8 |  49 |  48 |  28 |  37 |  28 |  16 |  16 |  44 |  61 |  31 |  24
2001 |  56 |  54 |  41 |  71 |  48 |  32 |  53 |  91 |  56 |  33 |  81 |  54
2002 |  72 |  37 | 126 |  62 |  34 | 124 |  36 |  34 |  60 |  37 |  23 | 104
2003 | 110 |  73 |  42 |   8 |  76 |  14 |  52 |  26 | 108 |  82 |  89 |  94
2004 | 117 |  86 |  75 |  55 |  75 | 160 | 152 |  86 |  75 | 134 |  62 |  60
2005 | 187 | 318 | 296 | 205 |  84 |  63 | 122 |  59 |  66 | 148 | 120 |  70
2006 | 460 | 683 | 589 | 559 | 445 | 712 | 815 | 663 | 559 | 930 | 373 |
From: Travis O. <oli...@ie...> - 2006-07-26 19:14:29
|
Andrew Jaffe wrote:
> Hi-
>
> On PPC Mac OSX universal build 2.4.3, gcc 4.0,
>
> In [1]: import numpy as N
> In [2]: print N.__version__
> 1.0.2897
> In [3]: N.random.uniform(0,1)
> Segmentation fault
>
> (This originally showed up in the Ticket 83 regression test during
> numpy.test()...)

This should be O.K. now.

-Travis |
From: Mike R. <mik...@gm...> - 2006-07-26 15:52:18
|
My apologies if this is a duplicate - my first attempt doesn't seem to have gone back to the list.

---------- Forwarded message ----------
From: Mike Ressler <mik...@al...>
Date: Jul 25, 2006 12:17 PM
Subject: Re: ***[Possible UCE]*** [Numpy-discussion] Bug in memmap/python allocation code?
To: Travis Oliphant <oli...@ie...>
Cc: Num...@li...

On 7/24/06, Travis Oliphant <oli...@ie...> wrote:
> Mike Ressler wrote:
> > I'm trying to work with memmaps on very large files, i.e. > 2 GB, up
> > to 10 GB. Can't believe I'm really the first, but so be it.
>
> I just discovered the problem. All the places where
> PyObject_As<Read/Write>Buffer is used need to have the final argument
> changed to Py_ssize_t (which in arrayobject.h is defined as int if you
> are using less than Python 2.5).
>
> This should be fixed in SVN shortly....

Yeess! My little script can handle everything I've thrown at it now. It can read a 10 GB raw file, strip the top 16 bits, rearrange pixels, byte swap, and write it all back to a 5 GB file in 16 minutes flat. Not bad at all. And I've verified that the output is correct ...

If someone can explain the rules of engagement for Lightning Talks, I'm thinking about presenting this at SciPy 2006. Then you'll see there is a reason for my madness.

As an aside, the developer pages could use some polish on explaining the different svn areas, and how to get what one wants. An svn checkout as described on the page gets you the 1.1 branch that DOES NOT have the updated memmap fix. After a minute or two of exploring, I found that "svn co http://svn.scipy.org/svn/numpy/branches/ver1.0/numpy numpy" got me what I wanted.

Thanks for your help and the quick solution. FWIW, I got my copy of the book a couple of weeks ago; very nice.

Mike

--
mik...@al... |
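Scaled down by several orders of magnitude, the workflow Mike describes (memmap a raw 32-bit cube, strip the top 16 bits, write a half-size 16-bit file) can be sketched with numpy's memmap; the shape, file names, and fill data below are stand-ins for illustration, not his actual dataset:

```python
import os
import tempfile

import numpy as np

# Toy-sized stand-in for the >2 GB cube: map a uint32 file, keep only the
# low 16 bits of each pixel, and write the result to a half-size uint16 file.
shape = (4, 8, 8)  # placeholder for the real (2011, 1280, 1032)
tmpdir = tempfile.mkdtemp()

src = np.memmap(os.path.join(tmpdir, "raw.dat"), mode="w+",
                shape=shape, dtype=np.uint32)
src[:] = np.arange(src.size, dtype=np.uint32).reshape(shape)  # fake data

dst = np.memmap(os.path.join(tmpdir, "out.dat"), mode="w+",
                shape=shape, dtype=np.uint16)
dst[:] = (src & 0xFFFF).astype(np.uint16)  # strip the top 16 bits
dst.flush()

print(dst.nbytes, src.nbytes)  # the output file is half the size of the input
```

Per-plane rearrangement and byte swapping would slot in as extra array operations on each `dst[z]` slice before the flush.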
From: Andrew J. <a.h...@gm...> - 2006-07-26 13:53:40
|
Hi-

On PPC Mac OSX universal build 2.4.3, gcc 4.0,

In [1]: import numpy as N
In [2]: print N.__version__
1.0.2897
In [3]: N.random.uniform(0,1)
Segmentation fault

(This originally showed up in the Ticket 83 regression test during numpy.test()...)

Andrew |
From: Albert S. <fu...@gm...> - 2006-07-26 00:32:08
|
Hey Mathew

The problem is that ATLAS doesn't provide all the LAPACK functions, only a few that the ATLAS developers have optimized. To get a complete LAPACK library, you need to build the Fortran LAPACK library, and then put the ATLAS-optimized functions into this library. Details here:

http://math-atlas.sourceforge.net/errata.html#completelp

Regards, Albert

> -----Original Message-----
> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Mathew Yeates
> Sent: 26 July 2006 00:29
> To: Num...@li...
> Subject: [Numpy-discussion] lapack too small?
>
> When I try and build I get the warning
> *********************************************************************
> Lapack library (from ATLAS) is probably incomplete:
>   size of /u/fuego0b/myeates/ATLAS/lib/SunOS_HAMMER32SSE3/liblapack.a is 318k (expected >4000k)
>
> Follow the instructions in the KNOWN PROBLEMS section of the file
> numpy/INSTALL.txt.
> *********************************************************************
>
> But, there is no such file INSTALL.txt
> Whats wrong? This is on Solaris and I built the ATLAS libs myself.
>
> Mathew |
From: Mathew Y. <my...@jp...> - 2006-07-26 00:28:19
|
When I try and build I get the warning:

*********************************************************************
Lapack library (from ATLAS) is probably incomplete:
  size of /u/fuego0b/myeates/ATLAS/lib/SunOS_HAMMER32SSE3/liblapack.a is 318k (expected >4000k)

Follow the instructions in the KNOWN PROBLEMS section of the file numpy/INSTALL.txt.
*********************************************************************

But there is no such file INSTALL.txt. What's wrong? This is on Solaris, and I built the ATLAS libs myself.

Mathew |
From: Sven S. <sve...@gm...> - 2006-07-25 21:47:10
|
Hi,

I upgraded from 1.0b1 because I saw that the matrix-indexing bug was already fixed (thanks!). Now there's a new regression:

>>> import numpy as n
>>> n.__version__
'1.0.2891'
>>> a = n.mat(n.eye(3))
>>> n.linalg.cholesky(a)
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])

This used to spit out a numpy-matrix, given the numpy-matrix input.

-Sven |
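Until such a regression is fixed in the library, a caller can re-wrap the result themselves. A sketch of that workaround (this is caller-side mitigation, not the fix to numpy.linalg; `asmatrix` is a no-op if the subclass was already preserved):

```python
import numpy as np

# Re-wrap a linalg result so that matrix input yields matrix output,
# even if the function returned a plain ndarray.
a = np.asmatrix(np.eye(3))
L = np.asmatrix(np.linalg.cholesky(a))

print(type(L).__name__)  # matrix
```

With `L` guaranteed to be a matrix, `L * L.T` is again a true matrix product reproducing `a`.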
From: David G. <dav...@gm...> - 2006-07-25 21:29:10
|
On 7/25/06, Stephen Simmons <ma...@st...> wrote: > > Hi, > > I've written some numpy functions for grouping, translating and > subtotalling data. At the moment they exist as pure Python code, but I > have started rewriting them in C for speed. > > Did you check out Pyrex first, before looking into writing C extensions? Dave |
From: Stephen S. <ma...@st...> - 2006-07-25 20:51:12
|
Hi, I've written some numpy functions for grouping, translating and subtotalling data. At the moment they exist as pure Python code, but I have started rewriting them in C for speed. As this is my first attempt at a C extension for numpy, I'd appreciate any suggestions for good numpy coding style to make this work for strided arrays, multiple data types, etc. My first go at a C version is very low level (following bincount()'s code in _compiled_base.c as a guide) and works for integer and character arrays. Are there other (possible more recent) numpy source files that I should use as a guide to writing fast, clean, flexible code? Cheers Stephen |
From: Christopher B. <Chr...@no...> - 2006-07-25 18:47:13
|
Lars Friedrich wrote:
>> In this case, you're writing a dll that understands PyObjects (or, I
>> assume, a particular PyObject -- a numarray). Why not just forget ctypes
>> and write a regular old extension?
> good point. I am relatively new to Python. The first thing I did was
> trying to write a regular extension. The problem is, that I *have to*
> use a windows-machine. And at the moment, only Visual C++ 6.0 is
> available here. The problem is that for a regular extension, Python and
> the extension need to be compiled by the same compiler AFAIK.

That's mostly true. You have three options:

1) Re-compile python yourself -- but then you'd also have to re-compile all the other extensions you use!

2) You can also use MinGW to compile extensions -- it takes a bit of kludging, but it can be done, and works fine once you've got it set up. Google will help you figure out how -- it's been a while since I've done it. I have to say that I find it ironic that you can use MinGW, but not other versions of the MS compiler!

3) MS distributes a command-line version of their compiler for free that can be used. Again, google should help you find out how to do that.

However, as other posters mentioned, you can use ctypes as it was intended with numpy -- that may be the way to go.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT           (206) 526-6959 voice
7600 Sand Point Way NE     (206) 526-6329 fax
Seattle, WA 98115          (206) 526-6317 main reception
Chr...@no... |
From: Sven S. <sve...@gm...> - 2006-07-25 17:20:17
|
Robert Kern schrieb: > Sven Schreiber wrote: >> Hi, >> >> there was a thread about this before, diag() is currently only >> partly useful if you work with numpy-matrices, because the 1d->2d >> direction doesn't work, as there are no 1d-numpy-matrices. This is >> unfortunate because a numpy-matrix with shape (n,1) or (1,m) should be >> naturally treated as a vector, imho. So it would be nice if this could >> be fixed. >> >> It's probably not the most efficient solution, but what I want for >> numpy-matrix input x is to get: >> >> mat(diag(x.A.squeeze)) >> >> where diag is the current implementation. This means that if x is not a >> vector ("truly 2d"), then nothing is changed. But if one of the >> dimensions of x is ==1, then it's turned into a 1d-array, and diag works >> as it should. >> >> Does that sound reasonable? > > Not for numpy.diag() in my opinion. However, I won't object to a > numpy.matlib.diag() that knows about matrix objects and behaves the way you want. > That would be fine with me. However, I'd like to point out that after some bug-squashing currently all numpy functions deal with numpy-matrices correctly, afaik. The current behavior of numpy.diag could be viewed as a violation of that principle. (Because if x has shape (n,1), diag(x) returns only the first entry, which is pretty stupid for a diag-function operating on a vector.) I repeat, the matlib solution would be ok for me, but in some sense not fixing numpy.diag could contribute to the feeling of matrices being only second-class citizens. cheers, Sven |
From: Robert K. <rob...@gm...> - 2006-07-25 16:55:42
|
Sven Schreiber wrote:
> Hi,
>
> there was a thread about this before, diag() is currently only
> partly useful if you work with numpy-matrices, because the 1d->2d
> direction doesn't work, as there are no 1d-numpy-matrices. This is
> unfortunate because a numpy-matrix with shape (n,1) or (1,m) should be
> naturally treated as a vector, imho. So it would be nice if this could
> be fixed.
>
> It's probably not the most efficient solution, but what I want for
> numpy-matrix input x is to get:
>
> mat(diag(x.A.squeeze))
>
> where diag is the current implementation. This means that if x is not a
> vector ("truly 2d"), then nothing is changed. But if one of the
> dimensions of x is ==1, then it's turned into a 1d-array, and diag works
> as it should.
>
> Does that sound reasonable?

Not for numpy.diag() in my opinion. However, I won't object to a numpy.matlib.diag() that knows about matrix objects and behaves the way you want.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Mark H. <ma...@mi...> - 2006-07-25 16:52:55
|
Christopher Barker wrote: > Lars Friedrich wrote: > >> I would like to work with some data using python/numpy. The data is >> generated with C. To bring the data from C to python, I would like to >> use ctype. I am using python 2.4.3 on a windows-System. >> >> To accomplish the described task, I have the following plan. Please tell >> me, if this is possible, respectively if there is a better way. >> >> 1) In C, I write a .dll, that has a function >> int foo(PyObject *arg) >> > > I'm a bit confused here. I thought the primary point of ctypes was to be > able to access existing, non-python-aware dlls from Python without > writing C code. > > In this case, you're writing a dll that understands PyObjects (or, I > assume, a particular PyObject -- a numarray). Why not just forget ctypes > and write a regular old extension? > > See Albert's post up thread. Agreed, if he was going to go to the trouble of using the Py C API then he didn't really need ctypes. What the original poster wanted (Im assuming) was something much simpler like int foo(int* x) as Albert suggested, and then use the new numpy ctypes atrributes and ctypes 'argtypes' type mapping facility to pass the data contained in the py object into a simple routine. That then, is the advantage of ctypes. Mark |
From: Sven S. <sve...@gm...> - 2006-07-25 16:45:37
|
Hi,

there was a thread about this before: diag() is currently only partly useful if you work with numpy-matrices, because the 1d->2d direction doesn't work, as there are no 1d-numpy-matrices. This is unfortunate because a numpy-matrix with shape (n,1) or (1,m) should be naturally treated as a vector, imho. So it would be nice if this could be fixed.

It's probably not the most efficient solution, but what I want for numpy-matrix input x is to get:

    mat(diag(x.A.squeeze))

where diag is the current implementation. This means that if x is not a vector ("truly 2d"), then nothing is changed. But if one of the dimensions of x is ==1, then it's turned into a 1d-array, and diag works as it should.

Does that sound reasonable?

Thanks,
Sven |
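Sven's proposal can be prototyped in a few lines. A sketch (`mat_diag` is a hypothetical helper name, not an existing numpy function; it spells `mat`/`x.A` with the `asmatrix`/`asarray` equivalents):

```python
import numpy as np

def mat_diag(x):
    """Hypothetical matrix-aware diag: treat (n,1) or (1,m) input as a vector."""
    a = np.asarray(x)
    if a.ndim == 2 and 1 in a.shape:
        a = a.ravel()            # vector case: build a full diagonal matrix
    return np.asmatrix(np.diag(a))  # otherwise: extract the diagonal, as today

col = np.asmatrix(np.arange(1, 4)).T       # shape (3, 1) column vector
print(mat_diag(col))                        # 3x3 diagonal matrix
print(mat_diag(np.asmatrix(np.eye(3))))     # (1, 3) matrix of diagonal entries
```

The "truly 2d" branch keeps the current extraction behavior, only re-wrapped as a matrix, matching what Robert suggests for a numpy.matlib-level helper.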
From: Christopher B. <Chr...@no...> - 2006-07-25 16:27:22
|
Lars Friedrich wrote: > I would like to work with some data using python/numpy. The data is > generated with C. To bring the data from C to python, I would like to > use ctype. I am using python 2.4.3 on a windows-System. > > To accomplish the described task, I have the following plan. Please tell > me, if this is possible, respectively if there is a better way. > > 1) In C, I write a .dll, that has a function > int foo(PyObject *arg) I'm a bit confused here. I thought the primary point of ctypes was to be able to access existing, non-python-aware dlls from Python without writing C code. In this case, you're writing a dll that understands PyObjects (or, I assume, a particular PyObject -- a numarray). Why not just forget ctypes and write a regular old extension? Or use Pyrex, or Boost, or..... Maybe ctypes has some real advantages I don't get. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Steve L. <lis...@ar...> - 2006-07-25 14:47:26
|
Hi Rudolph, > I just built numpy-1.0b1 on Mac OS X (10.4.7) using gcc 4.0 and it > fails 2 of the regression tests. Here is the output: I actually posted about this a few days ago ... which also almost got lost in the sink, but David Cooke replied to me last night to say that they are aware of the problem. http://projects.scipy.org/scipy/numpy/ticket/183 They suspect that it's a gcc4 problem .. I did try to recompile numpy last night using gcc3.3 but I'm still having the same issues ... so ... that's all I know right now :-) -steve |
From: Rudolph v. d. M. <ru...@sk...> - 2006-07-25 13:58:55
|
I just built numpy-1.0b1 on Mac OS X (10.4.7) using gcc 4.0 and it fails 2 of the regression tests. Here is the output:

===================
ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on
Python 2.4.3 (#1, Apr 3 2006, 18:07:18)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Found 5 tests for numpy.distutils.misc_util
Found 3 tests for numpy.lib.getlimits
Found 30 tests for numpy.core.numerictypes
Found 32 tests for numpy.linalg
Found 13 tests for numpy.core.umath
Found 4 tests for numpy.core.scalarmath
Found 8 tests for numpy.lib.arraysetops
Found 42 tests for numpy.lib.type_check
Found 147 tests for numpy.core.multiarray
Found 3 tests for numpy.dft.helper
Found 36 tests for numpy.core.ma
Found 10 tests for numpy.lib.twodim_base
Found 10 tests for numpy.core.defmatrix
Found 1 tests for numpy.lib.ufunclike
Found 39 tests for numpy.lib.function_base
Found 1 tests for numpy.lib.polynomial
Found 8 tests for numpy.core.records
Found 26 tests for numpy.core.numeric
Found 4 tests for numpy.lib.index_tricks
Found 46 tests for numpy.lib.shape_base
Found 0 tests for __main__
.......F..F..........................................................
======================================================================
FAIL: check_large_types (numpy.core.tests.test_scalarmath.test_power)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 47, in check_large_types
    assert b == 6765201, "error with %r: got %r" % (t,b)
AssertionError: error with <type 'float128scalar'>: got 0.0
======================================================================
FAIL: check_types (numpy.core.tests.test_scalarmath.test_types)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/tests/test_scalarmath.py", line 20, in check_types
    assert a == 1, "error with %r: got %r" % (atype,a)
AssertionError: error with <type 'float128scalar'>: got 0.0
----------------------------------------------------------------------
Ran 468 tests in 1.658s

FAILED (failures=2)
<unittest.TextTestRunner object at 0x10bab90>
===================

On 7/25/06, Damien Miller <dj...@mi...> wrote:
> On Fri, 21 Jul 2006, Travis Oliphant wrote:
> > I've created the 1.0b1 release tag in SVN and will be uploading files
> > shortly to Sourceforge.
>
> FYI numpy-1.0b1 builds fine and passes all its regression tests
> on OpenBSD -current.
>
> -d

--
Rudolph van der Merwe |
From: Paul B. <peb...@gm...> - 2006-07-25 12:27:08
|
On 7/24/06, Travis Oliphant <oli...@ie...> wrote: > Paul Barrett wrote: > > I'm having a problem converting a C extension module that was > > originally written for numarray to use numpy. I using swig to create > > a wrapper flle for the C code. I have added the > > numpy.get_numarray_include() method to my setup.py file and have > > changed the numarray/libnumarray.h to use numpy/libnumarray.h. The > > extension appears to compile fine (with the exception of some warning > > messages). However, when I import the module, I get a segfault. Do I > > need to add anything else to the share library's initialization step > > other than import_libnumarray()? > > > > No, that should be enough. The numarray C-API has only been tested on > a few extension modules. It's very possible some of the calls have > problems. > > It's also possible you have an older version of numpy lying around > somewhere. Do you get any kind of error message on import? No. I'm using a recent SVN version of numpy and I remove the install and build directories before every new build, i.e. I do clean build after each SVN update. No. Just the segfault. I guess the best thing to do is put in print statements and try to locate where it fails. Thanks for the clarification, Travis. -- Paul |
From: Albert S. <fu...@gm...> - 2006-07-25 12:03:38
|
Hello all > -----Original Message----- > From: num...@li... [mailto:numpy- > dis...@li...] On Behalf Of Lars Friedrich > Sent: 25 July 2006 13:55 > To: num...@li... > Subject: Re: [Numpy-discussion] ctypes, numpy-array > > > What's might be happening here is that a.ctypes.data is in fact being > passed > > to your function via ctypes's from_param magic (check the ctypes > tutorial > > for details). > > > > In [10]: x = N.array([]) > > > > In [11]: x.ctypes.data > > Out[11]: c_void_p(15502816) > > > > In [12]: x._as_parameter_ > > Out[12]: 15502816 > > > OK, I did not know about x.ctypes.data... Travis added this quite recently. Somebody (probably me) still has to update the wiki to reflect these changes. > > So what's probably happening here is that you already have a pointer to > the > > array's data which you then cast to a PyArrayObject pointer. > Dereferencing > > myPtr->data is looking for a pointer inside the array's data, which > contains > > zeros. > > I understand. > > > - look at ctypes's PyDLL option if you want to pass around Python > objects > > ??? You can read about PyDLL here: http://docs.python.org/dev/lib/ctypes-loading-shared-libraries.html I think PyDLL might turn out to be an interesting alternative to traditional extension modules. But for wrapping C code, I think writing functions that operate on pointers to ints and floats and whatnot works nicely. > > - Write your function as: > > > > int foo(int* x); > > > > Then do something like this: > > > > x = N.array([...], dtype=N.intc) > > mydll.foo.restype = None Slight typo on my part. For this example it should be: mydll.foo.restype = c_int > > mydll.foo.argtypes = [POINTER(c_int)] > > mydll.foo(x.ctypes.data) > > I did that, and it worked fine for me. Thank you very much! This is > really great. Cool. Enjoy! Regards, Albert |
From: Lars F. <lfr...@im...> - 2006-07-25 11:55:17
|
> What's might be happening here is that a.ctypes.data is in fact being passed > to your function via ctypes's from_param magic (check the ctypes tutorial > for details). > > In [10]: x = N.array([]) > > In [11]: x.ctypes.data > Out[11]: c_void_p(15502816) > > In [12]: x._as_parameter_ > Out[12]: 15502816 > OK, I did not know about x.ctypes.data... > So what's probably happening here is that you already have a pointer to the > array's data which you then cast to a PyArrayObject pointer. Dereferencing > myPtr->data is looking for a pointer inside the array's data, which contains > zeros. I understand. > - look at ctypes's PyDLL option if you want to pass around Python objects ??? > - Write your function as: > > int foo(int* x); > > Then do something like this: > > x = N.array([...], dtype=N.intc) > mydll.foo.restype = None > mydll.foo.argtypes = [POINTER(c_int)] > mydll.foo(x.ctypes.data) I did that, and it worked fine for me. Thank you very much! This is really great. Lars |
From: Albert S. <fu...@gm...> - 2006-07-25 11:09:34
|
Hey Lars

> -----Original Message-----
> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of Lars Friedrich
> Sent: 25 July 2006 12:50
> To: num...@li...
> Subject: [Numpy-discussion] ctypes, numpy-array
>
> Hello,
>
> I would like to work with some data using python/numpy. The data is
> generated with C. To bring the data from C to python, I would like to
> use ctypes. I am using python 2.4.3 on a windows-System.
>
> To accomplish the described task, I have the following plan. Please tell
> me, if this is possible, respectively if there is a better way.
>
> 1) In C, I write a .dll, that has a function
>    int foo(PyObject *arg)
>
> 2) In python, I generate a numpy-array with the appropriate size. e.g.
>    a = zeros((3,3), dtype=int)
>
> 3) From python, I call my .dll-function with a as an argument:
>    windll.mydll.foo(a)

What might be happening here is that a.ctypes.data is in fact being passed to your function via ctypes's from_param magic (check the ctypes tutorial for details).

In [10]: x = N.array([])

In [11]: x.ctypes.data
Out[11]: c_void_p(15502816)

In [12]: x._as_parameter_
Out[12]: 15502816

> 4) In the foo-function in the C-.dll I cast the pointer and access the
>    data-field.
>    PyArrayObject *myPtr = (PyArrayObject*) arg;
>    myPtr->data[0] = 1;
>    return 0;
>
> However, when I do this, I get an "AccessViolationError writing
> 0x000000000"

So what's probably happening here is that you already have a pointer to the array's data, which you then cast to a PyArrayObject pointer. Dereferencing myPtr->data is looking for a pointer inside the array's data, which contains zeros.

Here are a few things to try:

- Look at ctypes's PyDLL option if you want to pass around Python objects.

- Write your function as:

    int foo(int* x);

  Then do something like this:

    x = N.array([...], dtype=N.intc)
    mydll.foo.restype = None
    mydll.foo.argtypes = [POINTER(c_int)]
    mydll.foo(x.ctypes.data)

  This might also work:

    x = N.array([...], dtype=N.intc)
    mydll.foo.restype = None
    mydll.foo(x)

Cheers,
Albert |
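Albert's argtypes pattern can be exercised without building a DLL at all: below, libc's `memcpy` stands in for the hypothetical user-written `foo(int*)`, and the array's raw data pointer is handed over exactly as he describes. This is a POSIX-only sketch (`CDLL(None)` opens the already-loaded C runtime; on Windows you would load your own DLL with `windll` as in Lars's original plan):

```python
import ctypes

import numpy as np

# Declare the C function's signature, then pass numpy data pointers to it.
libc = ctypes.CDLL(None)  # the process's own C runtime (POSIX)
libc.memcpy.restype = ctypes.c_void_p
libc.memcpy.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_size_t]

src = np.arange(5, dtype=np.intc)
dst = np.zeros(5, dtype=np.intc)
libc.memcpy(dst.ctypes.data, src.ctypes.data, src.nbytes)

print(dst)  # the C side wrote into dst's buffer: [0 1 2 3 4]
```

The key point from the thread: the C side receives a plain pointer to the array's *data*, never a `PyArrayObject*`, so there is nothing to cast and no Python C API involved.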
From: Lars F. <lfr...@im...> - 2006-07-25 10:50:06
|
Hello,

I would like to work with some data using python/numpy. The data is generated with C. To bring the data from C to python, I would like to use ctypes. I am using python 2.4.3 on a windows-System.

To accomplish the described task, I have the following plan. Please tell me if this is possible, or if there is a better way.

1) In C, I write a .dll that has a function
   int foo(PyObject *arg)

2) In python, I generate a numpy-array with the appropriate size, e.g.
   a = zeros((3,3), dtype=int)

3) From python, I call my .dll-function with a as an argument:
   windll.mydll.foo(a)

4) In the foo-function in the C-.dll I cast the pointer and access the data-field:
   PyArrayObject *myPtr = (PyArrayObject*) arg;
   myPtr->data[0] = 1;
   return 0;

However, when I do this, I get an "AccessViolationError writing 0x000000000". What can I do about it?

Thank you for every comment

Lars

--
Dipl.-Ing. Lars Friedrich
Optical Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg, Germany
phone: +49-761-203-7531   fax: +49-761-203-7537   room: 01 088
email: lfr...@im... |
From: James G. <jg...@ca...> - 2006-07-25 09:53:32
|
Robert Kern wrote: > Sebastian Haase wrote: >> Hi, >> Essentially I'm looking for the equivalent of what was in numarray: >> from numarray import random_array >> random_array.poisson(arr) >> >> That is: if for example arr is a 256x256 array of positive integers, then this >> returns a new array of random numbers than are drawn according to the poisson >> statistics where arr's value at coordinate y,x determines the mean of the >> poisson distribution used to generate a new value for y,x. > > I'm afraid that at this point in time, the distributions only accept scalar > values for the parameters. I've thought about reimplementing the distribution > functions as ufuncs, but that's a hefty chunk of work that won't happen for 1.0. FWIW, I've had enquires about the availability, or not, of this functionality in NumPy as well, so when someone does have time to work on it, it will be very much appreciated. -- "You see stars that clear have been dead for years But the idea just lives on..." -- Bright Eyes |
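For the record, later numpy releases did grow exactly this: the distribution methods broadcast over array-valued parameters, so an array of means yields one draw per element. A sketch with the modern Generator API (which postdates this thread):

```python
import numpy as np

# Array-valued lam: each output element is drawn from a Poisson
# distribution whose mean is the corresponding element of lam.
rng = np.random.default_rng(12345)
lam = np.array([[1.0, 5.0], [10.0, 50.0]])
draws = rng.poisson(lam)

print(draws.shape)  # one draw per per-element mean
```

Applied to the original question, `lam` would be the 256x256 image of per-pixel means and `draws` the simulated noisy image.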
From: Karol L. <kar...@kn...> - 2006-07-25 06:59:41
|
On Tuesday 25 July 2006 02:36, Mike Ressler wrote:
> I'm trying to work with memmaps on very large files, i.e. > 2 GB, up to 10
> GB. The files are data cubes of images (my largest is
> 1290(x)x1024(y)x2011(z)) and my immediate task is to strip the data from
> 32-bits down to 16, and to rearrange some of the data on a per-xy-plane
> basis. I'm running this on a Fedora Core 5 64-bit system, with
> python-2.5b2 (that I believe I compiled in 64-bit mode) and numpy-1.0b1.
> The disk has 324 GB free space.
>
> The log from a minimal case is as follows:
>
> ressler > python2.5
> Python 2.5b2 (r25b2:50512, Jul 18 2006, 12:58:29)
> [GCC 4.1.1 20060525 (Red Hat 4.1.1-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy as np
> >>> data=np.memmap('temp_file',mode='w+',shape=(2011,1280,1032),dtype='h')
> size = 2656450560
> bytes = 5312901120
> len(mm) = 5312901120
> (2011, 1280, 1032) h 0 0
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/local/lib/python2.5/site-packages/numpy/core/memmap.py", line 75, in __new__
>     offset=offset, order=order)
> TypeError: buffer is too small for requested array
>
> If I have a small number of frames (z=800 rather than 2011), this all
> works fine. I've added a few lines to memmap.py to print some diagnostic
> information - the error occurs on line 71 in the original memmap.py file,
> not 75. The "size =" and "bytes =" lines show that memmap.py is
> calculating the correct size for the buffer, and the len(mm) shows that
> the python mmap.mmap call on line 67 is returning a buffer of the correct
> size. The "(2011, 1280, 1032) h 0 0" bit is from a print statement that
> was left in the source file by the authors, and indicates what the
> following "self = ndarray.__new__" call is trying to do. However, it is
> the ndarray.__new__ call that is breaking down, and I don't really have
> enough skill to continue chasing it down. I took a quick look at the C
> source, but I couldn't figure out where the ndarray.__new__ is actually
> defined.
>
> Any suggestions to help me past this? Thanks.
>
> Mike

I know Travis has answered in a different thread. Let me just add where the actual error is raised - maybe it will be of some use. It is around line 5490 of arrayobject.c (procedure array_new):

    else { /* buffer given -- use it */
        if (dims.len == 1 && dims.ptr[0] == -1) {
            dims.ptr[0] = (buffer.len-(intp)offset) / itemsize;
        }
        else if ((strides.ptr == NULL) && \
                 buffer.len < itemsize* \
                 PyArray_MultiplyList(dims.ptr, dims.len)) {
            PyErr_SetString(PyExc_TypeError,
                            "buffer is too small for " \
                            "requested array");
            goto fail;
        }

So it does look like an overflow to me.

Karol

--
written by Karol Langner
wto lip 25 08:56:42 CEST 2006 |
From: Karol L. <kar...@kn...> - 2006-07-25 06:56:37
|
On Tuesday 25 July 2006 01:42, Bill Baxter wrote:
> > > And I think byteorder matters when comparing dtypes:
> > > >>> numpy.dtype('>f4') == numpy.dtype('<f4')
> > > False
>
> Ohhhhh -- that '<' part is indicating *byte order* ?!
> I thought it was odd that numpy could only tell me the type was "less
> than f4", which I assumed must be shorthand for "less than or equal to
> f4". Makes much more sense now!
>
> --bb

Yep! And there are then four possibilities:

'>' - big-endian
'<' - little-endian
'|' - not applicable
'=' - native

Karol

--
written by Karol Langner
wto lip 25 08:54:51 CEST 2006 |
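Those four codes surface as the `byteorder` attribute of a dtype. A small demonstration (note that numpy reports the machine-native order as '=' even when you spell it '<' or '>', so the printed characters depend on the platform's endianness):

```python
import numpy as np

# The byteorder character of a dtype: '>' big-endian, '<' little-endian,
# '=' native, '|' not applicable (e.g. single-byte and string types).
for code in ('>f4', '<f4', '=f4', '|S5'):
    print(code, repr(np.dtype(code).byteorder))

# Byte order participates in dtype equality...
print(np.dtype('>f4') == np.dtype('<f4'))   # False
# ...and can be flipped explicitly.
print(np.dtype('>f4').newbyteorder('<') == np.dtype('<f4'))   # True
```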