From: Adam C. <ad...@ep...> - 2006-07-13 10:19:21

> Lapack_lite is in the numpy sources. I think it's the fallback if you
> don't have a LAPACK.

Thanks Bill, that's what I thought, but I don't understand why it can't find it.

> You can check the config that got built with:
> numpy.show_config()

OK, so it appears that NumPy is imported then, albeit with the warning. The result of the above line is included below. So maybe the problem lies with SciPy? Perhaps I should take my problem to scipy-users. Unless anyone has any further insight?

Cheers,
Adam

    >>> numpy.show_config()
    blas_info:
        libraries = ['blas']
        library_dirs = ['/usr/lib']
        language = f77
    lapack_info:
        libraries = ['lapack']
        library_dirs = ['/usr/local/lib']
        language = f77
    atlas_threads_info:
      NOT AVAILABLE
    blas_opt_info:
        libraries = ['blas']
        library_dirs = ['/usr/lib']
        language = f77
        define_macros = [('NO_ATLAS_INFO', 1)]
    atlas_blas_threads_info:
      NOT AVAILABLE
    lapack_opt_info:
        libraries = ['lapack', 'blas']
        library_dirs = ['/usr/local/lib', '/usr/lib']
        language = f77
        define_macros = [('NO_ATLAS_INFO', 1)]
    atlas_info:
      NOT AVAILABLE
    lapack_mkl_info:
      NOT AVAILABLE
    blas_mkl_info:
      NOT AVAILABLE
    atlas_blas_info:
      NOT AVAILABLE
    mkl_info:
      NOT AVAILABLE
From: Bill B. <wb...@gm...> - 2006-07-13 10:08:22

On 7/13/06, Adam Carter <ad...@ep...> wrote:
> Hi all,
>
> Do I need lapack_lite as well? Where can I get it? If I need this other
> LAPACK, how can I ensure that my code uses the optimised version of LAPACK
> already on this system I'm using?

Lapack_lite is in the numpy sources. I think it's the fallback if you don't have a LAPACK. You can check the config that got built with:

    numpy.show_config()

But that doesn't help much if you can't import numpy.

--bb
From: Adam C. <ad...@ep...> - 2006-07-13 10:02:21

Hi all,

I'm new to this list so apologies if this is a solved problem, but I haven't been able to find anything in the archives. I've just installed Python 2.4.3 and Numpy-0.9.8 on AIX, and the configure/make/install of Python and the install of Numpy _appeared_ to go smoothly. However, now when I run python and type

    >>> import numpy

I get

    import linalg -> failed: No module named lapack_lite

I'm not sure if this is just a warning, but I suspect not, as when I later try import scipy, it results in a traceback:

    >>> import scipy
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/hpcx/home/z001/z001/adam/packages/Python-2.4.3/lib/python2.4/site-packages/scipy/__init__.py", line 34, in ?
        del linalg
    NameError: name 'linalg' is not defined

Can anyone tell me what I'm doing wrong? I've not explicitly installed any LAPACK library, but I was hoping to use an existing (vendor-provided) LAPACK. The environment variable $LAPACK was set to point to this library at compile and run-time. Do I need lapack_lite as well? Where can I get it? If I need this other LAPACK, how can I ensure that my code uses the optimised version of LAPACK already on this system I'm using?

Any advice very welcome. Thanks in advance,

Adam

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Adam Carter (Applications Consultant)    |epcc|
e: ad...@ep...   w: http://www.epcc.ed.ac.uk/~adam
r: 3405 JCMB        t: +44 131 650 6009
From: Pau G. <pau...@gm...> - 2006-07-13 09:52:36

On 7/11/06, Pau Gargallo <pau...@gm...> wrote:
> On 7/11/06, Travis Oliphant <oli...@ie...> wrote:
>> Pau Gargallo wrote:
>>> hi,
>>>
>>> looking at the upcasting table at
>>> http://www.scipy.org/Tentative_NumPy_Tutorial#head-4c1d53fe504adc97baf27b65513b4b97586a4fc5
>>> I saw that int's are sometimes casted to uint's.
>>>
>>> In [3]: a = array([3],int16)
>>> In [5]: b = array([4],uint32)
>>> In [7]: a+b
>>> Out[7]: array([7], dtype=uint32)
>>>
>>> is that intended?
>>
>> It's a bug. The result should be int64. I've fixed it in SVN.
>
> Thanks!!

hi Travis,

now uint64+int gives float64. Obtaining a float from ints surprises me. But anyway, I don't know if there is a better choice.

pau
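The two promotions discussed in this exchange can be checked directly. A quick sketch using the modern `numpy` namespace (`np.promote_types` post-dates this thread):

```python
import numpy as np

# int16 + uint32: no 32-bit type holds both ranges, so the result is
# promoted to the next-larger signed type, int64 (the fix Travis made).
a = np.array([3], np.int16)
b = np.array([4], np.uint32)
assert (a + b).dtype == np.int64

# uint64 is the case Pau finds surprising: there is no int128 to
# promote to, so mixing uint64 with a signed integer yields float64.
print(np.promote_types(np.uint64, np.int64))
```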
From: Travis O. <oli...@ie...> - 2006-07-13 09:07:30

Pau Gargallo wrote:
> On 7/12/06, Victoria G. Laidler <la...@st...> wrote:
>> Hi,
>>
>> Pardon me if I'm reprising an earlier discussion, as I'm new to the list.
>>
>> But is there a reason that this obscure syntax
>>
>> A[arange(2)[:,newaxis],indexes]
>> A[arange(A.shape[0])[:,newaxis],indexes]
>>
>> is preferable to the intuitively reasonable thing that the Original
>> Poster did?
>>
>> A[indexes]
>
> i don't think so. The obscure syntax is just a way you can solve the
> problem with the current state of NumPy. Of course, a clearer syntax
> would be better, but for that, something in NumPy would have to change.
>
> This other syntax is longer but clearer:
>
>     ind = indices(A.shape)
>     ind[ax] = A.argsort(axis=ax)
>     A[ind]
>
> Which brings me to the question: would it be reasonable if argsort
> returned the complete tuple of indices, so that A[A.argsort(ax)]
> would work?

I think this is reasonable. We would need a way for the argsort() function to work as it does now. I'm not sure if anybody actually uses the multidimensional behavior of argsort now, but it's been in Numeric for a long time.

-Travis
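Pau's `indices`-based recipe can be sketched end to end like this (modern numpy requires the index to be a tuple, hence the `tuple(ind)`):

```python
import numpy as np

A = np.array([[3, 1, 2],
              [9, 7, 8]])
ax = 1

# Build the full index grid, then replace the sorted axis with
# argsort's result; fancy indexing then sorts A along that axis.
ind = np.indices(A.shape)
ind[ax] = A.argsort(axis=ax)
sorted_A = A[tuple(ind)]

assert (sorted_A == np.sort(A, axis=ax)).all()
```

Much later, NumPy grew `np.take_along_axis(A, A.argsort(axis=ax), axis=ax)`, which is essentially the convenience this thread is asking for.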
From: Pau G. <pau...@gm...> - 2006-07-13 08:37:06

On 7/12/06, Victoria G. Laidler <la...@st...> wrote:
> Hi,
>
> Pardon me if I'm reprising an earlier discussion, as I'm new to the list.
>
> But is there a reason that this obscure syntax
>
> A[arange(2)[:,newaxis],indexes]
> A[arange(A.shape[0])[:,newaxis],indexes]
>
> is preferable to the intuitively reasonable thing that the Original
> Poster did?
>
> A[indexes]

i don't think so. The obscure syntax is just a way you can solve the problem with the current state of NumPy. Of course, a clearer syntax would be better, but for that, something in NumPy would have to change.

This other syntax is longer but clearer:

    ind = indices(A.shape)
    ind[ax] = A.argsort(axis=ax)
    A[ind]

Which brings me to the question: would it be reasonable if argsort returned the complete tuple of indices, so that A[A.argsort(ax)] would work?

pau
From: Francesc A. <fa...@ca...> - 2006-07-13 07:41:26

On Thursday 13 July 2006 01:07, Sebastian Żurek wrote:
> Hi All,
>
> Has anyone worked with the RandomArray module? I wonder,
> if it's OK to use its pseudo-random number generators, or
> maybe I shall find more trusted methods (i.e. ran1 from Numerical Recipes)?

I'm not an expert, but my understanding is that the 'random' module that comes with NumPy is made with state-of-the-art random generators, so it should be fine for most purposes. However, the experts, and in particular Robert Kern (I think he is the implementor), may want to clarify this point.

> Please, give some comments. Thanks.

Done ;-)

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"
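For reference, the generator behind NumPy's random module is the Mersenne Twister (MT19937). A minimal usage sketch with explicit seeding:

```python
import numpy as np

# numpy.random is built on the Mersenne Twister (MT19937), a
# well-studied generator with period 2**19937 - 1 -- far stronger
# than the short-period ran1 routine from Numerical Recipes.
rng = np.random.RandomState(12345)   # seed for reproducible runs
sample = rng.uniform(0.0, 1.0, size=5)
assert sample.shape == (5,)
assert ((sample >= 0.0) & (sample < 1.0)).all()
```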
From: <se...@pi...> - 2006-07-13 07:30:11

Hi All,

Has anyone worked with the RandomArray module? I wonder if it's OK to use its pseudo-random number generators, or maybe I should find more trusted methods (i.e. ran1 from Numerical Recipes)?

Please, give some comments. Thanks.

Sebastian
From: Bill B. <wb...@gm...> - 2006-07-13 06:16:35

Terrific. Nils sent me the answer:

> scipy.show_config() or numpy.show_config()
> will give you some useful information.

And here it goes straight to the wiki: http://www.scipy.org/Installing_SciPy/Windows

--bb
From: Bill B. <wb...@gm...> - 2006-07-13 05:24:16

How can you probe numpy for info about what sort of BLAS/LAPACK you have, or other build configuration info? Searching the ml archives I turned up this one hint from Travis, which can be embodied in code thusly:

    import numpy
    def have_blas():
        return id(numpy.dot) != id(numpy.core.multiarray.dot)

A) Is that function correct? And B) is that the most you can find out?

--bb
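As the follow-ups note, `numpy.show_config()` is the intended probe. A hedged sketch (the attribute layout under `numpy.__config__` has varied across versions, so treat the attribute lookup as optional):

```python
import numpy as np

# show_config() prints the BLAS/LAPACK configuration the build found.
np.show_config()

# Older numpy versions also exposed the same data as attributes such
# as numpy.__config__.blas_opt_info; probe defensively, since the
# layout has changed over time.
info = getattr(np.__config__, "blas_opt_info", None)
print(info)
```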
From: Sebastian H. <ha...@ms...> - 2006-07-13 04:14:47

Hi,

The latest cygwin gcc seems to be version 3.4.4. Are (relatively) new SVN builds (both numpy and scipy) available somewhere for download? (I would like to experience the new "arr.T" feature ;-) )

Thanks,
Sebastian Haase

Bill Baxter wrote:
> Thanks, that seems to have done the trick!
> I've got a SVN Scipy now!
>
> I updated the building scipy wiki page with this piece of advice.
>
> --bb
>
> On 7/13/06, John Carter <jn...@ec...> wrote:
>> I had problems building SciPy and NumPy under Windows. They went away
>> when I stopped using the stable version of gcc and used 3.4.5
>>
>> I think the problem was related to differences in cygwin and mingw32.
>>
>> SciPy built and everything I've needed works but the self test fails.
>>
>> John
>>
>> Dr. John N. Carter   jn...@ec...
>> ISIS                 http://www.ecs.soton.ac.uk/~jnc/
>>
>> -------------------------------------------------------------------------
>> Using Tomcat but need to do more? Need to support web services, security?
>> Get stuff done quickly with pre-integrated technology to make your job easier
>> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
>> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
>> _______________________________________________
>> Numpy-discussion mailing list
>> Num...@li...
>> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
>
> --
> William V. Baxter III
> OLM Digital
> Kono Dens Building Rm 302
> 1-8-8 Wakabayashi Setagaya-ku
> Tokyo, Japan 154-0023
> +81 (3) 3422-3380
From: Bill B. <wb...@gm...> - 2006-07-13 03:12:41

Thanks, that seems to have done the trick! I've got a SVN Scipy now!

I updated the building scipy wiki page with this piece of advice.

--bb

On 7/13/06, John Carter <jn...@ec...> wrote:
> I had problems building SciPy and NumPy under Windows. They went away
> when I stopped using the stable version of gcc and used 3.4.5
>
> I think the problem was related to differences in cygwin and mingw32.
>
> SciPy built and everything I've needed works but the self test fails.
>
> John
>
> Dr. John N. Carter   jn...@ec...
> ISIS                 http://www.ecs.soton.ac.uk/~jnc/

--
William V. Baxter III
OLM Digital
Kono Dens Building Rm 302
1-8-8 Wakabayashi Setagaya-ku
Tokyo, Japan 154-0023
+81 (3) 3422-3380
From: David M. C. <co...@ph...> - 2006-07-13 02:48:42

On Thu, 13 Jul 2006 11:29:46 +0900, "Bill Baxter" <wb...@gm...> wrote:
> In numpy/distutils/command/config.py, 'cookedm' added a get_output()
> command on June 9. This get_output function uses os.WEXITSTATUS and
> various other os.W* functions.
>
> These do not exist in Python on Windows.
>
> Is there some other way to achieve the same thing without those?
>
> For now, just commenting out those error checking lines seems to do the
> trick.

Shoot, you're right. I didn't see that in the docs for the os module. I've fixed it in svn (I think -- I don't have a Windows box to test on).

--
|>|\/|<
David M. Cooke   http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
From: Bill B. <wb...@gm...> - 2006-07-13 02:29:49

In numpy/distutils/command/config.py, 'cookedm' added a get_output() command on June 9. This get_output function uses os.WEXITSTATUS and various other os.W* functions.

These do not exist in Python on Windows.

Is there some other way to achieve the same thing without those?

For now, just commenting out those error-checking lines seems to do the trick.

--bb
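A portable guard for the `os.W*` calls might look like the following. This is only a sketch of the idea, not the actual distutils fix:

```python
import os

def decode_wait_status(status):
    """Decode a wait()-style status portably.

    os.WIFEXITED/os.WEXITSTATUS exist only on POSIX; on Windows the
    value returned by os.system()/os.popen().close() is already a
    plain exit code, so pass it through unchanged.
    """
    if hasattr(os, "WIFEXITED"):
        if os.WIFEXITED(status):
            return os.WEXITSTATUS(status)
        return -1  # killed by a signal or stopped
    return status
```

On POSIX, a raw status of 256 decodes to exit code 1; on Windows the same function simply returns its argument.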
From: Robert K. <rob...@gm...> - 2006-07-12 23:16:55

Sasha wrote:
> Let me repeat my suggestion that was lost in this long thread:
>
> Add rands(shape, dtype=float, min=default_min(dtype), max=default_max(dtype))
> to the top level. Suitable defaults can be discussed. A more flexible
> variation could be rands(shape, dtype=float, algorithm=default_algorithm(dtype)),
> but that would probably be overkill.
>
> I think this will help teaching: rands is similar to zeros and ones,
> but with a few bells and whistles to be covered in the graduate course.

Well, write it up, stick it as a method in RandomState, and when we can play with it, we can discuss whether it should go into numpy.*.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
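Following Robert's suggestion, a first cut at Sasha's `rands` could be sketched on top of RandomState. The name and defaults come straight from the proposal and are hypothetical, not a real NumPy API:

```python
import numpy as np

def rands(shape, dtype=float, rs=None):
    """Hypothetical random-array constructor in the spirit of
    zeros()/ones(), as proposed in this thread."""
    rs = rs if rs is not None else np.random.RandomState()
    dtype = np.dtype(dtype)
    if dtype.kind == "b":                       # the rands(n, bool) case
        return rs.randint(0, 2, size=shape).astype(bool)
    if dtype.kind in "iu":                      # span the dtype's range
        info = np.iinfo(dtype)
        return rs.randint(info.min, info.max, size=shape, dtype=dtype)
    return rs.random_sample(shape).astype(dtype)  # floats in [0, 1)

rands((2, 3))      # float64 array, like zeros((2, 3))
rands(4, bool)     # the boolean variant Sasha mentions
```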
From: Travis O. <oli...@ie...> - 2006-07-12 22:42:59

Attached is a patch that makes PIL Image objects both export and consume the array interface.

-Travis
From: Mark H. <ma...@mi...> - 2006-07-12 21:46:45

Albert Strasheim wrote:
> Hello all
>
> Various people wrote:
>>> I'm curious though: the several projects recently using ctypes
>>> and numpy to wrap libraries (Pygame SDL, OpenGL, svm) must have come
>>> across the issue of creating a numpy array from a ctypes
>>> pointer. I'll have to look further.
>>
>> It depends on whether or not the library creates memory or not. Not
>> every library manages memory (some expect you to pass in memory already
>> owned --- this is easy to do already with ctypes and numpy).
>
> I see two main models for sending back data from C land.
>
> One is to have the C function allocate memory and return the pointer
> to the caller. This raises the question of who owns the memory. It also
> leads to some *very* interesting crashes on Windows when you start
> freeing memory allocated in code linked against one runtime in code
> linked against another runtime. I think, if you have the option, avoid
> this kind of thing at all costs.
>
> The other model has functions that take their normal arguments, and
> pointers to buffers where they can store their results. Typically the
> caller would also specify the size of the buffer. If the buffer is
> large enough for the function to do its work, it uses it. If not, it
> returns an error code. This way, one entity is always in charge of the
> memory.
>
> If you're writing C code that you plan to use with ctypes and NumPy,
> I think the second model will lead to more robust code and less time
> spent debugging crashes and memory leaks.

Yes, I agree: C libs written with your model #2 would make life much easier for creating robust wrappers. Many of the ones that come to my mind, and that deal with array-like data and are thus relevant, are model #1 types. I think the reason goes something like this: Joe Foo, developer, and friends write an entire library API in C. To make the API complete and easy to use, Joe includes getting-started 'make the data' C functions. Examples:

    SDL:       SDL_Surface* SDL_CreateRGBSurface(params)
    OpenCV:    IplImage* cvCreateImage(params), ...
    libdc1394: uint_t* capture buffer   // created by the OS 1394 driver

If Joe doesn't do this, then the first thing Joe's users must do to create the simplest application is call malloc, sizeof(figure out what element type...), blah, blah -- i.e., do lots of memory management having zilch to do with the problem at hand. Further, Joe, being conscientious, includes boatloads of examples, all using the 'make the data' calls.

> libsvm (which I'm wrapping with ctypes) mostly follows the second model,
> except when training new models, in which case it returns a pointer to a
> newly allocated structure. To deal with this, I keep a pointer to this
> allocated memory in a Python object that has the following function:
>
>     def __del__(self):
>         libsvm.svm_destroy_model(self.model)

Nice.

> By providing this destroy function, libsvm avoids the problem of mixing
> allocation and deallocation across runtimes.

-Mark
From: Victoria G. L. <la...@st...> - 2006-07-12 21:42:19

Hi,

Pardon me if I'm reprising an earlier discussion, as I'm new to the list.

But is there a reason that this obscure syntax

    A[arange(2)[:,newaxis],indexes]
    A[arange(A.shape[0])[:,newaxis],indexes]

is preferable to the intuitively reasonable thing that the Original Poster did?

    A[indexes]

Doesn't it violate the pythonic "principle of least surprise" for the simpler syntax not to work? As a casual numpy user, I can't imagine being able to remember the obscure syntax in order to use the result of an argsort.

curiously,
Vicki Laidler (numarray user who came from IDL)

Emanuele Olivetti wrote:
> Wow. I have to study much more indexing. It works pretty well.
>
> Just to help indexing newbies in using your advice:
> A[arange(A.shape[0])[:,newaxis],indexes]
>
> Thanks a lot!
>
> Emanuele
>
> Pau Gargallo wrote:
>> here goes a first try:
>>
>> A[arange(2)[:,newaxis],indexes]
From: Sasha <nd...@ma...> - 2006-07-12 21:37:37

On 7/12/06, Alan G Isaac <ai...@am...> wrote:
> On Wed, 12 Jul 2006, Sasha apparently wrote:

[snip]

>> Add rands(shape, dtype=float, min=default_min(dtype), max=default_max(dtype))
>> to the top level. Suitable defaults can be discussed. A more flexible
>> variation could be rands(shape, dtype=float, algorithm=default_algorithm(dtype)),
>> but that would probably be overkill.
>
> My only reason for not exploring this is that recent
> decisions seem to preclude it. Specifically, nothing
> from the random module remains in the numpy namespace,

To the contrary, the recent changes cleared the way for a better random number generator in the numpy namespace. With my proposal, I would predict that rands(n) and rands((n,m)) will be used a lot in tests and examples, while more sophisticated functionality will be easily discoverable via help(rands). I can also see some benefit in having rands(n, bool), which is not available at the moment.
From: Alan G I. <ai...@am...> - 2006-07-12 21:25:47

On Wed, 12 Jul 2006, Sasha apparently wrote:
> Let me repeat my suggestion that was lost in this long thread:
>
> Add rands(shape, dtype=float, min=default_min(dtype), max=default_max(dtype))
> to the top level. Suitable defaults can be discussed. A more flexible
> variation could be rands(shape, dtype=float, algorithm=default_algorithm(dtype)),
> but that would probably be overkill.

My only reason for not exploring this is that recent decisions seem to preclude it. Specifically, nothing from the random module remains in the numpy namespace, I think.

Cheers,
Alan Isaac
From: Prabhu R. <pr...@ae...> - 2006-07-12 21:04:11

I sent this reply on the 9th, but the message seems to have never made it to scipy-dev and is still pending moderator approval on numpy-discussion. I had attached a patch for weave that made the email larger than 40 KB. I don't have checkin privileges to scipy/numpy, so if someone would be kind enough to apply the patch for me, let me know and I'll send you the patch off-list.

>>>>> "Travis" == Travis Oliphant <oli...@ee...> writes:

[...]

    Travis> I'm not opposed to putting a *short* prefix in front of
    Travis> everything (the Int32, Float64, stuff came from numarray
    Travis> which now has its own backward-compatible header where
    Travis> it could be placed now anyway). Perhaps npy_ would be a
    Travis> suitable prefix.

    Travis> That way we could get rid of the cruft entirely.

    Travis> I suppose we could also provide the noprefix.h header that
    Travis> defines the old un-prefixed cases for "backwards NumPy
    Travis> compatibility".

Travis, you rock! Thanks for fixing this in SVN. All the problems I was having earlier are gone. Here is a patch for weave's SWIG support that should be applied to scipy. The SWIG runtime layout changed in version 1.3.28 and this broke weave support. This should be fixed soon (hopefully!) in SWIG CVS, and the following patch will ensure that weave works against it.

Many thanks once again for the speedy fixes!

cheers,
prabhu
From: Albert S. <fu...@gm...> - 2006-07-12 20:23:30

Hello all

Various people wrote:
>> I'm curious though: the several projects recently using ctypes
>> and numpy to wrap libraries (Pygame SDL, OpenGL, svm) must have come
>> across the issue of creating a numpy array from a ctypes
>> pointer. I'll have to look further.
>
> It depends on whether or not the library creates memory or not. Not
> every library manages memory (some expect you to pass in memory already
> owned --- this is easy to do already with ctypes and numpy).

I see two main models for sending back data from C land.

One is to have the C function allocate memory and return the pointer to the caller. This raises the question of who owns the memory. It also leads to some *very* interesting crashes on Windows when you start freeing memory allocated in code linked against one runtime in code linked against another runtime. I think, if you have the option, avoid this kind of thing at all costs.

The other model has functions that take their normal arguments, and pointers to buffers where they can store their results. Typically the caller would also specify the size of the buffer. If the buffer is large enough for the function to do its work, it uses it. If not, it returns an error code. This way, one entity is always in charge of the memory.

If you're writing C code that you plan to use with ctypes and NumPy, I think the second model will lead to more robust code and less time spent debugging crashes and memory leaks.

libsvm (which I'm wrapping with ctypes) mostly follows the second model, except when training new models, in which case it returns a pointer to a newly allocated structure. To deal with this, I keep a pointer to this allocated memory in a Python object that has the following function:

    def __del__(self):
        libsvm.svm_destroy_model(self.model)

By providing this destroy function, libsvm avoids the problem of mixing allocation and deallocation across runtimes.

Regards,

Albert
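Albert's second model is the one much of libc itself follows; getcwd() is a handy real example to watch through ctypes (a POSIX-only sketch):

```python
import ctypes
import os

# getcwd(buf, size) follows the second model: the caller allocates the
# buffer and states its size; on a too-small buffer the call fails
# (returns NULL) instead of allocating behind the caller's back.
libc = ctypes.CDLL(None)  # POSIX: handle to the running process's libc
libc.getcwd.restype = ctypes.c_char_p
libc.getcwd.argtypes = [ctypes.c_char_p, ctypes.c_size_t]

buf = ctypes.create_string_buffer(4096)
result = libc.getcwd(buf, ctypes.sizeof(buf))
assert result == os.getcwd().encode()

tiny = ctypes.create_string_buffer(1)
assert libc.getcwd(tiny, 1) is None   # error code, not a hidden malloc
```

One entity (the Python side, via `create_string_buffer`) owns the memory for its whole lifetime, so no cross-runtime free can occur.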
From: Travis O. <oli...@ie...> - 2006-07-12 20:12:08

Mark Heslep wrote:
> Travis Oliphant wrote:
>> The problem here is that from Python NumPy has no way to create an
>> ndarray from a pointer. Doing this creates a situation where it is
>> unclear who owns the memory. It is probably best to wrap the pointer
>> into some kind of object exposing the buffer protocol and then pass
>> that to frombuffer (or ndarray.__new__).
>
> Yep, that's where I just ended up:
>
>     from ctypes import *
>     import numpy as N
>     ...
>     func = pythonapi.PyBuffer_FromMemory
>     func.restype = py_object
>     buffer = func(im.imageData, size_of_the_data)   # imageData is a ctypes.LP_c_ubyte object
>     return N.frombuffer(buffer, N.uint8)
>
> Works!

Nice job!

> I'm curious though: the several projects recently using ctypes
> and numpy to wrap libraries (Pygame SDL, OpenGL, svm) must have come
> across the issue of creating a numpy array from a ctypes pointer.
> I'll have to look further.

It depends on whether or not the library creates memory or not. Not every library manages memory (some expect you to pass in memory already owned --- this is easy to do already with ctypes and numpy).

>> When an ndarray is using memory that is not its own, it expects
>> another object to be "in charge" of that memory and the ndarray will
>> point its base attribute to it and increment its reference count.
>> What should the object that is "in charge" of the memory be?
>> Perhaps a suitable utility function could be created that can work
>> with ctypes to create ndarrays from ctypes memory locations and either
>> own or disown the data.
>
> I suppose that is still the case w/ PyBuffer_From... above. That is,
> the underlying im.imageData pointer cannot be released before buffer.

Yes, you are right. It is the memory that is most critical. Who owns the memory pointed to by im.imageData? When will it be freed? The ndarray object is holding a reference to the Python buffer object, which is just *hoping* that the memory it was initiated with is not going to be freed before it gets deallocated (which won't happen until at least the ndarray object is deallocated). So, managing the memory can be a bit tricky. But, if you are sure that the im.imageData memory will not be freed, then you are O.K.

-Travis
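The same pointer-to-array step can also be done without `pythonapi`, by re-typing the ctypes pointer as a sized array. This is a sketch: `im.imageData` and its size are stand-ins from the thread, simulated here with memory we allocate ourselves so ownership is unambiguous:

```python
import ctypes
import numpy as np

# Simulate a library handing back a raw uint8 pointer.
owner = (ctypes.c_ubyte * 8)(*range(8))
ptr = ctypes.cast(owner, ctypes.POINTER(ctypes.c_ubyte))  # like im.imageData
size_of_the_data = 8

# A fixed-size ctypes array supports the buffer protocol, so
# frombuffer can wrap the memory without copying it.
buf = (ctypes.c_ubyte * size_of_the_data).from_address(
    ctypes.addressof(ptr.contents))
arr = np.frombuffer(buf, np.uint8)

assert arr.tolist() == list(range(8))
owner[0] = 42              # shared memory: the change shows up in arr
assert arr[0] == 42
# As Travis warns, the owner of the memory must outlive arr.
```

Later versions of NumPy grew `numpy.ctypeslib.as_array(ptr, shape=(n,))`, which packages this same idea as the utility function Travis suggests building.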
From: Mark H. <ma...@mi...> - 2006-07-12 20:04:11

Travis Oliphant wrote:
> The problem here is that from Python NumPy has no way to create an
> ndarray from a pointer. Doing this creates a situation where it is
> unclear who owns the memory. It is probably best to wrap the pointer
> into some kind of object exposing the buffer protocol and then pass
> that to frombuffer (or ndarray.__new__).

Yep, that's where I just ended up:

    from ctypes import *
    import numpy as N
    ...
    func = pythonapi.PyBuffer_FromMemory
    func.restype = py_object
    buffer = func(im.imageData, size_of_the_data)   # imageData is a ctypes.LP_c_ubyte object
    return N.frombuffer(buffer, N.uint8)

Works! I'm curious though: the several projects recently using ctypes and numpy to wrap libraries (Pygame SDL, OpenGL, svm) must have come across the issue of creating a numpy array from a ctypes pointer. I'll have to look further.

> When an ndarray is using memory that is not its own, it expects
> another object to be "in charge" of that memory and the ndarray will
> point its base attribute to it and increment its reference count.
> What should the object that is "in charge" of the memory be?
> Perhaps a suitable utility function could be created that can work
> with ctypes to create ndarrays from ctypes memory locations and either
> own or disown the data.

I suppose that is still the case w/ PyBuffer_From... above. That is, the underlying im.imageData pointer cannot be released before buffer.

Mark

> This needs to be thought through a bit, however.
>
> -Travis

>> The attributes in nparray.__array_interface_ are not writable, so no
>> joy there.
>>
>> On the C side the PyArray_SimpleNewFromData( ..dimensions, ...data
>> ptr) C API does the job nicely. Is there a ctypes paradigm for
>> SimpleNew...?
>>
>> Mark
From: Travis O. <oli...@ie...> - 2006-07-12 19:26:02

Mark Heslep wrote:
> Travis Oliphant wrote:
>> Mark Heslep wrote:
>>> I don't see a clean way to create a numpy array from a ctypes pointer
>>> object. Is the __array_interface_ in ctypes the thing that's missing,
>>> needed to make this happen? I've followed Albert's Scipy cookbook
>>> on ctypes here.
>>>
>>> On the C side the PyArray_SimpleNewFromData( ..dimensions, ...data
>>> ptr) C API does the job nicely. Is there a ctypes paradigm for
>>> SimpleNew...?
>>
>> Can you somehow call this function using ctypes?
>>
>> -Travis
>
> That might work, though indirectly. As I think I understand from the
> ctypes docs: ctypes uses functions exposed in a shared library; macros
> existing only in a header are not available. If PyArray... is a macro,
> then I either a) need to compile and make a little library directly from
> arrayobject.h, or b) need to use the root function upon which the macro
> is based, PyArrayNew?

This is more complicated, because all the C-API functions are actually just pointers stored in _ARRAY_API of multiarray. So, something would have to be built to interpret the C-pointers in that C-Object. I'm not sure that is possible.

-Travis