From: Carlos P. <car...@ya...> - 2006-08-23 02:51:09
Hi! I'm writing a real-time sound synthesis framework where processing units are interconnected via numpy arrays. These buffers are all the same size and type, so it would be easy and convenient to pool them in order to avoid excessive creation/destruction of arrays (consider that thousands of them are acquired and released per second, but just a few dozen are in use at the same time). But first I would like to know if numpy implements some pooling mechanism by itself. Could you give me some insight on this?

Also, is it possible to obtain an uninitialized array? I mean, sometimes I don't feel like wasting valuable cpu clocks filling arrays with zeros, ones or whatever.

Thank you in advance.

Regards,
Carlos
From: Simon B. <si...@ar...> - 2006-08-23 03:06:47
On Tue, 22 Aug 2006 23:51:01 -0300 (ART)
Carlos Pita <car...@ya...> wrote:

> Hi! I'm writing a real-time sound synthesis framework where processing
> units are interconnected via numpy arrays. These buffers are all the same
> size and type, so it would be easy and convenient to pool them in order to
> avoid excessive creation/destruction of arrays (consider that thousands of
> them are acquired and released per second, but just a few dozen are in use
> at the same time). But first I would like to know if numpy implements some
> pooling mechanism by itself.

I don't think so.

> Could you give me some insight on this? Also, is it possible to obtain an
> uninitialized array?

numpy.empty

> I mean, sometimes I don't feel like wasting valuable cpu clocks filling
> arrays with zeros, ones or whatever.
> Thank you in advance.
> Regards,
> Carlos

Sounds like fun.

Simon.

--
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601
Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
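[Simon's one-word answer, numpy.empty, in context — a minimal sketch; the sizes and dtype here are invented for illustration:]

```python
import numpy as np

# numpy.empty allocates memory without clearing it, so the contents
# are whatever bytes happened to be there -- no cpu spent on filling.
buf = np.empty(1024, dtype=np.float32)

# numpy.zeros pays an extra pass to clear the array.
zbuf = np.zeros(1024, dtype=np.float32)

print(buf.shape, buf.dtype)   # (1024,) float32
print(zbuf[0])                # 0.0 -- zeros is guaranteed initialized
```

The catch is that reading an empty array before writing to it gives garbage, so it only suits buffers that are fully overwritten each block.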
From: Charles R H. <cha...@gm...> - 2006-08-23 03:31:35
On 8/22/06, Carlos Pita <car...@ya...> wrote:

> Hi! I'm writing a real-time sound synthesis framework where processing
> units are interconnected via numpy arrays. These buffers are all the same
> size and type, so it would be easy and convenient to pool them in order to
> avoid excessive creation/destruction of arrays (consider that thousands of
> them are acquired and released per second, but just a few dozen are in use
> at the same time). But first I would like to know if numpy implements some
> pooling mechanism by itself. Could you give me some insight on this? Also,
> is it possible to obtain an uninitialized array? I mean, sometimes I don't
> feel like wasting valuable cpu clocks filling arrays with zeros, ones or
> whatever.

Is there any reason to keep allocating arrays if you are just using them as data buffers? It seems you should be able to reuse them. If you wanted to be fancy you could keep them in a list, which would retain a reference and keep them from being garbage collected.

Chuck
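[Chuck's reuse suggestion might be sketched like this — buffer names and the block size are hypothetical; the key point is the ufunc output argument, which writes into an existing array instead of allocating a new one:]

```python
import numpy as np

BUFSIZE = 512  # hypothetical audio block size

# Preallocate the working buffers once; keeping them in a list holds
# a reference so the garbage collector never reclaims them.
buffers = [np.empty(BUFSIZE, dtype=np.float64) for _ in range(3)]
buf1, buf2, buf3 = buffers

buf2[:] = 0.5    # pretend these were filled by upstream units
buf3[:] = 0.25

# Reuse buf1 as the destination instead of allocating a new array:
np.add(buf2, buf3, buf1)   # same result as buf1 = buf2 + buf3,
                           # but with no fresh allocation per block
```

The trade-off is exactly the one Carlos raises next: you lose the `buf1 = buf2 + buf3` operator syntax, since `+` always creates a new array.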
From: Carlos P. <car...@ya...> - 2006-08-23 04:11:12
> Is there any reason to keep allocating arrays if you are just using them
> as data buffers? It seems you should be able to reuse them. If you wanted
> to be fancy you could keep them in a list, which would retain a reference
> and keep them from being garbage collected.

One reason is to use operator syntax: buf1 = buf2 + buf3, instead of add(buf2, buf3, buf1). The other is to spare the final user (synth programmer) any buffer bookkeeping.

My idea was to keep track of the pooled buffers' reference counts, so that those currently unused would have a refcount of 1 and could be safely deleted (well, if the pool policy variables allow it). But as buffers are acquired all the time, even a simple (pure-python) pooling policy implementation is pretty time consuming. In fact, I have benchmarked this against simply creating new zeros-arrays every time, and the non-pooling version just runs faster. That was when I thought that numpy could be doing some internal pooling by itself.

Regards,
Carlos
From: Charles R H. <cha...@gm...> - 2006-08-23 14:39:55
Hi Carlos,

On 8/22/06, Carlos Pita <car...@ya...> wrote:

> One reason is to use operator syntax: buf1 = buf2 + buf3, instead of
> add(buf2, buf3, buf1). The other is to spare the final user (synth
> programmer) any buffer bookkeeping.

I see.

> My idea was to keep track of the pooled buffers' reference counts, so that
> those currently unused would have a refcount of 1 and could be safely
> deleted (well, if the pool policy variables allow it). But as buffers are
> acquired all the time, even a simple (pure-python) pooling policy
> implementation is pretty time consuming. In fact, I have benchmarked this
> against simply creating new zeros-arrays every time, and the non-pooling
> version just runs faster. That was when I thought that numpy could be
> doing some internal pooling by itself.

I think the language libraries themselves must do some sort of pooling, at least the linux ones seem to. C++ programs do a lot of creation/destruction of structures on the heap and I have found the overhead noticeable but surprisingly small. Numpy arrays are a couple of layers of abstraction up, so maybe not quite as fast.

Chuck
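[Chuck's point about allocation overhead is easy to measure directly; a rough micro-benchmark along these lines (the block size is invented, and absolute numbers will vary by platform and allocator):]

```python
import timeit
import numpy as np

N = 512  # hypothetical audio block size

t_zeros = timeit.timeit(lambda: np.zeros(N), number=10000)
t_empty = timeit.timeit(lambda: np.empty(N), number=10000)

print(f"zeros: {t_zeros:.4f}s  empty: {t_empty:.4f}s")
# empty skips only the memset, so for small blocks the two times are
# close: most of the cost is the allocation and object setup that the
# C library's own malloc pooling already makes cheap.
```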