## numpy-discussion — Discussion list for all users of Numerical Python


 Re: [Numpy-discussion] extracting a random subset of a vector From: Rick White - 2004-08-31 19:48:30 ```On Tue, 31 Aug 2004, Curzio Basso wrote: > Hi all, I have an optimization problem. > > I currently use the following code to select a random subset of a rank-1 > array: Here's a slightly faster version. It's about 3x faster than Chris Barker's version (4x faster than your original version) for N=1000000, M=100: import numarray as NA import numarray.random_array as RA from math import sqrt N = 1000000 M = 100 full = NA.arange(N) r = RA.random(N) thresh = (M+3*sqrt(M))/N subset = NA.compress(r
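Rick's message is cut off mid-snippet above. The following is a minimal reconstruction of the thresholding idea it describes, rewritten for modern NumPy (the function name and the fall-back-to-permutation step are our assumptions, not Rick's exact code): draw one uniform number per element, keep those under a threshold sized so roughly M + 3*sqrt(M) elements survive, then trim to exactly M.

```python
import numpy as np

def random_subset(full, M, seed=None):
    # Hypothetical reconstruction of the truncated thresholding approach:
    # keep elements whose uniform draw falls below a threshold chosen so
    # that, with high probability, at least M survive; then trim to M.
    rng = np.random.default_rng(seed)
    N = len(full)
    thresh = (M + 3 * np.sqrt(M)) / N
    candidates = full[rng.random(N) < thresh]
    if len(candidates) < M:
        # Rare shortfall: fall back to a full permutation (our assumption).
        return rng.permutation(full)[:M]
    return rng.permutation(candidates)[:M]

subset = random_subset(np.arange(1_000_000), 100)
```

This only touches about M + 3*sqrt(M) elements after the initial random draw, which is why it beats permuting all N values.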
 Re: [Numpy-discussion] rebin (corrected) From: Stephen Walton - 2004-08-31 18:48:14 Attachments: Message as HTML ```On Mon, 2004-08-30 at 12:05, Tim Hochberg wrote: > Russell E Owen wrote: > > > I personally have no strong opinion on averaging vs summing > > I really have no strong feelings since I have no use for rebinning at the moment. Back when I did, it would have been for rebinning data from particle detectors. In the IRAF image tools, one gets summing by setting "fluxconserve=yes". The name of this option is nicely descriptive of what an astronomer would mean by summing as opposed to averaging. Many of the images I work with are ratio images; for example, a solar contrast map. When I reshape a ratio image I want average, not sum. So, I would have to say that I need to have both average and sum available with an option to switch between them. By the way, has anyone else read http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=2004SoPh..219....3D&db_key=AST&link_type=ABSTRACT&high=400734345713289 Craig has implemented the algorithm described therein in PDL (Perl Data Language, http://pdl.perl.org), and a Python based implementation would be awfully nice. Hope to see a few of you in Pasadena. -- Stephen Walton Dept. of Physics & Astronomy, Cal State Northridge ```
 Re: [Numpy-discussion] extracting a random subset of a vector From: Chris Barker - 2004-08-31 17:40:09 ```Curzio Basso wrote: > import numarray as NA > import numarray.random_array as RA > > N = 1000 > M = 100 > full = NA.arange(N) > subset = full[RA.permutation(N)][:M] > > --------------------------------------------------------- > > However, it's quite slow (at least with N~40k), you can speed it up a tiny bit by subsetting the permutation array first: subset = full[ RA.permutation(N)[:M] ] > and from the hotshot > output it looks like it's the indexing, not the permutation, which takes > time. not from my tests: import numarray.random_array as RA import numarray as NA import time N = 1000000 M = 100 full = NA.arange(N) start = time.clock() P = RA.permutation(N) print "permutation took %F seconds"%(time.clock() - start) start = time.clock() subset = full[P[:M]] print "subsetting took %F seconds"%(time.clock() - start) which results in: permutation took 1.640000 seconds subsetting took 0.000000 seconds so it's the permutation that takes the time, as I suspected. What would really speed this up is a random_array.non-repeat-randint() function, written in C. That way you wouldn't have to permute the entire N values, when you really just need M of them. Does anyone else think this would be a useful function? I can't imagine it would be that hard to write. If M <<< N, then you could probably write a little function in Python that called randint, and removed the repeats. If M is only a little smaller than N, this would be slow. -Chris -- Christopher Barker, Ph.D. Oceanographer NOAA/OR&R/HAZMAT (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@... ```
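Chris's wished-for "non-repeat-randint" eventually arrived: modern NumPy (an anachronism relative to this 2004 thread, so treat this as an aside) samples without replacement directly via `Generator.choice`, avoiding the full-length permutation:

```python
import numpy as np

rng = np.random.default_rng(0)
full = np.arange(1_000_000)
# Draw M distinct elements without building a length-N permutation first.
subset = rng.choice(full, size=100, replace=False)
```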
 [Numpy-discussion] Help req: IDL --> Numeric/numarray From: RJ - 2004-08-31 14:04:31 ```Hi all, The current IDL REBIN() thread reminded me to post this here... I'm trying to port Dr. Hongjie Xie's IDL/ENVI Implementation of the FFT Based Algorithm for Automatic Image Registration (it calculates shift, scale and rotation for 2d arrays) http://www.nmt.edu/%7Ehjxie/xie-paper.pdf ftp://ftp.iamg.org/VOL29/v29-08-10.zip to Python/numarray/Numeric and running into some subtle problems. For instance, the IDL ROT() requires some additional steps to emulate, and the MAX(array, position) as well. And, I'm not at all familiar with the correspondence of the FFT libraries... even the simple shift-only routine I translated http://rjs.org/astro/1004x/Python/register/shift_idl.py produces different (incorrect) x shift values than the IDL. My current code http://rjs.org/astro/1004x/Python/register/ conforms to Xie's flow as much as possible for now, but should and will be re-factored when functional; his LogPolar function, to convert rectangular coordinate arrays to polar, is particularly slow. I was recently informed that IDL has a usable demo version which I will try this week, but I would love to hear if anyone else has interest in this. I think that the algorithm would make a nice standalone module, or perhaps something for nd_image. Xie's main examples are registration of satellite Earth imagery, but it should work just as well looking up ;-) Thank you, Ray Secret anti-spam filter-passing text. Include with reply: qwertyuiop ```
 [Numpy-discussion] Help req: IDL --> Numeric/numarray From: RJ - 2004-08-31 13:50:13 ```Hi all, The current IDL REBIN() thread reminded me to post this here... I'm trying to port Hongjie Xie's IDL/ENVI Implementation of the FFT Based Algorithm for Automatic Image Registration http://www.nmt.edu/%7Ehjxie/xie-paper.pdf ftp://ftp.iamg.org/VOL29/v29-08-10.zip to Python/numarray/Numeric and running into some subtle problems. For instance, the IDL ROT() requires some additional steps to emulate, and the MAX(array, position) as well. And, I'm not at all familiar with the correspondence of the FFT libraries... even the simple shift-only routine http://rjs.org/astro/1004x/Python/register/shift_idl.py produces different (incorrect) x shift values than the IDL. My current code http://rjs.org/astro/1004x/Python/register/ conforms to Xie's flow as much as possible for now, but should and will be re-factored when functional. His LogPolar function, to convert rectangular coordinate arrays to polar, is particularly slow. I was recently informed that IDL has a usable demo version which I will try this week, but I would love to hear if anyone else has interest in this. I think that the algorithm would make a nice standalone module, or perhaps something for nd_image, as it calculates shift, scale and rotation for 2d arrays. Xie's main examples are registration of satellite Earth imagery, but it should work just as well looking up ;-) Thank you, Ray Secret anti-spam filter-passing text. Include with reply: qwertyuiop ```
 [Numpy-discussion] extracting a random subset of a vector From: Curzio Basso - 2004-08-31 12:55:41 ```Hi all, I have an optimization problem. I currently use the following code to select a random subset of a rank-1 array: ---------------------------------------------------------- import numarray as NA import numarray.random_array as RA N = 1000 M = 100 full = NA.arange(N) subset = full[RA.permutation(N)][:M] --------------------------------------------------------- However, it's quite slow (at least with N~40k), and from the hotshot output it looks like it's the indexing, not the permutation, which takes time. Does anyone have a suggestion on a faster pure-python implementation? thanks ```
 Re: [Numpy-discussion] rebin (corrected) From: Tim Hochberg - 2004-08-30 19:05:26 ```Russell E Owen wrote: > At 10:56 AM -0700 2004-08-30, Tim Hochberg wrote: > >> [SNIP] >> >>> >>> But I still agree with Perry that we ought to provide a built-in rebin >>> function. It is particularly useful for large multi-dimensional arrays >>> where it is wasteful (in both CPU and memory) to create a full-size >>> copy of the array before resampling it down to the desired rebinned >>> size. I appended the .copy() so that at least the big array is not >>> still hanging around in memory (remember that the slice creates a >>> view rather than a copy.) >>> Rick >>> >> A reasonable facsimile of this should be doable without dropping >> into C. Something like: >> >> def rebin_sum(a, (m, n)): >> M, N = a.shape >> a = na.reshape(a, (M/m,m,N/n,n)) >> return na.sum(na.sum(a, 3), 1) / float(m*n) >> >> This does create some temps, but they're smaller than in the boxcar >> case and it doesn't do all the extra calculation. This doesn't handle >> the case where a.shape isn't an exact multiple of (m,n). However, I >> don't think that would be all that hard to implement, if there is a >> consensus on what should happen then. >> I can think of at least two different ways this might be done: >> tacking on values that match the last value as already proposed and >> tacking on zeros. There may be others as well. It should probably get >> a boundary condition argument like convolve and friends. >> Personally, I'd find rebin a little surprising if it resulted in >> an average, as all the implementations thus far have done, rather >> than a simple sum over the stencil. When I think of rebinning I'm >> thinking of number of occurrences per bin, and rebinning should keep >> the total occurrences the same, not change them by the inverse of the >> stencil size. 
>> >> My 2.3 cents anyway > > > I agree that it would be nice to avoid the extra calculation involved > in convolution or boxcar averaging, and the extra temp storage. > > Your algorithm certainly looks promising, but I'm not sure there's any > space saving when the array shape is not an exact multiple of the bin > factor. Duplicating the last value is probably the most reasonable > alternative for my own applications (imaging). To use your algorithm, > I guess one has to increase the array first, creating a new temporary > array that is the same as the original except expanded to an even > multiple of the bin factor. In theory one could avoid duplication, but > I suspect to do this efficiently one really needs to use C code. I think you could probably do considerably better than the boxcar code, but it looks like it would get fairly messy once you start worrying about odd numbers of bins. It might end up being simpler to implement it in C, so that's probably a better idea in the long run. > I personally have no strong opinion on averaging vs summing. Summing > retains precision but risks overflow. Averaging potentially has the > opposite advantages, though avoiding overflow is tricky. Note that > Nadav Horesh's suggested solution (convolution with a mask of 1s > instead of boxcar averaging) computed the sum. I really have no strong feelings since I have no use for rebinning at the moment. Back when I did, it would have been for rebinning data from particle detectors. So for instance, you would change the bin size so that you had enough data in each bin that you could attempt to do statistics on it or plot it or whatever. In that domain it would make no sense to average on rebinning. However, I can see how it makes sense for imaging applications. In the absence of any compelling reason to do otherwise, I imagine the thing to do is copy what everyone else is doing as long as they're consistent. Do you know what Matlab and friends do? -tim ```
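Tim's `rebin_sum` sketch, translated to present-day NumPy (our translation; the original targets numarray and Python 2 tuple parameters), with the sum-vs-average choice the thread debates exposed as an option:

```python
import numpy as np

def rebin(a, m, n, op="mean"):
    # Reshape each (m, n) block onto its own pair of axes, then reduce.
    # Like the original sketch, this requires a.shape to be an exact
    # multiple of (m, n); the padding policy is left open, as in the thread.
    M, N = a.shape
    if M % m or N % n:
        raise ValueError("shape must be an exact multiple of (m, n)")
    blocks = a.reshape(M // m, m, N // n, n).sum(axis=(1, 3))
    return blocks / (m * n) if op == "mean" else blocks

a = np.arange(16.0).reshape(4, 4)
means = rebin(a, 2, 2)             # block averages
sums = rebin(a, 2, 2, op="sum")    # block totals
```

As in the original, the only temporaries are block-reduced arrays, so nothing full-size is copied beyond the reshape view.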
 Re: [Numpy-discussion] rebin (corrected) From: Russell E Owen - 2004-08-30 18:27:59 ```At 10:56 AM -0700 2004-08-30, Tim Hochberg wrote: >[SNIP] > >> >>But I still agree with Perry that we ought to provide a built-in rebin >>function. It is particularly useful for large multi-dimensional arrays >>where it is wasteful (in both CPU and memory) to create a full-size >>copy of the array before resampling it down to the desired rebinned >>size. I appended the .copy() so that at least the big array is not >>still hanging around in memory (remember that the slice creates a >>view rather than a copy.) >> Rick >> >A reasonable facsimile of this should be doable without dropping >into C. Something like: > >def rebin_sum(a, (m, n)): > M, N = a.shape > a = na.reshape(a, (M/m,m,N/n,n)) > return na.sum(na.sum(a, 3), 1) / float(m*n) > >This does create some temps, but they're smaller than in the boxcar >case and it doesn't do all the extra calculation. This doesn't >handle the case where a.shape isn't an exact multiple of (m,n). >However, I don't think that would be all that hard to implement, if >there is a consensus on what should happen then. > I can think of at least two different ways this might be done: >tacking on values that match the last value as already proposed and >tacking on zeros. There may be others as well. It should probably >get a boundary condition argument like convolve and friends. > Personally, I'd find rebin a little surprising if it resulted >in an average, as all the implementations thus far have done, rather >than a simple sum over the stencil. When I think of rebinning I'm >thinking of number of occurrences per bin, and rebinning should keep >the total occurrences the same, not change them by the inverse of >the stencil size. > >My 2.3 cents anyway I agree that it would be nice to avoid the extra calculation involved in convolution or boxcar averaging, and the extra temp storage. 
Your algorithm certainly looks promising, but I'm not sure there's any space saving when the array shape is not an exact multiple of the bin factor. Duplicating the last value is probably the most reasonable alternative for my own applications (imaging). To use your algorithm, I guess one has to increase the array first, creating a new temporary array that is the same as the original except expanded to an even multiple of the bin factor. In theory one could avoid duplication, but I suspect to do this efficiently one really needs to use C code. I personally have no strong opinion on averaging vs summing. Summing retains precision but risks overflow. Averaging potentially has the opposite advantages, though avoiding overflow is tricky. Note that Nadav Horesh's suggested solution (convolution with a mask of 1s instead of boxcar averaging) computed the sum. -- Russell ```
 Re: [Numpy-discussion] rebin (corrected) From: Tim Hochberg - 2004-08-30 17:57:17 ```[SNIP] > >But I still agree with Perry that we ought to provide a built-in rebin >function. It is particularly useful for large multi-dimensional arrays >where it is wasteful (in both CPU and memory) to create a full-size >copy of the array before resampling it down to the desired rebinned >size. I appended the .copy() so that at least the big array is not >still hanging around in memory (remember that the slice creates a >view rather than a copy.) > Rick > > A reasonable facsimile of this should be doable without dropping into C. Something like: def rebin_sum(a, (m, n)): M, N = a.shape a = na.reshape(a, (M/m,m,N/n,n)) return na.sum(na.sum(a, 3), 1) / float(m*n) This does create some temps, but they're smaller than in the boxcar case and it doesn't do all the extra calculation. This doesn't handle the case where a.shape isn't an exact multiple of (m,n). However, I don't think that would be all that hard to implement, if there is a consensus on what should happen then. I can think of at least two different ways this might be done: tacking on values that match the last value as already proposed and tacking on zeros. There may be others as well. It should probably get a boundary condition argument like convolve and friends. Personally, I'd find rebin a little surprising if it resulted in an average, as all the implementations thus far have done, rather than a simple sum over the stencil. When I think of rebinning I'm thinking of number of occurrences per bin, and rebinning should keep the total occurrences the same, not change them by the inverse of the stencil size. My 2.3 cents anyway -tim > > >------------------------------------------------------- >This SF.Net email is sponsored by BEA Weblogic Workshop >FREE Java Enterprise J2EE developer tools! >Get your free copy of BEA WebLogic Workshop 8.1 today. 
>http://ads.osdn.com/?ad_id=5047&alloc_id=10808&op=click >_______________________________________________ >Numpy-discussion mailing list >Numpy-discussion@... >https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > ```
 Re: [Numpy-discussion] rebin (corrected) From: Rick White - 2004-08-30 17:15:57 ```On Mon, 30 Aug 2004, Russell E Owen wrote: > nd_image.boxcar_filter has an origin argument that *might* be for > this purpose. Unfortunately, it is not documented and my attempt to > use it as desired failed. I have no idea if this is a bug in nd_image > or a misunderstanding on my part: > >>> from numarray.nd_image import boxcar_filter > >>> b = boxcar_filter(a, (2,), origin=(1,), output_type=num.Float32) > Traceback (most recent call last): > File "", line 1, in ? > File > "/usr/local/lib/python2.3/site-packages/numarray/nd_image/filters.py", > line 339, in boxcar_filter > output_type = output_type) > File > "/usr/local/lib/python2.3/site-packages/numarray/nd_image/filters.py", > line 280, in boxcar_filter1d > cval, origin, output_type) > RuntimeError: shift not within filter extent Seems like you got close to the answer. This gives the answer you want: >>> boxcar_filter(a, (2,), output_type=num.Float32,origin=-1) array([ 0.5, 1.5, 2.5, 3.5, 4. ], type=Float32) And so this works for rebin: >>> boxcar_filter(a, (2,), output_type=num.Float32,origin=-1)[::2].copy() array([ 0.5, 2.5, 4. ], type=Float32) But I still agree with Perry that we ought to provide a built-in rebin function. It is particularly useful for large multi-dimensional arrays where it is wasteful (in both CPU and memory) to create a full-size copy of the array before resampling it down to the desired rebinned size. I appended the .copy() so that at least the big array is not still hanging around in memory (remember that the slice creates a view rather than a copy.) Rick ```
 Re: [Numpy-discussion] rebin (corrected) From: Russell E Owen - 2004-08-30 16:43:10 ```At 9:06 AM -0700 2004-08-30, Russell E Owen wrote: >At 9:14 AM -0400 2004-08-30, Perry Greenfield wrote: >>... >>Note that a boxcar smoothing costs no more than doing the above averaging. >>So in numarray, you could do the following: >> >>from numarray.convolve import boxcar >>b = boxcar(a, (n,n)) >>rebinnedarray = b[::n,::n].copy() >> >>or something like this (I haven't tried to figure out the correct offset >>for the slice) where one wants to rebin by a factor of n in both dimensions. >>We should probably add a built in function to do this. >... >I think the polished version (for nxm binning) is: > >from numarray.convolve import boxcar >b = boxcar(a, (n,m), mode='constant', cval=0) >rebinnedarray = b[n//2::n,m//2::m].copy() > >A rebin function would be handy since using boxcar is a bit tricky. I made several mistakes, one of them very serious: the convolve boxcar cannot do the job unless the array size is an exact multiple of the bin factor. The problem is that boxcar starts in the "wrong" place. Here's an example: Problem: rebin [0, 1, 2, 3, 4] by 2 to yield: [0.5, 2.5, 4.0] where the last point (value 4) is averaged with next-off-the-end, which we approximate by extending the data (note: my proposed "polished" version messed that up; Perry had it right). Using boxcar almost works: >>> import numarray as num >>> from numarray.convolve import boxcar >>> a = num.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> boxcar(a, (2,)) array([ 0. , 0.5, 1.5, 2.5, 3.5]) >>> b = boxcar(a, (2,)) >>> b array([ 0. , 0.5, 1.5, 2.5, 3.5]) >>> b[1::2] array([ 0.5, 2.5]) but oops: the last point is missing!!! What is needed is some way to make the boxcar average start later, so it finishes by averaging 4 plus the next value off the edge of the array, e.g. 
a hypothetical version of boxcar with a start argument: >>> b2 = nonexistent_boxcar(a, (2,), start=1) >>> b2 [0.5, 1.5, 2.5, 3.5, 4.0] >>> b2[0::2] [0.5, 2.5, 4.0] nd_image.boxcar_filter has an origin argument that *might* be for this purpose. Unfortunately, it is not documented and my attempt to use it as desired failed. I have no idea if this is a bug in nd_image or a misunderstanding on my part: >>> from numarray.nd_image import boxcar_filter >>> # first the usual answer; omitting the origin argument yields the >>>same answer >>> b = boxcar_filter(a, (2,), origin=(0,), output_type=num.Float32) array([ 0. , 0.5, 1.5, 2.5, 3.5], type=Float32) >>> # now try the origin argument and get a traceback; origin=1 gives >>>the same error: >>> b = boxcar_filter(a, (2,), origin=(1,), output_type=num.Float32) Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.3/site-packages/numarray/nd_image/filters.py", line 339, in boxcar_filter output_type = output_type) File "/usr/local/lib/python2.3/site-packages/numarray/nd_image/filters.py", line 280, in boxcar_filter1d cval, origin, output_type) RuntimeError: shift not within filter extent So, yes, a rebin function that actually worked would be a real godsend! Meanwhile, any other suggestions? Fortunately in our application we *can* call out to IDL, but it seems a shame to have to do that. -- Russell ```
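For what it's worth, `numarray.nd_image` lives on as `scipy.ndimage`, where the negative-`origin` shift Rick arrives at in his reply does what Russell wanted (a sketch under that assumption; `mode='nearest'` supplies the repeat-the-edge behaviour the thread asks for):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

a = np.arange(5, dtype=float)
# origin=-1 shifts the 2-wide window so output[i] averages a[i] and
# a[i+1]; mode='nearest' repeats the edge value past the end.
b = uniform_filter1d(a, size=2, origin=-1, mode="nearest")
rebinned = b[::2]
```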
 Re: [Numpy-discussion] rebin From: Russell E Owen - 2004-08-30 16:06:37 ```At 9:14 AM -0400 2004-08-30, Perry Greenfield wrote: >On Aug 27, 2004, at 8:34 PM, Russell E. Owen wrote: > >> Any suggestions on an efficient means to bin a 2-d array? >> ... >> For example, to 2x2 bin a two-dimensional image, one would: >> average (0,0), (0,1), (1,0), (1,1) to form (0,0) >> average (0,2), (0,3), (1,2), (1,3) to form (0,1) >> ... >> >Note that a boxcar smoothing costs no more than doing the above averaging. >So in numarray, you could do the following: > >from numarray.convolve import boxcar >b = boxcar(a, (n,n)) >rebinnedarray = b[::n,::n].copy() > >or something like this (I haven't tried to figure out the correct offset >for the slice) where one wants to rebin by a factor of n in both dimensions. >We should probably add a built in function to do this. Thanks! Great suggestion! I think the polished version (for nxm binning) is: from numarray.convolve import boxcar b = boxcar(a, (n,m), mode='constant', cval=0) rebinnedarray = b[n//2::n,m//2::m].copy() A rebin function would be handy since using boxcar is a bit tricky. -- Russell ```
 Re: [Numpy-discussion] rebin From: Perry Greenfield - 2004-08-30 13:14:50 ```On Aug 27, 2004, at 8:34 PM, Russell E. Owen wrote: > Any suggestions on an efficient means to bin a 2-d array? REBIN is the > IDL > function I'm trying to mimic. Binning allows one to combine sets of > pixels from > one array to form a new array that is smaller by a given factor along > each > dimension. > > To nxm bin a 2-dimensional array, one averages (or sums or ?) each nxm > block of > entries from the input image to form the corresponding entry of the > output > image. > > For example, to 2x2 bin a two-dimensional image, one would: > average (0,0), (0,1), (1,0), (1,1) to form (0,0) > average (0,2), (0,3), (1,2), (1,3) to form (0,1) > ... > > In case it helps, in my immediate case I'm binning a boolean array (a > mask) and > thus can live with almost any kind of combination. > Note that a boxcar smoothing costs no more than doing the above averaging. So in numarray, you could do the following: from numarray.convolve import boxcar b = boxcar(a, (n,n)) rebinnedarray = b[::n,::n].copy() or something like this (I haven't tried to figure out the correct offset for the slice) where one wants to rebin by a factor of n in both dimensions. We should probably add a built in function to do this. Perry ```
 RE: [Numpy-discussion] rebin From: Nadav Horesh - 2004-08-28 20:53:53 ```For the most general form of binning I use a convolution (by a 2D mask) followed by a subsampling. For example for a 3x3 binning: mask = ones((3,3)) binned = convolve2d(data,mask,'same')[1::3,1::3] Nadav. -----Original Message----- From: Russell E. Owen [mailto:owen@...] Sent: Sat 28-Aug-04 03:34 To: numpy-discussion@... Cc: Subject: [Numpy-discussion] rebin Any suggestions on an efficient means to bin a 2-d array? REBIN is the IDL function I'm trying to mimic. Binning allows one to combine sets of pixels from one array to form a new array that is smaller by a given factor along each dimension. To nxm bin a 2-dimensional array, one averages (or sums or ?) each nxm block of entries from the input image to form the corresponding entry of the output image. For example, to 2x2 bin a two-dimensional image, one would: average (0,0), (0,1), (1,0), (1,1) to form (0,0) average (0,2), (0,3), (1,2), (1,3) to form (0,1) ... In case it helps, in my immediate case I'm binning a boolean array (a mask) and thus can live with almost any kind of combination. -- Russell ------------------------------------------------------- This SF.Net email is sponsored by BEA Weblogic Workshop FREE Java Enterprise J2EE developer tools! Get your free copy of BEA WebLogic Workshop 8.1 today. http://ads.osdn.com/?ad_id=5047&alloc_id=10808&op=click _______________________________________________ Numpy-discussion mailing list Numpy-discussion@... https://lists.sourceforge.net/lists/listinfo/numpy-discussion ```
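Nadav's one-liner becomes runnable with SciPy's `convolve2d` (our assumption about where his `convolve2d` comes from; the post doesn't say). Sampling at `[1::3, 1::3]` lands on the centre of each 3x3 block, so every kept value is that block's sum:

```python
import numpy as np
from scipy.signal import convolve2d

data = np.arange(36.0).reshape(6, 6)
mask = np.ones((3, 3))
# mode='same' keeps the input shape; picking the centre of every 3x3
# block turns the convolution into per-block sums.
binned = convolve2d(data, mask, mode="same")[1::3, 1::3]
```

Dividing by `mask.sum()` afterwards would give block averages instead, which is the sum-vs-average switch the rest of the thread discusses.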
 [Numpy-discussion] rebin From: Russell E. Owen - 2004-08-28 00:34:52 ```Any suggestions on an efficient means to bin a 2-d array? REBIN is the IDL function I'm trying to mimic. Binning allows one to combine sets of pixels from one array to form a new array that is smaller by a given factor along each dimension. To nxm bin a 2-dimensional array, one averages (or sums or ?) each nxm block of entries from the input image to form the corresponding entry of the output image. For example, to 2x2 bin a two-dimensional image, one would: average (0,0), (0,1), (1,0), (1,1) to form (0,0) average (0,2), (0,3), (1,2), (1,3) to form (0,1) ... In case it helps, in my immediate case I'm binning a boolean array (a mask) and thus can live with almost any kind of combination. -- Russell ```
 [Numpy-discussion] error bar default color From: Jin-chung Hsu - 2004-08-27 20:42:54 ```In 0.61.0, when plotting a simple array with error bars, the default color of the error bars is black, instead of being the same as the line/markers color, e.g.: >>> errorbar([1,2,3,4,5],[3,4,5,6,7],fmt='ro',yerr=[1,1,1,1,1]) I prefer them to be the same, especially since the default color for marker/line is blue and a beginner may be surprised to see the different color. This may be related to my last posting regarding the marker edge color. JC ```
 [Numpy-discussion] marker edge color From: Jin-chung Hsu - 2004-08-27 20:29:56 ```In 0.61.0, the marker edge color is still black, instead of being the same as the marker face color. Personally, I think they should be the same. What do people think? One reason for the same color argument is that if the markersize is set to a small value, it will show mostly the edge color. One possible reason against it is that if the marker color is white (or whatever the background color is), then you can't see the marker. But we can default this case to black (or whatever). JC Hsu ```
 Re: [Numpy-discussion] Confusion regarding Numeric array resizing From: David M. Cooke - 2004-08-27 04:52:00 ```On Thu, Aug 26, 2004 at 10:19:15PM -0400, Jp Calderone wrote: > > I'm confused by the restriction on resizing arrays demonstrated below: > > >>> from Numeric import array > >>> a = array([0]) > >>> a.resize((2,)) > >>> b = a > >>> a.resize((3,)) > Traceback (most recent call last): > File "", line 1, in ? > ValueError: cannot resize an array that has been referenced or is > referencing another array in this way. Use the resize function. > >>> del b > >>> a.resize((3,)) > >>> a > array([0, 0, 0]) > > This seems like an unnecessary restriction. Is there a reason I'm > missing for it to be in place? In this case it's obvious that b is not a copy of a (because that's how Python assignment works), but if your statement was b = a[1:], it's a little different. Then, b is a view of a: a[1] = 1 will make b[0] == 1. So, the question is, how should resizing a change b? The Numeric developers decided to punt on this, I guess, and tell you to explictly make a new copy. Now, this restriction doesn't occur in numarray. Resizing an array will (maybe) make a new copy of the underlying memory. Then b in this case will no longer point to a's data, but will be the owner of the old data. [I say "maybe" as it depends on where the memory for a came from.] -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm@... ```
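The restriction survives into modern NumPy (an aside relative to Numeric): in-place `ndarray.resize` refuses while other references exist, and the escape hatches are the copying `np.resize` function or `refcheck=False`:

```python
import numpy as np

a = np.array([0])
b = a                      # a second reference to the same array object
try:
    a.resize((3,))         # in-place resize refuses while b exists
except ValueError:
    resized_copy = np.resize(a, (3,))   # function form returns a new array
a.resize((3,), refcheck=False)          # or explicitly skip the check
```

`np.resize` repeats the data cyclically, while in-place resizing zero-fills any new elements, so the two escape hatches are not interchangeable.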
 [Numpy-discussion] Confusion regarding Numeric array resizing From: Jp Calderone - 2004-08-27 02:19:18 ``` I'm confused by the restriction on resizing arrays demonstrated below: >>> from Numeric import array >>> a = array([0]) >>> a.resize((2,)) >>> b = a >>> a.resize((3,)) Traceback (most recent call last): File "", line 1, in ? ValueError: cannot resize an array that has been referenced or is referencing another array in this way. Use the resize function. >>> del b >>> a.resize((3,)) >>> a array([0, 0, 0]) This seems like an unnecessary restriction. Is there a reason I'm missing for it to be in place? Jp ```
 RE: [Numpy-discussion] fromstring doesn't accept buffer object From: Nadav Horesh - 2004-08-26 14:06:10 ```Sorry for mixing test and production systems: There is no problem here with python 2.3, I got this failure with python 2.4a2: ========================================== import numarray as n import Numeric as N s = '12341234' b = buffer(s) print 'numarray: fromstring (string ..) :' print n.fromstring(s, type=n.Float64) print 'Numeric: fromstring (string ..) :' print N.fromstring(s, typecode=N.Float64) print 'Numeric: fromstring (buffer ..) :' print N.fromstring(b, typecode=N.Float64) print 'numarray: fromstring (buffer ..) :' print n.fromstring(b, type=n.Float64) === Results: ==== numarray: fromstring (string ..) : [ 3.05810932e-57] Numeric: fromstring (string ..) : [ 3.05810932e-57] Numeric: fromstring (buffer ..) : [ 3.05810932e-57] === This error happens only with Python 2.4.a2: numarray: fromstring (buffer ..) : Traceback (most recent call last): File "test.py", line 17, in ? print n.fromstring(b, type=n.Float64) File "/usr/local/lib/python2.4/site-packages/numarray/numarraycore.py", line 378, in fromstring arr._data, 0, (type.bytes,), type.bytes) libnumarray.error: copyNbytes: access beyond buffer. offset=7 buffersize=0 Nadav. -----Original Message----- From: Todd Miller [mailto:jmiller@...] Sent: Thu 26-Aug-04 15:51 To: Nadav Horesh Cc: numpy-discussion Subject: Re: [Numpy-discussion] fromstring doesn't accept buffer object This is what I see this morning using CVS: >>> from numarray import * >>> s = "thisthis" >>> fromstring(buffer(s), shape=(2,), type=Int32) array([1936287860, 1936287860]) What kind of failure are you seeing? Regards, Todd On Thu, 2004-08-26 at 09:28, Nadav Horesh wrote: > Numeric accepts a buffer object instead of a string: > > arr = Numeric.fromstring(buffer(.some_string), ..) 
> while numarray doesn't. > > Is it intentional? > > Nadav. > > > ------------------------------------------------------- > SF.Net email is sponsored by Shop4tech.com-Lowest price on Blank Media > 100pk Sonic DVD-R 4x for only \$29 -100pk Sonic DVD+R for only \$33 > Save 50% off Retail on Ink & Toner - Free Shipping and Free Gift. > http://www.shop4tech.com/z/Inkjet_Cartridges/9_108_r285 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion@... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion -- ```
 Re: [Numpy-discussion] fromstring doesn't accept buffer object From: Todd Miller - 2004-08-26 12:51:32 ```This is what I see this morning using CVS: >>> from numarray import * >>> s = "thisthis" >>> fromstring(buffer(s), shape=(2,), type=Int32) array([1936287860, 1936287860]) What kind of failure are you seeing? Regards, Todd On Thu, 2004-08-26 at 09:28, Nadav Horesh wrote: > Numeric accepts a buffer object instead of a string: > > arr = Numeric.fromstring(buffer(some_string), ..) > while numarray doesn't. > > Is it intentional? > > Nadav. -- ```
 [Numpy-discussion] fromstring doesn't accept buffer object From: Nadav Horesh - 2004-08-26 12:27:28 ```Numeric accepts a buffer object instead of a string: arr = Numeric.fromstring(buffer(some_string), ..) while numarray doesn't. Is it intentional? Nadav. ```
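[Editor's note] For readers following this thread today: numarray's `fromstring` is gone, but NumPy's `frombuffer` accepts any object exporting the buffer interface (bytes, memoryview, arrays), which is exactly the behavior Nadav wanted. A sketch reproducing Todd's "thisthis" check, with an explicit little-endian dtype so the printed integers are platform-independent:

```python
import numpy as np

s = b"thisthis"
arr = np.frombuffer(s, dtype="<i4")   # '<i4' = little-endian int32
print(arr)                            # [1936287860 1936287860]

# memoryview is the modern spelling of the old buffer() object
arr2 = np.frombuffer(memoryview(s), dtype="<i4")
print(arr2)                           # same values, zero-copy view
```

(1936287860 is 0x73696874, the bytes `t h i s` read as a little-endian 32-bit integer, matching Todd's numarray output.)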
 Re: [Numpy-discussion] Accessing a C library from numarray From: Simon Burton - 2004-08-26 00:26:16 ```This sounds like a job for pyrex. Here is a recent message from the pyrex list. cheers, Simon. On Wed, 25 Aug 2004 15:37:58 +0100 (BST) Michael Hoffman wrote: > I wanted to use Pyrex with Numarray, which is the supported > implementation of Numeric, which is no longer supported. Changing the > demo that comes with Pyrex to use Numarray instead of Numeric was > instructive. I thought the results of this might be useful to anyone > else trying to do the same thing. Here are diffs: > > --- numeric_demo.pyx 2003-07-09 11:37:15.000000000 +0100 > +++ numarray_demo.pyx 2004-08-25 15:30:12.000000000 +0100 > @@ -1,33 +1,92 @@ > # > # This example demonstrates how to access the internals > -# of a Numeric array object. > +# of a Numarray object. > # > > -cdef extern from "Numeric/arrayobject.h": > +cdef extern from "numarray/libnumarray.h": > + ctypedef int maybelong > > - struct PyArray_Descr: > - int type_num, elsize > - char type > + cdef struct PyArray_Descr: > + int type_num # PyArray_TYPES > + int elsize # bytes for 1 element > + char type # One of "cb1silfdFD " Object array not supported > + # function pointers omitted > > - ctypedef class Numeric.ArrayType [object PyArrayObject]: > + ctypedef class numarray._numarray._numarray [object PyArrayObject]: > cdef char *data > cdef int nd > - cdef int *dimensions, *strides > + cdef maybelong *dimensions > + cdef maybelong *strides > cdef object base > cdef PyArray_Descr *descr > cdef int flags > + > + # numarray extras > + cdef maybelong *_dimensions > + cdef maybelong *_strides > > -def print_2d_array(ArrayType a): > - print "Type:", chr(a.descr.type) > - if chr(a.descr.type) <> "f": > + cdef object _data # object must meet buffer API > + cdef object _shadows # ill-behaved original array. 
> + cdef int nstrides # elements in strides array > + cdef long byteoffset # offset into buffer where array data begins > + cdef long bytestride # basic seperation of elements in bytes > + cdef long itemsize # length of 1 element in bytes > + > + cdef char byteorder # NUM_BIG_ENDIAN, NUM_LITTLE_ENDIAN > + > + cdef char _aligned # test override flag > + cdef char _contiguous # test override flag > + > + ctypedef enum: > + NUM_UNCONVERTED # 0 > + NUM_CONTIGUOUS # 1 > + NUM_NOTSWAPPED # 2 > + NUM_ALIGNED # 4 > + NUM_WRITABLE # 8 > + NUM_COPY # 16 > + NUM_C_ARRAY # = (NUM_CONTIGUOUS | NUM_ALIGNED | NUM_NOTSWAPPED) > + > + ctypedef enum NumarrayType: > + tAny > + tBool > + tInt8 > + tUInt8 > + tInt16 > + tUInt16 > + tInt32 > + tUInt32 > + tInt64 > + tUInt64 > + tFloat32 > + tFloat64 > + tComplex32 > + tComplex64 > + tObject # placeholder... does nothing > + tDefault = tFloat64 > + tLong = tInt32, > + tMaxType > + > + void import_libnumarray() > + _numarray NA_InputArray (object, NumarrayType, int) > + void *NA_OFFSETDATA(_numarray) > + > +import_libnumarray() > + > +def print_2d_array(_numarray _a): > + print "Type:", chr(_a.descr.type) > + if chr(_a.descr.type) <> "f": > raise TypeError("Float array required") > - if a.nd <> 2: > + if _a.nd <> 2: > raise ValueError("2 dimensional array required") > + > + cdef _numarray a > + a = NA_InputArray(_a, tFloat32, NUM_C_ARRAY) > + > cdef int nrows, ncols > cdef float *elems, x > nrows = a.dimensions[0] > ncols = a.dimensions[1] > - elems = a.data > + elems = NA_OFFSETDATA(a) > hyphen = "-" > divider = ("+" + 10 * hyphen) * ncols + "+" > print divider > > --- run_numeric_demo.py 2003-07-09 11:37:15.000000000 +0100 > +++ run_numarray_demo.py 2004-08-25 12:06:47.000000000 +0100 > @@ -1,5 +1,5 @@ > -import Numeric > -import numeric_demo > +import numarray > +import numarray_demo > > -a = Numeric.array([[1.0, 3.5, 8.4], [2.3, 6.6, 4.1]], "f") > -numeric_demo.print_2d_array(a) > +a = numarray.array([[1.0, 3.5, 8.4], [2.3, 6.6, 
4.1]], "f") > +numarray_demo.print_2d_array(a) > -- > Michael Hoffman > European Bioinformatics Institute > > P.S. Thanks for Pyrex! I just coded a Python implementation of a > bioinformatics algorithm with the inner loop in Pyrex and it runs as > fast as the pure C implementation we use in my group. > > _______________________________________________ > Pyrex mailing list > Pyrex@... > http://lists.copyleft.no/mailman/listinfo/pyrex On Fri, 20 Aug 2004 09:51:10 -0500 Bruce Southey wrote: > Hi, > I need to access a GLP'ed C library in numarray. The library functions are > scalar ints and doubles as inputs and usually doubles as outputs. > > Is there a recommended (and simple) way to achieve this? There are a few > different C API's that exist from the Python to numarrays. > > Would it be sufficient to first write C code that uses the library and use that > code for a C API in Python or numarray? > > Thanks, > > Bruce Southey > > -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com ```
 Re: [Numpy-discussion] MA.zeros problem From: Reggie Dugard - 2004-08-23 17:26:38 ```Stephen, You might try something like: >>> a = MA.zeros((2,2)) >>> a array( [[0,0,] [0,0,]]) >>> a[1,1] = 2 >>> a array( [[0,0,] [0,2,]]) Because MA uses copy semantics, when you specify a[1][1], the a[1] makes a copy of the second row of the original array, and then the second [1] index does a setitem on that copy, which is immediately discarded. If you use the a[1,1] syntax, the setitem is done on 'a' itself. Hope this is of some help. Reggie Dugard Merfin, LLC On Mon, 2004-08-23 at 09:25, smpitts@... wrote: > Hi all, > I have some code that uses Numeric, and I'm trying to change it so that it supports MA as well. My code uses Numeric.zeros to create a matrix and then populates it, but I'm having trouble figuring out how to do something similar in MA. > > Python 2.2.3 (#1, Oct 15 2003, 23:33:35) > [GCC 3.3.1 20030930 (Red Hat Linux 3.3.1-6)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import MA > >>> MA.__version__ > '11.1.0' > >>> a = MA.zeros((2,2)) > >>> a > array( > [[0,0,] > [0,0,]]) > >>> a[1][1] = 2 > >>> a > array( > [[0,0,] > [0,0,]]) > > Is there some function to automatically create a masked array of arbitrary dimensions filled with zeroes? Thanks. > -- > Stephen Pitts > smpitts@... ```
 [Numpy-discussion] MA.zeros problem From: - 2004-08-23 16:24:57 ```Hi all, I have some code that uses Numeric, and I'm trying to change it so that it supports MA as well. My code uses Numeric.zeros to create a matrix and then populates it, but I'm having trouble figuring out how to do something similar in MA. Python 2.2.3 (#1, Oct 15 2003, 23:33:35) [GCC 3.3.1 20030930 (Red Hat Linux 3.3.1-6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import MA >>> MA.__version__ '11.1.0' >>> a = MA.zeros((2,2)) >>> a array( [[0,0,] [0,0,]]) >>> a[1][1] = 2 >>> a array( [[0,0,] [0,0,]]) Is there some function to automatically create a masked array of arbitrary dimensions filled with zeroes? Thanks. -- Stephen Pitts smpitts@... ```
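[Editor's note] The pitfall Reggie describes — assigning through an intermediate copy, so the write is silently discarded — still bites in modern NumPy. Plain `a[1]` now returns a view, so the chained form happens to work on ndarrays, but any indexing that produces a copy (e.g. a fancy index) still drops the write; `numpy.ma.zeros` is the modern stand-in for `MA.zeros`. A sketch, with names chosen purely for illustration:

```python
import numpy as np

a = np.zeros((2, 2), dtype=int)
a[1, 1] = 2                  # single setitem on a itself: sticks
print(a[1, 1])               # 2

b = np.zeros((2, 2), dtype=int)
b[[1]][0, 1] = 2             # b[[1]] (fancy index) is a copy: write is lost
print(b[1, 1])               # still 0

m = np.ma.zeros((2, 2))      # masked-array replacement for MA.zeros
m[1, 1] = 2
print(m)
```

The `a[row, col]` spelling Reggie recommends remains the safe idiom, because it performs one setitem on the original array no matter what the first index would have returned.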
