From: Keith G. <kwg...@gm...> - 2006-11-02 05:29:36
On 11/1/06, David Cournapeau <da...@ar...> wrote:
> Hi,
>
> I want to generate some random integers, let's say in the range
> [-2^15, 2^16]. Why does
>
> noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)
>
> give only negative numbers?

I guess 2^15 is too big to be an int. Would this work?

int(N.random.uniform(1e15, 1e16))
From: Fernando P. <fpe...@gm...> - 2006-11-02 05:26:00
On 11/1/06, David Cournapeau <da...@ar...> wrote:
> Hi,
>
> I want to generate some random integers, let's say in the range
> [-2^15, 2^16]. Why does
>
> noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)
>
> give only negative numbers?

In [3]: noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)

In [4]: noise[noise>0].shape
Out[4]: (17,)

In [5]: noise[noise<0].shape
Out[5]: (22033,)

In [6]: noise = numpy.random.random_integers(-(2**15), (2 ** 15), 22050)

In [7]: noise[noise>0].shape
Out[7]: (11006,)

In [8]: noise[noise>0].shape
Out[8]: (11006,)

In [9]: 17./22033
Out[9]: 0.00077156991785049694

In [10]: 2.0*15/2**15
Out[10]: 0.00091552734375

close enough ;)

Cheers,

f
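Fernando's In [6] pins down the fix: the upper bound was typed as (2 * 15), i.e. 30, instead of (2 ** 15). A minimal corrected sketch, using the random_integers API from the thread (later numpy deprecates it in favor of randint):

    import numpy

    # (2 * 15) == 30, so the original call drew from [-32768, 30] and
    # nearly every sample came out negative; the intended upper bound
    # is 2 ** 15 == 32768.
    noise = numpy.random.random_integers(-(2 ** 15), 2 ** 15, 22050)
    print(noise.min(), noise.max())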
From: David C. <da...@ar...> - 2006-11-02 05:23:06
Hi,

I want to generate some random integers, let's say in the range
[-2^15, 2^16]. Why does

noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)

give only negative numbers?

Cheers,

David
From: Robert K. <rob...@gm...> - 2006-11-02 05:17:39
Tim Hochberg wrote:
> Travis Oliphant wrote:
>> Robert Kern wrote:
>>> Travis Oliphant wrote:
>>>> It looks like 1.0-x is doing the right thing.
>>>>
>>>> The problem is 1.0*x for matrices is going to float64. For arrays it
>>>> returns float32 just like the 1.0-x
>>>>
>>> Why is this the right thing? Python floats are float64.
>>>
>> Yeah, why indeed. Must be something with the scalar coercion code...
>
> This is one of those things that pops up every few years. I suspect that
> the best thing to do here is to treat 1.0, and all Python floats, as
> having a kind (float) but no precision. Or, equivalently, treat them as
> the smallest precision floating point value. The rationale behind this
> is that otherwise float32 arrays will be promoted whenever they are
> multiplied by Python floating point scalars. If Python floats are
> treated as float64 for purposes of determining output precision, then
> anyone using float32 arrays is going to have to wrap all of their
> literals in float32 to prevent inadvertent upcasting to float64. This
> was the origin of the (rather clunky) numarray spacesaver flag.
>
> It's no skin off my nose either way, since I pretty much never use
> float32, but I suspect that treating Python floats equivalently to
> float64 scalars would be a mistake. At the very least it deserves a bit
> of discussion.

Well, they *are* 64-bit floating point numbers. You simply can't get around
that. That's why we now have all of the scalar types: you can get any
precision scalars that you want as long as you are explicit about it (and
explicit is better than implicit). The spacesaver flag was the only solution
before the various scalar types existed.

I'd like to suggest that the discussion already occurred some time ago and
concluded in favor of the scalar types. Downcasting should be explicit.

However, whether float32 arrays operated with Python float scalars give
float32 or float64 arrays is tangential to my question. Does anyone actually
think that a Python float operated with a boolean array should give a
float32 result? Must we *up*cast a boolean array to float64 to preserve the
precision of the scalar?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
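A short sketch of the explicit-scalar-types route Robert describes; the behavior of the plain-Python-float line is exactly what the thread is debating, so only the explicit case is asserted:

    import numpy as np

    a = np.ones(3, dtype=np.float32)

    # Explicit downcasting: wrapping the literal in the array's own
    # scalar type keeps the result float32 on any numpy version.
    b = np.float32(1.0) * a
    print(b.dtype)  # float32

    # A plain Python float is a 64-bit value; whether this promotes the
    # float32 array depends on the casting rules under discussion here.
    c = 1.0 * a
    print(c.dtype)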
From: Charles R H. <cha...@gm...> - 2006-11-02 05:00:07
On 11/1/06, Charles R Harris <cha...@gm...> wrote:
> On 11/1/06, Fernando Perez <fpe...@gm...> wrote:
>> On 11/1/06, Travis Oliphant <oli...@ie...> wrote:
>>> Apparently some dgesdd libraries don't actually compute the correct
>>> value for the work-space size if requested.
>>>
>>> This results in an ** ILLEGAL value and program termination from
>>> LAPACK.
>>>
>>> I've added code in the latest SVN to that particular wrapper to make
>>> sure the query gives back the minimums at least, but I'd like to see
>>> who else has the problem.
>>>
>>> On my system, doing numpy.linalg.svd on a 25 x 50 array pulled the
>>> trigger.
>>>
>>> Anybody else have that problem with their lapack library?
>>
>> Seems fine here:
>
> Fine here also: 1.0.1.dev3416
>
> Fedora 6, ATLAS and LAPACK from extras.
>
> Chuck

Oh, and 32 bit.
From: Charles R H. <cha...@gm...> - 2006-11-02 04:58:11
On 11/1/06, Fernando Perez <fpe...@gm...> wrote:
> On 11/1/06, Travis Oliphant <oli...@ie...> wrote:
>> Apparently some dgesdd libraries don't actually compute the correct
>> value for the work-space size if requested.
>>
>> This results in an ** ILLEGAL value and program termination from LAPACK.
>>
>> I've added code in the latest SVN to that particular wrapper to make
>> sure the query gives back the minimums at least, but I'd like to see
>> who else has the problem.
>>
>> On my system, doing numpy.linalg.svd on a 25 x 50 array pulled the
>> trigger.
>>
>> Anybody else have that problem with their lapack library?
>
> Seems fine here:

Fine here also: 1.0.1.dev3416

Fedora 6, ATLAS and LAPACK from extras.

Chuck
From: Scott R. <sr...@nr...> - 2006-11-02 04:55:50
On Wed, Nov 01, 2006 at 08:16:59PM -0700, Tim Hochberg wrote:
> Travis Oliphant wrote:
>> Robert Kern wrote:
>>> Travis Oliphant wrote:
>>>> It looks like 1.0-x is doing the right thing.
>>>>
>>>> The problem is 1.0*x for matrices is going to float64. For arrays it
>>>> returns float32 just like the 1.0-x
>>>>
>>> Why is this the right thing? Python floats are float64.
>>>
>> Yeah, why indeed. Must be something with the scalar coercion code...
>
> This is one of those things that pops up every few years. I suspect that
> the best thing to do here is to treat 1.0, and all Python floats, as
> having a kind (float) but no precision. Or, equivalently, treat them as
> the smallest precision floating point value. The rationale behind this
> is that otherwise float32 arrays will be promoted whenever they are
> multiplied by Python floating point scalars. If Python floats are
> treated as float64 for purposes of determining output precision, then
> anyone using float32 arrays is going to have to wrap all of their
> literals in float32 to prevent inadvertent upcasting to float64. This
> was the origin of the (rather clunky) numarray spacesaver flag.

I'm one of those people who made serious use of that clunky spacesaver
flag for precisely this reason. I deal with several GB arrays of 32-bit
floats (or 32-bit x2 complex numbers) on a regular basis. Having automatic
upcasting from scalar operations can be a royal pain.

Scott

--
Scott M. Ransom             Address:  NRAO
Phone:  (434) 296-0320                520 Edgemont Rd.
email:  sr...@nr...                   Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
From: Charles R H. <cha...@gm...> - 2006-11-02 04:52:35
On 11/1/06, George Sakkis <geo...@gm...> wrote:
> Albert Strasheim wrote:
>> Check the thread "Strange results when sorting array with fields" from
>> about a week back. Travis made some changes to sorting in the presence
>> of fields that should solve your problem, assuming your fields appear
>> in the order you want to sort (i.e. you want to sort f1, f2, f3 and
>> not something like f1, f3, f2).
>
> I'm afraid this won't help in my case; I want to sort twice, once by f1
> and once by f2. I guess I could make a second file with the fields
> swapped but this seems more messy and inefficient than Francesc's
> suggestion.

Do you actually want the two different orders, or do you want to sort on
the first field, then sort all the items with the same first field on the
second field?

Chuck
From: Fernando P. <fpe...@gm...> - 2006-11-02 04:43:25
On 11/1/06, Travis Oliphant <oli...@ie...> wrote:
> Apparently some dgesdd libraries don't actually compute the correct
> value for the work-space size if requested.
>
> This results in an ** ILLEGAL value and program termination from LAPACK.
>
> I've added code in the latest SVN to that particular wrapper to make
> sure the query gives back the minimums at least, but I'd like to see
> who else has the problem.
>
> On my system, doing numpy.linalg.svd on a 25 x 50 array pulled the
> trigger.
>
> Anybody else have that problem with their lapack library?

Seems fine here:

In [4]: N.linalg.svd(N.random.rand(25,50));

In [5]: N.linalg.svd(N.random.rand(25,50));

In [6]: N.__version__
Out[6]: '1.0.1.dev3423'

Ubuntu Dapper (6.06) box, up to date. I'm using the ubuntu-supplied ATLAS
and LAPACK; I can give you more info if needed.

Cheers,

f
From: George S. <geo...@gm...> - 2006-11-02 04:42:19
Albert Strasheim wrote:
> Check the thread "Strange results when sorting array with fields" from
> about a week back. Travis made some changes to sorting in the presence
> of fields that should solve your problem, assuming your fields appear
> in the order you want to sort (i.e. you want to sort f1, f2, f3 and not
> something like f1, f3, f2).

I'm afraid this won't help in my case; I want to sort twice, once by f1
and once by f2. I guess I could make a second file with the fields swapped
but this seems more messy and inefficient than Francesc's suggestion.

George
From: Travis O. <oli...@ie...> - 2006-11-02 04:39:45
Apparently some dgesdd libraries don't actually compute the correct value
for the work-space size if requested.

This results in an ** ILLEGAL value and program termination from LAPACK.

I've added code in the latest SVN to that particular wrapper to make sure
the query gives back the minimums at least, but I'd like to see who else
has the problem.

On my system, doing numpy.linalg.svd on a 25 x 50 array pulled the trigger.

Anybody else have that problem with their lapack library?

-Travis
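A minimal reproduction of the trigger Travis describes, for anyone wanting to test their own LAPACK build (on an affected library this terminates the process from inside LAPACK rather than raising a Python exception):

    import numpy as np

    # SVD of a 25 x 50 array: the shape reported to pull the trigger.
    a = np.random.rand(25, 50)
    u, s, vt = np.linalg.svd(a)
    print(u.shape, s.shape, vt.shape)  # succeeds on a healthy build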
From: Charles R H. <cha...@gm...> - 2006-11-02 03:36:50
On 11/1/06, Tim Hochberg <tim...@ie...> wrote:
> Travis Oliphant wrote:
>> Robert Kern wrote:
>>> Travis Oliphant wrote:
>>>> It looks like 1.0-x is doing the right thing.
>>>>
>>>> The problem is 1.0*x for matrices is going to float64. For arrays it
>>>> returns float32 just like the 1.0-x
>>>>
>>> Why is this the right thing? Python floats are float64.
>>>
>> Yeah, why indeed. Must be something with the scalar coercion code...
>
> This is one of those things that pops up every few years. I suspect that
> the best thing to do here is to treat 1.0, and all Python floats, as
> having a kind (float) but no precision. Or, equivalently, treat them as
> the smallest precision floating point value. The rationale behind this
> is that otherwise float32 arrays will be promoted whenever they are
> multiplied by Python floating point scalars. If Python floats are
> treated as float64 for purposes of determining output precision, then
> anyone using float32 arrays is going to have to wrap all of their
> literals in float32 to prevent inadvertent upcasting to float64. This
> was the origin of the (rather clunky) numarray spacesaver flag.
>
> It's no skin off my nose either way, since I pretty much never use
> float32, but I suspect that treating Python floats equivalently to
> float64 scalars would be a mistake. At the very least it deserves a bit
> of discussion.

Well, I think that the present convention of having the array float type
determine the output type when doing a binary op with a scalar makes sense.
The question is what to do when the initial array is an integer type and
needs to be promoted. I could see:

1) coercing the scalar float to integer, which is probably consistent
   with the treatment of integer types (boo);

2) requiring explicit use of float types, i.e., float64(1.0), which is a
   bit clumsy;

3) promoting to float64 by default and expecting the user to specify
   float32(1.0) when needed.

I prefer 3, as float32 is probably not the most used data type. So the
rules would be:

numpy_int array + python_int -- type numpy_int
numpy_int array + python_flt -- type float64
numpy_int array + numpy_flt  -- type numpy_flt
numpy_flt array + python_flt -- type numpy_flt

Seems a bit much to remember, but things always get complicated when you
want to control the types. Mind that going from int64 to float64 can lead
to loss of precision.

Chuck
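A small probe of how a given numpy build actually promotes these combinations; note that Chuck's table above is a proposal, so what this prints depends on the installed version:

    import numpy as np

    # Cross array dtypes against Python scalar kinds and report the
    # promoted result dtype for each combination.
    for arr_dtype in (np.int64, np.float32):
        a = np.ones(3, dtype=arr_dtype)
        for scalar in (1, 1.0):
            print(np.dtype(arr_dtype).name, '*', type(scalar).__name__,
                  '->', (a * scalar).dtype)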
From: Tim H. <tim...@ie...> - 2006-11-02 03:27:39
A. M. Archibald wrote:
> On 01/11/06, Bill Baxter <wb...@gm...> wrote:
>> What's the reason iterators are not supported currently?
>> For instance A[range(0,4)] works for a 1d A, but A[xrange(0,4)] does
>> not. Are iterators just too inefficient to bother with?
>
> If you want an iterator back, a generator comprehension will do it:
>
> (A[i] for i in xrange(0,4))
>
> If the result is to be a numpy array, the size must be known ahead of
> time (so that the memory can be allocated) and specified. At this
> point, and considering the overhead in calling back and forth to the
> generator, the cost of converting to a list (after all,
> A[list(xrange(0,40))] should work fine) isn't necessarily worth the
> trouble.

Another option for converting an iterator to a sequence that might be
helpful, particularly if the result is large, is to use fromiter. For
example:

a[numpy.fromiter(xrange(0, 40, 2), int)]

-tim
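A self-contained version of Tim's fromiter example, spelled with range since xrange is Python 2; the optional count pre-sizes the output when the iterator's length is known:

    import numpy as np

    a = np.arange(100)

    # Materialize the iterator into an index array without building an
    # intermediate Python list.
    idx = np.fromiter(range(0, 40, 2), dtype=int, count=20)
    print(a[idx])  # every other element among the first forty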
From: A. M. A. <per...@gm...> - 2006-11-02 03:20:30
On 01/11/06, Bill Baxter <wb...@gm...> wrote:
> What's the reason iterators are not supported currently?
> For instance A[range(0,4)] works for a 1d A, but A[xrange(0,4)] does not.
> Are iterators just too inefficient to bother with?

If you want an iterator back, a generator comprehension will do it:

(A[i] for i in xrange(0,4))

If the result is to be a numpy array, the size must be known ahead of time
(so that the memory can be allocated) and specified. At this point, and
considering the overhead in calling back and forth to the generator, the
cost of converting to a list (after all, A[list(xrange(0,40))] should work
fine) isn't necessarily worth the trouble.

A. M. Archibald
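A quick sketch of the two options described above, in Python 3 spelling (range in place of xrange); A here is a hypothetical 1-d array:

    import numpy as np

    A = np.arange(10) * 10

    # Option 1: a lazy iterator of results; nothing is allocated up front.
    gen = (A[i] for i in range(0, 4))
    print(list(gen))  # the four values 0, 10, 20, 30

    # Option 2: convert the index iterable to a list so numpy can size
    # the result array before filling it.
    print(A[list(range(0, 4))])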
From: Charles R H. <cha...@gm...> - 2006-11-02 03:19:46
On 11/1/06, Keith Goodman <kwg...@gm...> wrote:
> On 11/1/06, Travis Oliphant <oli...@ee...> wrote:
>> It looks like 1.0-x is doing the right thing.
>>
>> The problem is 1.0*x for matrices is going to float64. For arrays it
>> returns float32 just like the 1.0-x
>>
>> This can't be changed at this point until 1.1
>>
>> We will fix the bug in 1.0*x producing float64, however. I'm still not
>> sure what's causing it, though.
>
> I think it would be great if float64 was the default in numpy. That way
> most people wouldn't have to worry about dtypes when crunching numbers.
> And then numpy could apply for a trademark on 'it just works'.
>
> Having to worry about dtypes makes users (me) nervous.
>
> I imagine a change like this would not be an overnight change, more of
> a long-term goal.
>
> This one, from a previous thread, also makes me nervous:
>
> >> sum(M.ones((300,1)) == 1)
> matrix([[44]], dtype=int8)

That one seems to be fixed:

In [1]: sum(ones((300,1)) == 1)
Out[1]: 300

In [2]: (ones((300,1)) == 1).sum()
Out[2]: 300

The matrix version also returns a numpy scalar, however:

In [20]: sum(matrix(ones((300,1)) == 1))
Out[20]: 300

I wonder if that is expected?

Chuck
From: Tim H. <tim...@ie...> - 2006-11-02 03:17:19
Travis Oliphant wrote:
> Robert Kern wrote:
>> Travis Oliphant wrote:
>>> It looks like 1.0-x is doing the right thing.
>>>
>>> The problem is 1.0*x for matrices is going to float64. For arrays it
>>> returns float32 just like the 1.0-x
>>>
>> Why is this the right thing? Python floats are float64.
>>
> Yeah, why indeed. Must be something with the scalar coercion code...

This is one of those things that pops up every few years. I suspect that
the best thing to do here is to treat 1.0, and all Python floats, as
having a kind (float) but no precision. Or, equivalently, treat them as
the smallest precision floating point value. The rationale behind this is
that otherwise float32 arrays will be promoted whenever they are
multiplied by Python floating point scalars. If Python floats are treated
as float64 for purposes of determining output precision, then anyone using
float32 arrays is going to have to wrap all of their literals in float32
to prevent inadvertent upcasting to float64. This was the origin of the
(rather clunky) numarray spacesaver flag.

It's no skin off my nose either way, since I pretty much never use
float32, but I suspect that treating Python floats equivalently to float64
scalars would be a mistake. At the very least it deserves a bit of
discussion.

-tim
From: Keith G. <kwg...@gm...> - 2006-11-02 01:50:20
On 11/1/06, Travis Oliphant <oli...@ee...> wrote:
> It looks like 1.0-x is doing the right thing.
>
> The problem is 1.0*x for matrices is going to float64. For arrays it
> returns float32 just like the 1.0-x
>
> This can't be changed at this point until 1.1
>
> We will fix the bug in 1.0*x producing float64, however. I'm still not
> sure what's causing it, though.

I think it would be great if float64 was the default in numpy. That way
most people wouldn't have to worry about dtypes when crunching numbers.
And then numpy could apply for a trademark on 'it just works'.

Having to worry about dtypes makes users (me) nervous.

I imagine a change like this would not be an overnight change, more of a
long-term goal.

This one, from a previous thread, also makes me nervous:

>> sum(M.ones((300,1)) == 1)
matrix([[44]], dtype=int8)

But float64 might not make sense here.
From: Bill B. <wb...@gm...> - 2006-11-02 01:44:44
Has any thought been given to using compressed or functional
representations of index sets? For instance, there could be a where-like
function that returns an object that can generate a set of indexes on the
fly, rather than explicitly allocating arrays and enumerating all of the
indices.

What's the reason iterators are not supported currently? For instance,
A[range(0,4)] works for a 1d A, but A[xrange(0,4)] does not. Are iterators
just too inefficient to bother with?

I could imagine an iterator that generates a set of N-tuples for an N-d
array being a legal indexing construct. Even something like

A[idx_iterator] = value_iterator

would seem to make sense.

--bb
From: Rob H. <he...@ta...> - 2006-11-01 23:15:21
One small change -- on my Intel Mac, I need to link to
/usr/local/lib/libsndfile.1.dylib instead of the .so file. Then everything
works!!!

This is truly a great thing -- my two loves, numpy and audio files,
together forever...

-r

On Oct 31, 2006, at 7:37 AM, David Cournapeau wrote:
> Hi,
>
> I improved pyaudio last weekend using indications given by various
> people on the list or privately, and as I finally got the motivation to
> set up something which looks like a webpage, there is a doc with
> examples which show how to use it. The API to open files for writing is
> much saner, and the setup.py should be smart enough to grab all the
> information necessary for the wrapper, including the location of the
> shared libsndfile:
>
> download:
> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudio/#installation
> doc + examples:
> http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudio/
>
> I would appreciate reports on platforms which are not linux (windows,
> mac os X) to see if my setup.py works there,
>
> Cheers,
>
> David

----
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331
From: Albert S. <fu...@gm...> - 2006-11-01 22:46:35
Hey George

On Tue, 31 Oct 2006, George Sakkis wrote:
> Is there a more elegant and/or faster way to read some records from a
> file and then sort them by different fields? What I have now is too
> specific and error-prone in general:
>
> import numpy as N
> records = N.fromfile(a_file, dtype=N.dtype('i2,i4'))
> records_by_f0 = records.take(records.getfield('i2').argsort())
> records_by_f1 = records.take(records.getfield('i4',2).argsort())
>
> If there's a better way, I'd like to see it; bonus points for in-place
> sorting.

Check the thread "Strange results when sorting array with fields" from
about a week back. Travis made some changes to sorting in the presence of
fields that should solve your problem, assuming your fields appear in the
order you want to sort (i.e. you want to sort f1, f2, f3 and not something
like f1, f3, f2).

Cheers,

Albert
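A hedged sketch of the two sorts George is after, with made-up data standing in for his file; the order= keyword is the field-aware sorting Albert refers to, assuming a numpy recent enough to include Travis's changes:

    import numpy as np

    # Hypothetical records standing in for the ('i2', 'i4') file contents.
    records = np.zeros(5, dtype=[('f0', 'i2'), ('f1', 'i4')])
    records['f0'] = [3, 1, 2, 5, 4]
    records['f1'] = [10, 50, 30, 20, 40]

    # Two independent orders via argsort on each field by name, avoiding
    # the offset bookkeeping of getfield.
    by_f0 = records[np.argsort(records['f0'])]
    by_f1 = records[np.argsort(records['f1'])]

    # In-place lexical sort on chosen fields.
    records.sort(order=['f0', 'f1'])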
From: Travis O. <oli...@ee...> - 2006-11-01 22:44:15
Robert Kern wrote:
> Travis Oliphant wrote:
>> It looks like 1.0-x is doing the right thing.
>>
>> The problem is 1.0*x for matrices is going to float64. For arrays it
>> returns float32 just like the 1.0-x
>
> Why is this the right thing? Python floats are float64.

Yeah, why indeed. Must be something with the scalar coercion code...

-Travis
From: Charles R H. <cha...@gm...> - 2006-11-01 22:21:43
On 11/1/06, Robert Kern <rob...@gm...> wrote:
> Travis Oliphant wrote:
>> It looks like 1.0-x is doing the right thing.
>>
>> The problem is 1.0*x for matrices is going to float64. For arrays it
>> returns float32 just like the 1.0-x
>
> Why is this the right thing? Python floats are float64.

Same question here. Float32 is a designer float for special occasions;
float64 is for everyday use.

Chuck
From: Robert K. <rob...@gm...> - 2006-11-01 21:57:58
Travis Oliphant wrote:
> It looks like 1.0-x is doing the right thing.
>
> The problem is 1.0*x for matrices is going to float64. For arrays it
> returns float32 just like the 1.0-x

Why is this the right thing? Python floats are float64.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
From: Travis O. <oli...@ee...> - 2006-11-01 21:51:44
Keith Goodman wrote:
> I had a hard time tracing a bug in my code. The culprit was this
> difference:
>
> >>> x
> matrix([[True],
>         [True],
>         [True]], dtype=bool)
>
> >>> 1.0 - x
> matrix([[ 0.],
>         [ 0.],
>         [ 0.]], dtype=float32) <------- float32
>
> >>> 1.0*x
> matrix([[ 1.],
>         [ 1.],
>         [ 1.]]) <-------- float64
>
> >>> numpy.__version__
> '1.0rc1'
>
> Any chance that 1.0 - x could return dtype=float64?

It looks like 1.0-x is doing the right thing.

The problem is 1.0*x for matrices is going to float64. For arrays it
returns float32, just like the 1.0-x.

This can't be changed at this point until 1.1.

We will fix the bug in 1.0*x producing float64, however. I'm still not
sure what's causing it, though.

-Travis
From: Travis O. <oli...@ee...> - 2006-11-01 21:43:51
Keith Goodman wrote:
> I had a hard time tracing a bug in my code. The culprit was this
> difference:
>
> >>> x
> matrix([[True],
>         [True],
>         [True]], dtype=bool)
>
> >>> 1.0 - x
> matrix([[ 0.],
>         [ 0.],
>         [ 0.]], dtype=float32) <------- float32
>
> >>> 1.0*x
> matrix([[ 1.],
>         [ 1.],
>         [ 1.]]) <-------- float64
>
> >>> numpy.__version__
> '1.0rc1'
>
> Any chance that 1.0 - x could return dtype=float64?

I'm surprised it doesn't. Both should follow basically the same code path.
Perhaps there is a missing function loop or something. I'll look into it.

-Travis