From: A. M. A. <per...@gm...> - 2006-11-02 21:26:11

On 02/11/06, Francesc Altet <fa...@ca...> wrote:
> I see this as a major issue in numarray, and it puts in great danger
> the support of PyTables for numarray that we had planned for some time
> (until end of 2007). It would be nice to know if the numarray crew
> would be willing to address this, or whether, now that NumPy 1.0 is
> out, they have decided to completely drop support for it.
>
> We'd really like to continue offering support for numarray (in the
> end, it is a very good piece of software) in PyTables, but not having
> a solution for this problem anytime soon will make this very
> problematic for us.

Someone has to say it: you could just drop support for the obsolete
numarray and provide only support for its successor, numpy.

A. M. Archibald
From: Todd M. <jm...@st...> - 2006-11-02 21:04:57

Here's a stab at a solution, but I don't have easy access to 64-bit
Linux at the moment, so it is untested. If someone is willing to test
it (and/or fix it) I'll add it to the next numarray release.

It should be noted that numarray is not 64-bit enabled (it compiles as
a 32-bit program in terms of arrays), which is another motivation to
switch to numpy.

Regards,
Todd

Francesc Altet wrote:
> Hi,
>
> I've detected that numarray (1.5.2) seems to be bitten by the change
> in Python 2.5 for indexes
> (http://docs.python.org/whatsnew/pep-353.html). On a Linux64 machine
> (using Python 2.5), I get this:
>
> >>> a=numarray.array([1,2,3])
> >>> a[1:2]
> array([1, 2])   # !
>
> However, both Numeric and numpy seem to work well in the same
> scenario:
>
> >>> b=Numeric.array([1,2,3])
> >>> b[1:2]
> array([2])
>
> >>> c=numpy.array([1,2])
> >>> c[1:2]
> array([2])
>
> I see this as a major issue in numarray, and it puts in great danger
> the support of PyTables for numarray that we had planned for some
> time (until end of 2007). It would be nice to know if the numarray
> crew would be willing to address this, or whether, now that NumPy 1.0
> is out, they have decided to completely drop support for it.
>
> We'd really like to continue offering support for numarray (in the
> end, it is a very good piece of software) in PyTables, but not having
> a solution for this problem anytime soon will make this very
> problematic for us.
>
> Thanks,
From: Nikhil P. <NPa...@lb...> - 2006-11-02 20:35:52

Hi,

I recently tried to upgrade to Python 2.5 (MacPython), and I am getting
segfaults on some of my f2py-wrapped Fortran codes. I'm running
OS X 10.4.8 (ppc), and this occurs for both numpy 1.0 and 1.0rc2 (the
two versions I tried). Running with Python 2.4 does not give me this
error. This is using the IBM xlf compiler. A simple test case is below.
Not sure where to go from here -- any help would be great!

Thanks,

Here is a stripped-down version that still crashes:

      subroutine foo(xx, nxx, bj)
      implicit none
      integer nxx
      real*8 xx(nxx), bj(nxx)
!f2py intent(out) bj
      bj = xx
      end subroutine foo

Compiling this to foo.so, I get:

Python 2.5 (r25:51918, Sep 19 2006, 08:49:13)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
>>> import foo as p
>>> a = arange(10)
>>> p.foo(a)
Segmentation fault

Intriguingly, swapping the import order fixes this:

Python 2.5 (r25:51918, Sep 19 2006, 08:49:13)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import foo as p
>>> from numpy import *
>>> a = arange(10)
>>> p.foo(a)
array([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.])
>>>

------------------------------------
Nikhil Padmanabhan
NPa...@lb...
nikhil@(510) 495-2943
From: Torgil S. <tor...@gm...> - 2006-11-02 19:32:55

> generating such a PDF, with some color and formatting tweaks so that
> it prints legibly on black and white as well as looking nice on
> screen.

This sounds promising. I actually have had problems printing the guide:
every information box showed up completely black except for the emblem.

On 11/2/06, Fernando Perez <fpe...@gm...> wrote:
> On 11/2/06, Travis Oliphant <oli...@ee...> wrote:
> > > Note that this is not a request to Travis to send me the latest
> > > version by private email. That would be inefficient and my need
> > > is not that urgent. Nevertheless I think that issue should be
> > > settled.
> >
> > There will be an update, soon. I'm currently working on the index,
> > corrections, and formatting issues.
>
> If I may make a suggestion, it would greatly increase the usability
> of the PDF if it had internal and external clickable links, as well
> as a PDF table of contents (the one that appears on the left sidebar
> of acroread).
>
> Since I know the original is in LyX, here's my standard preamble for
> these kinds of documents, which includes all the goodies for
> generating such a PDF, with some color and formatting tweaks so that
> it prints legibly on black and white as well as looking nice on
> screen. If you add this to your preamble, a simple 'view->pdflatex'
> should work. I'll be happy to help if you have any issues.
>
> Cheers,
>
> f
>
> [LaTeX preamble snipped; it is quoted in full in Fernando's original
> message]
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Francesc A. <fa...@ca...> - 2006-11-02 19:26:44

Hi,

I've detected that numarray (1.5.2) seems to be bitten by the change in
Python 2.5 for indexes (http://docs.python.org/whatsnew/pep-353.html).
On a Linux64 machine (using Python 2.5), I get this:

>>> a=numarray.array([1,2,3])
>>> a[1:2]
array([1, 2])   # !

However, both Numeric and numpy seem to work well in the same scenario:

>>> b=Numeric.array([1,2,3])
>>> b[1:2]
array([2])

>>> c=numpy.array([1,2])
>>> c[1:2]
array([2])

I see this as a major issue in numarray, and it puts in great danger
the support of PyTables for numarray that we had planned for some time
(until end of 2007). It would be nice to know if the numarray crew
would be willing to address this, or whether, now that NumPy 1.0 is
out, they have decided to completely drop support for it.

We'd really like to continue offering support for numarray (in the
end, it is a very good piece of software) in PyTables, but not having a
solution for this problem anytime soon will make this very problematic
for us.

Thanks,

-- 
Francesc Altet  |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works,
www.carabos.com  |  I haven't tested it.  -- Donald Knuth
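[Editorial aside: for anyone wanting a quick regression check for the slicing problem described above, a minimal script like the following (illustrative, not from the original thread) exercises the same case. On a correctly built stack the slice has exactly one element:]

```python
# Minimal regression check for the PEP 353 / Py_ssize_t slicing issue
# described above. On a correct build, a[1:2] selects exactly one element.
import numpy

a = numpy.array([1, 2, 3])
s = a[1:2]
assert s.tolist() == [2], "broken slicing: got %r" % (s.tolist(),)
print("slicing OK:", s.tolist())
```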
From: Fernando P. <fpe...@gm...> - 2006-11-02 19:07:49

On 10/23/06, Travis Oliphant <oli...@ie...> wrote:
> I've placed them in SVN (r3384):
>
> arraydescr_dealloc needs to do something like:
>
>     if (self->fields == Py_None) {
>         /* print something */
>         Py_INCREF(self);
>         return;
>     }
>
> Most likely there is a missing Py_INCREF() before some call that uses
> the data-type object (and consumes its reference count) --- do you
> have any Pyrex code? (It's harder to get it right with Pyrex.)

OK, we've completed another long run (several days), and this time it
didn't crash. But I think there are still refcount problems. I'm
attaching the full log file and a plot of the refcount. It's wrapping
around, and after some point the increase switches to a perfectly
linear pattern; I'm not exactly sure why (it could be a change in the
underlying code after the initialization phase; it's not my code, so I
don't know its internals).

I hope this helps; it would be nice to track this down before 1.0.1 is
out.

Cheers,

f
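[Editorial aside: a hedged sketch of how this kind of descriptor refcount leak can be watched for from pure Python — sample sys.getrefcount() around the suspect operation. The names and the stand-in operation here are illustrative, not Fernando's actual code:]

```python
# Sample the dtype's refcount before and after running a suspect
# operation many times; a steadily growing count suggests a leak like
# the one discussed in this thread. Illustrative sketch only.
import sys
import numpy as np

dt = np.dtype(np.float64)
before = sys.getrefcount(dt)
for _ in range(1000):
    np.zeros(4, dtype=dt)   # stand-in for the operation under suspicion
after = sys.getrefcount(dt)
print("refcount delta:", after - before)   # 0 if nothing leaks
```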
From: Alan G I. <ai...@am...> - 2006-11-02 17:12:15

On Thu, 2 Nov 2006, Daran Rife apparently wrote:
> Once a user purchases the document, how does s/he obtain
> the latest version, when it becomes available?

In addition to providing updates on an "as requested" basis, Travis has
made it clear that he will occasionally do a mass mailing. (At least it
seems clear to me...)

I have a different suggestion for Travis: I suggest each purchaser
receive an ID which can be used for a download twice a year. That will
put these enquiries to rest.

> Looking forward to purchasing a copy, to show support for
> NumPy--one of the best (free) scientific computing
> packages available.

It is also a very nice book. If you had to repurchase it once every
year or two, which you do not, it would still be a good deal. So don't
hesitate! And make sure your library and labs buy copies too!

Cheers,
Alan Isaac
From: Daran R. <dr...@uc...> - 2006-11-02 16:55:02

Hi Travis,

I have one question about the fee-based "Guide to NumPy" that doesn't
seem to be addressed on the Web site
<http://www.tramy.us/guidetoscipy.html> or in the FAQ. Once a user
purchases the document, how does s/he obtain the latest version, when
it becomes available?

Looking forward to purchasing a copy, to show support for NumPy--one of
the best (free) scientific computing packages available.

Thanks for fielding my question,

Daran

--

Date: Thu, 02 Nov 2006 08:41:11 -0700
From: Travis Oliphant <oli...@ee...>
Subject: Re: [Numpy-discussion] numpy book
To: Discussion of Numerical Python <num...@li...>
Message-ID: <454...@ee...>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

> > Note that this is not a request to Travis to send me the latest
> > version by private email. That would be inefficient and my need is
> > not that urgent. Nevertheless I think that issue should be settled.
>
> There will be an update, soon. I'm currently working on the index,
> corrections, and formatting issues. The update will be sent in
> conjunction with the release of 1.0.1, which I am targeting in two
> weeks. Thanks for everybody's patience. If you need an update now,
> I'm happy to send it to you by private email.
>
> Best regards,
>
> -Travis O.
From: Travis O. <oli...@ee...> - 2006-11-02 16:32:55

George Sakkis wrote:
> Charles R Harris wrote:
> > On 11/1/06, George Sakkis <geo...@gm...> wrote:
> > > Albert Strasheim wrote:
> > > > Check the thread "Strange results when sorting array with
> > > > fields" from about a week back. Travis made some changes to
> > > > sorting in the presence of fields that should solve your
> > > > problem, assuming your fields appear in the order you want to
> > > > sort (i.e. you want to sort f1, f2, f3 and not something like
> > > > f1, f3, f2).
> > >
> > > I'm afraid this won't help in my case; I want to sort twice, once
> > > by f1 and once by f2. I guess I could make a second file with the
> > > fields swapped, but this seems more messy and inefficient than
> > > Francesc's suggestion.
> >
> > Do you actually want the two different orders, or do you want to
> > sort on the first field, then sort all the items with the same
> > first field on the second field?
>
> The former.

Sorting on a particular field in place would be possible if there were
some way to indicate to VOID_compare the field order you wanted to use
for the comparison. There are a few ways I can think of doing this:

1) Use a thread-specific global variable (this doesn't recurse very
easily).

2) Use the meta-object in the field specifier to indicate the order
(the interface could still be something like .sort(order='f1'), with a
temporary data-type object created and used).

3) Use a special key in the fields dictionary, although this would
require fixes to all the code that cycles through the fields dictionary
to recurse on structures.

4) Overload one of the other variables in the PyArray_Descr *
structure.

5) Add a sort order to the end of the PyArray_Descr * structure, plus a
flag in the hasobject flag bits (that would be the last one available)
stating that the data-type object has the sort order defined (so binary
compatibility is retained but the new feature can be used going
forward).

Any other ideas?

-Travis
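[Editorial aside: option (2)'s proposed spelling is essentially what numpy's structured-array sorting looks like in released versions since then. A small sketch, with the field names f1/f2 taken from the thread:]

```python
# Sorting a structured array in place by one field at a time, using the
# .sort(order=...) spelling proposed in option (2) above.
import numpy as np

a = np.array([(2, 'b'), (1, 'c'), (1, 'a')],
             dtype=[('f1', 'i4'), ('f2', 'U1')])

a.sort(order='f1')             # sort by field f1
print(a['f1'].tolist())        # [1, 1, 2]

a.sort(order='f2')             # independently, re-sort by field f2
print(a['f2'].tolist())        # ['a', 'b', 'c']
```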
From: George S. <geo...@gm...> - 2006-11-02 16:18:41

Charles R Harris wrote:
> On 11/1/06, George Sakkis <geo...@gm...> wrote:
> > Albert Strasheim wrote:
> > > Check the thread "Strange results when sorting array with fields"
> > > from about a week back. Travis made some changes to sorting in the
> > > presence of fields that should solve your problem, assuming your
> > > fields appear in the order you want to sort (i.e. you want to sort
> > > f1, f2, f3 and not something like f1, f3, f2).
> >
> > I'm afraid this won't help in my case; I want to sort twice, once by
> > f1 and once by f2. I guess I could make a second file with the
> > fields swapped, but this seems more messy and inefficient than
> > Francesc's suggestion.
>
> Do you actually want the two different orders, or do you want to sort
> on the first field, then sort all the items with the same first field
> on the second field?

The former.

George
From: Tim H. <tim...@ie...> - 2006-11-02 16:15:33

Travis Oliphant wrote:
> [earlier exchange between Robert Kern and Tim Hochberg snipped; it is
> quoted in full in Travis's message elsewhere in this thread]
>
> The first basic rule is that scalars don't control the precision of
> the output when doing mixed-type calculations *except* when they are
> of a fundamentally different kind.
>
> Then (if a different kind of scalar is used), the rule is that the
> arrays will be upcast to the "lowest" precision in the group that
> preserves overall precision. So, when a bool is combined with a
> "float" kind of scalar, the result is float32 because that preserves
> the precision of the bool. Remember, it is array precision that takes
> precedence over scalars in mixed-type array-scalar operations.
>
> This is the rule. I agree that this rule is probably flawed in
> certain circumstances.

I think any rule will be flawed in certain circumstances. This
particular rule has the advantage of being relatively straightforward,
and the circumstances I can think of where it could cause problems are
relatively limited and relatively easy to address. The obvious "fixes"
to this rule that I have thought of all have problems as bad as or
worse than the current rule, with the added disadvantage of being more
complicated.

At the very least, any replacement rule should get some serious
discussion here before being implemented. We should particularly
solicit the input of numarray users, since that package had more
infrastructure in place to support the use of lower-precision arrays.

> So, what should be done about it at this point? Do you think a change
> is acceptable for 1.0.1 or does it need to wait a year until 1.1?

Unless someone can come up with a convincingly better solution, I say
leave things as is indefinitely.

-tim
From: Fernando P. <fpe...@gm...> - 2006-11-02 16:07:25

On 11/2/06, Travis Oliphant <oli...@ee...> wrote:
> Thanks for the preamble. Does it require pdflatex? I use ps2pdf
> because I need to generate the shaded boxes and graphics. I could
> probably try to do it with pdflatex and png files, but I haven't
> tried yet.

I just tested it with ps2pdf on a document, and it worked fine. If you
have problems, play with commenting (or not) the first option line to
hyperref:

\usepackage[
%pdftex, % needed for pdflatex

In the copy I gave you it's commented out, so it /should/ work for
ps2pdf. But I've had documents where I've had to uncomment it, and I
think at that point they stop working with ps2pdf, and you lose the
use of PostScript specials in that case (which I use for things like
whole-page DRAFT watermarks).

Cheers,

f
From: Travis O. <oli...@ee...> - 2006-11-02 16:03:25

Fernando Perez wrote:
> On 11/2/06, Travis Oliphant <oli...@ee...> wrote:
> > > Note that this is not a request to Travis to send me the latest
> > > version by private email. That would be inefficient and my need
> > > is not that urgent. Nevertheless I think that issue should be
> > > settled.
> >
> > There will be an update, soon. I'm currently working on the index,
> > corrections, and formatting issues.
>
> If I may make a suggestion, it would greatly increase the usability
> of the PDF if it had internal and external clickable links, as well
> as a PDF table of contents (the one that appears on the left sidebar
> of acroread).
>
> Since I know the original is in LyX, here's my standard preamble for
> these kinds of documents, which includes all the goodies for
> generating such a PDF, with some color and formatting tweaks so that
> it prints legibly on black and white as well as looking nice on
> screen. If you add this to your preamble, a simple 'view->pdflatex'
> should work. I'll be happy to help if you have any issues.

Thanks for the preamble. Does it require pdflatex? I use ps2pdf because
I need to generate the shaded boxes and graphics. I could probably try
to do it with pdflatex and png files, but I haven't tried yet.

-Travis
From: Fernando P. <fpe...@gm...> - 2006-11-02 16:00:55

On 11/2/06, Travis Oliphant <oli...@ee...> wrote:
> > Note that this is not a request to Travis to send me the latest
> > version by private email. That would be inefficient and my need is
> > not that urgent. Nevertheless I think that issue should be settled.
>
> There will be an update, soon. I'm currently working on the index,
> corrections, and formatting issues.

If I may make a suggestion, it would greatly increase the usability of
the PDF if it had internal and external clickable links, as well as a
PDF table of contents (the one that appears on the left sidebar of
acroread).

Since I know the original is in LyX, here's my standard preamble for
these kinds of documents, which includes all the goodies for generating
such a PDF, with some color and formatting tweaks so that it prints
legibly on black and white as well as looking nice on screen. If you
add this to your preamble, a simple 'view->pdflatex' should work. I'll
be happy to help if you have any issues.

Cheers,

f

%%% My preamble; tweak and season to taste:

% This gives us a better font in URL links (otherwise the default
% MonoSpace font is bitmapped, and it looks horrible in PDF)
\usepackage{courier}

\usepackage{fullpage}

\usepackage{color} % so we can use red for the fixme warnings

% The hyperref package gives us a pdf with properly built
% internal navigation ('pdf bookmarks' for the table of contents,
% internal cross-reference links, web links for URLs, etc.)

% A few colors to replace the defaults for certain link types
\definecolor{darkorange}{rgb}{.71,0.21,0.01}
\definecolor{darkgreen}{rgb}{.12,.54,.11}

\usepackage[
  %pdftex,          % needed for pdflatex
  breaklinks=true,  % so long urls are correctly broken across lines
  colorlinks=true,
  urlcolor=blue,
  linkcolor=darkorange,
  citecolor=darkgreen,
]{hyperref}

% This helps prevent overly long lines that stretch beyond the margins
\sloppy

% Define a \fixme command to mark visually things needing fixing in the
% draft. For final printing, or to simply disable these bright
% warnings, uncomment the \renewcommand redefinition below.

\newcommand{\fixme}[1] {
  \textcolor{red}{
    {\fbox{ {\bf FIX}
      \ensuremath{\blacktriangleright \blacktriangleright \blacktriangleright}}
      {\bf #1}
      \fbox{\ensuremath{ \blacktriangleleft \blacktriangleleft \blacktriangleleft }}
    }
  }
}
% Uncomment the next line to make the \fixme command be a no-op
%\renewcommand{\fixme}[1]{}

%%% If you also want to use the listings package for nicely formatted
%%% Python source code, this configuration produces good on-paper and
%%% on-screen results:

\definecolor{orange}{cmyk}{0,0.4,0.8,0.2}
% Use and configure listings package for nicely formatted code
\usepackage{listings}
\lstset{
  language=Python,
  basicstyle=\small\ttfamily,
  commentstyle=\ttfamily\color{blue},
  stringstyle=\ttfamily\color{orange},
  showstringspaces=false,
  breaklines=true,
  postbreak = \space\dots
}
From: Travis O. <oli...@ee...> - 2006-11-02 15:55:18

Jonathan Wang wrote:
> On numpy version 1.0, argmax and max give inconsistent results for an
> array of objects. I've seen this problem in both Python native
> datetime and mx.DateTime types:

There is a bug in argmax for OBJECT arrays in 1.0 (it's fixed in SVN
and will be in 1.0.1).

-Travis
From: Jonathan W. <jon...@gm...> - 2006-11-02 15:52:16

On numpy version 1.0, argmax and max give inconsistent results for an
array of objects. I've seen this problem in both Python native datetime
and mx.DateTime types:

In [22]: print nativeDates
[datetime.datetime(2006, 10, 18, 10, 11, 27),
 datetime.datetime(2006, 10, 18, 10, 16, 20),
 datetime.datetime(2006, 10, 18, 10, 21, 23),
 datetime.datetime(2006, 10, 18, 10, 31, 13),
 datetime.datetime(2006, 10, 18, 10, 39, 49),
 datetime.datetime(2006, 10, 18, 10, 53, 19),
 datetime.datetime(2006, 10, 18, 11, 23, 18),
 datetime.datetime(2006, 10, 18, 17, 18, 43),
 datetime.datetime(2006, 10, 18, 17, 21, 49),
 datetime.datetime(2006, 10, 18, 17, 24, 28),
 datetime.datetime(2006, 10, 18, 17, 28, 29),
 datetime.datetime(2006, 10, 18, 17, 31, 7),
 datetime.datetime(2006, 10, 18, 17, 36, 26),
 datetime.datetime(2006, 10, 19, 10, 17, 45),
 datetime.datetime(2006, 10, 19, 11, 23, 19),
 datetime.datetime(2006, 10, 19, 11, 58, 18),
 datetime.datetime(2006, 10, 19, 10, 27, 40),
 datetime.datetime(2006, 10, 19, 13, 17, 14),
 datetime.datetime(2006, 10, 19, 13, 21, 17),
 datetime.datetime(2006, 10, 19, 13, 23, 52),
 datetime.datetime(2006, 10, 19, 13, 29, 1)]

In [23]: numpy.argmax(nativeDates)
Out[23]: 0

In [24]: numpy.max(nativeDates)
Out[24]: datetime.datetime(2006, 10, 19, 13, 29, 1)

In [25]: nativeDates[0]
Out[25]: datetime.datetime(2006, 10, 18, 10, 11, 27)

I get the same results if I create an array from the list first:

In [28]: dateArr = numpy.array(nativeDates, dtype=object)

In [29]: print dateArr
[2006-10-18 10:11:27 2006-10-18 10:16:20 2006-10-18 10:21:23
 2006-10-18 10:31:13 2006-10-18 10:39:49 2006-10-18 10:53:19
 2006-10-18 11:23:18 2006-10-18 17:18:43 2006-10-18 17:21:49
 2006-10-18 17:24:28 2006-10-18 17:28:29 2006-10-18 17:31:07
 2006-10-18 17:36:26 2006-10-19 10:17:45 2006-10-19 11:23:19
 2006-10-19 11:58:18 2006-10-19 10:27:40 2006-10-19 13:17:14
 2006-10-19 13:21:17 2006-10-19 13:23:52 2006-10-19 13:29:01]

In [30]: numpy.argmax(dateArr)
Out[30]: 0

In [31]: numpy.max(dateArr)
Out[31]: datetime.datetime(2006, 10, 19, 13, 29, 1)

In [32]: dateArr[0]
Out[32]: datetime.datetime(2006, 10, 18, 10, 11, 27)

My guess is that it's related to some underlying memory layout; I've
gotten different results when running this.

Jonathan
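[Editorial aside: the invariant this report shows being violated can be checked directly. On releases with the fix Travis mentions, a snippet like the following, using a shortened version of the data above, passes:]

```python
# Check that argmax and max agree on an object array of datetimes --
# the invariant violated by the 1.0 bug reported above. Shortened data.
import datetime
import numpy as np

dates = [datetime.datetime(2006, 10, 18, 10, 11, 27),
         datetime.datetime(2006, 10, 19, 13, 29, 1),
         datetime.datetime(2006, 10, 18, 17, 36, 26)]
arr = np.array(dates, dtype=object)

i = np.argmax(arr)
assert arr[i] == np.max(arr)   # argmax and max must pick the same element
print("argmax/max consistent at index", int(i))
```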
From: Travis O. <oli...@ee...> - 2006-11-02 15:48:47

Robert Kern wrote:
> Tim Hochberg wrote:
> > Travis Oliphant wrote:
> > > Robert Kern wrote:
> > > > Travis Oliphant wrote:
> > > > > It looks like 1.0-x is doing the right thing.
> > > > >
> > > > > The problem is 1.0*x for matrices is going to float64. For
> > > > > arrays it returns float32, just like the 1.0-x.
> > > >
> > > > Why is this the right thing? Python floats are float64.
> > >
> > > Yeah, why indeed. Must be something with the scalar coercion
> > > code...
> >
> > This is one of those things that pops up every few years. I suspect
> > that the best thing to do here is to treat 1.0, and all Python
> > floats, as having a kind (float) but no precision. Or,
> > equivalently, treat them as the smallest-precision floating point
> > value. The rationale is that otherwise float32 arrays will be
> > promoted whenever they are multiplied by Python floating point
> > scalars. If Python floats are treated as float64 for purposes of
> > determining output precision, then anyone using float32 arrays is
> > going to have to wrap all of their literals in float32 to prevent
> > inadvertent upcasting to float64. This was the origin of the
> > (rather clunky) numarray spacesaver flag.
> >
> > It's no skin off my nose either way, since I pretty much never use
> > float32, but I suspect that treating Python floats equivalently to
> > float64 scalars would be a mistake. At the very least it deserves a
> > bit of discussion.
>
> Well, they *are* 64-bit floating point numbers. You simply can't get
> around that. That's why we now have all of the scalar types: you can
> get any precision scalars that you want as long as you are explicit
> about it (and explicit is better than implicit). The spacesaver flag
> was the only solution before the various scalar types existed. I'd
> like to suggest that the discussion already occurred some time ago
> and concluded in favor of the scalar types. Downcasting should be
> explicit.
>
> However, whether float32 arrays operated with Python float scalars
> give float32 or float64 arrays is tangential to my question. Does
> anyone actually think that a Python float operated with a boolean
> array should give a float32 result? Must we *up*cast a boolean array
> to float64 to preserve the precision of the scalar?

The first basic rule is that scalars don't control the precision of the
output when doing mixed-type calculations *except* when they are of a
fundamentally different kind.

Then (if a different kind of scalar is used), the rule is that the
arrays will be upcast to the "lowest" precision in the group that
preserves overall precision. So, when a bool is combined with a "float"
kind of scalar, the result is float32 because that preserves the
precision of the bool. Remember, it is array precision that takes
precedence over scalars in mixed-type array-scalar operations.

This is the rule. I agree that this rule is probably flawed in certain
circumstances.

So, what should be done about it at this point? Do you think a change
is acceptable for 1.0.1 or does it need to wait a year until 1.1?

-Travis
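[Editorial aside: the same-kind half of the rule Travis describes can be illustrated with a snippet that still runs the same way on current numpy: a Python float scalar does not upcast a float32 array. The boolean-array case discussed above was revisited in later numpy versions, so only the same-kind case is shown here.]

```python
# The array's precision wins over a same-kind Python scalar: combining
# a float32 array with the Python float 1.0 stays float32.
import numpy as np

x = np.ones(3, dtype=np.float32)
print((1.0 * x).dtype)    # float32
print((1.0 - x).dtype)    # float32
```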
From: Travis O. <oli...@ee...> - 2006-11-02 15:41:35

> Note that this is not a request to Travis to send me the latest
> version by private email. That would be inefficient and my need is
> not that urgent. Nevertheless I think that issue should be settled.

There will be an update, soon. I'm currently working on the index,
corrections, and formatting issues. The update will be sent in
conjunction with the release of 1.0.1, which I am targeting in two
weeks. Thanks for everybody's patience.

If you need an update now, I'm happy to send it to you by private
email.

Best regards,

-Travis O.
From: Markus R. <mar...@el...> - 2006-11-02 15:13:53

Hi!

This, and similar problems with other programs, were fixed when I
installed the 10.4 SDK from Apple Developer Connection. Yes, the 10.4
SDK can be installed on 10.3.9.

Regards,
Markus

Am 19.10.2006 um 23:56 schrieb Markus Rosenstihl:
> Hi!
> I try to compile numpy rc3 on Panther and get the following errors.
> (I start the build with "python2.3 setup.py build" to be sure to use
> the python shipped with OS X. I didn't manage to compile Python 2.5
> either yet, with similar errors.)
> Does anybody have an idea?
> gcc-3.3
> XCode 1.5
> November gcc updater is installed
>
> regards
>
> Markus
>
> python2.3 setup.py build
> ...
> compile options:
> '-Ibuild/src.darwin-7.9.0-Power_Macintosh-2.3/numpy/core/src
> -Inumpy/core/include
> -Ibuild/src.darwin-7.9.0-Power_Macintosh-2.3/numpy/core
> -Inumpy/core/src -Inumpy/core/include
> -I/System/Library/Frameworks/Python.framework/Versions/2.3/include/python2.3 -c'
> gcc: numpy/core/src/multiarraymodule.c
> In file included from numpy/core/src/arrayobject.c:511,
>                  from numpy/core/src/multiarraymodule.c:63:
> numpy/core/src/arraytypes.inc.src: In function `LONGDOUBLE_scan':
> numpy/core/src/arraytypes.inc.src:883: warning: long double format,
> npy_longdouble arg (arg 3)
> gcc -Wl,-F. -Wl,-F. -bundle -framework Python
> build/temp.darwin-7.9.0-Power_Macintosh-2.3/numpy/core/src/multiarraymodule.o
> -o build/lib.darwin-7.9.0-Power_Macintosh-2.3/numpy/core/multiarray.so
> ld: Undefined symbols:
> _fstatvfs referenced from Python expected to be defined in libSystem
> _lchown referenced from Python expected to be defined in libSystem
> _statvfs referenced from Python expected to be defined in libSystem
> error: Command "gcc -Wl,-F. -Wl,-F. -bundle -framework Python
> build/temp.darwin-7.9.0-Power_Macintosh-2.3/numpy/core/src/multiarraymodule.o
> -o build/lib.darwin-7.9.0-Power_Macintosh-2.3/numpy/core/multiarray.so"
> failed with exit status 1
From: Sven S. <sve...@gm...> - 2006-11-02 12:24:09
|
Hi,

now that 1.0 is out I would like to ask again about the status of the
numpy ebook. Is there any mechanism in place where owners can get the
updates?

Note that this is not a request to Travis to send me the latest version
by private email. That would be inefficient and my need is not that
urgent. Nevertheless I think that issue should be settled.

Excuse me if I've missed something obvious in that respect.

Thanks,
Sven |
From: Fernando P. <fpe...@gm...> - 2006-11-02 07:23:51
|
On 11/2/06, David Cournapeau <da...@ar...> wrote:
> (if those announcements are not welcome on the lists, please tell me)

Frankly, if an announcement for a free Python signal processing library
is not welcome on the scipy lists, I don't know where it would be ;)

As a minor note though: please remember to make a note of this in the
Topical Software wiki; it's good for that page to remain up to date
with newly released packages and libraries.

Thanks!

f |
From: David C. <da...@ar...> - 2006-11-02 07:13:03
|
(if those announcements are not welcome on the lists, please tell me)

Hi there,

I've just developed a small package to wrap Secret Rabbit Code by Erik
de Castro Lopo, a library for high-quality sample rate conversion
(http://www.mega-nerd.com/SRC/). You can download it here:

http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pysamplerate/pysamplerate-0.1.tar.gz
http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pysamplerate/pysamplerate-0.1.zip

Example of conversion error:

http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pysamplerate/example1.png

Basically, you just import resample from pysamplerate, and
resample(sins, fr/fs) gives you a signal converted to sampling rate fr,
with fs the original sampling rate. You can also select the conversion
method (e.g. 'sinc_best', etc.). The pydocs + examples should be enough.

This package can be useful for people who deal with music signal
processing and the like, when scipy.signal.resample is not enough. Only
float32 and float64 input are supported for now; I will polish the
package when I have some more time.

cheers,

David |
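[Editor's note: as a rough illustration of what a rate converter like the one announced above does, here is a naive linear-interpolation stand-in written with plain NumPy. It does not use pysamplerate or libsamplerate, and its quality is far below SRC's sinc converters; the function name and signal are invented for the example.]

```python
import numpy as np

def resample_linear(x, ratio):
    """Naive resampler: interpolate x onto a grid of len(x) * ratio points."""
    n_out = int(round(len(x) * ratio))
    t_out = np.linspace(0, len(x) - 1, n_out)
    return np.interp(t_out, np.arange(len(x)), x)

fs, fr = 44100, 22050                                  # original and target rates
sig = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)     # 1 second of 440 Hz
out = resample_linear(sig, fr / fs)                    # downsample by a factor of 2
print(len(out))
```

A real converter such as SRC band-limits the signal before interpolating, which is why the library offers the 'sinc_best' family of converters.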
From: Charles R H. <cha...@gm...> - 2006-11-02 05:55:42
|
On 11/1/06, Robert Kern <rob...@gm...> wrote:
>
> Tim Hochberg wrote:
> > Travis Oliphant wrote:

<snip>

> However, whether or not float32 arrays operated with Python float
> scalars give float32 or float64 arrays is tangential to my question.
> Does anyone actually think that a Python float operated with a boolean
> array should give a float32 result? Must we *up*cast a boolean array
> to float64 to preserve the precision of the scalar?

Probably doesn't matter most of the time, I suppose; who is going to
check? I tend to think doubles, because they are a bit faster on the
x86 architecture and because they are a pretty common default.

Chuck |
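[Editor's note: the casting behavior under discussion can be checked directly. In NumPy, a float32 array keeps its precision when combined with a Python float, while a boolean array is upcast to float64 — the outcome Robert's question points toward. A minimal sketch:]

```python
import numpy as np

f32 = np.ones(3, dtype=np.float32)
boo = np.array([True, False, True])

r_f32 = f32 + 1.0   # Python float does not upcast the float32 array
r_boo = boo + 1.0   # the boolean array is upcast to float64

print(r_f32.dtype, r_boo.dtype)  # float32 float64
```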
From: David C. <da...@ar...> - 2006-11-02 05:42:42
|
Fernando Perez wrote:
> On 11/1/06, David Cournapeau <da...@ar...> wrote:
>> Hi,
>>
>> I want to generate some random integers, let's say in the range
>> [-2^15, 2^16]. Why does
>>
>> noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)
>>
>> give only negative numbers?
>
> In [3]: noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)
>
> In [4]: noise[noise>0].shape
> Out[4]: (17,)
>
> In [5]: noise[noise<0].shape
> Out[5]: (22033,)
>
> In [6]: noise = numpy.random.random_integers(-(2**15), (2 ** 15), 22050)
>
> In [7]: noise[noise>0].shape
> Out[7]: (11006,)
>
> In [8]: noise[noise>0].shape
> Out[8]: (11006,)
>
> In [9]: 17./22033
> Out[9]: 0.00077156991785049694
>
> In [10]: 2.0*15/2**15
> Out[10]: 0.00091552734375
>
> close enough ;)

Ok, I deserve my email to be called the most stupid email of the
week... Sorry for the noise,

David |
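[Editor's note: the same check can be reproduced with the modern NumPy Generator API, since `numpy.random.random_integers` was later removed. The upper bound `2 * 15 == 30` (instead of `2 ** 15 == 32768`) leaves almost the entire interval below zero, matching the tiny positive fraction Fernando measured. Variable names here are illustrative:]

```python
import numpy as np

rng = np.random.default_rng(42)

# The typo: upper bound 2 * 15 == 30 rather than 2 ** 15 == 32768,
# so nearly all of [-32768, 30] is negative.
noise = rng.integers(-(2 ** 15), 2 * 15, size=22050, endpoint=True)
frac_positive = (noise > 0).mean()

# Expected fraction of strictly positive draws is 30 / 32799, about 0.0009.
print(frac_positive)
```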
From: Robert K. <rob...@gm...> - 2006-11-02 05:42:34
|
Keith Goodman wrote:
> On 11/1/06, David Cournapeau <da...@ar...> wrote:
>> Hi,
>>
>> I want to generate some random integers, let's say in the range
>> [-2^15, 2^16]. Why does
>>
>> noise = numpy.random.random_integers(- (2 ** 15), (2 * 15), 22050)
>>
>> give only negative numbers?
>
> I guess 2^15 is too big to be an int.

No, it isn't. As Fernando pointed out, if David's code is what he typed
here, then his upper bound is (2 * 15) == 30, not (2 ** 15) == 32768 as
he intended.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco |
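[Editor's note: with the bound corrected to `2 ** 15`, the draws come out roughly balanced around zero. A sketch using the modern Generator API in place of the old `random_integers`:]

```python
import numpy as np

assert 2 * 15 == 30        # the typed bound
assert 2 ** 15 == 32768    # the intended bound

rng = np.random.default_rng(0)
noise = rng.integers(-(2 ** 15), 2 ** 15, size=22050, endpoint=True)
print((noise > 0).sum(), (noise < 0).sum())  # roughly equal counts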