From: Alan G I. <ai...@am...> - 2006-06-30 19:03:30
|
On Fri, 30 Jun 2006, Travis Oliphant apparently wrote:
> I'm convinced that we should change to float as the
> default, but everywhere as Sasha says.

Even better!

Cheers,
Alan Isaac |
From: David M. C. <co...@ph...> - 2006-06-30 18:59:32
|
On Fri, 30 Jun 2006 14:42:33 -0400 "Jonathan Taylor" <jon...@ut...> wrote:
> +1 for some sort of float. I am a little confused as to why Float64
> is a particularly good choice. Can someone explain in more detail?
> Presumably this is the most sensible ctype and translates to a python
> float well?

It's "float64", btw. Float64 is the old Numeric name.

Python's "float" type is a C "double" (just like Python's "int" is a C "long"). In practice, C doubles are 64-bit. In NumPy, these are the same type:

float32 == single (32-bit float, which is a C float)
float64 == double (64-bit float, which is a C double)

Also, some Python types have equivalent NumPy types (as in, they can be used interchangeably as dtype arguments):

int == long (C long, could be int32 or int64)
float == double
complex == cdouble (also complex128)

Personally, I'd suggest using "single", "float", and "longdouble" in numpy code. [While we're on the subject, for portable code don't use float96 or float128: one or the other or both probably won't exist; use longdouble].

-- |>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph... |
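The aliases David lists can be checked directly; a minimal sketch, assuming a reasonably current NumPy in which these names still resolve the same way:

import numpy as np

# The equivalences described above, verified at the interpreter.
assert np.float64 is np.double          # 64-bit float == C double
assert np.float32 is np.single          # 32-bit float == C float
assert np.complex128 is np.cdouble      # complex128 == cdouble
assert np.dtype(float) == np.float64    # Python float maps to float64
assert np.dtype(complex) == np.complex128
# Python int maps to a C integer whose width is platform-dependent,
# so portable code should not assume int32 or int64:
print(np.dtype(int))                    # e.g. int64 on most 64-bit platforms
print(np.dtype(np.longdouble))          # portable name for extended precision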
From: Travis O. <oli...@ee...> - 2006-06-30 18:55:31
|
Jonathan Taylor wrote:
> +1 for some sort of float. I am a little confused as to why Float64
> is a particularly good choice. Can someone explain in more detail?
> Presumably this is the most sensible ctype and translates to a python
> float well?

O.K. I'm convinced that we should change to float as the default, but *everywhere* as Sasha says. We will provide two tools to make the transition easier.

1) The numpy.oldnumeric sub-package will contain definitions of changed functions that keep the old defaults (integer). This is what convertcode replaces for import Numeric calls, so future users who make the transition won't really notice.

2) A function/script that can be run to convert all type-less uses of the changed functions to explicitly insert dtype=int.

Yes, it will be a bit painful (I made the change and count 6 failures in NumPy tests and 34 in SciPy). But, it sounds like there is support for doing it. And yes, we must do it prior to 1.0 if we do it at all.

Comments?

-Travis |
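For readers following along now, a minimal sketch of what the change means in practice, assuming a post-1.0 NumPy (where float64 did become the default and numpy.oldnumeric was later removed):

import numpy as np

# Type-less constructors default to float64 ...
print(np.zeros(3).dtype)               # float64
print(np.ones((2, 2)).dtype)           # float64

# ... so code that relied on the old integer default must say so explicitly,
# which is exactly what the conversion script described above inserts.
print(np.zeros(3, dtype=int).dtype)    # platform default integer
print(np.ones((2, 2), dtype=int))      # integer array of ones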
From: Alan G I. <ai...@am...> - 2006-06-30 18:55:00
|
On Fri, 30 Jun 2006, Jonathan Taylor apparently wrote:
> In general though I agree that this is a now or never change.

Sasha has also made that argument. I see one possible additional strategy. I think everyone agrees that the long view is important. Now even Sasha agrees that float64 is the best default. Suppose

1. float64 is the ideal default (I agree with this)
2. there is substantial concern about the change of default on extant code for the unwary

One approach proposed is to include a different function definition in a compatibility module. This seems acceptable to me, but as Sasha notes it is not without drawbacks. Here is another possibility: transition by requiring an explicit data type for some period of time (say, 6-12 months). After that time, provide the default of float64. This would require some short-term pain, but for the long-term gain of the desired outcome.

Just a thought,
Alan Isaac

PS I agree with Sasha's following observations: "arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing. ... No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. ... If we decide to change the default, let's do it everywhere including array constructors and arange." |
From: Matthew B. <mat...@gm...> - 2006-06-30 18:48:10
|
Just one more vote for float. On the basis that Travis mentioned, of all those first-timers downloading, trying, finding something they didn't expect that was rather confusing, and giving up. |
From: Jonathan T. <jon...@ut...> - 2006-06-30 18:42:35
|
+1 for some sort of float. I am a little confused as to why Float64 is a particularly good choice. Can someone explain in more detail? Presumably this is the most sensible ctype and translates to a python float well? In general though I agree that this is a now or never change. I suspect we will change a lot of matlab -> Numeric/numarray transitions into matlab -> numpy transitions with this change. I guess it will take a little longer for 1.0 to get out though :( Ah well. Cheers. Jon. On 6/30/06, Travis Oliphant <oli...@ee...> wrote: > Jon, > > Thanks for the great feedback. You make some really good points. > > > > > > >Having {pointer + dimensions + strides + type} in the python core would > >be an incredible step forward - this is far more important than changing > >my python code to do functionally the same thing with numpy instead of > >Numeric. > > > Guido has always wanted consensus before putting things into Python. We > need to rally behind NumPy if we are going to get something of it's > infrastructure into Python itself. > > >As author of a (fairly obscure) secondary dependency package it is not > >clear that this is right time to convert. I very much admire the > >matplotlib approach of using Numerix and see this as a better solution > >than switching (or indeed re-writing in java/c++ etc). > > > I disagree with this approach. It's fine for testing and for > transition, but it is a headache long term. You are basically > supporting three packages. The community is not large enough to do > that. I also think it leads people to consider adopting that approach > instead of just switching. I'm not particularly thrilled with > strategies that essentially promote the existence of three different > packages. > > >However, looking > >into the matplotlib SVN I see: > > > >_image.cpp 2420 4 weeks cmoad applied Andrew Straw's > >numpy patch > >numerix/_sp_imports.py 2478 2 weeks teoliphant Make > >recent changes backward compatible with numpy 0.9.8 > >numerix/linearalgebra/__init__.py 2474 2 weeks teoliphant > > Fix import error for new numpy > > > >While I didn't look at either the code or the diff the comments clearly > >read as: "DON'T SWITCH YET". > > > I don't understand why you interpret it that way? When I moved > old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure > that matplotlib still works with numpy 0.9.8 (which has the old-style > names in the main location). > > Why does this say "DON'T SWITCH"? If anything it should tell you that > we are conscious of trying to keep things working together and > compatible with current releases of NumPy. > > >Get the basearray into the python core and > >for sure I will be using that, whatever it is called. I was tempted to > >switch to numarray in the past because of the nd_image, but I don't see > >that in numpy just yet? > > > > > It is in SciPy where it belongs (you can also install it as a separate > package). It builds and runs on top of NumPy just fine. In fact it was > the predecessor to the now fully-capable-but-in-need-of-more-testing > numarray C-API that is now in NumPy. > > >I am very supportive of the work going on but have some technical > >concerns about switching. To pick some examples, it appears that > >numpy.lib.function_base.median makes a copy, sorts and picks the middle > >element. > > > I'm sure we need lots of improvements in the code-base. This has > always been true. 
We rely on the ability of contributors which doesn't > work well unless we have a lot of contributors which are hard to get > unless we consolidate around a single array package. Please contribute a > fix. > > >single one routine out, I was also saddened to find both Numeric and > >numpy use double precision lapack routines for single precision > >arguments. > > > The point of numpy.linalg is to provide the functionality of Numeric not > extend it. This is because SciPy provides a much more capable linalg > sub-package that works with single and double precision. It sounds > like you want SciPy. > > >For numpy to really be better than Numeric I would like to find > >algorithm selections according to the array dimensions and type. > > > These are good suggestions but for SciPy. The linear algebra in NumPy > is just for getting your feet wet and having access to basic > functionality. > > >Getting > >the basearray type into the python core is the key - then it makes sense > >to get the best of breed algorithms working as you can rely on the > >basearray being around for many years to come. > > > >Please please please get basearray into the python core! How can we help > >with that? > > > > > There is a PEP in SVN (see the array interface link at > http://numeric.scipy.org) Karol Langner is a Google summer-of-code > student working on it this summer. I'm not sure how far he'll get, but > I'm hopeful. > > I could spend more time on it, if I had funding to do it, but right now > I'm up against a wall. > > Again, thanks for the feedback. > > Best, > > -Travis > > > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > |
From: Christopher H. <ch...@st...> - 2006-06-30 18:31:10
|
>>Get the basearray into the python core and >>for sure I will be using that, whatever it is called. I was tempted to >>switch to numarray in the past because of the nd_image, but I don't see >>that in numpy just yet? >> >> > > It is in SciPy where it belongs (you can also install it as a separate > package). It builds and runs on top of NumPy just fine. In fact it was > the predecessor to the now fully-capable-but-in-need-of-more-testing > numarray C-API that is now in NumPy. > Hi Travis, Where can one find and download nd_image separate from the rest of scipy? As for the the numarray C-API, we are currently doing testing here at STScI. Chris |
From: Travis O. <oli...@ee...> - 2006-06-30 18:13:30
|
Jon,

Thanks for the great feedback. You make some really good points.

> Having {pointer + dimensions + strides + type} in the python core would
> be an incredible step forward - this is far more important than changing
> my python code to do functionally the same thing with numpy instead of
> Numeric.

Guido has always wanted consensus before putting things into Python. We need to rally behind NumPy if we are going to get something of its infrastructure into Python itself.

> As author of a (fairly obscure) secondary dependency package it is not
> clear that this is the right time to convert. I very much admire the
> matplotlib approach of using Numerix and see this as a better solution
> than switching (or indeed re-writing in java/c++ etc).

I disagree with this approach. It's fine for testing and for transition, but it is a headache long term. You are basically supporting three packages. The community is not large enough to do that. I also think it leads people to consider adopting that approach instead of just switching. I'm not particularly thrilled with strategies that essentially promote the existence of three different packages.

> However, looking into the matplotlib SVN I see:
>
> _image.cpp                         2420  4 weeks  cmoad       applied Andrew Straw's numpy patch
> numerix/_sp_imports.py             2478  2 weeks  teoliphant  Make recent changes backward compatible with numpy 0.9.8
> numerix/linearalgebra/__init__.py  2474  2 weeks  teoliphant  Fix import error for new numpy
>
> While I didn't look at either the code or the diff the comments clearly
> read as: "DON'T SWITCH YET".

I don't understand why you interpret it that way? When I moved old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure that matplotlib still works with numpy 0.9.8 (which has the old-style names in the main location). Why does this say "DON'T SWITCH"? If anything it should tell you that we are conscious of trying to keep things working together and compatible with current releases of NumPy.

> Get the basearray into the python core and for sure I will be using that,
> whatever it is called. I was tempted to switch to numarray in the past
> because of the nd_image, but I don't see that in numpy just yet?

It is in SciPy where it belongs (you can also install it as a separate package). It builds and runs on top of NumPy just fine. In fact it was the predecessor to the now fully-capable-but-in-need-of-more-testing numarray C-API that is now in NumPy.

> I am very supportive of the work going on but have some technical
> concerns about switching. To pick some examples, it appears that
> numpy.lib.function_base.median makes a copy, sorts and picks the middle
> element.

I'm sure we need lots of improvements in the code-base. This has always been true. We rely on the ability of contributors, which doesn't work well unless we have a lot of contributors, which are hard to get unless we consolidate around a single array package. Please contribute a fix.

> Not to single one routine out, I was also saddened to find both Numeric
> and numpy use double precision lapack routines for single precision
> arguments.

The point of numpy.linalg is to provide the functionality of Numeric, not extend it. This is because SciPy provides a much more capable linalg sub-package that works with single and double precision. It sounds like you want SciPy.

> For numpy to really be better than Numeric I would like to find
> algorithm selections according to the array dimensions and type.

These are good suggestions, but for SciPy. The linear algebra in NumPy is just for getting your feet wet and having access to basic functionality.

> Getting the basearray type into the python core is the key - then it
> makes sense to get the best of breed algorithms working as you can rely
> on the basearray being around for many years to come.
>
> Please please please get basearray into the python core! How can we help
> with that?

There is a PEP in SVN (see the array interface link at http://numeric.scipy.org). Karol Langner is a Google summer-of-code student working on it this summer. I'm not sure how far he'll get, but I'm hopeful. I could spend more time on it, if I had funding to do it, but right now I'm up against a wall.

Again, thanks for the feedback.

Best,

-Travis |
From: Louis C. <lco...@po...> - 2006-06-30 18:10:44
|
> Numeric-24.2 (released Nov. 11, 2005)
>   14275  py24.exe
>    2905  py23.exe
>    9144  tar.gz
>
> Numarray 1.5.1 (released Feb 7, 2006)
>   10272  py24.exe
>   11883  py23.exe
>   12779  tar.gz
>
> NumPy 0.9.8 (May 17, 2006)
>    3713  py24.exe
>     558  py23.exe
>    4111  tar.gz

Here are some trends with a pretty picture:
http://www.google.com/trends?q=numarray%2C+NumPy%2C+Numeric+Python

Unfortunately, Numeric alone is too general a term to use. But I would say NumPy is looking good. ;)

-- Louis Cordier <lco...@po...> cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org |
From: Sasha <nd...@ma...> - 2006-06-30 17:42:36
|
"In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included." --- Edsger W.Dijkstra, "How do we tell truths that might hurt?" 18 June 1975 I just miss the good old days ... On 6/30/06, Fernando Perez <fpe...@gm...> wrote: > On 6/30/06, Sasha <nd...@ma...> wrote: > > On 6/30/06, Fernando Perez <fpe...@gm...> wrote: > > > ... > > > Besides, decent unit tests will catch these problems. We all know > > > that every scientific code in existence is unit tested to the smallest > > > routine, so this shouldn't be a problem for anyone. > > > > Is this a joke? Did anyone ever measured the coverage of numpy > > unittests? I would be surprized if it was more than 10%. > > Of course it's a joke. So obviously one for anyone who knows the > field, that the smiley shouldn't be needed (and yes, I despise > background laughs on television, too). Maybe a sad joke, given the > realities of scientific computing, and maybe a poor joke, but at least > an attempt at humor. > > Cheers, > > f > |
From: Sasha <nd...@ma...> - 2006-06-30 17:25:46
|
Since I was almost alone with my negative vote on the float64 default, I decided to give some more thought to the issue. I agree there are strong reasons to make the change. In addition to the points in the original post, the float64 type is much more closely related to the well-known Python float than int32 is to Python long. For example, no-one would be surprised by either

>>> float64(0)/float64(0)
nan

or

>>> float(0)/float(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ZeroDivisionError: float division

but

>>> int32(0)/int32(0)
0

is much more difficult to explain. As is

>>> int32(2)**32
0

compared to

>>> int(2)**32
4294967296L

In short, arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing.

Another argument that I find compelling is that we are in a now-or-never situation. No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. If we decide to change the default, let's do it everywhere, including array constructors and arange. The latter is more controversial, but I still think it is worth doing (will give reasons in future posts). Changing the defaults only in some functions or providing overrides to functions will only lead to more confusion.

My revised vote is -0.

On 6/30/06, Eric Jonas <jo...@mw...> wrote:
> I've got to say +1 for Float64 too. I write a lot of numpy code, and
> this bites me at least once a week. You'd think I'd learn better, but
> it's just so easy to screw this up when you have to switch back and
> forth between matlab (which I'm forced to TA) and numpy (which I use for
> Real Work).
>
> ...Eric |
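The same contrasts can be reproduced with arrays in a current NumPy; a minimal sketch (exact warnings and scalar behaviour have shifted across versions, hence the hedged comments):

import numpy as np

# Fixed-width integers use modular arithmetic, so large results wrap
# (typically silently for arrays), unlike Python's arbitrary-precision int.
i = np.array([2], dtype=np.int32)
print(i ** 32)                         # [0] -- wrapped modulo 2**32
print(2 ** 32)                         # 4294967296 -- exact Python int

# Float arrays follow IEEE semantics: 0.0/0.0 is nan, not an exception.
with np.errstate(invalid="ignore"):    # silence the RuntimeWarning
    print(np.array([0.0]) / np.array([0.0]))   # [nan]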
From: Fernando P. <fpe...@gm...> - 2006-06-30 17:25:10
|
On 6/30/06, Sasha <nd...@ma...> wrote: > On 6/30/06, Fernando Perez <fpe...@gm...> wrote: > > ... > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. Of course it's a joke. So obviously one for anyone who knows the field, that the smiley shouldn't be needed (and yes, I despise background laughs on television, too). Maybe a sad joke, given the realities of scientific computing, and maybe a poor joke, but at least an attempt at humor. Cheers, f |
From: Alan G I. <ai...@am...> - 2006-06-30 17:15:16
|
> On 6/30/06, Fernando Perez <fpe...@gm...> wrote: >> Besides, decent unit tests will catch these problems. We >> all know that every scientific code in existence is unit >> tested to the smallest routine, so this shouldn't be >> a problem for anyone. On Fri, 30 Jun 2006, Sasha apparently wrote: > Is this a joke? It had me chuckling. ;-) The dangers of email ... Cheers, Alan Isaac |
From: Jon W. <wr...@es...> - 2006-06-30 17:04:01
|
Travis Oliphant wrote:
> I hope he doesn't mean the rumors about an array object in Python
> itself. Let me be the first to assure everyone that rumors of a
> "capable" array object in Python have been greatly exaggerated. I would
> be thrilled if we could just get the "infra-structure" into Python so
> that different extension modules could at least agree on an array
> interface. That is a far cry from fulfilling the needs of any current
> Num user, however.

Having {pointer + dimensions + strides + type} in the python core would be an incredible step forward - this is far more important than changing my python code to do functionally the same thing with numpy instead of Numeric. If the new array object supports most of the interface of the current "array" module then it is already very capable for many tasks. It would be great if it also works with Jython (etc).

Bruce Southley wrote:
> 1) Identify those "[s]econdary dependency" projects as Louis states
> (BioPython also comes to mind) and get them to convert.

As author of a (fairly obscure) secondary dependency package it is not clear that this is the right time to convert. I very much admire the matplotlib approach of using Numerix and see this as a better solution than switching (or indeed re-writing in java/c++ etc). However, looking into the matplotlib SVN I see:

_image.cpp                         2420  4 weeks  cmoad       applied Andrew Straw's numpy patch
numerix/_sp_imports.py             2478  2 weeks  teoliphant  Make recent changes backward compatible with numpy 0.9.8
numerix/linearalgebra/__init__.py  2474  2 weeks  teoliphant  Fix import error for new numpy

While I didn't look at either the code or the diff, the comments clearly read as: "DON'T SWITCH YET". Get the basearray into the python core and for sure I will be using that, whatever it is called. I was tempted to switch to numarray in the past because of the nd_image, but I don't see that in numpy just yet?

Seeing this on the mailing list:
> So far the vote is 8 for float, 1 for int.
... is yet another hint that I can remain with Numeric as a library, at least until numpy has a frozen interface/behaviour.

I am very supportive of the work going on but have some technical concerns about switching. To pick some examples, it appears that numpy.lib.function_base.median makes a copy, sorts and picks the middle element. Some reading at http://ndevilla.free.fr/median/median/index.html or even (eek!) numerical recipes indicates this is not good news. Not to single one routine out, I was also saddened to find both Numeric and numpy use double precision lapack routines for single precision arguments. A diff of numpy's linalg.py with Numeric's LinearAlgebra.py goes a long way to explaining why there is resistance to change from Numeric to numpy. The boilerplate changes and you only get "norm" (which I am suspicious about - vector 2-norms are in blas, some matrix 2-norms are in lapack/*lange.f, and computing all singular values when you only want the biggest or smallest one is a surprising algorithmic choice).

I realise it might sound like harsh criticism - but I don't see what numpy adds for number crunching over and above Numeric. Clearly there *is* a lot more in terms of python integration, but I really don't want to do number crunching with python itself ;-) For numpy to really be better than Numeric I would like to find algorithm selections according to the array dimensions and type. Getting the basearray type into the python core is the key - then it makes sense to get the best of breed algorithms working, as you can rely on the basearray being around for many years to come.

Please please please get basearray into the python core! How can we help with that?

Jon |
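On the median point, a selection-based approach avoids the full sort Jon describes. This is only a sketch, and it assumes np.partition, which did not exist in 2006 (np.median itself later switched to a partition-based implementation):

import numpy as np

def median_via_selection(a):
    """Median via np.partition (introselect): O(n) on average,
    rather than copying and fully sorting the data."""
    a = np.asarray(a, dtype=float).ravel()
    n = a.size
    mid = n // 2
    if n % 2:                                  # odd length: single middle element
        return np.partition(a, mid)[mid]
    lo_hi = np.partition(a, [mid - 1, mid])    # even length: mean of the two middles
    return 0.5 * (lo_hi[mid - 1] + lo_hi[mid])

print(median_via_selection([3, 1, 4, 1, 5, 9, 2, 6]))   # 3.5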
From: Keith G. <kwg...@gm...> - 2006-06-30 17:03:52
|
On 6/30/06, Robert Kern <rob...@gm...> wrote: > Tim Hochberg wrote: > > Regarding choice of float or int for default: > > > > The number one priority for numpy should be to unify the three disparate > > Python numeric packages. Whatever choice of defaults facilitates that is > > what I support. > > +10 > > > Personally, given no other constraints, I would probably just get rid of > > the defaults all together and make the user choose. > > My preferred solution is to add class methods to the scalar types rather than > screw up compatibility. > > In [1]: float64.ones(10) I don't think an int will be able to hold the number of votes for float64. |
From: Robert K. <rob...@gm...> - 2006-06-30 16:53:31
|
Tim Hochberg wrote: > Regarding choice of float or int for default: > > The number one priority for numpy should be to unify the three disparate > Python numeric packages. Whatever choice of defaults facilitates that is > what I support. +10 > Personally, given no other constraints, I would probably just get rid of > the defaults all together and make the user choose. My preferred solution is to add class methods to the scalar types rather than screw up compatibility. In [1]: float64.ones(10) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
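For context, float64.ones is a proposal here, not an existing API. A hypothetical sketch of how that spelling could be emulated, next to the explicit-dtype call NumPy actually provides (the class name below is made up for illustration):

import numpy as np

class float64_ctor:
    """Hypothetical stand-in for the proposed scalar-type class methods;
    not part of NumPy."""
    @staticmethod
    def ones(*shape):
        return np.ones(shape, dtype=np.float64)

print(float64_ctor.ones(10))            # the proposed flavour of spelling
print(np.ones(10, dtype=np.float64))    # the explicit-dtype call that exists today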
From: Eric J. <jo...@mw...> - 2006-06-30 16:51:36
|
I've got to say +1 for Float64 too. I write a lot of numpy code, and this bites me at least once a week. You'd think I'd learn better, but it's just so easy to screw this up when you have to switch back and forth between matlab (which I'm forced to TA) and numpy (which I use for Real Work). ...Eric |
From: Keith G. <kwg...@gm...> - 2006-06-30 16:43:57
|
On 6/30/06, Sasha <nd...@ma...> wrote: > On 6/30/06, Fernando Perez <fpe...@gm...> wrote: > > ... > > Besides, decent unit tests will catch these problems. We all know > > that every scientific code in existence is unit tested to the smallest > > routine, so this shouldn't be a problem for anyone. > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. That's a conundrum. A joke is no longer a joke once you point out, yes it is a joke. |
From: Travis N. V. <tr...@en...> - 2006-06-30 16:40:22
|
Sasha wrote: > On 6/30/06, Fernando Perez <fpe...@gm...> wrote: > >> ... >> Besides, decent unit tests will catch these problems. We all know >> that every scientific code in existence is unit tested to the smallest >> routine, so this shouldn't be a problem for anyone. >> > > Is this a joke? Did anyone ever measured the coverage of numpy > unittests? I would be surprized if it was more than 10%. > Very obviously a joke...uh...with the exception of enthought-written scientific code, of course ;-) |
From: Travis N. V. <tr...@en...> - 2006-06-30 16:39:06
|
Joris De Ridder wrote:
> On Friday 30 June 2006 16:29, Erin Sheldon wrote:
> [ES]: <snip> the pages
> [ES]:
> [ES]: http://numeric.scipy.org/ -- Looks antiquated
> [ES]:
> [ES]: are not helping.
>
> My opinion too. If that page is the first page you learn about NumPy,
> you won't have a good impression.
>
> Travis, would you accept concrete suggestions or 'help' to improve
> that page?
>
> Cheers,
> Joris

Speaking for the other Travis...I think he's open to suggestions (he hasn't yelled at me yet for suggesting the same sort of things). There was an earlier conversation on this list about the numpy page, in which we proposed redirecting all numeric/numpy links to numpy.scipy.org. I'll ask Jeff to do these redirects if:

- everyone agrees that address is a good one
- we have the content shaped up on that page.

For now, I've copied the content with some basic cleanup (and adding a style sheet) here: http://numpy.scipy.org

If anyone with a modicum of web design experience wants access to edit this site...please (please) speak up and it will be so. Other suggestions are welcome.

Travis (Vaught) |
From: Sasha <nd...@ma...> - 2006-06-30 16:35:39
|
On 6/30/06, Fernando Perez <fpe...@gm...> wrote: > ... > Besides, decent unit tests will catch these problems. We all know > that every scientific code in existence is unit tested to the smallest > routine, so this shouldn't be a problem for anyone. Is this a joke? Did anyone ever measured the coverage of numpy unittests? I would be surprized if it was more than 10%. |
From: Fernando P. <fpe...@gm...> - 2006-06-30 15:25:32
|
On 6/30/06, Scott Ransom <sr...@nr...> wrote:
> +1 for float64 for me as well.

+1 for float64. I have lots of code overriding the int defaults by hand, which were giving me grief with hand-written extensions (which were written double-only for speed reasons). I'll be happy to clean this up.

I completely understand Travis' concerns about backwards compatibility, but frankly, I think that right now the quality and community momentum of numpy is already enough that it will carry things forward. People will suffer a little during the porting days, but they'll be better off in the long run. I don't think we should underestimate the value of eternal happiness :)

Besides, decent unit tests will catch these problems. We all know that every scientific code in existence is unit tested to the smallest routine, so this shouldn't be a problem for anyone.

Cheers,

f |
From: Steve L. <st...@ar...> - 2006-06-30 15:21:30
|
> Before 1.0, it seems right to go with the best design > and take some short-run grief for it if necessary. > > If the right default is float, but extant code will be hurt, > then let float be the default and put the legacy-code fix > (function redefinition) in the compatability module +1 on this very idea. (sorry for sending this directly to you @ first, Alan) |
From: Joris De R. <jo...@st...> - 2006-06-30 15:21:30
|
On Friday 30 June 2006 16:29, Erin Sheldon wrote: [ES]: <snip> the pages [ES]: [ES]: http://numeric.scipy.org/ -- Looks antiquated [ES]: [ES]: are not helping. My opinion too. If that page is the first page you learn about NumPy, you won't have a good impression. Travis, would you accept concrete suggestions or 'help' to improve that page? Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm |
From: Alan I. <ai...@am...> - 2006-06-30 15:12:45
|
On Fri, 30 Jun 2006, Arnd Baecker wrote:
> I am wondering a bit about the behaviour of logspace:

http://www.mathworks.com/access/helpdesk/help/techdoc/ref/logspace.html

fwiw,
Alan Isaac |
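For reference, a minimal sketch of what numpy.logspace does, assuming current NumPy semantics (the MathWorks page above documents the analogous MATLAB function):

import numpy as np

# logspace(start, stop, num) returns num samples evenly spaced on a log
# scale, from base**start to base**stop, with endpoints included by default.
print(np.logspace(0, 3, 4))             # [   1.   10.  100. 1000.]
print(np.logspace(0, 3, 4, base=2.0))   # [1. 2. 4. 8.]

# Equivalent construction, which makes the exponent spacing explicit:
print(10.0 ** np.linspace(0, 3, 4))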