From: Travis O. <oli...@ee...> - 2006-06-30 23:18:26
|
Sasha wrote:
> On 6/30/06, Travis Oliphant <oli...@ee...> wrote:
>> ...
>> Yes, this is true, but the auto-generation means that success for one
>> instantiation increases the likelihood for success in the others. So,
>> the 26.7% is probably too pessimistic.
>
> Agree, but "increases the likelihood" != "guarantees".

Definitely...

> The best solution would be to autogenerate test cases so that all
> types are tested where appropriate.

Right on again... Here's a chance for all the Python-only coders to jump in and make a splash....

-Travis |
From: Eric J. <jo...@mw...> - 2006-06-30 16:51:36
|
I've got to say +1 for Float64 too. I write a lot of numpy code, and this bites me at least once a week. You'd think I'd learn better, but it's just so easy to screw this up when you have to switch back and forth between matlab (which I'm forced to TA) and numpy (which I use for Real Work). ...Eric |
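The slip Eric describes usually looks something like the following; this is a minimal illustrative session (not from the thread, and exact printing varies by version) showing how an int-defaulted array silently truncates assigned values:

    >>> from numpy import zeros
    >>> a = zeros(3)          # dtype defaults to int, the behavior under discussion
    >>> a[0] = 0.5            # silently truncated to 0
    >>> a
    array([0, 0, 0])
    >>> b = zeros(3, dtype=float)
    >>> b[0] = 0.5
    >>> b
    array([ 0.5,  0. ,  0. ])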
From: Sasha <nd...@ma...> - 2006-06-30 17:25:46
|
Since I was almost alone with my negative vote on the float64 default, I decided to give some more thought to the issue. I agree there are strong reasons to make the change. In addition to the points in the original post, the float64 type is much more closely related to the well-known Python float than int32 is to Python long. For example, no one would be surprised by either

>>> float64(0)/float64(0)
nan

or

>>> float(0)/float(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ZeroDivisionError: float division

but

>>> int32(0)/int32(0)
0

is much more difficult to explain. As is

>>> int32(2)**32
0

compared to

>>> int(2)**32
4294967296L

In short, arrays other than float64 are more of the hard-hat area and their properties may be surprising to novices. Exposing novices to non-float64 arrays through default constructors is a bad thing.

Another argument that I find compelling is that we are in a now-or-never situation. No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in future releases.

If we decide to change the default, let's do it everywhere, including the array constructors and arange. The latter is more controversial, but I still think it is worth doing (I will give reasons in future posts). Changing the defaults only in some functions or providing overrides to functions will only lead to more confusion.

My revised vote is -0.

On 6/30/06, Eric Jonas <jo...@mw...> wrote:
> I've got to say +1 for Float64 too. I write a lot of numpy code, and
> this bites me at least once a week. You'd think I'd learn better, but
> it's just so easy to screw this up when you have to switch back and
> forth between matlab (which I'm forced to TA) and numpy (which I use for
> Real Work).
>
> ...Eric |
From: Robert K. <rob...@gm...> - 2006-06-30 16:53:31
|
Tim Hochberg wrote:
> Regarding choice of float or int for default:
>
> The number one priority for numpy should be to unify the three disparate
> Python numeric packages. Whatever choice of defaults facilitates that is
> what I support.

+10

> Personally, given no other constraints, I would probably just get rid of
> the defaults all together and make the user choose.

My preferred solution is to add class methods to the scalar types rather than screw up compatibility.

In [1]: float64.ones(10)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco |
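For readers wondering what Robert's suggestion might amount to, here is a rough sketch of dtype-specific constructors; the real proposal would attach these to the actual scalar types, so the plain namespace class below is purely illustrative, not anything that exists in NumPy:

    import numpy

    class float64(object):
        """Illustrative stand-in for the scalar type, used as a constructor namespace."""

        @staticmethod
        def ones(shape):
            return numpy.ones(shape, dtype=numpy.float64)

        @staticmethod
        def zeros(shape):
            return numpy.zeros(shape, dtype=numpy.float64)

    a = float64.ones(10)   # what Robert's In [1] would produce: a float64 array of ones

The appeal of this design is that nothing about the existing default-int functions has to change; the float-friendly spelling lives on the type itself.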
From: Keith G. <kwg...@gm...> - 2006-06-30 17:03:52
|
On 6/30/06, Robert Kern <rob...@gm...> wrote:
> Tim Hochberg wrote:
> > Regarding choice of float or int for default:
> >
> > The number one priority for numpy should be to unify the three disparate
> > Python numeric packages. Whatever choice of defaults facilitates that is
> > what I support.
>
> +10
>
> > Personally, given no other constraints, I would probably just get rid of
> > the defaults all together and make the user choose.
>
> My preferred solution is to add class methods to the scalar types rather than
> screw up compatibility.
>
> In [1]: float64.ones(10)

I don't think an int will be able to hold the number of votes for float64. |
From: Fernando P. <fpe...@gm...> - 2006-06-30 17:25:10
|
On 6/30/06, Sasha <nd...@ma...> wrote:
> On 6/30/06, Fernando Perez <fpe...@gm...> wrote:
> > ...
> > Besides, decent unit tests will catch these problems. We all know
> > that every scientific code in existence is unit tested to the smallest
> > routine, so this shouldn't be a problem for anyone.
>
> Is this a joke? Did anyone ever measure the coverage of numpy
> unittests? I would be surprised if it was more than 10%.

Of course it's a joke. So obviously one, for anyone who knows the field, that the smiley shouldn't be needed (and yes, I despise background laughs on television, too). Maybe a sad joke, given the realities of scientific computing, and maybe a poor joke, but at least an attempt at humor.

Cheers,

f |
From: Sasha <nd...@ma...> - 2006-06-30 17:42:36
|
"In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included." --- Edsger W. Dijkstra, "How do we tell truths that might hurt?", 18 June 1975

I just miss the good old days ...

On 6/30/06, Fernando Perez <fpe...@gm...> wrote:
> On 6/30/06, Sasha <nd...@ma...> wrote:
> > On 6/30/06, Fernando Perez <fpe...@gm...> wrote:
> > > ...
> > > Besides, decent unit tests will catch these problems. We all know
> > > that every scientific code in existence is unit tested to the smallest
> > > routine, so this shouldn't be a problem for anyone.
> >
> > Is this a joke? Did anyone ever measure the coverage of numpy
> > unittests? I would be surprised if it was more than 10%.
>
> Of course it's a joke. So obviously one, for anyone who knows the field, that
> the smiley shouldn't be needed (and yes, I despise background laughs on
> television, too). Maybe a sad joke, given the realities of scientific
> computing, and maybe a poor joke, but at least an attempt at humor.
>
> Cheers,
>
> f |
From: Travis O. <oli...@ee...> - 2006-06-30 18:13:30
|
Jon,

Thanks for the great feedback. You make some really good points.

> Having {pointer + dimensions + strides + type} in the python core would
> be an incredible step forward - this is far more important than changing
> my python code to do functionally the same thing with numpy instead of
> Numeric.

Guido has always wanted consensus before putting things into Python. We need to rally behind NumPy if we are going to get something of its infrastructure into Python itself.

> As author of a (fairly obscure) secondary dependency package it is not
> clear that this is the right time to convert. I very much admire the
> matplotlib approach of using Numerix and see this as a better solution
> than switching (or indeed re-writing in java/c++ etc).

I disagree with this approach. It's fine for testing and for transition, but it is a headache long term. You are basically supporting three packages. The community is not large enough to do that. I also think it leads people to consider adopting that approach instead of just switching. I'm not particularly thrilled with strategies that essentially promote the existence of three different packages.

> However, looking into the matplotlib SVN I see:
>
> _image.cpp                         2420  4 weeks  cmoad       applied Andrew Straw's numpy patch
> numerix/_sp_imports.py             2478  2 weeks  teoliphant  Make recent changes backward compatible with numpy 0.9.8
> numerix/linearalgebra/__init__.py  2474  2 weeks  teoliphant  Fix import error for new numpy
>
> While I didn't look at either the code or the diff the comments clearly
> read as: "DON'T SWITCH YET".

I don't understand why you interpret it that way. When I moved old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure that matplotlib still works with numpy 0.9.8 (which has the old-style names in the main location).

Why does this say "DON'T SWITCH"? If anything it should tell you that we are conscious of trying to keep things working together and compatible with current releases of NumPy.

> Get the basearray into the python core and for sure I will be using that,
> whatever it is called. I was tempted to switch to numarray in the past
> because of the nd_image, but I don't see that in numpy just yet?

It is in SciPy where it belongs (you can also install it as a separate package). It builds and runs on top of NumPy just fine. In fact it was the predecessor to the now fully-capable-but-in-need-of-more-testing numarray C-API that is now in NumPy.

> I am very supportive of the work going on but have some technical
> concerns about switching. To pick some examples, it appears that
> numpy.lib.function_base.median makes a copy, sorts and picks the middle
> element.

I'm sure we need lots of improvements in the code-base. This has always been true. We rely on the ability of contributors, which doesn't work well unless we have a lot of contributors, which are hard to get unless we consolidate around a single array package. Please contribute a fix.

> single one routine out, I was also saddened to find both Numeric and
> numpy use double precision lapack routines for single precision
> arguments.

The point of numpy.linalg is to provide the functionality of Numeric, not extend it. This is because SciPy provides a much more capable linalg sub-package that works with single and double precision. It sounds like you want SciPy.

> For numpy to really be better than Numeric I would like to find
> algorithm selections according to the array dimensions and type.

These are good suggestions, but for SciPy. The linear algebra in NumPy is just for getting your feet wet and having access to basic functionality.

> Getting the basearray type into the python core is the key - then it makes
> sense to get the best of breed algorithms working as you can rely on the
> basearray being around for many years to come.
>
> Please please please get basearray into the python core! How can we help
> with that?

There is a PEP in SVN (see the array interface link at http://numeric.scipy.org). Karol Langner is a Google summer-of-code student working on it this summer. I'm not sure how far he'll get, but I'm hopeful.

I could spend more time on it if I had funding to do it, but right now I'm up against a wall.

Again, thanks for the feedback.

Best,

-Travis |
From: Christopher H. <ch...@st...> - 2006-06-30 18:31:10
|
>> Get the basearray into the python core and for sure I will be using that,
>> whatever it is called. I was tempted to switch to numarray in the past
>> because of the nd_image, but I don't see that in numpy just yet?
>
> It is in SciPy where it belongs (you can also install it as a separate
> package). It builds and runs on top of NumPy just fine. In fact it was
> the predecessor to the now fully-capable-but-in-need-of-more-testing
> numarray C-API that is now in NumPy.

Hi Travis,

Where can one find and download nd_image separate from the rest of scipy?

As for the numarray C-API, we are currently doing testing here at STScI.

Chris |
From: Jonathan T. <jon...@ut...> - 2006-06-30 18:42:35
|
+1 for some sort of float. I am a little confused as to why Float64 is a particularly good choice. Can someone explain in more detail? Presumably this is the most sensible ctype and translates to a python float well?

In general though I agree that this is a now or never change. I suspect we will change a lot of matlab -> Numeric/numarray transitions into matlab -> numpy transitions with this change. I guess it will take a little longer for 1.0 to get out though :( Ah well.

Cheers.
Jon.

On 6/30/06, Travis Oliphant <oli...@ee...> wrote:
> Jon,
>
> Thanks for the great feedback. You make some really good points.
>
> > Having {pointer + dimensions + strides + type} in the python core would be an incredible step forward - this is far more important than changing my python code to do functionally the same thing with numpy instead of Numeric.
>
> Guido has always wanted consensus before putting things into Python. We need to rally behind NumPy if we are going to get something of its infrastructure into Python itself.
>
> > As author of a (fairly obscure) secondary dependency package it is not clear that this is the right time to convert. I very much admire the matplotlib approach of using Numerix and see this as a better solution than switching (or indeed re-writing in java/c++ etc).
>
> I disagree with this approach. It's fine for testing and for transition, but it is a headache long term. You are basically supporting three packages. The community is not large enough to do that. I also think it leads people to consider adopting that approach instead of just switching. I'm not particularly thrilled with strategies that essentially promote the existence of three different packages.
>
> > However, looking into the matplotlib SVN I see:
> >
> > _image.cpp  2420  4 weeks  cmoad  applied Andrew Straw's numpy patch
> > numerix/_sp_imports.py  2478  2 weeks  teoliphant  Make recent changes backward compatible with numpy 0.9.8
> > numerix/linearalgebra/__init__.py  2474  2 weeks  teoliphant  Fix import error for new numpy
> >
> > While I didn't look at either the code or the diff the comments clearly read as: "DON'T SWITCH YET".
>
> I don't understand why you interpret it that way. When I moved old-style names to numpy.oldnumeric for SVN numpy, I needed to make sure that matplotlib still works with numpy 0.9.8 (which has the old-style names in the main location).
>
> Why does this say "DON'T SWITCH"? If anything it should tell you that we are conscious of trying to keep things working together and compatible with current releases of NumPy.
>
> > Get the basearray into the python core and for sure I will be using that, whatever it is called. I was tempted to switch to numarray in the past because of the nd_image, but I don't see that in numpy just yet?
>
> It is in SciPy where it belongs (you can also install it as a separate package). It builds and runs on top of NumPy just fine. In fact it was the predecessor to the now fully-capable-but-in-need-of-more-testing numarray C-API that is now in NumPy.
>
> > I am very supportive of the work going on but have some technical concerns about switching. To pick some examples, it appears that numpy.lib.function_base.median makes a copy, sorts and picks the middle element.
>
> I'm sure we need lots of improvements in the code-base. This has always been true. We rely on the ability of contributors, which doesn't work well unless we have a lot of contributors, which are hard to get unless we consolidate around a single array package. Please contribute a fix.
>
> > single one routine out, I was also saddened to find both Numeric and numpy use double precision lapack routines for single precision arguments.
>
> The point of numpy.linalg is to provide the functionality of Numeric, not extend it. This is because SciPy provides a much more capable linalg sub-package that works with single and double precision. It sounds like you want SciPy.
>
> > For numpy to really be better than Numeric I would like to find algorithm selections according to the array dimensions and type.
>
> These are good suggestions, but for SciPy. The linear algebra in NumPy is just for getting your feet wet and having access to basic functionality.
>
> > Getting the basearray type into the python core is the key - then it makes sense to get the best of breed algorithms working as you can rely on the basearray being around for many years to come.
> >
> > Please please please get basearray into the python core! How can we help with that?
>
> There is a PEP in SVN (see the array interface link at http://numeric.scipy.org). Karol Langner is a Google summer-of-code student working on it this summer. I'm not sure how far he'll get, but I'm hopeful.
>
> I could spend more time on it if I had funding to do it, but right now I'm up against a wall.
>
> Again, thanks for the feedback.
>
> Best,
>
> -Travis |
From: Matthew B. <mat...@gm...> - 2006-06-30 18:48:10
|
Just one more vote for float. On the basis that Travis mentioned, of all those first-timers downloading, trying, finding something they didn't expect that was rather confusing, and giving up. |
From: Alan G I. <ai...@am...> - 2006-06-30 18:55:00
|
On Fri, 30 Jun 2006, Jonathan Taylor apparently wrote:
> In general though I agree that this is a now or never change.

Sasha has also made that argument. I see one possible additional strategy.

I think everyone agrees that the long view is important. Now even Sasha agrees that float64 is the best default. Suppose

1. float64 is the ideal default (I agree with this)
2. there is substantial concern about the change of default on extant code for the unwary

One approach proposed is to include a different function definition in a compatibility module. This seems acceptable to me, but as Sasha notes it is not without drawbacks.

Here is another possibility: transition by requiring an explicit data type for some period of time (say, 6-12 months). After that time, provide the default of float64. This would require some short term pain, but for the long term gain of the desired outcome.

Just a thought,
Alan Isaac

PS I agree with Sasha's following observations: "arrays other than float64 are more of the hard-hat area and their properties may be surprising to the novices. Exposing novices to non-float64 arrays through default constructors is a bad thing. ... No one expects that their Numeric or numarray code will work in numpy 1.0 without changes, but I don't think people will tolerate major breaks in backward compatibility in the future releases. ... If we decide to change the default, let's do it everywhere including array constructors and arange." |
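One way to picture Alan's interim "require an explicit data type" stage is a thin wrapper that warns whenever the dtype is omitted. The sketch below is hypothetical, not anything proposed as actual NumPy code, and the warning text is invented for illustration:

    import warnings
    import numpy

    def zeros(shape, dtype=None, **kw):
        # Transitional behavior: insist that callers state what they want.
        if dtype is None:
            warnings.warn("zeros() will default to float64 in a future release; "
                          "pass dtype explicitly during the transition",
                          DeprecationWarning, stacklevel=2)
            dtype = int   # keep the old default while the warning is active
        return numpy.zeros(shape, dtype=dtype, **kw)

After the transition window, the warning branch would simply be replaced by dtype = float.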
From: Travis O. <oli...@ee...> - 2006-06-30 18:55:31
|
Jonathan Taylor wrote:
> +1 for some sort of float. I am a little confused as to why Float64
> is a particularly good choice. Can someone explain in more detail?
> Presumably this is the most sensible ctype and translates to a python
> float well?

O.K. I'm convinced that we should change to float as the default, but *everywhere*, as Sasha says.

We will provide two tools to make the transition easier.

1) The numpy.oldnumeric sub-package will contain definitions of the changed functions that keep the old defaults (integer). This is what convertcode substitutes for "import Numeric" calls, so future users who make the transition won't really notice.

2) A function/script that can be run to convert all type-less uses of the changed functions to explicitly insert dtype=int.

Yes, it will be a bit painful (I made the change and count 6 failures in the NumPy tests and 34 in SciPy). But, it sounds like there is support for doing it. And yes, we must do it prior to 1.0 if we do it at all.

Comments?

-Travis |
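A very rough sketch of what tool (2) could do, assuming the simple case of calls with no nested function calls in their arguments; a real converter would need to parse source much more carefully, and the helper name below is invented for illustration:

    import re

    FUNCS = ("zeros", "ones", "empty")   # functions whose default is changing

    def insert_dtype_int(source):
        """Rewrite e.g. 'zeros((3, 4))' as 'zeros((3, 4), dtype=int)'."""
        pattern = re.compile(r"\b(%s)\(([^()]*(?:\([^()]*\)[^()]*)*)\)"
                             % "|".join(FUNCS))
        def fix(match):
            name, args = match.group(1), match.group(2)
            if not args.strip() or "dtype" in args:
                return match.group(0)      # empty or already explicit: leave alone
            return "%s(%s, dtype=int)" % (name, args)
        return pattern.sub(fix, source)

    print(insert_dtype_int("a = zeros((3, 4))\nb = ones(5, dtype=float)"))
    # a = zeros((3, 4), dtype=int)
    # b = ones(5, dtype=float)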
From: Alan G I. <ai...@am...> - 2006-06-30 19:03:30
|
On Fri, 30 Jun 2006, Travis Oliphant apparently wrote:
> I'm convinced that we should change to float as the
> default, but everywhere as Sasha says.

Even better!

Cheers,
Alan Isaac |
From: Robert K. <rob...@gm...> - 2006-06-30 19:04:19
|
Travis Oliphant wrote:
> Comments?

Whatever else you do, leave arange() alone. It should never have accepted floats in the first place.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco |
From: Travis O. <oli...@ee...> - 2006-06-30 19:25:39
|
Robert Kern wrote:
> Travis Oliphant wrote:
>
>> Comments?
>
> Whatever else you do, leave arange() alone. It should never have accepted floats
> in the first place.

Actually, Robert makes a good point. arange with floats is problematic. We should direct people to linspace instead of changing the default of arange. Most new users will probably expect arange to return a type similar to Python's range, which is int.

Also: keeping arange as ints reduces the number of errors from the change in the unit tests to

  2 in NumPy
  3 in SciPy

So, I think from both a pragmatic and an idealized standpoint, arange should stay with the default of ints. People who want arange to return floats should be directed to linspace.

-Travis |
From: Scott R. <sr...@nr...> - 2006-06-30 19:45:07
|
On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote:
> Robert Kern wrote:
>
> > Whatever else you do, leave arange() alone. It should never have accepted floats
> > in the first place.
>
> Actually, Robert makes a good point. arange with floats is
> problematic. We should direct people to linspace instead of changing
> the default of arange. Most new users will probably expect arange to
> return a type similar to Python's range which is int.
...
> So, I think from both a pragmatic and idealized standpoint, arange
> should stay with the default of ints. People who want arange to return
> floats should be directed to linspace.

I agree that arange with floats is problematic. However, if you want, for example, arange(10.0) (as I often do), you have to do linspace(0.0, 9.0, 10), which is very un-pythonic and not at all what a new user would expect...

I think of linspace as a convenience function, not as a replacement for arange with floats.

Scott

--
Scott M. Ransom            Address: NRAO
Phone: (434) 296-0320               520 Edgemont Rd.
email: sr...@nr...                   Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 |
From: Robert K. <rob...@gm...> - 2006-06-30 19:54:53
|
Scott Ransom wrote:
> On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote:
>> Robert Kern wrote:
>>
>>> Whatever else you do, leave arange() alone. It should never have accepted floats
>>> in the first place.
>>
>> Actually, Robert makes a good point. arange with floats is
>> problematic. We should direct people to linspace instead of changing
>> the default of arange. Most new users will probably expect arange to
>> return a type similar to Python's range which is int.
> ...
>> So, I think from both a pragmatic and idealized standpoint, arange
>> should stay with the default of ints. People who want arange to return
>> floats should be directed to linspace.
>
> I agree that arange with floats is problematic. However,
> if you want, for example, arange(10.0) (as I often do), you have
> to do linspace(0.0, 9.0, 10), which is very un-pythonic and not
> at all what a new user would expect...
>
> I think of linspace as a convenience function, not as a
> replacement for arange with floats.

I don't mind arange(10.0) so much, now that it exists. I would mind, a lot, about arange(10) returning a float64 array. Similarity to the builtin range() is much more important in my mind than an arbitrary "consistency" with ones() and zeros().

It's arange(0.0, 1.0, 0.1) that I think causes the most problems with arange and floats.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
  -- Umberto Eco |
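To make Robert's last point concrete: the length of a float arange depends on how (stop - start)/step rounds, so near-boundary cases can gain or lose a point, while linspace takes the count and the endpoints explicitly. An illustrative session (not from the thread; exact printing and borderline results vary by platform and version):

    >>> from numpy import arange, linspace
    >>> arange(0.0, 1.0, 0.1)    # whether 1.0 sneaks in depends on accumulated rounding
    array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9])
    >>> linspace(0.0, 0.9, 10)   # count and endpoints are stated, no rounding surprises
    array([ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5,  0.6,  0.7,  0.8,  0.9])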
From: Travis O. <oli...@ee...> - 2006-06-30 20:11:23
|
Scott Ransom wrote:
> On Fri, Jun 30, 2006 at 01:25:23PM -0600, Travis Oliphant wrote:
>
>> Robert Kern wrote:
>>
>>> Whatever else you do, leave arange() alone. It should never have accepted floats
>>> in the first place.
>>
>> Actually, Robert makes a good point. arange with floats is
>> problematic. We should direct people to linspace instead of changing
>> the default of arange. Most new users will probably expect arange to
>> return a type similar to Python's range which is int.
>
> ...
>
>> So, I think from both a pragmatic and idealized standpoint, arange
>> should stay with the default of ints. People who want arange to return
>> floats should be directed to linspace.

I should have worded this as:

"People who want arange to return floats *as a default* should be directed to linspace"

So, basically, arange is not going to change.

Because of this, shifting over was a cinch. I still need to write the convert-script code that inserts dtype=int in routines that use old defaults: *plea* anybody want to write that??

-Travis |
From: Sasha <nd...@ma...> - 2006-06-30 22:42:02
|
On 6/30/06, Travis Oliphant <oli...@ee...> wrote:
> ... I still need to write the
> convert-script code that inserts dtype=int
> in routines that use old defaults: *plea* anybody want to write that??

I will try to do it at some time over the long weekend. I was bitten by the fact that the current convert-script changes anything that resembles an old typecode such as 'b' regardless of context. (I was unlucky to have database columns called 'b'!) Fixing that is very similar to the problem at hand. |
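The context problem Sasha describes is that a bare search for the old one-letter typecodes also hits unrelated strings. A more conservative rule would only rewrite the letter when it appears as a typecode argument; the sketch below is hypothetical and is not the actual convertcode.py logic, and the target type is only an assumption based on Numeric's convention that 'b' meant an unsigned byte:

    import re

    # Only treat 'b' as a typecode when it is passed as a keyword argument,
    # e.g. typecode='b' or dtype='b', never when it is an arbitrary string.
    typecode_arg = re.compile(r"((?:typecode|dtype)\s*=\s*)'b'")

    def convert_b(source):
        # numpy.uint8 assumes Numeric's 'b' = unsigned byte; adjust as needed
        return typecode_arg.sub(r"\1numpy.uint8", source)

    print(convert_b("a = zeros(10, typecode='b')"))        # rewritten
    print(convert_b("cursor.execute('select b from t')"))  # left alone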
From: Ed S. <sch...@ft...> - 2006-07-01 10:53:28
|
On 30/06/2006, at 10:11 PM, Travis Oliphant wrote:

> I should have worded this as:
>
> "People who want arange to return floats *as a default* should be
> directed to linspace"
>
> So, basically, arange is not going to change.
>
> Because of this, shifting over was a cinch. I still need to write the
> convert-script code that inserts dtype=int
> in routines that use old defaults: *plea* anybody want to write that??

Hmm ... couldn't we make the transition easier and more robust by writing compatibility interfaces for zeros, ones, empty, called e.g. intzeros, intones, intempty? These functions could of course live in oldnumeric.py. Then we can get convertcode.py to do a simple search and replace -- and, more importantly, it's easy for users to do the same manually should they choose not to use convertcode.py. I could work on this this weekend.

I'm pleased we're changing the defaults to float. The combination of the int defaults and silent downcasting with in-place operators trips me up once every few months when I forget to specify dtype=float explicitly. Another wart gone from NumPy!

I'm surprised and impressed that the community's willing to make this change after 10+ years with int defaults. I think it's a small but important improvement in usability.

-- Ed |
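The compatibility interfaces Ed proposes would be tiny; a sketch under the assumption that they simply pin the old integer default (the names intzeros/intones/intempty are his, the exact signatures here are illustrative):

    import numpy

    def intzeros(shape, **kw):
        """zeros() with the old integer default, for converted Numeric code."""
        return numpy.zeros(shape, dtype=int, **kw)

    def intones(shape, **kw):
        return numpy.ones(shape, dtype=int, **kw)

    def intempty(shape, **kw):
        return numpy.empty(shape, dtype=int, **kw)

Because each call site then names the intent explicitly, a plain textual search and replace (zeros -> intzeros, etc.) is enough to keep old code behaving as before.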
From: Charles R H. <cha...@gm...> - 2006-07-01 17:48:13
|
On 6/30/06, Robert Kern <rob...@gm...> wrote:
>
> Travis Oliphant wrote:
>
> > Comments?
>
> Whatever else you do, leave arange() alone. It should never have accepted
> floats in the first place.

Hear, hear. Using floats in arange is a lousy temptation that must be avoided. Apart from that I think that making float64 the default for most things is the right way to go. Numpy is primarily for numeric computation, and numeric computation is primarily in float64. Specialist areas like imaging can be dealt with as special cases.

BTW, can someone suggest the best way to put new code into Numpy at this point? Is there a test branch of some sort? I have some free time coming up in a few weeks and would like to do the following:

1) add a left/right option to searchsorted,
2) add faster normals to random,
3) add the MWC8222 generator to random,
4) add the kind keyword to the functional forms of sort (sort, argsort) as in numarray.

Chuck |
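For item 1), the left/right distinction concerns which side of a run of equal values the insertion index lands on. A small illustration; the keyword spelling side= is how this surfaced in later NumPy releases and is used here only for concreteness, not as a claim about the exact interface Chuck had in mind:

    >>> import numpy as np
    >>> a = np.array([1, 2, 2, 2, 3])
    >>> np.searchsorted(a, 2, side='left')    # first slot that keeps a sorted
    1
    >>> np.searchsorted(a, 2, side='right')   # slot just past the run of equal values
    4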
From: Travis O. <oli...@ie...> - 2006-07-01 18:30:20
|
Charles R Harris wrote:
>
> On 6/30/06, Robert Kern <rob...@gm...> wrote:
>
>     Travis Oliphant wrote:
>
>     > Comments?
>
>     Whatever else you do, leave arange() alone. It should never have
>     accepted floats in the first place.
>
> Hear, hear. Using floats in arange is a lousy temptation that must be
> avoided. Apart from that I think that making float64 the default for
> most things is the right way to go. Numpy is primarily for numeric
> computation, and numeric computation is primarily in float64.
> Specialist areas like imaging can be dealt with as special cases.
>
> BTW, can someone suggest the best way to put new code into Numpy at
> this point? Is there a test branch of some sort?

My favorite is to make changes in piece-meal steps and just commit them to the trunk as they get created. I think your projects 2 and 4 could be done that way.

If a change requires a more elaborate re-write, then I usually construct a branch, switch over to the branch and make changes there. When I'm happy with the result, the branch is merged back into the trunk.

Be careful with branches though. It is easy to get too far away from main-line trunk development (although at this point the trunk should be stabilizing toward release 1.0).

1) To construct a branch (just a copy of the trunk):

(Make note of the revision number when you create the branch -- you can get it later but it's easier to just record it at copy.)

svn cp http://svn.scipy.org/svn/numpy/trunk http://svn.scipy.org/svn/numpy/branches/<somename>

2) To switch to using the branch:

svn switch http://svn.scipy.org/svn/numpy/branches/<somename>

You can also just have another local directory where you work on the branch so that you still have a local directory with the main trunk. Just check out the branch:

svn co http://svn.scipy.org/svn/numpy/branches/<somename> mybranch

3) To merge back:

a) Get back to the trunk repository:

svn switch http://svn.scipy.org/svn/numpy/trunk

or go to your local copy of the trunk and do an svn update

b) Merge the changes from the branch back in to your local copy of the trunk:

svn merge -r <branch#>:HEAD http://svn.scipy.org/svn/numpy/branches/<somename>

This assumes that <branch#> is the revision number when the branch is created.

c) You have to now commit your local copy of the trunk (after you've dealt with and resolved any potential conflicts).

If your branch continues for a while, you may need to update your branch with changes that have happened in the main-line trunk. This will make it easier to merge back when you are done.

To update your branch with changes from the main trunk do:

svn merge -r <lastmerge#>:<end#> http://svn.scipy.org/svn/numpy/trunk

where <lastmerge#> is the last revision number you used to update your branch (or the revision number at which you made your branch) and <end#> is the ending revision number for changes in the trunk you'd like to merge.

Here is a good link explaining the process more:

http://svnbook.red-bean.com/en/1.1/ch04s03.html

-Travis |
From: Charles R H. <cha...@gm...> - 2006-07-01 18:50:12
|
Thanks Travis,

Your directions are very helpful and much appreciated.

Chuck

On 7/1/06, Travis Oliphant <oli...@ie...> wrote:
> Charles R Harris wrote:
> > On 6/30/06, Robert Kern <rob...@gm...> wrote:
> >
> >     Travis Oliphant wrote:
> >
> >     > Comments?
> >
> >     Whatever else you do, leave arange() alone. It should never have accepted floats in the first place.
> >
> > Hear, hear. Using floats in arange is a lousy temptation that must be avoided. Apart from that I think that making float64 the default for most things is the right way to go. Numpy is primarily for numeric computation, and numeric computation is primarily in float64. Specialist areas like imaging can be dealt with as special cases.
> >
> > BTW, can someone suggest the best way to put new code into Numpy at this point? Is there a test branch of some sort?
>
> My favorite is to make changes in piece-meal steps and just commit them to the trunk as they get created. I think your projects 2 and 4 could be done that way.
>
> If a change requires a more elaborate re-write, then I usually construct a branch, switch over to the branch and make changes there. When I'm happy with the result, the branch is merged back into the trunk.
>
> Be careful with branches though. It is easy to get too far away from main-line trunk development (although at this point the trunk should be stabilizing toward release 1.0).
>
> 1) To construct a branch (just a copy of the trunk):
>
> (Make note of the revision number when you create the branch -- you can get it later but it's easier to just record it at copy.)
>
> svn cp http://svn.scipy.org/svn/numpy/trunk http://svn.scipy.org/svn/numpy/branches/<somename>
>
> 2) To switch to using the branch:
>
> svn switch http://svn.scipy.org/svn/numpy/branches/<somename>
>
> You can also just have another local directory where you work on the branch so that you still have a local directory with the main trunk. Just check out the branch:
>
> svn co http://svn.scipy.org/svn/numpy/branches/<somename> mybranch
>
> 3) To merge back:
>
> a) Get back to the trunk repository:
>
> svn switch http://svn.scipy.org/svn/numpy/trunk
>
> or go to your local copy of the trunk and do an svn update
>
> b) Merge the changes from the branch back in to your local copy of the trunk:
>
> svn merge -r <branch#>:HEAD http://svn.scipy.org/svn/numpy/branches/<somename>
>
> This assumes that <branch#> is the revision number when the branch is created.
>
> c) You have to now commit your local copy of the trunk (after you've dealt with and resolved any potential conflicts).
>
> If your branch continues for a while, you may need to update your branch with changes that have happened in the main-line trunk. This will make it easier to merge back when you are done.
>
> To update your branch with changes from the main trunk do:
>
> svn merge -r <lastmerge#>:<end#> http://svn.scipy.org/svn/numpy/trunk
>
> where <lastmerge#> is the last revision number you used to update your branch (or the revision number at which you made your branch) and <end#> is the ending revision number for changes in the trunk you'd like to merge.
>
> Here is a good link explaining the process more:
>
> http://svnbook.red-bean.com/en/1.1/ch04s03.html
>
> -Travis |
From: Travis O. <oli...@ie...> - 2006-07-01 18:53:45
|
Charles R Harris wrote:
> Thanks Travis,
>
> Your directions are very helpful and much appreciated.

I placed these instructions at

http://projects.scipy.org/scipy/numpy/wiki/MakingBranches

Please make any changes needed to that wiki page.

-Travis |