From: Bill B. <wb...@gm...> - 2006-10-13 01:49:54
On 10/12/06, Stefan van der Walt <st...@su...> wrote:
> On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote:
> > On 10/11/06, Bill Baxter <wb...@gm...> wrote:
> I tried to explain the argument at
>
> http://www.scipy.org/NegativeSquareRoot

The proposed fix for those who want sqrt(-1) to return 1j is:

    from numpy.lib import scimath as SM
    SM.sqrt(-1)

But that creates a new namespace alias, different from numpy.  So I'll
call numpy.array() to create a new array, but SM.sqrt() when I want a
square root.  Am I wrong to want some simple way to change the behavior
of numpy.sqrt itself?

It seems you can get that effect via something like:

    for n in numpy.lib.scimath.__all__:
        numpy.__dict__[n] = numpy.lib.scimath.__dict__[n]

If that sort of function were available as "numpy.use_scimath()", then
folks who want numpy to behave like scipy could achieve that with just
one line at the top of their files.  Importing under a different name
doesn't quite achieve the goal of making that behaviour numpy's
"default".

I guess I'm thinking mostly of the educational uses of numpy, where you
may have users who haven't yet learned much about numerical computing.
I can just imagine the instructor starting off by saying "OK everyone,
we're going to learn numpy today!  First, everyone type 'import numpy'
and 'from numpy.lib import scimath as SM' -- don't worry about the parts
you don't understand."  Whereas 'import numpy' followed by
'numpy.use_scimath()' seems easier to explain and much less intimidating
as your first two lines of numpy to learn.

Or is that just a bad idea for some reason?

--bb
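For concreteness, a minimal sketch of what the hypothetical
numpy.use_scimath() helper suggested above might look like, built from
the rebinding loop shown in the message (use_scimath is not an existing
numpy function):

    # Hypothetical helper -- "use_scimath" is not an actual numpy
    # function, just an illustration of the idea proposed above.
    import numpy
    import numpy.lib.scimath as scimath

    def use_scimath():
        """Rebind numpy's math names to their scimath counterparts."""
        for name in scimath.__all__:
            setattr(numpy, name, getattr(scimath, name))

    use_scimath()
    numpy.sqrt(-1)      # now returns 1j instead of nan

Because the rebinding happens on the shared numpy module object, every
other module that imports numpy in the same process would see the
change (a point discussed later in this thread).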
From: Charles R H. <cha...@gm...> - 2006-10-13 02:15:56
On 10/12/06, Bill Baxter <wb...@gm...> wrote:
> The proposed fix for those who want sqrt(-1) to return 1j is:
>
>     from numpy.lib import scimath as SM
>     SM.sqrt(-1)
>
> But that creates a new namespace alias, different from numpy.  So I'll
> call numpy.array() to create a new array, but SM.sqrt() when I want a
> square root.  Am I wrong to want some simple way to change the behavior
> of numpy.sqrt itself?
>
> It seems you can get that effect via something like:
>
>     for n in numpy.lib.scimath.__all__:
>         numpy.__dict__[n] = numpy.lib.scimath.__dict__[n]

I don't like either of those ideas, although the second seems
preferable.  I think it better to provide an efficient way of calling a
sqrt routine that accepts negative floats and returns complex numbers.
The behaviour could be chosen by keyword, by specially named routines,
or maybe even by some global flag, but I don't think it is asking too
much for students to learn that sqrt(-1) doesn't exist as a real number,
and that efficient computation uses real whenever possible because it is
a) smaller and b) faster.  That way we also avoid having software that
only works with scimath but not with numpy.

Chuck
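As an illustration of the keyword idea mentioned above, a wrapper along
these lines could let callers opt in to complex results explicitly; the
allow_complex argument is hypothetical, not an existing numpy API:

    # Hypothetical opt-in wrapper -- not an existing numpy API.
    import numpy

    def sqrt(x, allow_complex=False):
        x = numpy.asarray(x)
        if allow_complex and numpy.any(x < 0):
            # promote to complex only when the caller explicitly asks
            return numpy.sqrt(x.astype(complex))
        return numpy.sqrt(x)

    sqrt(-1.0)                      # nan (with the usual invalid-value warning)
    sqrt(-1.0, allow_complex=True)  # 1j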
From: Bill B. <wb...@gm...> - 2006-10-13 03:11:10
On 10/13/06, Charles R Harris <cha...@gm...> wrote:
> I don't like either of those ideas, although the second seems
> preferable.  I think it better to provide an efficient way of calling a
> sqrt routine that accepts negative floats and returns complex numbers.
> The behaviour could be chosen by keyword, by specially named routines,
> or maybe even by some global flag,

We have the "specially named routines" way already:
numpy.lib.scimath.sqrt.

> but I don't think it is asking too much for students to learn that
> sqrt(-1) doesn't exist as a real number, and that efficient computation
> uses real whenever possible because it is a) smaller and b) faster.
> That way we also avoid having software that only works with scimath but
> not with numpy.

I think efficiency is not a very good argument for the default behavior
here, because -- let's face it -- if efficient execution were high on
your priority list, you wouldn't be using Python.  And even if you do
care about efficiency, one of the top rules of optimization is to first
get it working, then get it working fast.

Really, I'm just playing devil's advocate here, because I don't work
with complex numbers (I see quaternions more often than complex
numbers).  But I would be willing to do something like

    numpy.use_realmath()

in my code if it would make numpy more palatable to a wider audience.  I
wouldn't like it, however, if I had to do some import thing where I have
to remember forever after that I should type 'numpy.tanh()' but
'realmath.arctanh()'.

Anyway, it seems like the folks who care about performance are the ones
who will generally be more willing to make tweaks like that.  But that's
about all I have to say on this, since the status quo works fine for me,
so I'll be quiet.  It just seems like the non-status-quo'ers here have
some good points.  I taught intro to computer science to non-majors one
semester, and I know I would not want to have to confront all the issues
of numerical computing right off the bat if I were just trying to teach
people how to do some math.

--bb
From: Tim H. <tim...@ie...> - 2006-10-13 05:14:11
Bill Baxter wrote:
> I think efficiency is not a very good argument for the default behavior
> here, because -- let's face it -- if efficient execution were high on
> your priority list, you wouldn't be using Python.

I care very much about efficiency where it matters, which is only in a
tiny fraction of my code.  For the stuff that numpy does well it's
pretty efficient, and when that's not enough I can drop down to C, but I
don't have to do that often.  In fact, I've argued, and still believe,
that Python is frequently *more* efficient than C, given finite
developer time, since it's easier to get the algorithms correct when
writing in Python.

> And even if you do care about efficiency, one of the top rules of
> optimization is to first get it working, then get it working fast.

IMO, the current behavior is more likely to give you working code than
auto-promoting to complex based on value.  That concerns me more than
efficiency.  The whole auto-promotion thing looks like a good way to
introduce data-dependent bugs that don't surface till late in the game
and are hard to track down.  In contrast, when the current scheme causes
a problem it should surface almost immediately.  I would not use
scipy.sqrt in code, even if the efficiency were the same, for this
reason.  I can see the attraction of the auto-promoting version for
teaching purposes and possibly for throwaway scripts, but not for "real"
code.

> Really, I'm just playing devil's advocate here, because I don't work
> with complex numbers (I see quaternions more often than complex
> numbers).  But I would be willing to do something like
> numpy.use_realmath() in my code if it would make numpy more palatable
> to a wider audience.  I wouldn't like it, however, if I had to do some
> import thing where I have to remember forever after that I should type
> 'numpy.tanh()' but 'realmath.arctanh()'.

As I mentioned in my other message, the way to do this is to have a
different entry point with different behavior.

> Anyway, it seems like the folks who care about performance are the ones
> who will generally be more willing to make tweaks like that.

It's not just about performance, though.  It's also about correctness,
or more accurately, resistance to bugs.

> But that's about all I have to say on this, since the status quo works
> fine for me, so I'll be quiet.  It just seems like the
> non-status-quo'ers here have some good points.  I taught intro to
> computer science to non-majors one semester, and I know I would not
> want to have to confront all the issues of numerical computing right
> off the bat if I were just trying to teach people how to do some math.

There's probably nothing wrong with having a package like this; it just
shouldn't be numpy.  It's easy enough to construct such a beast for
yourself -- it should take just a few lines of Python.  Since what a
beginner's package should look like probably varies from teacher to
teacher, let them construct a few.  If they all, or at least most of
them, have the same ideas about what constitutes such a package, that
might be the time to think about officially supporting a separate entry
point with the modified behaviour.  For the moment, things look fine.

-tim
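A small example of the value-dependent promotion being warned about
above, using the existing numpy.lib.scimath.sqrt:

    # The result dtype depends on the data, not on the code:
    import numpy
    from numpy.lib import scimath

    a = scimath.sqrt(numpy.array([4.0, 9.0]))    # all non-negative -> float64 result
    b = scimath.sqrt(numpy.array([4.0, -9.0]))   # one negative     -> complex128 result

    # Downstream code that assumes a real dtype will work on the first
    # dataset and break (or silently change meaning) on the second.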
From: Tim H. <tim...@ie...> - 2006-10-13 04:43:26
Bill Baxter wrote:
> The proposed fix for those who want sqrt(-1) to return 1j is:
>
>     from numpy.lib import scimath as SM
>     SM.sqrt(-1)
>
> But that creates a new namespace alias, different from numpy.  So I'll
> call numpy.array() to create a new array, but SM.sqrt() when I want a
> square root.  Am I wrong to want some simple way to change the behavior
> of numpy.sqrt itself?
>
> It seems you can get that effect via something like:
>
>     for n in numpy.lib.scimath.__all__:
>         numpy.__dict__[n] = numpy.lib.scimath.__dict__[n]
>
> If that sort of function were available as "numpy.use_scimath()", then
> folks who want numpy to behave like scipy could achieve that with just
> one line at the top of their files.  Importing under a different name
> doesn't quite achieve the goal of making that behaviour numpy's
> "default".
>
> I guess I'm thinking mostly of the educational uses of numpy, where you
> may have users who haven't yet learned much about numerical computing.
>
> Or is that just a bad idea for some reason?

Isn't that just going to make your students *more* confused later, when
they run into the standard behavior of numpy?

For this sort of thing, I would just make a new module to pull together
the functions I want and use that instead.  It's then easy to explain
that this new module bbeconf (Bill Baxter's Excellent Collection Of
Numeric Functions) is actually an amalgamation of stuff from multiple
sources:

    # bbeconf.py
    from numpy import *
    from numpy.lib.scimath import sqrt
    # possibly some other stuff to correctly handle subpackages...

-tim
From: Bill B. <wb...@gm...> - 2006-10-13 05:38:50
On 10/13/06, Tim Hochberg <tim...@ie...> wrote:
> For this sort of thing, I would just make a new module to pull together
> the functions I want and use that instead.  It's then easy to explain
> that this new module bbeconf (Bill Baxter's Excellent Collection Of
> Numeric Functions) is actually an amalgamation of stuff from multiple
> sources:
>
>     # bbeconf.py
>     from numpy import *
>     from numpy.lib.scimath import sqrt
>     # possibly some other stuff to correctly handle subpackages...

That does sound like a good way to do it.  Then you just tell your users
to import 'eduNumpy' rather than numpy, and you're good to go.  I've
added that suggestion to http://www.scipy.org/NegativeSquareRoot

I'd like to ask one basic Python question related to my previous
suggestion of doing things like "numpy.sqrt = numpy.lib.scimath.sqrt":
does that make it so that any module importing numpy in the same program
will now see the altered sqrt function?  E.g. in my program I do
"import A, B".  Module A alters numpy.sqrt.  Does that also modify how
module B sees numpy.sqrt?

If so, then that's a very good reason not to do it that way.

I've heard people using the term "monkey-patch" before.  Is that what
this is?

--bb
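For illustration, the 'eduNumpy' wrapper module mentioned above could be
as small as the following sketch; the particular set of re-exported
names is only a guess, not a prescribed list:

    # eduNumpy.py -- hypothetical teaching wrapper built as Tim suggests
    from numpy import *                             # the usual numpy names
    from numpy.lib.scimath import sqrt, log, power  # value-aware replacements

    # Student code would then be simply:
    #     import eduNumpy as np
    #     np.sqrt(-1)        # -> 1j
    #     np.array([1, 2])   # ordinary numpy arrays, as usual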
From: Travis O. <oli...@ie...> - 2006-10-13 06:46:21
Bill Baxter wrote:
> I'd like to ask one basic Python question related to my previous
> suggestion of doing things like "numpy.sqrt = numpy.lib.scimath.sqrt":
> does that make it so that any module importing numpy in the same
> program will now see the altered sqrt function?  E.g. in my program I
> do "import A, B".  Module A alters numpy.sqrt.  Does that also modify
> how module B sees numpy.sqrt?
>
> If so, then that's a very good reason not to do it that way.
>
> I've heard people using the term "monkey-patch" before.  Is that what
> this is?
From: Tim H. <tim...@ie...> - 2006-10-13 13:44:51
Bill Baxter wrote:
> That does sound like a good way to do it.  Then you just tell your
> users to import 'eduNumpy' rather than numpy, and you're good to go.
> I've added that suggestion to http://www.scipy.org/NegativeSquareRoot
>
> I'd like to ask one basic Python question related to my previous
> suggestion of doing things like "numpy.sqrt = numpy.lib.scimath.sqrt":
> does that make it so that any module importing numpy in the same
> program will now see the altered sqrt function?  E.g. in my program I
> do "import A, B".  Module A alters numpy.sqrt.  Does that also modify
> how module B sees numpy.sqrt?

Indeed it does.  Module imports are cached in sys.modules, so numpy is
only imported once.  (With some effort you can usually get your own
private copy of a module that you can mess with to your heart's content,
but I generally wouldn't recommend it.)

> If so, then that's a very good reason not to do it that way.
>
> I've heard people using the term "monkey-patch" before.  Is that what
> this is?

I believe that is what the term refers to, although I'm not absolutely
certain.

-tim
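A short, self-contained sketch of the caching behaviour described above:

    # Because "import numpy" always returns the same cached module object
    # from sys.modules, rebinding an attribute on it is visible to every
    # other importer in the same process.
    import sys
    import numpy
    import numpy.lib.scimath as scimath

    assert sys.modules['numpy'] is numpy   # one shared module object

    numpy.sqrt = scimath.sqrt              # the monkey-patch in question
    print(numpy.sqrt(-1))                  # 1j -- all importers now see this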