From: Hanno K. <kl...@ph...> - 2006-10-12 07:53:11
|
Travis, thank you, setting the -fPIC option indeed solved my problem. Just for future reference (if anybody else needs it), I added the -fPIC flag to the compiler flags in Make.inc in the atlas build directory, as setting CCFLAGS somehow didn't seem to be successful. This might not be elegant but it worked.

Best regards,
Hanno

Travis Oliphant <oli...@ee...> said:
> Hanno Klemm wrote:
> >Hi,
> >
> >I don't know if this is a bug or just me doing something wrong (I
> >suspect the latter). I try to compile numpy-1.0rc1 with python2.5 and
> >atlas 3.7.17.
> >
> >I have built the atlas library myself, it doesn't give any errors
> >under make test or make pttest, so it seems to be okay. If I try to
> >build numpy I get the following error:
> >
> >creating build/temp.linux-x86_64-2.5/numpy/core/blasdot
> >compile options: '-DATLAS_INFO="\"3.7.17\"" -Inumpy/core/blasdot
> >-I/scratch/python2.5/include -Inumpy/core/include
> >-Ibuild/src.linux-x86_64-2.5/numpy/core -Inumpy/core/src
> >-Inumpy/core/include -I/scratch/python2.5/include/python2.5 -c'
> >gcc: numpy/core/blasdot/_dotblas.c
> >gcc -pthread -shared
> >build/temp.linux-x86_64-2.5/numpy/core/blasdot/_dotblas.o
> >-L/scratch/python2.5/lib -lcblas -latlas -o
> >build/lib.linux-x86_64-2.5/numpy/core/_dotblas.so
> >/usr/bin/ld: /scratch/python2.5/lib/libcblas.a(cblas_dgemm.o):
> >relocation R_X86_64_32 can not be used when making a shared object;
> >recompile with -fPIC
>
> This may be part of your problem. It looks like the linker is having
> a hard time making use of your compiled extension in a shared library.
> Perhaps you should make sure -fPIC is on when you compile atlas (I'm not
> sure how to do that --- perhaps setting the CCFLAGS environment variable
> to include -fPIC would help).
>
> -Travis
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

--
Hanno Klemm
kl...@ph... |
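In recipe form, the fix above looks roughly like this (a sketch, not a tested build script: the configure flag syntax varies between ATLAS versions, so verify against your ATLAS's INSTALL notes):

```shell
# Rebuild ATLAS with position-independent code so it can be linked
# into Python extension modules (shared objects) on x86-64.

# Option A (what worked above): edit Make.inc in the ATLAS build
# directory and append -fPIC to the compiler flag variables, e.g.
#   CFLAGS = ... -fPIC
# then rebuild and reinstall ATLAS.

# Option B: pass the flag at configure time; some ATLAS versions
# accept something along the lines of
#   ./configure -Fa alg -fPIC
# (flag syntax is an assumption -- check your version's docs).

# Afterwards, rebuild numpy from a clean tree so the old objects
# are not reused:
rm -rf build
python setup.py build
```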
From: Travis O. <oli...@ie...> - 2006-10-12 07:50:06
|
Travis Oliphant wrote:
> Now, it would be possible to give ufuncs a dtype keyword argument that
> allowed you to specify which underlying loop was to be used for the
> calculation. That way you wouldn't have to convert inputs to complex
> numbers before calling the ufunc, but could let the ufunc do it in
> chunks during the loop. That is certainly a reasonable enhancement:
>
> sqrt(a, dtype=complex).
>
> This no-doubt has a "library-ish"-feeling, but that is what NumPy is.
> If such a change is desirable, I don't think it would be much to
> implement it.

This could be implemented, but only with a version number increase in the C-API (we would have to change the C signature of the ufunc tp_call). This would mean that the next release of NumPy would be binary incompatible with packages built against previous NumPy releases. I've really been trying to avoid doing that. So, unless there are strong requests for this feature that outweigh the negatives of re-building dependent packages, then this feature will have to wait.

OTOH: I suppose it could be implemented in a different way (using indexing or a method call): sqrt[complex](a) --- I remember Tim suggesting some use for indexing on ufuncs earlier, though.

-Travis |
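At the Python level, the proposed dtype keyword can be pictured with a toy scalar model (an invented sketch using math/cmath; the actual proposal concerns the C ufunc loop machinery, not this wrapper):

```python
import math
import cmath

def sqrt(x, dtype=None):
    """Toy model of the proposed sqrt(a, dtype=complex): the caller
    requests the underlying loop, instead of pre-converting the input
    to complex before the call."""
    if dtype is complex:
        return cmath.sqrt(x)   # complex loop: sqrt(-1) -> 1j
    return math.sqrt(x)        # float loop: sqrt(-1) raises

print(sqrt(4))                   # 2.0
print(sqrt(-1, dtype=complex))   # 1j
```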
From: Fernando P. <fpe...@gm...> - 2006-10-12 07:46:34
|
On 10/12/06, Travis Oliphant <oli...@ie...> wrote:
> > I'd like to pitch in (again) on this issue, but I'll try to make sure
> > that it's clear that I'm NOT arguing about sqrt() in particular, one
> > way or another.
>
> Fernando,
>
> I don't disagree with you in principle. I don't think anybody does. I
> think we should try to keep the interfaces and expectations of scipy and
> numpy the same.

OK, I'm glad to hear that. For a moment during the discussion I misunderstood your intent and thought you did disagree with this, which worried me. Sorry for the confusion and any noise caused.

I realize the sqrt() topic is a tricky one which I've deliberately sidestepped, since I wasn't too interested in that particular case but rather in the general principle. I'll leave that discussion to continue on scipy-dev for those who are interested in it.

Regards,
f |
From: <pe...@ce...> - 2006-10-12 07:45:46
|
PS: I am still sending this message to the numpy list only because the proposal below affects numpy code, not scipy code.

I think Fernando's points make sense; numpy.foo(x) != scipy.foo(x) can cause confusion and frustration both among new numpy/scipy users and developers (who need to find explanations for the choices made). So, let me propose the following solution so that all parties will get the same results without sacrificing numpy.sqrt speed on non-negative input and scipy.sqrt backward compatibility.

Define numpy.sqrt as follows:

    def sqrt(x):
        r = nx.sqrt(x)
        if nx.nan in r:
            i = nx.where(nx.isnan(r))
            r = _tocomplex(r)
            r[i] = nx.sqrt(_tocomplex(x[i]))
        return r

and define numpy.sqrtp that takes only non-negative input; this is for those users who expect sqrt to fail on negative input (as Numeric.sqrt and math.sqrt do).

Pearu |
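A scalar pure-Python model of this upcast-on-demand idea (sqrt_upcast is an invented name; note that an array version must detect negatives via isnan()-style masking, since nan != nan makes a plain `nan in r` membership test unreliable):

```python
import math
import cmath

def sqrt_upcast(x):
    """Stay real when possible; upcast to complex only when the real
    result would be nan (i.e. negative input).  Scalar sketch of the
    proposal above -- not numpy's actual behavior."""
    if x >= 0:
        return math.sqrt(x)   # fast path: result stays a float
    return cmath.sqrt(x)      # negative input: upcast to complex

print(sqrt_upcast(4))    # 2.0
print(sqrt_upcast(-4))   # 2j
```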
From: Travis O. <oli...@ie...> - 2006-10-12 07:40:30
|
David Goldsmith wrote: >> >> > I don't use scipy (and don't want to because of the overhead) but it > sounds like I should because if I'm taking the square root of a variable > whose value at run time happens to be real but less than zero, I *want* > the language I'm using to return an imaginary; in other words, it's not > the scipy behavior which "scares" me, its the numpy (which I do/have > been using) behavior. O.K. Well the functions you want are in numpy.lib.scimath. I should have directed you there. You actually don't need scipy installed at all. Just import sqrt from numpy.lib.scimath. I'm sorry I misunderstood the issue. -Travis |
From: Travis O. <oli...@ie...> - 2006-10-12 07:31:01
|
> I'd like to pitch in (again) on this issue, but I'll try to make sure
> that it's clear that I'm NOT arguing about sqrt() in particular, one
> way or another.

Fernando,

I don't disagree with you in principle. I don't think anybody does. I think we should try to keep the interfaces and expectations of scipy and numpy the same. Unfortunately, we have competing issues in this particular case (in the case of the functions in numpy.lib.scimath). Nobody has suggested an alternative to the current situation in SVN that is satisfying to enough users.

Here is the situation.

1) NumPy ufuncs never up-cast to complex numbers without the user explicitly requesting it, so sqrt(-1) creates a floating-point error condition which is either caught or ignored according to the user's desires. To get complex results from sqrt you have to put in complex numbers to begin with. That's inherent in the way ufuncs work. This is long-standing behavior that has good reasons for its existence. I don't see this changing. That's why I suggested to move the discussion over to scipy (we have the fancy functions in NumPy, they are just not in the top-level name-space).

Now, it would be possible to give ufuncs a dtype keyword argument that allowed you to specify which underlying loop was to be used for the calculation. That way you wouldn't have to convert inputs to complex numbers before calling the ufunc, but could let the ufunc do it in chunks during the loop. That is certainly a reasonable enhancement: sqrt(a, dtype=complex). This no-doubt has a "library-ish"-feeling, but that is what NumPy is. If such a change is desirable, I don't think it would be much to implement it.

2) In SciPy 0.3.2 the top-level math functions were overloaded with these fancy argument-checking versions, so that scipy.sqrt(-1) would return 1j. This was done precisely to attract users like David G. who don't mind data-type conversions on the fly, but prefer automatic conversion (funny being called non-sensical when I was the one who wrote those original scipy functions --- I guess I'm schizophrenic). We changed this in SciPy 0.5.1 by accident without any discussion. It was simply a by-product of moving scipy_base (including those special-handling functions) into NumPy and forgetting to import those functions again into top-level SciPy. It was an oversight that caused backward compatibility issues. So, I simply changed it back to what SciPy used to be in SVN. If we want to change SciPy, then fine, but let's move that discussion over to scipy-dev.

In short, I appreciate all the comments and the differences of opinion they point out, but they are ultimately non-productive. We can't just change top-level sqrt to be the fancy function willy-nilly. Paul says I'm nice (he's not talked to my children recently), but I'm not that much of a push-over. There are very good reasons that NumPy has the behavior it does. In addition, the fancy functions are already there in numpy, in numpy.lib.scimath. So, use them from there if you like them. Create your own little mynumpy module that does

    from numpy import *
    from numpy.lib.scimath import *

and have a ball. Python is flexible enough that the sky is not going to fall if the library doesn't do things exactly the way you would do it. We can still cooperate in areas that we agree on.

Again: put this to rest on NumPy and move the discussion over to scipy-dev.

-Travis |
From: David G. <Dav...@no...> - 2006-10-12 07:23:52
|
(Very) well said, Fernando. Thanks! DG Fernando Perez wrote: > On 10/12/06, Travis Oliphant <oli...@ie...> wrote: > > >> Why in the world does it scare you away. This makes no sense to me. >> If you don't like the scipy version don't use it. NumPy and SciPy are >> not the same thing. >> > > I'd like to pitch in (again) on this issue, but I'll try to make sure > that it's clear that I'm NOT arguing about sqrt() in particular, one > way or another. > > It's perfectly clear that numpy != scipy to all of us. And yet, I > think it is equally clear that the two are /very/ tightly related. > Scipy builds on top of numpy and it directly exposes a LOT of the > numpy API as scipy functions: > > In [21]: import numpy as n, scipy as s > > In [22]: common_names = set(dir(n)) & set(dir(s)) > > In [23]: [getattr(n,x) is getattr(s,x) for x in common_names ].count(True) > Out[23]: 450 > > In [24]: len(common_names) > Out[24]: 462 > > That's 450 objects from numpy which are directly exposed in Scipy, > while only 12 names are in both top-level namespaces and yet are > different objects. Put another way, scipy is a direct wrap of 97% of > the numpy top-level namespace. While /we/ know they are distinct > entities, to the casual user a 97% match looks pretty close to being > the same, especially when the non-identical things are all > non-numerical: > > In [27]: [x for x in common_names if getattr(n,x) is not getattr(s,x)] > Out[27]: > ['pkgload', > 'version', > '__config__', > '__file__', > '__all__', > '__doc__', > 'show_config', > '__version__', > '__path__', > '__name__', > 'info', > 'test'] > > In [32]: n.__version__,s.__version__ > Out[32]: ('1.0.dev3306', '0.5.2.dev2252') > > Basically, at least for these versions, the top-level API of scipy is > a strict superset of the numpy one for all practical purposes. 
> > I think it's fair to say that if we start sprinkling special cases > where certain objects happen to have the same name but produce > different results for the same inputs, confusion will arise. > > Please note that I see a valid reason for scipy.foo != numpy.foo when > the scipy version uses code with extra features, is faster, has > additional options, etc. But as I said in a previous message, I think > that /for the same input/, we should really try to satisfy that > > numpy.foo(x) == scipy.foo(x) (which is NOT the same as 'numpy.foo is scipy.foo') > > within reason. Obviously the scipy version may succeed where the > numpy one fails due to better algorithms, or be faster, etc. I'm > talking about a general principle here. > > I doubt I'll be able to state my point with any more clarity, so I'll > stop now. But I really believe that this particular aspect of > consistency between numpy and scipy is a /very/ important one for its > adoption in wider communities. > > Best, > > f > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > |
From: David G. <Dav...@no...> - 2006-10-12 07:19:34
|
Travis Oliphant wrote:
> David Goldsmith wrote:
>> Travis Oliphant wrote:
>>> What could be simpler? ;-)
>> Having sqrt(-1) return 1j (without having to remember that in order to
>> get this, you have to write sqrt(-1+0j) instead).
> That's exactly what scipy.sqrt(-1) does. That was my point.

But I don't want to have to use scipy (which so far I haven't needed for any other reason) just to get this one behavior. But my personal preferences aside, I say again: if numpy doesn't behave this way, you *will* be "scaring" away many potential users. If you can live with that, so be it.

DG

> -Travis |
From: David G. <Dav...@no...> - 2006-10-12 07:13:23
|
Travis Oliphant wrote: > David Goldsmith wrote: > >> Travis Oliphant wrote: >> >> >>> pe...@ce... wrote: >>> >>> >>> >>> >>>> Could sqrt(-1) made to return 1j again? >>>> >>>> >>>> >>>> >>> Not in NumPy. But, in scipy it could. >>> >>> >>> >>> >> Ohmigod!!! You are definitely going to scare away many, many potential >> users - if I wasn't obliged to use open source at work, you'd be scaring >> me away. >> > Why in the world does it scare you away. This makes no sense to me. > If you don't like the scipy version don't use it. NumPy and SciPy are > not the same thing. > I don't use scipy (and don't want to because of the overhead) but it sounds like I should because if I'm taking the square root of a variable whose value at run time happens to be real but less than zero, I *want* the language I'm using to return an imaginary; in other words, it's not the scipy behavior which "scares" me, its the numpy (which I do/have been using) behavior. To which you might say, "Well, if that's what you want, and you have Matlab (as you've said you do), then just use that." But that's precisely the point: people who don't want to be bothered with having to be "a bit more care[ful]" (as Chuck put it) - and who can afford it - are going to be inclined to choose Matlab over numpy. Perhaps one doesn't care - in the grand scheme of things, it certainly doesn't matter - but I think that you all should be aware that this numpy "feature" will be seen by many as more than just a nuisance. DG |
From: Scott S. <sin...@uk...> - 2006-10-12 07:12:34
|
Fernando Perez wrote:
> Please note that I see a valid reason for scipy.foo != numpy.foo when
> the scipy version uses code with extra features, is faster, has
> additional options, etc. But as I said in a previous message, I think
> that /for the same input/, we should really try to satisfy that
>
> numpy.foo(x) == scipy.foo(x) (which is NOT the same as 'numpy.foo is scipy.foo')
>
> within reason.

As far as I can tell this is exactly what happens. Consider the issue under discussion...

    >>> import numpy as np
    >>> np.sqrt(-1)
    -1.#IND
    >>> np.sqrt(-1+0j)
    1j
    >>> a = complex(-1)
    >>> np.sqrt(a)
    1j
    >>> import scipy as sp
    >>> sp.sqrt(-1)
    -1.#IND
    >>> np.sqrt(-1+0j)
    1j
    >>> sp.sqrt(a)
    1j
    >>> np.__version__
    '1.0rc1'
    >>> sp.__version__
    '0.5.1'

I'm sure that this hasn't changed in the development versions.

Surely the point is that when your algorithm can potentially produce a complex result, the logical thing to do is to use a complex data type. In this case NumPy and SciPy behave in a way which is intuitive. If complex results are surprising and unexpected then the algorithm is probably in error or poorly understood ;-)

Cheers,
Scott |
From: Travis O. <oli...@ie...> - 2006-10-12 07:05:33
|
> > Personally I think that the default error mode should be tightened > up. > Then people would only see these sort of things if they really care > about them. Using Python 2.5 and the errstate class I posted earlier: > > # This is what I like for the default error state > numpy.seterr (invalid='raise', divide='raise', over='raise', > under='ignore') > > > I like these choices too. Overflow, division by zero, and sqrt(-x) are > usually errors, indicating bad data or programming bugs. Underflowed > floats, OTOH, are just really, really small numbers and can be treated > as zero. An exception might be if the result is used in division and > no error is raised, resulting in a loss of accuracy. > I'm fine with this. I've hesitated because error checking is by default slower. But, I can agree that it is "less surprising" to new-comers. People that don't mind no-checking can simply set their defaults back to 'ignore' -Travis |
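The save/restore pattern behind such an errstate context manager can be sketched in pure Python (a toy stand-in only: numpy's real machinery is numpy.seterr/numpy.geterr, and the default modes below are placeholders, not numpy's actual defaults):

```python
class errstate:
    """Toy errstate-style context manager: temporarily change error
    modes, then restore the previous ones on exit -- the shape of the
    class referred to in the message, not numpy's implementation."""

    # module-wide "current modes", mirroring the keywords in the message
    _state = {'invalid': 'ignore', 'divide': 'ignore',
              'over': 'ignore', 'under': 'ignore'}

    def __init__(self, **modes):
        self._modes = modes

    def __enter__(self):
        self._saved = dict(errstate._state)    # remember current modes
        errstate._state.update(self._modes)    # apply temporary modes
        return self

    def __exit__(self, *exc_info):
        errstate._state = self._saved          # restore unconditionally
        return False                           # never swallow exceptions

with errstate(invalid='raise', divide='raise', over='raise'):
    print(errstate._state['invalid'])   # raise
print(errstate._state['invalid'])       # ignore
```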
From: Nils W. <nw...@ia...> - 2006-10-12 06:50:42
|
Travis Oliphant wrote: > I made some fixes to the "asbuffer" code which let me feel better about > exposing it in NumPy (where it is now named int_asbuffer). > > This code takes a Python integer and a size and returns a buffer object > that points to that memory. A little test is performed by trying to > read (and possibly write if a writeable buffer is requested) to the > first and last elements of the buffer. Any segfault is trapped and used > to raise a Python error indicating you can't use that area of memory. > > It doesn't guarantee you won't shoot yourself, but it does make it more > difficult to segfault Python. Previously a simple int_asbuffer(3423423, > 5) would have segfaulted (unless by chance you the memory area 3423423 > happens to be owned by Python). > > I have not tested the code on other platforms to make sure it works as > expected, so please try and compiled it. > > -Travis > > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > Numpy version 1.0.dev3315 Scipy version 0.5.2.dev2254 Works fine here. All tests passed ! x86_64 x86_64 x86_64 GNU/Linux Nils |
From: Fernando P. <fpe...@gm...> - 2006-10-12 06:49:42
|
On 10/12/06, Travis Oliphant <oli...@ie...> wrote: > Why in the world does it scare you away. This makes no sense to me. > If you don't like the scipy version don't use it. NumPy and SciPy are > not the same thing. I'd like to pitch in (again) on this issue, but I'll try to make sure that it's clear that I'm NOT arguing about sqrt() in particular, one way or another. It's perfectly clear that numpy != scipy to all of us. And yet, I think it is equally clear that the two are /very/ tightly related. Scipy builds on top of numpy and it directly exposes a LOT of the numpy API as scipy functions: In [21]: import numpy as n, scipy as s In [22]: common_names = set(dir(n)) & set(dir(s)) In [23]: [getattr(n,x) is getattr(s,x) for x in common_names ].count(True) Out[23]: 450 In [24]: len(common_names) Out[24]: 462 That's 450 objects from numpy which are directly exposed in Scipy, while only 12 names are in both top-level namespaces and yet are different objects. Put another way, scipy is a direct wrap of 97% of the numpy top-level namespace. While /we/ know they are distinct entities, to the casual user a 97% match looks pretty close to being the same, especially when the non-identical things are all non-numerical: In [27]: [x for x in common_names if getattr(n,x) is not getattr(s,x)] Out[27]: ['pkgload', 'version', '__config__', '__file__', '__all__', '__doc__', 'show_config', '__version__', '__path__', '__name__', 'info', 'test'] In [32]: n.__version__,s.__version__ Out[32]: ('1.0.dev3306', '0.5.2.dev2252') Basically, at least for these versions, the top-level API of scipy is a strict superset of the numpy one for all practical purposes. I think it's fair to say that if we start sprinkling special cases where certain objects happen to have the same name but produce different results for the same inputs, confusion will arise. 
Please note that I see a valid reason for scipy.foo != numpy.foo when the scipy version uses code with extra features, is faster, has additional options, etc. But as I said in a previous message, I think that /for the same input/, we should really try to satisfy that numpy.foo(x) == scipy.foo(x) (which is NOT the same as 'numpy.foo is scipy.foo') within reason. Obviously the scipy version may succeed where the numpy one fails due to better algorithms, or be faster, etc. I'm talking about a general principle here. I doubt I'll be able to state my point with any more clarity, so I'll stop now. But I really believe that this particular aspect of consistency between numpy and scipy is a /very/ important one for its adoption in wider communities. Best, f |
From: Johannes L. <a.u...@gm...> - 2006-10-12 06:40:59
|
Hi, I absolutely do not know perl, so I do not know what the expression you posted does. However, the key is just to understand indexing in numpy: if you have a matrix mat and index arrays index1, index2 with, lets say, index1 = array([ 17, 19, 29]) index2 = array([ 12, 3, 9]) then the entries of the index arrays are used as row and column indices respectively, and the result will be an array shaped like the index arrays. So doing mat[index1, index2] will give you --> array([ mat[17, 12], mat[19, 3], mat[29, 9]]). Now if you want the diagonal of a 3x3-mat, you need index1=index2=array([ 0, 1, 2]). mat[index1, index2] --> array([ mat[0,0], mat[1,1], mat[2,2]]) That is what my code does. If you need other, arbitrary subsets of mat, you just have to fill the index arrays accordingly. Johannes |
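The index-array semantics described here can be modeled in plain Python, with no numpy required (fancy_index is an invented name for illustration):

```python
def fancy_index(mat, index1, index2):
    """Model of numpy's mat[index1, index2]: pair up the index arrays
    as (row, column) coordinates; the result is shaped like the index
    arrays themselves."""
    return [mat[i][j] for i, j in zip(index1, index2)]

# 3x3 matrix whose entry at (r, c) is 10*r + c, so values show their position
mat = [[r * 10 + c for c in range(3)] for r in range(3)]

# the diagonal: index1 == index2 == [0, 1, 2]
print(fancy_index(mat, [0, 1, 2], [0, 1, 2]))   # [0, 11, 22]

# an arbitrary subset: entries (0, 1) and (2, 0)
print(fancy_index(mat, [0, 2], [1, 0]))         # [1, 20]
```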
From: Travis O. <oli...@ie...> - 2006-10-12 06:39:02
|
David Goldsmith wrote: > Travis Oliphant wrote: > >> What could be simpler? ;-) >> > Having sqrt(-1) return 1j (without having to remember that in order to > get this, you have to write sqrt(-1+0j) instead). > > That's exactly what scipy.sqrt(-1) does. That was my point. -Travis |
From: David G. <Dav...@no...> - 2006-10-12 06:38:08
|
Stefan van der Walt wrote:
>> This is one reason that NumPy by itself is not a MATLAB
>> replacement.
> Intuitive usage is hopefully not a MATLAB-only feature.

Here, here! (Or is it Hear, hear! I don't think I've ever seen it written out before. :-) )

> I'll shut up now :)

But why? To my mind, you're making a lot more sense than Travis.

DG

> Cheers
> Stéfan |
From: Travis O. <oli...@ie...> - 2006-10-12 06:37:33
|
I made some fixes to the "asbuffer" code which let me feel better about exposing it in NumPy (where it is now named int_asbuffer). This code takes a Python integer and a size and returns a buffer object that points to that memory. A little test is performed by trying to read (and possibly write if a writeable buffer is requested) to the first and last elements of the buffer. Any segfault is trapped and used to raise a Python error indicating you can't use that area of memory. It doesn't guarantee you won't shoot yourself, but it does make it more difficult to segfault Python. Previously a simple int_asbuffer(3423423, 5) would have segfaulted (unless by chance you the memory area 3423423 happens to be owned by Python). I have not tested the code on other platforms to make sure it works as expected, so please try and compiled it. -Travis |
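A rough stdlib analogue of what int_asbuffer provides, using ctypes (note the crucial difference: ctypes performs no read/write probe of the first and last bytes, so a bad address still segfaults the interpreter, which is exactly the hole the probe described above closes):

```python
import ctypes

# Memory we legitimately own, so the address below is valid
buf = ctypes.create_string_buffer(b"hello")   # 6 bytes incl. trailing NUL
addr = ctypes.addressof(buf)                  # a plain Python integer

# integer address + size -> a readable view of that memory,
# analogous to int_asbuffer(addr, 5) but with no safety check
view = (ctypes.c_char * 5).from_address(addr)
print(view.raw)   # b'hello'
```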
From: David C. <da...@ar...> - 2006-10-12 06:35:34
|
David Goldsmith wrote: > Travis Oliphant wrote: > >> What could be simpler? ;-) >> > Having sqrt(-1) return 1j (without having to remember that in order to > get this, you have to write sqrt(-1+0j) instead). > > But this can sometimes lead to confusing errors hard to track when you don't want to treat complex numbers. That's one of the thing I hated in matlab, actually: if you don't want to handle complex numbers, you have to check regularly for it. So I don't think this is simpler. David |
From: David G. <Dav...@no...> - 2006-10-12 06:32:12
|
Travis Oliphant wrote:
> This is the one that concerns me. Slowing everybody down who knows they
> have positive values just for people that don't seems problematic.

Then have a "sqrtp" function for those users who are fortunate enough to know ahead of time that they'll only be taking square roots of nonnegatives.

DG |
From: David G. <Dav...@no...> - 2006-10-12 06:27:56
|
Travis Oliphant wrote: > What could be simpler? ;-) Having sqrt(-1) return 1j (without having to remember that in order to get this, you have to write sqrt(-1+0j) instead). DG |
From: Travis O. <oli...@ie...> - 2006-10-12 06:25:00
|
David Goldsmith wrote: > Travis Oliphant wrote: > >> pe...@ce... wrote: >> >> >> >>> Could sqrt(-1) made to return 1j again? >>> >>> >>> >> Not in NumPy. But, in scipy it could. >> >> >> > Ohmigod!!! You are definitely going to scare away many, many potential > users - if I wasn't obliged to use open source at work, you'd be scaring > me away. Why in the world does it scare you away. This makes no sense to me. If you don't like the scipy version don't use it. NumPy and SciPy are not the same thing. The problem we have is that the scipy version (0.3.2) already had this feature (and Numeric didn't). What is so new here that is so scary ? -Travis |
From: David G. <Dav...@no...> - 2006-10-12 06:16:11
|
Sven Schreiber wrote: > Travis Oliphant schrieb: > > >>> If not, shouldn't >>> >>> >>> numpy.sqrt(-1) raise a ValueError instead of returning silently nan? >>> >>> >>> >> This is user adjustable. You change the error mode to raise on >> 'invalid' instead of pass silently which is now the default. >> >> -Travis >> >> > > Could you please explain how this adjustment is done, or point to the > relevant documentation. > Thank you, > Sven > I'm glad you asked this, Sven, 'cause I was thinking that if making this "user adjustment" is this advanced (I too have no idea what you're talking about, Travis), then this would be another significant strike against numpy (but I was holding my tongue, since I'd just let fly in my previous email). DG |
From: David G. <Dav...@no...> - 2006-10-12 06:10:57
|
Travis Oliphant wrote: > pe...@ce... wrote: > > >> Could sqrt(-1) made to return 1j again? >> >> > Not in NumPy. But, in scipy it could. > > Ohmigod!!! You are definitely going to scare away many, many potential users - if I wasn't obliged to use open source at work, you'd be scaring me away. I was thinking about translating all my personal fractal-generating Matlab code into Python, but I certainly won't be doing that now! |
From: David G. <Dav...@no...> - 2006-10-12 06:03:45
|
Charles R Harris wrote:
> On 10/11/06, Greg Willden <gre...@gm...> wrote:
>
>> Hi All,
>>
>> I've read discussions in the archives about how round() "rounds to
>> even" and how that is supposedly better.
>>
>> But what I haven't been able to find is "What do I use if I want
>> the regular old round that you learn in school?"
>
> Perhaps you could explain *why* you want the schoolbook round? Given
> that floating point is inherently inaccurate, you would have to expect
> to produce a lot of numbers exactly of the form x.5 *without errors*,
> which means you probably don't need round to deal with it. Anyway,
> absent a flag somewhere, you can do something like (x +
> sign(x)*.5).astype(int).
>
> Chuck

Also, where did you go to school? In Fairfax County, VA in the 80's at least, they were teaching "round to even"; since that time, I've taught math in a variety of locations and settings and with a variety of texts, and where I have seen the issue addressed, it's always "round to even" (or, as I learned it, "the rule of fives").

DG |
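For reference, the difference between round-half-to-even and the schoolbook rule, plus a scalar version of the sign-aware (x + sign(x)*.5) trick Chuck mentions, can be sketched in plain Python (schoolbook_round is an invented name; note that Python 3's built-in round(), like numpy's, rounds halves to even):

```python
import math

def schoolbook_round(x):
    """Round half away from zero -- the "regular old round" asked about
    above.  Scalar version of the (x + sign(x)*.5) trick: add half in
    the direction of the sign, then truncate toward zero."""
    return int(math.floor(abs(x) + 0.5)) * (1 if x >= 0 else -1)

print(round(2.5))              # 2   (round-half-to-even, Python 3)
print(schoolbook_round(2.5))   # 3   (schoolbook rule)
print(schoolbook_round(-2.5))  # -3
```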