From: Travis O. <oli...@ie...> - 2006-05-28 20:37:20
Simon Burton wrote:
> On Sun, 28 May 2006 14:33:37 -0500
> Robert Kern <rob...@gm...> wrote:
>
>>>> if array.dtype == numpy.Int32: ...
>>
>> numpy.int32
>
> No that doesn't work.

Yeah, the "canonical" types (e.g. int32, float64, etc.) are actually scalar objects. The type objects themselves are dtype(int32). I don't think they are currently listed anywhere in Python (except there is one for every canonical scalar object). The difference between the scalar object and the data-type object did not become clear until December 2005. Previously the scalar object was used as the data-type (obviously there is still a relationship between them).

-Travis

>>>> numpy.int32
> <type 'int32scalar'>
>>>> numpy.int32 == numpy.dtype('l')
> False
>
> Simon.
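To illustrate the scalar-type/dtype distinction Travis describes, here is a minimal sketch (the array `a` is illustrative; the coercing comparison shown at the end is the behavior of NumPy releases well after this thread):

import numpy as np

a = np.zeros(3, dtype=np.int32)

# numpy.int32 is a scalar *type*; np.dtype(np.int32) is the *data-type object*.
print(type(np.int32))                 # a type object (the int32 scalar type)
print(np.dtype(np.int32))             # dtype('int32')

# Comparing against the dtype object is the reliable spelling:
print(a.dtype == np.dtype(np.int32))  # True
# On modern NumPy, dtype.__eq__ coerces its argument, so this also works:
print(a.dtype == np.int32)            # True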
From: Robert K. <rob...@gm...> - 2006-05-28 20:03:37
Simon Burton wrote:
> On Sun, 28 May 2006 14:33:37 -0500
> Robert Kern <rob...@gm...> wrote:
>
>>>> if array.dtype == numpy.Int32: ...
>>
>> numpy.int32
>
> No that doesn't work.
>
>>>> numpy.int32
> <type 'int32scalar'>
>>>> numpy.int32 == numpy.dtype('l')
> False

>>> from numpy import *
>>> a = linspace(0, 10, 11)
>>> a.dtype == dtype(float64)
True

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Simon B. <si...@ar...> - 2006-05-28 19:52:20
On Sun, 28 May 2006 14:33:37 -0500
Robert Kern <rob...@gm...> wrote:

>>> if array.dtype == numpy.Int32: ...
>
> numpy.int32

No that doesn't work.

>>> numpy.int32
<type 'int32scalar'>
>>> numpy.int32 == numpy.dtype('l')
False

Simon.

--
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601 Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
From: Robert K. <rob...@gm...> - 2006-05-28 19:34:04
Simon Burton wrote:
> Is there a reason why dtype's are unhashable ? (ouch)

No one thought about it, probably. If you would like to submit a patch, I think we would check it in.

> On another point, is there a canonical list of dtype's ?
> I'd like to test the dtype of an array, and I always
> end up with something like this:
>
>   if array.dtype == numpy.dtype('l'): ...
>
> When I would prefer to write something like:
>
>   if array.dtype == numpy.Int32: ...

numpy.int32

There is a list on page 20 of _The Guide to NumPy_. It is included in the sample chapters:

  http://www.tramy.us/scipybooksample.pdf

> (i can never remember these char codes !)
>
> Alternatively, should dtype's __cmp__ promote the other arg
> to a dtype before the compare ?
> I guess not, since that would break a lot of code: eg. dtype(None)
> is legal.

Correct, it should not.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
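For readers who, like Simon, can never remember the char codes, a short sketch using numpy.typecodes, a long-standing mapping from type categories to code strings (its exact contents vary by NumPy version):

import numpy as np

# Each single-character code can be turned into the corresponding dtype object.
print(np.typecodes['AllInteger'])   # e.g. 'bBhHiIlLqQ'
print(np.typecodes['Float'])        # e.g. 'efdg' on recent versions

for code in np.typecodes['AllInteger']:
    print(code, '->', np.dtype(code))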
From: Erin S. <eri...@gm...> - 2006-05-28 15:33:49
Hi everyone -

The "fromfile" method isn't working for Int8 in ascii mode:

# cat test.dat
3
4
5

>>> import numpy as np
>>> np.__version__
'0.9.9.2547'
>>> np.fromfile('test.dat', sep='\n', dtype=np.Int16)
array([3, 4, 5], dtype=int16)
>>> np.fromfile('test.dat', sep='\n', dtype=np.Int8)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: don't know how to read character files with that array type

Was this intended?

Erin
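Not from the thread, but a hedged workaround sketch for the limitation Erin reports: parse the ASCII file at a width the text reader accepts, then narrow to int8 afterwards (the range check is our own addition):

import numpy as np

data = np.fromfile('test.dat', sep='\n', dtype=np.int16)
# Narrowing is only safe if the values actually fit in int8.
assert data.min() >= -128 and data.max() <= 127
data = data.astype(np.int8)
print(data)   # array([3, 4, 5], dtype=int8) for the test.dat above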
From: Simon B. <si...@ar...> - 2006-05-28 07:50:06
Is there a reason why dtype's are unhashable ? (ouch)

On another point, is there a canonical list of dtype's ? I'd like to test the dtype of an array, and I always end up with something like this:

  if array.dtype == numpy.dtype('l'): ...

When I would prefer to write something like:

  if array.dtype == numpy.Int32: ...

(i can never remember these char codes !)

Alternatively, should dtype's __cmp__ promote the other arg to a dtype before the compare ? I guess not, since that would break a lot of code: eg. dtype(None) is legal.

Simon.

--
Simon Burton, B.Sc.
Licensed PO Box 8066
ANU Canberra 2601 Australia
Ph. 61 02 6249 6940
http://arrowtheory.com
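As an aside, dtype objects did become hashable in later NumPy releases, which makes the pattern Simon wants even more convenient; a minimal sketch (assuming such a release):

import numpy as np

# Hashable dtypes can key a dispatch table.
handlers = {
    np.dtype(np.int32): 'int32 handler',
    np.dtype(np.float64): 'float64 handler',
}
a = np.zeros(4, dtype=np.float64)
print(handlers[a.dtype])   # 'float64 handler'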
From: Robert K. <rob...@gm...> - 2006-05-25 23:39:39
Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> You aren't using bc correctly.
>
> Ooops. I am not a user and was just following your post
> without reading the manual. I hope the below fixes pi;
> and (I think) it makes the same point I tried to make before:
> a continuity argument renders the general claim you
> made suspect. (Of course it's looking like a pretty
> narrow range of possible benefit as well.)

Yes, you probably can construct cases where the % (2*pi) step will ultimately yield an answer closer to what you want. You cannot expect that step to give *reliable* improvements.

>> If you know that you are epsilon from n*2*π (the real
>> number, not the floating point one), you should just be
>> calculating sin(epsilon). Usually, you do not know this,
>> and % (2*pi) will not tell you this. (100*pi + epsilon) is
>> not the same thing as (100*π + epsilon).
>
> Yes, I understand all this. Of course,
> it is not quite an answer to the question:
> can '%(2*pi)' offer an advantage in the
> right circumstances?

Not in any that aren't contrived. Not in any situation where you don't already have enough knowledge to do a better calculation (e.g. calculating sin(epsilon) rather than sin(2*n*pi + epsilon)).

> And the original question
> was again different: can we learn
> from such calculations that **some** method might
> offer an improvement?

No, those calculations make no such revelation. Good implementations of sin() already reduce the argument into a small range around 0 just to make the calculation feasible. They do so much more accurately than doing % (2*pi), but they can only work with the information given to the function. They cannot know that, for example, you generated the inputs by multiplying the double-precision approximation of π by an integer. You can look at the implementation in fdlibm:

  http://www.netlib.org/fdlibm/s_sin.c

> bc 1.05
> Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
> This is free software with ABSOLUTELY NO WARRANTY.
> For details type `warranty'.
> scale = 50
> pi = 4*a(1)
> epsilon = 0.00001
> s(100*pi + epsilon)
> .00000999999999983333333333416666666666468253967996
>
> or 9.999999999833333e-006, compared to:
>
>>>> epsilon = 0.00001
>>>> sin(100*pi+epsilon)
> 9.999999976550551e-006
>>>> sin((100*pi+epsilon)%(2*pi))
> 9.9999999887966145e-006

As Sasha noted, that is an artifact of bc's use of decimal rather than binary, and of Python's conversion of the literal "0.00001" into binary.

[scipy]$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale = 50
pi = 4*a(1)
epsilon = 1./2.^16
s(100*pi + epsilon)
.00001525878906190788105354014301687863346141309981
s(epsilon)
.00001525878906190788105354014301687863346141310239

[scipy]$ python
Python 2.4.1 (#2, Mar 31 2005, 00:05:10)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
>>> epsilon = 1./2.**16
>>> sin(100*pi + epsilon)
1.5258789063872268e-05
>>> sin((100*pi + epsilon) % (2*pi))
1.5258789076118735e-05
>>> sin(epsilon)
1.5258789061907882e-05

I do recommend reading up more on floating point arithmetic. A good paper is Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic":

  http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
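Robert's point can be reproduced with nothing but the standard library; a minimal sketch (the variable names are ours, not from the thread):

import math

# 100*math.pi is not 100 times the real pi, so the offset the sine routine
# actually sees is not the epsilon we intended.
eps = 1.0 / 2**16
x = 100 * math.pi + eps

print(math.sin(x))                  # libm's own argument reduction
print(math.sin(x % (2 * math.pi)))  # naive reduction: a different error, not a smaller one
print(math.sin(eps))                # what we get when we already know the offset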
From: Robert K. <rob...@gm...> - 2006-05-25 23:14:52
David M. Cooke wrote:
> Yi Qiang <yi...@yi...> writes:
>> FORTRAN = gfortran
>> OPTS = -fPIC -funroll-all-loops -fno-f2c -O2
>> DRVOPTS = $(OPTS)
>> NOOPT =
>
> Maybe NOOPT needs -fPIC? That's the only one I see where it could be
> missing.

That sounds right. dlamch is not supposed to be compiled with optimization, IIRC.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Alan G I. <ai...@am...> - 2006-05-25 22:46:22
On Thu, 25 May 2006, Robert Kern apparently wrote:
> You aren't using bc correctly.

Ooops. I am not a user and was just following your post without reading the manual. I hope the below fixes pi; and (I think) it makes the same point I tried to make before: a continuity argument renders the general claim you made suspect. (Of course it's looking like a pretty narrow range of possible benefit as well.)

> If you know that you are epsilon from n*2*π (the real
> number, not the floating point one), you should just be
> calculating sin(epsilon). Usually, you do not know this,
> and % (2*pi) will not tell you this. (100*pi + epsilon) is
> not the same thing as (100*π + epsilon).

Yes, I understand all this. Of course, it is not quite an answer to the question: can '%(2*pi)' offer an advantage in the right circumstances? And the original question was again different: can we learn from such calculations that **some** method might offer an improvement?

Anyway, you have already been more than generous with your time. Thanks!

Alan

bc 1.05
Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale = 50
pi = 4*a(1)
epsilon = 0.00001
s(100*pi + epsilon)
.00000999999999983333333333416666666666468253967996

or 9.999999999833333e-006, compared to:

>>> epsilon = 0.00001
>>> sin(100*pi+epsilon)
9.999999976550551e-006
>>> sin((100*pi+epsilon)%(2*pi))
9.9999999887966145e-006
From: <co...@ph...> - 2006-05-25 22:45:03
Yi Qiang <yi...@yi...> writes:
> ####################################################################
> #  LAPACK make include file.                                       #
> #  LAPACK, Version 3.0                                             #
> #  June 30, 1999                                                   #
> ####################################################################
> #
> SHELL = /bin/sh
> #
> #  The machine (platform) identifier to append to the library names
> #
> PLAT = _LINUX
> #
> #  Modify the FORTRAN and OPTS definitions to refer to the
> #  compiler and desired compiler options for your machine. NOOPT
> #  refers to the compiler options desired when NO OPTIMIZATION is
> #  selected. Define LOADER and LOADOPTS to refer to the loader and
> #  desired load options for your machine.
> #
> FORTRAN = gfortran
> OPTS = -fPIC -funroll-all-loops -fno-f2c -O2
> DRVOPTS = $(OPTS)
> NOOPT =

Maybe NOOPT needs -fPIC? That's the only one I see where it could be missing.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Yi Q. <yi...@yi...> - 2006-05-25 22:40:50
Yi Qiang wrote:
> Interestingly enough, if I just use the bare version of ATLAS, numpy
> compiles fine. If I use the bare version of LAPACK, numpy compiles fine.

Actually, I take that back. I get the same error when trying to link against the standalone version of LAPACK, so that suggests something went wrong there. And here are the files I forgot to attach!

> Any help would be greatly appreciated.
>
> -Yi
From: Yi Q. <yi...@yi...> - 2006-05-25 22:33:10
Hi list,

I searched the archives and found various threads regarding this issue, and I have not found a solution there.

Software versions:
  gfortran 4.0.1
  atlas 3.6.0
  lapack 3.0

Basically numpy spits out this message when I try to compile it:

gcc: numpy/linalg/lapack_litemodule.c
/usr/bin/gfortran -shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so
/usr/bin/ld: /usr/local/lib/atlas/liblapack.a(dlamch.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/atlas/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /usr/local/lib/atlas/liblapack.a(dlamch.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/atlas/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/gfortran -shared build/temp.linux-x86_64-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas -llapack -lptf77blas -lptcblas -latlas -lgfortran -o build/lib.linux-x86_64-2.4/numpy/linalg/lapack_lite.so" failed with exit status 1

However, I have compiled all the software explicitly with the -fPIC flag on. Attached is my make.inc for LAPACK and my Makefile for ATLAS. I followed these instructions to create a hybrid LAPACK/ATLAS archive:

  http://math-atlas.sourceforge.net/errata.html#completelp

Interestingly enough, if I just use the bare version of ATLAS, numpy compiles fine. If I use the bare version of LAPACK, numpy compiles fine.

Any help would be greatly appreciated.

-Yi
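Not part of the original report, but once a rebuild succeeds, a quick way to confirm which BLAS/LAPACK numpy actually linked is numpy's own configuration dump (treating its availability on any particular version as an assumption):

import numpy as np

# Prints the library names and search paths the installed numpy was built with,
# which helps confirm whether the hybrid ATLAS/LAPACK archive was picked up.
np.show_config()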
From: Sasha <nd...@ma...> - 2006-05-25 21:28:43
This example looks like an artifact of decimal to binary conversion. Consider this:

>>> epsilon = 1./2**16
>>> epsilon
1.52587890625e-05
>>> sin(100*pi+epsilon)
1.5258789063872671e-05
>>> sin((100*pi+epsilon)%(2*pi))
1.5258789076118735e-05

and in bc:

scale=50
epsilon = 1./2.^16
s(100*pi + epsilon)
.00001525878906190788105354014301687863346141310239

On 5/25/06, Alan G Isaac <ai...@am...> wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> What continuity? This is floating-point arithmetic.
>
> Sure, but a continuity argument suggests (in the absence of
> specific floating point reasons to doubt it) that a better
> approximation at one point will mean better approximations
> nearby. E.g.,
>
>>>> epsilon = 0.00001
>>>> sin(100*pi+epsilon)
> 9.999999976550551e-006
>>>> sin((100*pi+epsilon)%(2*pi))
> 9.9999999887966145e-006
>
> Compare to the bc result of 9.9999999998333333e-006:
>
> bc 1.05
> Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
> This is free software with ABSOLUTELY NO WARRANTY.
> For details type `warranty'.
> scale = 50
> epsilon = 0.00001
> s(100*pi + epsilon)
> .00000999999999983333333333416666666666468253968254
>
> Cheers,
> Alan
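Sasha's observation is easy to check with float.hex: the decimal literal 0.00001 has no exact binary representation, while 1/2**16 is a power of two and is exact. A minimal sketch:

# 0.00001 must be rounded to the nearest binary double; 1/2**16 need not be.
print(float.hex(0.00001))      # a long, inexact mantissa
print(float.hex(1.0 / 2**16))  # 0x1.0000000000000p-16, exactly representable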
From: Robert K. <rob...@gm...> - 2006-05-25 20:42:26
Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> What continuity? This is floating-point arithmetic.
>
> Sure, but a continuity argument suggests (in the absence of
> specific floating point reasons to doubt it) that a better
> approximation at one point will mean better approximations
> nearby. E.g.,
>
>>>> epsilon = 0.00001
>>>> sin(100*pi+epsilon)
> 9.999999976550551e-006
>>>> sin((100*pi+epsilon)%(2*pi))
> 9.9999999887966145e-006
>
> Compare to the bc result of 9.9999999998333333e-006:
>
> bc 1.05
> Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
> This is free software with ABSOLUTELY NO WARRANTY.
> For details type `warranty'.
> scale = 50
> epsilon = 0.00001
> s(100*pi + epsilon)
> .00000999999999983333333333416666666666468253968254

You aren't using bc correctly.

bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
100*pi
0

If you know that you are epsilon from n*2*π (the real number, not the floating point one), you should just be calculating sin(epsilon). Usually, you do not know this, and % (2*pi) will not tell you this. (100*pi + epsilon) is not the same thing as (100*π + epsilon).

FWIW, for the calculation that you did in bc, numpy.sin() gives the same results (up to the last digit):

>>> from numpy import *
>>> sin(0.00001)
9.9999999998333335e-06

You wanted to know if there is something exploitable here to improve the accuracy of numpy.sin(). In general, there is not. However, if you know the difference between your value and an integer multiple of the real number 2*π, then you can do your floating-point calculation on that difference. However, you will not in general get an improvement by using % (2*pi) to calculate that difference.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Alan G I. <ai...@am...> - 2006-05-25 20:28:23
On Thu, 25 May 2006, Robert Kern apparently wrote:
> Let me clarify. Since you created your values by multiplying
> the floating-point approximation pi by an integer value, when
> you perform the operation % (2*pi) on those values, the result
> happens to be exact or nearly so, but only because you used the
> same approximation of pi. Doing that operation on an arbitrary
> value (like 1000000) only introduces more error to the
> calculation. Floating-point sin(1000000.0) should return a value
> within eps (~2**-52) of the true, real-valued function
> sin(1000000). Calculating (1000000 % (2*pi)) introduces error in
> two places: the approximation pi and the operation %.
> A floating-point implementation of sin(.) will return a value
> within eps of the real sin(.) of the value that is the result of
> the floating-point operation (1000000 % (2*pi)), which already
> has some error accumulated.

I do not think that we have any disagreement here, except possibly over eps, which is not constant for different argument sizes. So I wondered if there was a tradeoff: smaller eps (from a smaller argument) for the cost of computational error in an additional operation.

Anyway, thanks for the feedback on this.

Cheers,
Alan
From: Alan G I. <ai...@am...> - 2006-05-25 20:17:42
On Thu, 25 May 2006, Robert Kern apparently wrote:
> What continuity? This is floating-point arithmetic.

Sure, but a continuity argument suggests (in the absence of specific floating point reasons to doubt it) that a better approximation at one point will mean better approximations nearby. E.g.,

>>> epsilon = 0.00001
>>> sin(100*pi+epsilon)
9.999999976550551e-006
>>> sin((100*pi+epsilon)%(2*pi))
9.9999999887966145e-006

Compare to the bc result of 9.9999999998333333e-006:

bc 1.05
Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale = 50
epsilon = 0.00001
s(100*pi + epsilon)
.00000999999999983333333333416666666666468253968254

Cheers,
Alan
From: Alexander B. <ale...@gm...> - 2006-05-25 20:13:24
I agree with Robert. In fact, on FPUs such as the x87, where floating point registers have extended precision, sin(x % (2*pi)) will give you a less precise answer than sin(x). The improved precision that you see is illusory because, given that the 64-bit pi is not the precise mathematical pi, sin(2*pi) is not 0.

On 5/25/06, Robert Kern <rob...@gm...> wrote:
> Alan G Isaac wrote:
>> On Thu, 25 May 2006, Robert Kern apparently wrote:
>>> That your demonstration results in the desired exact 0.0
>>> for multiples of 2*pi is an accident. The results for
>>> values other than integer multiples of pi will be as wrong
>>> or more wrong.
>>
>> It seems that a continuity argument should undermine that as
>> a general claim. Right?
>
> Let me clarify. Since you created your values by multiplying the
> floating-point approximation pi by an integer value, when you perform
> the operation % (2*pi) on those values, the result happens to be exact
> or nearly so, but only because you used the same approximation of pi.
> Doing that operation on an arbitrary value (like 1000000) only
> introduces more error to the calculation. Floating-point sin(1000000.0)
> should return a value within eps (~2**-52) of the true, real-valued
> function sin(1000000). Calculating (1000000 % (2*pi)) introduces error
> in two places: the approximation pi and the operation %.
> A floating-point implementation of sin(.) will return a value within
> eps of the real sin(.) of the value that is the result of the
> floating-point operation (1000000 % (2*pi)), which already has some
> error accumulated.
>
> --
> Robert Kern
From: Robert K. <rob...@gm...> - 2006-05-25 19:38:34
Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> That your demonstration results in the desired exact 0.0
>> for multiples of 2*pi is an accident. The results for
>> values other than integer multiples of pi will be as wrong
>> or more wrong.
>
> It seems that a continuity argument should undermine that as
> a general claim. Right?

Let me clarify. Since you created your values by multiplying the floating-point approximation pi by an integer value, when you perform the operation % (2*pi) on those values, the result happens to be exact or nearly so, but only because you used the same approximation of pi. Doing that operation on an arbitrary value (like 1000000) only introduces more error to the calculation. Floating-point sin(1000000.0) should return a value within eps (~2**-52) of the true, real-valued function sin(1000000). Calculating (1000000 % (2*pi)) introduces error in two places: the approximation pi and the operation %. A floating-point implementation of sin(.) will return a value within eps of the real sin(.) of the value that is the result of the floating-point operation (1000000 % (2*pi)), which already has some error accumulated.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Robert K. <rob...@gm...> - 2006-05-25 19:28:26
Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> That your demonstration results in the desired exact 0.0
>> for multiples of 2*pi is an accident. The results for
>> values other than integer multiples of pi will be as wrong
>> or more wrong.
>
> It seems that a continuity argument should undermine that as
> a general claim. Right?

What continuity? This is floating-point arithmetic.

[~]$ bc -l
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale = 50
s(1000000)
-.34999350217129295211765248678077146906140660532871

[~]$ python
Python 2.4.1 (#2, Mar 31 2005, 00:05:10)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
>>> sin(1000000.0)
-0.34999350217129299
>>> sin(1000000.0 % (2*pi))
-0.34999350213477698

> But like I said: I was just wondering if there was anything
> exploitable here.

Like I said: not really.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
From: Alan G I. <ai...@am...> - 2006-05-25 19:15:49
On Thu, 25 May 2006, Robert Kern apparently wrote:
> But numpy doesn't deal with abstract senses. It deals with
> concrete floating point arithmetic.

Of course.

> That your demonstration results in the desired exact 0.0
> for multiples of 2*pi is an accident. The results for
> values other than integer multiples of pi will be as wrong
> or more wrong.

It seems that a continuity argument should undermine that as a general claim. Right?

But like I said: I was just wondering if there was anything exploitable here.

Thanks,
Alan
From: Alan G I. <ai...@am...> - 2006-05-25 19:10:55
On Thu, 25 May 2006, Alexander Belopolsky apparently wrote:
> This is not really a numpy issue, but a general floating point problem.
> Consider this:
>>>> x=linspace(0,10*pi,11)
>>>> all(array(map(math.sin, x))==sin(x))
> True

I think this misses the point. I was not suggesting numpy results differ from the C math library results.

>>> x1=sin(linspace(0,10*pi,21))
>>> x2=sin(linspace(0,10*pi,21)%(2*pi))
>>> all(x1==x2)
False
>>> x1
array([  0.00000000e+00,   1.00000000e+00,   1.22460635e-16,
        -1.00000000e+00,  -2.44921271e-16,   1.00000000e+00,
         3.67381906e-16,  -1.00000000e+00,  -4.89842542e-16,
         1.00000000e+00,   6.12303177e-16,  -1.00000000e+00,
        -7.34763812e-16,   1.00000000e+00,   8.57224448e-16,
        -1.00000000e+00,  -9.79685083e-16,   1.00000000e+00,
         1.10214572e-15,  -1.00000000e+00,  -1.22460635e-15])
>>> x2
array([  0.00000000e+00,   1.00000000e+00,   1.22460635e-16,
        -1.00000000e+00,   0.00000000e+00,   1.00000000e+00,
         1.22460635e-16,  -1.00000000e+00,   0.00000000e+00,
         1.00000000e+00,   1.22460635e-16,  -1.00000000e+00,
         0.00000000e+00,   1.00000000e+00,   1.22460635e-16,
        -1.00000000e+00,   0.00000000e+00,   1.00000000e+00,
         1.22460635e-16,  -1.00000000e+00,   0.00000000e+00])

I'd rather have x2: I'm just asking if there is anything exploitable here. Robert suggests not.

Cheers,
Alan Isaac
From: Robert K. <rob...@gm...> - 2006-05-25 19:09:12
Alan G Isaac wrote:
> On Thu, 25 May 2006, Robert Kern apparently wrote:
>> The method you showed of using % (2*pi) is only accurate
>> when the values are created by multiplying the same pi by
>> another value. Otherwise, it just introduces another
>> source of error, I think.
>
> Just to be clear, I meant not (!) to presumptuously propose
> a method for improving things, but just to illustrate the
> issue: both the loss of accuracy, and the obvious conceptual
> point that there is (in an abstract sense, at least) no need
> for sin(x) and sin(x + 2*pi) to differ.

But numpy doesn't deal with abstract senses. It deals with concrete floating point arithmetic. The best value you can *use* for pi in that expression is not the real irrational π. And the best floating-point algorithm you can use for sin() won't (and shouldn't!) assume that sin(x) will equal sin(x + 2*pi). That your demonstration results in the desired exact 0.0 for multiples of 2*pi is an accident. The results for values other than integer multiples of pi will be as wrong or more wrong. It does not demonstrate that floating-point sin(x) and sin(x + 2*pi) need not differ.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco
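A one-line check of Robert's point, using only the standard library (the test value 0.5 is an arbitrary choice of ours):

import math

# 2*math.pi only approximates the real period, so shifting by it lands on a
# slightly different argument and, in general, a different result.
x = 0.5
print(math.sin(x) == math.sin(x + 2 * math.pi))  # almost certainly False
print(math.sin(x + 2 * math.pi) - math.sin(x))   # small but nonzero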
From: Alexander B. <ale...@gm...> - 2006-05-25 19:01:29
This is not really a numpy issue, but a general floating point problem. Consider this:

>>> x=linspace(0,10*pi,11)
>>> all(array(map(math.sin, x))==sin(x))
True

If anything can be improved, that would be the C math library.

On 5/25/06, Rob Hooft <ro...@ho...> wrote:
> Robert Kern wrote:
> | Alan G Isaac wrote:
> |> I am a user, not a numerics type,
> |> so this is undoubtedly a naive question.
> |>
> |> Might the sin function be written to give greater
> |> accuracy for large real numbers? It seems that significant
> |> digits are in some sense being discarded needlessly.
> |
> | Not really. The floating point representation of pi is not exact. The
> | problem only gets worse when you multiply it with something. The method
> | you showed of using % (2*pi) is only accurate when the values are
> | created by multiplying the same pi by another value. Otherwise, it just
> | introduces another source of error, I think.
> |
> | This is one of the few places where a version of trig functions that
> | directly operate on degrees are preferred. 360.0*n is exactly
> | representable by floating point arithmetic until n~=12509998964918
> | (give or take a power of two). Doing % 360 can be done exactly.
>
> This reminds me of a story Richard Feynman tells in his autobiography.
> He used to say: "if you can pose a mathematical question in 10 seconds,
> I can solve it with 10% accuracy in one minute just calculating in my
> head". This worked for a long time, until someone told him "please
> calculate the sine of a million".
>
> Actual mantissa bits are used up by the multiple of two-pi, and those are
> lost at the back of the calculated value. Calculating the sine of a
> million with the same precision as the sine of zero requires 20 more
> bits of accuracy.
>
> Rob
> --
> Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/