From: dashesy <dashesy@gm...> - 2011-04-09 20:36:27

> Sorry for jumping into this discussion, but I don't seem to understand
> what the advantage is of a non-hardware-supported real number
> representation. [...]
(Ruben's message and K. Frank's reply are quoted in full further down in
this thread.)

Can this new feature perhaps be used for fixed-point arithmetic? For some
hardware it might be advantageous, because floating point is not supported
in hardware or is costly.

This new addition might somehow make the libraries more standard (?), but
the real advantage would be to have the math library work with it (does it
now?).

dashesy

_______________________________________________
Mingw-w64-public mailing list
Mingw-w64-public@...
https://lists.sourceforge.net/lists/listinfo/mingw-w64-public
From: James K Beard <beardjamesk@ve...> - 2011-04-09 17:55:07

I think the long-term solution is to implement the decimal arithmetic
keywords with an open mind. Special requirements, like extremely long
decimal words (DECIMAL128 == 128 digits?????), may require
multiple-precision arithmetic, which may be problematic because most
compilers support up to quad-precision floating point: 128 bits with a
15-bit exponent and a 113-bit mantissa, or about 34 decimal digits. If
financial calculations were all that was required, that would be enough for
practical use, because overflow would be at 10^33
dollars/yen/pesos/yuan/whatever. Nothing in real-world finance requires
more dynamic range.

But nothing in nature requires more than about 10^125, which is the ratio
of the densities of intergalactic space and the interior of a neutron star,
or the time in years for the universe to reach total heat death and an
overall homogeneous thermal composition. That's why IEEE floating point
overflows at 10^38 or, for more than 32 bits, 10^308. I have a
multiple-precision package that I use for personal work that, for software
convenience and best speed, uses a signed 16-bit integer for the exponent
and overflows at 10^9863. I've been thinking about releasing it under the
GPL, but there is a lot of code cleanup needed, and some core modules are
from Numerical Recipes for Fortran 90 and would require another license
that I haven't pursued.

James K Beard

-----Original Message-----
From: JonY [mailto:jon_y@...]
Sent: Saturday, April 09, 2011 1:08 PM
To: mingw-w64-public@...
Subject: Re: [Mingw-w64-public] mingw-w64 Decimal Floating Point math

[JonY's message, which appears in full in his own post further down in
this thread, is trimmed here.]
From: JonY <jon_y@us...> - 2011-04-09 17:08:03

On 4/9/2011 23:03, K. Frank wrote:
> What, then, would be the advantage of using decimal floating-point?
> [K. Frank's message is quoted in full further down in this thread.]

It's part of the upcoming ISO/IEC TR 24732:2009. What you use it for, or
whether you will use it at all, is tangential to the issue.

To give a proper explanation: binary floats don't give the proper machine
epsilon for the equivalent decimal float sizes. Sure, you can cover
DECIMAL64 and lower with long doubles, but what happens for DECIMAL128?

I am concerned about the correctness of its implementation, and its
performance implications (runtime and precision), and/or trade-offs. Right
now I have opted to just map things to use the long double; it's wrong,
but it's definitely something. I haven't really gotten into it since I am
quite busy at the moment.
From: James K Beard <beardjamesk@ve...> - 2011-04-09 16:52:29

Keith:

No, COBOL is just another computer language. Early on, most computer users
were business users, and decimal I/O is standard for all internationally
accepted currencies. Some early computers, like the VAX (remember the
11/780? The CDC 3100 series?), actually had decimal arithmetic hardware,
or appeared to have it if you looked at the instruction set, which was
there to support COBOL compilers. Most of them may have been fixed-point
machines that multiplied and divided by 100 on I/O or, if they had decimal
registers or storage, when the hardware read or wrote those registers or
storage. In the gizzard, they were all just computers, and some of them
were limited to fixed point for decimal arithmetic; I suppose there are
examples that actually implemented decimal arithmetic like an old
mechanical adding machine or business calculator.

This is so far back and removed from my personal experience that I don't
know what they did to implement interest-rate computations such as
P*(1+i/12)^(12*N), the future value of a principal amount P at a time N
years in the future with an annual interest rate i compounded monthly.
Clearly the simplest thing to do is to put everything in IEEE floating
point, compute the result, and convert back to decimal, and that's what
everyone does, whether the user knows it or not. I don't know COBOL, but I
suspect that all this is concealed in the code that supports the COBOL
keywords or intrinsic functions, which is what I was suggesting to the
people who asked.

COBOL was one of the first short-learning-curve HOLs for nontechnical
people. It was designed for financial applications. It's an acronym for
COmmon Business Oriented Language. It's still a living language; COBOL
2002 incorporates some applicable modern computer science features like
object-oriented programming. The international standard for COBOL is
ISO/IEC 1989 (the number is a serial identification number, not a year or
date). Another full revision, including a set of object-oriented
collection class libraries, is imminent. The Wikipedia page on COBOL makes
interesting reading: http://en.wikipedia.org/wiki/COBOL - note that
representation of numbers may be IEEE floating point or "packed decimal,"
and see the External Links for further reading.

I have run into people on bulletin boards who believe that COBOL is the
principal language out there, and that C and Fortran and such are dead and
most new code is in COBOL. I've seen people talk about C that way since
1972, and I've talked to Brian Kernighan as late as the 1990s; he believes
that C/C++ is the only language necessary and all other languages should
be put behind us. I know for a fact that much of academia has been
teaching C with the "Highlander" there-can-be-only-one background for two
generations now, but there are too many people who know more than one
language and quietly do what they think is best at any given time. The
number of C programmers finally began to decline in the 1990s, partly
because its successor, C++, had a poor learning curve, no common extant
self-teaching aid, and proved difficult and expensive to maintain
(according to a late-1990s DoD study, C++ was the most expensive language
to maintain; Ada was the least expensive), all issues that do not apply to
ANSI C, K&R C, or C99 and its successors.

I've encountered hard statements like "Fortran doesn't use the stack" and
"You can't make system calls from Fortran" since the 1970s, and far worse
early in the game, such as "That's FORTRAN for you. You should just junk
the program and start over in C" when I reported a bug in console I/O.
When I talk about Fortran 90 and later, I've had senior people that I
respect in academia seek assurance from me that there was no revision
after Fortran 90 and that Fortran is now dead. And, at the IEEE FFT 2010
conference at U. Maryland College Park, I had one senior fellow go
breathless when I mentioned that Fortran 2003 incorporated multiprocessor
and distributed computing in the standard as part of the back end in the
language description, and that there was competition in the Fortran
community between OpenMP, coarrays, and vendor-designed multiprocessor
support with few or no added keywords or intrinsics for the user to deal
with; he then, noting my accent, asked where I was born, then attacked me
as a racist because I was born in Texas (I gave him a polite history
lesson and changed the subject).

Myself, I have had to be able to read and modify programs in just about
any popular language out there because of my work over the years, and it
has helped me keep leave-me-alone-I'm-the-wizard-here deadwood from
obstructing or delaying several projects. These are project-management
skills, not programming skills. I'm proficient at C, near expert at
Fortran 95/03, good at Pascal, QBasic, and several assembly languages
(Z80, Intel 32-bit, 68000), passable at JOVIAL, Ada, and C++, and an
author of RATFOR distributions for the TRS-80 and the IBM PC for DOS and
OS/2. If necessary, I will learn to read a new language passably in an
hour or two and be able to modify it, with a decent reference at my elbow,
in a day; I've done that from time to time with APL, LISP, and other
self-obfuscating languages.

Please don't take this post as a language argument; I've seen them all.
If one language would work well for all of us, we would all have been
speaking Esperanto since 1887 or so. But if decimal arithmetic gets any
market share at all in the COBOL community, competition is a good thing:
maybe COBOL will improve as a result, C++ decimal classes will improve
too, and the users win. And a lot of people outside COBOL environments
will benefit from decimal arithmetic classes and built-in support of
business calculations like present value, sinking-fund computations, etc.
James K Beard

-----Original Message-----
From: NightStrike [mailto:nightstrike@...]
Sent: Saturday, April 09, 2011 2:41 AM
To: jkbeard1@...; mingw-w64-public@...
Cc: James K Beard; Jim Michaels
Subject: Re: [Mingw-w64-public] mingw-w64 Decimal Floating Point math

On Sun, Apr 3, 2011 at 7:07 AM, James K Beard <beardjamesk@...> wrote:
> A quick glance through the document seems to tell us that the decimal
> arithmetic will incorporate checks to ensure that any rounding in binary
> floating point does not compromise the accuracy of the final decimal
> result. That's pretty much what I was suggesting in my message of March
> 26 below. The JTC1 committee is apparently considering putting it in the
> standard. This could be a very good thing for people porting code from
> COBOL, and useful for new applications in environments previously
> restricted to COBOL, such as the banking and accounting industries.

I'm being a little OT here, but I'm curious.. does that mean that COBOL
was a language that gave very high accuracy compared to C of the day?
From: K. Frank <kfrank29.c@gm...> - 2011-04-09 15:03:42

Hi Jon and Ruben!

On Sat, Apr 9, 2011 at 9:47 AM, JonY <jon_y@...> wrote:
> On 4/9/2011 21:33, Ruben Van Boxem wrote:
>> Sorry for jumping into this discussion, but I don't seem to understand
>> what the advantage is of a non-hardware-supported real number
>> representation. [...]
>
> Sure, that is fine if your range is limited. It's the same reason
> floating point exists, but with more specific applications.
>
> No, 128-bit integers are too short after factoring in exponents.

Obviously it depends on the exact use case you need to address, and there
are some specialized situations in which decimal floating-point is
legitimately called for, but I think Ruben is basically right.

The money thing that has been mentioned a few times in this thread is a
red herring. If you need to perform money calculations accurate to the
penny, just do your calculations in pennies, and use exact integer
arithmetic. (Or do them in dollars, and use exact two-decimal-place
fixed-point arithmetic, which is really integer arithmetic by another
name.)

If you really need floating-point because of the range provided by the
exponent, then your money calculations won't be exact anyway. For
example, let's say you're using seven-digit decimal floating-point:

   $10,000,000.00 (exact) + $1,000.73 (exact) = $10,001,000.73 (???)

With seven-digit decimal floating-point your result won't be exact;
instead, you'll get $10,001,000.00, with the 73 cents being lost to
round-off error. If you claim you need floating-point because you need
the range provided by the exponent to represent a sum as large as
$10,000,000, then you are no longer doing calculations exact down to the
penny (with seven-digit floating-point).

Also, note that performing the example calculation, $10,000,000.00 +
$1,000.73, will not trigger any of the traps or exceptions that are
sometimes provided by floating-point hardware or software. This is not
overflow or underflow or some other exceptional floating-point condition
such as denormalization; this is garden-variety round-off error that
"always" happens when performing floating-point calculations.

So, if you need exact money calculations, use fixed-point arithmetic
(essentially integer arithmetic), live within the well-defined finite
range, and throw an exception (or somehow signal the error condition) if
you overflow the finite range. (Or use fixed-point arithmetic based on
bignums, as Ruben suggested, and have essentially unlimited range, at the
cost of slower arithmetic.)

What, then, would be the advantage of using decimal floating-point? I
don't really know the history or what people were thinking when they
built those early decimal floating-point systems, but there is a (minor)
advantage of having the numbers people work with on paper be represented
exactly. I have 1.2345 * 10^10 and 7.6543 * 10^12 written down on a piece
of paper and type them into my decimal computer. They are represented
exactly. Of course, the sum and product of these numbers are not
represented exactly (with, say, seven-digit floating-point), so any
advantage of having used decimal floating-point is minor.

Decimal floating-point rarely buys you anything you really care about,
which is probably why almost all modern computers support binary
floating-point, but not decimal.

This does raise the question that Ruben alluded to: Why might someone
bother with implementing a decimal floating-point package for the gcc
environment? It's a fair amount of work and rather tricky to do it right,
and if you don't do it right, there's no point to it. Not to be critical,
but why not let decimal floating-point die a seemingly well-deserved
death, as is pretty much happening in the modern computing world?
Implementing floating-point arithmetic correctly is rather fussy work,
and I, myself, wouldn't have much taste for it.

Best.

K. Frank
From: JonY <jon_y@us...> - 2011-04-09 13:48:03

On 4/9/2011 21:33, Ruben Van Boxem wrote:
> Sorry for jumping into this discussion, but I don't seem to understand
> what the advantage is of a non-hardware-supported real number
> representation. [...]
> (Ruben's message is quoted in full further down in this thread.)

Sure, that is fine if your range is limited. It's the same reason
floating point exists, but with more specific applications.

No, 128-bit integers are too short after factoring in exponents.
From: Ruben Van Boxem <vanboxem.ruben@gm...> - 2011-04-09 13:33:34

Hi,

Sorry for jumping into this discussion, but I don't seem to understand
what the advantage is of a non-hardware-supported real number
representation. If you need the two (or a bit more) decimal places
required for currency and percentages, why not just use a big integer
and, for display, divide by 100? No more worries about precision, up to
an arbitrarily determined number of decimal places. Are the numbers so
huge that they can't be stored in a 128-bit integer, or are there
stricter requirements precision-wise? Thanks!

Ruben

On 9 Apr 2011 15:16, "K. Frank" <kfrank29.c@...> wrote:
> Hello NightStrike!
> [K. Frank's message of 2011-04-09 13:16 is quoted in full further down
> in this thread.]
From: K. Frank <kfrank29.c@gm...> - 2011-04-09 13:16:12

Hello NightStrike!

On Sat, Apr 9, 2011 at 2:41 AM, NightStrike <nightstrike@...> wrote:
> On Sun, Apr 3, 2011 at 7:07 AM, James K Beard <beardjamesk@...> wrote:
>> A quick glance through the document seems to tell us that the decimal
>> arithmetic will incorporate checks to ensure that any rounding in
>> binary floating point does not compromise the accuracy of the final
>> decimal result.
>> ...
>
> I'm being a little OT here, but I'm curious.. does that mean that
> COBOL was a language that gave very high accuracy compared to C of the
> day?

No, COBOL, by virtue of using decimal arithmetic, would not have been more
accurate than C using binary floating-point, but rather "differently"
accurate. (This, of course, is only true if you make an apples-to-apples
comparison. If you use 64-bit decimal floating-point (call this double
precision), it will be much more accurate than 32-bit single-precision
binary floating-point, and, of course, double-precision binary
floating-point will be much more accurate than single-precision decimal
floating-point.)

That is, the set of real numbers that can be represented exactly as
decimal floating-point numbers is different from the set of exactly
representable binary floating-point numbers.

Let me illustrate this with an approximate example. I won't get the exact
numbers and details of the floating-point representation correct, but the
core idea is spot-on.

Compare using three decimal digits (0 - 999; 10^3 = 1000) with ten binary
digits (0 - 1023; 2^10 = 1024), essentially the same accuracy.

Consider the two real numbers:

   1 - 1/100 = 0.99 = 99 * 10^-2, an exact decimal floating-point number

   1 - 1/128 = 0.1111111 (binary) = 127 * 2^-7, an exact binary
   floating-point number

The first, 1 - 1/100, is not exactly representable in binary, because
1/100 = 1 / (2^2 * 5^2), and you can't represent fractional (negative)
powers of five exactly in binary.

The second, 1 - 1/128, is not exactly representable in decimal, because
we are only using three decimal digits:

   1/128 = 0.0078125 (exact),

so

   1 - 1/128 = 0.9921875 (exact).

If we give ourselves seven decimal digits, we can represent 1 - 1/128
exactly, but that wouldn't be an apples-to-apples comparison. The best we
can do with our three-decimal-digit decimal floating-point is

   1 - 1/128 ~= 0.992 = 992 * 10^-3 (approximate).

This shows not that either decimal or binary is more accurate, but simply
that they are different. If it is important that you can represent things
like 1/100 exactly, use decimal, but if you want to represent things like
1/128 exactly, use binary.

(In practice, for the same word size, binary is somewhat more accurate,
because in decimal a single decimal digit is usually stored in four bits,
wasting the difference between a decimal digit and a hexadecimal (0 - 15)
digit. Also, you can trade off accuracy for range by moving bits from the
mantissa to the exponent.)

Happy Hacking!

K. Frank
From: Antoni Jaume <a2jaume@gm...> - 2011-04-09 12:54:50

2011/4/9 NightStrike <nightstrike@...>:
> I'm being a little OT here, but I'm curious.. does that mean that
> COBOL was a language that gave very high accuracy compared to C of the
> day?

COBOL considerably predates C. It is not so much that it has high
accuracy as that it avoids decimal-to-binary and binary-to-decimal
conversions. With the limited precision and memory available at the time,
the addition of cents could quickly lose precision.
From: Jim Michaels <jmichae3@ya...> - 2011-04-09 07:21:02

The point is, I shouldn't have to modify the code. It should be done with
compiler switches, unless your method of "packing" is compiler-independent.
The compiler shouldn't balk when I throw the -fpack-struct switch. So why
did they even make the switch if it's not going to work? Actually, it only
works in ANSI C code, I will grant you that. So it only half works. :/

Jim Michaels

________________________________
From: Jaroslav Šmíd <jardasmid@...>
To: mingw-w64-public@...
Sent: Fri, April 8, 2011 1:50:16 AM
Subject: Re: [Mingw-w64-public] g++ -fpack-struct and vector, iterator,
stdint.h, iostream clash

Looks like GCC requires that the field is aligned in order to create a
reference, and because there is no alignment specified in the STL headers
(e.g. with gcc's pragma pack), even STL containers and iterators get
packed. Maybe you should use __attribute__((packed)) on your structures
and not do it globally.

On 04/01/2011 06:49 AM, Jim Michaels wrote:
> g++ -fpack-struct and vector, iterator, stdint.h, iostream clash
>
> I don't get these problems with Microsoft and Borland compilers. I can
> turn on the pack-struct switch and everything works great, and I can
> use iostream and any of the STL, no problem.
>
> But not with g++. I am using the sezero personal build 4.5.2 1002.
>
> This just doesn't seem right.
> In file included from c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/ios:43:0,
>   from c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/ostream:40,
>   from c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/iterator:65,
>   from diskgeom.cpp:49:
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h: In member function 'std::ios_base::fmtflags std::ios_base::setf(std::ios_base::fmtflags)':
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h:580:19: error: cannot bind packed field '((std::ios_base*)this)->std::ios_base::_M_flags' to 'std::_Ios_Fmtflags&'
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h: In member function 'std::ios_base::fmtflags std::ios_base::setf(std::ios_base::fmtflags, std::ios_base::fmtflags)':
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h:597:20: error: cannot bind packed field '((std::ios_base*)this)->std::ios_base::_M_flags' to 'std::_Ios_Fmtflags&'
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h:598:36: error: cannot bind packed field '((std::ios_base*)this)->std::ios_base::_M_flags' to 'std::_Ios_Fmtflags&'
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h: In member function 'void std::ios_base::unsetf(std::ios_base::fmtflags)':
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h:610:20: error: cannot bind packed field '((std::ios_base*)this)->std::ios_base::_M_flags' to 'std::_Ios_Fmtflags&'
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h: In member function 'long int& std::ios_base::iword(int)':
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h:744:21: error: cannot bind packed field '__word->std::ios_base::_Words::_M_iword' to 'long int&'
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h: In member function 'void*& std::ios_base::pword(int)':
> c:\mingw-w64-bin_i686-mingw_20101002_4.5_sezero\mingw64\bin\../lib/gcc/x86_64-w64-mingw32/4.5.2/../../../../x86_64-w64-mingw32/include/c++/4.5.2/bits/ios_base.h:765:21: error: cannot bind packed field '__word->std::ios_base::_Words::_M_pword' to 'void*&'
>
> --
> Jim Michaels
> jmichae3@...
> JimM@...
> http://JimsComputerRepairandWebDesign.com
> http://JesusnJim.com (my personal site, has software)
> http://DoLifeComputers.JesusnJim.com (group which I lead)
> --
> Computer memory/disk size measurements:
> [KB KiB] [MB MiB] [GB GiB] [TB TiB]
> [10^3B=1,000B=1KB] [2^10B=1,024B=1KiB]
> [10^6B=1,000,000B=1MB] [2^20B=1,048,576B=1MiB]
> [10^9B=1,000,000,000B=1GB] [2^30B=1,073,741,824B=1GiB]
> [10^12B=1,000,000,000,000B=1TB] [2^40B=1,099,511,627,776B=1TiB]
> Note: disk size is measured in MB, GB, or TB, not in MiB, GiB, or TiB.
> Computer memory (RAM) is measured in MiB and GiB.

_______________________________________________
Mingw-w64-public mailing list
Mingw-w64-public@...
https://lists.sourceforge.net/lists/listinfo/mingw-w64-public
From: NightStrike <nightstrike@gm...> - 2011-04-09 06:41:11

On Sun, Apr 3, 2011 at 7:07 AM, James K Beard <beardjamesk@...> wrote:
> A quick glance through the document seems to tell us that the decimal
> arithmetic will incorporate checks to ensure that any rounding in binary
> floating point does not compromise the accuracy of the final decimal
> result. That's pretty much what I was suggesting in my message of March 26
> below. The JTC1 committee is apparently considering putting it in the
> standard. This could be a very good thing for people porting code from
> COBOL, and useful for new applications in environments previously restricted
> to COBOL, such as the banking and accounting industries.

I'm being a little OT here, but I'm curious.. does that mean that COBOL was a language that gave very high accuracy compared to C of the day?