From: JonY <jon_y@us...>  2011-03-23 13:14:56

Hello,

I'm working on DFP math for mingw-w64; the main hurdle is that the values should never be converted to conventional (binary) floating point. Dr. Beard, I'm hoping you can help me out on this task in the future.

I have some documentation on the BID encoding used for the DFP numbers at <http://mingw-w64.svn.sourceforge.net/viewvc/mingw-w64/experimental/dfp_math/dfp_internal.h?revision=4087&view=markup>. The encoding documentation is around the middle of the file; the rest of my code is a sad embarrassment to look at. It is mostly for testing the encoding, not for practical use. There is more documentation at: <http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=8cf166fedd1aa110VgnVCM100000a360ea10RCRD>

Without a proper mathematical theory background, implementing the math library would be hard, especially if it needs to run efficiently. I also welcome anybody else with the required background knowledge.
From: JonY <jon_y@us...>  2011-03-23 17:05:22

On 3/23/2011 22:06, James K Beard wrote:
> Jon: The simplest and quite possibly the most efficient way to implement a
> standard function library in BCD decimal arithmetic is to convert to IEEE
> standard double precision (or, if necessary, quad precision), use the
> existing libraries, and convert back to BCD decimal floating point format.
> The binary floating point will have more accuracy, thus providing a few
> guard bits for the process, and hardware arithmetic (even quad precision is
> supported by hardware, because the preserved carry fields make quad precision
> simple to support and allow good efficiency) is hard to match with software
> floating point, which is what any BCD decimal arithmetic would be.
>
> James K Beard

Hi, thanks for the reply.

To my understanding, converting DFP to BCD, then to IEEE float and back again seems to defeat the purpose of using decimal floating point where exact representation is needed; I'm not too clear about this part. Will calculations suffer from inexact representation?

According to the range of DECIMAL128, we do need quad precision. GCC does support quad precision via libquadmath, but it's LGPL, so it is not suitable for direct inclusion.

Kai, any input on the hardware arithmetic part?
From: James K Beard <beardjamesk@ve...>  2011-03-23 18:11:54

You don't need to go to BCD to convert DFP to IEEE (regular) floating point. A single arithmetic operation directly in DFP will cost more than the conversion to IEEE floating point. I would use double precision for anything up to 12 decimal digits of accuracy, 80-bit for another three, and simply incorporate the quad precision libraries with credit (or by reference, if differences in licensing are a problem) for distribution.

Anything other than binary representation will be less efficient in terms of accuracy provided by a given number of bits. By illustration, base 10 requires four bits per digit but provides only 3.32 bits (log2(10)) of accuracy. The only relief from this fundamental fact is the use of fewer bits for the exponent, and in IEEE floating point the size of the exponent field is already minimized just about to the point of diminishing returns (problems requiring workarounds in areas such as determinants, series, and large polynomials).

James K Beard
From: Kai Tietz <ktietz70@go...>  2011-03-23 18:29:09

2011/3/23 James K Beard <beardjamesk@...>:
> [...]

Well, DFP <-> IEEE conversion is already present in libgcc, so you shouldn't need any special implementation here. I would suggest using the double type for 32-bit and 64-bit DFP, and AFAICS the 80-bit IEEE type should be wide enough for the 128-bit DFP. How big is its exponent specified to be? The rounding might be interesting.

Regards,
Kai
From: JonY <jon_y@us...>  2011-03-24 01:20:38

On 3/24/2011 02:29, Kai Tietz wrote:
> [...] How big is its exponent specified to be? The rounding might be interesting.

Long double extended precision goes up to 4932 (base 10) in exponent, while DECIMAL128 goes up to 6144; this is assuming I didn't get the docs wrong.
From: James K Beard <beardjamesk@ve...>  2011-03-24 05:40:51

It may not be as bad as you might think, because trigonometric functions become invalid for arguments with magnitude defined by the mantissa length, not the exponent, and the exponent is removed as part of the process in log and exponentiation operations.

James K Beard
From: Jim Michaels <jmichae3@ya...>  2011-03-25 20:16:14

I can add this: if you are doing accounting apps or anything dealing with money, or doing comparisons with relational operators, you want decimal, not fp. Otherwise you have problems adding .01 + .01 + .01, with -0 not being +0, and so on; you can't reliably compare. fp is NOT for comparison ops; with decimal you can compare.
From: James K Beard <beardjamesk@ve...>  2011-03-26 18:34:48

If the mantissa has at least seven or eight bits below the binary point, equivalence to the cents level is achieved by |x - y| < 0.005; the half-cent is x'0.0147ae', which can be expressed in any decent language. If you do it as a procedure, you can do magnitude checks to ensure that the magnitude of the currency quantity doesn't bump the exponent too high, possibly with a check on the magnitude of the binary exponent. Another aspect to consider is that rounding binary values to cents does exactly the same thing; every time a financial program produces output for people to read, this is done as part of the formatting for I/O.

But I think that's almost all moot. The only times we will need to go to the library are for transcendental functions (trigonometric, log, exponential, and other scientific and mathematical functions). Exponentiation to integer powers is usually done by computing an array of squared quantities and using the bit pattern of the exponent to roll them up into the result. For real-world financial computation where binary floating point is an issue, we are pretty much limited to the exponentiations used in interest calculations, which are probably best done in binary floating point and rounded to the nearest cent. Look rationally at the alternatives in decimal and binary arithmetic and you will run into the reasons that all modern computers are binary at their cores.

On the other hand, if you want to do a mathematical library in base-ten arithmetic, speed would only be a worry if the routines were executed vast numbers of times, and practical problems will require this type of computation a limited number of times, so speed isn't a problem; in the age of terabyte HDs, space isn't either. This may be what someone wants, but you will never make a decisive case for such a library in the context of numerical analysis and computer science.

James K Beard
From: Jim Michaels <jmichae3@ya...>  2011-04-03 06:06:05

Take a gander at this: decimal floating point math is possibly coming to TR2. http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf
From: James K Beard <beardjamesk@ve...>  2011-04-03 14:08:03

A quick glance through the document seems to tell us that the decimal arithmetic will incorporate checks to ensure that any rounding in binary floating point does not compromise the accuracy of the final decimal result. That's pretty much what I was suggesting in my message of March 26. The JTC1 committee is apparently considering putting it in the standard. This could be a very good thing for people porting code from COBOL, and useful for new applications in environments previously restricted to COBOL, such as the banking and accounting industries.

James K Beard
From: JonY <jon_y@us...> - 2011-04-04 01:25:42

On 4/3/2011 22:07, James K Beard wrote:
> A quick glance through the document seems to tell us that the decimal
> arithmetic will incorporate checks to ensure that any rounding in binary
> floating point does not compromise the accuracy of the final decimal
> result. That's pretty much what I was suggesting in my message of March 26
> below. The JTC1 Committee is apparently considering putting it in the
> standard. This could be a very good thing for people porting code from
> COBOL, and useful for new applications in environments previously restricted
> to COBOL, such as the banking and accounting industries.
>
> James K Beard

Guess I'll start mapping the math functions using simple type casting when I get the time; libgcc already has those routines.
From: NightStrike <nightstrike@gm...> - 2011-04-09 06:41:11

On Sun, Apr 3, 2011 at 7:07 AM, James K Beard <beardjamesk@...> wrote:
> A quick glance through the document seems to tell us that the decimal
> arithmetic will incorporate checks to ensure that any rounding in binary
> floating point does not compromise the accuracy of the final decimal
> result. That's pretty much what I was suggesting in my message of March 26
> below. The JTC1 Committee is apparently considering putting it in the
> standard. This could be a very good thing for people porting code from
> COBOL, and useful for new applications in environments previously restricted
> to COBOL, such as the banking and accounting industries.

I'm being a little OT here, but I'm curious: does that mean that COBOL was a language that gave very high accuracy compared to the C of the day?
From: Antoni Jaume <a2jaume@gm...> - 2011-04-09 12:54:50

2011/4/9 NightStrike <nightstrike@...>
>
> I'm being a little OT here, but I'm curious: does that mean that
> COBOL was a language that gave very high accuracy compared to the C of the
> day?

COBOL considerably predates C. It is not so much that it has high accuracy as that it avoids decimal-to-binary and binary-to-decimal conversions. With the limited precision and memory available at the time, the addition of cents could quickly lose precision.
From: K. Frank <kfrank29.c@gm...> - 2011-04-09 13:16:12

Hello NightStrike!

On Sat, Apr 9, 2011 at 2:41 AM, NightStrike <nightstrike@...> wrote:
> On Sun, Apr 3, 2011 at 7:07 AM, James K Beard <beardjamesk@...> wrote:
>> A quick glance through the document seems to tell us that the decimal
>> arithmetic will incorporate checks to ensure that any rounding in binary
>> floating point does not compromise the accuracy of the final decimal
>> result.
>> ...
>
> I'm being a little OT here, but I'm curious: does that mean that
> COBOL was a language that gave very high accuracy compared to the C of the
> day?

No, COBOL, by virtue of using decimal arithmetic, would not have been more accurate than C using binary floating-point, but rather "differently" accurate. (This, of course, is only true if you make an apples-to-apples comparison. If you use 64-bit decimal floating-point (call this double precision), it will be much more accurate than 32-bit single-precision binary floating-point, and, of course, double-precision binary floating-point will be much more accurate than single-precision decimal floating-point.)

That is, the set of real numbers that can be represented exactly as decimal floating-point numbers is different from the set of exactly representable binary floating-point numbers.

Let me illustrate this with an approximate example; I won't get the exact numbers and details of the floating-point representation correct, but the core idea is spot-on.

Compare using three decimal digits (0-999; 10^3 = 1000) with ten binary digits (0-1023; 2^10 = 1024), essentially the same accuracy.

Consider the two real numbers:

1 - 1/100 = 0.99 = 99 * 10^-2, an exact decimal floating-point number

1 - 1/128 = 0.1111111 (binary) = 127 * 2^-7, an exact binary floating-point number

The first, 1 - 1/100, is not exactly representable in binary, because 1/100 = 1 / (2^2 * 5^2), and you can't represent fractional (negative) powers of five exactly in binary.

The second, 1 - 1/128, is not exactly representable in decimal, because we are only using three decimal digits.
1/128 = 0.0078125 (exact), so 1 - 1/128 = 0.9921875 (exact)

If we give ourselves seven decimal digits, we can represent 1 - 1/128 exactly, but that wouldn't be an apples-to-apples comparison. The best we can do with our three-decimal-digit decimal floating-point is

1 - 1/128 ~= 0.992 = 992 * 10^-3 (approximate)

This shows not that either decimal or binary is more accurate, but simply that they are different. If it is important that you can represent things like 1/100 exactly, use decimal, but if you want to represent things like 1/128 exactly, use binary.

(In practice, for the same word size, binary is somewhat more accurate, because in decimal a single decimal digit is usually stored in four bits, wasting the difference between a decimal digit and a hexadecimal (0-15) digit. Also, you can trade off accuracy for range by moving bits from the mantissa to the exponent.)

Happy Hacking!

K. Frank
From: Ruben Van Boxem <vanboxem.ruben@gm...> - 2011-04-09 13:33:34

Hi,

Sorry for jumping into this discussion, but I don't seem to understand what the advantage is of a non-hardware-supported real number representation. If you need the two (or a bit more) decimal places required for currency and percentages, why not just use a big integer and for display divide by 100? No more worries about precision, up to an arbitrarily determined number of decimal places. Are the numbers so huge that they can't be stored in a 128-bit integer, or are there stricter requirements precision-wise? Thanks!

Ruben

On 9 Apr 2011 15:16, "K. Frank" <kfrank29.c@...> wrote:
> Hello NightStrike!
>
> On Sat, Apr 9, 2011 at 2:41 AM, NightStrike <nightstrike@...> wrote:
>> I'm being a little OT here, but I'm curious: does that mean that
>> COBOL was a language that gave very high accuracy compared to the C of the
>> day?
>
> No, COBOL, by virtue of using decimal arithmetic, would not have been more
> accurate than C using binary floating-point, but rather "differently"
> accurate.
> ...
>
> That is, the set of real numbers that can be represented exactly as decimal
> floating-point numbers is different from the set of exactly representable
> binary floating-point numbers.
From: JonY <jon_y@us...> - 2011-04-09 13:48:03

On 4/9/2011 21:33, Ruben Van Boxem wrote:
> Hi,
>
> Sorry for jumping into this discussion, but I don't seem to understand what
> the advantage is of a non-hardware-supported real number representation. If
> you need the two (or a bit more) decimal places required for currency and
> percentages, why not just use a big integer and for display divide by 100?
> No more worries about precision, up to an arbitrarily determined number of
> decimal places. Are the numbers so huge that they can't be stored in a
> 128-bit integer, or are there stricter requirements precision-wise? Thanks!
>
> Ruben

Sure, that is fine if your range is limited. It's the same reason floating point exists, but with more specific applications.

No, 128-bit integers are too short after factoring in exponents.
From: K. Frank <kfrank29.c@gm...> - 2011-04-09 15:03:42

Hi Jon and Ruben!

On Sat, Apr 9, 2011 at 9:47 AM, JonY <jon_y@...> wrote:
> ...
> On 4/9/2011 21:33, Ruben Van Boxem wrote:
>> ...
>
> Sure, that is fine if your range is limited. It's the same reason floating
> point exists, but with more specific applications.
>
> No, 128-bit integers are too short after factoring in exponents.
> ...

Obviously it depends on the exact use case you need to address, and there are some specialized situations in which decimal floating-point is legitimately called for, but I think Ruben is basically right.

The money thing that has been mentioned a few times in this thread is a red herring. If you need to perform money calculations accurate to the penny, just do your calculations in pennies, and use exact integer arithmetic. (Or do them in dollars, and use exact two-decimal-place fixed-point arithmetic, which is really integer arithmetic by another name.) If you really need floating-point because of the range provided by the exponent, then your money calculations won't be exact anyway.

For example, let's say you're using seven-digit decimal floating-point:

$10,000,000.00 (exact) + $1,000.73 (exact) = $10,001,000.73 (???)

With seven-digit decimal floating-point your result won't be exact; instead, you'll get $10,001,000.00, with the 73 cents being lost to roundoff error.
If you claim you need floating-point because you need the range provided by the exponent to represent a sum as large as $10,000,000, then you are no longer doing calculations exact down to the penny (with seven-digit floating-point).

Also, note that performing the example calculation, $10,000,000.00 + $1,000.73, will not trigger any sort of traps or exceptions that are sometimes provided by floating-point hardware or software. This is not overflow or underflow or some other exceptional floating-point condition such as denormalization; this is garden-variety roundoff error that "always" happens when performing floating-point calculations.

So, if you need exact money calculations, use fixed-point arithmetic (essentially integer arithmetic), live within the well-defined finite range, and throw an exception (or somehow signal the error condition) if you overflow the finite range. (Or use fixed-point arithmetic based on bignums, as Ruben suggested, and have essentially unlimited range, at the cost of slower arithmetic.)

What, then, would be the advantage of using decimal floating-point? I don't really know the history or what people were thinking when they built those early decimal floating-point systems, but there is a (minor) advantage of having the numbers people work with on paper be represented exactly. If I have 1.2345 * 10^10 and 7.6543 * 10^12 written down on a piece of paper and type them into my decimal computer, they are represented exactly. Of course, the sum and product of these numbers are not represented exactly (with, say, seven-digit floating-point), so any advantage of having used decimal floating-point is minor.

Decimal floating-point rarely buys you anything you really care about, which is probably why almost all modern computers support binary floating-point, but not decimal.

This does raise the question that Ruben alluded to: Why might someone bother with implementing a decimal floating-point package for the gcc environment?
It's a fair amount of work and rather tricky to do it right, and if you don't do it right, there's no point to it. Not to be critical, but why not let decimal floating-point die a seemingly well-deserved death, as is pretty much happening in the modern computing world? Implementing floating-point arithmetic correctly is rather fussy work, and I, myself, wouldn't have much taste for it.

Best.

K. Frank
From: JonY <jon_y@us...> - 2011-04-09 17:08:03

On 4/9/2011 23:03, K. Frank wrote:
> What, then, would be the advantage of using decimal floating-point?
> ...
> This does raise the question that Ruben alluded to: Why might
> someone bother with implementing a decimal floating-point package
> for the gcc environment? It's a fair amount of work and rather tricky
> to do it right, and if you don't do it right, there's no point to it.

It's part of the upcoming ISO/IEC TR 24732:2009. What you use it for, or whether you will use it or not, is tangential to the issue.

To give a proper explanation, binary floats don't give the proper machine epsilon for the equivalent decimal float sizes. Sure, you can cover DECIMAL64 and lower with long doubles, but what happens for DECIMAL128?

I am concerned about the correctness of its implementation, and its performance implications (runtime and precision), and/or tradeoffs. Right now, I opted to just map things to use the long double; it's wrong, but it's definitely something. I haven't really gotten into it since I am quite busy at the moment.
From: James K Beard <beardjamesk@ve...> - 2011-04-09 17:55:07

I think the long-term solution is to implement the decimal arithmetic keywords with an open mind. Special requirements, like extremely long decimal words (DECIMAL128 == 128 digits?), may require multiple-precision arithmetic, which may be problematic because most compilers support up to quad-precision floating point, which is 128 bits with a 15-bit exponent and 113-bit significand, or about 34 decimal digits.

If financial calculations were all that was required, that would be enough for practical use, because overflow would be at 10^33 dollars/yen/pesos/yuan/whatever. Nothing in real-world finance requires more dynamic range.

But nothing in nature requires more than about 10^125, which is the ratio of the densities of intergalactic space and the interior of a neutron star, or the time in years for the universe to reach total heat death and an overall homogeneous thermal composition. That's why IEEE floating point overflows at 10^38 or, for more than 32 bits, 10^308.

I have a multiple-precision package that I use for personal work that, for software convenience and best speed, uses a signed 16-bit integer for the exponent and overflows at 10^9863. I've been thinking about releasing it under the GPL, but there is a lot of code cleanup needed, and some core modules are from Numerical Recipes for Fortran 90 and will require another license that I haven't pursued.

James K Beard

-----Original Message----- From: JonY [mailto:jon_y@...] Sent: Saturday, April 09, 2011 1:08 PM To: mingww64public@... Subject: Re: [Mingw-w64-public] mingw-w64 Decimal Floating Point math

On 4/9/2011 23:03, K. Frank wrote:
> What, then, would be the advantage of using decimal floating-point?
> I don't really know the history or what people were thinking when they
> built those early decimal floating-point systems, but there is a
> (minor) advantage of having the numbers people work with on paper
> being represented exactly.
From: JonY <jon_y@us...> - 2011-04-10 01:27:01

On 4/10/2011 01:54, James K Beard wrote:
> I think the long-term solution is to implement the decimal arithmetic
> keywords with an open mind. Special requirements, like extremely long
> decimal words (DECIMAL128 == 128 digits?), may require multiple-precision
> arithmetic, which may be problematic because most compilers support up to
> quad-precision floating point, which is 128 bits with a 15-bit exponent and
> 113-bit significand, or about 34 decimal digits.

DECIMAL128 has an epsilon of 1E-33DL; I guess that means the first 33 digits after the decimal point have to be free from rounding errors.

> If financial calculations were all that was required, that would be enough
> for practical use, because overflow would be at 10^33
> dollars/yen/pesos/yuan/whatever. Nothing in real-world finance requires
> more dynamic range.
>
> ...
>
> I have a multiple-precision package that I use for personal work that, for
> software convenience and best speed, uses a signed 16-bit integer for the
> exponent and overflows at 10^9863. I've been thinking about releasing it
> under the GPL, but there is a lot of code cleanup needed, and some core
> modules are from Numerical Recipes for Fortran 90 and will require another
> license that I haven't pursued.
>
> James K Beard

I've talked with Kai some time ago about licensing; GPL and LGPL aren't practical for mingw-w64. If they were, I would have taken libdecnumber and libquadmath from gcc directly. He opted for an MIT- or BSD-type license so mingw-w64 could still be used to develop proprietary software.
I did find one with a BSD license, with nice decimal float support too, but its build system is so horrendous that it takes more work to fix it than the actual programming.
From: dashesy <dashesy@gm...> - 2011-04-09 20:36:27

> Sorry for jumping into this discussion, but I don't seem to understand what
> the advantage is of a non-hardware-supported real number representation. If
> you need the two (or a bit more) decimal places required for currency and
> percentages, why not just use a big integer and for display divide by 100?
> ...
>
> On 9 Apr 2011 15:16, "K. Frank" <kfrank29.c@...> wrote:
>> ...
Can this new feature maybe be used for fixed-point arithmetic? For some hardware it might be advantageous, because a floating-point implementation is not supported or is costly.
This new addition might somehow make the libraries more standard (?), but the real advantage would be to have the math library work with it (does it now?).

dashesy
From: K. Frank <kfrank29.c@gm...> - 2011-03-23 18:48:42

Hi Jon and James!

On Wed, Mar 23, 2011 at 12:45 PM, JonY <jon_y@...> wrote:
> On 3/23/2011 22:06, James K Beard wrote:
>> Jon: The simplest and quite possibly the most efficient way to implement a
>> standard function library in BCD decimal arithmetic is to convert to IEEE
>> standard double precision (or, if necessary, quad precision), use the
>> existing libraries, and convert back to BCD decimal floating-point format.
>> The binary floating point will have more accuracy, thus providing a few
>> guard bits for the process, and hardware arithmetic (even quad precision is
>> supported by hardware, because the preserved carry fields make quad precision
>> simple to support and allow good efficiency) is hard to match with software
>> floating point, which is what any BCD decimal arithmetic would be.
>>
>> James K Beard
>
> Hi,
>
> Thanks for the reply.
>
> To my understanding, converting DFP to BCD then IEEE float and back
> again seems to defeat the purpose of using decimal floating point where
> exact representation is needed; I'm not too clear about this part. Will
> calculations suffer from inexact representation?

I believe that this is a fully legitimate concern. To be explicit: because decimal exponents scale numbers by powers of five as well as two (10 = 2 * 5), while binary exponents scale only by powers of two, decimal floating-point numbers can exactly represent values that binary floating-point numbers cannot. By way of example, 1/2 can be represented exactly in both decimal and binary floating-point, 1/5 can be represented exactly only in decimal floating-point (and 1/3 can be represented exactly in neither).

Because of this, blindly converting from decimal to binary, carrying out the computation, and converting back to decimal can fail to produce the same result as carrying out the "correct" decimal computation.
Having said that, if you wish to perform fixed-precision (as distinct from fixed-point) decimal arithmetic, and your binary floating-point hardware has enough extra precision (I'm not sure exactly how much is needed, but I would think that one extra decimal digit of precision would be more than enough), then I believe that (neglecting underflow, overflow, denormalization, and so on) James's scheme can be made to work (although I don't think I would call it simple).

(I use the phrase fixed-precision in contrast to arbitrary-precision. By a fixed-precision decimal floating-point number, I mean a mantissa with a fixed number of decimal digits, say ten, and a decimal exponent.)

In its simplest form, the basic idea is, for each decimal floating-point arithmetic operation, to convert the operands to binary floating-point, perform the operation, and convert back to decimal floating-point by rounding to the nearest decimal floating-point value. This, however, isn't cheap. All of this converting and rounding is somewhat costly, and defeats the added benefits of any kind of floating-point pipeline and registers.

Note, if you don't convert back to decimal floating-point after every operation (or implement some other additional logic), you won't be guaranteed to get "correct" decimal floating-point results. For example, (1/5 + 2/5) - 3/5 is exactly equal to zero in real (non-computer) arithmetic. It should also be exactly zero in correctly implemented decimal floating-point arithmetic, because all input values, intermediate results, and the final result are exactly representable by decimal floating-point numbers. However, if you calculate this with double-precision binary floating-point operations (without rounding the intermediate results back to decimal floating-point, and reconverting them to binary floating-point), you will get a nonzero result on the order of 10^-16 (the approximate precision of double precision).
Note that rounding this result back to decimal floating-point still leaves you with this nonzero result; the result of the binary computation is a perfectly good value that can be well approximated by a decimal floating-point number with, say, ten decimal digits of precision.

Of course, it all depends on what you actually need. If you don't need the specific results that correct decimal floating-point arithmetic gives you, then converting to binary, computing, and converting back will generally give you a very good result. But if you don't need the specific decimal results, why not just use binary from the beginning?

Good luck.

K. Frank
From: James K Beard <beardjamesk@ve...>  2011-03-23 20:33:21

You need to go with the guard bits, which is the excess in the number of bits in the IEEE mantissa over what the result needs (52 bits, or 15.65 decimal digits, in the 64-bit format; 64 bits, or about 19.27 digits, in the 80-bit format). All common arithmetic coprocessors operate with 80-bit floating point with a 15-bit exponent, and thus have 12 guard bits for double-precision results. If the numerical error exceeds the depth of the guard bits, it creeps into the result. I would expect that numerical error from libraries designed for 64-bit, 80-bit, or 128-bit results won't be a problem for DFP results with 15, 20, or 34 digits, respectively. Certainly it will be less of a problem than in numerical libraries that compute them using BCD arithmetic without guard digits. Note that the most commonly used transcendental functions are computed in hardware in 80-bit floating point. The 11-bit double-precision exponent overflows at about 10^308. Single precision uses an 8-bit exponent and overflows at about 10^38.

James K Beard

-----Original Message-----
From: Kai Tietz [mailto:ktietz70@...]
Sent: Wednesday, March 23, 2011 2:29 PM
To: jkbeard1@...; mingw-w64-public@...
Cc: James K Beard; JonY
Subject: Re: [Mingw-w64-public] mingw-w64 Decimal Floating Point math

2011/3/23 James K Beard <beardjamesk@...>:
> You don't need to go to BCD to convert DFP to IEEE (regular) floating point.
> A single arithmetic operation directly in DFP will cost more than what you
> do to convert to IEEE floating point. I would use double precision for
> anything up to 12 decimals of accuracy, 80-bit for another three, and simply
> incorporate the quad-precision libraries with credit (or by reference, if
> differences in licensing are a problem) for distribution.
>
> Anything other than binary representation will be less efficient in terms of
> accuracy provided by a given number of bits. By illustration, base 10
> requires four bits, but provides only 3.32 bits (log2(10)) per digit of
> accuracy.
> The only relief from this fundamental fact is use of fewer bits
> for the exponent, and in IEEE floating point the size of the exponent field
> is minimized just about to the point of diminishing returns (problems
> requiring workarounds in areas such as determinants, series, and large
> polynomials) to begin with.
>
> James K Beard

Well, DFP <-> IEEE conversion is already present in libgcc, so you shouldn't need any special implementation here. I would suggest using the double type for 32-bit and 64-bit DFP, and AFAICS the 80-bit IEEE type should be wide enough for 128-bit DFP. How big is its exponent specified? The rounding might be interesting.

Regards,
Kai
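On Kai's question about the decimal128 exponent: per IEEE 754-2008, decimal128 carries 34 significant digits with a maximum value just under 10^6145. A quick check (illustrative Python, as an arithmetic scratchpad) suggests the 80-bit extended type covers neither the precision nor the range of decimal128, so the quad (binary128) type would be needed there:

```python
import math

# Decimal digits of precision carried by each binary significand
# (hidden leading bit included for double and quad):
print(round(53 * math.log10(2), 2))    # IEEE double:    ~15.95 digits
print(round(64 * math.log10(2), 2))    # x87 80-bit:     ~19.27 digits
print(round(113 * math.log10(2), 2))   # IEEE binary128: ~34.02 digits

# Range: IEEE 754-2008 decimal128 reaches just under 10^6145, while the
# 80-bit extended format tops out near 2^16384:
extended_max_log10 = 16384 * math.log10(2)
print(round(extended_max_log10))       # about 4932
print(extended_max_log10 < 6145)       # True: the 80-bit range falls short
```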
From: James K Beard <beardjamesk@ve...>  2011-03-23 20:46:11

Excuse me, but isn't the representation of a number in BCD unique except for some esoteric special cases? Why wouldn't the conversion of a binary floating point result give a correct result when converted to BCD? Perhaps I'm being naïve here, so please help me out.

BCD arithmetic -- but not transcendental functions -- is supported in hardware on some machines that are designed for business applications. But the speed and efficiency are in hardware binary floating point, which is ubiquitously supported at 80 bits, including the trigonometric, log, and exponential functions. Quad precision with just a few instructions that exploit the 80-bit hardware is supported in most instruction sets and in GNU libraries, and now in most if not all commercial C and Fortran compilers. Without doing an operation or cycle count or timing experiments, it would seem to me that a conversion to or from BCD and binary floating point would cost about the same as a single arithmetic operation, within a factor of two or so. As such, converting a BCD input to binary floating point for hardware computation and converting back to a unique, or at least correct, result is vastly more efficient than any possible BCD library function, particularly for the trigonometric, log, and exponential functions. For library functions implemented in software, such as the log gamma functions, you can leverage the not inconsiderable effort already spent writing and testing those libraries.

James K Beard

-----Original Message-----
From: K. Frank [mailto:kfrank29.c@...]
Sent: Wednesday, March 23, 2011 2:49 PM
To: mingw64
Subject: Re: [Mingw-w64-public] mingw-w64 Decimal Floating Point math

Hi Jon and James!
On Wed, Mar 23, 2011 at 12:45 PM, JonY <jon_y@...> wrote:
> On 3/23/2011 22:06, James K Beard wrote:
>> Jon: The simplest and quite possibly the most efficient way to implement a
>> standard function library in BCD decimal arithmetic is to convert to IEEE
>> standard double precision (or, if necessary, quad precision), use the
>> existing libraries, and convert back to BCD decimal floating point format.
>> The binary floating point will have more accuracy, thus providing a few
>> guard bits for the process, and hardware arithmetic (even quad precision is
>> supported by hardware, because the preserved carry fields make quad
>> precision simple to support and allow good efficiency) is hard to match
>> with software floating point, which is what any BCD decimal arithmetic
>> would be.
>>
>> James K Beard
>
> Hi,
>
> Thanks for the reply.
>
> To my understanding, converting DFP to BCD then IEEE float and back
> again seems to defeat the purpose of using decimal floating point where
> exact representation is needed; I'm not too clear about this part. Will
> calculations suffer from inexact representation?

I believe that this is a fully legitimate concern.

To be explicit: because decimal exponents scale numbers by powers of five as well as two (10 = 2 * 5), and binary exponents only scale by powers of two, decimal floating-point numbers can represent more real numbers exactly than binary floating-point numbers can. By way of example, 1/2 can be represented exactly in both decimal and binary floating point, 1/5 can be represented exactly only in decimal floating point (and 1/3 can be represented exactly in neither). Because of this, blindly converting from decimal to binary, carrying out the computation, and converting back to decimal can fail to produce the same result as carrying out the "correct" decimal computation.
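The representability claims above can be checked directly; an illustrative sketch using Python's decimal and fractions modules (exact rational comparison makes the rounding visible):

```python
from decimal import Decimal
from fractions import Fraction

# 1/2 is exact in both binary and decimal floating point:
assert Fraction(0.5) == Fraction(1, 2)
assert Fraction(Decimal("0.5")) == Fraction(1, 2)

# 1/5 is exact in decimal floating point, but the nearest binary
# double is only an approximation:
assert Fraction(Decimal("0.2")) == Fraction(1, 5)
assert Fraction(0.2) != Fraction(1, 5)

# 1/3 is exact in neither; any finite precision rounds it:
third = Decimal(1) / Decimal(3)          # 28 decimal digits by default
assert Fraction(third) != Fraction(1, 3)
print("all representability checks passed")
```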
From: K. Frank <kfrank29.c@gm...>  2011-03-23 22:00:23

Hello James!

On Wed, Mar 23, 2011 at 4:45 PM, James K Beard <beardjamesk@...> wrote:
> Excuse me, but isn't the representation of a number in BCD unique except for
> some esoteric special cases?

Well, we could quibble about some specific details, but for the sake of this discussion, yes, let's take the BCD representation to be unique.

(An aside here: I would distinguish between decimal floating-point representations in general, and BCD as a specific decimal floating-point representation. To me, a decimal floating-point number is an integer mantissa and an integer exponent that represents scaling by powers of ten. (Likewise, a binary floating-point number is an integer mantissa and an integer exponent that represents scaling by powers of two.) BCD, to me, is a specific decimal floating-point representation in which the integer mantissa (and presumably the exponent as well) is stored as a BCD (binary-coded decimal) integer: each decimal digit of the BCD integer is stored as a four-bit hexadecimal digit, with the restriction that only the values 0-9 are legal (i.e., A-F are illegal). Converting between BCD integers and binary integers isn't cheap, because you have to unpack the decimal digits from the BCD integer, multiply them by powers of ten, and add them together to get the binary integer. The point is that regardless of the semantics -- what you call decimal or BCD -- there are two separate things going on here: how you store your integers (in BCD or binary), and the meaning of the exponent (powers of ten vs. powers of two). When I say decimal floating-point, I mean that the exponent represents scaling by powers of ten; I am agnostic as to the specific representation of the integer mantissa and exponent that make up the floating-point number.)

> Why wouldn't the conversion of a binary
> floating point result give a correct result when converted to BCD? Perhaps
> I'm being naïve here so please help me out here.
The core reason is not that the conversion from binary floating-point to decimal floating-point isn't exact (it will be exact if the binary floating-point representation has less precision than the decimal floating-point representation, and is not guaranteed to be exact if the binary representation has greater precision), but rather that the conversion from decimal floating-point to binary floating-point is in general not exact. That was the point of my "(1/5 + 2/5) - 3/5" example. Once the inputs to the binary computation are not exact, you risk the final decimal result not being exact, even if the binary computation and the conversion back to decimal are exact.

> BCD arithmetic -- but not transcendental functions -- are supported in
> hardware in some machines that are designed for business applications.

Ah, yes, decimal machines -- from back in the steam-powered era...

> But, the speed and efficiency is in hardware binary floating point, which is
> ubiquitously supported at 80 bits, including the trigonometric, log, and
> exponential functions. Quad precision with just a few instructions that
> exploit the 80-bit hardware is supported in most instruction sets and in GNU
> libraries, and now in most if not all commercial C and Fortran compilers.
> Without doing an operation or cycle count or timing experiments, it would
> seem to me that a conversion to or from BCD and binary floating point would
> be about the same as a single arithmetic operation, within a factor of two
> or so.

No, absent hardware support, the conversion is relatively costly. Converting BCD integers to binary integers takes more than a few clock cycles or arithmetic operations, and even if you use binary integers, rather than BCD, for your mantissa and/or exponent, converting 300 in decimal floating-point (3 x 10^2) to binary floating-point (300 x 2^0) is also more than a few clock cycles or arithmetic operations.
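The unpack/scale/add loop described above can be sketched as follows (illustrative Python with hypothetical helper names, not any particular library's code; a real implementation would work on fixed-width words):

```python
def to_bcd(n):
    """Pack a non-negative binary integer into BCD:
    one decimal digit per 4-bit nibble."""
    bcd, shift = 0, 0
    while n:
        n, digit = divmod(n, 10)   # one divide per digit: not cheap
        bcd |= digit << shift
        shift += 4
    return bcd

def from_bcd(bcd):
    """Unpack a BCD integer back to binary:
    multiply each nibble by its power of ten and accumulate."""
    n, scale = 0, 1
    while bcd:
        n += (bcd & 0xF) * scale   # one multiply-add per digit
        bcd >>= 4
        scale *= 10
    return n

print(hex(to_bcd(1234)))    # 0x1234 -- each nibble is one decimal digit
print(from_bcd(0x1234))     # 1234
```

The per-digit divide and multiply in these loops are exactly the cost being discussed: a conversion touches every digit, unlike a single hardware arithmetic operation.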
> As such, converting a BCD input to binary floating point for hardware
> computation and converting back to a unique, or at least correct, result is
> vastly more efficient than any possible BCD library function, particularly
> for trigonometric, log and exponential functions. For library functions
> implemented as software, such as the log gamma functions, you can leverage
> the not inconsiderable effort needed to write and test these libraries.

This I agree with. Certainly for the elementary transcendental functions (often with hardware support), and for special functions (generally in libraries), and probably even for division, I would expect conversion from decimal to binary, binary computation, and conversion back to decimal to be much cheaper than computation entirely in the decimal representation. But for the basic arithmetic operations (+, -, *, and possibly division) it's cheaper to do your arithmetic directly in the decimal representation.

> James K Beard

Best.

K. Frank
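Doing the basic operations directly on a decimal representation amounts to aligning decimal exponents over integer mantissas. A minimal sketch (hypothetical names, unbounded precision for clarity; a real DFP library would round to a fixed digit count and handle signs, overflow, and special values):

```python
def dec_add(a, b):
    """Add two decimal floats given as (mantissa, exponent),
    each meaning m * 10**e."""
    (ma, ea), (mb, eb) = a, b
    # Align to the smaller exponent by scaling the other mantissa up.
    if ea > eb:
        ma *= 10 ** (ea - eb)
        ea = eb
    else:
        mb *= 10 ** (eb - ea)
    return (ma + mb, ea)

# 0.2 + 0.4 represented as 2*10^-1 and 4*10^-1:
print(dec_add((2, -1), (4, -1)))   # (6, -1), i.e. exactly 0.6

# 300 + 5 with mismatched exponents: 3*10^2 plus 5*10^0:
print(dec_add((3, 2), (5, 0)))     # (305, 0)
```

No conversion to binary is needed at any point, which is why simple arithmetic can stay in the decimal domain even when transcendental functions are routed through binary hardware.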