Message archive (messages per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2007 | | | | 118 | 140 | 56 | 86 | 4 | 1 | | | |
| 2008 | | | | | | | 94 | 86 | | 3 | 18 | 27 |
| 2009 | 15 | 15 | 27 | 2 | 1 | 6 | 10 | 4 | 1 | | | |
| 2010 | | | | | 1 | 2 | | | | | | |
| 2015 | | | | | | | | | | | 2 | |
From: Michael A. <mab...@go...> - 2008-12-24 00:19:58
|
On Tue, Dec 23, 2008 at 3:36 PM, Michael Abshoff <mab...@go...> wrote:
> Hi Bill,
>
> I am just updating to FLINT 1.0.20 in Sage and I noticed an issue I
> never reported upstream: src/makefile has DOS line endings, i.e. ^M.
> This causes a BSD make to throw in the towel.
>
> One way to fix this via vim would be to use ":%s/^M//g" :)

Ok, I got it all wrong and the problem was a patch I had that caused
rejections, so please ignore this dumb mistake.

> subversion should also offer the option to convert all the newlines
> automatically.
>
> Cheers,
>
> Michael

I just checked the diff I had for the makefile in Sage vs. vanilla and the
only relevant change is this:

+libflint.dylib64: $(FLINTOBJ)
+	$(CC) -m64 -single_module -fPIC -dynamiclib -o libflint.dylib $(FLINTOBJ) $(LIBS)
+

I.e. I need to add an extra -m64 flag to get a 64 bit OSX dylib. Hopefully
we will upgrade to FLINT 1.1 during SD 12, but we will see how that goes.

Cheers,

Michael |
|
From: Michael A. <mab...@go...> - 2008-12-23 23:36:47
|
Hi Bill, I am just updating to FLINT 1.0.20 in Sage and I noticed an issue I never reported upstream: src/makefile has DOS line endings, i.e. ^M. This causes a BSD make to throw in the towel. One way to fix this via vim would be to use ":%s/^M//g" :) Subversion should also offer the option to convert all the newlines automatically. Cheers, Michael |
|
From: Bill H. <goo...@go...> - 2008-12-21 22:45:00
|
Hi all,

After almost a year of development, I have just released FLINT 1.1.0. See the webpage to download it: http://www.flintlib.org/

The release has been tested on 32 and 64 bit machines, valgrinds clean, and all the documentation, test code and aliasing test code have been updated and checked.

This is a major feature release. Here is a list of all the new features of FLINT 1.1.0 over the 1.0.x series:

Integer Multiplication
======================
* Sped up, especially for sizes between 1300 and 2300 limbs

fmpz_poly (Polynomials over ZZ)
===============================
* Various speedups for polynomials with small coefficients on account of the integer multiplication speedups
* Convert to and from zmod_poly (polys over Z/pZ)
* Chinese Remainder
* Leading coefficient macro
* Gaussian content
* Primitive part
* Pretty printing of polynomials
* get_coeff_mpz_read_only
* Euclidean norm
* Testing for exact division of polynomials (including modular method)
* GCD subresultant/modular/heuristic
* Modular inversion of polynomials
* Resultant
* XGCD (Pohst-Zassenhaus)
* multi modular polynomial multiplication
* rewritten karatsuba_trunc function
* derivative
* polynomial evaluation
* polynomial composition

zmod_poly (Polynomials over Z/pZ)
=================================
* addition and subtraction
* Sped up multiplication
* Extended multiplication to allow polynomials with moduli of up to 63 bits on a 64 bit machine
* Subpolynomials (attach, attach_shift, attach_truncate)
* Reverse
* Truncated multiplication (left and right, classical and KS)
* Scalar multiplication
* Division (divrem and div), classical, divide-and-conquer and newton iteration
* Series inversion
* Series division
* Resultant
* GCD
* Inversion modulo a polynomial
* XGCD
* Squarefree factorisation
* Berlekamp factorisation
* Irreducibility test
* Derivative
* Sped up division functions with middle products

long_extras (Integer arithmetic and modulo arithmetic for word sized integers)
==============================================================================
* 32 bit precomputed inverses
* Add and subtract mod a modulus
* Division modulo a modulus (for division in Z/pZ)
* sped up the integer square root function
* routines for partial factoring (i.e. finding factors whose product exceeds some bound)
* z_randbits
* Pocklington-Lehmer primality testing
* Lucas pseudoprime test
* Fermat pseudoprime test
* Fast Legendre symbol computation

fmpz (multiprecision integers)
==============================
* Chinese remainder
* Square root with remainder
* Shift left and right
* fmpz_mod_ui
* montgomery multiplication routines
* multimodular CRT and recombination routines
* fmpz_abs, fmpz_neg
* fmpz_mulmod and fmpz_divmod
* fmpz_invert
* dramatically sped up fmpz_gcd for small operands

Conversions
===========
* Conversion to and from NTL polynomial format (ZZX)

theta functions
===============
* computation of lots of theta functions
* example program for multiplying theta functions

profiling
=========
* test code provides timings of functions
* additional quick and dirty timing functions added to the profiler

Quadratic sieve
===============
* Tiny quadratic sieve for integers of one and two limbs
* Completely rewritten MPQS

Enjoy!!

Bill. |
|
From: Michael A. <mab...@go...> - 2008-12-20 22:10:26
|
On Sat, Dec 20, 2008 at 2:05 PM, Bill Hart <goo...@go...> wrote: > 2008/12/20 Michael Abshoff <mab...@go...>: >> On Fri, Dec 19, 2008 at 7:53 AM, Bill Hart <goo...@go...> wrote: Hi Bill, >> >> Have you ever fixed the deallocation issue of your custom allocation >> stack at exit? We have been seeing it hang around in Sage at exit. > > Yes, call flint_stack_cleanup(); Thanks - this is now http://trac.sagemath.org/sage_trac/ticket/4840 >> Michael >> > Cheers, Michael |
|
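For readers hitting the same valgrind-at-exit issue, a minimal sketch of where the cleanup call goes. `flint_stack_cleanup()` is the function named in this exchange; the header names and the surrounding `fmpz_poly_*` calls follow FLINT 1.x naming used elsewhere in this thread but are assumptions, not copied from the release.

```c
/* Sketch only: link against -lflint -lgmp; exact headers/signatures may differ
   in the actual FLINT 1.x release. */
#include "flint.h"
#include "fmpz_poly.h"

int main(void)
{
    fmpz_poly_t f;
    fmpz_poly_init(f);
    fmpz_poly_set_coeff_ui(f, 3, 7);   /* ... use FLINT as usual ... */
    fmpz_poly_clear(f);

    /* Release FLINT's internal allocation stack before exit, so valgrind (or
       an embedding application such as Sage) does not see it hanging around
       at program exit. */
    flint_stack_cleanup();
    return 0;
}
```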
From: Bill H. <goo...@go...> - 2008-12-20 22:05:26
|
2008/12/20 Michael Abshoff <mab...@go...>: > On Fri, Dec 19, 2008 at 7:53 AM, Bill Hart <goo...@go...> wrote: > > Have you ever fixed the deallocation issue of your custom allocation > stack at exit? We have been seeing it hang around in Sage at exit. Yes, call flint_stack_cleanup(); > Michael > |
|
From: Michael A. <mab...@go...> - 2008-12-20 21:24:08
|
On Fri, Dec 19, 2008 at 7:53 AM, Bill Hart <goo...@go...> wrote: [I am not sure if I should trim the CC list or not, so feel free to complain if you want off] > Hi All, Hi All > I just released FLINT 1.1.0 beta. As the FLINT website is currently > down due to the upgrade of the machine (sage.math) which it is hosted > on, I give the following link: > > http://sage.math.washington.edu/home/wbhart/webpage/flint-1.1.0.tar.gz > > The documentation is here: > > http://sage.math.washington.edu/home/wbhart/webpage/flint-1.1.0.pdf > > I would be very pleased if any of you could run the test code on your > local machines and report any bugs. Running the tests is simply "make > check" in this version of FLINT. Please note that each module reports > separately - the final "all tests passed" only refers to the final > module tested. You have to scroll back through to check that all > modules passed all their tests. Ok, will do. > Note you must link against GMP/MPIR and zn_poly > (http://www.math.harvard.edu/~dmharvey/zn_poly/) for this release. To > edit the lib and include directories for these, edit the file > "flint_env" and do "source flint_env". Full instructions are to be > found in the first few pages of the documentation if you get stuck. If > you cannot build zn_poly on your machine (considered very unlikely) > please report to David Harvey (dmh...@ma...) and refer to > the FLINT documentation for how to build without zn_poly. It is not > possible to build FLINT without either GMP or MPIR. > > All the documentation is complete. Tests pass on both the old and new > 64 bit sage.math, on 32 bit Cygwin and on a 32 bit linux. All > functions that should alias, do, and there is full test code for this. > Anyone wishing to use/wrap the new functionality in FLINT can use this > release. I've spent the past month testing and documenting it. There > are very few changes to the interface for FLINT 1.1.0. > > Once build tests have passed and any issues resolved, I will valgrind > everything (I have already done a lot of this) and the beta status > will be lifted. Have you ever fixed the deallocation issue of your custom allocation stack at exit? We have been seeing it hang around in Sage at exit. > I will post an announcement on flint-devel and sage-devel when that > time comes, including a very long list of all the new features of > FLINT 1.1. This is a MAJOR new feature release, thus the beta release > before final!! > > If any of you wish to be on the FLINT development list and are not > already on it, you can register with sourceforge: > > http://sourceforge.net/account/registration/ > > and subscribe at the following link: > > http://sourceforge.net/mail/?group_id=182716 > > The traffic is very low except during English summer when > undergraduates work on the project when the traffic is medium. I would > value your contributions to the list. Ok, I really dislike the sf list management and especially their management of the archives. > If there is sufficient demand for a google groups for FLINT > development, I will create one and switch. Yes, I would be very happy to see you switch. > Subsequent feature releases will now be issued every 3 months, not > every year. FLINT 1.2, to be released in about 3 months, will contain > numerous speed ups and a new functions that got cut from 1.1. FLINT > 2.0 will be issued in about six to nine months and will include a new > polynomial module, and Andy Novocin's new factoring > technique/heuristics for ZZ[x]. 
About 1/3 of the code for FLINT 2.0 is > already written, but not included in this release. > > Many thanks to Andy Novocin for help with final documentation and > alias testing and thank you to the many contributors to this release > which are listed on the webpage: > > http://sage.math.washington.edu/home/wbhart/webpage (normally > http://www.flintlib.org/) > > The highlights of this release are fast polynomial GCD for ZZ[x], > (using heuristic GCD and modular half-gcd), which is now almost > uniformly faster than Magma: Nice. > http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd19.png > (ignore the heading which is incorrect), > > polynomial composition (due to Andy Novocin) and evaluation (which are > asymptotically faster than Magma) and factoring of polynomials in > Z/pZ[x] for word sized primes (much faster than Magma, due to Richard > Howell-Peak), new primality testing and factoring functions for word > sized integers (due to Peter Shrimpton) and a much faster quadratic > sieve for factoring integers. :) - we need to hook up the new quadratic sieve code into Sage since we are currently still using ancient code there. > Enjoy! > > Bill. > Cheers, Michael |
|
From: Bill H. <goo...@go...> - 2008-12-19 16:02:04
|
Hi All, I just released FLINT 1.1.0 beta. As the FLINT website is currently down due to the upgrade of the machine (sage.math) which it is hosted on, I give the following link: http://sage.math.washington.edu/home/wbhart/webpage/flint-1.1.0.tar.gz The documentation is here: http://sage.math.washington.edu/home/wbhart/webpage/flint-1.1.0.pdf I would be very pleased if any of you could run the test code on your local machines and report any bugs. Running the tests is simply "make check" in this version of FLINT. Please note that each module reports separately - the final "all tests passed" only refers to the final module tested. You have to scroll back through to check that all modules passed all their tests. Note you must link against GMP/MPIR and zn_poly (http://www.math.harvard.edu/~dmharvey/zn_poly/) for this release. To edit the lib and include directories for these, edit the file "flint_env" and do "source flint_env". Full instructions are to be found in the first few pages of the documentation if you get stuck. If you cannot build zn_poly on your machine (considered very unlikely) please report to David Harvey (dmh...@ma...) and refer to the FLINT documentation for how to build without zn_poly. It is not possible to build FLINT without either GMP or MPIR. All the documentation is complete. Tests pass on both the old and new 64 bit sage.math, on 32 bit Cygwin and on a 32 bit linux. All functions that should alias, do, and there is full test code for this. Anyone wishing to use/wrap the new functionality in FLINT can use this release. I've spent the past month testing and documenting it. There are very few changes to the interface for FLINT 1.1.0. Once build tests have passed and any issues resolved, I will valgrind everything (I have already done a lot of this) and the beta status will be lifted. I will post an announcement on flint-devel and sage-devel when that time comes, including a very long list of all the new features of FLINT 1.1. This is a MAJOR new feature release, thus the beta release before final!! If any of you wish to be on the FLINT development list and are not already on it, you can register with sourceforge: http://sourceforge.net/account/registration/ and subscribe at the following link: http://sourceforge.net/mail/?group_id=182716 The traffic is very low except during English summer when undergraduates work on the project when the traffic is medium. I would value your contributions to the list. If there is sufficient demand for a google groups for FLINT development, I will create one and switch. Subsequent feature releases will now be issued every 3 months, not every year. FLINT 1.2, to be released in about 3 months, will contain numerous speed ups and a new functions that got cut from 1.1. FLINT 2.0 will be issued in about six to nine months and will include a new polynomial module, and Andy Novocin's new factoring technique/heuristics for ZZ[x]. About 1/3 of the code for FLINT 2.0 is already written, but not included in this release. 
Many thanks to Andy Novocin for help with final documentation and alias testing and thank you to the many contributors to this release which are listed on the webpage: http://sage.math.washington.edu/home/wbhart/webpage (normally http://www.flintlib.org/) The highlights of this release are fast polynomial GCD for ZZ[x], (using heuristic GCD and modular half-gcd), which is now almost uniformly faster than Magma: http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd19.png (ignore the heading which is incorrect), polynomial composition (due to Andy Novocin) and evaluation (which are asymptotically faster than Magma) and factoring of polynomials in Z/pZ[x] for word sized primes (much faster than Magma, due to Richard Howell-Peak), new primality testing and factoring functions for word sized integers (due to Peter Shrimpton) and a much faster quadratic sieve for factoring integers. Enjoy! Bill. |
|
From: Bill H. <goo...@go...> - 2008-12-19 15:53:37
|
Hi All, I just released FLINT 1.1.0 beta. As the FLINT website is currently down due to the upgrade of the machine (sage.math) which it is hosted on, I give the following link: http://sage.math.washington.edu/home/wbhart/webpage/flint-1.1.0.tar.gz The documentation is here: http://sage.math.washington.edu/home/wbhart/webpage/flint-1.1.0.pdf I would be very pleased if any of you could run the test code on your local machines and report any bugs. Running the tests is simply "make check" in this version of FLINT. Please note that each module reports separately - the final "all tests passed" only refers to the final module tested. You have to scroll back through to check that all modules passed all their tests. Note you must link against GMP/MPIR and zn_poly (http://www.math.harvard.edu/~dmharvey/zn_poly/) for this release. To edit the lib and include directories for these, edit the file "flint_env" and do "source flint_env". Full instructions are to be found in the first few pages of the documentation if you get stuck. If you cannot build zn_poly on your machine (considered very unlikely) please report to David Harvey (dmh...@ma...) and refer to the FLINT documentation for how to build without zn_poly. It is not possible to build FLINT without either GMP or MPIR. All the documentation is complete. Tests pass on both the old and new 64 bit sage.math, on 32 bit Cygwin and on a 32 bit linux. All functions that should alias, do, and there is full test code for this. Anyone wishing to use/wrap the new functionality in FLINT can use this release. I've spent the past month testing and documenting it. There are very few changes to the interface for FLINT 1.1.0. Once build tests have passed and any issues resolved, I will valgrind everything (I have already done a lot of this) and the beta status will be lifted. I will post an announcement on flint-devel and sage-devel when that time comes, including a very long list of all the new features of FLINT 1.1. This is a MAJOR new feature release, thus the beta release before final!! If any of you wish to be on the FLINT development list and are not already on it, you can register with sourceforge: http://sourceforge.net/account/registration/ and subscribe at the following link: http://sourceforge.net/mail/?group_id=182716 The traffic is very low except during English summer when undergraduates work on the project when the traffic is medium. I would value your contributions to the list. If there is sufficient demand for a google groups for FLINT development, I will create one and switch. Subsequent feature releases will now be issued every 3 months, not every year. FLINT 1.2, to be released in about 3 months, will contain numerous speed ups and a new functions that got cut from 1.1. FLINT 2.0 will be issued in about six to nine months and will include a new polynomial module, and Andy Novocin's new factoring technique/heuristics for ZZ[x]. About 1/3 of the code for FLINT 2.0 is already written, but not included in this release. 
Many thanks to Andy Novocin for help with final documentation and alias testing and thank you to the many contributors to this release which are listed on the webpage: http://sage.math.washington.edu/home/wbhart/webpage (normally http://www.flintlib.org/) The highlights of this release are fast polynomial GCD for ZZ[x], (using heuristic GCD and modular half-gcd), which is now almost uniformly faster than Magma: http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd19.png (ignore the heading which is incorrect), polynomial composition (due to Andy Novocin) and evaluation (which are asymptotically faster than Magma) and factoring of polynomials in Z/pZ[x] for word sized primes (much faster than Magma, due to Richard Howell-Peak), new primality testing and factoring functions for word sized integers (due to Peter Shrimpton) and a much faster quadratic sieve for factoring integers. Enjoy! Bill. |
|
From: Bill H. <goo...@go...> - 2008-12-12 18:50:16
|
The test code for long_extras is now done. Bill. |
|
From: Bill H. <goo...@go...> - 2008-12-08 15:11:48
|
I've finished checking the test functions for the fmpz module. Only two modules to go. Bill. |
|
From: Bill H. <goo...@go...> - 2008-12-08 00:42:32
|
I've finished the long process of adding test functions for all the fmpz_poly functions and aliasing (and associated tests) for all user exposed functions in fmpz_poly. This whole process needs to be done for the new functions in long_extras, fmpz (aliasing is not required there) and zmod_poly. I still need to do documentation, check everything works on a 32 bit machine before a beta release and valgrind everything for a release candidate. I should also add poly_normalisation_check functions throughout the fmpz_poly test code and remove #if DEBUG statements from around the if (!result) checks at the end of each test function (we want diagnostic data to print if a function fails a test!) Bill. |
|
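The last point above, that the failure diagnostics must not be hidden behind `#if DEBUG`, is easy to illustrate with a generic test-function skeleton. This is not FLINT's test code, just a sketch of the shape being described: per-iteration chatter may stay conditional, but the failure report always prints.

```c
/* Generic sketch of a randomised test function; not FLINT source. */
#include <stdio.h>
#include <stdlib.h>

#define DEBUG 0

/* toy function under test: should return a + b */
static long toy_add(long a, long b) { return a + b; }

static int test_toy_add(void)
{
    int result = 1;

    for (unsigned long count = 0; count < 1000 && result; count++)
    {
        long a = rand() % 1000 - 500;
        long b = rand() % 1000 - 500;
        long r = toy_add(a, b);

#if DEBUG
        printf("iteration %lu: a = %ld, b = %ld\n", count, a, b);  /* chatter */
#endif
        result = (r == a + b);

        if (!result)  /* diagnostics deliberately NOT wrapped in #if DEBUG */
            printf("FAIL: a = %ld, b = %ld, got %ld, expected %ld\n",
                   a, b, r, a + b);
    }

    return result;
}

int main(void)
{
    printf("toy_add... %s\n", test_toy_add() ? "ok" : "FAILED");
    return 0;
}
```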
From: Bill H. <goo...@go...> - 2008-12-04 23:15:51
|
I've been working on removing all the bugs from the various gcd functions in FLINT, getting it ready for the big release. Firstly I found a number of bugs which were actually speeding FLINT up (but returning the wrong answers). After lots more work I found lots of other bugs (mostly memory leaks) which were slowing FLINT down. I fixed all these and the current state of affairs is: http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd19.png Probably the remaining red dots are simply due to the fact that I have to use the modular gcd to get the benefit of the new half gcd code. When MPIR is released I will be able to use the new half gcd code in MPIR which will make the heuristic gcd run faster. That should mean we'll beat Magma everywhere. Bill. |
|
From: Bill H. <goo...@go...> - 2008-11-30 17:49:13
|
I've now finished reimplementing F_mpz_LLL_fast_d over the top of the new F_mpz_mat module. Not much has changed, and I hope it runs at least as fast as it did before. The results of the LLL reductions seem to be ok. Bill. 2008/11/30 Bill Hart <goo...@go...>: > I have just released a new bug fix for FLINT, available at http://www.flintlib.org/ > > This fixes the following issues: > > * A segfault in the division and pseudo division functions > > * The bound that was being used in fmpz_poly_gcd_modular was as far as > I know, not proven. I have replaced it with a proven bound and cited > the relevant paper. This makes little difference to the timings. > > * A bug in the profiling code for fmpz_poly related to the top bit of > n bit random coefficients always being set has been fixed. > > * I identified an issue that might potentially have caused wrong > results in polynomial multiplication. However I checked that in fact, > it cannot be triggered in FLINT 1.0.17 code (I also ran specifically > constructed tests to verify that this is in fact the case). The code > will be completely rewritten for FLINT 1.1. > > I recommend Sage upgrade to FLINT 1.0.17 as the first bug is critical > and can occur in live code (I actually hit it myself). > > FLINT 1.0.17 will likely be the last 1.0.x series release before FLINT > 1.1 which should be available for use in Sage in the next few weeks. > The interface for FLINT 1.1 should be pretty much the same as for > FLINT 1.0.x, however there will be a large quantity of new functions > and a large number of substantial speedups which Sage will be able to > take advantage of. > > I intend to issue beta/release candidates in about 11 days. The > todo.txt file in FLINT 1.0.17 lists the tasks which need to be > completed before the release. > > Over the next few days I will be working on code which will appear in > FLINT 2.0, specifically a matrix module F_mpz_mat over the new F_mpz > integer library I have coded up for FLINT 2.0 and an F_mpz > implementation of fpLLL's "fast" module. > > I am about a third of the way through implementation of F_mpz_poly for > FLINT 2.0. It features a completely rewritten polynomial module over > the new FLINT integer format F_mpz and David Harvey's KS2 algorithm, > which gives *up to* a 30% increase in speed for polynomial > multiplication for lengths between about 45 and 9000. > > Bill. > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
|
From: Bill H. <goo...@go...> - 2008-11-30 04:21:58
|
I've reimplemented the F_mpz_mat module including test code, using all the new functions in the F_mpz module. Tomorrow I'll reimplement fpLLL's fast module using the new F_mpz_mat implementation. To do this I need to reimplement a handful of additional F_mpz functions. After that is done, I'll be making the final push towards FLINT 1.1. Bill. |
|
From: Bill H. <goo...@go...> - 2008-11-30 00:03:13
|
I have just released a new bug fix for FLINT, available at http://www.flintlib.org/ This fixes the following issues: * A segfault in the division and pseudo division functions * The bound that was being used in fmpz_poly_gcd_modular was as far as I know, not proven. I have replaced it with a proven bound and cited the relevant paper. This makes little difference to the timings. * A bug in the profiling code for fmpz_poly related to the top bit of n bit random coefficients always being set has been fixed. * I identified an issue that might potentially have caused wrong results in polynomial multiplication. However I checked that in fact, it cannot be triggered in FLINT 1.0.17 code (I also ran specifically constructed tests to verify that this is in fact the case). The code will be completely rewritten for FLINT 1.1. I recommend Sage upgrade to FLINT 1.0.17 as the first bug is critical and can occur in live code (I actually hit it myself). FLINT 1.0.17 will likely be the last 1.0.x series release before FLINT 1.1 which should be available for use in Sage in the next few weeks. The interface for FLINT 1.1 should be pretty much the same as for FLINT 1.0.x, however there will be a large quantity of new functions and a large number of substantial speedups which Sage will be able to take advantage of. I intend to issue beta/release candidates in about 11 days. The todo.txt file in FLINT 1.0.17 lists the tasks which need to be completed before the release. Over the next few days I will be working on code which will appear in FLINT 2.0, specifically a matrix module F_mpz_mat over the new F_mpz integer library I have coded up for FLINT 2.0 and an F_mpz implementation of fpLLL's "fast" module. I am about a third of the way through implementation of F_mpz_poly for FLINT 2.0. It features a completely rewritten polynomial module over the new FLINT integer format F_mpz and David Harvey's KS2 algorithm, which gives *up to* a 30% increase in speed for polynomial multiplication for lengths between about 45 and 9000. Bill. |
|
From: Bill H. <goo...@go...> - 2008-11-21 15:43:18
|
I found the problem with F_mpz_poly_add. Basically when coefficients were being demoted as a result of cancellations, the mpz_t's were being lost instead of returned to the memory manager. The times are now much more in line with what I would expect, and close to the FLINT 1.x series times.

So now FLINT has a sensible integer format and writing polynomial functions is quite easy. The other advantage of the new model is that attaching polynomials using F_mpz_poly_attach will still be possible. It wasn't under the previous model.

length 100, bits = 50, iter = 1000000
=====================================
mpz_poly:
clear-init2-add-add : 24.342s
clear-init2-add     : 17.957s
clear-init2         : 9.227s
flint fmpz_poly:
clear-init-add-add  : 10.522s
clear-init-add      : 5.059s
clear-init          : 0.183s
flint-2 F_mpz_poly:
clear-init-add-add  : 2.360s
clear-init2-add     : 1.320s
clear-init2         : 0.400s

length 100, bits = 64, iter = 1000000
=====================================
mpz_poly:
clear-init2-add-add : 24.608s
clear-init2-add     : 17.758s
clear-init2         : 9.245s
flint fmpz_poly:
clear-init-add-add  : 9.861s
clear-init-add      : 3.804s
clear-init          : 0.183s
flint-2 F_mpz_poly:
clear-init-add-add  : 11.740s
clear-init2-add     : 7.240s
clear-init2         : 0.450s

length 100, bits = 1000, iter = 1000000
=======================================
mpz_poly:
clear-init2-add-add : 39.639s
clear-init2-add     : 28.760s
clear-init2         : 10.445s
flint fmpz_poly:
clear-init-add-add  : 19.321s
clear-init-add      : 8.697s
clear-init          : 0.235s
flint-2 F_mpz_poly:
clear-init-add-add  : 18.940s
clear-init2-add     : 9.240s
clear-init2         : 0.500s

length 100, bits = 100, 200, iter = 1000000
===========================================
mpz_poly:
clear-init2-add-add : 33.716s
clear-init2-add     : 18.921s
clear-init2         : 10.237s
flint fmpz_poly:
clear-init-add-add  : 14.611s
clear-init-add      : 6.425s
clear-init          : 0.190s
flint-2 F_mpz_poly:
clear-init-add-add  : 12.160s
clear-init2-add     : 6.520s
clear-init2         : 0.450s

Bill. |
|
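The bug described above (demoted coefficients losing their mpz_t's) is a generic hazard of any small/large hybrid coefficient type. The toy program below is not FLINT source -- the pool, the tagging and all names are invented -- but it shows why a demote that forgets to hand its mpz slot back to the manager leaks, while one that releases the slot first does not.

```c
/* Hypothetical illustration of the leak; compile with -lgmp. */
#include <stdio.h>
#include <gmp.h>

#define POOL_SIZE 16

static mpz_t pool[POOL_SIZE];      /* toy "memory manager": a pool of mpz_t's */
static int   free_list[POOL_SIZE];
static int   num_free;

static void pool_init(void)
{
    num_free = POOL_SIZE;
    for (int i = 0; i < POOL_SIZE; i++) { mpz_init(pool[i]); free_list[i] = i; }
}

static int  pool_alloc(void)        { return free_list[--num_free]; }
static void pool_release(int slot)  { free_list[num_free++] = slot; }

/* a coefficient: >= 0 means a small inline value, < 0 encodes a pool index */
static int is_big(long c) { return c < 0; }

static long promote(long value)     /* pretend this value overflowed inline range */
{
    int slot = pool_alloc();
    mpz_set_si(pool[slot], value);
    return -(long)slot - 1;
}

static void demote_leaky(long *coeff, long value)
{
    *coeff = value;                          /* old mpz slot simply forgotten */
}

static void demote_fixed(long *coeff, long value)
{
    if (is_big(*coeff))
        pool_release((int)(-*coeff - 1));    /* hand the slot back first */
    *coeff = value;
}

int main(void)
{
    pool_init();

    for (int i = 0; i < 3; i++)              /* each iteration loses a pool slot */
    {
        long c = promote(1000000L + i);
        demote_leaky(&c, 7);
    }
    printf("free slots after 3 leaky demotes: %d of %d\n", num_free, POOL_SIZE);

    for (int i = 0; i < 3; i++)              /* slots come straight back */
    {
        long c = promote(1000000L + i);
        demote_fixed(&c, 7);
    }
    printf("free slots after 3 fixed demotes: %d of %d\n", num_free, POOL_SIZE);
    return 0;
}
```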
From: Bill H. <goo...@go...> - 2008-11-21 13:49:35
|
Apparently the message below never made it to the list, so here it is again.

---------- Forwarded message ----------
From: William Hart <ha...@ya...>
Date: 2008/11/18
Subject: New F_mpz_t integer arithmetic module
To: Bill Hart <goo...@go...>

I've again been implementing a FLINT integer type (F_mpz_t). This time an F_mpz is either a long (with 62 bits and a sign) or an index into a FLINT wide array of mpz_t's. Then an F_mpz_t is just an array of length 1 of F_mpz's so that call by reference can be implemented (and checking for aliasing is easy).

The one downside of this is that if one wants to make this stuff multi-threaded, one needs an array of mpz_t's for each thread. But this shouldn't be too hard to manage.

Originally I tried letting an F_mpz_t be directly a long or index into an array of mpz_t's. But then you have to have an interface of functions that look like F_mpz_set(F_mpz_t * f, const F_mpz_t g) and checking for aliasing is then more difficult.

Of course the array of length 1 business really ends up causing an extra layer of indirection (especially bad when an integer is already being represented internally as an mpz_t), but the timings don't seem to be affected, as presumably the compiler optimises this extra layer away most of the time.

Here are the timings before this extra indirection:

Testing F_mpz_getset_ui()... Cpu = 1220 ms Wall = 1225 ms ... ok
Testing F_mpz_getset_si()... Cpu = 1240 ms Wall = 1232 ms ... ok
Testing F_mpz_getset_mpz()... Cpu = 3600 ms Wall = 3600 ms ... ok
Testing F_mpz_set()... Cpu = 180 ms Wall = 189 ms ... ok

Here are the timings with the extra indirection:

Testing F_mpz_getset_ui()... Cpu = 1220 ms Wall = 1224 ms ... ok
Testing F_mpz_getset_si()... Cpu = 1280 ms Wall = 1279 ms ... ok
Testing F_mpz_getset_mpz()... Cpu = 3590 ms Wall = 3596 ms ... ok
Testing F_mpz_set()... Cpu = 190 ms Wall = 192 ms ... ok

That's despite amortising away the cost of random generation and setup in the timings and working with very small integers, of say no more than 200 bits. So I have decided to stick with the nice consistent interface that allows easy checks for aliasing but has the extra level of indirection.

I've implemented all the memory management stuff and a handful of basic functions and a few test functions. It is implemented in F_mpz.c, F_mpz.h and F_mpz-test.c. Just do make F_mpz-test to make the test module.

So now FLINT has its own integer format. Yay!!

Bill. |
|
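A self-contained sketch of the representation William describes above: small values stored inline in a single word, larger ones as a tagged index into a global array of mpz_t's, and `F_mpz_t` as an array of length 1 so that call by reference and aliasing checks (just pointer comparisons) come for free. The tagging scheme, the fixed-size table and the function bodies here are illustrative assumptions, not the actual F_mpz.c; a 64 bit long is assumed throughout.

```c
/* Hypothetical sketch, not FLINT source; compile with -lgmp on a 64 bit machine. */
#include <stdio.h>
#include <gmp.h>

typedef long F_mpz;
typedef F_mpz F_mpz_t[1];

#define COEFF_MAX  ((1L << 62) - 1)        /* largest inline value            */
#define COEFF_MIN  (-COEFF_MAX)            /* smallest inline value           */
#define IS_MPZ(f)  (((f) >> 62) == 1L)     /* top bits 01 => index into table */

static mpz_t table[64];                    /* toy global mpz table; one per thread
                                              in a multi-threaded setting       */
static long table_used;

void F_mpz_set_mpz(F_mpz_t f, const mpz_t x)
{
    if (mpz_fits_slong_p(x) && mpz_get_si(x) <= COEFF_MAX
                            && mpz_get_si(x) >= COEFF_MIN)
    {
        f[0] = mpz_get_si(x);              /* store inline */
    }
    else
    {
        long i = table_used++;             /* no recycling in this sketch */
        mpz_init_set(table[i], x);
        f[0] = (1L << 62) | i;             /* store a tagged index */
    }
}

void F_mpz_get_mpz(mpz_t x, const F_mpz_t f)
{
    if (IS_MPZ(f[0]))
        mpz_set(x, table[f[0] & ~(1L << 62)]);
    else
        mpz_set_si(x, f[0]);
}

int main(void)
{
    mpz_t big, small, out;
    mpz_init_set_str(big, "340282366920938463463374607431768211456", 10); /* 2^128 */
    mpz_init_set_si(small, -42);
    mpz_init(out);

    F_mpz_t f, g;
    F_mpz_set_mpz(f, big);                 /* promoted to the mpz table        */
    F_mpz_set_mpz(g, small);               /* stays a 62 bit inline value      */

    F_mpz_get_mpz(out, f);
    gmp_printf("big   round trip: %Zd (stored as mpz: %d)\n", out, IS_MPZ(f[0]));
    F_mpz_get_mpz(out, g);
    gmp_printf("small round trip: %Zd (stored as mpz: %d)\n", out, IS_MPZ(g[0]));

    mpz_clears(big, small, out, NULL);
    return 0;
}
```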
From: Bill H. <goo...@go...> - 2008-11-21 13:48:38
|
I've started rewriting the F_mpz_poly module based on the new F_mpz module (which now has quite a few functions again). I've coded up F_mpz_poly_add. I ran my benchmarks to see how fast it is. The results are mixed. The benchmark involves constructing poly's A, B, C with signed coefficients and the same number of bits per coefficient and the same length as given in the tables below (except in the final case where C has a different number of bits per coefficient to A and B, as indicated). There are 3 benchmarks run for each problem: 1) Clear and initialise a polynomial R of the given length 2) Do 1 and compute R = A + B 3) Do 2 and compute R = R + C I've included times for the existing fmpz_poly from the FLINT 1.x series, times for FLINT 1.x mpz_poly and the new times for the new FLINT 2.x F_mpz_poly. The good thing is it is always faster than mpz_poly. The most interesting thing is that it is faster to add polynomials with 100 bits per coefficient than with 64 bits. Moreover, FLINT 1.x is much faster at the 64 bit problem. In fact, FLINT 2.x is nearly as slow as mpz_poly. I really don't have an explanation for this. In some senses 64 bits is a worst case for FLINT 2.x. It has to sometimes use 65 bits which means 2 limbs in the mpz_t's (though these are reused and should stabilise at 2 limbs very quickly) and sometimes it has to drop back to 62 bits when there is cancellation, which means repeatedly promoting and demoting coefficients. However when doing R = A + B over and over, I have it set up to do 10000 iterations with the same A and B before changing A and B, so there really ought not be many promotions and demotions between 62 bit coefficients and mpz_t's. Besides all this, it is nearly as slow for 70 bits for which both problems are avoided. The only explanation I have is that GMP is itself very slow at adding 70 bit signed integers. However not even this makes sense. After all, why would it be faster to add 100 bit integers!? It's certainly not a caching issue. So I'm a bit stumped. If anyone has any guesses, please let me know.... Anyhow, here are the times (which I am not unhappy with): length 100, bits = 50, iter = 1000000 ===================================== mpz_poly: clear-init2-add-add : 24.342s clear-init2-add : 17.957s clear-init2 : 9.227s flint fmpz_poly: clear-init-add-add : 10.522s clear-init-add : 5.059s clear-init : 0.183s flint-2 F_mpz_poly: clear-init-add-add : 2.360s clear-init2-add : 1.320s clear-init2 : 0.400s length 100, bits = 64, iter = 1000000 ===================================== mpz_poly: clear-init2-add-add : 24.608s clear-init2-add : 17.758s clear-init2 : 9.245s flint fmpz_poly: clear-init-add-add : 9.861s clear-init-add : 3.804s clear-init : 0.183s flint-2 F_mpz_poly: clear-init-add-add : 23.290s clear-init2-add : 17.259s clear-init2 : 0.450s length 100, bits = 1000, iter = 1000000 ===================================== mpz_poly: clear-init2-add-add : 39.639s clear-init2-add : 28.760s clear-init2 : 10.445s flint fmpz_poly: clear-init-add-add : 19.321s clear-init-add : 8.697s clear-init : 0.235s flint-2 F_mpz_poly: clear-init-add-add : 19.320s clear-init2-add : 9.780s clear-init2 : 0.500s length 100, bits = 100, 200, iter = 1000000 ===================================== mpz_poly: clear-init2-add-add : 33.716s clear-init2-add : 18.921s clear-init2 : 10.237s flint fmpz_poly: clear-init-add-add : 14.611s clear-init-add : 6.425s clear-init : 0.190s flint-2 F_mpz_poly: clear-init-add-add : 13.800s clear-init2-add : 7.220s clear-init2 : 0.450s Bill. |
|
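One way to test the closing hypothesis above (that GMP itself might be slower adding 64-70 bit signed integers than 100 bit ones) is to time mpz_add directly, outside FLINT. The program below is an invented, rough microbenchmark, not part of the thread; a serious comparison would pin the CPU and use FLINT's profiler.

```c
/* Rough mpz_add timing across operand sizes; compile with -lgmp. */
#include <stdio.h>
#include <time.h>
#include <gmp.h>

#define N     1000
#define ITERS 10000

static double bench_add(unsigned long bits, gmp_randstate_t st)
{
    mpz_t a[N], b[N], r;
    mpz_init(r);
    for (int i = 0; i < N; i++)
    {
        mpz_init(a[i]);
        mpz_init(b[i]);
        mpz_urandomb(a[i], st, bits);
        mpz_urandomb(b[i], st, bits);
        if (i & 1) mpz_neg(b[i], b[i]);    /* mix signs so cancellation occurs */
    }

    clock_t t0 = clock();
    for (int it = 0; it < ITERS; it++)
        for (int i = 0; i < N; i++)
            mpz_add(r, a[i], b[i]);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    for (int i = 0; i < N; i++) { mpz_clear(a[i]); mpz_clear(b[i]); }
    mpz_clear(r);
    return secs;
}

int main(void)
{
    gmp_randstate_t st;
    gmp_randinit_default(st);

    unsigned long sizes[] = {50, 64, 70, 100, 1000};
    for (int i = 0; i < 5; i++)
        printf("%4lu bits: %.3fs for %d x %d additions\n",
               sizes[i], bench_add(sizes[i], st), ITERS, N);

    gmp_randclear(st);
    return 0;
}
```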
From: Bill H. <goo...@go...> - 2008-11-13 14:58:20
|
For those following the polynomial GCD thread, I've moved it across to: http://groups.google.co.uk/group/sage-nt/topics?hl=en&start= in particular to this thread: http://groups.google.co.uk/group/sage-nt/browse_thread/thread/2da0e434f09d1fe1?hl=en in case anyone there has any ideas. For now I've completely run out of ideas. Magma has got me completely stumped. Here is the latest graph: http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd13.png A description of the new improvements is on the sage-nt site linked above. Whilst the right hand side now looks pretty good, there is one problem. If the polynomials are coprime the Heuristic GCD is not a good algorithm to use for long polynomials. The reason is that the modular algorithm will only need to use O(1) primes to determine that the polys are coprime, whereas the Heuristic algorithm will go ahead and pack right up to the bit size of the input polynomials, which is much more expensive. So I will have to determine some crossover points experimentally. The brown we had on the right of the graph will come back and I will only be able to get rid of it by packing more than one coefficient into a single limb in the zmod_poly's, as that problem is a caching one. Bill. 2008/11/11 Bill Hart <goo...@go...>: > After heating up one of the cores on sage.math for a while I have come > to the conclusion that Magma is not cheating. It gets the right > answers for these small GCD problems. > > But I am mystified as to how? I recall when timing the division code > in FLINT that in that region Magma was much more efficient. > > Something seems to reek of inefficiency when two tests for > divisibility dominate the time to do a GCD. Perhaps I just haven't > thought of an efficient enough way to test divisibility. I would have > thought the multi-modular technique should be fast enough, but it > doesn't seem to be. > > Bill. > > 2008/11/10 Bill Hart <goo...@go...>: >> It turns out that Schoenhage already computed a bound that works for >> the heuristic gcd unconditionally. >> >> If A is the max norm of the polynomials f and g and n is the maximum >> degree then one can evaluate at an integer u with >> >> u > 4*(n+1)^n*A^(2n) >> >> and not have to run divides checks. >> >> If Magma were doing this in the red region of >> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd.png >> then we would have n = 32 and A = 2^16 say. >> >> The u would be 4*(33^32)*2^(16*64) > (2^64)^18, i.e. u would have to >> be bigger than 18 limbs!! So this proven bound is totally useless. >> >> If Magma is not doing divisions in this region, I would question >> whether the results would be correct. No doubt they do something else >> clever. >> >> Bill. >> >> 2008/11/9 Bill Hart <goo...@go...>: >>> Hmm, I think I have gotten ahead of myself with my argument about the >>> Mignotte bound. >>> >>> If f(x) = a(x)*b(x) and g(x) = a(x)*d(x) with b(x) and d(x) coprime. >>> >>> when I substitute x = 2^64 then certainly a(2^64) will be a common >>> factor of f(2^64) and g(2^64). >>> >>> But it is also possible that even though b(x) and d(x) are coprime >>> that b(2^64) and d(2^64) are not. >>> >>> But this means that we can't simply rely on the Mignotte bound for the >>> size of the gcd of f(x) and g(x). Certainly if 2^64 is bigger than the >>> Mignotte bound times the gcd of b(2^64) and d(2^64) then we are ok. >>> The polynomial gcd that we are after will just end up being multiplied >>> by some integer constant which we can remove by computing the content. 
>>> >>> But there is no way that I can see to bound gcd of b(2^64) and d(2^64). >>> >>> Thus it seems essential to me that one actually check that the >>> purported poly gcd actually divides f(x) and g(x). >>> >>> But the time it takes to do this "divides" test is too great. I don't >>> have a solution to this problem at present. >>> >>> Bill. >>> >>> 2008/11/9 Bill Hart <goo...@go...>: >>>> I coded up the packing code to pack into 32 bits instead of 64. There >>>> were a couple of problems with this. When the Mignotte bound is bigger >>>> than 32 bits then a divides gets carried out, which for small gcd's >>>> dominates the time. Thus sometimes it is better to pack into 64 bits >>>> instead. >>>> >>>> Once that is sorted out one soon realises that just computing the >>>> Mignotte bound takes too long. So I implemented an approximation which >>>> is sometimes slightly too large but isn't too bad. >>>> >>>> Once these issues are fixed, here is the result: >>>> >>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd11.png >>>> >>>> Now the brown in the bottom left must be where packing is done to the >>>> bit. Here the Minotte bound is usually less than the number of bits >>>> that one is packing to. >>>> >>>> Bill. >>>> >>>> 2008/11/8 Bill Hart <goo...@go...>: >>>>> I finally realised that if the number of bits one is packing into is >>>>> bigger than the Mignotte bound, then one does not need to do a divides >>>>> test at all. This is all that Magma is doing. >>>>> >>>>> Here is why: >>>>> >>>>> To do a divides, if one is under the Mignotte bound, one would pack A >>>>> and B and pack the purported GCD and see if the packed A and B divide >>>>> the packed GCD. This would be done by simply doing a multiprecision >>>>> division and seeing if there is a remainder. If not, they divide >>>>> precisely and the quotient is calculated exactly. >>>>> >>>>> But in computing the heuristic gcd, we already have the packed >>>>> versions of A and B, as the gcd of these divides both of them by >>>>> definition. But the packed version of G is precisely the gcd of the >>>>> packed version of A and B. >>>>> >>>>> I've now inserted code to compute the Mignotte bound and omit the >>>>> divides test in this case. The function passes the test code and I am >>>>> now doing a profile run against Magma again: >>>>> >>>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd10.png >>>>> >>>>> The light blue layer at the bottom left is due to the fact that FLINT >>>>> packs into 64 bits not 30 or 32 as Magma surely does. This means that >>>>> we more easily exceed the Mignotte bound so that the divides test is >>>>> not required. >>>>> >>>>> The browny red region at the bottom left is surely due to the fact >>>>> that Magma is doing a heuristic gcd packing into 32 bits (or perhaps >>>>> 30) instead of 64. >>>>> >>>>> Bill. >>>>> >>>>> 2008/11/7 William Hart <ha...@ya...>: >>>>>> I decided to remove the test for correctness in the Heuristic GCD (which it actually can't do without) and see if that sped things up. It did: >>>>>> >>>>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd10.png >>>>>> >>>>>> The red on the bottom left turned blue. So it looks like it is the function fmpz_poly_divides which checks if the purported GCD actually divides the original polynomials is taking most of the time (like 80%). >>>>>> >>>>>> One way of doing a quick check is to pack the coefficients of the polynomials into large integers and divide. 
If the integer divides exactly one gets the quotient, which one can unpack into a polynomial and then if that quotient times the divisor is the dividend, one can prove that the polynomials divide exactly. >>>>>> >>>>>> Clearly the brown at the bottom of the graph on the left is because Magma is packing (both in the GCD and the divides function) into 32 bits instead of 64. >>>>>> >>>>>> So I think it is clear now what has to be implemented. >>>>>> >>>>>> I'm quite surprised that the fmpz_poly_divides_modular function was not sufficiently fast. It seems one can do a lot better if exact division is expected. >>>>>> >>>>>> Bill. >>>>>> >>>>>> --- On Fri, 11/7/08, Bill Hart <goo...@go...> wrote: >>>>>> >>>>>>> From: Bill Hart <goo...@go...> >>>>>>> Subject: [flint-devel] Heuristic gcd >>>>>>> To: "Development list for FLINT" <fli...@li...> >>>>>>> Date: Friday, November 7, 2008, 2:07 PM >>>>>>> I implemented the heuristic gcd for polynomials whose >>>>>>> bitsize is up to >>>>>>> 63 bits. The result looks like this: >>>>>>> >>>>>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd9.png >>>>>>> >>>>>>> Interestingly although it makes the bottom of the graph >>>>>>> nice and blue, >>>>>>> the big red patch on the left is still there. >>>>>>> >>>>>>> I think perhaps this is just a highly efficient euclidean >>>>>>> algorithm, >>>>>>> or perhaps a Lehmer GCD algorithm. >>>>>>> >>>>>>> The red on the right is nor due to FLINT. That is due to a >>>>>>> lack of >>>>>>> integer half-gcd in GMP. We're still working on fixing >>>>>>> that. >>>>>>> >>>>>>> Bill. >>>>>>> >>>>>>> ------------------------------------------------------------------------- >>>>>>> This SF.Net email is sponsored by the Moblin Your Move >>>>>>> Developer's challenge >>>>>>> Build the coolest Linux based applications with Moblin SDK >>>>>>> & win great prizes >>>>>>> Grand prize is a trip for two to an Open Source event >>>>>>> anywhere in the world >>>>>>> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >>>>>>> _______________________________________________ >>>>>>> flint-devel mailing list >>>>>>> fli...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------- >>>>>> This SF.Net email is sponsored by the Moblin Your Move Developer's challenge >>>>>> Build the coolest Linux based applications with Moblin SDK & win great prizes >>>>>> Grand prize is a trip for two to an Open Source event anywhere in the world >>>>>> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >>>>>> _______________________________________________ >>>>>> flint-devel mailing list >>>>>> fli...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>>>> >>>>> >>>> >>> >> > |
|
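For readers following this thread, here is a minimal, self-contained illustration (using GMP directly, not FLINT) of the heuristic GCD idea being discussed: pack each polynomial by evaluating at u = 2^64, take an integer gcd, and unpack the result as balanced base-u digits. It deliberately omits everything the messages above argue about -- the divisibility check, retrying with a larger u, content removal and choosing u from the coefficient sizes -- so it is a sketch of the idea, not a correct general algorithm.

```c
/* Heuristic GCD sketch on tiny polynomials; compile with -lgmp. */
#include <stdio.h>
#include <gmp.h>

/* evaluate f(u) with Horner's rule; f[i] is the coefficient of x^i */
static void eval_at(mpz_t res, const long *f, int len, const mpz_t u)
{
    mpz_set_si(res, f[len - 1]);
    for (int i = len - 2; i >= 0; i--)
    {
        mpz_mul(res, res, u);
        if (f[i] >= 0) mpz_add_ui(res, res, f[i]);
        else           mpz_sub_ui(res, res, -f[i]);
    }
}

/* unpack h into balanced base-u digits (destroys h); returns the length */
static int unpack(long *c, int maxlen, mpz_t h, const mpz_t u)
{
    mpz_t r, half;
    mpz_init(r);
    mpz_init(half);
    mpz_fdiv_q_2exp(half, u, 1);                 /* u/2, for symmetric remainders */
    int len = 0;
    while (mpz_sgn(h) != 0 && len < maxlen)
    {
        mpz_fdiv_r(r, h, u);                     /* 0 <= r < u                    */
        if (mpz_cmp(r, half) >= 0) mpz_sub(r, r, u);  /* shift into [-u/2, u/2)   */
        c[len++] = mpz_get_si(r);
        mpz_sub(h, h, r);
        mpz_divexact(h, h, u);
    }
    mpz_clear(r);
    mpz_clear(half);
    return len;
}

int main(void)
{
    /* f = (x + 3)(x - 2) = x^2 + x - 6,  g = (x + 3)(x + 1) = x^2 + 4x + 3 */
    long f[3] = {-6, 1, 1}, g[3] = {3, 4, 1}, d[8];
    mpz_t u, fe, ge, he;
    mpz_inits(u, fe, ge, he, NULL);
    mpz_ui_pow_ui(u, 2, 64);                     /* evaluate at u = 2^64 */

    eval_at(fe, f, 3, u);
    eval_at(ge, g, 3, u);
    mpz_gcd(he, fe, ge);                         /* integer gcd of the packed values */

    int len = unpack(d, 8, he, u);
    printf("candidate gcd:");
    for (int i = 0; i < len; i++)
        printf(" %ld*x^%d%s", d[i], i, i + 1 < len ? " +" : "");
    printf("\n");                                /* expect 3 + x, i.e. x + 3 */

    mpz_clears(u, fe, ge, he, NULL);
    return 0;
}
```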
From: Bill H. <goo...@go...> - 2008-11-11 00:08:49
|
After heating up one of the cores on sage.math for a while I have come to the conclusion that Magma is not cheating. It gets the right answers for these small GCD problems. But I am mystified as to how? I recall when timing the division code in FLINT that in that region Magma was much more efficient. Something seems to reek of inefficiency when two tests for divisibility dominate the time to do a GCD. Perhaps I just haven't thought of an efficient enough way to test divisibility. I would have thought the multi-modular technique should be fast enough, but it doesn't seem to be. Bill. 2008/11/10 Bill Hart <goo...@go...>: > It turns out that Schoenhage already computed a bound that works for > the heuristic gcd unconditionally. > > If A is the max norm of the polynomials f and g and n is the maximum > degree then one can evaluate at an integer u with > > u > 4*(n+1)^n*A^(2n) > > and not have to run divides checks. > > If Magma were doing this in the red region of > http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd.png > then we would have n = 32 and A = 2^16 say. > > The u would be 4*(33^32)*2^(16*64) > (2^64)^18, i.e. u would have to > be bigger than 18 limbs!! So this proven bound is totally useless. > > If Magma is not doing divisions in this region, I would question > whether the results would be correct. No doubt they do something else > clever. > > Bill. > > 2008/11/9 Bill Hart <goo...@go...>: >> Hmm, I think I have gotten ahead of myself with my argument about the >> Mignotte bound. >> >> If f(x) = a(x)*b(x) and g(x) = a(x)*d(x) with b(x) and d(x) coprime. >> >> when I substitute x = 2^64 then certainly a(2^64) will be a common >> factor of f(2^64) and g(2^64). >> >> But it is also possible that even though b(x) and d(x) are coprime >> that b(2^64) and d(2^64) are not. >> >> But this means that we can't simply rely on the Mignotte bound for the >> size of the gcd of f(x) and g(x). Certainly if 2^64 is bigger than the >> Mignotte bound times the gcd of b(2^64) and d(2^64) then we are ok. >> The polynomial gcd that we are after will just end up being multiplied >> by some integer constant which we can remove by computing the content. >> >> But there is no way that I can see to bound gcd of b(2^64) and d(2^64). >> >> Thus it seems essential to me that one actually check that the >> purported poly gcd actually divides f(x) and g(x). >> >> But the time it takes to do this "divides" test is too great. I don't >> have a solution to this problem at present. >> >> Bill. >> >> 2008/11/9 Bill Hart <goo...@go...>: >>> I coded up the packing code to pack into 32 bits instead of 64. There >>> were a couple of problems with this. When the Mignotte bound is bigger >>> than 32 bits then a divides gets carried out, which for small gcd's >>> dominates the time. Thus sometimes it is better to pack into 64 bits >>> instead. >>> >>> Once that is sorted out one soon realises that just computing the >>> Mignotte bound takes too long. So I implemented an approximation which >>> is sometimes slightly too large but isn't too bad. >>> >>> Once these issues are fixed, here is the result: >>> >>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd11.png >>> >>> Now the brown in the bottom left must be where packing is done to the >>> bit. Here the Minotte bound is usually less than the number of bits >>> that one is packing to. >>> >>> Bill. 
>>> >>> 2008/11/8 Bill Hart <goo...@go...>: >>>> I finally realised that if the number of bits one is packing into is >>>> bigger than the Mignotte bound, then one does not need to do a divides >>>> test at all. This is all that Magma is doing. >>>> >>>> Here is why: >>>> >>>> To do a divides, if one is under the Mignotte bound, one would pack A >>>> and B and pack the purported GCD and see if the packed A and B divide >>>> the packed GCD. This would be done by simply doing a multiprecision >>>> division and seeing if there is a remainder. If not, they divide >>>> precisely and the quotient is calculated exactly. >>>> >>>> But in computing the heuristic gcd, we already have the packed >>>> versions of A and B, as the gcd of these divides both of them by >>>> definition. But the packed version of G is precisely the gcd of the >>>> packed version of A and B. >>>> >>>> I've now inserted code to compute the Mignotte bound and omit the >>>> divides test in this case. The function passes the test code and I am >>>> now doing a profile run against Magma again: >>>> >>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd10.png >>>> >>>> The light blue layer at the bottom left is due to the fact that FLINT >>>> packs into 64 bits not 30 or 32 as Magma surely does. This means that >>>> we more easily exceed the Mignotte bound so that the divides test is >>>> not required. >>>> >>>> The browny red region at the bottom left is surely due to the fact >>>> that Magma is doing a heuristic gcd packing into 32 bits (or perhaps >>>> 30) instead of 64. >>>> >>>> Bill. >>>> >>>> 2008/11/7 William Hart <ha...@ya...>: >>>>> I decided to remove the test for correctness in the Heuristic GCD (which it actually can't do without) and see if that sped things up. It did: >>>>> >>>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd10.png >>>>> >>>>> The red on the bottom left turned blue. So it looks like it is the function fmpz_poly_divides which checks if the purported GCD actually divides the original polynomials is taking most of the time (like 80%). >>>>> >>>>> One way of doing a quick check is to pack the coefficients of the polynomials into large integers and divide. If the integer divides exactly one gets the quotient, which one can unpack into a polynomial and then if that quotient times the divisor is the dividend, one can prove that the polynomials divide exactly. >>>>> >>>>> Clearly the brown at the bottom of the graph on the left is because Magma is packing (both in the GCD and the divides function) into 32 bits instead of 64. >>>>> >>>>> So I think it is clear now what has to be implemented. >>>>> >>>>> I'm quite surprised that the fmpz_poly_divides_modular function was not sufficiently fast. It seems one can do a lot better if exact division is expected. >>>>> >>>>> Bill. >>>>> >>>>> --- On Fri, 11/7/08, Bill Hart <goo...@go...> wrote: >>>>> >>>>>> From: Bill Hart <goo...@go...> >>>>>> Subject: [flint-devel] Heuristic gcd >>>>>> To: "Development list for FLINT" <fli...@li...> >>>>>> Date: Friday, November 7, 2008, 2:07 PM >>>>>> I implemented the heuristic gcd for polynomials whose >>>>>> bitsize is up to >>>>>> 63 bits. The result looks like this: >>>>>> >>>>>> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd9.png >>>>>> >>>>>> Interestingly although it makes the bottom of the graph >>>>>> nice and blue, >>>>>> the big red patch on the left is still there. 
>>>>>> >>>>>> I think perhaps this is just a highly efficient euclidean >>>>>> algorithm, >>>>>> or perhaps a Lehmer GCD algorithm. >>>>>> >>>>>> The red on the right is nor due to FLINT. That is due to a >>>>>> lack of >>>>>> integer half-gcd in GMP. We're still working on fixing >>>>>> that. >>>>>> >>>>>> Bill. >>>>>> >>>>>> ------------------------------------------------------------------------- >>>>>> This SF.Net email is sponsored by the Moblin Your Move >>>>>> Developer's challenge >>>>>> Build the coolest Linux based applications with Moblin SDK >>>>>> & win great prizes >>>>>> Grand prize is a trip for two to an Open Source event >>>>>> anywhere in the world >>>>>> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >>>>>> _______________________________________________ >>>>>> flint-devel mailing list >>>>>> fli...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------------------------- >>>>> This SF.Net email is sponsored by the Moblin Your Move Developer's challenge >>>>> Build the coolest Linux based applications with Moblin SDK & win great prizes >>>>> Grand prize is a trip for two to an Open Source event anywhere in the world >>>>> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >>>>> _______________________________________________ >>>>> flint-devel mailing list >>>>> fli...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>>> >>>> >>> >> > |
|
From: Bill H. <goo...@go...> - 2008-11-10 22:31:34
|
It turns out that Schoenhage already computed a bound that works for the heuristic gcd unconditionally. If A is the max norm of the polynomials f and g and n is the maximum degree then one can evaluate at an integer u with u > 4*(n+1)^n*A^(2n) and not have to run divides checks. If Magma were doing this in the red region of http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd.png then we would have n = 32 and A = 2^16 say. The u would be 4*(33^32)*2^(16*64) > (2^64)^18, i.e. u would have to be bigger than 18 limbs!! So this proven bound is totally useless. If Magma is not doing divisions in this region, I would question whether the results would be correct. No doubt they do something else clever. Bill. 2008/11/9 Bill Hart <goo...@go...>: > Hmm, I think I have gotten ahead of myself with my argument about the > Mignotte bound. > > If f(x) = a(x)*b(x) and g(x) = a(x)*d(x) with b(x) and d(x) coprime. > > when I substitute x = 2^64 then certainly a(2^64) will be a common > factor of f(2^64) and g(2^64). > > But it is also possible that even though b(x) and d(x) are coprime > that b(2^64) and d(2^64) are not. > > But this means that we can't simply rely on the Mignotte bound for the > size of the gcd of f(x) and g(x). Certainly if 2^64 is bigger than the > Mignotte bound times the gcd of b(2^64) and d(2^64) then we are ok. > The polynomial gcd that we are after will just end up being multiplied > by some integer constant which we can remove by computing the content. > > But there is no way that I can see to bound gcd of b(2^64) and d(2^64). > > Thus it seems essential to me that one actually check that the > purported poly gcd actually divides f(x) and g(x). > > But the time it takes to do this "divides" test is too great. I don't > have a solution to this problem at present. > > Bill. > > 2008/11/9 Bill Hart <goo...@go...>: >> I coded up the packing code to pack into 32 bits instead of 64. There >> were a couple of problems with this. When the Mignotte bound is bigger >> than 32 bits then a divides gets carried out, which for small gcd's >> dominates the time. Thus sometimes it is better to pack into 64 bits >> instead. >> >> Once that is sorted out one soon realises that just computing the >> Mignotte bound takes too long. So I implemented an approximation which >> is sometimes slightly too large but isn't too bad. >> >> Once these issues are fixed, here is the result: >> >> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd11.png >> >> Now the brown in the bottom left must be where packing is done to the >> bit. Here the Minotte bound is usually less than the number of bits >> that one is packing to. >> >> Bill. >> >> 2008/11/8 Bill Hart <goo...@go...>: >>> I finally realised that if the number of bits one is packing into is >>> bigger than the Mignotte bound, then one does not need to do a divides >>> test at all. This is all that Magma is doing. >>> >>> Here is why: >>> >>> To do a divides, if one is under the Mignotte bound, one would pack A >>> and B and pack the purported GCD and see if the packed A and B divide >>> the packed GCD. This would be done by simply doing a multiprecision >>> division and seeing if there is a remainder. If not, they divide >>> precisely and the quotient is calculated exactly. >>> >>> But in computing the heuristic gcd, we already have the packed >>> versions of A and B, as the gcd of these divides both of them by >>> definition. But the packed version of G is precisely the gcd of the >>> packed version of A and B. 
|
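For illustration only, here is a quick plain-Python check of the size of that bound for the n = 32, A = 2^16 example above (this is not FLINT code):

    # Back-of-the-envelope check of the Schoenhage bound u > 4*(n+1)^n * A^(2n)
    # for the parameters mentioned above (n = 32, A = 2^16).
    n = 32
    A = 2**16
    bound = 4 * (n + 1)**n * A**(2 * n)
    print(bound.bit_length())        # about 1188 bits
    print(bound.bit_length() // 64)  # 18 full 64-bit limbs, so u must exceed an 18-limb number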
|
From: Bill H. <goo...@go...> - 2008-11-09 20:54:34
|
Hmm, I think I have gotten ahead of myself with my argument about the Mignotte bound.

Suppose f(x) = a(x)*b(x) and g(x) = a(x)*d(x) with b(x) and d(x) coprime. When I substitute x = 2^64, certainly a(2^64) will be a common factor of f(2^64) and g(2^64). But it is also possible that, even though b(x) and d(x) are coprime, b(2^64) and d(2^64) are not.

This means that we can't simply rely on the Mignotte bound for the size of the gcd of f(x) and g(x). Certainly if 2^64 is bigger than the Mignotte bound times the gcd of b(2^64) and d(2^64) then we are ok: the polynomial gcd that we are after will just end up being multiplied by some integer constant, which we can remove by computing the content. But there is no way that I can see to bound the gcd of b(2^64) and d(2^64).

Thus it seems essential to me that one actually check that the purported polynomial gcd divides f(x) and g(x). But the time it takes to do this "divides" test is too great. I don't have a solution to this problem at present.

Bill.
|
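A tiny illustration of the problem described above, in plain Python with deliberately simple example polynomials (not the actual cases arising in FLINT): b(x) = x and d(x) = x + 2 are coprime as polynomials, yet their values at the evaluation point share a factor.

    from math import gcd
    x = 2**64
    b, d = x, x + 2
    print(gcd(b, d))   # 2, even though gcd(b(x), d(x)) = 1 as polynomials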
|
From: Bill H. <goo...@go...> - 2008-11-09 00:19:46
|
I coded up the packing code to pack into 32 bits instead of 64. There were a couple of problems with this. When the Mignotte bound is bigger than 32 bits then a divides check gets carried out, which for small gcds dominates the time. Thus sometimes it is better to pack into 64 bits instead.

Once that is sorted out, one soon realises that just computing the Mignotte bound takes too long. So I implemented an approximation which is sometimes slightly too large but isn't too bad.

Once these issues are fixed, here is the result:

http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd11.png

Now the brown in the bottom left must be where packing is done to the bit. Here the Mignotte bound is usually less than the number of bits that one is packing to.

Bill.
|
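As a rough sketch of the "packing" (Kronecker substitution) being discussed: evaluate the polynomials at x = 2^B, take the integer gcd, then unpack base 2^B. This plain-Python toy assumes small non-negative coefficients and ignores the sign and coefficient-growth handling that the real FLINT code needs; it is illustrative only, with B = 32 or 64 as in the discussion.

    from math import gcd

    def pack(coeffs, B):
        # evaluate the polynomial at x = 2^B
        return sum(c << (B * i) for i, c in enumerate(coeffs))

    def unpack(n, B):
        # read the coefficients back off in base 2^B
        mask = (1 << B) - 1
        out = []
        while n:
            out.append(n & mask)
            n >>= B
        return out

    f = [2, 4, 2]    # 2 + 4x + 2x^2 = 2(x + 1)^2
    g = [2, 2]       # 2 + 2x       = 2(x + 1)
    B = 64
    print(unpack(gcd(pack(f, B), pack(g, B)), B))   # [2, 2], i.e. 2 + 2x = 2(x + 1)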
|
From: Bill H. <goo...@go...> - 2008-11-08 15:58:48
|
I finally realised that if the number of bits one is packing into is bigger than the Mignotte bound, then one does not need to do a divides test at all. This is all that Magma is doing. Here is why:

To do a divides test when one is under the Mignotte bound, one would pack A and B, pack the purported GCD, and see if the packed GCD divides the packed A and B. This would be done by simply doing a multiprecision division and seeing if there is a remainder. If there is none, the division is exact and the quotient is calculated exactly.

But in computing the heuristic gcd, we already have the packed versions of A and B, and the gcd of these divides both of them by definition. But the packed version of G is precisely the gcd of the packed versions of A and B.

I've now inserted code to compute the Mignotte bound and omit the divides test in this case. The function passes the test code and I am now doing a profile run against Magma again:

http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd10.png

The light blue layer at the bottom left is due to the fact that FLINT packs into 64 bits, not 30 or 32 as Magma surely does. This means that we more easily exceed the Mignotte bound, so that the divides test is not required.

The brownish red region at the bottom left is surely due to the fact that Magma is doing a heuristic gcd packing into 32 bits (or perhaps 30) instead of 64.

Bill.
|
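A minimal sketch of the packed "divides" test described above, again in plain Python rather than FLINT code: pack f and the purported gcd G at x = 2^B and check that the multiprecision division leaves no remainder. As the rest of the thread makes clear, exact integer division is only a necessary condition; the quotient still has to be unpacked and multiplied back to prove exact polynomial division.

    def packed_divides(f, G, B):
        # f, G are coefficient lists of non-negative coefficients
        pf = sum(c << (B * i) for i, c in enumerate(f))
        pG = sum(c << (B * i) for i, c in enumerate(G))
        q, r = divmod(pf, pG)          # one multiprecision division
        return r == 0, q               # caller unpacks q and verifies q(x)*G(x) == f(x)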
|
From: William H. <ha...@ya...> - 2008-11-07 17:03:54
|
I decided to remove the test for correctness in the Heuristic GCD (which it actually can't do without) and see if that sped things up. It did:

http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd10.png

The red on the bottom left turned blue. So it looks like it is the function fmpz_poly_divides, which checks that the purported GCD actually divides the original polynomials, that is taking most of the time (like 80%).

One way of doing a quick check is to pack the coefficients of the polynomials into large integers and divide. If the integer divides exactly, one gets the quotient, which one can unpack into a polynomial; and then if that quotient times the divisor is the dividend, one can prove that the polynomials divide exactly.

Clearly the brown at the bottom of the graph on the left is because Magma is packing (both in the GCD and the divides function) into 32 bits instead of 64. So I think it is clear now what has to be implemented.

I'm quite surprised that the fmpz_poly_divides_modular function was not sufficiently fast. It seems one can do a lot better if exact division is expected.

Bill.

--- On Fri, 11/7/08, Bill Hart <goo...@go...> wrote:

> From: Bill Hart <goo...@go...>
> Subject: [flint-devel] Heuristic gcd
> To: "Development list for FLINT" <fli...@li...>
> Date: Friday, November 7, 2008, 2:07 PM
>
> I implemented the heuristic gcd for polynomials whose bitsize is up to
> 63 bits. The result looks like this:
>
> http://sage.math.washington.edu/home/wbhart/flint-trunk/graphing/gcd9.png
>
> Interestingly, although it makes the bottom of the graph nice and blue,
> the big red patch on the left is still there.
>
> I think perhaps this is just a highly efficient Euclidean algorithm,
> or perhaps a Lehmer GCD algorithm.
>
> The red on the right is not due to FLINT. That is due to a lack of
> integer half-gcd in GMP. We're still working on fixing that.
>
> Bill.
|
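A toy illustration, in plain Python, of why the quotient has to be unpacked and multiplied back, as described above: exact division of the packed integers does not by itself imply exact polynomial division. With a deliberately tiny packing width B = 5, G(x) = 3 does not divide f(x) = x + 1 in Z[x], yet the packed values divide exactly because of carries; only the multiply-back comparison catches this.

    B = 5
    f = [1, 1]          # f(x) = x + 1
    G = [3]             # G(x) = 3
    pf = sum(c << (B * i) for i, c in enumerate(f))   # 2^5 + 1 = 33
    pG = 3
    q, r = divmod(pf, pG)
    print(r)            # 0: the integers divide exactly (33 = 3 * 11) ...
    print(q)            # ... but the unpacked "quotient" is the constant 11, and
                        # 11 * G(x) = 33 != x + 1, so the divides test correctly fails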