| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2007 |  |  |  | 118 | 140 | 56 | 86 | 4 | 1 |  |  |  |
| 2008 |  |  |  |  |  |  | 94 | 86 |  | 3 | 18 | 27 |
| 2009 | 15 | 15 | 27 | 2 | 1 | 6 | 10 | 4 | 1 |  |  |  |
| 2010 |  |  |  |  | 1 | 2 |  |  |  |  |  |  |
| 2015 |  |  |  |  |  |  |  |  |  |  | 2 |  |
From: Bill H. <goo...@go...> - 2015-11-23 21:10:55

Thanks for the update. We'll have to fix the test suite now, since I'm out of ideas. I'm afraid it might take some time. Anyway, obviously if you don't use LLL everything should be fine. At least the next Flint release is some way off, so we have time to sort it out before then.

Bill.

On 23 November 2015 at 21:42, Dann Corbit <DC...@co...> wrote:
> [full failure report omitted; see Dann C.'s message of 2015-11-23 20:55 below]
From: Dann C. <DC...@co...> - 2015-11-23 20:55:12

lll_mpf_with_removal....
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.

FAIL (randintrel): basis matrices not equal!

[[12 -62 -52 50 -32 -70 59 -82 -124 0 -18]
 [-106 -8 32 4 -102 -111 -63 9 -50 -54 -37]
 [59 107 -18 140 92 83 13 -6 -80 -77 0]
 [-52 73 -16 -31 80 -188 -1 47 -57 -25 -6]
 [38 9 92 -49 3 132 -68 -44 -47 37 92]
 [-163 -59 -70 54 44 -26 -60 43 85 63 52]
 [134 136 -67 29 -18 74 -11 146 -24 53 87]
 [58 -32 75 119 50 94 64 151 -18 -51 126]
 [34 34 -78 -153 -47 73 80 -143 173 -155 32]
 [-58 64 2 -134 -159 139 225 69 -125 204 -118]]

[[8212269030201924110349129 5109674321441250958607765 1494481417875653033370726 -3126612440029326740763271 51429522477502228951637 3668135059057144339165983 -1222207475347700051823571 -82 -124 0 -18]
 [17733886805887267123209168 -560817913328917789156032 -164028448303425332568953 343164780003218788620738 -5644703686555121748298 -402600189408710965161118 134144722904015859924009 9 -50 -54 -37]
 [19051207754227777873488604 373878608885945193185394 109352298868950221851030 -228776520002145858915019 3763135791036748585363 268400126272473976370620 -89429815269343907473720 -6 -80 -77 0]
 [16249366647065029671069516 -2928715769606570672178631 -856593007806776738397760 1792082740016809230647187 -29477897029787862255024 -2102467655801046144520393 700533552943193932914667 47 -57 -25 -6]
 [-12712607907404507225851694 2741776465163598076439377 801916858372301627665048 -1677694480015736301019069 27596329134269489271044 1968267592664809156941446 -655818645308521979257954 -44 -47 37 92]
 [-18886497648316966397456468 -2679463363682607209657108 -783691475227476590922561 1639565060015378656777141 -26969139835763363403391 -1923534238286063495126026 640913676096964661378769 43 85 63 52]
 [4375423774225741805660394 -9097712816224666341565513 -2660905939144455400668437 5566895320052215904776669 -91569637581894213385283 -6531069739296866750041668 2176125504887368384344694 146 -24 53 87]
 [13285581601783327693783123 -9409278323629620668734582 -2752032854868580586711161 5757542420054004121559664 -94705584074424838242400 -6754736511190595063461373 2250650350945154972781690 151 -18 -51 126]
 [-21960865427861023718955812 8910773511781693745946670 2606229789709980290427972 -5452507060051142975981787 89688069686375837813100 6396869676160629763057969 -2131410597252696431846262 -143 173 -155 32]
 [9888124790847165658516615 -4299604002188369711180535 -1257551436992927552303989 2630929980024677380049973 -43276061596922611342621 -3086601452133450723399302 1028442875597454921359986 69 -125 204 -118]]

../Makefile.subdirs:91: recipe for target '../build/fmpz_lll/test/t-lll_mpf_with_removal.exe_RUN' failed
make[1]: *** [../build/fmpz_lll/test/t-lll_mpf_with_removal.exe_RUN] Error 3
make[1]: Leaving directory '/f/math/flint2-trunk/fmpz_lll'
Makefile:221: recipe for target 'check' failed
make: *** [check] Error 2
From: Bill H. <goo...@go...> - 2010-06-16 18:02:25

Hi Sam,

Actually this is only used by the profiler, which only gets built when you do make profile. At this point the makefile doesn't distinguish the cases as well as it should. In fact, when doing make check, it shouldn't need profiler.h at all (I think). We could probably do more to support MinGW.

Thanks for testing it out!

Bill.

On 16 June 2010 18:37, Sam Rawlins <sam...@gm...> wrote:
> [full quote omitted; see Sam R.'s message of 2010-06-16 17:37 below]
From: Sam R. <sam...@gm...> - 2010-06-16 17:37:14

Hi All,

I'm not sure if there is a lot of testing or support for FLINT on Windows. The documentation does mention various ways to compile on Windows, so maybe it's marginally supported.

In any case, I compiled the libraries successfully using MinGW (MSYS 1.0.14, GCC 4.4.1, GMP 5.0.1). However, `make check' fails because profiler.h #include's <sys/resource.h>. I know that there have been little to no plans to include this POSIX library in MinGW, so it looked like there might not be support for running `make check' on Windows.

However, on a hunch I simply commented out #include <sys/resource.h> and tried `make check' again. Ta da! "All tests passed."

Is this just an overzealous #include, or is resource.h really used elsewhere?

--
Sam Rawlins
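A common way to handle this kind of portability problem, sketched here as an assumption rather than the change FLINT actually adopted, is to guard the POSIX include so the header still compiles on platforms without <sys/resource.h>:

```c
/* Hypothetical guard for profiler.h: only pull in the POSIX resource
 * API on platforms that actually provide it. MinGW defines __MINGW32__,
 * so the include is skipped there and `make check' can still build. */
#if !defined(__MINGW32__)
#include <sys/resource.h>
#endif
```

This keeps the profiler functional on POSIX systems while avoiding the missing header on MinGW, at the cost of needing a Windows timing fallback if the profiler itself is ever built there.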
From: Bill H. <goo...@go...> - 2009-09-02 19:00:24

Hi all,

I finally figured out how to set up a GIT repo on my new machine and publish it to the web:

http://selmer.warwick.ac.uk/gitweb/FLINT-Lite.git

(See below if you are curious what this new FLINT-Lite thing is.) You can browse the commits online or even check out a tarball snapshot of the latest.

If you want to clone this repo to your local machine and play around (assuming you have git installed):

git clone git://selmer.warwick.ac.uk/FLINT-Lite.git FLINT-Lite

Don't forget to make a branch straight away to play with:

git branch mybranchname

To add files to your repo:

git add filename
git commit

When you make changes and want to commit them to your repo:

git commit -a

If you want to keep up with changes made in my git repo:

git pull

----------------------

FLINT-Lite: what is it?
================

FLINT-Lite is a new version of FLINT with some new features, but in a nascent stage of development. It contains the following:

* longlong.h - assembly code for x86_64 only
* profiler - for profiling code
* ulong_extras - provides functions for dealing with unsigned integers. All functions deal with the full 64 bits (or, where noted in special cases, a full 53 bits).

Advantages of FLINT-Lite:
================

* Every function is in a different file, and every module in a different directory.
* Every function has a corresponding .txt file in the doc directory.
* Every function has a corresponding test function in the test directory.
* Adding a new function to FLINT-Lite is dead easy:
  - drop the new file in the directory corresponding to the module it belongs to, e.g. /ulong_extras/mynewfn.c
  - add the new prototype to the relevant .h file in the top level source directory, e.g. ulong_extras.h
  - drop a test function into the test directory, e.g. /ulong_extras/test/t-mynewfn.c (you can just cut and paste from existing test functions to make this easy)
  That's it, you're done. You will find make library, make all and make check will just Do The Right Thing without you having to even look at the makefile.
* Every function is short, simple and documented, including sketches of mathematical proofs where necessary.
* FLINT-Lite is open for contribution!!

Disadvantages of FLINT-Lite:
================

* So far it only works on x86_64 (some changes to longlong.h, the test code and a couple of functions would be required to port it to other platforms).
* Obviously not all functions from FLINT are in FLINT-Lite yet.
* It only builds with gcc at present.

Feel like contributing to fix some of these things? Also look at the todo.txt in the top level source directory for things that haven't been done yet, but which might be interesting.

I'm about to move on to a new module, F_mpz, which will be similar to the F_mpz module in FLINT 1. If anyone would like to work on ulong_extras or another module, they are dead easy to add: simply move the Makefile from ulong_extras into a new directory for the new module, add the doc, profile and test subdirs, and make all the obvious changes to the top level source directory, such as listing your new directory in the list of build directories there.

I've also created a new google group: flint-devel

http://groups.google.co.uk/group/flint-devel?hl=en

Feel free to join!

Bill.
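As an illustration of the add-a-function workflow described above, a hypothetical ulong_extras function might look like the following (the name n_addmod and its layout are invented for this sketch; they are not actual FLINT-Lite code):

```c
/* ulong_extras/n_addmod.c (hypothetical): addition mod n for full
 * 64-bit operands. The conditional form avoids computing a + b
 * directly, so the intermediate value can never overflow.
 * Assumes a and b are already reduced mod n. */
unsigned long n_addmod(unsigned long a, unsigned long b, unsigned long n)
{
    return (a >= n - b) ? a - (n - b) : a + b;
}
```

A matching /ulong_extras/test/t-n_addmod.c would then exercise it on random inputs, following the pattern of the existing test functions.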
From: Bill H. <goo...@go...> - 2009-08-08 19:57:28

2009/8/8 Burcin Erocal <bu...@er...>:
> On Sat, 8 Aug 2009 16:52:36 +0100
> Bill Hart <goo...@go...> wrote:
>> [earlier text omitted; see Bill H.'s message of 2009-08-08 15:52 below]
>> I guess your patch is against FLINT 1.4 rather than trunk, is that
>> correct?
>
> It's against trunk. I don't think the new functions are available in
> the 1.4 branch.

Great.

> Will the new release be from trunk? or in other words, will it have
> these functions?

Yes, it will.

> Thank you.
>
> Burcin

Bill.
From: Burcin E. <bu...@er...> - 2009-08-08 16:24:12

On Sat, 8 Aug 2009 16:52:36 +0100 Bill Hart <goo...@go...> wrote:
> [earlier text omitted; see Bill H.'s message of 2009-08-08 15:52 below]
> I guess your patch is against FLINT 1.4 rather than trunk, is that
> correct?

It's against trunk. I don't think the new functions are available in the 1.4 branch.

Will the new release be from trunk? Or, in other words, will it have these functions?

Thank you.

Burcin
From: Bill H. <goo...@go...> - 2009-08-08 15:52:47

Hi Burcin,

Thanks for the patch. I'll still review all the code going into FLINT. But it is then easier for you to commit directly.

FLINT is currently like a heart patient, laid out on the table in pieces in the middle of surgery. But in a week or so I hope to have it back together and sewn up for the release. Most of the changes only affect FLINT 2.0 code, which is not being released yet. But it affects the F_mpz layer, which I have released.

The way you've done the test function is fine, by the way. It is what I would do.

I guess your patch is against FLINT 1.4 rather than trunk, is that correct?

Bill.

2009/8/8 Burcin Erocal <bu...@er...>:
> [full quote omitted; see Burcin E.'s message of 2009-08-08 10:13 below]
From: Burcin E. <bu...@er...> - 2009-08-08 10:13:26

Hi Bill,

On Mon, 27 Jul 2009 21:32:05 +0100 Bill Hart <goo...@go...> wrote:
> [testing advice omitted; see Bill H.'s message of 2009-07-27 20:32 below]

After another long delay, I managed to write the test functions and the documentation. See the attached patch.

For testing, I just used the already existing functions in the library. For example, in order to test fmpz_poly_evaluate_mod I call fmpz_poly_evaluate then reduce the result.

I would be happy to revise the patch if you have any suggestions.

> I don't actually have HG, but I can accept patches in git, svn or
> standard patch format.
>
> If it would help, I can easily give you svn access, if you have a
> sourceforge account.

I like to know that my patches are being reviewed. Committing directly to SVN would eliminate that step. Though if you think this process is taking up too much time, and the possibility that I mess up the repository is low enough, my sf user name is burcin.

Thanks.

Burcin
From: Bill H. <goo...@go...> - 2009-07-27 20:32:16

Hi Burcin,

Great that everything works. The easiest way to write a test function is to cut and paste an existing one. It shouldn't be too hard to convert random polys to zmod_polys, then use the evaluate or composition functions there, then compare the results. In some cases it is hard to do randomised testing because it is difficult to find a simpler way of writing the function, so that you have a trusted implementation to compare against. In such cases I tend to look for some algebraic way of testing the result, e.g. test addition by subtracting the addend back off and see if you get the same thing back. I rarely check individual cases, but try to implement the randomised testing in such a way as to pick up corner cases.

I don't actually have HG, but I can accept patches in git, svn or standard patch format.

If it would help, I can easily give you svn access, if you have a sourceforge account.

Bill.

On 27/07/2009, Burcin Erocal <bu...@er...> wrote:
> [full quote omitted; see Burcin E.'s message of 2009-07-27 20:00 below]
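The algebraic round-trip test Bill describes (add, then subtract back off, and check you recover the original) can be sketched in a few lines of C. Plain coefficient arrays stand in for FLINT's polynomial types here; this illustrates the testing pattern only, not actual FLINT test code:

```c
#include <stdlib.h>

#define LEN 16

/* r = a + b and r = a - b on coefficient arrays of fixed length. */
static void poly_add(long *r, const long *a, const long *b, int len)
{
    for (int i = 0; i < len; i++)
        r[i] = a[i] + b[i];
}

static void poly_sub(long *r, const long *a, const long *b, int len)
{
    for (int i = 0; i < len; i++)
        r[i] = a[i] - b[i];
}

/* Randomised round-trip test: for many random (a, b), check that
 * (a + b) - b == a. Returns 1 on success, 0 on the first failure. */
static int test_add_sub_roundtrip(void)
{
    long a[LEN], b[LEN], s[LEN], r[LEN];

    for (int trial = 0; trial < 1000; trial++)
    {
        for (int i = 0; i < LEN; i++)
        {
            a[i] = rand() % 2001 - 1000;  /* coefficients in [-1000, 1000] */
            b[i] = rand() % 2001 - 1000;
        }

        poly_add(s, a, b, LEN);  /* function under test      */
        poly_sub(r, s, b, LEN);  /* trusted "inverse" of add */

        for (int i = 0; i < LEN; i++)
            if (r[i] != a[i])
                return 0;        /* round trip failed */
    }
    return 1;
}
```

The appeal of this style is that the check is purely algebraic: no independent reference implementation of addition is needed, yet random inputs still sweep over corner cases.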
From: Burcin E. <bu...@er...> - 2009-07-27 20:00:39

Hi Bill,

On Thu, 16 Jul 2009 03:52:21 +0100 Bill Hart <goo...@go...> wrote:
> [full quote omitted; see Bill H.'s message of 2009-07-16 02:52 below]

I finally found the time to test the new functions. Everything seems to work fine, so I'm happy. :)

I can try to write test code and/or documentation tomorrow. Is there any convention or instructions on writing tests in FLINT? Should I just pick random polynomials and values, evaluate things using the functions and "by hand", then check the results?

> I'm sure you have your git repo hooked up to the FLINT svn, but if
> not, here is the FLINT trunk:
>
> http://flint.svn.sourceforge.net/svnroot/flint/trunk

I am not really an advanced git/hg/svn guy. I might try doing this with hg tomorrow, or just generate patches the old-fashioned way.

> One other major improvement would be divide and conquer for
> zmod_poly_compose. This would actually be asymptotically faster I
> believe. In the case of zmod_poly_evaluate, I believe Horner's method
> is asymptotically fast. I don't know if translate would be faster with
> a divide and conquer strategy.

The paper I referred to earlier,

http://doi.acm.org/10.1145/258726.258745

suggests that method D, "Paterson & Stockmeyer's (1973) method", is asymptotically faster than divide & conquer or Horner's. I am sure I'll have bigger problems elsewhere if I have to translate polynomials of degree ~512, though.

Many thanks.

Burcin
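For reference in this evaluation-method discussion, Horner's rule itself is a one-line loop. The sketch below evaluates a polynomial mod a word-sized modulus; it is an illustration, not FLINT's zmod_poly_evaluate, and it assumes p is small enough (say p < 2^32) that r * x + coeffs[i] cannot overflow 64 bits:

```c
#include <stdint.h>

/* Horner evaluation of p(x) mod a word-sized modulus.
 * coeffs[i] is the coefficient of x^i; len is the number of
 * coefficients. Uses len - 1 multiplications, processing the
 * coefficients from the leading term down. */
static uint64_t horner_eval_mod(const uint64_t *coeffs, int len,
                                uint64_t x, uint64_t p)
{
    uint64_t r = 0;
    for (int i = len - 1; i >= 0; i--)
        r = (r * x + coeffs[i]) % p;
    return r;
}
```

Paterson-Stockmeyer trades some of these scalar multiplications for polynomial ones, which pays off when coefficient arithmetic is expensive relative to multiplication by x.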
From: Burcin E. <bu...@er...> - 2009-07-22 10:57:46

Hi Bill,

On Tue, 21 Jul 2009 20:50:31 +0100 Bill Hart <goo...@go...> wrote:
> I recently added the functions Burcin Erocal submitted to FLINT, but
> without test code or documentation at this stage. I also added
> zmod_poly_evaluate (asymptotically fast) and zmod_poly_compose (not
> asymptotically fast) with test code and documentation.

Thank you very much for these. I'll try them out as soon as possible. Unfortunately, this will probably be on Sunday.

> I *thought* I posted to the FLINT devel list to announce that at the
> time, but after not hearing from Burcin since then, I decided to check
> for the message and couldn't find the outgoing email. It certainly
> never went to Burcin. So that would explain why I never heard back.

I saw that e-mail; it is on the "FLINT 1.4 Released!" thread. I'm sorry for not replying earlier. These days are very busy with conferences, and I can't spend much time in front of the computer. I forgot to add it to a to-do list when I read your message, and forgot about it completely afterwards.

> Anyhow, I'll wait for Burcin to let me know whether these functions
> work for him. Then the new FLINT release is ready to go. It will be
> FLINT 1.4.1 as it is only a minor update really.
>
> In the mean time, it's possible to get the code from the development
> svn at:
>
> http://flint.svn.sourceforge.net/svnroot/flint/trunk
>
> Feel free to add test code and/or documentation Burcin, if you think
> that would be useful. By the way, is your git repo public? If so I can
> clone it and you can submit any patches that way if it is easier.

I'll see what I can do. I'll have more time next week during the *-combinat workshop.

Thank you.

Burcin
From: Bill H. <goo...@go...> - 2009-07-21 19:50:40

I recently added the functions Burcin Erocal submitted to FLINT, but without test code or documentation at this stage. I also added zmod_poly_evaluate (asymptotically fast) and zmod_poly_compose (not asymptotically fast) with test code and documentation.

I *thought* I posted to the FLINT devel list to announce that at the time, but after not hearing from Burcin since then, I decided to check for the message and couldn't find the outgoing email. It certainly never went to Burcin. So that would explain why I never heard back.

Anyhow, I'll wait for Burcin to let me know whether these functions work for him. Then the new FLINT release is ready to go. It will be FLINT 1.4.1, as it is only a minor update really.

In the meantime, it's possible to get the code from the development svn at:

http://flint.svn.sourceforge.net/svnroot/flint/trunk

Feel free to add test code and/or documentation, Burcin, if you think that would be useful. By the way, is your git repo public? If so, I can clone it and you can submit any patches that way if it is easier.

Bill.
From: Bill H. <goo...@go...> - 2009-07-16 02:57:05

I forgot to mention, I implemented zmod_poly_compose and zmod_poly_evaluate (supplied with test code and docs).

One small inefficiency in the evaluate_mod code is that we reduce multiple coefficients mod p, but don't use the same precomputed inverse in each case. We really need an fmpz_mod_ui_precomp or something like that. Otherwise I think what you have is fairly efficient.

One other major improvement would be divide and conquer for zmod_poly_compose. This would actually be asymptotically faster, I believe. In the case of zmod_poly_evaluate, I believe Horner's method is asymptotically fast. I don't know if translate would be faster with a divide and conquer strategy.

Bill.

2009/7/16 Bill Hart <goo...@go...>:
> [full quote omitted; see Bill H.'s message of 2009-07-16 02:52 below]
From: Bill H. <goo...@go...> - 2009-07-16 02:52:29

Hi Burcin,

I've added the functions you supplied to fmpz_poly.c/h in trunk. At your leisure, could you verify they actually work with your code? I've made some minor changes to them, but I haven't added test code or documentation (feel free to supply it if you feel that the functions could be of more general use).

Once you are happy with them, I'll make the release tarball.

I'm sure you have your git repo hooked up to the FLINT svn, but if not, here is the FLINT trunk:

http://flint.svn.sourceforge.net/svnroot/flint/trunk

Bill.

2009/7/6 Bill Hart <goo...@go...>:
> [full quote omitted; see Bill H.'s message of 2009-07-06 below]
|
From: Bill H. <goo...@go...> - 2009-07-06 13:18:25
|
Hi Burcin, indeed I missed the patch attached to that flint-devel post. I think I can fairly easily add the stuff you implemented in cython, directly to FLINT. I'll do that in the next few days and issue FLINT 1.4.1 when I have done it. Bill. 2009/7/6 Burcin Erocal <bu...@er...>: > Hi Bill, > > On Mon, 6 Jul 2009 03:16:52 +0100 > Bill Hart <goo...@go...> wrote: > >> I've just released FLINT 1.4. Get it at http://www.flintlib.org/ > <snip release info> >> There should be a FLINT 1.5 before FLINT 2.0 comes out towards the end >> of this year. In particular I need to address: >> > <snip> >> * We don't have zmod_poly_compose or zmod_poly_evaluate yet > > Can I also request fmpz_poly_evaluate_mod? > > I posted a patch implementing this here, but I guess it got lost in > the bug discussion: > > http://sourceforge.net/mailarchive/forum.php?thread_name=20090331192030.7634e082%40karr&forum_name=flint-devel > > As I wrote in that message, I also need to "translate" polynomials > (i.e., compose with x+k for some constant k) very often. I am not sure > if it's worth introducing new functions for this purpose, since they > can be handled as a special case of composition. Though I still need > fmpz_poly_compose_mod in that case. > > Here is a cython file with some of these functions: > > http://sage.math.washington.edu/home/burcin/sage_extras.pyx > > > This is a part of a patch to Sage to add fast nullspace computation > over QQ(x). It would be great if this functionality was added to FLINT > before I submit the patch to Sage. :) > > > Thanks. > > Burcin > > ------------------------------------------------------------------------------ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
|
From: Burcin E. <bu...@er...> - 2009-07-06 12:11:10
|
Hi Bill, On Mon, 6 Jul 2009 03:16:52 +0100 Bill Hart <goo...@go...> wrote: > I've just released FLINT 1.4. Get it at http://www.flintlib.org/ <snip release info> > There should be a FLINT 1.5 before FLINT 2.0 comes out towards the end > of this year. In particular I need to address: > <snip> > * We don't have zmod_poly_compose or zmod_poly_evaluate yet Can I also request fmpz_poly_evaluate_mod? I posted a patch implementing this here, but I guess it got lost in the bug discussion: http://sourceforge.net/mailarchive/forum.php?thread_name=20090331192030.7634e082%40karr&forum_name=flint-devel As I wrote in that message, I also need to "translate" polynomials (i.e., compose with x+k for some constant k) very often. I am not sure if it's worth introducing new functions for this purpose, since they can be handled as a special case of composition. Though I still need fmpz_poly_compose_mod in that case. Here is a cython file with some of these functions: http://sage.math.washington.edu/home/burcin/sage_extras.pyx This is a part of a patch to Sage to add fast nullspace computation over QQ(x). It would be great if this functionality was added to FLINT before I submit the patch to Sage. :) Thanks. Burcin |
|
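The two operations Burcin asks for above — evaluating an integer polynomial modulo n, and "translating" p(X) to p(X + k) as a special case of composition — both come down to Horner's rule. Here is a small illustrative Python sketch of the underlying algorithms (this is not the fmpz_poly API; the function names are invented for the example, and coefficient lists are low degree first):

```python
def evaluate_mod(coeffs, x, n):
    """Evaluate p(x) mod n by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % n
    return acc

def translate(coeffs, k):
    """Coefficients of p(X + k), by running Horner's rule symbolically:
    repeatedly multiply the partial result by (X + k) and add the next
    coefficient."""
    res = []
    for c in reversed(coeffs):
        res = [0] + res                # multiply current result by X ...
        for i in range(1, len(res)):
            res[i - 1] += res[i] * k   # ... plus k times the current result
        res[0] += c                    # add the next coefficient
    return res
```

For example, translate([1, 0, 1], 2) gives [5, 4, 1], i.e. (X + 2)^2 + 1 = X^2 + 4X + 5.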
From: Bill H. <goo...@go...> - 2009-07-06 02:22:32
|
I've just released FLINT 1.4. Get it at http://www.flintlib.org/ This release contains a large number of *****speedups*****. Note that the zmod_poly_gcd, xgcd, gcd_invert and resultant speedups also speed up the associated fmpz_poly functions. The gcd_invert, resultant and xgcd speedups are asymptotically fast and practically much faster. The gcd functions are much faster than they were. Here is a graph comparing zmod_poly_gcd with Magma: http://sage.math.washington.edu/home/wbhart/zpgcd8.png And here is one comparing fmpz_poly_gcd with Magma (note the scale is a factor of 20 for the brightest points!!): http://sage.math.washington.edu/home/wbhart/gcd.png There are also some *****critical bug fixes****** in this release. Users are advised to update to the latest version. Here is a summary of the changes and additions in this release: * Sped up zmod_poly division in case where operands are the same length * Sped up zmod_poly division in case where operands have lengths differing by 1 * Fixed a bug in zmod_poly_gcd for polynomials of zero length * Sped up zmod_poly_gcd considerably (both euclidean and half gcd) * Sped up zmod_poly_gcd_invert and zmod_poly_xgcd considerably * Made zmod_poly_gcd_invert and zmod_poly_xgcd asymptotically fast * Made zmod_poly_resultant asymptotically fast * Added optimised zmod_poly_rem function * Fixed a divide by zero bug in zmod_poly_factor_berlekamp * Added test code for z_factor_tinyQS and z_factor_HOLF * Fixed many bugs in the z_factor code, tinyQS and mpQS * Corrected numerous typos in the documentation and added missing descriptions * Added F_mpz_cmp function * Added documentation to the manual for the new F_mpz module As usual I have tested on 64 and 32 bit machines, including Cygwin and valgrinded the new code. There should be a FLINT 1.5 before FLINT 2.0 comes out towards the end of this year. 
In particular I need to address: * The zmod_poly_factor function is still waaaay slower than Magma - probably due to Magma having fast linear algebra over Z/pZ, including Strassen multiplication * The fmpz_poly_divrem function is not asymptotically fast (there is still a log factor which can be removed by using Newton iteration) - though we still beat Magma everywhere * The fmpz_poly_pseudo_rem function is not asymptotically fast at all; it is quadratic time * There is no fmpz_poly_rem function * We don't have zmod_poly_compose or zmod_poly_evaluate yet * The heuristic fmpz_poly_gcd tricks need to be propagated to xgcd, resultant and invmod functions and even the division functions * We don't have a modular division algorithm yet Enjoy!! Bill. |
|
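For context on the divrem item in the TODO list above: the log factor is removed by precomputing the inverse of the (reversed) divisor as a truncated power series via Newton iteration, after which division reduces to a couple of multiplications. A minimal Python sketch of the Newton step g <- 2g - f*g^2 with precision doubling (illustrative only; this is not FLINT code, and the helper names are invented):

```python
def poly_mul(a, b):
    """Schoolbook product of coefficient lists (low degree first)."""
    if not a or not b:
        return []
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def inv_series(f, n):
    """g with f*g = 1 mod X^n by Newton iteration; assumes f[0] == 1."""
    assert f and f[0] == 1
    g = [1]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = poly_mul(f[:prec], g)[:prec]   # f*g to the current precision
        t = poly_mul(g, fg)[:prec]          # g*(f*g)
        g = [2 * (g[i] if i < len(g) else 0) - (t[i] if i < len(t) else 0)
             for i in range(prec)]          # g <- 2g - f*g^2
    return g[:n]
```

Each round doubles the number of correct coefficients, so the total cost is dominated by the multiplications at the final precision; e.g. inv_series([1, 1], 5) returns [1, -1, 1, -1, 1], the expansion of 1/(1 + X).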
From: Bill H. <goo...@go...> - 2009-07-02 17:52:44
|
This is now my favourite graph: http://sage.math.washington.edu/home/wbhart/gcd.png It is the new Z[x] GCD graph, FLINT vs Magma. The thing that makes it my favourite: the bright blue points are where FLINT is 20 (TWENTY) times faster than Magma. Bill. 2009/6/30 Bill Hart <goo...@go...>: > I've now made it so that all the gcd functions normalise the gcd so > that it is monic.This includes the xgcd function. > > I also now have a new xgcd function which has a tuned cutoff between > the existing xgcd_euclidean and the new xgcd_hgcd functions. > > I decided to see how all this work has affected the comparison between > the heuristic gcd and modular gcd in the Z[x] module. Here is a graph > showing the comparison: > > http://sage.math.washington.edu/home/wbhart/gcdheumod.png > > So the new Z/pZ[x] gcd code causes the modular gcd to win over a much > greater range than before - here is the existing tuning: > > if ((max_length <= 80) && (max_limbs <= 16)) > { > if (fmpz_poly_gcd_heuristic(res, poly1, poly2, 0, 0)) > return; > > if (max_length <= 1) > { > fmpz_poly_gcd_modular(res, poly1, poly2, 0, 0); > return; > } > > if ((max_length <= 4) || ((max_limbs <= 1) && (max_length <= 12))) > { > fmpz_poly_gcd_subresultant(res, poly1, poly2); > return; > } > > fmpz_poly_gcd_modular(res, poly1, poly2, 0, 0); > return; > } > > if (max_length <= 6) > { > fmpz_poly_gcd_subresultant(res, poly1, poly2); > return; > } > > if (max_limbs > 16) > { > fmpz_poly_gcd_modular(res, poly1, poly2, 0, 0); > return; > } > > Otherwise heuristic gcd was tried. In other words, for length > 80 it > was all heuristic gcd before. Now that is clearly not the optimal. > > Of course the Z[x] gcd graph comparing Magma with FLINT was already > almost all bright blue, so new tuning will put FLINT even further > ahead of Magma. > > Bill. 
> > > 2009/6/29 Bill Hart <goo...@go...>: >> I found another trick to save some time at the top level of the half >> gcd: the matrix does not need to be computed (this doesn't apply to >> xgcd, only to gcd). >> >> Here is the new graph: >> >> http://sage.math.washington.edu/home/wbhart/zpgcd7.png >> >> I also coded up asymptotically fast half gcd style xgcd. Despite >> having done this already for MPIR, it took me ages to figure out >> because of subtle differences in the half gcd algorithm used in FLINT. >> >> Now I need to think about normalisation. I think the only >> normalisation I am going to do is to make the gcd monic. This helps >> when using it for multimodular stuff in the Z[x] module. I'll probably >> also do the same for the xgcd code >> >> Bill. >> >> 2009/6/28 Bill Hart <goo...@go...>: >>> I think the remaining red dots on the left were simply due to poor >>> compilation. I wound back the optimisation to -O2 and statically >>> linked the libraries and there was a noticeable speedup. >>> >>> Here is the new graph: >>> >>> http://sage.math.washington.edu/home/wbhart/zpgcd6.png >>> >>> The red dots on the second line up on the right are probably due to >>> the modulus being 2 at random. Magma has exceptionally fast GF2 code, >>> and for polynomials that large, only a single timing may be done. If >>> the modulus happens to be 2 for Magma and 3 for FLINT at that point, >>> then magma will win hands down. >>> >>> One improvement I could make is to improve cache performance by doing >>> some of the half gcd iteration in-place. But I think this would make a >>> very small difference so I don't know if I can be bothered with it or >>> not. >>> >>> Bill. >>> >>> 2009/6/27 William Hart <ha...@ya...>: >>>> >>>> I found a way of speeding up the half gcd. >>>> >>>> Basically when one performs the half gcd step, one normally produces a matrix which keeps track of the steps undertaken so far when dealing with the top half of the coefficients. 
Later one then applies this matrix to the _full_ original polynomials. >>>> >>>> However, one can save part of this polynomial by matrix computation by making use of some of the information computed along the way when computing the matrix in the first place. >>>> >>>> This trick probably saves about a quarter of the time. >>>> >>>> I also managed to figure out what the red dots were on the right hand side of the graph. These were due to magma having an unfair advantage in the timing code. Basically I was choosing a single modulus for each point on the graph in magma. However for FLINT I was changing the modulus every so often. What was happening is that occasionally Magma would get a really small modulus at random and FLINT would have times averaged over a whole pile of moduli. Fixing this made the red dots go away mostly. >>>> >>>> Here is the new graph: >>>> >>>> http://sage.math.washington.edu/home/wbhart/zpgcd5.png >>>> >>>> There are still some red dots on the left of the diagram. I've no idea what these are. I checked the Magma timings and indeed Magma has exceptionally fast times in that region, the times going up immediately above and below those points on the graph!! I did check that Magma seems to return the correct results at these points, so it is not some kind of bug I don't believe. >>>> >>>> There is also a slight darkening of the blue as the diagram goes across to the right. But this is to be expected as most of the time complexity is in the multiplication code, which exhibits a similar darkening to the right. >>>> >>>> It is still the case that Magma wins hands down for modulus = 2 and quite often for modulus = 3. >>>> >>>> Bill. 
>>>> >>>> --- On Thu, 6/25/09, Bill Hart <goo...@go...> wrote: >>>> >>>>> From: Bill Hart <goo...@go...> >>>>> Subject: [flint-devel] Polynomial GCD over Z/pZ >>>>> To: "Development list for FLINT" <fli...@li...> >>>>> Date: Thursday, June 25, 2009, 10:25 AM >>>>> This week I've been working on the >>>>> speed of polynomial GCD over Z/pZ. >>>>> After quite a bit of work, here is a before an after >>>>> comparison (each >>>>> graph compares FLINT with Magma): >>>>> >>>>> Before: >>>>> >>>>> http://sage.math.washington.edu/home/wbhart/zpgcd.png >>>>> >>>>> After: >>>>> >>>>> http://sage.math.washington.edu/home/wbhart/zpgcd3.png >>>>> >>>>> Analysis: >>>>> ------------- >>>>> >>>>> * Timings were done on an AMD K8 Opteron with a very recent >>>>> Magma >>>>> >>>>> * Note the problem is GCD(a*b, a*c) where a, b, and c all >>>>> have a >>>>> modulus with the given number of bits and the given length >>>>> (axes are >>>>> logarithmic) >>>>> >>>>> * There are only two algorithms used: Euclidean algorithm >>>>> and half-gcd >>>>> algorithm. For moduli over 8 bits the cutoff is length 168, >>>>> and for 8 >>>>> or fewer bits the cutoff is 246, however, note that these >>>>> cutoffs >>>>> occur on the graph at half those lengths due to the fact >>>>> that we are >>>>> taking GCD(a*b, a*c) where a, b and c all have the given >>>>> length, i.e. >>>>> a*b and a*c have about twice the given length. >>>>> >>>>> * The main improvements in the last week were: >>>>> >>>>> -special code to deal with the GF2 case >>>>> >>>>> - special code to deal with division of >>>>> equal length polynomials >>>>> and polynomials whose lengths differ by 1 (an idea given to >>>>> me by >>>>> Bernard Parisse). I have implemented both rem and divrem >>>>> functions in >>>>> both cases and use them in appropriate places in both the >>>>> euclidean >>>>> and half-gcd cases. 
>>>>> >>>>> * The red line along the bottom is almost certainly because >>>>> Magma has >>>>> special code for GF2 (and apparently GF3). The bottom line >>>>> is modulus >>>>> = nextprime(random(1 bit)) = 2 always, the next line up is >>>>> modulus = >>>>> nextprime(random(2 bits)) = 2 or 3. I would have to merge >>>>> GF2X to beat >>>>> Magma here (I already have special code for the GF2 case, >>>>> but the >>>>> format in which the polynomials is stored is simply 1 limb >>>>> per >>>>> coefficient). >>>>> >>>>> * I have no idea why there is so much red on the right side >>>>> of the >>>>> diagram. This is in the half-gcd region. The asymptotics of >>>>> the >>>>> half-gcd are O(n log n^2 log log n). The only other things >>>>> used are: >>>>> >>>>> - multiplication : see >>>>> http://sage.math.washington.edu/home/wbhart/zpmul.png >>>>> (no real >>>>> problems there, courtesy of zn_poly) >>>>> >>>>> - division : see >>>>> http://sage.math.washington.edu/home/wbhart/zpdiv.png >>>>> (definitely no >>>>> problems there, we also use some zn_poly code in the >>>>> division code) >>>>> >>>>> - strassen multiplication, but only for 2x2 matrices >>>>> (I use it) >>>>> >>>>> - the half-gcd algorithm itself, which doesn't yield >>>>> too much >>>>> opportunity for optimisation that I know of >>>>> >>>>> So it is mystifying how Magma can be winning here. Any >>>>> ideas? >>>>> >>>>> * I have no idea why there is a brown blob at about >>>>> log_length = 6. >>>>> Overheads? I doesn't appear to be a tuning issue. However I >>>>> don't know >>>>> of any other algorithms for this problem. >>>>> >>>>> * The dark blue patch to the left of the diagram is just >>>>> due to >>>>> overheads. I spent a couple of days just working on that >>>>> patch. >>>>> >>>>> * The red dots at about log_length = 11 are just timing >>>>> irregularities, they do not occur on every graph. >>>>> >>>>> * Above 30 bits we always win, as Magma uses multiprecision >>>>> code in >>>>> this region. 
FLINT goes all the way out to moduli of 63 >>>>> bits on 64 bit >>>>> machines. >>>>> >>>>> Bill. >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> _______________________________________________ >>>>> flint-devel mailing list >>>>> fli...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>>> >>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> _______________________________________________ >>>> flint-devel mailing list >>>> fli...@li... >>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>> >>> >> > |
|
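As a reference point for the Euclidean/half-gcd cutoff discussed in this thread, the Euclidean side with the monic normalisation Bill describes looks roughly like this in Python (a textbook sketch over Z/pZ with p prime, coefficient lists low degree first — not the actual zmod_poly implementation):

```python
def strip(a):
    """Remove trailing zero coefficients (in place) and return the list."""
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_rem(a, b, p):
    """Remainder of a modulo b over Z/pZ; p prime, b nonzero."""
    a = strip([c % p for c in a])
    inv = pow(b[-1] % p, p - 2, p)     # inverse of the lead coeff (Fermat)
    while len(a) >= len(b):
        q = a[-1] * inv % p
        s = len(a) - len(b)
        for i, c in enumerate(b):
            a[s + i] = (a[s + i] - q * c) % p
        strip(a)
    return a

def gcd_zp(a, b, p):
    """Monic gcd over Z/pZ by the classical Euclidean algorithm."""
    a = strip([c % p for c in a])
    b = strip([c % p for c in b])
    while b:
        a, b = b, poly_rem(a, b, p)
    if a:                              # normalise so the gcd is monic
        inv = pow(a[-1], p - 2, p)
        a = [c * inv % p for c in a]
    return a
```

For example, gcd_zp([4, 0, 1], [4, 1], 5) returns [4, 1], the monic gcd X + 4 (= X - 1) of X^2 - 1 and X - 1 mod 5.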
From: Bill H. <goo...@go...> - 2009-06-30 03:51:55
|
I've now made it so that all the gcd functions normalise the gcd so that it is monic. This includes the xgcd function. I also now have a new xgcd function which has a tuned cutoff between the existing xgcd_euclidean and the new xgcd_hgcd functions. I decided to see how all this work has affected the comparison between the heuristic gcd and modular gcd in the Z[x] module. Here is a graph showing the comparison: http://sage.math.washington.edu/home/wbhart/gcdheumod.png So the new Z/pZ[x] gcd code causes the modular gcd to win over a much greater range than before - here is the existing tuning: if ((max_length <= 80) && (max_limbs <= 16)) { if (fmpz_poly_gcd_heuristic(res, poly1, poly2, 0, 0)) return; if (max_length <= 1) { fmpz_poly_gcd_modular(res, poly1, poly2, 0, 0); return; } if ((max_length <= 4) || ((max_limbs <= 1) && (max_length <= 12))) { fmpz_poly_gcd_subresultant(res, poly1, poly2); return; } fmpz_poly_gcd_modular(res, poly1, poly2, 0, 0); return; } if (max_length <= 6) { fmpz_poly_gcd_subresultant(res, poly1, poly2); return; } if (max_limbs > 16) { fmpz_poly_gcd_modular(res, poly1, poly2, 0, 0); return; } Otherwise heuristic gcd was tried. In other words, for length > 80 it was all heuristic gcd before. Now that is clearly not optimal. Of course the Z[x] gcd graph comparing Magma with FLINT was already almost all bright blue, so new tuning will put FLINT even further ahead of Magma. Bill. 2009/6/29 Bill Hart <goo...@go...>: > I found another trick to save some time at the top level of the half > gcd: the matrix does not need to be computed (this doesn't apply to > xgcd, only to gcd). > > Here is the new graph: > > http://sage.math.washington.edu/home/wbhart/zpgcd7.png > > I also coded up asymptotically fast half gcd style xgcd. Despite > having done this already for MPIR, it took me ages to figure out > because of subtle differences in the half gcd algorithm used in FLINT. > > Now I need to think about normalisation. 
I think the only > normalisation I am going to do is to make the gcd monic. This helps > when using it for multimodular stuff in the Z[x] module. I'll probably > also do the same for the xgcd code > > Bill. > > 2009/6/28 Bill Hart <goo...@go...>: >> I think the remaining red dots on the left were simply due to poor >> compilation. I wound back the optimisation to -O2 and statically >> linked the libraries and there was a noticeable speedup. >> >> Here is the new graph: >> >> http://sage.math.washington.edu/home/wbhart/zpgcd6.png >> >> The red dots on the second line up on the right are probably due to >> the modulus being 2 at random. Magma has exceptionally fast GF2 code, >> and for polynomials that large, only a single timing may be done. If >> the modulus happens to be 2 for Magma and 3 for FLINT at that point, >> then magma will win hands down. >> >> One improvement I could make is to improve cache performance by doing >> some of the half gcd iteration in-place. But I think this would make a >> very small difference so I don't know if I can be bothered with it or >> not. >> >> Bill. >> >> 2009/6/27 William Hart <ha...@ya...>: >>> >>> I found a way of speeding up the half gcd. >>> >>> Basically when one performs the half gcd step, one normally produces a matrix which keeps track of the steps undertaken so far when dealing with the top half of the coefficients. Later one then applies this matrix to the _full_ original polynomials. >>> >>> However, one can save part of this polynomial by matrix computation by making use of some of the information computed along the way when computing the matrix in the first place. >>> >>> This trick probably saves about a quarter of the time. >>> >>> I also managed to figure out what the red dots were on the right hand side of the graph. These were due to magma having an unfair advantage in the timing code. Basically I was choosing a single modulus for each point on the graph in magma. 
However for FLINT I was changing the modulus every so often. What was happening is that occasionally Magma would get a really small modulus at random and FLINT would have times averaged over a whole pile of moduli. Fixing this made the red dots go away mostly. >>> >>> Here is the new graph: >>> >>> http://sage.math.washington.edu/home/wbhart/zpgcd5.png >>> >>> There are still some red dots on the left of the diagram. I've no idea what these are. I checked the Magma timings and indeed Magma has exceptionally fast times in that region, the times going up immediately above and below those points on the graph!! I did check that Magma seems to return the correct results at these points, so it is not some kind of bug I don't believe. >>> >>> There is also a slight darkening of the blue as the diagram goes across to the right. But this is to be expected as most of the time complexity is in the multiplication code, which exhibits a similar darkening to the right. >>> >>> It is still the case that Magma wins hands down for modulus = 2 and quite often for modulus = 3. >>> >>> Bill. >>> >>> --- On Thu, 6/25/09, Bill Hart <goo...@go...> wrote: >>> >>>> From: Bill Hart <goo...@go...> >>>> Subject: [flint-devel] Polynomial GCD over Z/pZ >>>> To: "Development list for FLINT" <fli...@li...> >>>> Date: Thursday, June 25, 2009, 10:25 AM >>>> This week I've been working on the >>>> speed of polynomial GCD over Z/pZ. 
>>>> After quite a bit of work, here is a before an after >>>> comparison (each >>>> graph compares FLINT with Magma): >>>> >>>> Before: >>>> >>>> http://sage.math.washington.edu/home/wbhart/zpgcd.png >>>> >>>> After: >>>> >>>> http://sage.math.washington.edu/home/wbhart/zpgcd3.png >>>> >>>> Analysis: >>>> ------------- >>>> >>>> * Timings were done on an AMD K8 Opteron with a very recent >>>> Magma >>>> >>>> * Note the problem is GCD(a*b, a*c) where a, b, and c all >>>> have a >>>> modulus with the given number of bits and the given length >>>> (axes are >>>> logarithmic) >>>> >>>> * There are only two algorithms used: Euclidean algorithm >>>> and half-gcd >>>> algorithm. For moduli over 8 bits the cutoff is length 168, >>>> and for 8 >>>> or fewer bits the cutoff is 246, however, note that these >>>> cutoffs >>>> occur on the graph at half those lengths due to the fact >>>> that we are >>>> taking GCD(a*b, a*c) where a, b and c all have the given >>>> length, i.e. >>>> a*b and a*c have about twice the given length. >>>> >>>> * The main improvements in the last week were: >>>> >>>> -special code to deal with the GF2 case >>>> >>>> - special code to deal with division of >>>> equal length polynomials >>>> and polynomials whose lengths differ by 1 (an idea given to >>>> me by >>>> Bernard Parisse). I have implemented both rem and divrem >>>> functions in >>>> both cases and use them in appropriate places in both the >>>> euclidean >>>> and half-gcd cases. >>>> >>>> * The red line along the bottom is almost certainly because >>>> Magma has >>>> special code for GF2 (and apparently GF3). The bottom line >>>> is modulus >>>> = nextprime(random(1 bit)) = 2 always, the next line up is >>>> modulus = >>>> nextprime(random(2 bits)) = 2 or 3. I would have to merge >>>> GF2X to beat >>>> Magma here (I already have special code for the GF2 case, >>>> but the >>>> format in which the polynomials is stored is simply 1 limb >>>> per >>>> coefficient). 
>>>> >>>> * I have no idea why there is so much red on the right side >>>> of the >>>> diagram. This is in the half-gcd region. The asymptotics of >>>> the >>>> half-gcd are O(n log n^2 log log n). The only other things >>>> used are: >>>> >>>> - multiplication : see >>>> http://sage.math.washington.edu/home/wbhart/zpmul.png >>>> (no real >>>> problems there, courtesy of zn_poly) >>>> >>>> - division : see >>>> http://sage.math.washington.edu/home/wbhart/zpdiv.png >>>> (definitely no >>>> problems there, we also use some zn_poly code in the >>>> division code) >>>> >>>> - strassen multiplication, but only for 2x2 matrices >>>> (I use it) >>>> >>>> - the half-gcd algorithm itself, which doesn't yield >>>> too much >>>> opportunity for optimisation that I know of >>>> >>>> So it is mystifying how Magma can be winning here. Any >>>> ideas? >>>> >>>> * I have no idea why there is a brown blob at about >>>> log_length = 6. >>>> Overheads? I doesn't appear to be a tuning issue. However I >>>> don't know >>>> of any other algorithms for this problem. >>>> >>>> * The dark blue patch to the left of the diagram is just >>>> due to >>>> overheads. I spent a couple of days just working on that >>>> patch. >>>> >>>> * The red dots at about log_length = 11 are just timing >>>> irregularities, they do not occur on every graph. >>>> >>>> * Above 30 bits we always win, as Magma uses multiprecision >>>> code in >>>> this region. FLINT goes all the way out to moduli of 63 >>>> bits on 64 bit >>>> machines. >>>> >>>> Bill. >>>> >>>> ------------------------------------------------------------------------------ >>>> _______________________________________________ >>>> flint-devel mailing list >>>> fli...@li... >>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>>> >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> _______________________________________________ >>> flint-devel mailing list >>> fli...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>> >> > |
|
From: Bill H. <goo...@go...> - 2009-06-29 15:32:22
|
I found another trick to save some time at the top level of the half gcd: the matrix does not need to be computed (this doesn't apply to xgcd, only to gcd). Here is the new graph: http://sage.math.washington.edu/home/wbhart/zpgcd7.png I also coded up asymptotically fast half gcd style xgcd. Despite having done this already for MPIR, it took me ages to figure out because of subtle differences in the half gcd algorithm used in FLINT. Now I need to think about normalisation. I think the only normalisation I am going to do is to make the gcd monic. This helps when using it for multimodular stuff in the Z[x] module. I'll probably also do the same for the xgcd code Bill. 2009/6/28 Bill Hart <goo...@go...>: > I think the remaining red dots on the left were simply due to poor > compilation. I wound back the optimisation to -O2 and statically > linked the libraries and there was a noticeable speedup. > > Here is the new graph: > > http://sage.math.washington.edu/home/wbhart/zpgcd6.png > > The red dots on the second line up on the right are probably due to > the modulus being 2 at random. Magma has exceptionally fast GF2 code, > and for polynomials that large, only a single timing may be done. If > the modulus happens to be 2 for Magma and 3 for FLINT at that point, > then magma will win hands down. > > One improvement I could make is to improve cache performance by doing > some of the half gcd iteration in-place. But I think this would make a > very small difference so I don't know if I can be bothered with it or > not. > > Bill. > > 2009/6/27 William Hart <ha...@ya...>: >> >> I found a way of speeding up the half gcd. >> >> Basically when one performs the half gcd step, one normally produces a matrix which keeps track of the steps undertaken so far when dealing with the top half of the coefficients. Later one then applies this matrix to the _full_ original polynomials. 
>> >> However, one can save part of this polynomial by matrix computation by making use of some of the information computed along the way when computing the matrix in the first place. >> >> This trick probably saves about a quarter of the time. >> >> I also managed to figure out what the red dots were on the right hand side of the graph. These were due to magma having an unfair advantage in the timing code. Basically I was choosing a single modulus for each point on the graph in magma. However for FLINT I was changing the modulus every so often. What was happening is that occasionally Magma would get a really small modulus at random and FLINT would have times averaged over a whole pile of moduli. Fixing this made the red dots go away mostly. >> >> Here is the new graph: >> >> http://sage.math.washington.edu/home/wbhart/zpgcd5.png >> >> There are still some red dots on the left of the diagram. I've no idea what these are. I checked the Magma timings and indeed Magma has exceptionally fast times in that region, the times going up immediately above and below those points on the graph!! I did check that Magma seems to return the correct results at these points, so it is not some kind of bug I don't believe. >> >> There is also a slight darkening of the blue as the diagram goes across to the right. But this is to be expected as most of the time complexity is in the multiplication code, which exhibits a similar darkening to the right. >> >> It is still the case that Magma wins hands down for modulus = 2 and quite often for modulus = 3. >> >> Bill. >> >> --- On Thu, 6/25/09, Bill Hart <goo...@go...> wrote: >> >>> From: Bill Hart <goo...@go...> >>> Subject: [flint-devel] Polynomial GCD over Z/pZ >>> To: "Development list for FLINT" <fli...@li...> >>> Date: Thursday, June 25, 2009, 10:25 AM >>> This week I've been working on the >>> speed of polynomial GCD over Z/pZ. 
>>> After quite a bit of work, here is a before an after >>> comparison (each >>> graph compares FLINT with Magma): >>> >>> Before: >>> >>> http://sage.math.washington.edu/home/wbhart/zpgcd.png >>> >>> After: >>> >>> http://sage.math.washington.edu/home/wbhart/zpgcd3.png >>> >>> Analysis: >>> ------------- >>> >>> * Timings were done on an AMD K8 Opteron with a very recent >>> Magma >>> >>> * Note the problem is GCD(a*b, a*c) where a, b, and c all >>> have a >>> modulus with the given number of bits and the given length >>> (axes are >>> logarithmic) >>> >>> * There are only two algorithms used: Euclidean algorithm >>> and half-gcd >>> algorithm. For moduli over 8 bits the cutoff is length 168, >>> and for 8 >>> or fewer bits the cutoff is 246, however, note that these >>> cutoffs >>> occur on the graph at half those lengths due to the fact >>> that we are >>> taking GCD(a*b, a*c) where a, b and c all have the given >>> length, i.e. >>> a*b and a*c have about twice the given length. >>> >>> * The main improvements in the last week were: >>> >>> -special code to deal with the GF2 case >>> >>> - special code to deal with division of >>> equal length polynomials >>> and polynomials whose lengths differ by 1 (an idea given to >>> me by >>> Bernard Parisse). I have implemented both rem and divrem >>> functions in >>> both cases and use them in appropriate places in both the >>> euclidean >>> and half-gcd cases. >>> >>> * The red line along the bottom is almost certainly because >>> Magma has >>> special code for GF2 (and apparently GF3). The bottom line >>> is modulus >>> = nextprime(random(1 bit)) = 2 always, the next line up is >>> modulus = >>> nextprime(random(2 bits)) = 2 or 3. I would have to merge >>> GF2X to beat >>> Magma here (I already have special code for the GF2 case, >>> but the >>> format in which the polynomials is stored is simply 1 limb >>> per >>> coefficient). 
>>> >>> * I have no idea why there is so much red on the right side >>> of the >>> diagram. This is in the half-gcd region. The asymptotics of >>> the >>> half-gcd are O(n log n^2 log log n). The only other things >>> used are: >>> >>> - multiplication : see >>> http://sage.math.washington.edu/home/wbhart/zpmul.png >>> (no real >>> problems there, courtesy of zn_poly) >>> >>> - division : see >>> http://sage.math.washington.edu/home/wbhart/zpdiv.png >>> (definitely no >>> problems there, we also use some zn_poly code in the >>> division code) >>> >>> - strassen multiplication, but only for 2x2 matrices >>> (I use it) >>> >>> - the half-gcd algorithm itself, which doesn't yield >>> too much >>> opportunity for optimisation that I know of >>> >>> So it is mystifying how Magma can be winning here. Any >>> ideas? >>> >>> * I have no idea why there is a brown blob at about >>> log_length = 6. >>> Overheads? I doesn't appear to be a tuning issue. However I >>> don't know >>> of any other algorithms for this problem. >>> >>> * The dark blue patch to the left of the diagram is just >>> due to >>> overheads. I spent a couple of days just working on that >>> patch. >>> >>> * The red dots at about log_length = 11 are just timing >>> irregularities, they do not occur on every graph. >>> >>> * Above 30 bits we always win, as Magma uses multiprecision >>> code in >>> this region. FLINT goes all the way out to moduli of 63 >>> bits on 64 bit >>> machines. >>> >>> Bill. >>> >>> ------------------------------------------------------------------------------ >>> _______________________________________________ >>> flint-devel mailing list >>> fli...@li... >>> https://lists.sourceforge.net/lists/listinfo/flint-devel >>> >> >> >> >> >> ------------------------------------------------------------------------------ >> _______________________________________________ >> flint-devel mailing list >> fli...@li... >> https://lists.sourceforge.net/lists/listinfo/flint-devel >> > |
|
From: Bill H. <goo...@go...> - 2009-06-28 02:09:41
|
I think the remaining red dots on the left were simply due to poor compilation. I wound back the optimisation to -O2 and statically linked the libraries, and there was a noticeable speedup. Here is the new graph:

http://sage.math.washington.edu/home/wbhart/zpgcd6.png

The red dots on the second line up on the right are probably due to the modulus being 2 at random. Magma has exceptionally fast GF2 code, and for polynomials that large only a single timing may be done. If the modulus happens to be 2 for Magma and 3 for FLINT at that point, then Magma will win hands down.

One improvement I could make is to improve cache performance by doing some of the half gcd iteration in-place. But I think this would make a very small difference, so I don't know if I can be bothered with it or not.

Bill.

2009/6/27 William Hart <ha...@ya...>:
> [rest of quoted message snipped; it appears in full in the 2009-06-27 post below]
|
|
From: William H. <ha...@ya...> - 2009-06-27 22:49:54
|
I found a way of speeding up the half gcd.

Basically, when one performs the half gcd step, one normally produces a matrix which keeps track of the steps undertaken so far when dealing with the top half of the coefficients. Later one then applies this matrix to the _full_ original polynomials. However, one can save part of this polynomial-by-matrix computation by making use of some of the information computed along the way when computing the matrix in the first place. This trick probably saves about a quarter of the time.

I also managed to figure out what the red dots were on the right hand side of the graph. These were due to Magma having an unfair advantage in the timing code: I was choosing a single modulus for each point on the graph in Magma, but for FLINT I was changing the modulus every so often. What was happening is that occasionally Magma would get a really small modulus at random, while FLINT's times were averaged over a whole pile of moduli. Fixing this made the red dots mostly go away.

Here is the new graph:

http://sage.math.washington.edu/home/wbhart/zpgcd5.png

There are still some red dots on the left of the diagram. I've no idea what these are. I checked the Magma timings, and indeed Magma has exceptionally fast times in that region, the times going up immediately above and below those points on the graph! I did check that Magma seems to return correct results at these points, so I don't believe it is some kind of bug.

There is also a slight darkening of the blue as the diagram goes across to the right, but this is to be expected, as most of the time is spent in the multiplication code, which exhibits a similar darkening to the right.

It is still the case that Magma wins hands down for modulus = 2, and quite often for modulus = 3.

Bill.
--- On Thu, 6/25/09, Bill Hart <goo...@go...> wrote:
> [quoted copy of the "Polynomial GCD over Z/pZ" post snipped; the original appears in full below] |
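The optimisation described in this post targets the step where half-gcd applies its 2x2 matrix of polynomials to the full original operands. As a minimal illustration of just that matrix-application step over Z/pZ, here is a sketch using naive list-based coefficient arithmetic. All names are illustrative; this is not FLINT's code, and real half-gcd implementations are far more involved.

```python
# Hedged sketch: applying a 2x2 half-gcd matrix [[m00, m01], [m10, m11]]
# to a polynomial pair (r0, r1) over Z/pZ.  Polynomials are coefficient
# lists, lowest degree first; [] is the zero polynomial.

def poly_trim(f):
    """Strip trailing zero coefficients."""
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_add(f, g, p):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return poly_trim([(a + b) % p for a, b in zip(f, g)])

def poly_mul(f, g, p):
    """Schoolbook multiplication mod p (zn_poly would be used in practice)."""
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return poly_trim(out)

def apply_hgcd_matrix(M, r0, r1, p):
    """(r0, r1) -> M * (r0, r1)^T, the 'matrix times full polynomials' step."""
    (m00, m01), (m10, m11) = M
    s0 = poly_add(poly_mul(m00, r0, p), poly_mul(m01, r1, p), p)
    s1 = poly_add(poly_mul(m10, r0, p), poly_mul(m11, r1, p), p)
    return s0, s1
```

For example, the single Euclidean step matrix [[0, 1], [1, -q]] sends (r0, r1) to (r1, r0 - q*r1); the trick in the post is to reuse intermediate data from building M so that less of this multiplication work is repeated.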
|
From: Bill H. <goo...@go...> - 2009-06-25 01:25:53
|
This week I've been working on the speed of polynomial GCD over Z/pZ. After quite a bit of work, here is a before and after comparison (each graph compares FLINT with Magma):

Before:

http://sage.math.washington.edu/home/wbhart/zpgcd.png

After:

http://sage.math.washington.edu/home/wbhart/zpgcd3.png

Analysis:
-------------

* Timings were done on an AMD K8 Opteron with a very recent Magma.

* The problem is GCD(a*b, a*c), where a, b and c all have the given length and the modulus has the given number of bits (axes are logarithmic).

* There are only two algorithms used: the Euclidean algorithm and the half-gcd algorithm. For moduli over 8 bits the cutoff is length 168, and for 8 or fewer bits it is 246. Note, however, that these cutoffs occur on the graph at half those lengths, because we are taking GCD(a*b, a*c) where a, b and c all have the given length, i.e. a*b and a*c have about twice the given length.

* The main improvements in the last week were:

  - special code to deal with the GF2 case;

  - special code to deal with division of equal length polynomials and polynomials whose lengths differ by 1 (an idea given to me by Bernard Parisse). I have implemented both rem and divrem functions in both cases and use them in appropriate places in both the Euclidean and half-gcd code.

* The red line along the bottom is almost certainly because Magma has special code for GF2 (and apparently GF3). The bottom line is modulus = nextprime(random(1 bit)) = 2 always; the next line up is modulus = nextprime(random(2 bits)) = 2 or 3. I would have to merge GF2X to beat Magma here (I already have special code for the GF2 case, but the format in which the polynomials are stored is simply 1 limb per coefficient).

* I have no idea why there is so much red on the right side of the diagram. This is in the half-gcd region. The asymptotics of the half-gcd are O(n log^2 n log log n). The only other things used are:

  - multiplication: see http://sage.math.washington.edu/home/wbhart/zpmul.png (no real problems there, courtesy of zn_poly);

  - division: see http://sage.math.washington.edu/home/wbhart/zpdiv.png (definitely no problems there; we also use some zn_poly code in the division code);

  - Strassen multiplication, but only for 2x2 matrices (I do use it);

  - the half-gcd algorithm itself, which doesn't yield too much opportunity for optimisation that I know of.

So it is mystifying how Magma can be winning here. Any ideas?

* I have no idea why there is a brown blob at about log_length = 6. Overheads? It doesn't appear to be a tuning issue, but I don't know of any other algorithms for this problem.

* The dark blue patch to the left of the diagram is just due to overheads. I spent a couple of days just working on that patch.

* The red dots at about log_length = 11 are just timing irregularities; they do not occur on every graph.

* Above 30 bits we always win, as Magma uses multiprecision code in this region. FLINT goes all the way out to moduli of 63 bits on 64-bit machines.

Bill.
|
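The benchmark problem above relies on the identity gcd(a*b, a*c) = a * gcd(b, c) (up to a unit), so with b and c coprime the answer is the monic multiple of a. A small self-contained sketch over Z/pZ with schoolbook arithmetic and a plain Euclidean algorithm illustrates this. This is illustrative only, not the FLINT or Magma code being timed; the production code switches to half-gcd past the cutoffs described in the post.

```python
# Hedged sketch of the benchmark identity gcd(a*b, a*c) = a*gcd(b, c)
# over Z/pZ, p prime.  Coefficient lists, lowest degree first.

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def pmul(f, g, p):
    """Schoolbook product mod p."""
    out = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return trim(out)

def prem(f, g, p):
    """Remainder of f by g mod p (g nonzero)."""
    f = f[:]
    inv = pow(g[-1], p - 2, p)          # Fermat inverse, valid since p is prime
    while len(f) >= len(g):
        q = (f[-1] * inv) % p
        shift = len(f) - len(g)
        for i, c in enumerate(g):
            f[i + shift] = (f[i + shift] - q * c) % p
        trim(f)
    return f

def pgcd(f, g, p):
    """Monic gcd via the Euclidean algorithm."""
    while g:
        f, g = g, prem(f, g, p)
    inv = pow(f[-1], p - 2, p)          # normalise to a monic polynomial
    return [(c * inv) % p for c in f]
```

With p = 7, a = x^2 + 1, b = x + 1 and c = x + 2 (b, c coprime), pgcd(pmul(a, b, 7), pmul(a, c, 7), 7) recovers a. Note also that a*b and a*c have roughly twice the length of a, which is why the cutoffs appear at half their true lengths on the graph.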
|
From: Bill H. <goo...@go...> - 2009-06-09 03:00:13
|
I've just released FLINT 1.3.0 at http://www.flintlib.org/.

This is a relatively minor release, but it fixes some issues on Cygwin which sage-cygwin currently patches, adds Tom Boothby's code for fast perfect power testing, my one line factor program and Tom's heuristic for a faster factoring routine for numbers less than one limb, etc.

I've also added the F_mpz module, which was specifically requested by Fredrik Johansson, though it is only documented in the F_mpz.h file at this stage, not the main pdf file, and I have not yet added an F_mpz_cmp function or the other functions Fredrik requested.

I've valgrinded all the changed code and tested the release on 32 and 64 bit machines, including Cygwin.

This is not a critical update for Sage, but it does supply features some people have been waiting for, including fast factorisation of unsigned longs. I also fixed the issues that David Harvey noted in the makefile and fmpz.c which Sage currently patches.

Bill.

2009/6/8 Nick Alexander <nca...@gm...>:
>
> Hi everyone,
>
> Craig Citro and I (Nick Alexander) are your humble release managers for this iteration, sage-4.0.2.
>
> The plan is to cut a release candidate Friday, June 12, probably evening PST. The release itself will be Saturday, June 13. Hopefully Minh will update the release notes and Harald the website shortly afterward.
>
> We are going to try to merge all the code on trac that currently has a positive review. However, there are a significant number of patches needing review; now is the time to campaign for reviews. Please email reviewers ASAP! You can also add them to the cc field on trac, possibly using the list on the front page of trac (http://trac.sagemath.org/sage_trac/) to help map names to trac usernames. If you can't think of a suitable reviewer for one of your patches, Craig and I will try to assist.
> We intend to upgrade the following software:
>
> * Singular
> * Scipy
> * Numpy
> * mpir
>
> There was some talk about merging the categories code of the sage-combinat group. Since much of this code is still under review, that won't happen this iteration. Sorry! Hopefully a push can be made to get this in for sage-4.1.
>
> Nick (for Craig)
>
> --~--~---------~--~----~------------~-------~--~----~
> You received this message because you are subscribed to the Google Groups "sage-release" group.
> To post to this group, send email to sag...@go...
> To unsubscribe from this group, send email to sag...@go...
> For more options, visit this group at http://groups.google.com/group/sage-release?hl=en
> -~----------~----~----~----~------~----~------~--~---
|
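The fast perfect power testing mentioned in the 1.3.0 announcement is Tom Boothby's code, which is not shown here. As a point of reference only, here is a naive baseline for the same task, using exact integer k-th roots by binary search; this is a hypothetical sketch, not the algorithm FLINT ships.

```python
# Hedged sketch: naive perfect power detection n = m**k (m >= 2, k >= 2).
# Boothby's FLINT code is much faster; this only shows the specification.

def iroot(n, k):
    """Largest r with r**k <= n, by binary search (n >= 1, k >= 1)."""
    lo, hi = 1, 1 << (n.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_perfect_power(n):
    """Try every exponent k up to log2(n); exact, but far from optimal."""
    if n < 4:
        return False
    for k in range(2, n.bit_length() + 1):
        if iroot(n, k) ** k == n:
            return True
    return False
```

For example, 64 = 2^6 and 27 = 3^3 are perfect powers, while 10 is not. A real implementation would restrict k to primes and use floating-point root estimates with exact correction, among other refinements.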