From: William H. <ha...@ya...> - 2008-08-08 12:32:51
|
--- D.C...@wa... wrote:
> The method for approximating real square roots works for p-adics as well;
> that is, if x is an approximation to sqrt(S), then 1/2(x + S/x) is a
> better approximation. We would get a list of rationals and the final step
> would be to convert the rational a/b into its p-adic expansion.

I'm not completely convinced it is right to work in Q then convert at the end. Won't the numerator and denominator get really huge doing this? Why can't one implement this as an algorithm with power series (where our power series are still represented as large integers)? This is actually an important detail, and perhaps you can convince me one way or another. By the way, how many additional terms of the p-adic series do you get each time with this algorithm?

> You may also like to take a look at this division algorithm I found a
> little while ago;
>
> http://en.wikipedia.org/wiki/P-adic_division_algorithm

That seems interesting. How is division normally performed?

> I haven't found this algorithm used any where else, but it does seem to
> work and may speed up the process of finding the p-adic expansion of a/b
> after performing a division as it offers an alternative method to compute
> 1/b. I can include this in my write up, but I don't think it would benefit
> us in terms of speed. I also don't know how easy it would be to implement
> into FLINT.
>
> (Its probably worth noting that I don't think either of these options
> would be available to us had we opted for a power series implementation.)
>
> The version of Hensels Lemma I've been reading actually places the
> restriction on the polynomial that the coefficients must be p-adic
> integers. From this restriction, the nth root must lie in Z_p according to
> the proof. So its probably not worth using this method for finding nth
> roots.

I see. So how does one find roots of polynomials in Qp then? That's an important question we have to figure out.

> With regards to the Bristol conference, unfortunately I'll be home by
> then. Depending on the time you'll be leaving Coventry train station, I
> could look at getting an early train from home and meeting you at Bristol,
> but my hometown is ~200 miles north of Coventry so this probably won't be
> possible unfortunately.
>
> Daniel Ellam
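As a point of reference for the question above about working mod p^k rather than in Q: a minimal stand-alone C sketch (not FLINT code) of the same iteration x -> (x + S/x)/2 carried out mod p^k, assuming p odd, S a unit square mod p, and moduli small enough for a machine word. Each step roughly doubles the number of correct p-adic digits, which also answers the question about how many terms are gained per iteration.

#include <stdio.h>

typedef unsigned long long ull;

/* inverse of a mod m by the extended Euclidean algorithm (gcd(a, m) = 1 assumed) */
static ull inv_mod(ull a, ull m)
{
    long long t = 0, newt = 1;
    long long r = (long long) m, newr = (long long) (a % m);
    while (newr != 0)
    {
        long long q = r / newr, tmp;
        tmp = t - q * newt; t = newt; newt = tmp;
        tmp = r - q * newr; r = newr; newr = tmp;
    }
    return (ull) (t < 0 ? t + (long long) m : t);
}

int main(void)
{
    ull p = 7, S = 2;             /* 2 is a square mod 7: 3^2 = 9 = 2 */
    ull pk = p, x = 3;            /* x^2 = S mod p, found by inspection */

    for (int i = 0; i < 3; i++)   /* precision: 7 -> 7^2 -> 7^4 -> 7^8 */
    {
        pk = pk * pk;                              /* now work mod p^(2m) */
        ull step = (x + S % pk * inv_mod(x, pk)) % pk;
        x = step * inv_mod(2, pk) % pk;            /* x <- (x + S/x) / 2  */
    }
    printf("sqrt(2) in Z_7 to 8 digits: x = %llu, x^2 mod 7^8 = %llu\n",
           x, x * x % pk);
    return 0;
}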
|
From: <D.C...@wa...> - 2008-08-08 12:02:37
|
The method for approximating real square roots works for p-adics as well; that is, if x is an approximation to sqrt(S), then 1/2(x + S/x) is a better approximation. We would get a list of rationals and the final step would be to convert the rational a/b into its p-adic expansion. You may also like to take a look at this division algorithm I found a little while ago; http://en.wikipedia.org/wiki/P-adic_division_algorithm I haven't found this algorithm used any where else, but it does seem to work and may speed up the process of finding the p-adic expansion of a/b after performing a division as it offers an alternative method to compute 1/b. I can include this in my write up, but I don't think it would benefit us in terms of speed. I also don't know how easy it would be to implement into FLINT. (Its probably worth noting that I don't think either of these options would be available to us had we opted for a power series implementation.) The version of Hensels Lemma I've been reading actually places the restriction on the polynomial that the coefficients must be p-adic integers. From this restriction, the nth root must lie in Z_p according to the proof. So its probably not worth using this method for finding nth roots. With regards to the Bristol conference, unfortunately I'll be home by then. Depending on the time you'll be leaving Coventry train station, I could look at getting an early train from home and meeting you at Bristol, but my hometown is ~200 miles north of Coventry so this probably won't be possible unfortunately. Daniel Ellam > I don't know if this looks efficient or not. Isn't there a method > which gives you more than one coefficient at a time. > > Also, this looks completely generic in that if I had an algorithm for > finding a root of an arbitrary polynomial, I could just feed it x^n-a > and it would essentially do exactly as you described, and not be any > slower. > > So perhaps there is no need to have it implemented separately for this > reason. > > I'm also unsure that you can expect the solution to always exist in > Zp. Isn't it sometimes in Qp? > > Bill. > > 2008/8/8 <D.C...@wa...>: >> Hi, >> >> My plan for finding nth roots was to consider p-adic nth roots as roots >> of >> a polynomial of the form f(x) = x^n - a and then basically applying >> Hensel's Lemma. So if S is a root of x^n - a and S = s0 + s1p + s2p^2 + >> ... then if we have already calculated s0, s1,...,s(n-1) and want to >> calculate sn, we write sn = s(n-1) + (bn)*p^n where >> >> (bn)*p^n = - f(b(n-1))/f'(b(n-1)) mod p^(n+1). >> >> Of course we will have to find the formal derivative of f, but in this >> specific case this is always easy. This method is quite easy to do by >> hand >> and should be easy to implement; I didn't find anything better during my >> research, do you think this method is worth pursuing? >> >> >> >> >> Daniel Ellam >> >> >> >>>Perhaps n-th roots can be thought of as roots of a polynomial, i.e. >>> roots >> of >x^n-1. So maybe we don't even need that separately. It would be >> worth >> it if >there is an algorithm out there which is definitely faster than >> something generic. 
>> >> >> >> >> >> >> >> ------------------------------------------------------------------------- >> This SF.Net email is sponsored by the Moblin Your Move Developer's >> challenge >> Build the coolest Linux based applications with Moblin SDK & win great >> prizes >> Grand prize is a trip for two to an Open Source event anywhere in the >> world >> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >> _______________________________________________ >> flint-devel mailing list >> fli...@li... >> https://lists.sourceforge.net/lists/listinfo/flint-devel >> > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's > challenge > Build the coolest Linux based applications with Moblin SDK & win great > prizes > Grand prize is a trip for two to an Open Source event anywhere in the > world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
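Regarding the final step above (converting the rational a/b into its p-adic expansion): the usual digit-by-digit method, which may or may not be the same as the algorithm in the linked article, looks like this as a stand-alone C sketch (b is assumed to be a unit mod p):

#include <stdio.h>

/* p-adic digits of a/b for b a unit mod p: the next digit is
   d = a * b^(-1) mod p, and then a <- (a - d*b)/p, an exact division. */
int main(void)
{
    long long a = 1, b = 3, p = 5;
    long long binv = 2;                             /* 3 * 2 = 6 = 1 mod 5 */

    printf("1/3 in Z_5: ");
    for (int i = 0; i < 8; i++)
    {
        long long d = ((a % p) * binv % p + p) % p; /* digit in 0..p-1 */
        printf("%lld ", d);
        a = (a - d * b) / p;                        /* a stays small */
    }
    printf("...\n");                                /* prints 2 3 1 3 1 3 1 3 ... */
    return 0;
}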
|
From: Bill H. <goo...@go...> - 2008-08-08 10:47:43
|
Actually, I think I am wrong. The seed values can be any 32 bit numbers in that code, so long as the other constants are not altered. Bill. 2008/8/8 William Hart <ha...@ya...>: > Do you mean every time you run the program, or every > time you generate a random polynomial. > > If you mean the former, then you could modify randint > to start with a global seed value (or values). > > At present it always starts with the seed values > 4035456057 and 6748392731 which need to be coprime to > 4294967311 and 4294967357 respectively. > > You could provide a function called z_random_seed or > something like that which crunched away with the > current time in milliseconds and came up with two 32 > bit numbers which are coprime to those two values. > > If you do modify this function, can you ensure that if > the seed function is not called that it uses the > defaults that it currently uses, and also ensure that > you make it work for 32 bit and 64 bit machines > (you'll see there is separate code for both at > present). > > Bill. > > --- Richard Howell-Peak <ric...@gm...> wrote: > >> When generating a random polynomial or using >> something like randint, is it >> possible to change the random seed because it keeps >> giving me the same >> polynomials every time. >> >> -- >> Richard >> > > ------------------------------------------------------------------------- >> This SF.Net email is sponsored by the Moblin Your >> Move Developer's challenge >> Build the coolest Linux based applications with >> Moblin SDK & win great prizes >> Grand prize is a trip for two to an Open Source >> event anywhere in the world >> > http://moblin-contest.org/redirect.php?banner_id=100&url=/> > _______________________________________________ >> flint-devel mailing list >> fli...@li... >> > https://lists.sourceforge.net/lists/listinfo/flint-devel >> > > > > > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
|
From: William H. <ha...@ya...> - 2008-08-08 10:44:22
|
Guys, which of you three will be around on the 18th, 19th
or 20th of August? This workshop on modular forms is
happening in Bristol at that time.
If you wanted to attend for a half day or full day, we
could meet at Coventry train station early in the
morning and I could pay for you to attend to see what
mathematics conferences are like. We could just leave
when you got bored. I've asked if there is going to be
a time which would be better to attend than any other.
I'll let you know what they say.
The program is already set, so not even I will speak,
but it would be an interesting experience for you guys
I think. Don't expect to understand a single word, and
be prepared for people to ask you what you are working
on in the coffee break, but if you just say you are an
undergrad from Warwick working on a computational
project with me and that you are really just there to
observe, you'll be fine.
If for some reason they don't have space for us, I'll
let you know. I did email and ask.
Bill.
|
|
From: William H. <ha...@ya...> - 2008-08-08 10:36:22
|
Do you mean every time you run the program, or every time you generate a random polynomial. If you mean the former, then you could modify randint to start with a global seed value (or values). At present it always starts with the seed values 4035456057 and 6748392731 which need to be coprime to 4294967311 and 4294967357 respectively. You could provide a function called z_random_seed or something like that which crunched away with the current time in milliseconds and came up with two 32 bit numbers which are coprime to those two values. If you do modify this function, can you ensure that if the seed function is not called that it uses the defaults that it currently uses, and also ensure that you make it work for 32 bit and 64 bit machines (you'll see there is separate code for both at present). Bill. --- Richard Howell-Peak <ric...@gm...> wrote: > When generating a random polynomial or using > something like randint, is it > possible to change the random seed because it keeps > giving me the same > polynomials every time. > > -- > Richard > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your > Move Developer's challenge > Build the coolest Linux based applications with > Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source > event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/> _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
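A possible shape for such a seeding function (just a sketch: the global variable names here are made up, it assumes a 64 bit machine for brevity, and it uses gettimeofday rather than a millisecond clock):

#include <sys/time.h>

/* Stand-ins for randint's internal state; the real names in long_extras differ.
   Initialised to the current defaults so nothing changes if z_random_seed is never called. */
unsigned long random_seed1 = 4035456057UL;
unsigned long random_seed2 = 6748392731UL;

static unsigned long gcd_ul(unsigned long a, unsigned long b)
{
    while (b) { unsigned long t = a % b; a = b; b = t; }
    return a;
}

void z_random_seed(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);

    unsigned long s1 = (((unsigned long) tv.tv_sec << 20) ^ (unsigned long) tv.tv_usec) & 0xFFFFFFFFUL;
    unsigned long s2 = ((unsigned long) tv.tv_usec * 2654435761UL + 1UL) & 0xFFFFFFFFUL;  /* arbitrary odd mixing constant */

    /* belt and braces: keep the seeds nonzero and coprime to the two moduli,
       even though (per the follow-up above) any nonzero 32 bit values should do */
    while (s1 == 0 || gcd_ul(s1, 4294967311UL) != 1) s1++;
    while (s2 == 0 || gcd_ul(s2, 4294967357UL) != 1) s2++;

    random_seed1 = s1;
    random_seed2 = s2;
}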
|
From: Richard Howell-P. <ric...@gm...> - 2008-08-08 10:18:17
|
When generating a random polynomial or using something like randint, is it possible to change the random seed because it keeps giving me the same polynomials every time. -- Richard |
|
From: Bill H. <goo...@go...> - 2008-08-08 10:08:45
|
Hi Guys, Two of you will be preparing posters for the projects you have done, so I thought I would write a handful of comments on how to go about this. * There's always less space than you think on one of these posters, so don't be too ambitious about what to include in terms of writing. * Colour and graphics are essential. Every other poster will be colourful and have pictures, so we want to do the same. Try and come up with some graphics or photos which will capture the viewers' attention, but which are relevant to the project. Graphs can be useful in this regard. I think Richard already has a timing graph in mind, I'm not exactly sure what Daniel can put on there, perhaps a diagram explaining one of the more technical p-adic algrorithms. Remember you can use photos as backgrounds, and this can help. * When you write, don't write stuff like, "during this project I learned about...." The purpose of the work you have been doing is real research, i.e. working on something that wasn't already done before. We aren't doing a report on the reading we did about what other people had already done before us. Whilst it is valuable that you learned about stuff written in books and papers, from a research point of view, that is not interesting. What you learned from actual hands on work, that no one else knows about or did before, is valuable from a research point of view. Richard you came up with a completely fresh implementation of various factoring and irreducibility algorithms. Daniel you did a comparative analysis of what other computer algebra systems do and planned a completely new interface for FLINT. So speak about what you guys actually did. Of course you can also mention any special insights you got from your reading, i.e. things that changed your mind about something important and which had real significance for the direction your project took. Daniel you should definitely mention the fact that we chose to use a single large integer for the representation, not a power series of coefficients as such. I wish there had been more insights like this, but we have to just make do with what we were able to come up with. * Don't be afraid to make the posters look technical. Take a look at the posters produced last year (they are all online somewhere) and you will see no one held back with the technical details. Often the posters were totally incomprehensible. At the same time, your introduction should be readable by a chemist who knows how to multiply decimal numbers together and that is about it. Remember when the posters go on display, you will actually be there for the whole day explaining to people over and over exactly what your project is about. It's useful if you have interesting stuff to point to. * Try not to glorify FLINT, i.e. don't think of this as an advertisement for FLINT. You are advertising the brand new research *you* did, not the FLINT project. About 5 people last year did projects on coming up with ways to advertise their supervisor's work, and they looked really, really silly, since everyone else had spent the summer doing actual research themselves, some of it publishable in a journal!! * There's no need to mention me more than once, for exactly the same reasons. What is on display here is *your* innovation and what *you* spent the summer doing. * This is not the place to be writing about how much you enjoyed the work. 
You should write such things in the survey www.survey.bris.ac.uk/reading/urop which you have been asked to take part in (it was emailed to me, but I'm now sending the link to you). * Don't use the words "I" or "me", or "did" or "got" in the project *at all*. Instead use "we", "found", "obtained". Researchers use a particular language when writing, and even when they did the research completely on their own, tend to use the royal plural, i.e. "we found that...." or "we obtained the following...." * Don't criticise anyone else's work or papers, etc. If you found an error, or something that was useless, simply act puzzled about it, but not overly concerned. * Don't include references on the poster, it's not the place. We'll put those on the website. * You can mention any further work that is planned for the future, but don't major on it, since otherwise it makes it look like you spent all summer procrastinating and not actually doing anything and thinking about what it would be like to do real research. Daniel, you have a slightly harder job with this since you didn't implement something new, didn't build something, or run an experiment or test a hypothesis, etc. All the other posters are going to be of this form. So you need to make the unique feature of your work the point of interest in your project. You investigated the different approaches computer algebra systems have taken to this problem and you tried to construct an approach which took the best aspects of them all. You will want to focus on the design decisions you made and why they were important to the final design. The research you were doing was essentially to solve the problem of how best to implement a p-adic module in a computer algebra system. And your findings are the key "discoveries" you made about how to achieve this. In particular I think you probably picked up a fair bit about how basic p-adic algorithms are actually used, for example in the various transcendental functions implemented and in algebraic number theory. Hopefully that will have informed your choices in the final design you came up with. Richard, in answer to your question, yes we definitely need some Magma timings for various sized polynomials on a chart. Remember that Magma will be running the squarefree factorisation algorithm first and you have to subtract the time it takes to generate the random polynomials, which can take some time. Magma has some special code for factorising things like binomials, quadratics and a few other things like that quickly, so beware of this when picking your polynomials. You want them to be generic, not easily factorised by tricks, otherwise the timing comparisons will be meaningless and highly in Magma's favour. It's possible to actually use the test code to time stuff now, as it prints out the time the test took. With various modifications of the test code you should be able to get plenty of timings. If you want something more systematic, we can use the FLINT profiling module, which is also able to produce graphs. I can show you how to use that if you need it. Here is the code I use to time factorisation in Magma: t:=Cputime(); for i := 1 to 1000 do p := NextPrime(RandomBits(Random(62)+2)); R:=GaloisField(p); S<x>:=PolynomialRing(R); f:=x^(2*(Random(20)+1)+1)-1; g:=Factorisation(f); end for; Cputime(t); You can modify it to do whatever you need. 
If you want random polynomials of say length 20, you can do:

length:=20;
f:=Polynomial([Random(R): x in [1..length]]);

You'll find that for long polynomials Magma quickly gets ahead of us. That is due to their better Gaussian elimination code and also an algorithm called half-gcd which is much better at computing GCD's of polynomials over length about 200 I think.

Bill.
|
From: Bill H. <goo...@go...> - 2008-08-08 09:14:01
|
I don't know if this looks efficient or not. Isn't there a method which gives you more than one coefficient at a time. Also, this looks completely generic in that if I had an algorithm for finding a root of an arbitrary polynomial, I could just feed it x^n-a and it would essentially do exactly as you described, and not be any slower. So perhaps there is no need to have it implemented separately for this reason. I'm also unsure that you can expect the solution to always exist in Zp. Isn't it sometimes in Qp? Bill. 2008/8/8 <D.C...@wa...>: > Hi, > > My plan for finding nth roots was to consider p-adic nth roots as roots of > a polynomial of the form f(x) = x^n - a and then basically applying > Hensel's Lemma. So if S is a root of x^n - a and S = s0 + s1p + s2p^2 + > ... then if we have already calculated s0, s1,...,s(n-1) and want to > calculate sn, we write sn = s(n-1) + (bn)*p^n where > > (bn)*p^n = - f(b(n-1))/f'(b(n-1)) mod p^(n+1). > > Of course we will have to find the formal derivative of f, but in this > specific case this is always easy. This method is quite easy to do by hand > and should be easy to implement; I didn't find anything better during my > research, do you think this method is worth pursuing? > > > > > Daniel Ellam > > > >>Perhaps n-th roots can be thought of as roots of a polynomial, i.e. roots > of >x^n-1. So maybe we don't even need that separately. It would be worth > it if >there is an algorithm out there which is definitely faster than > something generic. > > > > > > > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
|
From: Richard Howell-P. <ric...@gm...> - 2008-08-08 08:26:06
|
Here's some code for an irreducibility test, I'm pretty sure it works but I
don't know how I'd test it properly
/**
 * Irreducibility test (Rabin's test) for polynomials over Z/pZ
 * @param f an arbitrary polynomial over Z/pZ
 * @returns 1 if f is irreducible, 0 if not
 */
int irreducible(zmod_poly_t f)
{
   //linear (and constant) polynomials are dealt with trivially
   if (zmod_poly_length(f) <= 2)
      return 1;

   unsigned long p = zmod_poly_modulus(f);
   unsigned long n = zmod_poly_degree(f);
   int result = 1;

   //some polynomials we will need
   zmod_poly_t a, b, x, x_p, tmp;
   zmod_poly_init(a, p);
   zmod_poly_init(b, p);
   zmod_poly_init(x, p);
   zmod_poly_init(x_p, p);
   zmod_poly_init(tmp, p);

   //x is the polynomial "x"; since deg f >= 2, x is already reduced mod f
   zmod_poly_set_coeff_ui(x, 1, 1);

   //x_p = x^p mod f (one application of Frobenius)
   zmod_poly_powmod(x_p, x, p, f);

   //a = x^(p^n) mod f, by applying Frobenius a further n-1 times
   //(note that (x^p)^n = x^(p*n) is NOT the same thing)
   zmod_poly_set(a, x_p);
   for (unsigned long i = 1; i < n; i++)
   {
      zmod_poly_powmod(tmp, a, p, f);
      zmod_poly_set(a, tmp);
   }

   //f is irreducible iff x^(p^n) = x mod f and, for every prime q dividing n,
   //gcd(x^(p^(n/q)) - x, f) = 1
   if (!zmod_poly_equal(a, x))
      result = 0;
   else
   {
      factor_t factors;
      z_factor(&factors, n);
      for (unsigned long i = 0; i < factors.num && result; i++)
      {
         //b = x^(p^(n/q)) mod f for the prime q = factors.p[i]
         zmod_poly_set(b, x_p);
         for (unsigned long j = 1; j < n / factors.p[i]; j++)
         {
            zmod_poly_powmod(tmp, b, p, f);
            zmod_poly_set(b, tmp);
         }
         zmod_poly_sub(tmp, b, x);
         zmod_poly_gcd(tmp, tmp, f);
         if (!zmod_poly_is_one(tmp))
            result = 0;
      }
   }

   zmod_poly_clear(a);
   zmod_poly_clear(b);
   zmod_poly_clear(x);
   zmod_poly_clear(x_p);
   zmod_poly_clear(tmp);

   return result;
}
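On testing it: one quick sanity check (a sketch, assuming the zmod_poly calls used above plus stdio.h) is to feed it a known irreducible and a known reducible polynomial, e.g. over Z/3Z:

#include <stdio.h>

int main(void)
{
    zmod_poly_t f, g;
    zmod_poly_init(f, 3);
    zmod_poly_init(g, 3);

    /* f = x^2 + 1, irreducible over Z/3Z since -1 is not a square mod 3 */
    zmod_poly_set_coeff_ui(f, 2, 1);
    zmod_poly_set_coeff_ui(f, 0, 1);

    /* g = x^2 + 2x + 1 = (x + 1)^2, not irreducible */
    zmod_poly_set_coeff_ui(g, 2, 1);
    zmod_poly_set_coeff_ui(g, 1, 2);
    zmod_poly_set_coeff_ui(g, 0, 1);

    printf("x^2 + 1 mod 3   : %d (expect 1)\n", irreducible(f));
    printf("(x + 1)^2 mod 3 : %d (expect 0)\n", irreducible(g));

    zmod_poly_clear(f);
    zmod_poly_clear(g);
    return 0;
}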
--
Richard
|
|
From: <D.C...@wa...> - 2008-08-07 23:33:03
|
Hi, My plan for finding nth roots was to consider p-adic nth roots as roots of a polynomial of the form f(x) = x^n - a and then basically applying Hensel's Lemma. So if S is a root of x^n - a and S = s0 + s1p + s2p^2 + ... then if we have already calculated s0, s1,...,s(n-1) and want to calculate sn, we write sn = s(n-1) + (bn)*p^n where (bn)*p^n = - f(b(n-1))/f'(b(n-1)) mod p^(n+1). Of course we will have to find the formal derivative of f, but in this specific case this is always easy. This method is quite easy to do by hand and should be easy to implement; I didn't find anything better during my research, do you think this method is worth pursuing? Daniel Ellam >Perhaps n-th roots can be thought of as roots of a polynomial, i.e. roots of >x^n-1. So maybe we don't even need that separately. It would be worth it if >there is an algorithm out there which is definitely faster than something generic. |
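As a concrete illustration of the lifting step, here is a minimal stand-alone C sketch (not FLINT code) for f(x) = x^3 - 2 over Z_5, using word-size integers; it assumes f'(s0) is a unit mod p, and it produces one new 5-adic digit per pass (the Newton variant Bill alludes to instead doubles the precision each step):

#include <stdio.h>

typedef unsigned long long ull;

/* inverse of a mod a small prime p, by brute force */
static ull inv_mod_p(ull a, ull p)
{
    for (ull x = 1; x < p; x++)
        if (a % p * x % p == 1) return x;
    return 0;
}

int main(void)
{
    ull p = 5, a = 2;          /* want a cube root of 2 in Z_5       */
    ull s = 3;                 /* s0 = 3, since 3^3 = 27 = 2 mod 5   */
    ull pk = p;                /* invariant: s^3 = a mod pk          */

    for (int k = 1; k < 8; k++)
    {
        ull fs  = s * s * s - a;                 /* f(s), divisible by pk   */
        ull fds = 3 * s % p * s % p;             /* f'(s) = 3 s^2 mod p     */
        ull b   = (p - fs / pk % p) * inv_mod_p(fds, p) % p;
        s  += b * pk;                            /* s_k = s_(k-1) + b*p^k   */
        pk *= p;                                 /* now s^3 = a mod p^(k+1) */
    }
    printf("s = %llu, s^3 mod 5^8 = %llu\n", s, s * s * s % pk);
    return 0;
}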
|
From: William H. <ha...@ya...> - 2008-08-07 21:12:44
|
By the way, a divrem should be faster than a mulmod by 1 if you want to reduce a polynomial mod f, since the latter uses the former! Bill. --- Richard Howell-Peak <ric...@gm...> wrote: > It would be good to have a simple function for > reducing a polynomial mod f, > at the moment I have just been doing mulmod and > multiplying by the constant > polynomial 1. I know for integers there is a more > efficient way of doing > this than a divrem, is that the case for polynomials > as well? > > I have also devised a test for berlekamp's algorithm > which I would like to > know if it is worth implementing. It involves > generating random polynomials > of random length and testing them for irreducibility > (using the irreducible > test I have re-written since losing it) and keeping > them if they are > irreducible. We then raise them to random powers and > compute the product and > attempt to reclaim the factors. The alternative is > to simply run berlekamp > on random polynomials and then test all the factors > and see if they are > irreducible or not. This has the advantage of being > quicker but I suspect > most polynomials will be irreducible and therefore > unaffected by berlekamp. > > > Also should I implement a more generic function: > > zmod_poly_factor(zmod_poly_factor_t factors, > zmod_poly_t input) > > which will automatically call square-free and then > berlekamp accordingly? > > > Lastly is it worth beginning work on distinct > degree, equal degree > factorisation algorithms or do you want to focus on > optimising berlekamp? > > -- > Richard > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your > Move Developer's challenge > Build the coolest Linux based applications with > Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source > event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/> _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
|
From: William H. <ha...@ya...> - 2008-08-07 21:10:39
|
--- Richard Howell-Peak <ric...@gm...> wrote:
> It would be good to have a simple function for
> reducing a polynomial mod f,
> at the moment I have just been doing mulmod and
> multiplying by the constant
> polynomial 1. I know for integers there is a more
> efficient way of doing
> this than a divrem, is that the case for polynomials
> as well?
Yes, one can use a precomputed inverse in the same
way. I'm not sure how much more efficient that is than
what we do currently though. I use a pretty fast
algorithm for zmod_poly_divrem
>
> I have also devised a test for berlekamp's algorithm
> which I would like to
> know if it is worth implementing. It involves
> generating random polynomials
> of random length and testing them for irreducibility
> (using the irreducible
> test I have re-written since losing it) and keeping
> them if they are
> irreducible. We then raise them to random powers and
> compute the product and
> attempt to reclaim the factors.
Yes, it is definitely worth implementing. The one
slight thing to watch out for is that it is possible
that by random chance the same irreducible polynomial
gets generated twice. Then you might think you have k
distinct polynomials which you have taken powers of,
when in reality you only have k-1. This may cause the
test to fail when it shouldn't, if not implemented
carefully.
> The alternative is
> to simply run berlekamp
> on random polynomials and then test all the factors
> and see if they are
> irreducible or not. This has the advantage of being
> quicker but I suspect
> most polynomials will be irreducible and therefore
> unaffected by berlekamp.
I've been factoring polynomials of the form x^(2n+1) -
1 and these seem to have quite a few factors. It's
true that a random polynomial over Z is probably
irreducible, but it isn't so over finite fields, it
seems.
By the way, the nullity of the Berlekamp algebra
actually turns out to be the number of irreducible
factors of f. Thus you can actually use the first part
of Berlekamp as an irreducibility test!!
>
>
> Also should I implement a more generic function:
>
> zmod_poly_factor(zmod_poly_factor_t factors,
> zmod_poly_t input)
>
> which will automatically call square-free and then
> berlekamp accordingly?
Yes, definitely. I believe the square-free part will
actually take very little of the run time.
>
>
> Lastly is it worth beginning work on distinct
> degree, equal degree
> factorisation algorithms or do you want to focus on
> optimising berlekamp?
>
That's actually up to you. There are two ways we can
go. We can optimise Berlekamp, or we can work on von
zur Gathen - Shoup.
By the way, I found a way of taking more than a factor
of 2 off the time to do Gaussian reduction when you
have an extremely small prime modulus (like 5).
Of course to really get the low, low times, one needs
to use asymptotically fast inversion (an algorithm due
to Strassen, which relies on an algorithm for matrix
multiplication, also due to Strassen) and use a packed
representation of the matrix (i.e. only 3 bits for
each entry in Z/5Z instead of a whole word).
But those optimisations are not that interesting from
our perspective, since we care mainly about word sized
primes for now and optimising matrix operations takes
us well away from polynomial factorisation.
Bill.
|
|
From: Richard Howell-P. <ric...@gm...> - 2008-08-07 20:57:35
|
It would be good to have a simple function for reducing a polynomial mod f, at the moment I have just been doing mulmod and multiplying by the constant polynomial 1. I know for integers there is a more efficient way of doing this than a divrem, is that the case for polynomials as well? I have also devised a test for berlekamp's algorithm which I would like to know if it is worth implementing. It involves generating random polynomials of random length and testing them for irreducibility (using the irreducible test I have re-written since losing it) and keeping them if they are irreducible. We then raise them to random powers and compute the product and attempt to reclaim the factors. The alternative is to simply run berlekamp on random polynomials and then test all the factors and see if they are irreducible or not. This has the advantage of being quicker but I suspect most polynomials will be irreducible and therefore unaffected by berlekamp. Also should I implement a more generic function: zmod_poly_factor(zmod_poly_factor_t factors, zmod_poly_t input) which will automatically call square-free and then berlekamp accordingly? Lastly is it worth beginning work on distinct degree, equal degree factorisation algorithms or do you want to focus on optimising berlekamp? -- Richard |
|
From: Bill H. <goo...@go...> - 2008-08-07 16:07:45
|
Hi Richard, A couple of days ago, someone on the sage-devel list was complaining how slow SAGE was at reducing matrices over Z/5Z. Their test problem was a random matrix of size 2500x3125. Magma does it in 1.970s The time for SAGE to do it is 11.54s So I decided to time our implementation just to see how it goes: Testing zmod_mat_row_reduce_gauss()... Cpu = 134210 ms Wall = 134310 ms ... ok Oh well. So we have a way to go yet!! This isn't really a priority for us anyway, since we probably don't need it to be all that fast for polynomial factoring. One day it will get fixed. Bill. |
|
From: Bill H. <goo...@go...> - 2008-08-07 15:01:18
|
That's interesting. Can you track down the segfault. Perhaps we have a bug in something else in FLINT. I'm just about to commit your final changes to Pocklington-Lehmer. I am slowly catching up with all the code you have been writing, in between sorting out somewhere to live next week. Bill. 2008/8/7 Peter Shrimpton <ps...@gm...>: > Hello > > I think I have implemented the psudocode that I wrote yesterday and > amazingly I think it might actually be working! However I am not sure if it > is even supposed to. I realized whilst I was writing the code that the > parameter passed to the Lucas chain actually depends on n (A=a^2*b^(-1) - 2 > mod n (where b^(-1) is the inverse mod n)) However for a=1,b=-1 which is the > case I am interested in (its worth $620) I realized that this simplified to > -3 mod n or n-3. At this point I had already written most of the code so I > thought I may as well just try passing N-3 as the parameter (where > N=n_1*n_2....n_r as in the psudocode). This seamed to work and simply > reducing V_M mod n_i and continuing from there got me the same answers as > computing the V's from scratch. Unfortunately the computer seg-faults if we > try to do more then 5 17-digit numbers (I think N is getting too large. Not > sure though). Also in order for it to be worth the extra effort the numbers > need to have as many of there highest bits in common as possible. I am not > sure this will actually be the case after we have sieved etc. > > It might be worthwile writing some profiling code of the various methods we > have developed for (psudo)prime tests for several numbers. If that shows > that there are significant benefits between them and testing numbers one at > a time then perhaps it would be worth writing up? > > Peter > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great > prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > > |
|
From: Peter S. <ps...@gm...> - 2008-08-07 14:52:12
|
Hello

I think I have implemented the pseudocode that I wrote yesterday, and amazingly I think it might actually be working! However, I am not sure if it is even supposed to. I realized whilst I was writing the code that the parameter passed to the Lucas chain actually depends on n (A = a^2*b^(-1) - 2 mod n, where b^(-1) is the inverse mod n). However, for a = 1, b = -1, which is the case I am interested in (it's worth $620), I realized that this simplifies to -3 mod n, or n - 3. At this point I had already written most of the code, so I thought I may as well just try passing N - 3 as the parameter (where N = n_1*n_2*...*n_r as in the pseudocode). This seemed to work, and simply reducing V_M mod n_i and continuing from there got me the same answers as computing the V's from scratch. Unfortunately the computer seg-faults if we try to do more than 5 17-digit numbers (I think N is getting too large, though I'm not sure). Also, in order for it to be worth the extra effort, the numbers need to have as many of their highest bits in common as possible. I am not sure this will actually be the case after we have sieved etc.

It might be worthwhile writing some profiling code for the various methods we have developed for (pseudo)prime tests on several numbers. If that shows significant benefits over testing numbers one at a time, then perhaps it would be worth writing up?

Peter
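For reference, the V computation itself might look something like the stand-alone C sketch below (the parameters and the surrounding test logic are as in Peter's pseudocode, which isn't reproduced here; this only shows the binary Lucas chain for V_m(A, 1) mod n, and it assumes 2 < n < 2^32 so that products fit in 64 bits):

#include <stdio.h>

typedef unsigned long long ull;

/* V_m(A, 1) mod n via the binary Lucas chain:
   V_(2j) = V_j^2 - 2 and V_(2j+1) = V_j*V_(j+1) - A, all mod n. */
ull lucas_V(ull A, ull m, ull n)
{
    ull vj = 2 % n, vj1 = A % n;                 /* (V_0, V_1)       */
    int bit = 63;
    while (bit > 0 && !((m >> bit) & 1)) bit--;  /* top set bit of m */

    for (; bit >= 0; bit--)
    {
        ull t = (vj * vj1 % n + n - A % n) % n;  /* V_(2j+1)         */
        if ((m >> bit) & 1)
        {
            vj  = t;
            vj1 = (vj1 * vj1 % n + n - 2) % n;   /* V_(2j+2)         */
        }
        else
        {
            vj1 = t;
            vj  = (vj * vj % n + n - 2) % n;     /* V_(2j)           */
        }
    }
    return vj;
}

int main(void)
{
    ull n = 1000003;       /* just a demo modulus                          */
    ull A = n - 3;         /* Peter's case a = 1, b = -1 gives A = -3 mod n */
    printf("V_10(A) mod n = %llu\n", lucas_V(A, 10, n));
    return 0;
}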
|
From: <D.C...@wa...> - 2008-08-07 13:25:52
|
Hi,

I'm pretty sure I attached everything last time, but it didn't seem to work, so here it is again with the attachments.

I've basically broken down all the text I sent on division into the two attached documents. One is pseudocode (hopefully not written at too high a level) for some of the functions, but not all, and for division. This isn't any good by itself, so I've written up in plain English how everything should work, and all the functions, in another (attached) document. Hopefully the two together will be useful and should cover everything, if I haven't missed anything out. I'll most likely change some of this later, and I'll be writing something similar (if what I've done here is ok) for the other functions, such as multiplication, etc.

Where I haven't been sure exactly what the inputs to the functions are and what they return, I just stated the function name. I haven't included any data types yet.

I'm still using long_extras, as I don't think we can do modular arithmetic with fmpz. In SAGE, the integer is stored as an mpz_t.

In the end I imagine having two documents, a document for 'discussion' and a document for the pseudocode to accompany it. I'll write a third document covering the research I did, listing the papers, books, etc. that I've used throughout my project.

Daniel Ellam
|
From: Bill H. <goo...@go...> - 2008-08-07 13:25:47
|
Hi Daniel, With regard to long_extras, you can assume anything that is in there will eventually be implemented in fmpz for multiprecision integers as well. You can just about replace z_blah() with fmpz_blah() and it should be ok. Note that to reduce an fmpz mod n there is fmpz_mod and fmpz_mod_ui. But you can just specify which functions need to be implemented in fmpz if there is something you need which isn't there. Leave it up to the person who does that work to figure out how to actually implement that stuff. The idea of splitting the discussion and pseudocode up sounds good. Bill. 2008/8/7 <D.C...@wa...>: > Hi, > > I've basically broken down all the text I sent on division into the two > attached documents, one is pseudocode (hopefully not written at too high a > level) for some of the functions but not all and for division. This isnt > any good by itself so I've written up in plain English how everything > should work and all the functions into another (attached) document. > Hopefully the two together will be useful and should cover everything if I > haven't missed anything out. I'll most likely change some of this later > and I'll be writing something similar (if what I've done here is ok) for > the other functions, such as multiplication, etc. > > Where I havent been sure what exactly the input for the functions are and > what they return, I just stated the function name. I havent included any > data types yet. > > I'm still using long extras as I don't think we can do modulo arithmetic > with fmpz. In SAGE, the integer is stored as an mpz_t. > > In the end I imagine having two documents, a document for 'discussion' and > a document for the pseudocode to accompany it. I'll write a third document > containing the research I did, containing papers, books, etc that I've > used throughout my project. > > > > Daniel Ellam > > > > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
|
From: <D.C...@wa...> - 2008-08-07 13:13:01
|
Hi, I've basically broken down all the text I sent on division into the two attached documents, one is pseudocode (hopefully not written at too high a level) for some of the functions but not all and for division. This isnt any good by itself so I've written up in plain English how everything should work and all the functions into another (attached) document. Hopefully the two together will be useful and should cover everything if I haven't missed anything out. I'll most likely change some of this later and I'll be writing something similar (if what I've done here is ok) for the other functions, such as multiplication, etc. Where I havent been sure what exactly the input for the functions are and what they return, I just stated the function name. I havent included any data types yet. I'm still using long extras as I don't think we can do modulo arithmetic with fmpz. In SAGE, the integer is stored as an mpz_t. In the end I imagine having two documents, a document for 'discussion' and a document for the pseudocode to accompany it. I'll write a third document containing the research I did, containing papers, books, etc that I've used throughout my project. Daniel Ellam |
|
From: Peter S. <ps...@gm...> - 2008-08-06 16:52:44
|
Hello

I have done some more work on the simultaneous sieving and Pocklington code, and it now works for numbers that have a prime factor greater than primes[TF_CUTTOF]. See attached. I think the collision code needs more work, as currently it is not doing an awful lot to deal with collisions. Although, the way I have done it, with both the sieve and the hash table it is still possible to recover which two numbers collided, and often one of them will fail the Baillie-PSW test, so it doesn't need to be factorised.

Also, I was thinking about what you were saying about computing 2^k mod N = n_1*n_2*...*n_r, and I think it could apply to the Lucas test. I have written some pseudocode (attached) which might help. The basic idea is to compute V_M mod N, where M has several of the last bits in common with the m_i's. This gives us a starting point to compute the V's for the individual m_i from. I am not sure whether this will actually be quicker, and it will require some multiprecision arithmetic, but it might be worthwhile.

Peter
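Concretely, the hash described (the four bits just above the always-identical last bit, giving 16 buckets) amounts to something like the following sketch, assuming the candidates n are all odd:

/* 16-bucket hash: bits 1 to 4 of n (bit 0 carries no information here) */
static inline unsigned long hash_n(unsigned long n)
{
    return (n >> 1) & 0xF;
}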
|
From: Bill H. <goo...@go...> - 2008-08-06 14:44:15
|
Great! Bill. On 06/08/2008, Peter Shrimpton <ps...@gm...> wrote: > Yea I didn't have it returning 0 when it failed a pesudoprime test. But I > changed that and it now passes the test. > > Peter > > 2008/8/6 Bill Hart <goo...@go...> > >> Right, we can't use it to prove compositeness, but that number isn't a >> charmichael number is it? >> >> It just seems like a very small composite number to pass 1000 >> different Fermat pseudoprime tests. >> >> Bill. >> >> >> On 06/08/2008, Peter Shrimpton <ps...@gm...> wrote: >> > Hello >> > >> > I think I have found the problem. The problem with the >> > Pocklington-Lehmer >> > test is that if you want to prove somthing is composite then you have to >> > show that no b between 2 and n-1 satisfies >> > >> > 1) n is a b-fermat-psudoprime and >> > 2) gcd(b^((n-1)/p)-1,n)=1 >> > >> > My code searched through all of the b's until it found one that worked >> > or >> > reached _iterations_ whereupon it returned -1. I realized that as we are >> > doing a fermat-psudoprime test we can return 0 if it fails any of these >> > tests. This is far from fool proof because of Carmichael numbers but it >> will >> > help alot. I think proving that numbers are composite using the >> > Pocklington-Lehmer test may be a lost cause. >> > >> > There are currently two different algorithms for doing the >> > Pocklington-Lehmer test implemented in my code. One simply searches >> through >> > the b's from 2 to iterations and checks conditions 1 and 2. The other >> uses >> > the ideas from http://www.jstor.org/stable/2005583?seq=4 to reduced the >> > number of gcd calculations. I have done some quick tests to see which is >> > quicker and I think the second one is but im not sure if this will still >> be >> > the case when we use the code. >> > >> > Peter >> > >> > >> > 2008/8/6 Bill Hart <goo...@go...> >> > >> >> Peter, >> >> >> >> I merged your Pocklington-Lehmer code, but some kind of obscure bug >> >> exists. For n = 449*479 it returns -1 (I changed the return values by >> >> swapping your -1 and -2, since eventually -2 won't be needed). >> >> >> >> I set the number of iterations to 1000 and it runs out of iterations. >> >> This is pretty suspicious, since this is a pretty small number. It >> >> should be able to rule that it is composite pretty easily. Something >> >> seems to be wrong. Do you want to have a look and see if you can >> >> figure it out. >> >> >> >> You have to check out flint/trunk and do make long_extras-test then >> >> run long_extras-test >> >> >> >> Bill. >> >> >> >> 2008/8/5 Peter Shrimpton <ps...@gm...>: >> >> > Hello Bill >> >> > >> >> > I have written some code to sieve and factor simultaneously >> >> > >> >> > sievefact.c sieves a chunk of numbers and uses the trial devision to >> >> factor >> >> > n-1 for all n that get through the sieve. It uses a hash table to >> reduce >> >> the >> >> > storage space required with the hash function being the 2nd to last 4 >> >> bits. >> >> > (The last bit is alway the same). This seams to work very well >> >> > indeed. >> >> > >> >> > sievfact2.c simultaneously sieves and checks to see if it can find a >> >> > b >> >> such >> >> > that gcd(b^((n-1)/p)-1,n)=1. It uses the same code as sievefact.c but >> it >> >> > doesn't store the factors, only the cofactor. 
This could be useful if >> >> > the >> >> > Pocklington test is quicker then the pseudoprime tests the only >> trouble >> >> > being to then decide which numbers to do the pseudoprime tests on to >> >> > find >> >> a >> >> > counter example to the Baillie-PSW test. >> >> > >> >> > Would it be possible for you to explain your idea about calculating >> >> powers >> >> > of 2 mod N for lots of different numbers at once again? I can't quite >> >> > remember exactly how it was to be done or how we could go about >> >> > splitting >> >> > the powers up once we had calculated the power of the product. An >> >> > explanation in an email should suffice. >> >> > >> >> > I have also written a program that uses the list I emailed you before >> to >> >> > look for counter examples to the Baillie-PSW test below 2^63. >> >> Unfortunately >> >> > there weren't that many products in the list that were less then2^63 >> and >> >> I >> >> > didn't find any counter examples. Maybe it could be worth >> >> > implementing >> a >> >> > Baillie-PSW test in fmpz's and having another go? >> >> > >> >> > >> >> > Peter >> >> > >> >> > >> ------------------------------------------------------------------------- >> >> > This SF.Net email is sponsored by the Moblin Your Move Developer's >> >> challenge >> >> > Build the coolest Linux based applications with Moblin SDK & win >> >> > great >> >> > prizes >> >> > Grand prize is a trip for two to an Open Source event anywhere in the >> >> world >> >> > http://moblin-contest.org/redirect.php?banner_id=100&url=/ >> >> > _______________________________________________ >> >> > flint-devel mailing list >> >> > fli...@li... >> >> > https://lists.sourceforge.net/lists/listinfo/flint-devel >> >> > >> >> > >> >> >> >> >> ------------------------------------------------------------------------- >> >> This SF.Net email is sponsored by the Moblin Your Move Developer's >> >> challenge >> >> Build the coolest Linux based applications with Moblin SDK & win great >> >> prizes >> >> Grand prize is a trip for two to an Open Source event anywhere in the >> >> world >> >> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >> >> _______________________________________________ >> >> flint-devel mailing list >> >> fli...@li... >> >> https://lists.sourceforge.net/lists/listinfo/flint-devel >> >> >> > >> >> ------------------------------------------------------------------------- >> This SF.Net email is sponsored by the Moblin Your Move Developer's >> challenge >> Build the coolest Linux based applications with Moblin SDK & win great >> prizes >> Grand prize is a trip for two to an Open Source event anywhere in the >> world >> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >> _______________________________________________ >> flint-devel mailing list >> fli...@li... >> https://lists.sourceforge.net/lists/listinfo/flint-devel >> > |
|
From: Peter S. <ps...@gm...> - 2008-08-06 12:38:35
|
Yea I didn't have it returning 0 when it failed a pesudoprime test. But I changed that and it now passes the test. Peter 2008/8/6 Bill Hart <goo...@go...> > Right, we can't use it to prove compositeness, but that number isn't a > charmichael number is it? > > It just seems like a very small composite number to pass 1000 > different Fermat pseudoprime tests. > > Bill. > > > On 06/08/2008, Peter Shrimpton <ps...@gm...> wrote: > > Hello > > > > I think I have found the problem. The problem with the Pocklington-Lehmer > > test is that if you want to prove somthing is composite then you have to > > show that no b between 2 and n-1 satisfies > > > > 1) n is a b-fermat-psudoprime and > > 2) gcd(b^((n-1)/p)-1,n)=1 > > > > My code searched through all of the b's until it found one that worked or > > reached _iterations_ whereupon it returned -1. I realized that as we are > > doing a fermat-psudoprime test we can return 0 if it fails any of these > > tests. This is far from fool proof because of Carmichael numbers but it > will > > help alot. I think proving that numbers are composite using the > > Pocklington-Lehmer test may be a lost cause. > > > > There are currently two different algorithms for doing the > > Pocklington-Lehmer test implemented in my code. One simply searches > through > > the b's from 2 to iterations and checks conditions 1 and 2. The other > uses > > the ideas from http://www.jstor.org/stable/2005583?seq=4 to reduced the > > number of gcd calculations. I have done some quick tests to see which is > > quicker and I think the second one is but im not sure if this will still > be > > the case when we use the code. > > > > Peter > > > > > > 2008/8/6 Bill Hart <goo...@go...> > > > >> Peter, > >> > >> I merged your Pocklington-Lehmer code, but some kind of obscure bug > >> exists. For n = 449*479 it returns -1 (I changed the return values by > >> swapping your -1 and -2, since eventually -2 won't be needed). > >> > >> I set the number of iterations to 1000 and it runs out of iterations. > >> This is pretty suspicious, since this is a pretty small number. It > >> should be able to rule that it is composite pretty easily. Something > >> seems to be wrong. Do you want to have a look and see if you can > >> figure it out. > >> > >> You have to check out flint/trunk and do make long_extras-test then > >> run long_extras-test > >> > >> Bill. > >> > >> 2008/8/5 Peter Shrimpton <ps...@gm...>: > >> > Hello Bill > >> > > >> > I have written some code to sieve and factor simultaneously > >> > > >> > sievefact.c sieves a chunk of numbers and uses the trial devision to > >> factor > >> > n-1 for all n that get through the sieve. It uses a hash table to > reduce > >> the > >> > storage space required with the hash function being the 2nd to last 4 > >> bits. > >> > (The last bit is alway the same). This seams to work very well indeed. > >> > > >> > sievfact2.c simultaneously sieves and checks to see if it can find a b > >> such > >> > that gcd(b^((n-1)/p)-1,n)=1. It uses the same code as sievefact.c but > it > >> > doesn't store the factors, only the cofactor. This could be useful if > >> > the > >> > Pocklington test is quicker then the pseudoprime tests the only > trouble > >> > being to then decide which numbers to do the pseudoprime tests on to > >> > find > >> a > >> > counter example to the Baillie-PSW test. > >> > > >> > Would it be possible for you to explain your idea about calculating > >> powers > >> > of 2 mod N for lots of different numbers at once again? 
I can't quite > >> > remember exactly how it was to be done or how we could go about > >> > splitting > >> > the powers up once we had calculated the power of the product. An > >> > explanation in an email should suffice. > >> > > >> > I have also written a program that uses the list I emailed you before > to > >> > look for counter examples to the Baillie-PSW test below 2^63. > >> Unfortunately > >> > there weren't that many products in the list that were less then2^63 > and > >> I > >> > didn't find any counter examples. Maybe it could be worth implementing > a > >> > Baillie-PSW test in fmpz's and having another go? > >> > > >> > > >> > Peter > >> > > >> > > ------------------------------------------------------------------------- > >> > This SF.Net email is sponsored by the Moblin Your Move Developer's > >> challenge > >> > Build the coolest Linux based applications with Moblin SDK & win great > >> > prizes > >> > Grand prize is a trip for two to an Open Source event anywhere in the > >> world > >> > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > >> > _______________________________________________ > >> > flint-devel mailing list > >> > fli...@li... > >> > https://lists.sourceforge.net/lists/listinfo/flint-devel > >> > > >> > > >> > >> > ------------------------------------------------------------------------- > >> This SF.Net email is sponsored by the Moblin Your Move Developer's > >> challenge > >> Build the coolest Linux based applications with Moblin SDK & win great > >> prizes > >> Grand prize is a trip for two to an Open Source event anywhere in the > >> world > >> http://moblin-contest.org/redirect.php?banner_id=100&url=/ > >> _______________________________________________ > >> flint-devel mailing list > >> fli...@li... > >> https://lists.sourceforge.net/lists/listinfo/flint-devel > >> > > > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's > challenge > Build the coolest Linux based applications with Moblin SDK & win great > prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > flint-devel mailing list > fli...@li... > https://lists.sourceforge.net/lists/listinfo/flint-devel > |
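For reference, one round of the test being discussed (check that n is a b-Fermat pseudoprime and that gcd(b^((n-1)/p) - 1, n) = 1 for each prime p dividing n-1) can be sketched as below. This is stand-alone C, not the code in the FLINT repository, and it assumes n < 2^32 (so products fit in 64 bits) and that the full factorisation of n-1 is supplied:

#include <stdio.h>

typedef unsigned long long ull;

static ull powmod(ull b, ull e, ull n)
{
    ull r = 1 % n;
    b %= n;
    while (e)
    {
        if (e & 1) r = r * b % n;
        b = b * b % n;
        e >>= 1;
    }
    return r;
}

static ull gcd_ull(ull a, ull b)
{
    while (b) { ull t = a % b; a = b; b = t; }
    return a;
}

/* One base of the Pocklington-Lehmer test, with n - 1 fully factored into
   primes[0..num_primes-1].  Returns 1 if b proves n prime, 0 if b proves n
   composite, -1 if this b is inconclusive and another base should be tried. */
int pocklington_base(ull n, ull b, const ull *primes, int num_primes)
{
    if (powmod(b, n - 1, n) != 1) return 0;    /* n is not a b-Fermat pseudoprime */

    for (int i = 0; i < num_primes; i++)
    {
        ull t = powmod(b, (n - 1) / primes[i], n);
        ull g = gcd_ull((t + n - 1) % n, n);   /* gcd(b^((n-1)/p) - 1, n)         */
        if (g == n) return -1;                 /* b^((n-1)/p) = 1: inconclusive   */
        if (g != 1) return 0;                  /* proper factor found: composite  */
    }
    return 1;
}

int main(void)
{
    ull primes[] = { 2, 3, 7, 13 };            /* 547 - 1 = 2 * 3 * 7 * 13 */
    printf("547, base 2: %d\n", pocklington_base(547, 2, primes, 4));
    return 0;
}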
From: Bill H. <goo...@go...> - 2008-08-06 11:39:18
Right, we can't use it to prove compositeness, but that number isn't a Carmichael number, is it?

It just seems like a very small composite number to pass 1000 different Fermat pseudoprime tests.

Bill.

On 06/08/2008, Peter Shrimpton <ps...@gm...> wrote:
> Hello
>
> I think I have found the problem. The problem with the Pocklington-Lehmer test is that if you want to prove something is composite, you have to show that no b between 2 and n-1 satisfies
>
> 1) n is a b-Fermat pseudoprime, and
> 2) gcd(b^((n-1)/p) - 1, n) = 1.
>
> My code searched through all of the b's until it found one that worked or reached _iterations_, whereupon it returned -1. I realized that, as we are doing a Fermat pseudoprime test anyway, we can return 0 if n fails any of these tests. This is far from foolproof because of Carmichael numbers, but it will help a lot. I think proving that numbers are composite using the Pocklington-Lehmer test may be a lost cause.
>
> There are currently two different algorithms for the Pocklington-Lehmer test implemented in my code. One simply searches through the b's from 2 to iterations and checks conditions 1 and 2. The other uses the ideas from http://www.jstor.org/stable/2005583?seq=4 to reduce the number of gcd calculations. I have done some quick tests to see which is quicker and I think the second one is, but I'm not sure whether this will still be the case when we use the code.
>
> Peter
>
> 2008/8/6 Bill Hart <goo...@go...>
> > Peter,
> >
> > I merged your Pocklington-Lehmer code, but some kind of obscure bug exists. For n = 449*479 it returns -1 (I changed the return values by swapping your -1 and -2, since eventually -2 won't be needed).
> >
> > I set the number of iterations to 1000 and it runs out of iterations. This is pretty suspicious, since this is a pretty small number. It should be able to determine that it is composite pretty easily. Something seems to be wrong. Do you want to have a look and see if you can figure it out?
> >
> > You have to check out flint/trunk, do make long_extras-test, then run long_extras-test.
> >
> > Bill.
> >
> > 2008/8/5 Peter Shrimpton <ps...@gm...>:
> > > Hello Bill
> > >
> > > I have written some code to sieve and factor simultaneously.
> > >
> > > sievefact.c sieves a chunk of numbers and uses trial division to factor n-1 for all n that get through the sieve. It uses a hash table to reduce the storage space required, with the hash function being the second-to-last four bits (the last bit is always the same). This seems to work very well indeed.
> > >
> > > sievfact2.c simultaneously sieves and checks whether it can find a b such that gcd(b^((n-1)/p) - 1, n) = 1. It uses the same code as sievefact.c, but it doesn't store the factors, only the cofactor. This could be useful if the Pocklington test is quicker than the pseudoprime tests, the only trouble then being to decide which numbers to run the pseudoprime tests on to find a counterexample to the Baillie-PSW test.
> > >
> > > Would it be possible for you to explain your idea about calculating powers of 2 mod N for lots of different numbers at once again? I can't quite remember exactly how it was to be done, or how we could go about splitting the powers up once we had calculated the power of the product. An explanation in an email should suffice.
> > >
> > > I have also written a program that uses the list I emailed you before to look for counterexamples to the Baillie-PSW test below 2^63. Unfortunately there weren't that many products in the list that were less than 2^63, and I didn't find any counterexamples. Maybe it would be worth implementing a Baillie-PSW test using fmpz's and having another go?
> > >
> > > Peter
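As a worked aside on Bill's question: 449 * 479 = 215071 is not a Carmichael number. By Korselt's criterion a Carmichael number must be squarefree with p - 1 dividing n - 1 for every prime factor p (and it must have at least three prime factors, so a product of two primes can never qualify); in any case 215070 mod 448 = 30. A throwaway check in plain C, for illustration only:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t p[2] = { 449, 479 };
        uint64_t n = p[0] * p[1];                  /* 215071 */
        int korselt = 1;

        /* Korselt: every prime factor p of a Carmichael number n satisfies
           p - 1 | n - 1 (n squarefree; 449 != 479, so that part is immediate). */
        for (int i = 0; i < 2; i++)
            if ((n - 1) % (p[i] - 1) != 0)
                korselt = 0;

        /* Prints: n = 215071 passes Korselt: no   (since 215070 mod 448 = 30) */
        printf("n = %llu passes Korselt: %s\n",
               (unsigned long long) n, korselt ? "yes" : "no");
        return 0;
    }

So a genuinely pathological input was never on the cards here, which is consistent with the missing early return that Peter tracks down above.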
From: Peter S. <ps...@gm...> - 2008-08-06 10:52:16
Hello

I think I have found the problem. The problem with the Pocklington-Lehmer test is that if you want to prove something is composite, you have to show that no b between 2 and n-1 satisfies

1) n is a b-Fermat pseudoprime, and
2) gcd(b^((n-1)/p) - 1, n) = 1.

My code searched through all of the b's until it found one that worked or reached _iterations_, whereupon it returned -1. I realized that, as we are doing a Fermat pseudoprime test anyway, we can return 0 if n fails any of these tests. This is far from foolproof because of Carmichael numbers, but it will help a lot. I think proving that numbers are composite using the Pocklington-Lehmer test may be a lost cause.

There are currently two different algorithms for the Pocklington-Lehmer test implemented in my code. One simply searches through the b's from 2 to iterations and checks conditions 1 and 2. The other uses the ideas from http://www.jstor.org/stable/2005583?seq=4 to reduce the number of gcd calculations. I have done some quick tests to see which is quicker and I think the second one is, but I'm not sure whether this will still be the case when we use the code.

Peter

2008/8/6 Bill Hart <goo...@go...>
> Peter,
>
> I merged your Pocklington-Lehmer code, but some kind of obscure bug exists. For n = 449*479 it returns -1 (I changed the return values by swapping your -1 and -2, since eventually -2 won't be needed).
>
> I set the number of iterations to 1000 and it runs out of iterations. This is pretty suspicious, since this is a pretty small number. It should be able to determine that it is composite pretty easily. Something seems to be wrong. Do you want to have a look and see if you can figure it out?
>
> You have to check out flint/trunk, do make long_extras-test, then run long_extras-test.
>
> Bill.
>
> 2008/8/5 Peter Shrimpton <ps...@gm...>:
> > Hello Bill
> >
> > I have written some code to sieve and factor simultaneously.
> >
> > sievefact.c sieves a chunk of numbers and uses trial division to factor n-1 for all n that get through the sieve. It uses a hash table to reduce the storage space required, with the hash function being the second-to-last four bits (the last bit is always the same). This seems to work very well indeed.
> >
> > sievfact2.c simultaneously sieves and checks whether it can find a b such that gcd(b^((n-1)/p) - 1, n) = 1. It uses the same code as sievefact.c, but it doesn't store the factors, only the cofactor. This could be useful if the Pocklington test is quicker than the pseudoprime tests, the only trouble then being to decide which numbers to run the pseudoprime tests on to find a counterexample to the Baillie-PSW test.
> >
> > Would it be possible for you to explain your idea about calculating powers of 2 mod N for lots of different numbers at once again? I can't quite remember exactly how it was to be done, or how we could go about splitting the powers up once we had calculated the power of the product. An explanation in an email should suffice.
> >
> > I have also written a program that uses the list I emailed you before to look for counterexamples to the Baillie-PSW test below 2^63. Unfortunately there weren't that many products in the list that were less than 2^63, and I didn't find any counterexamples. Maybe it would be worth implementing a Baillie-PSW test using fmpz's and having another go?
> >
> > Peter
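For concreteness, a rough sketch of the witness search described above, for a single known prime factor p of n - 1, with the return convention used in this thread (1 for a base satisfying conditions 1 and 2, 0 when a base proves n composite, -1 when the search hits the iteration cap). The names and signature are illustrative rather than the FLINT code, and a full Pocklington-Lehmer primality proof still needs such a base for every prime factor of the factored part of n - 1, plus the usual size condition on that factored part.

    #include <stdint.h>

    static uint64_t n_mulmod(uint64_t a, uint64_t b, uint64_t n)
    {
        return (uint64_t) ((unsigned __int128) a * b % n);
    }

    static uint64_t n_powmod(uint64_t a, uint64_t e, uint64_t n)
    {
        uint64_t r = 1 % n;
        for (a %= n; e; e >>= 1, a = n_mulmod(a, a, n))
            if (e & 1) r = n_mulmod(r, a, n);
        return r;
    }

    static uint64_t n_gcd(uint64_t a, uint64_t b)
    {
        while (b) { uint64_t t = a % b; a = b; b = t; }
        return a;
    }

    /* Search for a base b certifying n with respect to one prime factor p of n - 1.
       Returns  1  if some b <= iterations satisfies conditions 1) and 2),
                0  if some b fails the Fermat test, i.e. proves n composite,
               -1  if the search runs out of iterations without deciding.       */
    int pocklington_witness(uint64_t n, uint64_t p, uint64_t iterations)
    {
        for (uint64_t b = 2; b <= iterations && b < n - 1; b++)
        {
            /* Condition 1: n must be a b-Fermat pseudoprime.                   */
            if (n_powmod(b, n - 1, n) != 1)
                return 0;

            /* Condition 2: gcd(b^((n-1)/p) - 1, n) = 1.
               (t != 0 here, since condition 1 held.)                           */
            uint64_t t = n_powmod(b, (n - 1) / p, n);
            if (n_gcd(t - 1, n) == 1)
                return 1;
        }
        return -1;
    }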
From: Bill H. <goo...@go...> - 2008-08-06 01:11:49
Peter,

I merged your Pocklington-Lehmer code, but some kind of obscure bug exists. For n = 449*479 it returns -1 (I changed the return values by swapping your -1 and -2, since eventually -2 won't be needed).

I set the number of iterations to 1000 and it runs out of iterations. This is pretty suspicious, since this is a pretty small number. It should be able to determine that it is composite pretty easily. Something seems to be wrong. Do you want to have a look and see if you can figure it out?

You have to check out flint/trunk, do make long_extras-test, then run long_extras-test.

Bill.

2008/8/5 Peter Shrimpton <ps...@gm...>:
> Hello Bill
>
> I have written some code to sieve and factor simultaneously.
>
> sievefact.c sieves a chunk of numbers and uses trial division to factor n-1 for all n that get through the sieve. It uses a hash table to reduce the storage space required, with the hash function being the second-to-last four bits (the last bit is always the same). This seems to work very well indeed.
>
> sievfact2.c simultaneously sieves and checks whether it can find a b such that gcd(b^((n-1)/p) - 1, n) = 1. It uses the same code as sievefact.c, but it doesn't store the factors, only the cofactor. This could be useful if the Pocklington test is quicker than the pseudoprime tests, the only trouble then being to decide which numbers to run the pseudoprime tests on to find a counterexample to the Baillie-PSW test.
>
> Would it be possible for you to explain your idea about calculating powers of 2 mod N for lots of different numbers at once again? I can't quite remember exactly how it was to be done, or how we could go about splitting the powers up once we had calculated the power of the product. An explanation in an email should suffice.
>
> I have also written a program that uses the list I emailed you before to look for counterexamples to the Baillie-PSW test below 2^63. Unfortunately there weren't that many products in the list that were less than 2^63, and I didn't find any counterexamples. Maybe it would be worth implementing a Baillie-PSW test using fmpz's and having another go?
>
> Peter
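To make the quoted description of sievefact.c concrete, here is a toy version of one plausible reading of it: sieve a chunk of odd n for candidates with no small prime factor, trial divide n - 1 by a handful of small primes, and file the survivors into 16 buckets keyed on the four bits of n just above the lowest bit (the lowest bit of an odd n is always 1). All names, bounds and the bucket layout are invented for the sketch; the real code, and whatever storage-saving trick its hash table uses, will differ.

    #include <stdint.h>
    #include <string.h>

    #define LEN      4096                 /* odd numbers per chunk                 */
    #define NPRIMES  8                    /* small primes used for sieving         */

    static const uint64_t primes[NPRIMES] = { 3, 5, 7, 11, 13, 17, 19, 23 };

    typedef struct
    {
        uint64_t n;                       /* the candidate                         */
        uint64_t cofactor;                /* unfactored part of n - 1              */
        uint64_t two_exp;                 /* exponent of 2 in n - 1                */
        uint64_t exp[NPRIMES];            /* exponent of primes[i] in n - 1        */
    } cand_t;

    /* 16 buckets keyed on bits 1..4 of n; counts[] tracks how full each one is.   */
    static cand_t bucket[16][LEN / 16];
    static int    counts[16];

    /* Process the odd numbers start, start + 2, ..., start + 2*(LEN - 1).
       start must be odd and larger than the biggest sieving prime.                */
    void sieve_and_factor(uint64_t start)
    {
        char composite[LEN];
        memset(composite, 0, sizeof(composite));
        memset(counts, 0, sizeof(counts));

        /* Sieve: cross off every odd multiple of a small prime in the chunk.      */
        for (int j = 0; j < NPRIMES; j++)
        {
            uint64_t p = primes[j];
            uint64_t m = start + (p - start % p) % p;   /* first multiple >= start */
            if (m % 2 == 0) m += p;                     /* make it odd (p is odd)  */
            for (; m < start + 2 * (uint64_t) LEN; m += 2 * p)
                composite[(m - start) / 2] = 1;
        }

        /* Trial divide n - 1 for the survivors and bucket them.                   */
        for (int i = 0; i < LEN; i++)
        {
            if (composite[i]) continue;

            uint64_t n = start + 2 * (uint64_t) i;
            int h = (int) ((n >> 1) & 0xF);             /* "second-to-last 4 bits" */
            if (counts[h] == LEN / 16) continue;        /* bucket full: toy code just drops it */

            cand_t *c = &bucket[h][counts[h]++];
            c->n = n;
            c->cofactor = n - 1;
            c->two_exp = 0;
            while (c->cofactor % 2 == 0) { c->cofactor /= 2; c->two_exp++; }
            for (int j = 0; j < NPRIMES; j++)
            {
                c->exp[j] = 0;
                while (c->cofactor % primes[j] == 0) { c->cofactor /= primes[j]; c->exp[j]++; }
            }
        }
    }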