From: Lionel B. <lio...@bo...> - 2005-01-10 20:23:07
Michel Bouissou wrote the following on 01/10/05 19:32:

> [...]
>
> Furthermore, the problem is not how our greylisting system may affect *our* own verification daemon, but how it will affect remote servers' verification daemons when they try to verify an address from our site. And we have no control over how these daemons, if any, work. We don't know how the remote sites are configured.

If they want to receive and send mail in a timely manner, they should be configured to cope well with temporary failures; after all, *they* decide to refuse the mail. Sender address verification isn't something people should use blindly: if you do, you block some mail the recipients are willing to receive.

I'm not willing to poke holes in the greylisting process just to cope with what I still think is a (small) defect in the verify daemon. In fact, if you really want to, you can configure Postfix itself not to greylist mail coming from the verification addresses. If there really is a need, I can write up how to configure Postfix to do so in SQLgrey's HOWTO.

Unless there are some new hard arguments for adding address-based whitelists to SQLgrey rather than modifying the verify daemon, let us keep this on sqlgrey-users in the next messages.

Best regards,
Lionel.
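As a rough illustration of "configure Postfix itself to not greylist mail coming from the verification addresses", a sender access check placed just before the SQLgrey policy service could look like the fragment below. This is only a sketch: the map name and path are invented, port 2501 is SQLgrey's default from the logs later in this thread, the "<>" key relies on Postfix's default smtpd_null_access_lookup_key, and note that an OK result lets matching senders skip every restriction listed after the access check, not just the greylisting step.

    # main.cf (excerpt) -- exempt verification probes before calling SQLgrey
    smtpd_recipient_restrictions =
        permit_mynetworks
        reject_unauth_destination
        check_sender_access hash:/etc/postfix/verify_probe_senders
        check_policy_service inet:127.0.0.1:2501

    # /etc/postfix/verify_probe_senders (run postmap after editing)
    <>              OK
    postmaster@     OK
    MAILER-DAEMON@  OK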
From: Michel B. <mi...@bo...> - 2005-01-10 18:32:54
On Monday 10 January 2005 17:19, Lionel Bouton wrote:

> Negative caches should not be this high if the return code is 45x when validating senders. I can understand a high TTL for 55x, but remember that greylisting only simulates a temporary unavailability of the MX; if the mail is delayed for several hours when a server is unavailable for a couple of minutes, there is a defect in the configuration IMHO.

I fully agree with this.

> The problem is more with sender verification configurability than with greylisting in my opinion.

Yes, but we also have to live with things "as they are" rather than "as they should be", until of course these things are fixed ;-)

Many Linux distributions come with Postfix packages, and many users aren't going to install a possibly updated Postfix until the updated package is in their favourite distro. Not everybody is of the patch-and-compile-myself sort ;-)

OTOH, when one adds a separate greylisting daemon to his existing Postfix setup, he'd like it to integrate as well as possible with the verification daemon, not "as it should be", but "as it is" ;-)

Furthermore, the problem is not how our greylisting system may affect *our* own verification daemon, but how it will affect remote servers' verification daemons when they try to verify an address from our site. And we have no control over how these daemons, if any, work. We don't know how the remote sites are configured. So the best thing we can do is to keep our greylisting system as much as possible out of the way of these remote verification daemons.

> Could Postfix's verify daemon be updated to have a separate (lower) negative cache TTL for 45x replies? Seems the right thing to do, as the information stored in the cache has very different meanings in the 45x and 55x cases.

True, and that would be great. Meanwhile, we have to work around this feature ;-)

--
Michel Bouissou <mi...@bo...> OpenPGP ID 0xDDE8AC6E
From: Lionel B. <lio...@bo...> - 2005-01-10 17:58:11
Max Diehn wrote the following on 01/10/2005 05:35 PM:

> Hello Lionel,
>
> I don't use Postfix but a proprietary / closed source smtp-server for which I have an API. So I coded an extension for that smtp-server to act as a policy client to sqlgrey. Now I have some problems with the tcp connections, obviously due to the limitations of that API. (How) Is it possible to make sqlgrey accept udp connections?

You can either:

* try to patch SQLgrey to accept connections on a UDP port instead of the default TCP:
  - search for "my $server = bless {"
  - 3 lines below, hardcode your desired UDP port like this: port = <yourport>, proto = 'udp',
  If it works, you'll have to maintain your own SQLgrey branch.

* or write a wrapper that speaks with your server using a protocol it can understand, and speaks the Postfix protocol with SQLgrey (more complex, but better, as it means you can switch to whatever policy daemon you want and won't have to maintain your own version of SQLgrey). For a simple protocol description have a look at http://www.postfix.org/SMTPD_POLICY_README.html.

You'll be on your own for this though; I can't test it as I don't have access to your mail server and am not inclined to spend time on integration with other SMTP servers anyway.

Lionel.
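The proto => 'udp' knob referred to above is a standard Net::Server option. A minimal, self-contained sketch of it outside SQLgrey (untested against SQLgrey itself; the package name and port are only examples, and the field names follow Net::Server's documented UDP conventions) might look like this:

    #!/usr/bin/perl
    # Toy Net::Server listener on UDP: datagram payloads arrive in
    # $self->{server}{udp_data} instead of being read from the client socket.
    package UdpPolicyStub;
    use strict;
    use warnings;
    use base 'Net::Server';

    sub process_request {
        my $self = shift;
        my $prop = $self->{server};
        return unless $prop->{udp_true};
        my $request = $prop->{udp_data};
        # A real wrapper would parse the policy attributes in $request here;
        # this stub just answers with a neutral action.
        $prop->{client}->send("action=dunno\n\n", 0);
    }

    UdpPolicyStub->run(host => 'localhost', port => 2501, proto => 'udp');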
From: Max D. <Max...@lr...> - 2005-01-10 16:36:12
Hello Lionel,

I don't use Postfix but a proprietary / closed source smtp-server for which I have an API. So I coded an extension for that smtp-server to act as a policy client to sqlgrey. Now I have some problems with the tcp connections, obviously due to the limitations of that API.

(How) Is it possible to make sqlgrey accept udp connections?

Best regards,
Max
From: Lionel B. <lio...@bo...> - 2005-01-10 16:19:34
Postfix-users is CCed as I propose a change in the verify daemon later on in this mail; please skip to 2/ if you don't care about SQLgrey-specific questions but only about greylisting + address verification behaviour.

# 1/ greylister return value

Michel Bouissou wrote the following on 01/10/2005 03:46 PM:

> Let me explain my view: the huge majority of trash that my mailserver currently receives comes from zombie virus-sending Windows machines, or spambots that don't have a normal retry pattern. The whole point in greylisting is precisely to eliminate them, because they will *NOT* retry. So rejecting their mail attempt with any code, either 450 or 550, results in the same effect: they don't come back (at least using the same from/to/IP).
>
> I personally prefer to put first in my Postfix series of tests all the tests that can be "local and quick", and only for messages that pass these, perform tests that may be "longer or network or resources consuming", such as sender address verification.

Makes sense to me.

> That's why I'd like a "Greylisted" event to be able to immediately reject the mail on the first attempt (I wouldn't mind a "defer_if_permit" on an "early reconnection" BTW, because it would allow me to check the sender now that it is probable that the same mail will come back again).

This last one is a little more complex, but I agree it can be helpful.

> Furthermore, if greylisting caused an immediate reject, it would permit me not to uselessly bother remote servers with address verification for addresses that are dummy, and messages that won't come back.
>
> Furthermore, I prefer not to "pollute" my address verification DB where it can be avoided, to prevent it from growing too much. SQLgrey uses a nice PostgreSQL database on my system, its size is no problem and it can be easily maintained (vacuum analyze etc...), which is not the case for the Postfix address verification daemon DB, which uses a simpler BerkeleyDB.
>
> Last but not least, I don't mind rejecting first with a 450 a spam that will actually come back and that I will reject with a 554 on the 2nd attempt (origin blacklisted, non-existent sender...). By rejecting the message twice, I then put a double load on the spammer's server, and I don't hate the idea of giving them more work to get rejected twice ;-)

Ok, you convinced me. Added to my TODO list for 1.4.x.

# 2/ address verification

>> The common problem is the following:
>> Server A uses greylisting, Server B uses sender address verification.
>> - Server A forwards a mail to Server B,
>> - Server B wants to check that the sender exists and that Server A is the MX for the corresponding domain; Server A is not in the auto-whitelists of Server B,
>> - Server A doesn't trust Server B yet, so Server B's verification fails and it (temporarily) refuses the mail,
>> - Server B will retry (around 20 minutes later with default Postfix configuration under light load), then Server A will be able to verify (and will be for 60 days and more with the default SQLgrey config), and the mail gets through.
>
> No, because server B will probably *NOT* verify again so quickly. Server B will probably have a "negative cache TTL" telling it not to retry a failed verification before a given delay (I personally use a 3 hour delay) to avoid repeatedly bothering remote servers with possible repeated requests for the same non-existent address.

Negative caches should not be this high if the return code is 45x when validating senders. I can understand a high TTL for 55x, but remember that greylisting only simulates a temporary unavailability of the MX; if the mail is delayed for several hours when a server is unavailable for a couple of minutes, there is a defect in the configuration IMHO.

The problem is more with sender verification configurability than with greylisting in my opinion. Sender verification is a risky thing to do BTW; you should only call verify on known-to-be-forged domains where you are pretty sure you can verify the Return-Path of each and every valid mail.

Could Postfix's verify daemon be updated to have a separate (lower) negative cache TTL for 45x replies? It seems the right thing to do, as the information stored in the cache has very different meanings in the 45x and 55x cases.

> In this scheme, what happens is as follows:
> - Server A wants to send mail to server B.
> - Server B tries to verify the sender address on server A, but gets greylisted, and its verification receives a "450".
> - Server B keeps this 450 in cache for its "negative TTL time" (let's say 3 hours).
> - Server A will retry several times to send its mail to server B, and will be refused with 450 immediately as long as the sender address verification negative TTL isn't expired.
> - After at least 3 hours (but some sending servers may have retry delays that grow exponentially, such as qmail, so it could well be 8 or 10 hours), ...

In qmail's case, it shouldn't grow much, since the 2nd verification attempt will most probably succeed.

Best regards,
Lionel.
From: Michel B. <mi...@bo...> - 2005-01-10 14:46:39
On Monday 10 January 2005 15:08, Lionel Bouton wrote:

>> 2/ The latest version of Postgrey allows specifying the result (both code and text) that Postgrey should give when a connection is greylisted. It would be nice if SQLgrey could do the same.
>> Currently SQLgrey returns "defer_if_permit" and this cannot be modified (except by fiddling in the code).
>> However, some may prefer to reject immediately with a 450 temporary failure, rather than carry on with further potentially expensive Postfix tests (such as checking external DNSBLs or verifying sender existence), where the connection will in the end be rejected anyway...
>
> I suppose you use a 550 rejection at least with DNSBLs. In this case you'll lose time by forcing a temporary failure instead of rejecting on the first connection.

Let me explain my view: the huge majority of trash that my mailserver currently receives comes from zombie virus-sending Windows machines, or spambots that don't have a normal retry pattern. The whole point in greylisting is precisely to eliminate them, because they will *NOT* retry. So rejecting their mail attempt with any code, either 450 or 550, results in the same effect: they don't come back (at least using the same from/to/IP).

I personally prefer to put first in my Postfix series of tests all the tests that can be "local and quick", and only for messages that pass these, perform tests that may be "longer or network or resources consuming", such as sender address verification.

That's why I'd like a "Greylisted" event to immediately reject the mail on the first attempt (I wouldn't mind a "defer_if_permit" on an "early reconnection" BTW, because it would allow me to check the sender now that it is probable that the same mail will come back again).

Furthermore, if greylisting caused an immediate reject, it would permit me not to uselessly bother remote servers with address verification for addresses that are dummy, and messages that won't come back.

Furthermore, I prefer not to "pollute" my address verification DB where it can be avoided, to prevent it from growing too much. SQLgrey uses a nice PostgreSQL database on my system, its size is no problem and it can be easily maintained (vacuum analyze etc...), which is not the case for the Postfix address verification daemon DB, which uses a simpler BerkeleyDB.

Last but not least, I don't mind rejecting first with a 450 a spam that will actually come back and that I will reject with a 554 on the 2nd attempt (origin blacklisted, non-existent sender...). By rejecting the message twice, I then put a double load on the spammer's server, and I don't hate the idea of giving them more work to get rejected twice ;-)

> With sender verification it can be more complex, I'm not sure of the cases where 450 and 550 are used.

Basically the Postfix verification daemon gives the same family of error that the remote server gave when performing the verification. If the verification failed with a 5xx, Postfix will issue a 5xx. If the verification failed with a 4xx (or could not be done), Postfix will issue a 4xx.

> If the sender verification uses 550, you end up in the same case as the one described above: you lose time and performance; if a 450 is used then it will save one 450 but not the subsequent ones.
>
> So in the end, I'm not sure you will gain much from it or if in fact you won't lose something...
> But it's a simple thing to add, so if you find a case where it clearly is beneficial, I'll code it.

Thanks :-)

>> 3/ It would be nice to have a whitelist of senders (using regexps) for whom greylisting should never be done. This list should by default include the addresses that some MTAs use to perform "sender address verification", as we should avoid greylisting those verifications as much as possible, the combination of greylisting and sender address verification introducing problems and rather long delays. These verification origin addresses are, among the most common:
>> <> (null sender)
>> MAILER-DAEMON@*
>> postmaster@*
>
> I don't think greylisting and sender address verification clash with each other.

From my experience they actually do.

> The common problem is the following:
> Server A uses greylisting, Server B uses sender address verification.
> - Server A forwards a mail to Server B,
> - Server B wants to check that the sender exists and that Server A is the MX for the corresponding domain; Server A is not in the auto-whitelists of Server B,
> - Server A doesn't trust Server B yet, so Server B's verification fails and it (temporarily) refuses the mail,
> - Server B will retry (around 20 minutes later with default Postfix configuration under light load), then Server A will be able to verify (and will be for 60 days and more with the default SQLgrey config), and the mail gets through.

No, because server B will probably *NOT* verify again so quickly. Server B will probably have a "negative cache TTL" telling it not to retry a failed verification before a given delay (I personally use a 3 hour delay) to avoid repeatedly bothering remote servers with possible repeated requests for the same non-existent address.

In this scheme, what happens is as follows:
- Server A wants to send mail to server B.
- Server B tries to verify the sender address on server A, but gets greylisted, and its verification receives a "450".
- Server B keeps this 450 in cache for its "negative TTL time" (let's say 3 hours).
- Server A will retry several times to send its mail to server B, and will be refused with 450 immediately as long as the sender address verification negative TTL isn't expired.
- After at least 3 hours (but some sending servers may have retry delays that grow exponentially, such as qmail, so it could well be 8 or 10 hours), server A will retry, server B will retry its verification, and, provided server A satisfied the verification fast enough, the mail will in the end be accepted. If server A is a bit slow to answer, the verification will time out and be finished asynchronously, the mail will be refused once more, and server A will have to retry once more to send it.

With the result that on average, the mail from A to B will not have been delayed by "5 minutes or so" by the greylisting done on A, but rather by 4 hours or more (after a dozen attempts), by the combination of greylisting on A, verification on B, and the negative TTL on B's verification.

Well, 4 hours or so, that's becoming a delay long enough to cause trouble for many sysadmins ;-))

> The problem with adding hard whitelists for origin addresses is that you will have many SPAMs coming through (postmaster, MAILER-DAEMON and <> are used rather often).

Not that often in my experience. "postmaster" has actually been used by recent worm-viruses (as well as hostmaster and webmaster), but I have seen very few spams or viruses "From: <>" and even fewer "From: MAILER-DAEMON@" (talking about SMTP envelope Froms).

Thanks again for SQLgrey, I like it ;-)

--
Michel Bouissou <mi...@bo...> OpenPGP ID 0xDDE8AC6E

The taxis of the Marne went to war. So I laugh when I see a taxi that is afraid to drive to the suburbs.
From: Lionel B. <lio...@bo...> - 2005-01-10 14:08:38
Michel Bouissou wrote the following on 01/10/2005 09:26 AM:

> Hi there,
>
> I've started using SQLgrey a couple of days ago, and I'm rather happy with it.
>
> I have however a couple of comments and feature requests:
>
> 1/ In the provided "clients_fqdn_whitelist", the following line:
> scd.yahoo.com
> is wrong, does not work, and should be replaced with
> *.scd.yahoo.com

Thanks. Committed in CVS.

> 2/ The latest version of Postgrey allows specifying the result (both code and text) that Postgrey should give when a connection is greylisted. It would be nice if SQLgrey could do the same.
> Currently SQLgrey returns "defer_if_permit" and this cannot be modified (except by fiddling in the code).
> However, some may prefer to reject immediately with a 450 temporary failure, rather than carry on with further potentially expensive Postfix tests (such as checking external DNSBLs or verifying sender existence), where the connection will in the end be rejected anyway...

I suppose you use a 550 rejection at least with DNSBLs. In this case you'll lose time by forcing a temporary failure instead of rejecting on the first connection.

With sender verification it can be more complex; I'm not sure of the cases where 450 and 550 are used. If the sender verification uses 550, you end up in the same case as the one described above: you lose time and performance. If a 450 is used, then it will save one 450 but not the subsequent ones.

So in the end, I'm not sure you will gain much from it or if in fact you won't lose something... But it's a simple thing to add, so if you find a case where it clearly is beneficial, I'll code it.

> 3/ It would be nice to have a whitelist of senders (using regexps) for whom greylisting should never be done. This list should by default include the addresses that some MTAs use to perform "sender address verification", as we should avoid greylisting those verifications as much as possible, the combination of greylisting and sender address verification introducing problems and rather long delays. These verification origin addresses are, among the most common:
> <> (null sender)
> MAILER-DAEMON@*
> postmaster@*

I don't think greylisting and sender address verification clash with each other. The common problem is the following:

Server A uses greylisting, Server B uses sender address verification.
- Server A forwards a mail to Server B,
- Server B wants to check that the sender exists and that Server A is the MX for the corresponding domain; Server A is not in the auto-whitelists of Server B,
- Server A doesn't trust Server B yet, so Server B's verification fails and it (temporarily) refuses the mail,
- Server B will retry (around 20 minutes later with default Postfix configuration under light load), then Server A will be able to verify (and will be for 60 days and more with the default SQLgrey config), and the mail gets through.

In the end what happens is that you more or less apply greylisting to the "outgoing" mails where the destination uses sender verification, as well as to the "incoming" mails. With auto-whitelisting, you shouldn't notice much of a difference.

The problem with adding hard whitelists for origin addresses is that you will have many SPAMs coming through (postmaster, MAILER-DAEMON and <> are used rather often).

In conclusion, I think auto-whitelisting should take care of the problem efficiently, and adding sender whitelists would bring more harm than good. If you find odd installations where sender verification is poorly implemented (for example using a 550 instead of a 450) to the point of breaking greylisting, I believe you should add them to the whitelists server by server (these would be broken SMTP configurations, so that should be the right place), warn their postmasters and post the cases on this mailing-list.

> Thanks for this excellent piece of software.

You are welcome. Thanks for the comments.

Lionel.
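For context on the "defer_if_permit" result discussed above: the Postfix policy delegation protocol SQLgrey speaks is a list of name=value attributes terminated by an empty line, answered by a single action attribute and another empty line. The attribute values and the greylisting text below are only illustrative; the exact attribute set depends on the Postfix version:

    request=smtpd_access_policy
    protocol_state=RCPT
    protocol_name=ESMTP
    client_address=192.0.2.25
    client_name=mail.example.org
    sender=user@example.org
    recipient=someone@example.net

    action=defer_if_permit Greylisted, please try again later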
From: Michel B. <mi...@bo...> - 2005-01-10 08:26:35
Hi there,

I've started using SQLgrey a couple of days ago, and I'm rather happy with it.

I have however a couple of comments and feature requests:

1/ In the provided "clients_fqdn_whitelist", the following line:
scd.yahoo.com
is wrong, does not work, and should be replaced with
*.scd.yahoo.com

2/ The latest version of Postgrey allows specifying the result (both code and text) that Postgrey should give when a connection is greylisted. It would be nice if SQLgrey could do the same.
Currently SQLgrey returns "defer_if_permit" and this cannot be modified (except by fiddling in the code).
However, some may prefer to reject immediately with a 450 temporary failure, rather than carry on with further potentially expensive Postfix tests (such as checking external DNSBLs or verifying sender existence), where the connection will in the end be rejected anyway...

3/ It would be nice to have a whitelist of senders (using regexps) for whom greylisting should never be done. This list should by default include the addresses that some MTAs use to perform "sender address verification", as we should avoid greylisting those verifications as much as possible, the combination of greylisting and sender address verification introducing problems and rather long delays. These verification origin addresses are, among the most common:
<> (null sender)
MAILER-DAEMON@*
postmaster@*

Thanks for this excellent piece of software.

Regards.

--
Michel Bouissou <mi...@bo...> OpenPGP ID 0xDDE8AC6E
From: Lionel B. <lio...@bo...> - 2005-01-07 11:03:22
Max Diehn wrote the following on 01/07/2005 11:21 AM:

> Hi Lionel,
>
> in Net::Server, it is said:
>
> "Each of the server personalities (except for INET), support restarting via a HUP signal (see "kill -l"). When a HUP is received, the server will close children (if any), make sure that sockets are left open, and re-exec using the same commandline parameters that initially started the server."
>
> Making sure that sockets are left open - that's what I wanted. But this happens when I try HUP'ing sqlgrey:
>
> 2005/01/07-10:37:28 Server closing!
> 2005/01/07-10:37:28 HUP'ing server
> Process Backgrounded
> 2005/01/07-10:37:29 Pid_file already exists for running process 26902)... aborting at line 268 in file /usr/lib/perl5/site_perl/5.8.0/Net/Server.pm
> 2005/01/07-10:37:29 Server closing!
>
> sqlgrey source says:
>
> if (defined $opt{kill}) {
>   ...
>   unlink $pidfile;
>   exit;
> }
>
> but in case of a HUP, the old pidfile still exists. Do you know an easy solution for this?

Not yet; there's a bug in Net::Server::Multiplex that prevents you from using SIGHUP. I already hit this when starting to implement on-demand static whitelist reloading. It's not at the top of my TODO list, but I'd have to look at the Multiplex code to understand what's going on.

Lionel
From: Max D. <Max...@lr...> - 2005-01-07 10:21:10
Hi Lionel,

in Net::Server, it is said:

"Each of the server personalities (except for INET), support restarting via a HUP signal (see "kill -l"). When a HUP is received, the server will close children (if any), make sure that sockets are left open, and re-exec using the same commandline parameters that initially started the server."

Making sure that sockets are left open - that's what I wanted.
But this happens, when I try HUPing sqlgrey:

2005/01/07-10:37:28 Server closing!
2005/01/07-10:37:28 HUP'ing server
Process Backgrounded
2005/01/07-10:37:29 Pid_file already exists for running process 26902)... aborting at line 268 in file /usr/lib/perl5/site_perl/5.8.0/Net/Server.pm
2005/01/07-10:37:29 Server closing!

sqlgrey source says:

if (defined $opt{kill}) {
  ...
  unlink $pidfile;
  exit;
}

but in case of a HUP, the old pidfile still exists.
Do you know an easy solution for this?

Thanks,
Max
From: Lionel B. <lio...@bo...> - 2004-12-29 09:10:43
HaJo Schatz wrote the following on 12/29/04 06:54:

> Lionel Bouton wrote:
>
>>> - During boot, sqlgrey couldn't access postgresql yet (DBI returned "db is starting up").
>>
>> Retrying after a while is IMHO not worth it (it can't decide on its own how much time it has to sleep). SQLgrey must be started after the database and this is the administrator's job to configure the system accordingly.
>
> Sure -- I have an S85postgresql and a S90sqlgrey. However postgres seems to take a while to "come up", which makes it currently impossible to rely on a correct start-up of a box; you have to intervene manually (and you will still have to if you simply exit sqlgrey with an error). Currently, a reboot of a box means that you will not be able to receive any mail afterwards until you attend to it...
>
> I think there are only two proper ways of solving this:
> 1) In the above case, DBI is indicating a clear reason why the connection (temporarily) failed. Hence, if DBI returns "DB starting up" as the error, re-try within sqlgrey until this error is gone. I actually think that "DB starting up" is not really an error at all...

Ok, I'll look into it.

> 2) Do this check/delay in the init-script before launching sqlgrey, i.e. query the DB and see whether it's responsive.
>
> I think 2) is a bit confusing when thinking about what will happen if the db gets restarted while sqlgrey is already running. I know, in such a case sqlgrey will re-try. But here's IMHO the inconsistency -- if there's a DB issue at start-up, sqlgrey ignores it. If the issue occurs during execution, sqlgrey takes care of it...

I could change that. I believed it would be quite messy to handle no connection being available at startup time, but on second thought I think it can be handled cleanly.

>> Lionel (back from numerous enjoyable and delicious meals).
>
> Hajo, big and round by now. And I thought you went skiing...

Skiing is scheduled for the last 2 weeks of January :-) I hope I'll find some time to hack on SQLgrey before that.

Lionel.
From: HaJo S. <ha...@ha...> - 2004-12-29 05:55:04
Lionel Bouton wrote:

>> - During boot, sqlgrey couldn't access postgresql yet (DBI returned "db is starting up").
>
> Retrying after a while is IMHO not worth it (it can't decide on its own how much time it has to sleep). SQLgrey must be started after the database and this is the administrator's job to configure the system accordingly.

Sure -- I have an S85postgresql and a S90sqlgrey. However postgres seems to take a while to "come up", which makes it currently impossible to rely on a correct start-up of a box; you have to intervene manually (and you will still have to if you simply exit sqlgrey with an error). Currently, a reboot of a box means that you will not be able to receive any mail afterwards until you attend to it...

I think there are only two proper ways of solving this:
1) In the above case, DBI is indicating a clear reason why the connection (temporarily) failed. Hence, if DBI returns "DB starting up" as the error, re-try within sqlgrey until this error is gone. I actually think that "DB starting up" is not really an error at all...
2) Do this check/delay in the init-script before launching sqlgrey, i.e. query the DB and see whether it's responsive.

I think 2) is a bit confusing when thinking about what will happen if the db gets restarted while sqlgrey is already running. I know, in such a case sqlgrey will re-try. But here's IMHO the inconsistency -- if there's a DB issue at start-up, sqlgrey ignores it. If the issue occurs during execution, sqlgrey takes care of it...

> Lionel (back from numerous enjoyable and delicious meals).

Hajo, big and round by now. And I thought you went skiing...

--
HaJo Schatz <ha...@ha...>
http://www.HaJo.Net
PGP-Key: http://www.hajo.net/hajonet/keys/pgpkey_hajo.txt
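Option 1) above, retrying inside the daemon while PostgreSQL reports that it is still starting up, could be sketched roughly as follows in Perl. The function name, pause and attempt count are invented for illustration; this is not SQLgrey's actual code:

    use strict;
    use warnings;
    use DBI;

    # Keep trying DBI->connect() while the server answers "starting up";
    # give up immediately on any other error.
    sub connect_with_retry {
        my ($dsn, $user, $pass, $max_tries) = @_;
        for my $attempt (1 .. $max_tries) {
            my $dbh = DBI->connect($dsn, $user, $pass,
                                   { PrintError => 0, RaiseError => 0 });
            return $dbh if $dbh;
            die "DB connection failed: $DBI::errstr\n"
                unless defined $DBI::errstr
                    && $DBI::errstr =~ /starting up/i;
            sleep 5;    # arbitrary pause before the next attempt
        }
        die "database still starting up after $max_tries attempts\n";
    }

    # Example use (credentials are placeholders):
    # my $dbh = connect_with_retry('dbi:Pg:dbname=sqlgrey', 'sqlgrey', 'secret', 24);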
From: Lionel B. <lio...@bo...> - 2004-12-28 21:56:42
HaJo Schatz wrote the following on 12/24/04 05:39:

> Hi Lionel,
>
> Just discovered two start-up issues with sqlgrey while doing a reboot of my mail server:
>
> - During boot, sqlgrey couldn't access postgresql yet (DBI returned "db is starting up"). sqlgrey then tried to create the from_awl table, failed (db still starting up) and died silently (rather than, what would IMO have been correct, retrying after a while). I have attached the relevant part of the log below fyi.

I'll have to see if I can make sqlgrey exit with an error; this is the most desirable behaviour, as scripts launching sqlgrey could then detect the problem and output an error on the console.

Retrying after a while is IMHO not worth it (it can't decide on its own how much time it has to sleep). SQLgrey must be started after the database and this is the administrator's job to configure the system accordingly.

> - I accidentally started sqlgrey (through /etc/init.d/sqlgrey start) as a non-root user. The init-script said:
>
> 'Starting SQLgrey: Pid_file "/var/run/sqlgrey.pid" already exists. Overwriting!'
>
> and then '[ OK ]'. However, it was of course not OK, sqlgrey died with a permission-denied on the PID file, did not tell me however...

This is related to the problem above: SQLgrey doesn't exit with an error today. I'll have to look at whether I can make the appropriate checks *before* it forks to daemonize itself.

Thanks for the bug-reports. I hope all of you found pleasant surprises under the tree, enjoyed some good time and delicious meals :-)

Lionel (back from numerous enjoyable and delicious meals).
From: HaJo S. <ha...@ha...> - 2004-12-24 04:40:06
Hi Lionel,

Just discovered two start-up issues with sqlgrey while doing a reboot of my mail server:

- During boot, sqlgrey couldn't access postgresql yet (DBI returned "db is starting up"). sqlgrey then tried to create the from_awl table, failed (db still starting up) and died silently (rather than, what would IMO have been correct, retrying after a while). I have attached the relevant part of the log below fyi.

- I accidentally started sqlgrey (through /etc/init.d/sqlgrey start) as a non-root user. The init-script said:

'Starting SQLgrey: Pid_file "/var/run/sqlgrey.pid" already exists. Overwriting!'

and then '[ OK ]'. However, it was of course not OK, sqlgrey died with a permission-denied on the PID file, did not tell me however...

Merry Christmas,
HaJo

----------[maillog snippet]---------------
Dec 23 23:19:18 sun sqlgrey[3691]: Process Backgrounded
Dec 23 23:19:18 sun sqlgrey[3691]: 2004/12/23-23:19:17 sqlgrey (type Net::Server::Multiplex) starting! pid(3691)
Dec 23 23:19:20 sun sqlgrey[3691]: Binding to TCP port 2501 on host localhost
Dec 23 23:19:20 sun sqlgrey[3691]: Group Not Defined.  Defaulting to EGID '0'
Dec 23 23:19:20 sun sqlgrey[3691]: Setting uid to "91"
Dec 23 23:19:21 sun sqlgrey[3691]: Can't connect to DB: FATAL: The database system is starting up
Dec 23 23:19:21 sun sqlgrey[3691]: Can't connect to DB: FATAL: The database system is starting up
Dec 23 23:19:21 sun sqlgrey[3691]: Warning: couldn't do query: SELECT 1 from from_awl LIMIT 0: FATAL: The database system is starting up, reconnecting to DB
Dec 23 23:19:22 sun sqlgrey[3691]: Can't connect to DB: FATAL: The database system is starting up
Dec 23 23:19:22 sun sqlgrey[3691]: Can't connect to DB: FATAL: The database system is starting up
Dec 23 23:19:22 sun sqlgrey[3691]: Warning: couldn't do query: CREATE TABLE from_awl (sender_name varchar(64) NOT NULL, sender_domain varchar(255) NOT NULL, host_ip varchar(15) NOT NULL, last_seen timestamp NOT NULL, PRIMARY KEY (sender_name, sender_domain, host_ip));: FATAL: The database system is starting up, reconnecting to DB
Dec 23 23:19:22 sun sqlgrey[3691]: Can't connect to DB: FATAL: The database system is starting up
Dec 23 23:19:23 sun sqlgrey[3691]: fatal: Couldn't create table from_awl: FATAL: The database system is starting up
Dec 23 23:36:42 sun postfix/smtpd[7424]: warning: connect to 127.0.0.1:2501: Connection refused

--
HaJo Schatz <ha...@ha...>
http://www.HaJo.Net
PGP-Key: http://www.hajo.net/hajonet/keys/pgpkey_hajo.txt
From: Josh E. <jo...@en...> - 2004-12-20 19:42:12
Lionel Bouton wrote:
| I was wondering where this '%' was coming from :-) I'll have to test if
| I can make SQLgrey output a more useful error message though.
| What was the permission problem exactly ?

I initially added sqlgrey@127.0.0.1, but later realized the web and MySQL servers are separate hosts (been a busy day), so I changed 127.0.0.1 to % and flushed privileges. Effectively, the user sqlgrey@% ('%' is the wildcard for MySQL, though you probably knew that) existed but access to the db was for @127.0.0.1. I'm not sure if you can catch that or not, but the error mentioned Syslog.pm, which threw me off at first.

Josh
From: Lionel B. <lio...@bo...> - 2004-12-20 19:09:42
Josh Endries wrote the following on 12/20/04 18:50:

> Josh Endries wrote:
> | $ sqlgrey -d
> | Invalid conversion in sprintf: "%'" at
> | /usr/local/lib/perl5/5.8.5/mach/Sys/Syslog.pm line 312.
> | Can't call method "do" on unblessed reference at /usr/bin/sqlgrey
> | line 91.
>
> Oops, it was a MySQL ACL problem, sorry for the false alarm.

I was wondering where this '%' was coming from :-) I'll have to test if I can make SQLgrey output a more useful error message though.

What was the permission problem exactly?

Lionel.
From: Josh E. <jo...@en...> - 2004-12-20 17:50:54
Josh Endries wrote:
| $ sqlgrey -d
| Invalid conversion in sprintf: "%'" at
| /usr/local/lib/perl5/5.8.5/mach/Sys/Syslog.pm line 312.
| Can't call method "do" on unblessed reference at /usr/bin/sqlgrey
| line 91.

Oops, it was a MySQL ACL problem, sorry for the false alarm.

Josh
From: Josh E. <jo...@en...> - 2004-12-20 17:43:47
Hi all,

I'm trying to get SQLgrey running on a new FreeBSD 5 system. I just installed Perl 5.8 and then SQLgrey and get this when I try to run it:

$ sqlgrey -d
Invalid conversion in sprintf: "%'" at /usr/local/lib/perl5/5.8.5/mach/Sys/Syslog.pm line 312.
Can't call method "do" on unblessed reference at /usr/bin/sqlgrey line 91.

Does SQLgrey support Perl 5.8, or what else might be wrong?

Josh
From: <wi...@po...> - 2004-12-20 00:12:33
Lionel Bouton:

> Hum, pleasant situation, guess I'll have to mark some of the greylisting algorithms as experimental with IPv6 in SQLgrey for a while...
> What were the people behind IPv6 thinking when they described address representations? Did nobody tell them that DNS was designed just to solve the human-representation problems they didn't solve?

Since there is no one-to-one relationship between symbolic names and numerical addresses, both forms will be needed. One name can resolve to multiple addresses, and one address can have multiple names. In addition, access controls based on third-party information from the DNS will always be less secure than access control based on numerical addresses only. So, for some purposes the numerical address form is preferred.

Wietse
From: Lionel B. <lio...@bo...> - 2004-12-19 23:18:24
Wietse Venema wrote the following on 12/19/04 23:39:

>> In short, do I need to implement a full-fledged IPv6 address parser when I try to manipulate IPv6 addresses, or is the format used a subset more usable by a computer?
>
> No official Postfix release has IPv6 support.
>
> The third-party code that I am building into Postfix, and that will hopefully ship with Postfix 2.2 when it becomes the official release, uses the inet_ntop() routine.

Thanks for the details, it could very well help me debug some odd problems in the near future :-).

> Since I have no plans to bypass system library routines, Postfix's result of address to string conversion will be whatever the local inet_ntop() implementation produces.

Makes sense. Policy daemons can then call inet_pton and handle the mess from this point.

> This is a member of a relatively new group of functions, and the manual pages do not say much about the exact output format of IPv4-in-IPv6 addresses.

Hum, pleasant situation, guess I'll have to mark some of the greylisting algorithms as experimental with IPv6 in SQLgrey for a while...

What were the people behind IPv6 thinking when they described address representations? Did nobody tell them that DNS was designed just to solve the human-representation problems they didn't solve?

In the end it makes more sense to deal directly with the original binary representations.

Lionel.
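"Call inet_pton and handle the mess from this point" amounts to normalising whatever textual form the MTA hands over into one canonical spelling before using it as a database key. A rough Perl sketch of that idea (assuming a Socket module that exports inet_pton/inet_ntop; on the Perls of that era the Socket6 module provided them instead):

    use strict;
    use warnings;
    use Socket qw(AF_INET AF_INET6 inet_pton inet_ntop);

    # Convert any textual IPv4/IPv6 form into one canonical representation,
    # so "0:0:0:0:0:0:0:1" and "::1" end up as the same database key.
    sub canonical_ip {
        my ($text) = @_;
        for my $family (AF_INET, AF_INET6) {
            my $packed = inet_pton($family, $text);
            return inet_ntop($family, $packed) if defined $packed;
        }
        return undef;    # not a parsable address
    }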
From: Klaus A. S. <kse...@gm...> - 2004-12-19 22:04:31
Lionel Bouton wrote:

> (I'm not sure of the layout used by Postfix, in fact I'm not sure Postfix decides the layout, it might be the libc).

No matter what IPv6 address I use for telnet'ing to my MX (currently also my workstation), it appears as ::1, so I'm afraid I cannot help much. Here's how it looks in the Received header:

#v+
Received: from ip6-localhost (unknown [IPv6:::1])
        by mx.szn.dk (Postfix) with ESMTP id C56448F7CF
        for <kl...@se...>; Sun, 19 Dec 2004 22:56:51 +0100 (CET)
#v-

And since this is localhost, it doesn't go through SQLgrey.

Cheers,

--
Klaus Alexander Seistrup
Copenhagen · Denmark
From: Klaus A. S. <kse...@gm...> - 2004-12-19 21:54:26
Lionel Bouton wrote:

> BTW, which Postfix version are you using?

I'm using Postfix 2.1.5 from Debian/unstable (Ubuntu/hoary). It supports IPv6 out of the box (and I'm pretty sure that at least some earlier versions did as well).

> It seems IPv6 is only available through patches. I've no answer yet from the Postfix mailing-list (just posted my questions), but it could mean that IPv6 isn't supported yet when calling a policy daemon.

I'm quite sure I've seen an IPv6 address in an earlier greylisting dæmon (I'm afraid it was PostGrey, though, but I haven't got any log files to support it).

> Until IPv6 is officially supported, not advertising any IPv6 MX shouldn't cause any harm (all public IPv6 hosts I'm aware of do speak IPv4 as well...).

Either that, or just praying that spammers or viruses won't connect over IPv6. ;-)

Cheers,

--
Klaus Alexander Seistrup
Copenhagen · Denmark
From: Lionel B. <lio...@bo...> - 2004-12-19 21:46:21
Lionel Bouton wrote the following on 12/19/04 22:04:

> I'll ask on the Postfix mailing-list...

BTW, which Postfix version are you using?

It seems IPv6 is only available through patches. I've no answer yet from the Postfix mailing-list (just posted my questions), but it could mean that IPv6 isn't supported yet when calling a policy daemon.

Until IPv6 is officially supported, not advertising any IPv6 MX shouldn't cause any harm (all public IPv6 hosts I'm aware of do speak IPv4 as well...).

Lionel.
From: Klaus A. S. <kse...@gm...> - 2004-12-19 21:33:19
Lionel Bouton wrote:

>> How does SQLgrey handle IPv6 addresses?
>
> I believe badly...
> I must admit I never thought of the IPv6 case.

Heheh... :-)

> I've not yet thought much about this but the main problem is that the current address fields aren't large enough to store every IPv6 address.

I'm using an SQLite (v2) db. I think SQLite supports "unlimited" storage for strings if using TEXT instead of VARCHAR. Well, even the size of the VARCHAR is not respected by SQLite, so I would expect an SQLite db to survive IPv6 addresses. However, ...

> The second problem is in the smart and classc greylisting algorithms, ...

Yes.

> Unfortunately, I've zero access to IPv6 networks so I can't test any of these.

Neither have I, except my own server (and all connections appear to come from ::1). However, I hope that other list members do, and will mail me...

> What I can do is change the table layouts in 1.5.x to handle the maximum IPv6 address representation size (39 bytes). You should then switch to the 'full' greylisting method and send me examples of IPv6 entries (I'm not sure of the layout used by Postfix, in fact I'm not sure Postfix decides the layout, it might be the libc).

Hm, yes, I think "full" greylisting will have to be the answer in that case, I didn't think of that before.

> I'll ask on the Postfix mailing-list...

Meanwhile I'll look for IPv6 connections to my MX.

Cheers,

--
Klaus Alexander Seistrup
Copenhagen · Denmark
From: Lionel B. <lio...@bo...> - 2004-12-19 21:04:43
Klaus Alexander Seistrup wrote the following on 12/19/04 20:37:

> Hi,
>
> Just wondering . . . My primary MX, mx.szn.dk, accepts connections on both IPv4 and IPv6. How does SQLgrey handle IPv6 addresses?

I believe badly... I must admit I never thought of the IPv6 case.

I've not yet thought much about this, but the main problem is that the current address fields aren't large enough to store every IPv6 address. The simplest thing for SQLgrey is to take the input it gets from Postfix and put it directly in the address field.

The second problem is in the smart and classc greylisting algorithms: they'll have to be IPv6-aware (what's a rough equivalent of a class C network in v6? Can it be made of all IPv6 addresses with only the last 2 bytes modified, or are there more subtle things to consider? Think of the representation of IPv4 addresses encapsulated in IPv6, for example).

Unfortunately, I've zero access to IPv6 networks so I can't test any of these. What I can do is change the table layouts in 1.5.x to handle the maximum IPv6 address representation size (39 bytes). You should then switch to the 'full' greylisting method and send me examples of IPv6 entries (I'm not sure of the layout used by Postfix, in fact I'm not sure Postfix decides the layout, it might be the libc).

I'll ask on the Postfix mailing-list...

Lionel.
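One possible IPv6 analogue of the "class C" grouping asked about above is to key the whitelist on a /64 prefix instead of the full address. The prefix length is purely an assumption for illustration and is not what SQLgrey necessarily ended up doing; a Perl sketch:

    use strict;
    use warnings;
    use Socket qw(AF_INET6 inet_pton inet_ntop);

    # Reduce an IPv6 address to its /64 network, a rough moral equivalent
    # of dropping the last byte of an IPv4 "class C" address.
    sub ipv6_net64 {
        my ($text) = @_;
        my $packed = inet_pton(AF_INET6, $text);
        return undef unless defined $packed;
        my $net = substr($packed, 0, 8) . ("\0" x 8);   # zero the host part
        return inet_ntop(AF_INET6, $net) . '/64';
    }

    # ipv6_net64('2001:db8:1:2:abcd::1') yields '2001:db8:1:2::/64'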