|
From: Paul B. <pa...@hy...> - 2007-01-19 15:56:14
|
I am about to start up SQLgrey and I would like to "pre"-load the "domain_awl" and "from_awl" tables from my old Postfix logs and usage. I know I can add a server/IP to my whitelist files, but I would rather they go into the AWL so they can expire if need be. Other than just adding rows to the database, is there a better way to do this? Can I somehow submit them to the daemon?

Paul
|
From: Kasey S. <ksp...@as...> - 2007-01-19 16:04:02
|
If you're using the same MySQL database as before, it should already be there. The logs only record what is inserted into the database, so losing the logs will not affect your domain_awl, as long as the database still exists. If you're setting up a new MySQL server too, then just mysqldump the old database and restore it into the new server.

On Jan 19, 2007, at 9:43 AM, Paul Barbeau wrote:
> I am about to start up SQLGREY and I would like to "pre" load the "domain_awl" and "from_awl" from my old postfix logs and usage. [...]
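For reference, Kasey's dump-and-restore step might look something like the following. The database name, user, and host are placeholders, and the exact flags you need (credentials, character set, locking options) depend on your setup, so treat this as a sketch rather than a recipe:

```
# Dump the old SQLgrey database (hypothetical name "sqlgrey")...
mysqldump -u sqlgrey -p sqlgrey > sqlgrey.sql

# ...and restore it into the new server.
mysql -h new-db-host -u sqlgrey -p sqlgrey < sqlgrey.sql
```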
|
From: Paul B. <pa...@hy...> - 2007-01-19 16:14:29
|
I was not using greylisting before at all, so this is first-time data. What I am pulling out of the logs is from the Postfix lines: sender, recipient, and server, which is what would normally have been passed to the daemon if I had been using it. I know they will go into the "connect" table first, but based on volume I can move them over, as I know what should have been added.

Hope this clears up what I meant.

Paul
|
From: Lionel B. <lio...@bo...> - 2007-01-19 17:02:42
|
Paul Barbeau wrote the following on 19.01.2007 17:12 :
> What I am pulling out of the logs is from the postfix lines and they are sender, recipient, server [...]

You'll have to use your favorite language to script it. By greylisting's design there is no sure way of telling the good from the bad by reading Postfix logs, so you'll have to code your own heuristics adapted to your own case. Inserting the lines you want into the right table is the easiest part: it's only a matter of connecting to the database and doing INSERTs. You can even prepare the INSERTs in a file and feed it to the psql or mysql clients, to avoid coding database access in your script.

In fact, if I were you I wouldn't even bother. Greylisting only adds a small delay that is often not noticeable, even without auto-whitelists. The delay is usually around 20 minutes, so unless you expect a mail at a very precise moment, or the sender just called to check if the email got through, nobody is even aware that greylisting took place. Even people with huge mail servers didn't have any problem letting SQLgrey handle the task automatically, so why should you?

Lionel.
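For anyone scripting the pre-load Lionel describes, a minimal sketch of the kind of INSERT file that could be fed to the mysql or psql client. The column names (sender_name, sender_domain, src, first_seen, last_seen) and the truncated form of the src value are assumptions about the SQLgrey schema; check your own tables (e.g. with DESCRIBE from_awl) before running anything like this:

```
-- Hypothetical pre-load of from_awl; verify column names against your
-- own SQLgrey database first. SQLgrey may store src as a truncated
-- form of the client IP, so copy the format of existing rows.
INSERT INTO from_awl (sender_name, sender_domain, src, first_seen, last_seen)
VALUES ('paul', 'example.com', '192.0.2', NOW(), NOW());

-- Domain-level entry (no sender_name column in domain_awl).
INSERT INTO domain_awl (sender_domain, src, first_seen, last_seen)
VALUES ('example.com', '192.0.2', NOW(), NOW());
```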
|
From: Dan F. <da...@ha...> - 2007-01-19 17:28:25
|
I agree with Lionel. It's best letting SQLgrey do all this, since you really can't tell who the valid senders are if you don't temporarily reject first and then see who comes back (which is what greylisting does).

If you are worried about delaying all mail for 5 minutes on the day you enable SQLgrey, consider using the "discrimination" feature from 1.7.4+. That will let you, via regular expressions, select what to greylist and what to just let through. It's a good way of slowly introducing greylisting, by making the discrimination rules more and more restrictive.

Also, I got this wild idea off the top of my head (note: my wild ideas aren't always recommended :)): you could let SQLgrey learn over a month or two. I haven't tried exactly what you need, but I imagine something like this would work. With version 1.7.4+ you can set the "reject_code". By setting reject_code = dunno, SQLgrey performs as usual, except nothing gets rejected. Then, by changing "reconnect_delay" to something really low (I don't know if you can use "0" here) and "max_connect_age" to something really high, everyone who sends more than 1 mail within the time specified by "max_connect_age" will be treated as a valid sender and added to "from_awl". After a while, simply set reject_code, max_connect_age and reconnect_delay back to their defaults.

A word of warning: this will result in a large "connect" table. Also, you will likely get a stack of spammers added to your from_awl as well, who will have to expire. As I said, it's a wild idea ;)

- Dan Faerch

Paul Barbeau wrote:
> I was not using grey listing before at all so this is first time data. [...]
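Dan's "learning mode" might look something like this in sqlgrey.conf. The parameter names come straight from his mail, but the values, units, and exact syntax are guesses; check the sqlgrey.conf shipped with your 1.7.4+ install before using any of it:

```
# Hypothetical learning-mode settings, per Dan's description.

# Answer "dunno" instead of rejecting, so nothing is actually delayed
# while the auto-whitelists fill up:
reject_code = dunno

# Accept retries almost immediately, and keep connect entries around
# for a long time, so any sender seen twice lands in from_awl
# (values and units are illustrative only):
reconnect_delay = 1
max_connect_age = 720

# Afterwards, restore the defaults to start greylisting for real.
```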
|
From: Paul B. <pa...@hy...> - 2007-01-19 18:14:38
|
Thanks for the feedback. For the last week we have already been running it with one of our domains, and have noticed that some servers (Bell Canada) retry in 3 hours, not 20 minutes, so that delay is getting a bit up there. Also, about 6 months ago we tried Postgrey and had to pull it because of the delays, so I am a bit gun-shy. With the SQL interface we can keep a better eye on what is in the connect table, so I think I will enter the ones I know are good via SQL INSERT and wing it for the rest.

Thanks again for the input, everyone.

Paul

PS: I am not running 1.7.4, as I got an error and am not sure if it was important; it did load, however. The error is below:

Name "DBIx::DBCluster::DEBUG" used only once: possible typo at /usr/sbin/sqlgrey line 2413.
Name "DBIx::DBCluster::WRITE_HOSTS_NEVER_READ" used only once: possible typo at /usr/sbin/sqlgrey line 827.
Name "DBIx::DBCluster::CLUSTERS" used only once: possible typo at /usr/sbin/sqlgrey line 818.

1.6.7 works, so it became the path of least resistance. I am not using DBCluster, I am using DBI, and I have tried running "make install" off a clean download as well as "make use_dbi && make install", and I still get the error.
|
From: Dave S. <dst...@ma...> - 2007-01-19 18:43:34
|
We are experimenting here with memcached, with good results on other SQL-based PHP software.

I would like to try it with Perl, and have the API, but am having trouble setting variables into memcached, and could use some help.

I am using code that takes a SQL command string, turns it into an MD5 hash, then sees if it can get the data from memcached. If not, it looks it up in SQL, and then stores it in memcached.

Anyone tried this? I have *NO* idea what I am doing with Perl since I am just a PHP guy.

Thanks!
sub get_memcached_object {
    my ($sql) = @_;

    # Hash the SQL string to use as a cache key.
    use Digest::MD5;
    my $md5 = Digest::MD5->new;
    $md5->add($sql);
    my $md5_id = $md5->hexdigest;
    print "SQL hashed to $md5_id\n";

    # Cache::Memcached is used through an instance, not as a class.
    use Cache::Memcached;
    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    my $obj = $memd->get($md5_id);
    if ($obj) {
        print "$md5_id found in memcached\n";
        return $obj;
    }

    # Not cached: fetch the row from the database.
    use DBI;
    my $dbh = DBI->connect(
        'DBI:Pg:dbname=v3_mw_filter_rules;host=our_hostname',
        'our_username', 'ourpassword',
        { PrintError => 0, AutoCommit => 1, InactiveDestroy => 1 },
    );
    print "Requesting from SQL server: $sql\n";
    $obj = $dbh->selectrow_hashref($sql);

    print "Setting object in memcached: $obj\n";
    $memd->set($md5_id, $obj);

    return $obj;
}
|
|
From: Lionel B. <lio...@bo...> - 2007-01-19 19:21:01
|
Dave Strickler wrote the following on 19.01.2007 19:41 :
> I am using code that takes a SQL command string, turns it into an MD5 hash, then sees it if can get the data from memcached. [...]

I've used memcached, but with Ruby, not Perl. I have two code-related comments:

- You shouldn't use MD5 to cache statements: it's inefficient, and theoretically you could get a collision (sometime in this century...). The good practice is to wrap the code that fetches and sets data in the DB so that it accesses memcached with a unique key (you can reuse the primary key used by SQLgrey). For example, in the "is_in_from_awl" method you could check for the presence of the key "from_awl|<sender_name>|<sender_domain>|<src>", which would be expected to store the "last_seen" value stored in the DB; if it is not found, ask the DB. For the key I use '|' as a separator, and the table and primary-key column names as elements, to make sure the key is unique in memcached. Then in "put_in_from_awl" you'd build the key the same way and put the last_seen value with an expiration of "$self->past_tstamp($self->{sqlgrey}{awl_age})" days. Don't forget to put and check the timestamp values, or changing your delays in the configuration would have no effect on memcached-stored entries.

- You *must* handle cache expiration (by making entries expire, explicitly deleting them, or checking their values for validity). By only wrapping the statements as you do in your code, you don't handle DELETEs properly; the method described above should be OK. Note: when you want to handle the connect table, you'll definitely have to delete entries in memcached when they are moved to the auto-whitelist.

On principle, I'm not sure you would gain much from using memcached. When you don't find a piece of information in memcached, you assume it isn't there and check the DB instead, so a large subset of the queries made by SQLgrey will still hit the DB. You could alleviate the problem by storing negative hits in memcached, but then you'd have to expire them properly too (and there should be so many of them, nearly never reused, that you could end up evicting more useful memcached content when adding them).

If it works for you, could you please bench the results with and without your modification (the average load on a mail server with a local DB, with and without your patch)? I don't think we have any performance problem even with large mail systems, but if we get a good performance boost, I'll definitely consider adding your patch.

Lionel.
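Lionel's key-based scheme is essentially the classic cache-aside pattern. A small, self-contained sketch of the idea in Python, with a plain dict standing in for a real memcached client and hypothetical table/column names, showing unique key construction, expiration, and fallback to the database on a miss:

```python
import time

# A dict stands in for memcached; values are (expires_at, payload)
# pairs so entries can expire like memcached TTLs.
cache = {}

def awl_key(sender_name, sender_domain, src):
    # Unique key: table name plus primary-key columns, '|'-separated,
    # as Lionel suggests.
    return "|".join(["from_awl", sender_name, sender_domain, src])

def cache_get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    expires_at, payload = entry
    if time.time() > expires_at:
        del cache[key]          # expired: behave like a miss
        return None
    return payload

def cache_set(key, payload, ttl):
    cache[key] = (time.time() + ttl, payload)

def is_in_from_awl(sender_name, sender_domain, src, db_lookup, ttl=3600):
    """Cache-aside read: try the cache first, fall back to the DB."""
    key = awl_key(sender_name, sender_domain, src)
    hit = cache_get(key)
    if hit is not None:
        return hit
    value = db_lookup(sender_name, sender_domain, src)  # e.g. last_seen
    if value is not None:
        cache_set(key, value, ttl)   # positive hits only; see Lionel's
    return value                     # note about negative hits

# Usage with a fake DB lookup (hypothetical data):
db = {("paul", "example.com", "192.0.2"): "2007-01-19 12:00:00"}
lookup = lambda n, d, s: db.get((n, d, s))
print(is_in_from_awl("paul", "example.com", "192.0.2", lookup))
```

Note that, exactly as Lionel warns, this sketch does not cache negative hits, so misses always go to the database, and any DELETE against the real table would need a matching delete against the cache key.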
|
|
From: Dave S. <dst...@ma...> - 2007-01-19 19:46:30
|
Lionel,

Thanks for the *excellent* advice. I will certainly be watching for all these things.

As far as speeding up SQLgrey goes, I'm not sure memcached will either, but I know there's only one way to find out for sure, and I'm working on seeing if it does. I think caching the connect table seems silly, but the AWL table seems like a good choice.

I'm not sure if this is the right place to ask, but do you do any customizing of SQLgrey on an hourly-rate basis? If so, we'd like to hire someone to write and test a patch for us, with the understanding that we would share the results and the code with the community.

Thanks,

Dave Strickler
MailWise LLC
617-933-5810 (direct)
dst...@ma...
www.mailwise.com ( http://www.mailwise.com/ )
"Intelligent E-mail Protection"
|
From: Lionel B. <lio...@bo...> - 2007-01-19 20:16:46
|
Dave Strickler wrote the following on 19.01.2007 20:41 :
> I'm not sure if this is the right place to ask, but do you do any customizing of SQLGrey on an hourly-rate basis? [...]

I'm afraid I can't (yet). To do that in France you must set up a business, which is not the quickest and easiest thing to do (as soon as you earn money you must pay taxes and insurance, and you need a structure to do that; I just learned that the bare minimum of taxes is around 1500 €/year). I'm currently leaving my employer (I work for them until April) and I'm looking for a new job while setting up a small business of my own. Then I'll be able to work as a gun for hire part of the time.

But if the patch is not too big, you could ask here for it. Someone else might be interested, or even willing to do it for free :-)

Lionel
|
From: Dave S. <dst...@ma...> - 2007-01-19 21:30:16
|
1,500 Francs for filing paperwork - Ouch! Someone in the government is getting rich ;-)

Well, when you're out on your own, make sure to let the list know. I'll be watching...

In the meantime, anyone interested in helping out? I'm happy to pay a reasonable consulting fee. I really just need some Perl expertise, and someone familiar with SQLgrey would be great...

Dave Strickler
MailWise LLC
617-933-5810 (direct)
www.mailwise.com ( http://www.mailwise.com/ )
"Intelligent E-mail Protection"
|
From: Lionel B. <lio...@bo...> - 2007-01-19 21:56:29
|
Dave Strickler wrote the following on 19.01.2007 22:29 :
> 1,500 Francs for filing paperwork - Ouch! Someone in the government is getting rich ;-)

In fact these were euros, nearly $2000/year. Numerous people get money through this system, and it is probably less than efficient, but this is the price to pay for life-long medical/unemployment insurance.

Lionel
|
From: Dave S. <dst...@ma...> - 2007-01-20 19:52:36
|
{slaps head} Of course ! Please label me "the Ignorant American". ;-)
=20
And yes, that is a steep price, but sure, there are many benefits.=20
=20
Dave Strickler
MailWise LLC
617-933-5810 (direct)
www.mailwise.com ( http://www.mailwise.com/ )
"Intelligent E-mail Protection"
>>> Lionel Bouton <lio...@bo...> 4:56 PM Friday, January 19, =
2007 >>>
Dave Strickler wrote the following on 19.01.2007 22:29 :
> 1,500 Francs for filing paperwork - Ouch! Someone in the government is
> getting rich ;-)
In fact these were euros. Nearly $2000/year. Numerous people get money
through this system and it is probably less than efficient, but this is
the price to pay to get life-long medical/unemployment insurance.
Lionel
|
|
From: Dan F. <da...@ha...> - 2007-01-20 14:10:17
|
Paul Barbeau wrote:
> PS I am not running 1.7.4 as I got an error and am not sure if it was
> important; however, it did load. The error is below
>
> Name "DBIx::DBCluster::DEBUG" used only once: possible typo at
> /usr/sbin/sqlgrey line 2413.
> Name "DBIx::DBCluster::WRITE_HOSTS_NEVER_READ" used only once: possible
> typo at /usr/sbin/sqlgrey line 827.
> Name "DBIx::DBCluster::CLUSTERS" used only once: possible typo at
> /usr/sbin/sqlgrey line 818.
>
> 1.6.7 works so it became the path of least resistance. I am not using
> DBCluster, I am using DBI, and I have tried just running "make install"
> off a clean download as well as "make use_dbi && make install" and I
> still get the error.

Alright, alright.. I'll fix this :) It's a harmless warning, but if it
means people won't use this version, I'll make a hack to hide these and
release it as 1.7.5.

- Dan
|
|
From: Dan F. <da...@ha...> - 2007-01-20 18:12:11
Attachments:
sqlgrey-trivial.patch
|
I need a helping hand..

>> Name "DBIx::DBCluster::DEBUG" used only once: possible typo at
>> /usr/sbin/sqlgrey line 2413.
>> Name "DBIx::DBCluster::WRITE_HOSTS_NEVER_READ" used only once: possible
>> typo at /usr/sbin/sqlgrey line 827.
>> Name "DBIx::DBCluster::CLUSTERS" used only once: possible typo at
>> /usr/sbin/sqlgrey line 818.

I can't get these warnings to appear on any of my computers (no idea
why), so I can't verify that my patch makes them go away.

Could someone who actually gets these warnings apply the attached patch
and check that they disappear?

$ patch /path/to/your/sqlgrey < sqlgrey-trivial.patch

- Dan
|
|
From: Lionel B. <lio...@bo...> - 2007-01-20 18:29:47
|
Dan Faerch wrote the following on 20.01.2007 19:11 :
> I need a helping hand..
>>> Name "DBIx::DBCluster::DEBUG" used only once: possible typo at
>>> /usr/sbin/sqlgrey line 2413.
>>> Name "DBIx::DBCluster::WRITE_HOSTS_NEVER_READ" used only once: possible
>>> typo at /usr/sbin/sqlgrey line 827.
>>> Name "DBIx::DBCluster::CLUSTERS" used only once: possible typo at
>>> /usr/sbin/sqlgrey line 818.
> I can't get these warnings to appear on any of my computers (no idea
> why), so I can't verify that my patch makes them go away.
>
> Could someone who actually gets these warnings apply the attached patch
> and check that they disappear?
>
> $ patch /path/to/your/sqlgrey < sqlgrey-trivial.patch

I did get these warnings and now they disappeared, so WorksForMe(tm).

Lionel.
|
|
From: Steve P. <st...@co...> - 2007-01-20 19:35:04
|
I always see these messages every time I reboot. As I recall, they only
occur if you have db clustering turned off. (May be some other option,
it's been a while.)

-Steve

On Jan 20, 2007, at 10:29 AM, Lionel Bouton wrote:
> Dan Faerch wrote the following on 20.01.2007 19:11 :
>> I need a helping hand..
>>>> Name "DBIx::DBCluster::DEBUG" used only once: possible typo at
>>>> /usr/sbin/sqlgrey line 2413.
>>>> Name "DBIx::DBCluster::WRITE_HOSTS_NEVER_READ" used only once: possible
>>>> typo at /usr/sbin/sqlgrey line 827.
>>>> Name "DBIx::DBCluster::CLUSTERS" used only once: possible typo at
>>>> /usr/sbin/sqlgrey line 818.
>> I can't get these warnings to appear on any of my computers (no idea
>> why), so I can't verify that my patch makes them go away.
>>
>> Could someone who actually gets these warnings apply the attached patch
>> and check that they disappear?
>>
>> $ patch /path/to/your/sqlgrey < sqlgrey-trivial.patch
>
> I did get these warnings and now they disappeared, so WorksForMe(tm).
>
> Lionel.
|
|
From: Dan F. <da...@ha...> - 2007-01-20 20:38:56
|
Steve Pellegrin wrote:
> I always see these messages every time I reboot. As I recall, they
> only occur if you have db clustering turned off. (May be some other
> option, it's been a while.)

Well, sorta. It happens when the db clustering module isn't loaded.
Since the options I set are actually properties of DBIx::DBCluster, and
it isn't loaded, each becomes a simple variable that is never used
again, which then gives the warning "used only once: possible typo..".

It only happens when using DBI instead of DBIx::DBCluster. So even if
the config has db_cluster=off, as long as sqlgrey is installed with
"make db_cluster" you won't get the warnings.

The warnings will be gone in 1.7.5. The warnings have NO effect on
sqlgrey's normal functions.

- Dan
|
|
From: Dan F. <da...@ha...> - 2007-01-20 20:38:59
|
Lionel Bouton wrote:
> I did get these warnings and now they disappeared, so WorksForMe(tm).

Great, thanks for testing. I'll commit it to CVS then.. Since I don't
feel such a small change warrants a new release, I'll see if I can find
time to look at some of the issues/feature requests reported on the
tracker (no guarantees though). Especially:

[ 1613916 ] sqlgrey 1.6.7 dies if syslog daemon is down
- I've confirmed this on our setup. I think it's a serious problem.

[ 1574884 ] db_clean_hostname and HOSTNAME
- Non-critical problem, but it's not doing exactly what it's supposed
to. I'm thinking about maybe redesigning the way this works so you can
disable automatic db-clean entirely, and THEN run it as a cronjob from
one host. E.g. 'sqlgrey --db-clean'.

[ 1580029 ] MX and SPF checking
- SPF checking would be really cool :). Anyone who knows which spf-lib
for Perl is the "standard"?

Comments & ideas (and coding help) are welcome. If anyone decides
they'll look at one of these, please let me know so I'll postpone
looking at that specific item.

- Dan
|
|
From: Lionel B. <lio...@bo...> - 2007-01-20 21:10:22
|
Dan Faerch wrote the following on 20.01.2007 21:38 :
> Lionel Bouton wrote:
>
>> I did get these warnings and now they disappeared, so WorksForMe(tm).
>
> Great, thanks for testing. I'll commit it to CVS then.. Since I don't
> feel such a small change warrants a new release, I'll see if I can find
> time to look at some of the issues/feature requests reported on the
> tracker (no guarantees though). Especially:
>
> [ 1613916 ] sqlgrey 1.6.7 dies if syslog daemon is down
> - I've confirmed this on our setup. I think it's a serious problem.

Agreed. I'm leaving for holidays, but as soon as I have some time
(probably around January the 31st) I'll issue a 1.6.8 with a fix for
this one.

> [ 1574884 ] db_clean_hostname and HOSTNAME
> - Non-critical problem, but it's not doing exactly what it's supposed
> to. I'm thinking about maybe redesigning the way this works so you can
> disable automatic db-clean entirely, and THEN run it as a cronjob from
> one host. E.g. 'sqlgrey --db-clean'.

If I were you, I'd store the last cleanup timestamp in the database.
SQLgrey would maintain a local cache of the value stored in the DB (to
avoid asking it on each mail). Then, when an instance of SQLgrey found
that the cleanup was due according to its local cache, it would begin by
verifying that its local cache is up to date (if not, it replaces the
cache value and tests it again). If the cleanup is indeed due, it would
update the value in the DB and then call the cleanup procedure -> zero
conf needed (I like avoiding configuration; software should just work
given the bare minimum of information, even if it means more work for
the developer).

You could have race conditions where 2+ SQLgrey instances want to update
at the very same time. This is probably not a problem as:
- it should be very rare,
- it can only marginally slow down the database, not corrupt it in any way,
- each SQLgrey instance won't notice any problem and everybody will
continue happily.

There's even a solution using serializable transactions to make sure
that only one SQLgrey instance will trigger the cleanup if you have a
transactional DBMS (PostgreSQL, or MySQL with InnoDB, probably 5.x for
serializable transactions).

As a positive side-effect, SQLgrey can stop cleaning the database each
time it is started...

> [ 1580029 ] MX and SPF checking
> - SPF checking would be really cool :). Anyone who knows which spf-lib
> for Perl is the "standard"?

Hum, you do what you want with 1.7.x, but I consider SPF a dying horse
begging for someone to put an end to its misery... Any DNS query is a
problem with SQLgrey as the process serializes Postfix requests. MX
checking would be interesting but you'll have to solve the serialization
problem.

Lionel.
|
|
From: Dan F. <da...@ha...> - 2007-01-20 21:53:21
|
Lionel Bouton wrote:
> Agreed. I'm leaving for holidays, but as soon as I have some time
> (probably around January the 31st) I'll issue a 1.6.8 with a fix for
> this one.

Ok, sounds great.

>> [ 1574884 ] db_clean_hostname and HOSTNAME
>> .............
>> If I were you, I'd store the last cleanup timestamp in the database.

Not a bad idea at all.. Except for the race condition you mention and
having to change the db schema, which you explicitly told me not to do ;)

>> - it can only marginally slow down the database, not corrupt it in any way

Yeah.. No corruption should be possible when multiple cleanups are made
at the same time. But it does put heavy load on the SQL server and
sqlgrey on our servers during cleanup. Also, AFAIK, sqlgrey is
unresponsive during cleanup, a problem that gave me a lot of trouble
while I was running postgrey. And using db_clustering just makes the
"connect" table so much bigger..

For me, a cronjob would be great, since we are redesigning our
mailcluster so that there is a "master" controller box and a lot of
slaves (simplified). The controller will run the SQL master and LVS
clustering. The master will no longer run postfix and sqlgrey and so
forth.. Here it would be great to have a cronjob doing the actual
db-cleanup.. I've seen some of you on this list actually use the
db_cluster features. What would suit your needs?

>> You could have race conditions where 2+ SQLgrey instances want to
>> update at the very same time.

For problems like this, I usually do random(0-x) sleeping before
UPDATE'ing the timestamp.. It decreases the likelihood of a collision
tremendously. Like this:

SELECT.
Determine if it's time to do cleanup..
If yes -> sleep random(0-x)
UPDATE (blahblah) WHERE `timestamp`='<timestamp from SELECT>'
If affected_rows > 0 then do_cleanup.

"x" could be 4 seconds or so. I don't suppose, due to sqlgrey's
multithreaded nature, that this will cause it to hang for the sleep time?

> - SPF checking would be really cool :).
> Anyone who knows which spf-lib for Perl is the "standard"?
....
> Hum, you do what you want with 1.7.x, but I consider SPF a dying horse
> begging for someone to put an end to its misery...

I am SO hoping SPF will become more common (since it is, in theory, a
great idea). And adding it to sqlgrey, so SPF=whitelist, might help the
adoption. :) But if me (and Dmitry122, who posted the idea) are the only
ones who like it, I really won't bother :)

> Any DNS query
> is a problem with SQLgrey as the process serializes Postfix requests. MX
> checking would be interesting but you'll have to solve the serialization
> problem.

I don't understand the "serialization problem"? (Or maybe I don't
understand the term.)

- Dan Faerch
|
|
From: Lionel B. <lio...@bo...> - 2007-01-20 22:18:10
|
Dan Faerch wrote the following on 20.01.2007 22:53 :
>
> Not a bad idea at all.. Except for the race condition you mention and
> having to change the db schema, which you explicitly told me not to do ;)

On 1.6.x, yes. Database schema changes are tricky, because you don't
want to have to revert them, so you'd better get them right in the first
place. This said, 1.7.x is the place where you can have database schema
changes.

>>> You could have race conditions where 2+ SQLgrey instances want to
>>> update at the very same time.
>
> For problems like this, I usually do random(0-x) sleeping before
> UPDATE'ing the timestamp.. It decreases the likelihood of a collision
> tremendously.
> Like this:
> SELECT.

I don't really understand what you meant with this SELECT.

> Determine if it's time to do cleanup..
> If yes -> sleep random(0-x)
> UPDATE (blahblah) WHERE `timestamp`='<timestamp from SELECT>'
> If affected_rows > 0 then do_cleanup.

Hum, I forgot about this method. But why sleep? Assuming you have the
expected timestamp at which you think you should clean up (the cached
value I spoke of):

UPDATE <table> SET timestamp = expected WHERE timestamp < expected;

And as you pointed out, the number of affected rows tells us if we are
responsible for the cleanup:
- 0: no, someone else managed to modify the timestamp just before us
(just update the "expected" cache),
- 1: yes, we are the one! Update the "expected" cache and clean up.

No need to sleep: no two clients can really update the same row, given
that they all have the same "expected" value (and in the buggy case
where they aren't properly configured, they won't try to update at the
same time anyway). Robust, clean, ensures only one cleanup so minimum
load, and doesn't even need transactions so it should work with all DBs.

> I don't suppose, due to sqlgrey's multithreaded nature, that this will
> cause it to hang for the sleep time?

SQLgrey is not multithreaded. It's a single process that multiplexes the
incoming Postfix requests into one sequential flow of queries. Given
that querying a local database is fast, there's no need for multiple
processes or threads (which probably better explains my comments on DNS
queries).

>> - SPF checking would be really cool :). Anyone who knows which spf-lib
>> for Perl is the "standard"?
> ....
>> Hum, you do what you want with 1.7.x, but I consider SPF a dying horse
>> begging for someone to put an end to its misery...
>
> I am SO hoping SPF will become more common (since it is, in theory, a
> great idea). And adding it to sqlgrey, so SPF=whitelist, might help the
> adoption. :) But if me (and Dmitry122, who posted the idea) are the
> only ones who like it, I really won't bother :)

SPF breaks email forwarding (SRS is a solution, but _all_ forwarding
mailservers would have to support it in order for it to work properly),
only tells you that the server is authorized for a domain and not
whether it wants to send SPAM, and is so poorly understood that it
breaks email in many places (I've seen anti-SPAM appliances/proprietary
software which checked the From header instead of the Return-path, for
example...).

Lionel.
|
|
From: Dan F. <da...@ha...> - 2007-01-20 22:29:09
|
Lionel Bouton wrote:
>> Like this:
>> SELECT.
>
> I don't really understand what you meant with this SELECT.

Select * from whatever WHERE last_cleanup < sumthing-sumthing.
You need to select the last cleanup time to be able to compare whether
another sqlgrey already did it.

>> Determine if it's time to do cleanup..
>> If yes -> sleep random(0-x)
>> UPDATE (blahblah) WHERE `timestamp`='<timestamp from SELECT>'
>> If affected_rows > 0 then do_cleanup.
>
> Hum, I forgot about this method. But why sleep? Assuming you have the
> expected timestamp at which you think you should clean up (the cached
> value I spoke of):
> UPDATE <table> SET timestamp = expected WHERE timestamp < expected;
>
> And as you pointed out, the number of affected rows tells us if we are
> responsible for the cleanup:
> - 0: no, someone else managed to modify the timestamp just before us
> (just update the "expected" cache),
> - 1: yes, we are the one! Update the "expected" cache and clean up.
>
> No need to sleep: no two clients can really update the same row

You're right.. Didn't think of that. I wrote it because I usually SELECT
twice: SELECT 1, do some math or whatever, SLEEP random, and SELECT 2,
to make sure no one has already changed this.

But I guess I need to know if "affected rows" works on all SQL servers?

> processes or threads (which probably better explains my comments on DNS
> queries).

It does explain it, yes..

> SPF breaks email forwarding (SRS is a solution, but _all_ forwarding
> mailservers would have to support it in order for it to work properly),
> only tells you that the server is authorized for a domain and not
> whether it wants to send SPAM, and is so poorly understood that it
> breaks email in many places (I've seen anti-SPAM appliances/proprietary
> software which checked the From header instead of the Return-path, for
> example...).

I'm still not convinced it's a bad idea.. But I'll give it some more
thought before rushing into it..

- Dan
|
|
From: Lionel B. <lio...@bo...> - 2007-01-20 22:55:41
|
Dan Faerch wrote the following on 20.01.2007 23:28 :
> Lionel Bouton wrote:
>>> Like this:
>>> SELECT.
>>
>> I don't really understand what you meant with this SELECT.
>
> Select * from whatever WHERE last_cleanup < sumthing-sumthing.
> You need to select the last cleanup time to be able to compare whether
> another sqlgrey already did it.

Just to make sure we understand each other: you only need to SELECT the
"last_cleanup" value once after each attempted cleanup (and on startup).
You then compute the next "expected" cleanup value (simply by adding the
delay). On each mail to process, you check if now > "expected" and only
try the UPDATE ... SQL statement when this is true. Whatever happens (0
or 1 row updated), you have to refresh the "last_cleanup" value to
compute the next "expected" cleanup value. So you SELECT the actual
"last_cleanup" value again if the UPDATE didn't affect any row, or use
the value you gave in the UPDATE in the other case.

> But I guess I need to know if "affected rows" works on all SQL servers?

It does; it is standard SQL to return the number of affected rows for
each UPDATE statement. At least it worked for me on Oracle, DB2, MySQL,
PostgreSQL and SQLite last time I checked (SQLite is in fact out of
scope, I can't imagine people running several SQLgreys against one
SQLite DB...).

Lionel, who should really, really start packing for his trip tomorrow.
|
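[Editor's note] The flow Lionel describes can be sketched end to end. This is an illustrative Python/SQLite mock-up, not SQLgrey code (SQLgrey itself is Perl/DBI, and the `maintenance` table and column names here are hypothetical): each instance caches the next expected cleanup time locally, and the UPDATE's affected-row count elects exactly one instance to run the cleanup.

```python
import sqlite3

class CleanupScheduler:
    """Caches the next expected cleanup time; only issues SQL once that
    cached deadline has passed (schema here is hypothetical)."""

    def __init__(self, db, interval):
        self.db = db
        self.interval = interval
        self.expected = self._last_cleanup() + interval  # SELECT once on startup

    def _last_cleanup(self):
        return self.db.execute(
            "SELECT last_cleanup FROM maintenance").fetchone()[0]

    def should_cleanup(self, now):
        if now <= self.expected:
            return False                      # fast path: no SQL at all
        # At most one instance can move the timestamp forward:
        cur = self.db.execute(
            "UPDATE maintenance SET last_cleanup = ? "
            "WHERE last_cleanup < ?", (now, now))
        self.db.commit()
        if cur.rowcount == 1:                 # we won: caller runs the cleanup
            self.expected = now + self.interval
            return True
        # Someone else just did it: refresh the cache from the DB.
        self.expected = self._last_cleanup() + self.interval
        return False

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE maintenance (last_cleanup INTEGER)")
db.execute("INSERT INTO maintenance VALUES (0)")
db.commit()

a = CleanupScheduler(db, interval=10)   # two instances sharing one DB
b = CleanupScheduler(db, interval=10)
won_a = a.should_cleanup(100)           # claims the cleanup slot
won_b = b.should_cleanup(100)           # sees it was already claimed
print(won_a, won_b)                     # True False
```

No sleep and no transaction is needed: both instances run the same UPDATE, but once the row no longer satisfies `last_cleanup < now`, the second UPDATE matches zero rows and that instance simply refreshes its cache.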
|
From: Dan F. <da...@ha...> - 2007-01-21 00:58:51
|
Lionel Bouton wrote:
> Just to make sure we understand each other:

Exactly what I meant.

> Oracle, DB2, MySQL,
> PostgreSQL and SQLite last time I checked

Ok.. Then I don't need to install a ton of DBDs to check this ;).

> (SQLite is in fact out of
> scope, I can't imagine people running several SQLgreys against one
> SQLite DB...).

Without knowing a whole lot about SQLite, I can't really see how that
would even be possible..

> Lionel, who should really, really start packing for his trip tomorrow.

Tomorrow?!? Then it's WAY past your bedtime ;)

- Dan
|