From: Lionel B. <lio...@bo...> - 2004-12-14 23:55:00
Farkas Levente wrote the following on 12/14/04 18:08 :

>> [...] Anyway, with auto-whitelisting your users shouldn't notice much
>> of a delay. Just make the switch on Friday evening and let the marginal
>> week-end traffic populate the auto-whitelist tables.
>
> that's not so easy! most of these important emails come a few hours
> before deadlines (usually Tuesday 18:00 and Thursday 18:00), which
> makes the thing a bit complicated :-(

Understandably, you might want to use an opt-in policy for greylisting. Greylisting is a tradeoff; auto-whitelists can only make it less painful. You should make your users aware of the alternatives:

- you use greylisting for all of them, which means that poorly configured mail servers won't deliver in a timely manner (and some rare ones never), but on the other hand their spam level is less than half of what it would otherwise be (remember that asking the sender to resend the message solves the problem in most cases);
- you use greylisting on an opt-in basis, and they have to choose what they consider more important: less spam or "instant messaging". Their choice, their responsibility.

>> In the best case, to ease transition, what I *could* add is a way to
>> plug another policy daemon into sqlgrey and let the messages pass and
>> populate the database when the other policy daemon answers "DUNNO" or
>> "PREPEND *"; that would need a bit of tweaking to not ask the other
>> policy daemon when not needed. That will not make it to the top of my
>> TODO list in the near future though (the more code goes in, the less
>> manageable the project becomes).
>
> if it's easier, it can be a solution too.

As you can see, I'm buried alive under enhancement requests! But SQLgrey is open-source; feel free to add the feature you need if I'm not fast enough.

>> - if needed (sqlgrey can cope with database unavailability),
>> configure replication to a slave database.
>
> is it currently possible?

It depends on the database system. Currently SQLgrey only connects to one database (which would be the master), though.

>> If the database fails to answer properly (connection down or sql
>> errors), sqlgrey won't crash (this was fully debugged not long ago) but
>
> what does "won't crash" mean? in this case it responds with a DUNNO?

Yes.

[...]

>> *failure*, but you will have a single point of *inefficiency*: the
>> database.
>
> that's far better!

I didn't want to add another point of failure, and trust me: when there were bugs in the handling of database failures, users were quick to report them!

>> If you use a slave database, you can either make it take over the
>> master's IP, update the DNS to reroute sqlgrey connections to the
>> slave (my personal choice - put low TTLs in the DNS - this way you
>> don't have to take the master fully down if only its database system
>> is down), or (if it can be done quickly) put the master back online.
>
> imho it'd be better to be able to configure more (slave) sql servers
> for sqlgrey instead of dns manipulation.

I'm not sure I understand. Do you mean that SQLgrey should directly access several servers and update all of them, or do you want replication done at the database level (SQLgrey being unaware of the process replicating the master to the slaves)?

The former is doable but would be quite complex: SQLgrey would have to support adding an empty database to a farm of databases, and it would have to populate the tables of each database that lacks the data allowing a message to pass whenever at least one other database holds data making SQLgrey decide to accept it. This must be done at every step of the decision process (valid previous connection attempt, e-mail auto-whitelist entry, domain auto-whitelist entry). This would:

- be slow!
- get slower each time you add a new database,
- be limited by the least responsive of your databases,

but it would be really, really robust (the replication is handled outside of the databases and there's no need for a two-phase COMMIT, which is a real bonus). If this is what you want, I'm afraid it should be another project: SQLgrey's current model is not well suited for it. You would want to make the same request to the different databases in parallel and wait for all of them to complete or time out, mark databases as faulty to avoid spending time waiting for timeouts, ...

In the latter case, you want SQLgrey to be aware that a replication process is occurring between several databases and that one of them is the master. You want to ensure that only one is in RW mode and that this one is known by SQLgrey, and when it goes down an external process decides which slave becomes the master and:

- does what's needed to reconfigure it as the master,
- signals each SQLgrey server to use this new one.

For that, I only see one thing needed on SQLgrey's side: modify SQLgrey to allow on-the-fly reconnection to another database. The rest is database specific. But I don't really see the benefit of making this so complex; usually replicated databases come with what's needed to make a slave replace a master by taking over its IP address. In that case SQLgrey will work correctly out of the box.

> imho it'd be enough to switch to the slave, and the slave can replicate
> the master e.g. once a day (before becoming the master)

I'm not sure I understand.

Lionel.

From: Klaus A. S. <kse...@gm...> - 2004-12-14 23:27:34
On Wed, 15 Dec 2004 00:13:28 +0100, Lionel Bouton <lio...@bo...> wrote:

> > Composite values should be possible, e.g. 10m11s == 671 seconds.
>
> Wow, my TODO list is under siege... But you are right, this can become
> confusing and adding units is a good solution. :-)
> Do you *really* need composite values? It's not a big deal, but will
> it serve a purpose? For example I can't imagine 10m11s being more
> useful than 10m or 11m.

Heheh..., 11 minutes would work just fine. I should have chosen a better example. ;-) The points are that (1) 10m11s is easier to comprehend than 671s, and (2) I would like to have full control over the time values. But I can always revert to using seconds instead of composite values if I want to be anal.

> > Btw, my mx is running SQLgrey 1.4.0. This version uses a variable,
> > maint_delay, for controlling when to do household cleaning. However,
> > the default value is 0 (seconds), meaning that cleanup is triggered
> > by each mail that arrives. On a busy site this might not be optimal,
> > but the value doesn't seem to be configurable...
>
> Yep. I forgot to make this configurable.

No sweat. SQLite is really fast, and my mail server is not that busy; I just wondered if there was a deeper meaning. :-)

Cheers,

--
Klaus Alexander Seistrup
Copenhagen · Denmark

From: Lionel B. <lio...@bo...> - 2004-12-14 23:14:54
Klaus Alexander Seistrup wrote the following on 12/14/04 19:38 :

> Hi,
>
> SQLgrey relies on quite a few time-related variables --
> reconnect-delay, max-connect-age, awl-age, and others -- but unless a
> commented configuration file is used, there is no way of knowing the
> time unit of a given variable. E.g., the reconnect-delay takes
> minutes, the max-connect-age takes hours, the awl-age takes days, and
> so on...
>
> To make the values more intuitive, I suggest that times be given by
> appending specific units (e.g. w=weeks, d=days, h=hours, m=minutes,
> s=seconds), and that SQLgrey converts to whatever unit it uses
> internally (seconds?). Composite values should be possible, e.g.
> 10m11s == 671 seconds.

Wow, my TODO list is under siege... But you are right, this can become confusing and adding units is a good solution. Do you *really* need composite values? It's not a big deal, but will it serve a purpose? For example I can't imagine 10m11s being more useful than 10m or 11m.

> Btw, my mx is running SQLgrey 1.4.0. This version uses a variable,
> maint_delay, for controlling when to do household cleaning. However,
> the default value is 0 (seconds), meaning that cleanup is triggered by
> each mail that arrives. On a busy site this might not be optimal, but
> the value doesn't seem to be configurable...

Yep. I forgot to make this configurable. I realised this after 1.4.0. "0" doesn't seem to be a problem on most sites, but it should definitely be configurable and documented, as this variable is a tradeoff between:

- delaying spam reports in the logs (increasing maint_delay delays the reports),
- more stressful cleanups when maint_delay increases (SQLgrey waits on each cleanup to complete, which can make this a real problem),
- a higher mean load when maint_delay is low.

There's no magic solution, but here is what I'm considering:

- make maint_delay configurable, let the DBA tune it, and leave this nasty optimisation problem to her/him;
- make cleanup an independent process forked from the main SQLgrey process, using heuristics to fine-tune the delay between cleanups (i.e. don't lower the delay below a given value, don't raise it above another, try to reach an average cleanup time of less than a third value without letting it exceed a fourth over a given one-hour period); that would probably mirror what a DBA would try to do, but make it automatic (did I mention bloat in a previous e-mail? :-));
- switch to Net::Server::PreFork in order to stop blocking; there are numerous downsides though (code complexity, concurrent database accesses that will slow it down, connections refused when the server reaches its children limit, ...).

There's another problem with cleanups: how should SQLgrey handle them when multiple SQLgrey instances use the same database? I have some ideas, like storing a last_cleanup timestamp in the new "config" table to make it easy to distribute cleanups among them.

Lionel.

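The last_cleanup idea above can be reduced to a single atomic statement. A hypothetical sketch only: the "config" table layout (a name column plus a timestamp value column) and the 30-minute interval are assumptions, not SQLgrey's actual schema.

    -- each instance tries to claim the cleanup slot atomically
    UPDATE config
       SET value = NOW()
     WHERE name  = 'last_cleanup'
       AND value < NOW() - INTERVAL '30 minutes';
    -- 1 row affected: this instance won the race and runs the cleanup;
    -- 0 rows affected: another instance cleaned up recently, do nothing

Because the row update is atomic, no two instances can win the same slot, which is what makes the timestamp easy to share between several SQLgrey processes.
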
From: Lionel B. <lio...@bo...> - 2004-12-14 22:45:23
Josh Endries wrote the following on 12/14/04 19:07 :

> Lionel Bouton wrote:
> | For one isolated user it takes a long time for the auto-whitelist to
> | kick in (especially SQLgrey's domain-based one) because the greylister
> | can't learn from traffic to other users. You'll have a much better
> | auto-whitelist usage in an ISP environment.
>
> I actually have a number of live/active users on my test server, but
> I see your point. Of course, everyone gets different email, so I'm
> wondering how large the difference really is.

It depends on the sender distribution across the recipients. If multiple recipients see the same sender on different occasions, they will benefit from the common auto-whitelist. If multiple recipients see different senders but these are from the same domains, they will benefit too.

> | I think you can already use postfix to selectively use greylisting;
> | see the postfix online documentation, especially the chapter where it
> | is configured to greylist only specific source domains.
>
> Do you mean greylist for specific users or for specific incoming email
> domains? This is what I was considering, a per-user opt-in approach.
> I just have to look into how to get Postfix to look up the policies
> to use. I can insert it before or after alias expansion with my
> setup (Postfix is so flexible :)), but I guess that's beside the point.

I didn't realise you could make Postfix use the greylisting policy daemon after alias expansion. How do you do that?

> | Then an ISP can launch sqlgrey in "optin" or "optout" mode and add to
> | its web interfaces some configuration pages that will allow its users
> | to subscribe to the service by adding/removing entries in the correct
> | sqlgrey tables.
>
> This is what I'll probably do, make a web interface, but I was
> thinking about using an SQL lookup in Postfix to get the policies
> (not sure if that is possible) and/or putting it in amavisd-new or
> something so you can have "trickle-down" organization-based
> policies. If this isn't possible, adding it to sqlgrey may be the
> only option, but I think it "belongs" in Postfix, personally.

If it can be done there, that's good. Please explain to the list how you do it and I'll make it a HOWTO (a restriction-class sketch follows this message). I saw at least one other greylisting implementation providing opt-in/opt-out, so I'm wondering why they had to do it. I have two orthogonal goals:

- make the software easy to use (if opt-in/opt-out is painful in Postfix, then I'll add it),
- don't bloat it (not another piece of code in a 50k perl script!).

Lionel.

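For reference, the per-recipient opt-in discussed above can be expressed on the Postfix side with a restriction class. A minimal sketch, assuming SQLgrey listens on its usual inet:127.0.0.1:2501; the map path and the recipient addresses are illustrative:

    # main.cf
    smtpd_restriction_classes = greylist
    greylist = check_policy_service inet:127.0.0.1:2501

    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        check_recipient_access hash:/etc/postfix/greylist_optin

    # /etc/postfix/greylist_optin (run "postmap" on it after editing);
    # only the listed recipients are greylisted:
    #   user1@example.com    greylist
    #   user2@example.com    greylist

Note that this check runs at RCPT TO: time, i.e. before alias expansion, which is exactly the caveat raised elsewhere in this thread.
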
From: Klaus A. S. <kse...@gm...> - 2004-12-14 18:38:57
Hi,

SQLgrey relies on quite a few time-related variables -- reconnect-delay, max-connect-age, awl-age, and others -- but unless a commented configuration file is used, there is no way of knowing the time unit of a given variable. E.g., the reconnect-delay takes minutes, the max-connect-age takes hours, the awl-age takes days, and so on...

To make the values more intuitive, I suggest that times be given by appending specific units (e.g. w=weeks, d=days, h=hours, m=minutes, s=seconds), and that SQLgrey converts to whatever unit it uses internally (seconds?). Composite values should be possible, e.g. 10m11s == 671 seconds (a parsing sketch follows this message).

Btw, my mx is running SQLgrey 1.4.0. This version uses a variable, maint_delay, for controlling when to do household cleaning. However, the default value is 0 (seconds), meaning that cleanup is triggered by each mail that arrives. On a busy site this might not be optimal, but the value doesn't seem to be configurable...

Cheers,

// Klaus

--
Klaus Alexander Seistrup · Copenhagen · Denmark

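The unit-suffix scheme proposed above is cheap to implement. A standalone sketch in Perl illustrating the proposed syntax; this is not SQLgrey code:

    #!/usr/bin/perl
    # convert "2h", "10m11s" or a bare "671" into seconds
    use strict;
    use warnings;

    my %unit = (w => 604800, d => 86400, h => 3600, m => 60, s => 1);

    sub duration_to_seconds {
        my ($str) = @_;
        return $str if $str =~ /^\d+$/;    # bare number: assume seconds
        my $seconds = 0;
        while ($str =~ /\G(\d+)([wdhms])/gc) {
            $seconds += $1 * $unit{$2};
        }
        # reject trailing garbage such as "10q"
        die "can't parse duration: $str\n"
            unless defined pos($str) && pos($str) == length($str);
        return $seconds;
    }

    print duration_to_seconds('10m11s'), "\n";    # prints 671
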
From: Josh E. <jo...@en...> - 2004-12-14 18:16:02
Lionel Bouton wrote:

| For one isolated user it takes a long time for the auto-whitelist to
| kick in (especially SQLgrey's domain-based one) because the greylister
| can't learn from traffic to other users. You'll have a much better
| auto-whitelist usage in an ISP environment.

I actually have a number of live/active users on my test server, but I see your point. Of course, everyone gets different email, so I'm wondering how large the difference really is.

| I think you can already use postfix to selectively use greylisting;
| see the postfix online documentation, especially the chapter where it
| is configured to greylist only specific source domains.

Do you mean greylist for specific users or for specific incoming email domains? This is what I was considering, a per-user opt-in approach. I just have to look into how to get Postfix to look up the policies to use. I can insert it before or after alias expansion with my setup (Postfix is so flexible :)), but I guess that's beside the point.

| Then an ISP can launch sqlgrey in "optin" or "optout" mode and add to
| its web interfaces some configuration pages that will allow its users
| to subscribe to the service by adding/removing entries in the correct
| sqlgrey tables.

This is what I'll probably do, make a web interface, but I was thinking about using an SQL lookup in Postfix to get the policies (not sure if that is possible) and/or putting it in amavisd-new or something so you can have "trickle-down" organization-based policies. If this isn't possible, adding it to sqlgrey may be the only option, but I think it "belongs" in Postfix, personally.

Anyway, thanks for the response!

Josh

From: Lionel B. <lio...@bo...> - 2004-12-14 17:39:43
Josh Endries wrote the following on 12/14/04 17:40 :

> Hello,
>
> I'm wondering if anyone out there uses sqlgrey (or any greylisting
> policies) in a large/ISP environment, and what success/problems
> they've had. I've been thinking about deploying sqlgrey (due to the
> feature of one backend and multiple nodes), but in my testing on my
> private server I found that the delays were often annoying and
> didn't seem to stop. Maybe I didn't set it up correctly or
> something... :/ I'll test it again sometime.

For one isolated user it takes a long time for the auto-whitelist to kick in (especially SQLgrey's domain-based one) because the greylister can't learn from traffic to other users. You'll have a much better auto-whitelist usage in an ISP environment.

> Anyway, the main concern I have is that users will not see email
> immediately, as most are accustomed to. Unfortunately this seems to
> be a greylisting downfall, not sqlgrey's, and I'm just curious if
> anyone has deployed this on a large scale and if they've run into
> problems or if people are complaining, etc., or any ideas on the
> matter. I realize it could be bad for businesses, but its effect on
> spam is great. :)

The best way is to let the users decide whether they want to use greylisting (or even make them pay for it :-)). I think you can already use postfix to selectively use greylisting; see the postfix online documentation, especially the chapter where it is configured to greylist only specific source domains.

I can add opt-in and opt-out support if needed. This could work like this (a possible table layout is sketched after this message):

- default: current behaviour,
- --opt-in: the RCPT TO: must be in an "optin" table for greylisting to be used,
- --opt-out: the RCPT TO: must *not* be in an "optout" table for greylisting to be used.

Caveat: IIRC the policy daemon is called *before* alias expansion. If you have mailing-lists and/or several aliases for the same users, you'll have to take this into consideration when populating the optin or optout tables.

Then an ISP can launch sqlgrey in "optin" or "optout" mode and add to its web interfaces some configuration pages that will allow its users to subscribe to the service by adding/removing entries in the correct sqlgrey tables.

Best regards,

Lionel.

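To make the proposal concrete, the tables could hardly be simpler. A hypothetical sketch: SQLgrey shipped no such tables at the time, and the table names and column sizes are only illustrative.

    -- one row per recipient address that opted in (respectively out)
    CREATE TABLE optin  (recipient VARCHAR(255) PRIMARY KEY);
    CREATE TABLE optout (recipient VARCHAR(255) PRIMARY KEY);

    -- an ISP web interface then only needs INSERT and DELETE, e.g.:
    INSERT INTO optin VALUES ('user@example.com');
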
From: Farkas L. <lf...@bp...> - 2004-12-14 17:08:19
Lionel Bouton wrote:

> Farkas Levente wrote the following on 12/14/04 15:04 :
>
>> hi,
>> we may switch from postgrey to sqlgrey. afaik they use the same
>> database setup, since sqlgrey is a fork of postgrey. is there any way
>> to migrate from postgrey to sqlgrey? we wouldn't like to start with a
>> fresh, clean database, since that would cause a lot of headaches for
>> all of the currently confirmed senders/recipients. it'd be very useful.
>
> Currently there's no way of migrating the greylisting data in the
> postgrey database to sqlgrey. I'm not sure if reusing postgrey data is a

always the latest :-) i package the redhat/fedora postgrey rpms.

> good idea, I'd have to check in detail the postgrey version you use to
> see if it would be doable. That would be quite a lot of work and I don't
> think I can find the time for such a specific task.
> Anyway, with auto-whitelisting your users shouldn't notice much of a
> delay. Just make the switch on Friday evening and let the marginal
> week-end traffic populate the auto-whitelist tables.

that's not so easy! most of these important emails come a few hours before deadlines (usually Tuesday 18:00 and Thursday 18:00), which makes the thing a bit complicated :-(

> In the best case, to ease transition, what I *could* add is a way to
> plug another policy daemon into sqlgrey and let the messages pass and
> populate the database when the other policy daemon answers "DUNNO" or
> "PREPEND *"; that would need a bit of tweaking to not ask the other
> policy daemon when not needed. That will not make it to the top of my
> TODO list in the near future though (the more code goes in, the less
> manageable the project becomes).

if it's easier, it can be a solution too.

>> another question: is there any howto/readme or other short description
>> of how to use e.g. mysql data replication among the mx mail servers
>> with sqlgrey? that's the only reason we'd like to switch to sqlgrey,
>> since we _must_ use one greylist server (otherwise it's unusable) and
>> currently, if our postgrey server dies (the service or the whole
>> machine), then all of our mx's postfix die (postfix can't work without
>> the configured policy server). so one good solution would be to use a
>> separate sql server on each mx host and replicate the database among
>> them. any advice?
>
> Configuring replication between different MySQL databases would imply
> master-master setups (you want each database to be able to write to
> each other) and it could trigger nasty primary key clashes. IIRC MySQL
> replication doesn't ensure that a statement updated all databases when
> it is committed: 2 databases could have the same primary key added for
> one of SQLgrey's tables -> expect nasty errors (and look in the MySQL
> docs for how it handles such situations).
>
> In your configuration, I'd advise you to do the following:
> - use one sqlgrey instance on each of your postfix servers using
> greylisting, all configured to use the same database.

if sqlgrey is not better than postgrey in terms of "failure", then i don't see any good reason to switch to sqlgrey? do you know one?

> - if needed (sqlgrey can cope with database unavailability), configure
> replication to a slave database.

is it currently possible?

> If the database fails to answer properly (connection down or sql
> errors), sqlgrey won't crash (this was fully debugged not long ago) but

what does "won't crash" mean? in this case it responds with a DUNNO?

> will try to reconnect on each new message coming in and, unless the
> database comes back, let them pass (no greylisting) to ensure no
> legitimate traffic will be blocked. You will not have a single point of

aha.

> *failure*, but you will have a single point of *inefficiency*: the
> database.

that's far better!

> If you use a slave database, you can either make it take over the
> master's IP, update the DNS to reroute sqlgrey connections to the slave
> (my personal choice - put low TTLs in the DNS - this way you don't have
> to take the master fully down if only its database system is down), or
> (if it can be done quickly) put the master back online.

imho it'd be better to be able to configure more (slave) sql servers for sqlgrey instead of dns manipulation.

> After switching to the slave, you'll probably have to make it the
> master and reconfigure your old master as a slave.
> If you do this, please consider writing a quick HOWTO describing the
> failure handling and bringing the master back online; even something
> sketchy will do. I can write the final HOWTO, but I need the experience
> of someone who actually did set up and test a master-slave scenario.

this dns trick doesn't really appeal to me, so we probably won't use it.

> In your case, there are two things I think SQLgrey still lacks:
> - e-mailing the postmaster or a configurable address when the database
> goes down/comes back online (currently in my TODO list); work-around:
> use Nagios to monitor your databases,

we use nagios, but automatic failover is what i'd really like to see.

> - allowing automatic switching to fallback databases, but for this to
> be implemented correctly I need to know how each RDBMS handles a master
> coming back online (will it become a slave, as I believe, or will it
> take

imho it'd be enough to switch to the slave, and the slave can replicate the master e.g. once a day (before becoming the master); nagios can then notify the sysadmin to restore the master manually and restart sqlgrey manually to use the original master. this scenario, as a first step, would be better than the current one.

> its master role back eventually? How does SQLgrey check which database
> is the master at startup time? Think of a SQLgrey restart after the
> master went down and came back as a slave...).

NEVER SWITCH BACK TO THE ORIGINAL MASTER AUTOMATICALLY! i think it's acceptable to have some kind of manual configuration in this case.

> Finally, if you are worried about MySQL stability, consider PostgreSQL.
> I've not yet personally seen MySQL crash on me, but I worked as a
> software developer for a company that uses MySQL under really high
> loads (2x 4-way Hyperthreading Xeon in a master-slave setup that just
> keeps its head out of the water) and they had crashes with 4.0 versions
> at least once a month. PostgreSQL seems more robust to me. In the end
> you should use the database you are most comfortable with (have already
> successfully backed up and restored, know how to optimize for speed or
> memory footprint, and so on).

no, i'm not worried about any kind of sql server. i worry about the system, i.e. the machine itself (hardware, network etc.). i haven't seen any sql server crash either, but i've seen many kinds of hardware and network problems.

--
Levente
"Si vis pacem para bellum!"

From: Josh E. <jo...@en...> - 2004-12-14 16:48:40
Hello,

I'm wondering if anyone out there uses sqlgrey (or any greylisting policies) in a large/ISP environment, and what success/problems they've had. I've been thinking about deploying sqlgrey (due to the feature of one backend and multiple nodes), but in my testing on my private server I found that the delays were often annoying and didn't seem to stop. Maybe I didn't set it up correctly or something... :/ I'll test it again sometime.

Anyway, the main concern I have is that users will not see email immediately, as most are accustomed to. Unfortunately this seems to be a greylisting downfall, not sqlgrey's, and I'm just curious whether anyone has deployed this on a large scale, whether they've run into problems or people are complaining, etc., or any ideas on the matter. I realize it could be bad for businesses, but its effect on spam is great. :)

Thanks,

Josh

From: Lionel B. <lio...@bo...> - 2004-12-14 15:42:51
Lionel Bouton wrote the following on 12/13/04 17:36 :

> Oystein Viggen wrote the following on 12/13/04 16:43 :
>
>> Hi,
>>
>> I noticed that Net::Server would complain in my mail.warn log that the
>> EGID was not defined:
>>
>> Dec 13 12:20:14 lists sqlgrey[22928]: Group Not Defined. Defaulting to EGID '0 0'
>>
>> I've written a quick patch (attached) that should support setting
>> --group on the command line and group in the config file, and default
>> to "sqlgrey" if nothing is specified. [...]
>
> That's a longstanding annoyance, yes. I'm not yet really familiar with
> the Net::Server framework and I'll have to find out if making it switch
> to the default group of the specified user is easily doable (if I can
> remove some configuration the user needs to think about, I prefer
> doing so). If not, I'll gladly take your patch.

Net::Server can't do it, and it looks like inferring the default group of a user in a portable way (i.e. not reading from /etc/passwd) is a mess. Oystein, your patch is included in my tree (minus the 5-minute delay).

Thanks again,

Lionel.

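On the configuration side, the patch boils down to an excerpt like this. An illustrative sqlgrey.conf sketch: the option names follow the patch description above, and "sqlgrey" is the documented fallback when nothing is specified.

    # drop privileges to this user/group after startup
    user  = sqlgrey
    group = sqlgrey
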
From: Lionel B. <lio...@bo...> - 2004-12-14 15:35:36
Farkas Levente wrote the following on 12/14/04 15:04 :

> hi,
> we may switch from postgrey to sqlgrey. afaik they use the same
> database setup, since sqlgrey is a fork of postgrey. is there any way
> to migrate from postgrey to sqlgrey? we wouldn't like to start with a
> fresh, clean database, since that would cause a lot of headaches for
> all of the currently confirmed senders/recipients. it'd be very useful.

Currently there's no way of migrating the greylisting data in the postgrey database to sqlgrey. I'm not sure if reusing postgrey data is a good idea; I'd have to check in detail the postgrey version you use to see if it would be doable. That would be quite a lot of work and I don't think I can find the time for such a specific task.

Anyway, with auto-whitelisting your users shouldn't notice much of a delay. Just make the switch on Friday evening and let the marginal week-end traffic populate the auto-whitelist tables.

In the best case, to ease transition, what I *could* add is a way to plug another policy daemon into sqlgrey and let the messages pass and populate the database when the other policy daemon answers "DUNNO" or "PREPEND *"; that would need a bit of tweaking to not ask the other policy daemon when not needed. That will not make it to the top of my TODO list in the near future though (the more code goes in, the less manageable the project becomes).

> another question: is there any howto/readme or other short description
> of how to use e.g. mysql data replication among the mx mail servers
> with sqlgrey? that's the only reason we'd like to switch to sqlgrey,
> since we _must_ use one greylist server (otherwise it's unusable) and
> currently, if our postgrey server dies (the service or the whole
> machine), then all of our mx's postfix die (postfix can't work without
> the configured policy server). so one good solution would be to use a
> separate sql server on each mx host and replicate the database among
> them. any advice?

Configuring replication between different MySQL databases would imply master-master setups (you want each database to be able to write to each other) and it could trigger nasty primary key clashes. IIRC MySQL replication doesn't ensure that a statement updated all databases when it is committed: 2 databases could have the same primary key added for one of SQLgrey's tables -> expect nasty errors (and look in the MySQL docs for how it handles such situations).

In your configuration, I'd advise you to do the following:

- use one sqlgrey instance on each of your postfix servers using greylisting, all configured to use the same database,
- if needed (sqlgrey can cope with database unavailability), configure replication to a slave database.

If the database fails to answer properly (connection down or sql errors), sqlgrey won't crash (this was fully debugged not long ago) but will try to reconnect on each new message coming in and, unless the database comes back, let them pass (no greylisting) to ensure no legitimate traffic will be blocked. You will not have a single point of *failure*, but you will have a single point of *inefficiency*: the database.

If you use a slave database, you can either make it take over the master's IP, update the DNS to reroute sqlgrey connections to the slave (my personal choice - put low TTLs in the DNS - this way you don't have to take the master fully down if only its database system is down), or (if it can be done quickly) put the master back online. After switching to the slave, you'll probably have to make it the master and reconfigure your old master as a slave.

If you do this, please consider writing a quick HOWTO describing the failure handling and bringing the master back online; even something sketchy will do. I can write the final HOWTO, but I need the experience of someone who actually did set up and test a master-slave scenario.

In your case, there are two things I think SQLgrey still lacks:

- e-mailing the postmaster or a configurable address when the database goes down/comes back online (currently in my TODO list); work-around: use Nagios to monitor your databases,
- allowing automatic switching to fallback databases, but for this to be implemented correctly I need to know how each RDBMS handles a master coming back online (will it become a slave, as I believe, or will it take its master role back eventually? How does SQLgrey check which database is the master at startup time? Think of a SQLgrey restart after the master went down and came back as a slave...).

Finally, if you are worried about MySQL stability, consider PostgreSQL. I've not yet personally seen MySQL crash on me, but I worked as a software developer for a company that uses MySQL under really high loads (2x 4-way Hyperthreading Xeon in a master-slave setup that just keeps its head out of the water) and they had crashes with 4.0 versions at least once a month. PostgreSQL seems more robust to me. In the end you should use the database you are most comfortable with (have already successfully backed up and restored, know how to optimize for speed or memory footprint, and so on).

> thanks in advance.

You're welcome,

Lionel.

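For what it's worth, the low-TTL DNS trick described above amounts to a record like this hypothetical zone excerpt (name, TTL and address are illustrative):

    ; 300 s TTL: clients re-resolve quickly after a repoint
    sqlgrey-db   300   IN   A   192.0.2.10   ; master today; repoint to
                                             ; the slave when it dies

Each sqlgrey instance is then configured against the sqlgrey-db name rather than a literal IP and follows the repoint within the TTL.
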
From: Farkas L. <lf...@bp...> - 2004-12-14 14:04:47
hi,
we may switch from postgrey to sqlgrey. afaik they use the same database setup, since sqlgrey is a fork of postgrey. is there any way to migrate from postgrey to sqlgrey? we wouldn't like to start with a fresh, clean database, since that would cause a lot of headaches for all of the currently confirmed senders/recipients. it'd be very useful.

another question: is there any howto/readme or other short description of how to use e.g. mysql data replication among the mx mail servers with sqlgrey? that's the only reason we'd like to switch to sqlgrey, since we _must_ use one greylist server (otherwise it's unusable) and currently, if our postgrey server dies (the service or the whole machine), then all of our mx's postfix die (postfix can't work without the configured policy server). so one good solution would be to use a separate sql server on each mx host and replicate the database among them. any advice?

thanks in advance.

--
Levente
"Si vis pacem para bellum!"

From: David R. <dr...@gr...> - 2004-12-13 20:08:34
Lionel Bouton wrote:

>> - Ever thought of a "live update" of the whitelists rather than
>> supplying them with the source/rpm? I.e. sqlgrey, in say weekly
>> intervals, loading them from sqlgrey.sf.net?
>
> Nice idea. I don't want to bloat SQLgrey with the download code (I'm
> already worried by its size and what it will be like with SPF support),
> but I sure can add a hook to make it reload the main whitelists on a
> SIGHUP for example. Then it's only a matter of a simple script that
> will fetch the download URLs from the conf file, download the
> whitelists, make some simple checks, replace the whitelist files and
> send SIGHUP to sqlgrey.

I like the idea of a script to update the whitelists; it could be run from a cron entry. I don't like the idea of putting that code into sqlgrey itself, even if it could be turned off.

-Dave

PS - Version 1.4.0 has been working very well for me so far!

From: Lionel B. <lio...@bo...> - 2004-12-13 16:37:59
Oystein Viggen wrote the following on 12/13/04 16:43 :

> Hi,
>
> I noticed that Net::Server would complain in my mail.warn log that the
> EGID was not defined:
>
> Dec 13 12:20:14 lists sqlgrey[22928]: Group Not Defined. Defaulting to EGID '0 0'
>
> I've written a quick patch (attached) that should support setting
> --group on the command line and group in the config file, and default
> to "sqlgrey" if nothing is specified. I have it happily running on a
> mail server right now, and it's no longer complaining about the EGID.
> (I just tested that the default seems to work; --group and the config
> file group are untested.)

That's a longstanding annoyance, yes. I'm not yet really familiar with the Net::Server framework and I'll have to find out if making it switch to the default group of the specified user is easily doable (if I can remove some configuration the user needs to think about, I prefer doing so). If not, I'll gladly take your patch.

> The patch also contains a one-liner to revert the default
> reconnect_delay back to 5 minutes instead of the current 1 minute. The
> standard config file documents 5 minutes as the default, and the
> default also used to be 5 minutes in 1.2, so I figured the 1 minute
> was a left-over from testing during 1.3.

The documentation doesn't reflect my intent. I saw numerous mail servers retrying under the 5-minute delay, and all of them retried after the 5-minute delay too, so I saw no reason to delay these servers. I'll update the documentation :-)

> Thanks,
> Øystein

Thanks to you,

Lionel.

From: Oystein V. <oys...@ti...> - 2004-12-13 15:43:06
Hi,

I noticed that Net::Server would complain in my mail.warn log that the EGID was not defined:

Dec 13 12:20:14 lists sqlgrey[22928]: Group Not Defined. Defaulting to EGID '0 0'

I've written a quick patch (attached) that should support setting --group on the command line and group in the config file, and default to "sqlgrey" if nothing is specified. I have it happily running on a mail server right now, and it's no longer complaining about the EGID. (I just tested that the default seems to work; --group and the config file group are untested.)

The patch also contains a one-liner to revert the default reconnect_delay back to 5 minutes instead of the current 1 minute. The standard config file documents 5 minutes as the default, and the default also used to be 5 minutes in 1.2, so I figured the 1 minute was a left-over from testing during 1.3.

Thanks,
Øystein

--
If it ain't broke, don't break it.

From: Lionel B. <lio...@bo...> - 2004-12-13 13:39:25
HaJo Schatz wrote the following on 12/13/04 14:17 :

> Lionel Bouton wrote:
>
>>> - Ever thought of a "live update" of the whitelists rather than
>>> supplying them with the source/rpm? I.e. sqlgrey, in say weekly
>>> intervals, loading them from sqlgrey.sf.net?
>>
>> Nice idea. I don't want to bloat SQLgrey with the download code (I'm
>> already worried by its size and what it will be like with SPF support),
>
> How about just calling an external wget?

SQLgrey would block until the wget returns (multiplexing has its drawbacks...): you would put connections on hold and, if the wget takes long enough, break them.

>> but I sure can add a hook to make it reload the main whitelists on a
>> SIGHUP for example. Then it's only a matter of a simple script that
>> will fetch the download URLs from the conf file, download the
>> whitelists, make some simple checks, replace the whitelist files and
>> send SIGHUP to sqlgrey.
>
> That could maybe be done nicely in the init.d script. Say, a restart
> automagically loads the latest whitelists.

Calling the init.d script seems good, but on non-SysV Unices it won't work... Sending a SIGHUP to the pid in /var/run/sqlgrey.pid is probably more portable. I'll probably add a reload action to the init scripts for convenience, though.

Best regards,

Lionel.

From: HaJo S. <ha...@ha...> - 2004-12-13 13:17:46
Lionel Bouton wrote:

>> - Ever thought of a "live update" of the whitelists rather than
>> supplying them with the source/rpm? I.e. sqlgrey, in say weekly
>> intervals, loading them from sqlgrey.sf.net?
>
> Nice idea. I don't want to bloat SQLgrey with the download code (I'm
> already worried by its size and what it will be like with SPF support),

How about just calling an external wget?

> but I sure can add a hook to make it reload the main whitelists on a
> SIGHUP for example. Then it's only a matter of a simple script that
> will fetch the download URLs from the conf file, download the
> whitelists, make some simple checks, replace the whitelist files and
> send SIGHUP to sqlgrey.

That could maybe be done nicely in the init.d script. Say, a restart automagically loads the latest whitelists.

--
HaJo Schatz <ha...@ha...>
http://www.HaJo.Net
PGP-Key: http://www.hajo.net/hajonet/keys/pgpkey_hajo.txt

From: Lionel B. <lio...@bo...> - 2004-12-13 12:59:10
HaJo Schatz wrote the following on 12/13/04 12:47 :

> On Sat, 2004-12-11 at 02:27, Lionel Bouton wrote:
>
>> Hi,
>>
>> 1.4.0 is released on sourceforge. There was a window left for SQL
>> injection that was reported this morning; it is fixed in this version.
>
> Appears good. A few thoughts though:
>
> - Shouldn't sqlgrey be placed in /usr/sbin rather than /usr/bin?

Makes sense to me.

> - Ever thought of a "live update" of the whitelists rather than
> supplying them with the source/rpm? I.e. sqlgrey, in say weekly
> intervals, loading them from sqlgrey.sf.net?

Nice idea. I don't want to bloat SQLgrey with the download code (I'm already worried by its size and what it will be like with SPF support), but I sure can add a hook to make it reload the main whitelists on a SIGHUP, for example. Then it's only a matter of a simple script that will fetch the download URLs from the conf file, download the whitelists, make some simple checks, replace the whitelist files and send SIGHUP to sqlgrey.

I don't think distributing them from sourceforge is acceptable under sourceforge policy, but I can set up an alternate distribution server (in fact, Gentoo users can already install SQLgrey from the sources on my server).

> - Is /var/sqlgrey really necessary? Wouldn't it be enough to start
> sqlgrey in /tmp?

For MySQL and PostgreSQL users, /var/sqlgrey isn't needed at all. But SQLite users need a working directory for the database, and the RPM can't guess which database will be used. As the answer really isn't obvious, I'll add this to the FAQ.

Best regards,

Lionel.

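The "simple script" described above could look roughly like this. A hypothetical sketch in Perl: the download URL, whitelist path and pid-file location are assumptions, and a real script should read the URLs from the conf file and perform stricter checks.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::Simple qw(getstore is_success);
    use File::Copy qw(move);

    my $url  = 'http://sqlgrey.example.org/clients_ip_whitelist';
    my $dest = '/etc/sqlgrey/clients_ip_whitelist';
    my $tmp  = "$dest.new";

    # download to a temporary file so a failure keeps the old list
    is_success(getstore($url, $tmp)) or die "download failed\n";
    -s $tmp or die "downloaded file is empty, keeping the old list\n";
    move($tmp, $dest) or die "move failed: $!\n";

    # tell the running daemon to reload its whitelists
    open my $fh, '<', '/var/run/sqlgrey.pid' or die "no pid file: $!\n";
    chomp(my $pid = <$fh>);
    kill 'HUP', $pid or die "could not signal $pid: $!\n";

Run from cron, as suggested elsewhere in the thread, this gives the weekly "live update" without adding any download code to SQLgrey itself.
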
From: HaJo S. <ha...@ha...> - 2004-12-13 11:47:16
On Sat, 2004-12-11 at 02:27, Lionel Bouton wrote:

> Hi,
>
> 1.4.0 is released on sourceforge. There was a window left for SQL
> injection that was reported this morning; it is fixed in this version.

Appears good. A few thoughts though:

- Shouldn't sqlgrey be placed in /usr/sbin rather than /usr/bin?
- Ever thought of a "live update" of the whitelists rather than supplying them with the source/rpm? I.e. sqlgrey, in say weekly intervals, loading them from sqlgrey.sf.net?
- Is /var/sqlgrey really necessary? Wouldn't it be enough to start sqlgrey in /tmp?

Tnx,
HaJo

--
HaJo Schatz <ha...@ha...>
http://www.HaJo.Net
PGP-Key: http://www.hajo.net/hajonet/keys/pgpkey_hajo.txt

From: HaJo S. <ha...@ha...> - 2004-12-10 18:48:43
On Fri, 2004-12-10 at 03:36, Lionel Bouton wrote:

> Hi,
>
> since I released 1.3.6 10 days ago there has been no bug report. I'm
> inclined to release 1.3.6 as the 1.4.0 release (with only minor
> packaging changes). Is there anyone on the list seeing a show-stopper?
> For example, I don't plan to add indices in 1.4.0; is anyone suffering
> from a performance problem that they would want to see fixed with a
> known index?

1.3.6 is simply running, running & running for me. I'd dare to call this a stable release, but of course I might not be authoritative...

HaJo

--
HaJo Schatz <ha...@ha...>
http://www.HaJo.Net
PGP-Key: http://www.hajo.net/hajonet/keys/pgpkey_hajo.txt

From: Lionel B. <lio...@bo...> - 2004-12-10 18:27:38
Hi,

1.4.0 is released on sourceforge. There was a window left for SQL injection that was reported this morning; it is fixed in this version. People still running 1.2.x versions are strongly advised to upgrade to 1.4.0.

The documentation should be up to date for the new whitelisting mechanisms; see the HOWTO file for details on how to handle the occasional odd MTA configuration that doesn't play well with greylisting.

Happy greylisting,

Lionel.

From: Lionel B. <lio...@bo...> - 2004-12-09 19:37:17
Hi,

since I released 1.3.6 10 days ago there has been no bug report. I'm inclined to release 1.3.6 as the 1.4.0 release (with only minor packaging changes). Is there anyone on the list seeing a show-stopper? For example, I don't plan to add indices in 1.4.0; is anyone suffering from a performance problem that they would want to see fixed with a known index?

Happy greylisting :-)

Lionel.

From: Lionel B. <lio...@bo...> - 2004-12-01 18:04:55
Derek Battams wrote the following on 01.12.2004 17:52 :

> On Wed, December 1, 2004 5:12, Lionel Bouton said:
>
>> [...]
>> Did you need to change the spec file?
>
> Yes, I had to change the spec file slightly.

I'll try to put most of your changes into future releases to help minimize the number of patch lines you'll need.

> Basically I had to assign a specific UID to the sqlgrey user (since
> TSL 2.2 actually assigned a system account for sqlgrey) and I also had
> to create the sqlgrey group if it did not exist.

I'll look into it and see if I can provide a specfile that creates the user and the group consistently on both Fedora and TSL.

> I also had to patch init/sqlgrey since the /etc/rc.d/init.d/functions
> file does not exist in TSL 2.2 (it's actually just
> /etc/init.d/functions).

This has been valid for Fedora and RedHat for ages too. I just switched to /etc/init.d/functions in my source tree.

> And finally I patched etc/sqlgrey.conf to make SQLite the default
> database (which is how TSL set it up with their 1.2.0 RPM that is
> included with the distro).

I won't touch that: one config file can't fit every distribution, so I'll rely on people like you for distribution-specific configuration.

Best regards,

Lionel.

From: Derek B. <de...@ba...> - 2004-12-01 16:52:51
On Wed, December 1, 2004 5:12, Lionel Bouton said:

> Derek Battams wrote the following on 01.12.2004 05:23 :
>
>> As an aside, if anyone uses Trustix Secure Linux 2.2 and wants an RPM
>> or SRPM for 1.3.6 on TSL 2.2, I built them and they're available at:
>>
>> http://www.battams.ca/software/tsl22/
>
> Did you need to change the spec file?

Yes, I had to change the spec file slightly. Basically I had to assign a specific UID to the sqlgrey user (since TSL 2.2 actually assigned a system account for sqlgrey) and I also had to create the sqlgrey group if it did not exist.

I also had to patch init/sqlgrey, since the /etc/rc.d/init.d/functions file does not exist in TSL 2.2 (it's actually just /etc/init.d/functions). And finally I patched etc/sqlgrey.conf to make SQLite the default database (which is how TSL set it up with their 1.2.0 RPM that is included with the distro).

Nothing serious, just some minor changes.

- Derek

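The user/group creation Derek describes typically lives in the spec file's %pre scriptlet. A hypothetical sketch; the flags and paths are assumptions about the target distributions, not TSL's actual packaging:

    %pre
    # create group and system account on first install if missing
    getent group sqlgrey >/dev/null || groupadd -r sqlgrey
    getent passwd sqlgrey >/dev/null || \
        useradd -r -g sqlgrey -d /var/sqlgrey -s /sbin/nologin sqlgrey
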
From: Lionel B. <lio...@bo...> - 2004-12-01 10:12:18
Derek Battams wrote the following on 01.12.2004 05:23 :

> Just upgraded to 1.3.6 and specified 'smart' for the algorithm. New
> connections to the server are adding just the first three bytes of the
> IP address to the connect table, as expected, but the old entries in
> the connect and *_awl tables still contain the full four bytes for the
> IP address. Do I need to delete/update those entries?

No, SQLgrey will clean them up automatically (in 24 hours for the connect table and 60 days for the others, by default); in the meantime it will just create new ones. If your goal is to inspect the tables' content easily, you can clean them manually:

DELETE FROM <table> WHERE <tstamp_column> < NOW() - INTERVAL '1 DAY';

> Since the four-byte entries weren't updated on the upgrade, I assume
> this means that reconnects for the four-byte entries will not match
> (unless I manually update the tables)?

They will not match the old entries (unless 'smart' decides that it can't trust the host and uses the whole IP) and will create new ones, yes.

> As an aside, if anyone uses Trustix Secure Linux 2.2 and wants an RPM
> or SRPM for 1.3.6 on TSL 2.2, I built them and they're available at:
>
> http://www.battams.ca/software/tsl22/

Did you need to change the spec file?

Lionel.

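For readers unfamiliar with it, the three-byte entries come from the 'smart' algorithm keying on the sender's /24 network instead of the full address, so a retry from a different host in the same outgoing farm still matches. A toy illustration in Perl; the real algorithm applies more heuristics before deciding whether to truncate, as Lionel notes above:

    # hypothetical illustration, not SQLgrey's actual code
    my $ip = '192.0.2.42';
    (my $class_c = $ip) =~ s/\.\d+$//;    # "192.0.2", used as the key
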