From: Lionel B. <lio...@bo...> - 2004-12-14 23:55:00
Farkas Levente wrote the following on 12/14/04 18:08 :

>> [...] Anyway, with auto-whitelisting your users shouldn't notice
>> much of a delay. Just make the switch on Friday evening and let the
>> marginal week-end traffic populate the auto-whitelist tables.
>
> that's not so easy! most of these important emails come a few hours
> before deadlines (usually Tuesday 18:00 and Thursday 18:00), which
> makes the thing a bit complicated :-(

Understandable. You might then want to use an opt-in policy for
greylisting. Greylisting is a tradeoff, and auto-whitelists can only
make it less painful. You should make your users aware that either:
- you use greylisting for all of them, which means that poorly
  configured mail servers won't deliver in a timely manner (and some
  rare ones never will), but on the other hand their SPAM level is
  less than half what it would otherwise be (remember that asking the
  sender to resend the message will solve the problem in most cases),
  or
- you use greylisting on an opt-in basis, and they have to choose what
  they consider more important: less SPAM or "instant messaging".
  Their choice, their responsibility.

>> In the best case, to ease transition, what I *could* add is a way
>> to plug another policy daemon into sqlgrey and let the messages
>> pass and populate the database when the other policy daemon answers
>> "DUNNO" or "PREPEND *". That would need a bit of tweaking to avoid
>> asking the other policy daemon when not needed. It will not make it
>> to the top of my TODO list in the near future though (the more code
>> goes in, the less manageable the project becomes).
>
> if easier, it can be a solution too.

As you can see, I'm buried alive under enhancement requests! But
SQLgrey is open source, so feel free to add the feature you need if
I'm not fast enough.

>> - if needed (sqlgrey can cope with database unavailability),
>> configure replication to a slave database.
>
> is it currently possible?

It depends on the database system.
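For anyone curious about the policy-daemon chaining idea quoted above:
it builds on Postfix's policy delegation protocol, where each request
is a set of "name=value" lines terminated by a blank line and the
answer comes back as an "action=..." line. A minimal sketch of how a
greylister could forward a request to an upstream daemon and let the
message pass when the answer is "dunno" or "prepend ..." (the socket
path, attribute set, and helper names are illustrative, not SQLgrey's
actual code):

```python
import socket

def query_policy_daemon(sock_path, attributes):
    """Send one Postfix policy request to an upstream daemon and
    return the action string from its 'action=...' reply line."""
    request = "".join(f"{k}={v}\n" for k, v in attributes.items()) + "\n"
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(request.encode())
        reply = b""
        while not reply.endswith(b"\n\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
    for line in reply.decode().splitlines():
        if line.startswith("action="):
            return line[len("action="):]
    return "dunno"  # fail open if the upstream answer is unparsable

def upstream_lets_it_pass(action):
    """The pass-through rule discussed above: accept when the other
    daemon answers 'dunno' or 'prepend ...' (case-insensitive)."""
    first_word = action.split(None, 1)[0].lower()
    return first_word in ("dunno", "prepend")
```

With such a helper, SQLgrey would only greylist (and populate its
database) when the upstream daemon declines to decide.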
Currently SQLgrey only connects to one database (which would be the
master) though.

>> If the database fails to answer properly (connection down or SQL
>> errors), sqlgrey won't crash (this was fully debugged not long ago)
>
> what does "won't crash" mean? does it respond with a DUNNO in this
> case?

Yes.

> [...]
>
>> *failure*, but you will have a single point of *inefficiency*: the
>> database.
>
> that's far better!

I didn't want to add another point of failure, and trust me: when
there were bugs in the handling of database failures, users were quick
to report them!

>> If you use a slave database, you can either: make it take over the
>> master's IP, update the DNS to reroute sqlgrey connections to the
>> slave (my personal choice - put low TTLs in the DNS - this way you
>> don't have to take the master fully down if only its database
>> system is down), or (if it can be done quickly) put the master back
>> online.
>
> imho it'd be better to be able to configure more (slave) sql servers
> for sqlgrey instead of dns manipulation.

I'm not sure I understand. Do you mean that SQLgrey should directly
access several servers and update all of them, or do you want
replication done at the database level (SQLgrey being unaware of the
process replicating the master to the slaves)?

The former is doable but would be quite complex: SQLgrey would have to
support adding an empty database to a farm of databases, populating
the tables of each database that is missing the data allowing a
message to pass whenever at least one other database has the data that
makes SQLgrey decide to accept it. This must be done at every step of
the decision process (valid previous connection attempt, e-mail
auto-whitelist entry, domain auto-whitelist entry). This will:
- be slow!
- get slower each time you add a new database,
- be limited by the least responsive of your databases,
but be really, really robust (the replication is handled outside the
databases and there's no need for a 2-phase COMMIT, which is a real
bonus). If this is what you want, I'm afraid it should be another
project: SQLgrey's current model is not the best suited for this.
You'd want to make the same request to different databases in parallel
and wait for all of them to complete or time out, mark databases as
faulty to avoid spending time waiting for timeouts, ...

In the latter case, you want SQLgrey to be aware that there is a
replication process occurring between several databases and that one
of them is the master. You want to ensure that only one is in RW mode,
that this one is known by SQLgrey, and that when it goes down an
external process decides which slave becomes the master and:
- does what's needed to reconfigure it as the master,
- signals each SQLgrey server to use this new one.

For that, I only see one thing needed on SQLgrey's side: modifying
SQLgrey to allow on-the-fly reconnection to another database. The rest
is database specific.

But I don't really see the benefit of making this so complex; usually
replicated databases come with what's needed to make a slave replace a
master by taking over its IP address. In that case SQLgrey will work
correctly out of the box.

> imho it'd be enough to switch to the slave, and the slave can
> replicate the master e.g. once a day (before becoming the master)

I'm not sure I understand.

Lionel.
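PS (editor's sketch): the two behaviours discussed above - on-the-fly
reconnection to another database and answering DUNNO when nothing is
reachable - can be pictured as a connector that walks an ordered list
of candidate databases. The class, function, and connector names below
are mine, not SQLgrey's actual code:

```python
class FailoverConnector:
    """Walk an ordered list of connect functions; remember the one
    that worked, and start over from the top after a failure."""
    def __init__(self, connectors):
        self.connectors = list(connectors)
        self.current = None

    def connect(self):
        for connect in self.connectors:
            try:
                self.current = connect()
                return self.current
            except Exception:
                continue  # this candidate is down, try the next one
        self.current = None
        return None  # nothing reachable at all

def policy_action(pool):
    """Fail-open behaviour: with no database available, answer
    'dunno' instead of crashing or deferring mail."""
    handle = pool.connect()
    if handle is None:
        return "dunno"
    return "database-backed decision"  # placeholder for the real check

# Illustration: the master is down, the slave answers
def master(): raise ConnectionError("master down")
def slave(): return "slave-handle"
```

This keeps the database a single point of inefficiency rather than a
single point of failure, as described above.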