From: Farkas L. <lf...@bp...> - 2004-12-14 14:04:47
hi,

we may switch from postgrey to sqlgrey. AFAIK they use a similar database setup, since sqlgrey is a fork of postgrey. is there any way to migrate from postgrey to sqlgrey? we wouldn't like to start with a fresh, clean database, since that would cause a lot of headaches for all of the currently confirmed sender/recipient pairs. a migration path would be very useful.

another question: is there any HOWTO/README or other short description of how to use e.g. MySQL data replication among the MX mail servers with sqlgrey? that's the only reason we'd like to switch to sqlgrey: we _must_ use one greylist server (otherwise greylisting is unusable), and currently if our postgrey server dies (the service or the whole machine), then postfix dies on all of our MXes (postfix can't work without its configured policy server). so one good solution would be to use a separate SQL server on each MX host and replicate the database among them. any advice? thanks in advance.

-- 
Levente "Si vis pacem para bellum!"
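[For reference: Postfix talks to policy daemons such as postgrey and sqlgrey over a simple text protocol. Each request is a block of `name=value` lines ended by an empty line, and the daemon answers with a single `action=` line ("dunno", "prepend ...", "defer_if_permit ...", ...). A minimal sketch of that exchange in Python; illustrative only, sqlgrey itself is written in Perl.]

```python
def parse_policy_request(blob):
    """Parse one Postfix policy request: name=value lines terminated
    by an empty line."""
    attrs = {}
    for line in blob.strip().splitlines():
        name, _, value = line.partition("=")
        attrs[name] = value
    return attrs

def format_policy_reply(action):
    """A policy reply is a single action line followed by an empty line.
    'dunno' means: no opinion, let other restrictions decide."""
    return "action=%s\n\n" % action

# Example request, shaped like what Postfix would send:
request = ("request=smtpd_access_policy\n"
           "sender=levente@example.com\n"
           "recipient=user@example.org\n"
           "client_address=1.2.3.4\n\n")
attrs = parse_policy_request(request)
```

This is why postfix "dies" when the policy server is gone: each smtpd session waits on this exchange before deciding the recipient.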
From: Lionel B. <lio...@bo...> - 2004-12-14 15:35:36
Farkas Levente wrote the following on 12/14/04 15:04:

> hi, we may switch from postgrey to sqlgrey. AFAIK they use a similar
> database setup, since sqlgrey is a fork of postgrey. is there any way
> to migrate from postgrey to sqlgrey? we wouldn't like to start with a
> fresh, clean database, since that would cause a lot of headaches for
> all of the currently confirmed sender/recipient pairs.

Currently there's no way of migrating the greylisting data in the postgrey database to sqlgrey. I'm not sure reusing postgrey data is a good idea; I'd have to check in detail the postgrey version you use to see if it would be doable. That would be quite a lot of work, and I don't think I can find the time for such a specific task.

Anyway, with auto-whitelisting your users shouldn't notice much of a delay. Just make the switch on a Friday evening and let the marginal week-end traffic populate the auto-whitelist tables.

In the best case, to ease the transition, what I *could* add is a way to plug another policy daemon into sqlgrey and let messages pass and populate the database when that other policy daemon answers "DUNNO" or "PREPEND *". That would need a bit of tweaking so sqlgrey doesn't ask the other policy daemon when it isn't needed. It will not make the top of my TODO list in the short term though (the more code goes in, the less manageable the project becomes).

> another question: is there any HOWTO/README or other short description
> of how to use e.g. MySQL data replication among the MX mail servers
> with sqlgrey? that's the only reason we'd like to switch to sqlgrey:
> we _must_ use one greylist server (otherwise greylisting is unusable),
> and currently if our postgrey server dies (the service or the whole
> machine), then postfix dies on all of our MXes (postfix can't work
> without its configured policy server). so one good solution would be
> to use a separate SQL server on each MX host and replicate the
> database among them. any advice?
Configuring replication between different MySQL databases would imply a master-master setup (you want each database to be able to write to the others), and it could trigger nasty primary key clashes. IIRC MySQL replication doesn't ensure that a statement has updated all databases by the time it is committed: two databases could add the same primary key to one of SQLgrey's tables -> expect nasty errors (and look in the MySQL docs for how it handles such situations).

In your configuration, I'd advise you to do the following:
- use one sqlgrey instance on each of your postfix servers that does greylisting, all configured to use the same database,
- if needed (sqlgrey can cope with database unavailability), configure replication to a slave database.

If the database fails to answer properly (connection down or SQL errors), sqlgrey won't crash (this was fully debugged not long ago) but will try to reconnect on each new incoming message and, until the database comes back, will let messages pass (no greylisting) to ensure no legitimate traffic is blocked. You will not have a single point of *failure*, but you will have a single point of *inefficiency*: the database.

If you use a slave database, you can either: make it take over the master's IP; update the DNS to reroute sqlgrey connections to the slave (my personal choice - put low TTLs in the DNS; this way you don't have to take the master fully down if only its database system is down); or (if it can be done quickly) put the master back online. After switching to the slave, you'll probably have to make it the master and reconfigure your old master as a slave.

If you do this, please consider writing a quick HOWTO describing the failure handling and bringing the master back online. Even something sketchy will do; I can write the final HOWTO, but I need the experience of someone who actually set up and tested a master-slave scenario.
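[The fail-open behaviour described above - reconnect on each message, answer DUNNO while the database is down - can be sketched roughly as follows. This is illustrative Python, not sqlgrey's actual Perl code; `open_db` is a hypothetical helper and sqlite3 stands in for the real MySQL/PostgreSQL backend.]

```python
import sqlite3
import time

DELAY = 300  # seconds a new (ip, sender, recipient) triplet is deferred

def open_db():
    """Hypothetical helper: open the greylist database."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE connect
                  (ip TEXT, sender TEXT, rcpt TEXT, first_seen REAL,
                   PRIMARY KEY (ip, sender, rcpt))""")
    return db

def policy_action(db_factory, ip, sender, rcpt, now=None):
    """Fail-open greylisting decision: any database error yields 'dunno'
    (let the message pass) instead of crashing or blocking mail."""
    now = time.time() if now is None else now
    try:
        db = db_factory()  # re-attempted on every message while the db is down
        row = db.execute(
            "SELECT first_seen FROM connect "
            "WHERE ip=? AND sender=? AND rcpt=?", (ip, sender, rcpt)).fetchone()
        if row is None:
            db.execute("INSERT INTO connect VALUES (?,?,?,?)",
                       (ip, sender, rcpt, now))
            db.commit()
        if row is None or now - row[0] < DELAY:
            return "defer_if_permit Greylisted, try again later"
        return "dunno"  # triplet waited long enough: accept
    except sqlite3.Error:
        return "dunno"  # database down: no greylisting, mail passes
```

The single point of *inefficiency* is visible here: when the `except` branch fires, every message gets "dunno" and greylisting silently stops until the database answers again.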
In your case, there are two things I think SQLgrey still lacks:
- e-mailing the postmaster or a configurable address when the database goes down/comes back online (currently on my TODO list); work-around: use Nagios to monitor your databases,
- automatic switching to fallback databases. For this to be implemented correctly, I need to know how each RDBMS handles a master coming back online (will it become a slave, as I believe, or will it eventually take its master role back? how should SQLgrey check which database is the master at startup time? think of a SQLgrey restart after the master went down and came back as a slave...).

Finally, if you are worried about MySQL stability, consider PostgreSQL. I've not yet personally seen MySQL crash on me, but I worked as a software developer for a company that uses MySQL under really high load (2x 4-way Hyperthreading Xeons in a master-slave setup that just keep their heads above water) and they had crashes with 4.0 versions at least once a month. PostgreSQL seems more robust to me. In the end you should use the database you are most comfortable with (have already successfully backed up and restored, know how to optimize for speed or memory footprint, and so on).

> thanks in advance.

You're welcome,
Lionel.
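[The Nagios work-around mentioned above can start as small as a TCP check against the database port. A sketch of a Nagios-plugin-style check in Python; a real plugin would also authenticate and run a trivial query.]

```python
import socket

def check_tcp(host, port, timeout=3.0):
    """Nagios-plugin-style check: return (exit_code, message) using the
    usual convention 0 = OK, 2 = CRITICAL."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0, "OK - %s:%d accepting connections" % (host, port)
    except OSError as exc:
        return 2, "CRITICAL - %s:%d unreachable (%s)" % (host, port, exc)
```

Run it from Nagios (or cron) against port 3306/5432 and you at least learn that sqlgrey has stopped greylisting, even before SQLgrey itself can send notifications.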
From: Farkas L. <lf...@bp...> - 2004-12-14 17:08:19
Lionel Bouton wrote:
> Currently there's no way of migrating the greylisting data in the
> postgrey database to sqlgrey. I'm not sure reusing postgrey data is a
> good idea; I'd have to check in detail the postgrey version you use to
> see if it would be doable.

always the latest :-) I package the Red Hat/Fedora postgrey RPMs.

> That would be quite a lot of work and I don't think I can find the time
> for such a specific task. Anyway, with auto-whitelisting your users
> shouldn't notice much of a delay. Just make the switch on a Friday
> evening and let the marginal week-end traffic populate the
> auto-whitelist tables.

that's not so easy! most of these important emails come a few hours before deadlines (usually Tuesday 18:00 and Thursday 18:00), which makes the thing a bit complicated :-(

> In the best case, to ease the transition, what I *could* add is a way
> to plug another policy daemon into sqlgrey and let messages pass and
> populate the database when that other policy daemon answers "DUNNO" or
> "PREPEND *". [...]

if that's easier, it can be a solution too.

> Configuring replication between different MySQL databases would imply a
> master-master setup [...] -> expect nasty errors.
>
> In your configuration, I'd advise you to do the following:
> - use one sqlgrey instance on each of your postfix servers that does
> greylisting, all configured to use the same database,

if sqlgrey is no better than postgrey in terms of failure handling, then I don't see any good reason to switch to sqlgrey. do you see one?

> - if needed (sqlgrey can cope with database unavailability), configure
> replication to a slave database.

is that currently possible?

> If the database fails to answer properly (connection down or SQL
> errors), sqlgrey won't crash (this was fully debugged not long ago)

what does "won't crash" mean? does it respond with DUNNO in this case?

> but will try to reconnect on each new incoming message and, until the
> database comes back, will let messages pass (no greylisting) to ensure
> no legitimate traffic is blocked. You will not have a single point of

aha.

> *failure*, but you will have a single point of *inefficiency*: the
> database.

that's far better!

> If you use a slave database, you can either: make it take over the
> master's IP; update the DNS to reroute sqlgrey connections to the slave
> (my personal choice - put low TTLs in the DNS [...]); or (if it can be
> done quickly) put the master back online.

IMHO it'd be better to be able to configure more (slave) SQL servers for sqlgrey instead of DNS manipulation.

> After switching to the slave, you'll probably have to make it the
> master and reconfigure your old master as a slave. If you do this,
> please consider writing a quick HOWTO [...]

this DNS trick doesn't really appeal to me, so we probably won't use it.

> In your case, there are two things I think SQLgrey still lacks:
> - e-mailing the postmaster or a configurable address when the database
> goes down/comes back online (currently on my TODO list); work-around:
> use Nagios to monitor your databases,

we use Nagios, but automatic failover is what I'd really like to see.

> - automatic switching to fallback databases. For this to be implemented
> correctly, I need to know how each RDBMS handles a master coming back
> online (will it become a slave, as I believe, or will it eventually
> take its master role back? [...])

IMHO it'd be enough to switch to the slave; the slave can replicate the master e.g. once a day (before becoming the master), and Nagios can notify the sysadmin to restore the master manually and restart sqlgrey manually to use the original master. this scenario, as a first step, would already be better than the current one.

NEVER SWITCH BACK TO THE ORIGINAL MASTER AUTOMATICALLY! I think some kind of manual configuration is acceptable in this case.

> Finally, if you are worried about MySQL stability, consider PostgreSQL.
> [...] In the end you should use the database you are most comfortable
> with.

no, I'm not worried about any kind of SQL server. I worry about the system, i.e. the machine itself (hardware, network, etc.). I haven't seen any SQL server crash either, but I've seen many kinds of hardware and network problems.

-- 
Levente "Si vis pacem para bellum!"
From: Lionel B. <lio...@bo...> - 2004-12-14 23:55:00
Farkas Levente wrote the following on 12/14/04 18:08:

>> [...] Anyway, with auto-whitelisting your users shouldn't notice much
>> of a delay. Just make the switch on a Friday evening and let the
>> marginal week-end traffic populate the auto-whitelist tables.
>
> that's not so easy! most of these important emails come a few hours
> before deadlines (usually Tuesday 18:00 and Thursday 18:00), which
> makes the thing a bit complicated :-(

Understandable; you might want to use an opt-in policy for greylisting. Greylisting is a tradeoff that auto-whitelists can only make less painful. You should make your users aware that either:
- you use greylisting for all of them, which means that poorly configured mail servers won't deliver in a timely manner (and some rare ones never will), but on the other hand their SPAM level is less than half what it would otherwise be (remember that asking the sender to resend the message will solve the problem in most cases), or
- you use greylisting on an opt-in basis and they have to choose what they consider more important: less SPAM or "instant messaging". Their choice, their responsibility.

>> In the best case, to ease the transition, what I *could* add is a way
>> to plug another policy daemon into sqlgrey [...]
>
> if that's easier, it can be a solution too.

As you can see, I'm buried alive under enhancement requests! But SQLgrey is open source; feel free to add the feature you need if I'm not fast enough.

>> - if needed (sqlgrey can cope with database unavailability),
>> configure replication to a slave database.
>
> is that currently possible?

It depends on the database system. Currently SQLgrey only connects to one database (which would be the master), though.

>> If the database fails to answer properly (connection down or SQL
>> errors), sqlgrey won't crash (this was fully debugged not long ago)
>
> what does "won't crash" mean? does it respond with DUNNO in this case?

Yes.

> [...]
>> *failure*, but you will have a single point of *inefficiency*: the
>> database.
>
> that's far better!

I didn't want to add another point of failure, and trust me: when there were bugs in the handling of database failures, users were quick to report them!

>> If you use a slave database, you can either: make it take over the
>> master's IP; update the DNS to reroute sqlgrey connections to the
>> slave [...]; or (if it can be done quickly) put the master back
>> online.
>
> IMHO it'd be better to be able to configure more (slave) SQL servers
> for sqlgrey instead of DNS manipulation.

I'm not sure I understand. Do you mean that SQLgrey should directly access several servers and update all of them, or do you want replication done at the database level (SQLgrey being unaware of the process replicating the master to the slaves)?

The former is doable but would be quite complex: SQLgrey would have to support adding an empty database to a farm of databases, and populate the tables of any database missing the data that lets a message pass whenever at least one other database holds the data making SQLgrey decide to accept it. This must be done at every step of the decision process (valid previous connection attempt, e-mail auto-whitelist entry, domain auto-whitelist entry). This will:
- be slow!
- get slower each time you add a new database,
- be limited by the least responsive of your databases,
but be really, really robust (the replication is handled outside the databases and there's no need for two-phase COMMIT, which is a real bonus). If this is what you want, I'm afraid it should be another project: SQLgrey's current model is not well suited for it. You'd want to make the same request to different databases in parallel and wait for all of them to complete or time out, mark databases as faulty to avoid spending time waiting for timeouts, ...

In the latter case, you want SQLgrey to be aware that a replication process is occurring between several databases, one of which is the master. You want to ensure that only one is in RW mode, that this one is known by SQLgrey, and that when it goes down an external process decides which slave becomes the master and:
- does what's needed to reconfigure it as the master,
- signals each SQLgrey server to use this new one.

For that, I only see one thing needed on SQLgrey's side: modify SQLgrey to allow on-the-fly reconnection to another database. The rest is database specific. But I don't really see the benefit of making this so complex; usually replicated databases come with what's needed to make a slave replace a master by taking over its IP address. In that case SQLgrey will work correctly out of the box.

> IMHO it'd be enough to switch to the slave; the slave can replicate
> the master e.g. once a day (before becoming the master)

I'm not sure I understand.

Lionel.
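[The "farm" variant dismissed above would look roughly like this. Hypothetical sketch: `backends` are callables standing in for per-database lookups; every database is asked in parallel, any positive answer wins, and faulty members are simply skipped.]

```python
from concurrent.futures import ThreadPoolExecutor

def known_by_any(backends, triplet, timeout=2.0):
    """Ask every backend in parallel whether it has seen the triplet.
    Errors and timeouts just drop that backend's vote, which is why the
    overall latency is bounded by the least responsive database."""
    with ThreadPoolExecutor(max_workers=max(1, len(backends))) as pool:
        futures = [pool.submit(backend, triplet) for backend in backends]
        hit = False
        for future in futures:
            try:
                hit = hit or bool(future.result(timeout=timeout))
            except Exception:
                pass  # faulty or slow backend: ignore its answer
        return hit
```

Note what the sketch does *not* do: write the triplet back into the databases that were missing it, which is the part that makes the real farm approach complex at every step of the decision process.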
From: Farkas L. <lf...@bp...> - 2004-12-15 09:34:52
Lionel Bouton wrote:
>> that's not so easy! most of these important emails come a few hours
>> before deadlines (usually Tuesday 18:00 and Thursday 18:00), which
>> makes the thing a bit complicated :-(
>
> Understandable; you might want to use an opt-in policy for greylisting.
> [...] you use greylisting on an opt-in basis and they have to choose
> what they consider more important: less SPAM or "instant messaging".
> Their choice, their responsibility.

as always, they would like both :-) but that's the current situation anyway: postgrey's database already contains most of our partners' email addresses, so in most cases there is no delay. if I start with a fresh/clean sqlgrey, the delay happens with all emails :-(((

>>> - if needed (sqlgrey can cope with database unavailability),
>>> configure replication to a slave database.
>>
>> is that currently possible?
>
> It depends on the database system. Currently SQLgrey only connects to
> one database (which would be the master), though.

so currently I can't configure a slave :-(

>> IMHO it'd be better to be able to configure more (slave) SQL servers
>> for sqlgrey instead of DNS manipulation.
>
> I'm not sure I understand. Do you mean that SQLgrey should directly
> access several servers and update all of them, or do you want
> replication done at the database level (SQLgrey being unaware of the
> process replicating the master to the slaves)?

no. first of all, my main question: why is it worth switching from postgrey to sqlgrey (or any other greylist server)?

in normal circumstances, all MX hosts have to use the same greylist database, otherwise the basic idea fails (the delay can be too long). that does not mean every MX must use the same greylist server, but they have to use greylist servers which use the same database. currently we use one postgrey server and all other MXes connect to it. but this is a single point of failure! so the only reason I see to switch to another greylist server is to avoid this single point of failure.

but there is one more thing: the failure is usually not in the greylist server (postgrey never stops if configured well); the critical part is the machine itself which runs the greylist server. there can be hardware problems and there can be network problems.

what I can imagine when we use an SQL server as the database: each MX runs its own greylist server and all greylist servers connect to the same SQL server. but in this case the same single point of failure exists: the SQL server's machine! so if I could configure more than one SQL server for each greylist server, and the SQL servers replicated the database among each other, then the single point of failure would disappear. that would be a reason to switch.

> The former is doable but would be quite complex: SQLgrey would have to
> support adding an empty database to a farm of databases [...] This must
> be done at every step of the decision process (valid previous
> connection attempt, e-mail auto-whitelist entry, domain auto-whitelist
> entry).

IMHO replication is not sqlgrey's responsibility!

> If this is what you want, I'm afraid it should be another project:
> SQLgrey's current model is not well suited for it [...]

I hope my explanation above shows what I want :-)

> In the latter case, you want SQLgrey to be aware that a replication
> process is occurring between several databases [...] an external
> process decides which slave becomes the master and:
> - does what's needed to reconfigure it as the master,
> - signals each SQLgrey server to use this new one.

don't go that far! it seems to me that you always assume the failure is at the SQL server level. I repeat myself: I trust the SQL server (it has never died on me), but I do not trust the machine and the network! that's what I'd like to guard against.

usually that's the main reason for slave servers:
- that's why there are multiple MXes,
- that's why there are slave LDAP servers,
- that's why there are backup domain controllers on Windows,
- it's a bit similar to RAID-1/5/6 (one or two disks can fail at the same time, but no more).

so if there is one master and it fails (i.e. it's not reachable), the greylist server can switch to another one which has the same (or almost the same) database (i.e. replicated, e.g. within the last hour). that can be enough! the sysadmin can then notice and fix the problem (just like fixing the MX, LDAP server, or domain controller, or replacing the failed disk in the examples above).

> For that, I only see one thing needed on SQLgrey's side: modify SQLgrey
> to allow on-the-fly reconnection to another database. [...]
>
>> IMHO it'd be enough to switch to the slave; the slave can replicate
>> the master e.g. once a day (before becoming the master)
>
> I'm not sure I understand.

I hope you now understand my main problem/requirements/wishes for a greylist server. if still not, it may be because of my bad English :-(

yours.
-- 
Levente "Si vis pacem para bellum!"
From: Lionel B. <lio...@bo...> - 2004-12-15 11:20:39
Farkas Levente wrote the following on 12/15/04 10:35:

> no. first of all, my main question: why is it worth switching from
> postgrey to sqlgrey (or any other greylist server)?
> [...] so if I could configure more than one SQL server for each
> greylist server, and the SQL servers replicated the database among
> each other, then the single point of failure would disappear. that
> would be a reason to switch.

In the case of one sqlgrey instance on each MX, even if you don't have *any* slave, as I explained you won't have any single point of failure (you won't have any greylisting while the database is down, though).

> IMHO replication is not sqlgrey's responsibility!

I couldn't agree more :-)

> [...] - that's why there are slave LDAP servers,

The devil is in the details. If I understand correctly, you want SQLgrey instances to know a slave or a list of slaves and fall back automatically to the slave (or one of the slaves) if the master doesn't answer.

What I don't know is how you will solve the following problems related to the writes SQLgrey makes to the database:
- one SQLgrey instance can't access the master due to a temporary link problem, switches to the slave, and tries to update its database content even though the slave won't authorize it (you can't allow writes to a slave or you'll end up with consistency problems or PRIMARY KEY collisions, as I explained in a previous message). Should it scan the list of servers until it finds the one accepting writes?
- in the case of multiple slaves, how do you make every SQLgrey instance aware of which one among them has become the new master?

If you want to try each server in a list looking for one that accepts writes, assuming one and only one server in the pool accepts them, pay attention to the fact that managing your SQL server pool will be quite error prone. You only have to forget to forbid writes on a database coming back from a failure, and you end up with the whole platform rejecting e-mails in a more or less random fashion.

Here are simple questions to make sure we are speaking of the same things. Do you agree with the following statements?
- one and only one SQL server should accept writes from every SQLgrey instance. Let's call it the RW server (read-write).
- all other SQL servers replicate the content of the server above in a timely manner (using MySQL's replication process, for example). Let's call them the RO servers (read-only).
- when the RW server fails, either:
  . one of the slaves, and *only* one, switches to RW status, making sure all the other RO servers are now synchronising with it (this is not a simple process; I don't know how MySQL replication handles the fact that the RO servers will not necessarily have the same content when the RW server fails), or
  . none of them switches to RW, and SQLgrey now only has RO databases, so it cannot store new connection attempts: it can't greylist anymore, so it *must* let every message pass (-> the slaves won't be used at all).
- when a failed server comes back online, it must do so in a RO state if there already is a RW server. If not, it can become the RW server, but it should have the most recent data available in its database when doing so (or your auto-whitelisting efficiency will suffer and you can expect reconnections to be treated as brand new connections).

If you don't agree with one of them, please explain why. If you agree with all of them, is there any database out there that allows you to set up a system handling all these requirements?

> I hope you now understand my main problem/requirements/wishes for a
> greylist server.

I think I'm beginning to understand what you want, but I don't know yet how you want to solve the problems described above with MySQL, for example. I don't think adding failover to SQLgrey will do much good if there's no (easy) way to configure the database servers to respect the requirements of a failover environment. Please remember that you can already set up a failover environment with SQLgrey if you set up an IP take-over process triggered when you detect the RW database going down.

> if still not, it may be because of my bad English :-(

English is not my mother tongue either, don't worry :-)

Lionel.
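[The "scan the list until one accepts writes" idea raised above can be sketched like this. Hypothetical code: `probe_write` stands for a helper that attempts a trivial write, e.g. an INSERT/DELETE on a heartbeat table. The sketch also shows why the pool is error prone: it blindly trusts that at most one server accepts writes.]

```python
def find_rw_server(servers, probe_write):
    """Return the first server that accepts a test write, or None.
    Assumes one and only one pool member is read-write; if an operator
    forgets to make a recovered master read-only, two servers pass the
    probe and different SQLgrey instances may write to different
    masters (the split-brain case described above)."""
    for server in servers:
        try:
            if probe_write(server):
                return server
        except Exception:
            continue  # unreachable server: try the next one
    return None  # no RW server left: fail open, no greylisting
```

The cost is also visible: each dead server ahead of the RW one adds a probe timeout to every reconnection attempt.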
From: Farkas L. <lf...@bp...> - 2004-12-15 12:04:44
Lionel Bouton wrote:
> In the case of one sqlgrey instance on each mx, even if you don't have
> *any* slave, as I explained you won't have any single point of failure
> (you won't have any greylisting when the database goes down, though).

You're right, and I'd like to avoid both: a failure, and postfix running without greylisting :-)

>> [...] - that's why there is slave ldap servers,
>
> The devil is in the details. If I understand correctly, you want SQLgrey
> instances to know a slave or list of slaves and fall back automatically
> to the slave or one of the slaves if the master doesn't answer.
> What I don't know is how you will solve the following problems related
> to the writes done to the database by SQLgrey:
> - one SQLgrey can't access the master due to a temporary link problem,
> switches to the slave and tries to update its database content, although
> the slave won't authorize it (you can't allow writes to a slave or
> you'll end up with consistency problems or PRIMARY KEY collisions, as I
> explained in a precedent message); should it scan the list of servers
> until it finds the one accepting writes?
> - in case of multiple slaves, how do you make every SQLgrey instance
> aware of the one among them becoming the new master?

First of all, I'd be happy with one master and one slave. Second, I don't know the right solution, only what I would like to see; I've talked about it with the sysadmins of a few other bigger sites and collected the problems. Anyway:
- my first assumption is that if one of the MXs can't reach an SQL server, then none of them can. Anything else would be a really strange situation, and in that case we can accept no greylisting, just dunno.
- try to use replication between the SQL servers.
- allow writes to the slave too, and when the master comes back up, replicate the data back as well.
- in my case there is actually no master and slave, just two SQL servers with the same database (or almost the same; there are certain points where they sync), and there is always one (the first) which is used read-write by all greylist servers.

IMHO the greylist database is not so complicated. It's easy to recognize which records have to be replicated: only old/expired records have to be deleted, the last updated record is always the latest, and every record has a timestamp (since that's the main purpose of the database), so it's easy to know which one was updated last.

> Here are simple questions to make sure we speak of the same things. Do
> you agree with the following statements?
> - one and only one sql server should accept writes from every SQLgrey
> instance. Let's call it the RW server (read-write).

No. Both should, but all greylist servers write to only one of them at any given time.

> - all other sql servers replicate the content of the server above in a
> timely manner (using MySQL replication process for example). Let's call
> them the RO servers (read-only).

Partly. It can be done with MySQL replication, or, as I wrote above, since it's easy to recognize from the database what has to be replicated, it could be done by the greylist server itself in a scheduled way (every minute or every 5 minutes, update the SQL server which is not the current RW server). E.g. update the current database plus another table called "non-replicated", flush that data to the other server every 1, 5, or 10 minutes, and do a full replication every hour. I don't know which is better and/or easier.

> - when the RW server fails,
> . either one of the slaves and *only* one switches to the RW status,
> making sure all other RO servers are now synchronising with it (this is
> not a simple process; I don't know how MySQL replication handles the
> fact that RO servers will not necessarily have the same content when the
> RW fails),
> . or none of them switches to RW, and SQLgrey then only has RO
> databases, so it cannot store new connection attempts in the database:
> it can't greylist anymore, so it *must* let every message pass (-> the
> slaves won't be used at all).

See above.

> - when a failed server comes back online it must do so in a RO state if
> there is a RW server already; if not, it can become the RW server, but
> it should have the most recent data available in its database when doing
> so (or your auto-whitelisting efficiency will suffer and you can
> consider reconnections as brand new connections).

Partly. I assume there is always one SQL server which accepts reads and writes; we can call this the master. And even if the previous master comes back, it just becomes an SQL server which periodically updates its database (through MySQL replication or the greylist service). Here I'm always talking about SQL servers, not greylist servers: suppose there is one greylist server on each MX and two SQL servers somewhere; one SQL server is up to date and the other is the replica. As I wrote, I assume that when an SQL server is gone, it's gone for everyone.

> If you don't agree with one of them, please explain why.

I hope this helps.

--
Levente                               "Si vis pacem para bellum!"
|
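The timestamp-based replication idea in the mail above ("the last updated one is the latest, and every record has a timestamp") can be illustrated like this. A minimal sketch using in-memory SQLite in place of two MySQL servers; the `connect` table layout (triplet plus a `last_seen` column) is an assumed simplification, not SQLgrey's actual schema.

```python
import sqlite3

SCHEMA = """CREATE TABLE connect (
    sender TEXT, recipient TEXT, src TEXT, last_seen INTEGER,
    PRIMARY KEY (sender, recipient, src))"""

def open_db():
    """One in-memory database stands in for one MySQL server."""
    db = sqlite3.connect(":memory:")
    db.execute(SCHEMA)
    return db

def record(db, sender, recipient, src, now):
    """Insert or refresh a greylist triplet with the given timestamp."""
    db.execute("INSERT OR REPLACE INTO connect VALUES (?, ?, ?, ?)",
               (sender, recipient, src, now))

def merge(src_db, dst_db):
    """Push every row whose timestamp is newer than the destination's
    copy -- the 'last updated one wins' rule from the mail."""
    rows = src_db.execute("SELECT * FROM connect").fetchall()
    for sender, recipient, src, ts in rows:
        cur = dst_db.execute(
            "SELECT last_seen FROM connect "
            "WHERE sender=? AND recipient=? AND src=?",
            (sender, recipient, src)).fetchone()
        if cur is None or ts > cur[0]:
            record(dst_db, sender, recipient, src, ts)
```

Running `merge` in both directions converges the two databases, which is the property the scheduled sync would rely on.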
From: Lionel B. <lio...@bo...> - 2004-12-15 13:33:29
|
Farkas Levente wrote the following on 12/15/04 13:04:

> - my first assumption if one of the mx can't access to one sql server,
> then none can do it. otherwise it's a real strange thing in case we
> can accept no greylisting just dunno.

This is the root of the problem: this assumption is incorrect. There are several cases where it can happen:
- a temporary network link failure (cable unplugged, hardware failing then resynchronising): a temporary split of your network,
- SQLgrey automatically reconnects after an error, so if you take the RW database down for a short time, some SQLgrey instances will need to access the database during that window and some won't. The former won't be able to reconnect to the database they were using, so they will look for another one. The latter *will* be able to reconnect to the database.

> - try to use replication between sql servers.

You have to be more precise on this: there are very different implementations of replication between databases, from a simple dump-to-file/reload up to an Oracle cluster. Each one comes with its advantages and limitations, and the one you use changes what the applications using the database pool can and cannot do with it.

> - allow write to the slave to and when the master wake up then
> replicate back the data too.

This won't work: your slave could be used at any moment by a SQLgrey instance which, for whatever reason, couldn't contact your master: you'll corrupt your data.

> - in my case actualy there is no master and slave just there is two
> sql server with the same database (or almost the same and there are
> certain point when they are syncing) and there is always one which is
> rw by all greylist server (first).
>
> imho the greylist database is not so complicated. it's easy to
> recognize which records should have to replicate. only old/expired
> record have to delete and always the last updated one is the latest
> and all record has timestemp (because that's the main purpose the
> database) so it's easy to know which is the last updated.
>
>> Here are simple questions to make sure we speak of the same things.
>> Do you agree with the following statements ?
>> - one and only one sql server should accept writes from every SQLgrey
>> instances. Let's call it the RW server (read-write).
>
> no. both, but all greylist server rw one of them at the same time.

Won't work, as explained above. You can't be sure that one SQLgrey instance won't fail to contact the database you chose as master while the others succeed. There's no point discussing the rest until you understand this.

Reliable database failover is *hard*; please take the time to understand these hard facts:
- there's no affordable database system on the market *yet* that allows multiple replicated read/write databases (only commercial databases in the hundreds-of-thousands-of-euros/dollars range allow this, and even they have limitations); you can only bet on master/slave schemes,
- when using master/slave schemes you *can't* write directly to the slaves: you must use one and only one database in read/write mode for *every* SQL client accessing the database pool,
- you cannot prevent the case where one instance among a pool of SQL clients, and only that one, can't contact the "master" server.

Seriously, what do you find wrong with an IP takeover solution? Reminder: slave replication is in place, the master fails, admin scripts detect the failure, take the IP down on the master's interface and bring the same IP up on the slave, switching it to read-write mode if needed (depending on how the replication works, it might or might not be necessary to put the slave database in read-only mode).
This is easily workable as it ensures you can't access two databases at the same time, and SQLgrey will make the IP takeover transparent, as it automatically reconnects to the server replacing the failing one.

Best regards,

Lionel.
|
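The "transparent takeover" point above relies entirely on automatic reconnection after an error. A rough sketch of that behaviour, under stated assumptions: the `connect` callable is a placeholder for a real DBI-style connect to the shared service IP, not SQLgrey's actual reconnect code.

```python
# Sketch of a client surviving an IP takeover: after a failed query it
# simply reconnects to the same service address, which by then resolves
# to the promoted slave. All names here are illustrative.

class DbError(Exception):
    """Stand-in for a database connection/query error."""

def query_with_reconnect(connect, sql, retries=1):
    """Run a query; on failure, reconnect (to the taken-over IP) and retry."""
    last_err = None
    for _ in range(retries + 1):
        try:
            conn = connect()   # re-resolves the service IP on every attempt
            return conn(sql)
        except DbError as err:
            last_err = err     # master just failed or IP is moving: retry
    raise last_err
```

The client never needs a server list: the takeover scripts move the address, and the retry loop does the rest.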
From: Farkas L. <lf...@bp...> - 2004-12-15 13:42:48
|
Replying to my own mail :-(

Farkas Levente wrote:
> imho the greylist database is not so complicated. it's easy to recognize
> which records should have to replicate. only old/expired record have to
> delete and always the last updated one is the latest and all record has
> timestemp (because that's the main purpose the database) so it's easy to
> know which is the last updated.

Just another tip; maybe there are better ideas. One solution is to create a connection to all SQL servers (both, if there are only two). When you look up a triplet, you can do it on all SQL servers and use the earliest entry, i.e. the one with the smallest value. Update only one SQL server (probably the first; that way everybody updates the same server in most cases). In the cleanup step (or at another scheduled time, e.g. hourly) you can merge/replicate the SQL servers before the delete (updating all SQL servers to the latest value of each triplet). So you don't need MySQL replication; you can do the whole thing in the sqlgrey server.

--
Levente                               "Si vis pacem para bellum!"
|
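The scheme in this last mail (query every server, keep the triplet entry with the smallest timestamp, write to one server only, merge before cleanup) could look roughly like this. Plain dicts stand in for database connections; this is an illustration of the idea, not SQLgrey code.

```python
# Sketch of the multi-server lookup/merge scheme proposed above.
# Each "server" is a dict mapping a triplet to its first-seen timestamp.

def lookup(servers, triplet):
    """Return the earliest timestamp recorded for a triplet on any server,
    or None if no server has seen it yet."""
    hits = [s[triplet] for s in servers if triplet in s]
    return min(hits) if hits else None

def record_attempt(servers, triplet, now):
    """Write the attempt to the first server only; merging happens later."""
    servers[0].setdefault(triplet, now)

def merge_all(servers):
    """Hourly-style full merge: propagate each triplet's earliest
    timestamp to every server (done before expiring old records)."""
    union = {}
    for s in servers:
        for t, ts in s.items():
            union[t] = min(ts, union.get(t, ts))
    for s in servers:
        s.update(union)
```

Because lookups always take the minimum across servers, a triplet recorded on only one server still greylists correctly until the next merge.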