From: Alex <mys...@gm...> - 2014-06-30 19:20:02
Hi,

> You could use hapolicy instead (http://postfwd.org/hapolicy/index.html)
> and run multiple instances of sqlgrey on multiple machines.

If it wasn't already clear, I am running an instance of sqlgrey on each
machine, all of which talk to one master database, the one that happened
to go down this morning. As a result, none of them could apparently talk
to their own sqlgrey service, and they simply started rejecting mail.

> I am not sure whether I completely understand your setup: you have a
> three-node cluster with MySQL master-master replication?

I'm a MySQL novice, but I believe it's just a master-slave setup. They
should all have their own copies of the complete greylist.

> We have successfully deployed sqlgrey with a MySQL master-slave
> configuration, where reads were performed on the slave nodes, while SQL
> writes were done on the master node. After a while, we ditched sqlgrey
> in favour of postfwd2 and hapolicy...

Did you ditch it for this reason? That sounds like how I have it set up
here. Is it not possible to create a fault-tolerant sqlgrey system on its
own? Would you be able to send your postfwd2 and hapolicy configs as a
reference to get started?

I also realized I made a typo in the configuration file I posted here,
which doesn't exist on my production system. Here are the relevant bits;
this version has db_host set properly, in case that matters for reference:

loglevel = 3
log_override = whitelist:1,grey:3,spam:2
reconnect_delay = 5
db_type = mysql
db_name = sqlgrey
db_host = mail01.example.com
db_port = default
db_user = sqlgrey
db_pass = mypass
db_cleanup_hostname = mail01.example.com
db_cleandelay = 1800
clean_method = sync
db_cluster = on
read_hosts = localhost,mail01.example.com,mail02.example.com,mail03.example.com
prepend = 1
admin_mail = my...@me...

Thanks,
Alex
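[Editor's note: for anyone debugging a node that has started rejecting mail, the Postfix policy-delegation protocol that sqlgrey speaks is simple enough to probe by hand. Below is a minimal sketch, not sqlgrey's own code; the host, port (2501 is sqlgrey's usual default), and test addresses are assumptions to adjust for your setup.]

```python
import socket


def query_policy(host, port, sender, recipient, client_address, timeout=5.0):
    """Send one request in Postfix's policy-delegation protocol
    (attribute=value lines, terminated by an empty line) and return
    the daemon's verdict from its 'action=' response line."""
    request = (
        "request=smtpd_access_policy\n"
        "protocol_state=RCPT\n"
        "protocol_name=SMTP\n"
        f"sender={sender}\n"
        f"recipient={recipient}\n"
        f"client_address={client_address}\n"
        "\n"  # empty line terminates the request
    )
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request.encode("ascii"))
        data = b""
        # the reply is also terminated by an empty line
        while not data.endswith(b"\n\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    for line in data.decode("ascii", "replace").splitlines():
        if line.startswith("action="):
            return line[len("action="):]
    return None
```

Running something like `query_policy("mail01.example.com", 2501, "test@example.org", "postmaster@example.com", "192.0.2.1")` against each node should return a verdict such as `dunno` or `defer_if_permit ...`; a connection error or timeout would point at the node whose sqlgrey service is unreachable.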