From: Vladimir S. <vst...@gm...> - 2012-10-26 11:45:45
On Thu, Oct 25, 2012 at 1:40 AM, Paulo Pires <pj...@ub...> wrote:

> Summing, I've found Postgres-XC to be quite easy to install and
> configure in a 3 coordinators + 3 data-nodes (GTM all over them and
> GTM-Proxy handling HA). A little Google and command-line did the trick
> in *a couple hours*!

On Debian you can install the package in a few seconds.

> Now, the only downside for me is that Postgres-XC doesn't have a
> built-in way of load-balancing between coordinators. If the coordinator

That is not a problem. The real problem is the need to have a standby
for every data node.

> 1) Define a DNS FQDN like coordinator.mydomain pointing to an IP
> (i.e., 10.0.0.1)
> 2) Point my app to work with that FQDN
> 3) On every coordinator, configure keepalived with one shared-IP
> (10.0.0.1)
> 4) Install haproxy in every coordinator and have it load-balance with
> the other coordinators

First, haproxy is redundant here - keepalived can do everything itself,
and better. Second, putting it on any XC node is a bad idea. In any
case, I prefer a full cluster solution with corosync/pacemaker. That
way we can put not only the database under a single cluster's control,
but all the other parts of the system as well, i.e. web servers and
application servers. But be aware: with this setup we have HA only for
the load balancer, not for the data nodes themselves.

> My only doubt is, if you get a data-node offline and then bring it up,
> will the data in that data-node be synchronized?

My congratulations - you have arrived at the very point we have been
discussing for a long time in a neighbouring thread. If the data on
that node has no replica on other nodes, it is no longer available,
and your application does not know which data is available and which
is not. You can easily imagine the consequences. That is the moment
the downtime starts. That is what we get without HA, and that is why
you must have a standby for every data node. In other words, you have
to build extra infrastructure the size of the entire cluster.
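
To sketch what I mean about keepalived doing both jobs: the shared IP
(VRRP) and the load balancing (IPVS virtual_server) can live in one
keepalived configuration, with no haproxy. The interface name, router
id, and coordinator addresses (10.0.0.11-13) below are made-up
placeholders for illustration - only the 10.0.0.1 shared IP comes from
the thread, so adjust everything to your own setup:

```
# Sketch of /etc/keepalived/keepalived.conf on each coordinator host.

# VRRP instance: the node that wins the election holds 10.0.0.1.
vrrp_instance VI_1 {
    state BACKUP
    interface eth0          # placeholder NIC name
    virtual_router_id 51    # must match on all coordinators
    priority 100            # give each node a different priority
    advert_int 1
    virtual_ipaddress {
        10.0.0.1
    }
}

# IPVS virtual server: round-robin TCP balancing to the coordinators,
# dropping any real_server whose port 5432 stops answering.
virtual_server 10.0.0.1 5432 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 10.0.0.11 5432 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.12 5432 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.13 5432 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```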
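
And if you would rather keep the balancing in the application instead
of the network layer, round-robin with failover across coordinators is
only a few lines. This is a generic sketch of my own - the
`connect_with_failover` name is invented, and `connect` stands for
whatever your driver provides (e.g. a thin wrapper around
`psycopg2.connect`):

```python
import itertools


def connect_with_failover(coordinators, connect, attempts=None):
    """Try coordinators in round-robin order until one accepts a connection.

    `coordinators` is a list of (host, port) tuples; `connect` is any
    callable that returns a connection or raises OSError on failure.
    """
    if attempts is None:
        attempts = len(coordinators)
    last_error = None
    # cycle() restarts from the first coordinator if we allow more
    # attempts than there are nodes; islice() caps the total tries.
    for host, port in itertools.islice(itertools.cycle(coordinators), attempts):
        try:
            return connect(host, port)
        except OSError as exc:
            last_error = exc  # remember why the last node failed
    raise ConnectionError("no coordinator reachable") from last_error
```

Note this only gives the application a working coordinator; as said
above, it does nothing for the data nodes themselves.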