From: Joseph G. <jos...@or...> - 2012-10-26 14:04:13
On 26 October 2012 23:06, Nikhil Sontakke <ni...@st...> wrote:
>>> But where is war? It is simply question. With low priority You have
>>> neither knowledge nor HA itself. But if every XC accompanied with HA
>>> then it is high priority. And question is what is true here?
>>
> Vladimir, I guess you are getting the impression that PGXC has
> de-emphasized HA; that's certainly not the case.
>
> For a distributed database, the HA aspects are really important. As you
> have mentioned elsewhere, there needs to be a solution in place with
> something like Corosync/Pacemaker, and it's been looked into.
>
> Regards,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service
>
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general

For those interested, I have been playing with something similar (you can probably see my previous discussion on the list). I have been building a prototype using external scripting that allows PG-XC to use the built-in streaming replication to provide HA for the datanodes. This has great HA properties but currently can't distribute read queries to the slaves nicely. I have been evaluating how to do this, but after looking at the GTM etc. I have decided it's beyond my limited knowledge of PG/PG-XC for now.

The basic setup uses pgbouncer in front of PG-XC on a virtual IP, so the path a query takes looks something like this:

virtual-ip -> pgbouncer primary -> coordinators -> virtual-ip -> datanode master

The virtual IP in front of the datanode pair fails over automatically, and repmgr then instructs the slave to become writeable.
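For readers who want to try a similar layout, a minimal sketch of the pgbouncer side could look like the config below. All host addresses, the database name, and file paths here are hypothetical illustrations of the "virtual-ip -> pgbouncer -> coordinator" hop, not values from Joseph's actual setup:

```ini
; pgbouncer.ini on the pgbouncer primary (mirrored on the secondary,
; which takes over the client-facing virtual IP on failure)
[databases]
; clients hit pgbouncer; it forwards to the PG-XC coordinator virtual IP
mydb = host=10.0.0.10 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 10.0.0.5        ; client-facing virtual IP (hypothetical)
listen_port = 6432
pool_mode = session           ; session pooling is the safest default here
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

Because clients only ever see the virtual IP on the pgbouncer pair, a failover of either the pgbouncer primary or a datanode master shows up to applications as a dropped connection they can simply retry, which matches the "slight service disruption" described below.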
There is also a secondary pgbouncer server that fails over automatically too; this allows clients to simply reconnect if anything bad happens. This causes a very slight service disruption but overall is pretty OK, considering that for anything to happen a physical server has to fail.

Ideally I would like to integrate the failover detection and management into the coordinator cluster, along with being able to service read queries from my datanode slaves. However, I am quite happy with this setup and am able to scale write capacity with ease with a fully HA setup (minus a disconnect when something bad happens, which is OK).

Joseph.

--
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846