From: Koichi S. <koi...@gm...> - 2013-10-15 02:58:13
Sorry I did not respond for a while. Please take a look at my comments inline.

Regards;
---
Koichi Suzuki

2013/10/8 Yehezkel Horowitz <hor...@ch...>

> >> My goal - I have an application that needs an SQL DB and must always
> >> be up (I have a backup machine for this purpose).
>
> > Have you thought about PostgreSQL itself for your solution? Is there
> > any reason you'd need XC? Do you have an amount of data that forces
> > you to use a multi-master architecture, or could PG itself handle it?
>
> I need multi-master capability, as clients might connect to both
> machines at the same time. Yes - my tables will be replicated.
>
> > Yep, this is doable. If all your data is replicated you will be able
> > to do that. However, keep in mind that you will not be able to write
> > new data to node B if node A is not accessible. If your data is
> > replicated and you need to update a table, both nodes need to work.
>
> This is a surprise for me; it wasn't clear in the documentation I read,
> nor in the PG-XC presentations I found on the internet.
> Isn't this point one of the conditions for high availability of a DB -
> allowing work to continue even if one of the machines fails?

Postgres-XC assumes that any table may be replicated or distributed, so
XC does not provide an operation interface that assumes all tables are
replicated. It always assumes some tables could be distributed and some
replicated. On the other hand, Postgres-XC's most important feature is
maintaining cluster-wide data integrity. XC's replication is not for HA,
but for scalability: it proxies as many statements as possible to a
local datanode to increase parallelism. So, when you issue a DML
statement against a replicated table, Postgres-XC tries to propagate it
to all the nodes the table is defined on. If any of those nodes is not
available, Postgres-XC determines that it cannot maintain cluster-wide
data integrity. We provide a couple of means to deal with this.

1. Use ALTER TABLE to change the table's replication. You can delete
   any node.
   Because this change must also propagate to all the other nodes for
   cluster-wide data integrity, all the datanodes have to be working.

2. Configure a slave for each master. When one of the masters fails, it
   can be failed over to its slave. Typically, you can host each
   datanode's slave on another datanode's server. After a failover
   occurs (you may want to integrate with an automatic failover system
   such as Pacemaker with Corosync/Heartbeat), and once you decide the
   failed node is no longer needed, you can issue ALTER TABLE to delete
   the failed node from your cluster, issue DROP NODE as well, and then
   stop the slave and release its resources.

> > Or if you want B to still be writable, you could update the node
> > information inside it, make it workable alone, and when server A is
> > up again recreate a new XC node from scratch and add it again to the
> > cluster.
>
> What is the correct procedure for doing that? Are there pgxc_ctl
> commands for doing that?

Hope the above helps.

> >> My questions:
> >>
> >> 1. In your docs, you always put the GTM on a dedicated machine.
> >>    a. Is this a requirement, just an easy-to-understand topology,
> >>       or best practice?
>
> > GTM consumes a certain amount of CPU and does not need much RAM,
> > while for your nodes you might prioritize the opposite.
>
> >>    b. In case of best practice, what is the expected penalty if
> >>       the GTM is deployed on the same machine as a coordinator and
> >>       datanode?
>
> > CPU resource consumption and reduced performance if your queries
> > need some CPU, for example for internal sort operations, among
> > other things.
>
> O.K., got it. For now I'm trying to make it work; afterwards I'll take
> care of making it work faster.
>
> >> 2. What should I do after machine A is back to life if I want to:
> >>    a. Make it act as a new slave?
> >>    b. Make it become the master again?
>
> > There is no principle of master/slave in XC like in Postgres (well,
> > you could create a slave node for an individual
> > Coordinator/Datanode).
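As a sketch of the cleanup described in point 2 above (table and node names here are hypothetical, and the exact syntax should be checked against the Postgres-XC reference for your version):

```sql
-- On a coordinator, after datanode_2 has failed and been failed over:
-- remove the failed node from each replicated table's node list.
ALTER TABLE my_replicated_table DELETE NODE (datanode_2);

-- Then remove the node definition itself from the cluster.
DROP NODE datanode_2;

-- Make the coordinator's connection pool pick up the new node list.
SELECT pgxc_pool_reload();
```

The ALTER TABLE step has to be repeated for every table defined on the failed node before the node itself can be dropped.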
> > But basically in your configuration machines A and B have the same
> > state. Only GTM is a slave.
>
> Sorry, I meant in the context of GTM - how should I make machine A a
> new GTM slave, or make it the GTM master again?

You need to configure gtm_proxy for this purpose. Gtm_ctl provides a
failover option for a GTM slave to become the new GTM master. It also
provides a reconnect option for gtm_proxy to connect to the new GTM
master. Pgxc_ctl provides corresponding commands for these. Please take
a look at http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html

> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
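For reference, the GTM failover and reconnect steps mentioned above might look roughly like the following with gtm_ctl (the data directories, host name, and port are made-up examples; pgxc_ctl wraps these as commands such as failover - check the linked manual for the exact form):

```shell
# On the GTM slave's server: promote the slave to be the new GTM master
# (data directory is hypothetical).
gtm_ctl promote -Z gtm -D /var/lib/pgxc/gtm

# On each gtm_proxy server: point the proxy at the new GTM master.
# -s and -t pass the new master's host and port (example values).
gtm_ctl reconnect -Z gtm_proxy -D /var/lib/pgxc/gtm_proxy \
    -o "-s new-gtm-host -t 6666"
```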