From: Koichi S. <koi...@gm...> - 2013-03-24 03:45:29
2013/3/21 Bei Xu <be...@ad...>:
> Hi, Koichi:
>   Based on your reply,
>
> Since slave is a copy of master, the slave has the same GTM_proxy listed
> in postgresql.conf as the master, it will connect to server3's proxy AFTER
> SLAVE IS STARTED,
> And we will only change the slave's proxy to server 4 AFTER promotion,
> correct?
>
> Thus, looks like SLAVE needs to connect to A PROXY at ALL TIME: before
> promotion is server3's proxy, after promotion is server 4's proxy.

No, the master doesn't go to the proxy to connect. Instead, the proxy goes
to the master. GTM is a server and the proxy is a client. So, just as
applications have to reconnect to the new PostgreSQL master when it fails
over, the GTM proxy should reconnect to the new GTM master.

> Please take a look at the following 2 scenarios:
>
> Scenario 1: If slave was configured with server4's proxy AFTER SLAVE IS
> STARTED, upon server 3 failure, we will do:
> 1) promote on slave
> Since slave is already connected to server 4's proxy, we don't have to do
> anything here.

The GTM proxy should not be connected to the slave until the slave fails
over; connecting it earlier will make the whole cluster's transaction
status inconsistent. Please take a look at the bash version of pgxc_ctl,
which describes how the gtm master can be handled.

> Scenario 2: If slave was configured with server3's proxy AFTER SLAVE IS
> STARTED, upon server 3 failure, we will do:
> 1) restart slave to change proxy from server3's proxy value to server4's
> proxy value
> 2) promote on slave
>
> Obviously, scenario 1 has fewer steps and is simpler; scenario 2 is
> suggested by you. Is there any reason you suggested scenario 2?
>
> My concern is, if a slave is connected to any active proxy (the proxy is
> started and pointing to the GTM), will the transaction be applied TWICE?
> Once from the proxy, once from the master?

The proxy is just a proxy. When a transaction starts and a GXID is given,
and then the master fails and the slave takes over, the slave carries over
that GXID status. When you are finished, because the coordinators continue
to connect to the same gtm proxy, the coordinator will report that the
transaction is finished. It is transparent.

When a coordinator fails, the transaction fails too. In this case, the
coordinator is failed over by its slave. The slave should reconnect to its
local gtm proxy. Of course, if the old gtm proxy is still running, the
failed-over coordinator can connect to its original (remote) gtm proxy.
However, that wastes network traffic, so it is highly recommended to
connect to the local one. Also, in this case, all the other coordinators
should be notified that it is now at a different access point, if you
don't use a VIP to carry over the IP address. This notification can be
done with the ALTER NODE statement.

Datanode failover can be handled similarly.

Please take a look at the scripts in pgxc_ctl (bash version), which comes
with all of such steps.
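To make that concrete, the manual sequence looks roughly like the sketch
below. The host names, ports, data directories and the node name coord1
are only examples for the layout you described, and I am writing the
gtm_ctl / pg_ctl / ALTER NODE calls from memory, so please treat this as
an outline and check pgxc_ctl and the reference manual for the exact
options:

  # GTM failover (server1 dies): promote the standby on server2, then tell
  # each running gtm_proxy to reconnect to the new GTM master.  The
  # coordinators/datanodes behind the proxies keep using the same proxy
  # and are not touched.
  ssh server2 "gtm_ctl promote -Z gtm -D /data/gtm_standby"
  ssh server3 "gtm_ctl reconnect -Z gtm_proxy -D /data/gtm_proxy -o '-s server2 -t 6666'"
  ssh server4 "gtm_ctl reconnect -Z gtm_proxy -D /data/gtm_proxy -o '-s server2 -t 6666'"

  # Coordinator failover (server3 dies): promote the streaming replica on
  # server4, then make sure gtm_host/gtm_port in its postgresql.conf point
  # at the local gtm_proxy on server4 (reconfigure and restart if the copy
  # still names server3's proxy).
  ssh server4 "pg_ctl promote -D /data/coord1_slave"

  # No VIP here, so notify the surviving nodes of the new access point and
  # reload the pooler; run this against every coordinator that needs the
  # new address (in your layout only the promoted one is left).
  psql -h server4 -p 5432 -d postgres -c "ALTER NODE coord1 WITH (HOST = 'server4', PORT = 5432);"
  psql -h server4 -p 5432 -d postgres -c "SELECT pgxc_pool_reload();"

The bash version of pgxc_ctl wraps essentially these steps, plus the checks
around them, so it is a good starting point even if you script this
yourself.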
Best;
---
Koichi Suzuki


>
>
> On 3/21/13 12:40 AM, "Koichi Suzuki" <koi...@gm...> wrote:
>
>>Only after promotion. Before promotion, they will not be connected
>>to gtm_proxy.
>>
>>Regards;
>>----------
>>Koichi Suzuki
>>
>>
>>2013/3/21 Bei Xu <be...@ad...>:
>>> Hi Koichi:
>>>   Thanks for the reply. I still have doubts for item 1. If we set up a
>>> proxy on server 4, do we reconfigure server 4's coordinator/datanodes to
>>> point to server 4's proxy at ALL TIME (after replication is set up, I can
>>> change gtm_host to point to server4's proxy before I bring up slaves) or
>>> only AFTER promotion?
>>>
>>>
>>> On 3/20/13 11:08 PM, "Koichi Suzuki" <koi...@gm...> wrote:
>>>
>>>>1. It's better to have a gtm proxy at server 4 when you fail over to this
>>>>server. We need the gtm proxy now to fail over GTM while
>>>>coordinators/datanodes are running. When you simply make a copy of a
>>>>coordinator/datanode with pg_basebackup and promote it, it will
>>>>try to connect to the gtm_proxy at server3. You need to reconfigure it
>>>>to connect to the gtm_proxy at server4.
>>>>
>>>>2. The only risk is that the recovery point could be different from
>>>>component to component; I mean, some transaction may be committed at
>>>>some node but aborted at another because there could be some
>>>>difference in the available WAL records. It may be possible to improve
>>>>the core to handle this to some extent, but please understand there will
>>>>be some corner cases, especially if DDL is involved. This chance could
>>>>be small and you may be able to correct it manually, or it can be
>>>>allowed in some applications.
>>>>
>>>>Regards;
>>>>----------
>>>>Koichi Suzuki
>>>>
>>>>
>>>>2013/3/21 Bei Xu <be...@ad...>:
>>>>> Hi, I want to set up HA for pgxc, please see below for my current
>>>>> setup.
>>>>>
>>>>> server1: 1 GTM
>>>>> server2: 1 GTM_Standby
>>>>> server3 (master): 1 proxy
>>>>>                   1 coordinator
>>>>>                   2 datanodes
>>>>>
>>>>> Server4 (stream replication slave): 1 standalone proxy ??
>>>>>                                     1 replicated coordinator (slave of
>>>>>                                       server3's coordinator)
>>>>>                                     2 replicated datanodes (slaves of
>>>>>                                       server3's datanodes)
>>>>>
>>>>>
>>>>> server3's coordinator and datanodes are the masters of server4's
>>>>> coordinator/datanodes by stream replication.
>>>>>
>>>>> Questions:
>>>>> 1. Should there be a proxy on server 4? If not, which proxy should the
>>>>> server4 coordinator and datanodes point to? (I have to specify the
>>>>> gtm_host in postgresql.conf)
>>>>> 2. Do I have to use synchronous replication vs asynchronous replication?
>>>>> I am currently using asynchronous replication because I think if I use
>>>>> synchronous, slave failure will affect the master.
>>>>>
>>>>> ------------------------------------------------------------------------------
>>>>> Everyone hates slow websites. So do we.
>>>>> Make your web apps faster with AppDynamics
>>>>> Download AppDynamics Lite for free today:
>>>>> http://p.sf.net/sfu/appdyn_d2d_mar
>>>>> _______________________________________________
>>>>> Postgres-xc-developers mailing list
>>>>> Pos...@li...
>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
>>>>>
>>>>
>>>
>>>
>>
>
>