From: Koichi S. <koi...@gm...> - 2013-03-21 07:40:42
Only after promotion. Before promotion, they will not be connected to the
gtm_proxy.

Regards;
----------
Koichi Suzuki

2013/3/21 Bei Xu <be...@ad...>:
> Hi Koichi:
>   Thanks for the reply. I still have doubts about item 1. If we set up a
> proxy on server 4, do we reconfigure server 4's coordinator/datanodes to
> point to server 4's proxy at ALL TIMES (after replication is set up, I can
> change gtm_host to point to server 4's proxy before I bring up the
> slaves), or only AFTER promotion?
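A minimal sketch of the post-promotion repointing Koichi describes, assuming
hypothetical data directories and a gtm_proxy on server4 listening on port
6666 (gtm_host and gtm_port are the standard settings in each coordinator's
and datanode's postgresql.conf):

    # Run on server4 only after the nodes have been promoted.
    for PGDATA in /data/coord1 /data/dn1 /data/dn2; do
        # Repoint the node at server4's local GTM proxy instead of server3's.
        echo "gtm_host = 'server4'" >> "$PGDATA/postgresql.conf"
        echo "gtm_port = 6666"      >> "$PGDATA/postgresql.conf"
        pg_ctl restart -D "$PGDATA"   # pick up the new GTM connection settings
    done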
From: Ashutosh B. <ash...@en...> - 2013-03-21 06:43:26
On Thu, Mar 21, 2013 at 10:35 AM, Bei Xu <be...@ad...> wrote:
> Ashutosh:
>   Thanks for the suggestion. We only have a limited 6 servers allocated:
> 3 servers are masters, 3 servers are slaves (stream replication).
> If we set up 1 datanode per server, then we only have 3 active datanodes
> in total. That's why we set up 2 datanodes per server, in order to have 6
> active datanodes.
>   Do you think 3 active datanodes on 3 servers perform better than 6
> active datanodes on 3 servers?

3 active datanodes on 3 separate servers are expected to do better than 6
active datanodes on 3 servers. But in rare cases (if they balance CPU and IO
between the two datanodes on the same server), 6 datanodes on 3 servers
might be as good as the other configuration. Please check whether the latter
is the case for you, but that would be rare, I guess.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Nikhil S. <ni...@st...> - 2013-03-21 06:38:45
Hi Bei,

> server1: 1 GTM
> server2: 1 GTM_Standby
> server3 (master): 1 proxy
>                   1 coordinator
>                   2 datanodes
>
> server4 (stream replication slave): 1 standalone proxy ??
>                   1 replicated coordinator (slave of server3's coordinator)
>                   2 replicated datanodes (slaves of server3's datanodes)
>
> server3's coordinator and datanodes are the masters of server4's
> coordinator/datanodes by stream replication.

IMO, a better config would be to have datanode1 running on server3 and
datanode2 running on server4. Their replicas should then go to server4 and
server3 respectively.

> 2. Do I have to use synchronous replication vs. asynchronous replication?
> I am currently using asynchronous replication because I think if I use
> synchronous, slave failure will affect the master.

Or consider having two synchronous replicas configured. Also, the replicas
need not be hot-standby replicas.

Regards,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
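A hedged sketch of what Nikhil's suggestion could look like on a 9.1-era
master (the standby names are hypothetical; note that in PostgreSQL of that
vintage only the first connected standby listed in
synchronous_standby_names is fully synchronous, and the rest are fallback
candidates):

    # On each master coordinator/datanode on server3.
    echo "synchronous_commit = on"                            >> "$PGDATA/postgresql.conf"
    echo "synchronous_standby_names = 'dn1_sby_a, dn1_sby_b'" >> "$PGDATA/postgresql.conf"
    pg_ctl reload -D "$PGDATA"    # both settings are reloadable, no restart needed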
From: Bei Xu <be...@ad...> - 2013-03-21 06:35:38
Ashutosh:
  Thanks for the suggestion. We only have a limited 6 servers allocated:
3 servers are masters, 3 servers are slaves (stream replication).
If we set up 1 datanode per server, then we only have 3 active datanodes in
total. That's why we set up 2 datanodes per server, in order to have 6
active datanodes.
  Do you think 3 active datanodes on 3 servers perform better than 6 active
datanodes on 3 servers?

From: Ashutosh Bapat <ash...@en...>
Date: Wednesday, March 20, 2013 11:25 PM
To: Xu Bei <be...@ad...>
Cc: pos...@li..., Karthik Sethupathy <kse...@ad...>,
    Venky Kandaswamy <ve...@ad...>
Subject: Re: [Postgres-xc-developers] proxy setup on standby server

Hi Bei,
Suzuki-san has replied to your questions. I have a different suggestion.

You may want to use separate servers for the two datanodes; that way
performance improves because the CPU and IO loads are divided.

On Wed, Mar 20, 2013 at 9:55 PM, Bei Xu <be...@ad...> wrote:
> Hi, I want to set up HA for pgxc; please see below for my current setup.
> ...

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Bei Xu <be...@ad...> - 2013-03-21 06:31:09
Hi Koichi:
  Thanks for the reply. I still have doubts about item 1. If we set up a
proxy on server 4, do we reconfigure server 4's coordinator/datanodes to
point to server 4's proxy at ALL TIMES (after replication is set up, I can
change gtm_host to point to server 4's proxy before I bring up the slaves),
or only AFTER promotion?

On 3/20/13 11:08 PM, "Koichi Suzuki" <koi...@gm...> wrote:

> 1. It's better to have a gtm_proxy on server 4 when you fail over to this
> server. We need the gtm_proxy to fail over GTM while coordinators and
> datanodes are running. When you simply make a copy of a
> coordinator/datanode with pg_basebackup and promote it, it will try to
> connect to the gtm_proxy on server3. You need to reconfigure it to connect
> to the gtm_proxy on server4.
>
> 2. The only risk is that the recovery point could differ from component
> to component; I mean, some transaction may be committed on one node but
> aborted on another, because there could be some difference in the
> available WAL records. It may be possible to improve the core to handle
> this to some extent, but please understand there will be some corner
> cases, especially if DDL is involved. The chance of this is small, and
> you may be able to correct it manually, or it can be tolerated by some
> applications.
>
> Regards;
> ----------
> Koichi Suzuki
From: Ashutosh B. <ash...@en...> - 2013-03-21 06:25:38
Hi Bei,
Suzuki-san has replied to your questions. I have a different suggestion.

You may want to use separate servers for the two datanodes; that way
performance improves because the CPU and IO loads are divided.

On Wed, Mar 20, 2013 at 9:55 PM, Bei Xu <be...@ad...> wrote:
> Hi, I want to set up HA for pgxc; please see below for my current setup.
>
> server3 (master): 1 proxy
>                   1 coordinator
>                   2 datanodes
> ...

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Koichi S. <koi...@gm...> - 2013-03-21 06:18:47
GTM and GTM standby names can be the same. Others should be unique: GTM
will reject the connection, and you cannot issue CREATE NODE for a node
that shares its name with another.

Regards;
----------
Koichi Suzuki

2013/3/21 Bei Xu <be...@ad...>:
> Hi, All:
>   Does the "nodename" parameter have to be different for all the
> components in a pgxc cluster? For instance:
> gtm and gtm_standby
> datanode and datanode replica
> proxy names
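As a hedged illustration of the uniqueness rule (node names, hosts, and
ports below are hypothetical; CREATE NODE and pgxc_pool_reload() are the
standard Postgres-XC commands run from a coordinator):

    # Each node name must be unique within the cluster.
    psql -p 5432 -d postgres -c "CREATE NODE datanode_1 WITH (TYPE = 'datanode', HOST = 'server3', PORT = 15432);"
    psql -p 5432 -d postgres -c "CREATE NODE datanode_2 WITH (TYPE = 'datanode', HOST = 'server3', PORT = 15433);"
    # Reusing an existing name would be rejected:
    # psql -p 5432 -d postgres -c "CREATE NODE datanode_1 WITH (TYPE = 'datanode', HOST = 'server4', PORT = 15432);"
    psql -p 5432 -d postgres -c "SELECT pgxc_pool_reload();"   # pooler picks up node changes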
From: Koichi S. <koi...@gm...> - 2013-03-21 06:08:54
1. It's better to have a gtm_proxy on server 4 when you fail over to this
server. We need the gtm_proxy to fail over GTM while coordinators and
datanodes are running. When you simply make a copy of a coordinator/datanode
with pg_basebackup and promote it, it will try to connect to the gtm_proxy
on server3. You need to reconfigure it to connect to the gtm_proxy on
server4.

2. The only risk is that the recovery point could differ from component to
component; I mean, some transaction may be committed on one node but aborted
on another, because there could be some difference in the available WAL
records. It may be possible to improve the core to handle this to some
extent, but please understand there will be some corner cases, especially if
DDL is involved. The chance of this is small, and you may be able to correct
it manually, or it can be tolerated by some applications.

Regards;
----------
Koichi Suzuki

2013/3/21 Bei Xu <be...@ad...>:
> Hi, I want to set up HA for pgxc; please see below for my current setup.
> ...
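A minimal sketch of the copy-and-promote flow described above (hosts,
ports, and directories are hypothetical):

    # On server4: take a base backup of a datanode running on server3.
    pg_basebackup -h server3 -p 15432 -D /data/dn1 -X stream -P

    # ... later, at failover time, promote the copy ...
    pg_ctl promote -D /data/dn1

    # The promoted node still has gtm_host = 'server3' in its
    # postgresql.conf, so repoint it at server4's gtm_proxy and restart.
    sed -i "s/^gtm_host.*/gtm_host = 'server4'/" /data/dn1/postgresql.conf
    pg_ctl restart -D /data/dn1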
From: Bei Xu <be...@ad...> - 2013-03-20 18:26:38
Hi, All:
  Does the "nodename" parameter have to be different for all the components
in a pgxc cluster? For instance:
gtm and gtm_standby
datanode and datanode replica
proxy names
From: Bei Xu <be...@ad...> - 2013-03-20 18:11:06
Hi, I want to set up HA for pgxc; please see below for my current setup.

server1: 1 GTM
server2: 1 GTM_Standby
server3 (master): 1 proxy
                  1 coordinator
                  2 datanodes

server4 (stream replication slave): 1 standalone proxy ??
                  1 replicated coordinator (slave of server3's coordinator)
                  2 replicated datanodes (slaves of server3's datanodes)

server3's coordinator and datanodes are the masters of server4's
coordinator/datanodes by stream replication.

Questions:
1. Should there be a proxy on server 4? If not, which proxy should server4's
   coordinator and datanodes point to? (I have to specify gtm_host in
   postgresql.conf.)
2. Do I have to use synchronous replication vs. asynchronous replication? I
   am currently using asynchronous replication because I think if I use
   synchronous, slave failure will affect the master.
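For context, a minimal sketch of the 9.x-era streaming-replication setup the
question assumes for each standby on server4 (connection details and paths
are hypothetical):

    # One recovery.conf per replicated node's data directory on server4.
    echo "standby_mode = 'on'"                                          >  /data/dn1/recovery.conf
    echo "primary_conninfo = 'host=server3 port=15432 user=replicator'" >> /data/dn1/recovery.conf
    pg_ctl start -D /data/dn1    # the node now streams WAL from its server3 master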
From: Amit K. <ami...@en...> - 2013-03-19 06:19:07
Yes, this looks good to go. Thanks for adding the single-node scenario in
the tests.

On 18 March 2013 16:21, Ashutosh Bapat <ash...@en...> wrote:
> Hi Amit,
> PFA the patch with changes. Let me know if it's good to commit.
> ...
From: Ashutosh B. <ash...@en...> - 2013-03-18 10:12:02
Ok, I think it's better to leave distributed as distributed and handle each
case separately.

On Mon, Mar 18, 2013 at 2:02 PM, Amit Khandekar
<ami...@en...> wrote:
> I am concerned about the loss of information that the underlying table is
> actually distributed. Also, there is a function
> IsReturningDMLOnReplicatedTable() which uses this information, although I
> am not sure how much use it makes of it. I leave it to you to decide which
> option to choose. I personally feel it's always good to be explicit when
> checking for this condition.
> ...

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Amit K. <ami...@en...> - 2013-03-18 10:03:27
On 8 March 2013 14:00, Ashutosh Bapat <ash...@en...> wrote:
> Hi Amit,
> Please find my replies inlined,
> ...

Ok. Understood all the comments above.

> I am thinking about this and actually thought that we should mark a
> single-node ExecNodes as REPLICATED, so that it doesn't need any special
> handling. What do you think?

I am concerned about the loss of information that the underlying table is
actually distributed. Also, there is a function
IsReturningDMLOnReplicatedTable() which uses this information, although I am
not sure how much use it makes of it. I leave it to you to decide which
option to choose. I personally feel it's always good to be explicit when
checking for this condition.
From: Michael P. <mic...@gm...> - 2013-03-12 11:10:13
On Tue, Mar 12, 2013 at 7:56 PM, Ashutosh Bapat
<ash...@en...> wrote:
> Hi All,
> Support for materialized views in XC is going to be a problem when we
> pull this feature into XC. The problem is where the materialized view
> should be stored. There are two ways we can store a materialized view:
> a. make it coordinator-only and store the materialized view result at the
> coordinator, or b. store it like any other table, replicated or
> distributed.
> ...

I assume that materialized views should be replicated at each Coordinator,
with storage on the Coordinator. Only a thought, though... This would really
improve performance on some query joins.

Then, what about refresh? Doing a refresh on all the Coordinators within the
same transaction could be problematic, as each Coordinator would need to
connect to each remote node, ending in a dangerous state where multiple
connections would be open on the remote nodes for the same session. A
refresh that runs only locally on each Coordinator is enough, I think.
--
Michael
From: Ashutosh B. <ash...@en...> - 2013-03-12 10:56:58
Hi All,
Support for materialized views in XC is going to be a problem when we pull
this feature into XC. The problem is where the materialized view should be
stored. There are two ways we can store a materialized view: a. make it
coordinator-only and store the materialized view result at the coordinator,
or b. store it like any other table, replicated or distributed.

I am assuming that materialized views will be created for frequently
occurring queries, such that a single materialized view is capable of
serving a whole query. In that case, having coordinator-local storage would
improve performance, since the query doesn't need any fetches from a
datanode. We will need to create the infrastructure to have
coordinator-local storage for user data.

The second option doesn't look that attractive unless a materialized view is
being mis-used, so that a higher percentage of queries need joins with other
views or tables.

---------- Forwarded message ----------
From: Kevin Grittner <kg...@ym...>
Date: Sun, Mar 3, 2013 at 11:14 PM
Subject: [HACKERS] materialized views and FDWs
To: "pgs...@po..." <pgs...@po...>

In final testing and documentation today, it occurred to me to test a
materialized view with a foreign data wrapper. I picked the file_fdw for
convenience, but I think this should work as well with any other FDW. The
idea is to create an MV which mirrors an FDW so that it can be indexed and
quickly accessed. Timings below are all fully cached to minimize caching
effects.

test=# create extension file_fdw;
CREATE EXTENSION
test=# create server local_file foreign data wrapper file_fdw;
CREATE SERVER
test=# create foreign table words (word text not null) server local_file
       options (filename '/etc/dictionaries-common/words');
CREATE FOREIGN TABLE
test=# create materialized view wrd as select * from words;
SELECT 99171
test=# create unique index wrd_word on wrd (word);
CREATE INDEX
test=# create extension pg_trgm;
CREATE EXTENSION
test=# create index wrd_trgm on wrd using gist (word gist_trgm_ops);
CREATE INDEX
test=# vacuum analyze wrd;
VACUUM
test=# select word from wrd order by word <-> 'caterpiler' limit 10;
     word
---------------
 cater
 caterpillar
 Caterpillar
 caterpillars
 caterpillar's
 Caterpillar's
 caterer
 caterer's
 caters
 catered
(10 rows)

test=# explain analyze select word from words order by word <-> 'caterpiler' limit 10;
                                                          QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2195.70..2195.72 rows=10 width=32) (actual time=218.904..218.906 rows=10 loops=1)
   ->  Sort  (cost=2195.70..2237.61 rows=16765 width=32) (actual time=218.902..218.904 rows=10 loops=1)
         Sort Key: ((word <-> 'caterpiler'::text))
         Sort Method: top-N heapsort  Memory: 25kB
         ->  Foreign Scan on words  (cost=0.00..1833.41 rows=16765 width=32) (actual time=0.046..200.965 rows=99171 loops=1)
               Foreign File: /etc/dictionaries-common/words
               Foreign File Size: 938848
 Total runtime: 218.966 ms
(8 rows)

test=# set enable_indexscan = off;
test=# explain analyze select word from wrd order by word <-> 'caterpiler' limit 10;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Limit  (cost=3883.69..3883.71 rows=10 width=9) (actual time=203.819..203.821 rows=10 loops=1)
   ->  Sort  (cost=3883.69..4131.61 rows=99171 width=9) (actual time=203.818..203.818 rows=10 loops=1)
         Sort Key: ((word <-> 'caterpiler'::text))
         Sort Method: top-N heapsort  Memory: 25kB
         ->  Seq Scan on wrd  (cost=0.00..1740.64 rows=99171 width=9) (actual time=0.029..186.749 rows=99171 loops=1)
 Total runtime: 203.851 ms
(6 rows)

test=# reset enable_indexscan;
test=# explain analyze select word from wrd order by word <-> 'caterpiler' limit 10;
                                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.28..1.02 rows=10 width=9) (actual time=24.916..25.079 rows=10 loops=1)
   ->  Index Scan using wrd_trgm on wrd  (cost=0.28..7383.70 rows=99171 width=9) (actual time=24.914..25.076 rows=10 loops=1)
         Order By: (word <-> 'caterpiler'::text)
 Total runtime: 25.884 ms
(4 rows)

Does this deserve specific treatment in the docs? Where?

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Abbas B. <abb...@en...> - 2013-03-10 14:59:39
Hi,
Attached please find a patch that adds support in pg_dump to dump nodes and
node groups. This is required while adding a new node to the cluster.

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb
From: Michael P. <mic...@gm...> - 2013-03-08 10:32:27
On Fri, Mar 8, 2013 at 5:09 PM, Nikhil Sontakke <ni...@st...> wrote:
> I use a simple 'psql -c "\x"' query to monitor coordinators/datanodes.
> The psql call ensures that the connection protocol is followed and
> accepted by that node. It then does an innocuous activity on the psql
> side before exiting. Works well for me.

+1.
--
Michael
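A hedged sketch of that health check wrapped into a loop (the host:port list
is hypothetical; \x merely toggles psql's expanded display, so the probe
does no real work on the node):

    #!/bin/sh
    # psql must complete the startup protocol before running the harmless
    # backslash command, so success implies the node accepts connections.
    for hostport in server3:5432 server3:15432 server4:5432; do
        host=${hostport%:*}; port=${hostport#*:}
        if psql -h "$host" -p "$port" -d postgres -c '\x' >/dev/null 2>&1; then
            echo "$hostport OK"
        else
            echo "$hostport DOWN"
        fi
    done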
From: Amit K. <ami...@en...> - 2013-03-08 10:04:00
On 6 March 2013 15:20, Abbas Butt <abb...@en...> wrote:
> On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar
> <ami...@en...> wrote:
>> On 19 February 2013 12:37, Abbas Butt <abb...@en...> wrote:
>>> Hi,
>>> Attached please find a patch that locks the cluster so that a dump can
>>> be taken to be restored on the new node to be added.
>>>
>>> To lock the cluster, the patch adds a new GUC parameter called
>>> xc_lock_for_backup; however, its status is maintained by the pooler.
>>> The reason is that the default behavior of XC is to release connections
>>> as soon as a command is done, using the PersistentConnections GUC to
>>> control the behavior. In this case, however, we need a status that is
>>> independent of the setting of PersistentConnections.
>>>
>>> Assume we have a two-coordinator cluster; the patch provides this
>>> behavior:
>>>
>>> Case 1: set and show
>>> ====================
>>> psql test -p 5432
>>> set xc_lock_for_backup=yes;
>>> show xc_lock_for_backup;
>>>  xc_lock_for_backup
>>> --------------------
>>>  yes
>>> (1 row)
>>>
>>> Case 2: set from one client, show from another
>>> ==============================================
>>> psql test -p 5432
>>> set xc_lock_for_backup=yes;
>>> (from another tab)
>>> psql test -p 5432
>>> show xc_lock_for_backup;   -- shows yes
>>>
>>> Case 3: set from one session, quit it, run again and show
>>> =========================================================
>>> psql test -p 5432
>>> set xc_lock_for_backup=yes;
>>> \q
>>> psql test -p 5432
>>> show xc_lock_for_backup;   -- shows yes
>>>
>>> Case 4: set on one coordinator, show from the other
>>> ===================================================
>>> psql test -p 5432
>>> set xc_lock_for_backup=yes;
>>> (from another tab)
>>> psql test -p 5433
>>> show xc_lock_for_backup;   -- shows yes
>>>
>>> pg_dump and pg_dumpall seem to work fine after locking the cluster for
>>> backup, but I will test these utilities in detail next.
>>>
>>> Also, I have yet to verify in detail that standard_ProcessUtility is
>>> the only place that updates the portion of the catalog that is dumped.
>>> There may be some other places too that need to be blocked for catalog
>>> updates.
>>>
>>> The patch adds no extra warnings and regression shows no extra
>>> failures.
>>>
>>> Comments are welcome.
>>
>> Abbas wrote on another thread:
>>> Amit wrote on another thread:
>>>> Does the cluster-lock command wait for the ongoing DDL commands to
>>>> finish? If not, we have problems. The subsequent pg_dump would not
>>>> contain objects created by those particular DDLs.
>>>
>>> Suppose you have a two-coordinator cluster with one client connected to
>>> each. Suppose one client issues a lock cluster command and the other
>>> issues a DDL. Is this what you mean by an ongoing DDL? If so, the
>>> answer to your question is Yes.
>>>
>>> Suppose you have a prepared transaction that has a DDL in it; again, if
>>> this can be considered an ongoing DDL, the answer to your question is
>>> Yes.
>>>
>>> Suppose one client starts a transaction and issues a DDL, the second
>>> client issues a lock cluster command, and the first commits the
>>> transaction. If this is an ongoing DDL, then the answer to your
>>> question is No.
>>
>> Yes, this last scenario is what I meant: a DDL has been executed on
>> nodes, but not committed, when the cluster lock command is run, and then
>> pg_dump immediately starts its transaction before the DDL is committed.
>> Here pg_dump does not see the new objects that would be created.
>>
>> I myself am not sure how we would prevent this from happening. There are
>> two callback hooks that might be worth considering though:
>> 1. Transaction end callback (CallXactCallbacks)
>> 2. Object creation/drop hook (InvokeObjectAccessHook)
>>
>> Suppose we create an object creation/drop hook function that would:
>> 1. store the current transaction id in a global objects_created list if
>>    the cluster is not locked,
>> 2. or else, if the cluster is locked, ereport() saying "cannot create
>>    catalog objects in this mode".
>>
>> And then during transaction commit, a new transaction callback hook
>> will:
>> 1. check the above objects_created list to see if the current
>>    transaction has created/dropped any objects,
>> 2. and if so, and the cluster lock is on, again ereport() saying "cannot
>>    create catalog objects in this mode".
>>
>> Thinking more about the object creation hook, we could even consider it
>> as a substitute for checking the cluster-lock status in
>> standard_ProcessUtility(). But I am not sure whether this hook gets
>> called for each of the catalog objects. At least the code comments say
>> it does.
>>
>>> But it's a matter of deciding which camp we are going to put COMMIT in:
>>> the allow camp or the deny camp. I decided to put it in the allow camp,
>>> because I have not yet written any code to detect whether a transaction
>>> being committed has a DDL in it or not, and stopping all transactions
>>> from committing looks too restrictive to me.
>>>
>>> Do you have some other meaning of an ongoing DDL?
>
> Thanks for the ideas. Here is how I handled the problem of ongoing DDLs.
>
> 1. The online node addition feature requires that each transaction be
> monitored for any activity that would be prohibited if the cluster is
> locked before the transaction commits. This obviously adds some overhead
> to each transaction. If the database administrator is sure that the
> deployed cluster will never require online addition of nodes, OR decides
> that node addition will be done by bringing the cluster down, then a
> command line parameter "disable-online-node-addition" can be used to
> disable transaction monitoring for online node addition. By default,
> online addition of nodes will be available.

Is this overhead because you do pooler communication during commit? If so,
yes, that is an overhead. In the other reply, you said we have to keep the
lock across sessions; if we leave that session, the lock goes away, so we
would have the restriction that everything else should be run in the same
session. So if we acquire a session lock in pg_dump itself, would that solve
the problem?

> 2. Suppose we have a two-coordinator cluster, CO1 and CO2, with one
> client connected to each coordinator. Further assume one client starts a
> transaction and issues a DDL. This is an unfinished transaction. Now
> assume the second client issues SET xc_lock_for_backup=yes. The commit of
> the unfinished transaction should now fail. To handle this situation we
> monitor each transaction for any activity that would be prohibited if the
> cluster is locked before the transaction commits. At commit time we check
> whether the transaction issued a prohibited statement and the cluster has
> since been locked, and if so we abort the commit. This is done only if
> online addition of nodes has not been disabled explicitly and the server
> is not running in bootstrap mode.

Does the object access hook seem to be a feasible option for keeping track
of unfinished DDLs? If it is feasible, we don't have to prohibit according
to which DDL is being run.

> 3. I did not use CallXactCallbacks because the comment in
> CommitTransaction reads:
>  * This is all post-commit cleanup. Note that if an error is raised
>  * here, it's too late to abort the transaction. This should be just
>  * noncritical resource releasing.

Yes, you are right. The transaction has already been committed when this
callback gets invoked.

> I have attached the revised patch with detailed comments.
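For orientation, a hedged sketch of the add-a-node flow this lock enables;
everything beyond what the thread shows (the dump flags and unlocking by
setting the GUC back to "no") is an assumption:

    # Lock the cluster so no DDL can commit while the schema is captured.
    psql test -p 5432 -c "set xc_lock_for_backup=yes;"

    # The lock status lives in the pooler, so it survives session exit
    # (Case 3 above); a dump from any session now sees a stable catalog.
    pg_dumpall -p 5432 --schema-only > cluster_schema.sql

    # Restore the schema on the new node, register it with CREATE NODE,
    # then release the lock (assumed to be the symmetric command):
    psql test -p 5432 -c "set xc_lock_for_backup=no;"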
From: Ashutosh B. <ash...@en...> - 2013-03-08 10:00:56
|
Hi Amit, Please find my replies inlined, > I think the logic of shippability of outer joins is flawless. Didn't > find any holes. Patch comments below : > > ------- > > In case of distributed equi-join case, why is > IsExecNodesColumnDistributed() used instead of > IsExecNodesDistributedByValue() ? We want to always rule out the round > robin case, no ? I can see that pgxc_find_dist_equijoin_qual() will > always fail for round robin tables because they won't have any distrib > columns, but still , just curious ... > > It keeps open the possibility that we will be able to ship equi-join if we can somehow infer that the rows from both the sides of join, participating in the result of join are collocated. > ------- > > * PGXC_TODO: What do we do when baselocatortype is > * LOCATOR_TYPE_DISTRIBUTED? It could be anything HASH > distributed or > * MODULO distributed. In that case, having equi-join > doesn't work > * really, because same value from different relation will > go to > * different node. > > The above comment says that it does not work if one of the tables is > distributed by hash and other table is distributed by modulo. But the > code is actually checking the baselocatortype also, so I guess it > works correctly after all ? I did not get what is the TODO here. Or > does it mean this ? : > For (t1_hash join t2_hash on ...) tj1 join (t1_mod join t2_mod on ...) > tj2 on tj1.col1 = tj2.col4 > the merged nodes for tj1 will have LOCATOR_TYPE_DISTRIBUTED, and the > merged nodes for tj2 will also be LOCATOR_TYPE_DISTRIBUTED, and so tj1 > join tj2 would be wrongly marked shippable even though they should not > be shippable because of the mix of hash and modulo ? > > That's correct. This should be taken care by my second patch up for review. I think with that patch, we won't need LOCATOR_TYPE_DISTRIBUTED. While reviewing that patch, can you please also review if this is true. > ------- > > Is pgxc_is_expr_shippable(equi_join_expr) necessary ? Won't this qual > be examined in is_query_shippable() walker ? > This code will get executed in standard_planner() as well, so it's possible that some of the join quals will be shippable and some are not. While this is fine for an inner join, we want to make sure the a qual which implies collocation of rows is shippable. This check is more from future extension perspective than anything else. 
> --------
>
> If both tables reside on a single datanode, every join case should be
> shippable, which doesn't seem to be happening:
>
> postgres=# create table tab2 (id2 int, v varchar) distribute by
>            replication to node (datanode_1);
> postgres=# create table tab1 (id1 int, v varchar) to node (datanode_1);
> postgres=# explain select * from (tab1 full outer join tab2 on id1 = id2);
>                                            QUERY PLAN
> -------------------------------------------------------------------------------------------------
>  Hash Full Join  (cost=0.12..0.26 rows=10 width=72)
>    Hash Cond: (tab1.id1 = tab2.id2)
>    ->  Data Node Scan on tab1 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=36)
>          Node/s: datanode_1
>    ->  Hash  (cost=0.00..0.00 rows=1000 width=36)
>          ->  Data Node Scan on tab2 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=36)
>                Node/s: datanode_1
>
> Probably you need to take the following statement out of the
> distributed case and apply it as a general rule:
>
>     /* If there is only a single node, try merging the nodes */
>     if (list_length(inner_en->nodeList) == 1 &&
>         list_length(outer_en->nodeList) == 1)
>         merge_nodes = true;

I am thinking about this, and my thought is that we should mark a
single-node ExecNodes as REPLICATED, so that it doesn't need any
special handling. What do you think?

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
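A sketch of the single-node idea in the reply above, assuming the
ExecNodes fields quoted earlier (nodeList, baselocatortype); the helper
name is hypothetical and the header location is assumed:

    /*
     * Hedged sketch, not from the patch: treat an ExecNodes that
     * targets exactly one Datanode as replicated for node-merging
     * purposes, so the single-node case needs no special branch.
     */
    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "pgxc/locator.h"    /* assumed home of ExecNodes/LOCATOR_TYPE_* */

    static char
    pgxc_effective_locator_type(ExecNodes *exec_nodes)
    {
        if (list_length(exec_nodes->nodeList) == 1)
            return LOCATOR_TYPE_REPLICATED;
        return exec_nodes->baselocatortype;
    }
|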
From: Abbas B. <abb...@en...> - 2013-03-08 09:55:39
|
Thanks, I got it to work.

On Fri, Mar 8, 2013 at 1:40 PM, Koichi Suzuki <koi...@gm...> wrote:
> I found that the documentation does not reflect the change. I visited
> the code and found they're implemented.
>
> Could you take a look at gram.y?
>
> We need to revise the document to include all these changes.
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/3/8 Abbas Butt <abb...@en...>:
> > Hi,
> > ALTER TABLE REDISTRIBUTE does not support the TO NODE clause:
> > How would we redistribute data after e.g. adding a node?
> > OR
> > How would we redistribute the data before removing a node?
> >
> > I think this functionality will have to be added to the system to
> > complete the whole picture.
> >
> > --
> > Abbas
> > Architect
> > EnterpriseDB Corporation
> > The Enterprise PostgreSQL Company

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
|
From: Nikhil S. <ni...@st...> - 2013-03-08 09:23:15
|
> Does it work correctly if gtm/gtm_proxy is not running?

Yeah, it does. I faced the same issue: if gtm is down, the call would
error out and the HA infrastructure would wrongly assume that this node
is down and fail it over. With this simple psql call all of that is
avoided.

Regards,
Nikhils

> I found PQping is lighter and easier to use; it is a dedicated API to
> check whether the server is running. It is independent of
> users/databases and does not require any password. It just checks that
> the target is working.
>
> I think this is more flexible to be used in various setups.
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/3/8 Nikhil Sontakke <ni...@st...>:
> > I use a simple 'psql -c "\x"' query to monitor coordinators/datanodes.
> > The psql call ensures that the connection protocol is followed and
> > accepted by that node. It then does an innocuous activity on the psql
> > side before exiting. Works well for me.
> >
> > Regards,
> > Nikhils

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
|
From: Koichi S. <koi...@gm...> - 2013-03-08 09:01:38
|
Thanks Abbas for the fix.

----------
Koichi Suzuki

2013/3/8 Abbas Butt <abb...@en...>:
> Attached please find a patch to fix 3607290.
>
> Regression shows no extra failures.
>
> Test cases for this have already been submitted in the email with
> subject [Patch to fix a crash in COPY TO from a replicated table].
>
> --
> Abbas
> Architect
> EnterpriseDB Corporation
> The Enterprise PostgreSQL Company
|
From: Koichi S. <koi...@gm...> - 2013-03-08 08:40:39
|
I found that the documentation does not reflect the change. I visited
the code and found they're implemented.

Could you take a look at gram.y?

We need to revise the document to include all these changes.

Regards;
----------
Koichi Suzuki

2013/3/8 Abbas Butt <abb...@en...>:
> Hi,
> ALTER TABLE REDISTRIBUTE does not support the TO NODE clause:
> How would we redistribute data after e.g. adding a node?
> OR
> How would we redistribute the data before removing a node?
>
> I think this functionality will have to be added to the system to
> complete the whole picture.
>
> --
> Abbas
> Architect
> EnterpriseDB Corporation
> The Enterprise PostgreSQL Company
|
From: Amit K. <ami...@en...> - 2013-03-08 08:38:37
|
On 6 March 2013 14:16, Ashutosh Bapat <ash...@en...> wrote:
> Hi Amit,
> The patch looks good and is on the right track for parameter handling.
> I see that we are relying more on the data produced by PG and the
> standard planner rather than inferring it ourselves in XC, so this
> looks like a good improvement.
>
> Here are my comments.
>
> Tests
> -----
> 1. It seems to be testing the parameter handling for queries arising
>    from plpgsql functions. The function prm_func() seems to be doing
>    that. Can you please add some comments to this function specifying
>    what is being tested in the various sets of statements in the
>    function?
> 2. Also, it seems to be using two tables, prm_emp1 and prm_emp2. The
>    first one is being used to populate the other one and a variable
>    inside the function. Later only the other one is being used. Can we
>    just use a single table instead of two?
> 3. Can we use an existing volatile function instead of a new one like
>    prm_volfunc()?

Done these changes. Also added a DELETE scenario.

> Code
> ----
> 1. Please use the prefixes rq_ and rqs_ for the newly added members of
>    the RemoteQuery and RemoteQueryState structures respectively. This
>    allows the usage of these members to be located easily through
>    cscope/ctags etc. As a general rule, we should always add a prefix
>    for members of commonly used structures or members which use very
>    common variable names. rq_, rqs_ and en_ are being used for
>    RemoteQuery, RemoteQueryState and ExecNodes respectively.

Done.

> 2. Is it possible to infer the value of has_internal_params from the
>    rest of the members of the RemoteQuery structure? If so, can we
>    drop this member and use the inference logic?

Could not find any information that we can safely infer the param types
from.

> 3. The following code needs more commenting in DMLParamListToDataRow():
>    5027    /* Set the remote param types if they are not already set */
>    The code below this comment seems to execute only the first time
>    the RemoteQueryState is used. Please elaborate on this in the
>    comment, lest the reader be confused as to when this case can
>    happen.

Done.

> 4. In the code below
>    5098    /* copy data to the buffer */
>    5099    *datarow = palloc(buf.len);
>    5100    memcpy(*datarow, buf.data, buf.len);
>    5101    rq_state->paramval_len = buf.len;
>    5102    pfree(buf.data);
>    can we use *datarow = buf.data? The memory context in both cases
>    will have the same life. We will save calls to palloc, pfree and
>    memcpy. You can add comments about why this assignment is safe. We
>    do this type of assignment at other places too; see
>    pgxc_rqplan_build_statement. A similar change is needed in
>    ExternParamListToDataRow().

Right. Done.

> 5. More elaboration is needed in the prologue of
>    DMLParamListToDataRow(). See some hints below. We need to elaborate
>    on the purpose of such a conversion. The name of the function is
>    misleading; there is no ParamList involved to convert from. We are
>    converting from a TupleSlot.
>    5011 /* --------------------------------
>    5012  * DMLParamListToDataRow
>    5013  * Obtain a copy of <given> slot's data row <in what form?>, and copy it into
>    5014  * <passed in/given> RemoteQueryState.paramval_data. Also set remote_param_types <to what?>
>    5015  * The slot itself is undisturbed.
>    5016  * --------------------------------

Done. Also changed the names of both the internal and extern param
functions.

> 6. Variable declarations in DMLParamListToDataRow() need to be
>    aligned. We align the start of the declarations and the variable
>    names themselves.

Done. This was existing code, but I corrected it.
> 7. In create_remotedml_plan(), we were using SetRemoteStatementName to
>    have all the parameter setting in one place. But you have instead
>    set them explicitly in the function itself. Can you please revert
>    that change? The remote_param_types set here are being overwritten
>    in DMLParamListToDataRow(). What if the param types/param numbers
>    obtained in both these functions are different? Can we add some
>    asserts to check this?

The remote_param_types set in create_remotedml_plan() belong to
RemoteQuery, whereas those that are set in DMLParamListToDataRow()
belong to RemoteQueryState, so they are not overwritten. But the remote
param types that are set in create_remotedml_plan() are not required; I
have realized that part is redundant, and I have removed it. The
internal params are inferred in DMLParamListToDataRow().

> On Tue, Feb 26, 2013 at 9:51 AM, Amit Khandekar
> <ami...@en...> wrote:
>> There have been errors like:
>> "Cannot find parameter $4" or
>> "Bind supplies 4 parameters while Prepare needs 8 parameters"
>> that we have been getting in specific scenarios. These scenarios come
>> up in plpgsql functions. This is the root cause:
>>
>> If PLpgSQL_datum.dtype is not a simple type (PLPGSQL_DTYPE_VAR), the
>> parameter types (ParamExternData.ptype) for such plpgsql functions
>> are not set until the values are actually populated. An example of
>> such variables is a record variable without a %rowtype specification.
>> The ParamListInfo.paramFetch hook function is called when needed to
>> fetch such parameter types. In the XC function
>> pgxc_set_remote_parameters(), we do not consider this; we check only
>> ParamExternData.ptype to see if parameters are present, and end up
>> with fewer parameters than the actual parameters, sometimes even
>> ending up with 0 parameter types.
>>
>> During trigger support implementation, it was discovered that due to
>> this issue, NEW.field or OLD.field cannot be used directly in SQL
>> statements.
>>
>> Actually we don't even need parameter types to be set at plan time in
>> XC. We only need them at the BIND message, and there we can infer the
>> types from the tuple descriptor anyway. So the attached patch removes
>> all the places where parameter types are set, and derives them when
>> the BIND data row is built.
>>
>> I have not touched the SetRemoteStatementName function in this patch.
>> There can be scenarios where the user calls PREPARE using parameter
>> types, and in such cases it is better to use these parameters in
>> SetRemoteStatementName() being called from BuildCachedPlan with
>> non-NULL boundParams. The use of parameter types during PREPARE and
>> the rebuilding of cached plans etc. will be dealt with further after
>> this one, so I haven't removed param types altogether.
>>
>> We also need to know whether the parameters are supplied through a
>> source data plan (DMLs) or whether they are external. So I added a
>> field has_internal_params in RemoteQuery to make this difference
>> explicit. The data row and parameter types are built in a different
>> manner for DMLs and non-DMLs.
>>
>> Moved the datarow generation function from execTuples.c to
>> execRemote.c.
>>
>> Regressions
>> -----------
>>
>> There is a parameter-related error in the plpgsql.sql test which does
>> not occur now, so I corrected the expected output. It still does not
>> show the exact output because of the absence of trigger support.
>>
>> Added a new test, xc_params.sql, which will be further extended later.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company
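A minimal sketch of the "infer the types from the tuple descriptor"
approach described above, assuming a 9.2-era TupleDesc layout; the
function name and memory handling are illustrative, not the patch's
actual code:

    /*
     * Hedged sketch: derive bind-parameter type OIDs for a DML's
     * internal parameters from the source slot's tuple descriptor at
     * BIND time, instead of fixing them at plan time.
     */
    #include "postgres.h"
    #include "access/tupdesc.h"
    #include "executor/tuptable.h"

    static Oid *
    infer_param_types_from_slot(TupleTableSlot *slot, int *nparams)
    {
        TupleDesc   tdesc = slot->tts_tupleDescriptor;
        Oid        *param_types;
        int         i;

        *nparams = tdesc->natts;
        param_types = (Oid *) palloc(*nparams * sizeof(Oid));

        /* 9.2-era layout: attrs[] is an array of Form_pg_attribute */
        for (i = 0; i < *nparams; i++)
            param_types[i] = tdesc->attrs[i]->atttypid;

        return param_types;
    }
|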
From: Koichi S. <koi...@gm...> - 2013-03-08 08:35:11
|
Does it work correctly if gtm/gtm_proxy is not running? I found PQping
is lighter and easier to use; it is a dedicated API to check whether
the server is running. It is independent of users/databases and does
not require any password. It just checks that the target is working.

I think this is more flexible to be used in various setups.

Regards;
----------
Koichi Suzuki

2013/3/8 Nikhil Sontakke <ni...@st...>:
> I use a simple 'psql -c "\x"' query to monitor coordinators/datanodes.
> The psql call ensures that the connection protocol is followed and
> accepted by that node. It then does an innocuous activity on the psql
> side before exiting. Works well for me.
>
> Regards,
> Nikhils
>
> On Fri, Mar 8, 2013 at 12:48 PM, Koichi Suzuki
> <koi...@gm...> wrote:
>> Okay, here's a patch which uses PQping. This is new to 9.1, is
>> extremely simple, and matches my needs.
>>
>> Regards;
>> ----------
>> Koichi Suzuki
>>
>> 2013/3/8 Michael Paquier <mic...@gm...>:
>>> On Fri, Mar 8, 2013 at 12:13 PM, Koichi Suzuki <koi...@gm...>
>>> wrote:
>>>> Because the 9.3 merge will not be done in 1.1, I don't think it's
>>>> feasible at present. A second means would be to use PQ* functions.
>>>> Anyway, this will be provided by pgxc_monitor. It may be a good
>>>> idea to use a custom background worker, but this could be too much
>>>> because the requirement is very small.
>>>
>>> In this case use something like PQping or similar, but simply do not
>>> involve core. There would be an underlying performance impact for
>>> sure.
>>> --
>>> Michael
>
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service
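For illustration, a minimal standalone monitor built on PQping() (libpq,
9.1 and later); the connection string is an example value and the exit
codes are arbitrary:

    /*
     * Minimal sketch of node monitoring with PQping().  PQping needs
     * no credentials and only reports whether the server at the given
     * address is accepting connections.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        const char *conninfo = "host=localhost port=5432";  /* example target */

        switch (PQping(conninfo))
        {
            case PQPING_OK:
                printf("node is up\n");
                return 0;
            case PQPING_REJECT:
                printf("node is alive but rejecting connections\n");
                return 1;
            case PQPING_NO_RESPONSE:
                printf("node is unreachable\n");
                return 2;
            default:            /* PQPING_NO_ATTEMPT: bad conninfo */
                return 3;
        }
    }
|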