From: Ashutosh B. <ash...@en...> - 2013-06-03 05:55:03
On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt <abb...@en...> wrote:

> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat <ash...@en...> wrote:
>
>> Why didn't we see this issue the first time prepare was implemented?
>> I don't remember (but it was two years back).
>
> I was unable to locate the exact reason, but since statements were not
> being prepared on datanodes due to a merge issue, this issue only
> surfaced now.

Well, even though statements were not getting prepared on datanodes
(more precisely, prepared statements were not being used again and
again), we never prepared them on the datanode at the time of preparing
the statement. So this bug should have shown itself long back.

[...]

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
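A note on the baseline the thread is implicitly comparing against: in
stock PostgreSQL the plan cache records the search_path in effect at
PREPARE time and revalidates under it, which is why plancache.sql
expects 123 even after the forced replan. The sketch below demonstrates
that baseline through libpq; it assumes a reachable server plus the
s1.abc/s2.abc tables and rows from the test case discussed above, and
is illustrative only, not part of any patch in this thread.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* Assumed connection string; adjust for your setup. */
        PGconn *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* Prepare under s1, then flip the path before executing. */
        PQclear(PQexec(conn, "SET search_path = s1"));
        PQclear(PQprepare(conn, "p1", "SELECT f1 FROM abc", 0, NULL));
        PQclear(PQexec(conn, "SET search_path = s2"));

        PGresult *res = PQexecPrepared(conn, "p1", 0, NULL, NULL, NULL, 0);
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            printf("f1 = %s\n", PQgetvalue(res, 0, 0));  /* expect 123 */

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

On Postgres-XC with the WIP patch, this same sequence is where the
datanode-side replan under the wrong search_path shows up.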
From: Ahsan H. <ahs...@en...> - 2013-06-03 05:52:44
Abbas,

Can you please check in this patch after making sure that it doesn't
cause any additional regression failures?

On Mon, Jun 3, 2013 at 5:59 AM, Abbas Butt <abb...@en...> wrote:

> Hi,
> Attached please find a revised patch that removes the variable
> non_fqs_dml from RemoteQueryState, which is no longer needed. With
> this small change included, the patch is good to go.
>
> On Thu, May 23, 2013 at 9:41 AM, Abbas Butt <abb...@en...> wrote:
>
>> I will try to spare some time for this over the weekend.
>>
>> On Thu, May 23, 2013 at 1:09 AM, Ahsan Hadi <ahs...@en...> wrote:
>>
>>> Abbas,
>>> Can you please review this patch this week?
>>>
>>> On Tue, May 21, 2013 at 3:55 AM, Amit Khandekar <ami...@en...> wrote:
>>>
>>>> Currently the number of tuples processed is updated in both
>>>> HandleCommandComplete() and ExecInsert/Update/Delete().
>>>>
>>>> In HandleCommandComplete() the count is taken from the command tag
>>>> returned by the datanode, i.e. INSERT 0 2, UPDATE 5 and the like,
>>>> and estate->es_processed is updated from it. But it does this only
>>>> for FQS. For non-FQS, ExecInsert/Update just increments the count
>>>> by 1. So if a trigger function skips a row on the datanode, the
>>>> command tag returned from the datanode is INSERT 0 0, but
>>>> ExecInsert() still increments the row count.
>>>>
>>>> I have added a new field, RemoteQueryState->rqs_processed, which
>>>> is updated in HandleCommandComplete(). It is then used in
>>>> ExecInsert/Update/Delete() for non-FQS, and in RemoteQueryNext()
>>>> for FQS.
>>>>
>>>> While fixing this issue, I noticed that there also seems to be an
>>>> issue with combiner->command_complete_count. Currently it checks
>>>> the consistency of the number of tuples returned for replicated
>>>> tables, but it does so only for FQS. We need to completely remove
>>>> the dependency on whether a DML query is FQS or non-FQS; for this,
>>>> command_complete_count needs to be handled better. It needs some
>>>> refactoring which I did not feel good doing in this release:
>>>> currently the field is updated on each FetchTuple iteration (by
>>>> reusing the same combiner for each iteration), whereas it should
>>>> be updated once per node execution, not once per tuple fetched. I
>>>> haven't touched this part, but added a TODO and opened 3613645.
>>>>
>>>> Added some test cases to the existing xc_trigship and xc_returning
>>>> tests.

--
Ahsan Hadi
Snr Director Product Development
EnterpriseDB Corporation
The Enterprise Postgres Company
www.enterprisedb.com
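The whole fix turns on reading the affected-row count out of the
command tag the datanode sends at command completion. Below is a
self-contained sketch of that parsing step (plain C, not the actual
HandleCommandComplete() code; the tag formats are the ones PostgreSQL
actually emits for DML):

    #include <stdio.h>

    /* Extract the affected-row count from a DML command tag such as
     * "INSERT 0 2", "UPDATE 5" or "DELETE 5".  Returns 1 on success. */
    static int
    rows_from_command_tag(const char *tag, unsigned long long *rows)
    {
        unsigned long long oid;

        if (sscanf(tag, "INSERT %llu %llu", &oid, rows) == 2)
            return 1;
        if (sscanf(tag, "UPDATE %llu", rows) == 1)
            return 1;
        if (sscanf(tag, "DELETE %llu", rows) == 1)
            return 1;
        return 0;                       /* not a DML tag */
    }

    int
    main(void)
    {
        unsigned long long n = 0;

        /* A trigger that skips the row makes the datanode report zero... */
        if (rows_from_command_tag("INSERT 0 0", &n))
            printf("datanode reported %llu row(s)\n", n);  /* prints 0 */

        /* ...which is what rqs_processed should accumulate, instead of
         * the coordinator incrementing es_processed once per row. */
        return 0;
    }

The INSERT tag carries a leading OID field, which is why it needs a
separate pattern from UPDATE and DELETE.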
From: Koichi Suzuki (鈴木 幸市) <ko...@in...> - 2013-06-03 05:31:48
Sorry, I forgot to send this to the developers ML.
---
Koichi Suzuki

Begin forwarded message:

> From: Koichi Suzuki <koi...@gm...>
> Subject: [Postgres-xc-core] Patch to back up restart point for barrier
> Date: June 3, 2013, 14:30:17 JST
> To: Postgres-XC core <Pos...@li...>
>
> PFA the patch to back up the GTM restart point for each CREATE BARRIER
> command. This is needed to make a stable restoration point by PITR.
>
> ----------
> Koichi Suzuki
From: Abbas B. <abb...@en...> - 2013-06-03 05:21:20
On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat <ash...@en...> wrote:

> I think the right solution here is either of two:
> 1. Take your previous patch to always use qualified names (but you
>    need to improve it not to affect the view dumps).
> 2. Prepare the statements at the datanode at the time of prepare.
>
> Is this test added new in 9.2?

No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in
March 2007.

> Why didn't we see this issue the first time prepare was implemented?
> I don't remember (but it was two years back).

I was unable to locate the exact reason, but since statements were not
being prepared on datanodes due to a merge issue, this issue only
surfaced now.

[...]

--
Abbas
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com
Follow us on Twitter: @EnterpriseDB
From: Ashutosh B. <ash...@en...> - 2013-06-03 03:43:13
On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt <abb...@en...> wrote:

> Attached please find an updated patch to fix the bug. The patch takes
> care of the bug and of the regression issues resulting from the
> changes done in the patch. Please note that the issue in test case
> plancache still stands unsolved because of the following test case
> (simplified, but taken from plancache.sql):
>
>   create schema s1 create table abc (f1 int);
>   create schema s2 create table abc (f1 int);
>
>   insert into s1.abc values(123);
>   insert into s2.abc values(456);
>
>   set search_path = s1;
>   prepare p1 as select f1 from abc;
>   execute p1; -- works fine, results in 123
>
>   set search_path = s2;
>   execute p1; -- works fine after the patch, results in 123
>
>   alter table s1.abc add column f2 float8; -- force replan
>   execute p1; -- fails
>
> The last execute should result in 123, whereas it results in 456. The
> reason is that the search path has already been changed at the
> datanode, and a replan means selecting from abc in s2.

Huh! The beast bit us.

I think the right solution here is either of two:
1. Take your previous patch to always use qualified names (but you need
   to improve it not to affect the view dumps).
2. Prepare the statements at the datanode at the time of prepare.

Is this test added new in 9.2? Why didn't we see this issue the first
time prepare was implemented? I don't remember (but it was two years
back).

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
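Option 1 amounts to making the deparser ignore search_path and always
emit schema-qualified names in the statement text shipped to the
datanodes. In the backend this is what helpers such as get_rel_name(),
get_namespace_name() and quote_qualified_identifier() are for; the
standalone sketch below only models the qualification-plus-quoting
rule (it deliberately ignores reserved-word quoting) and is not the
actual patch under discussion:

    #include <ctype.h>
    #include <stdio.h>

    /* Print an identifier, quoting it unless it is a safe lower-case
     * name; embedded double quotes are doubled, per SQL rules. */
    static void
    print_quoted(const char *ident)
    {
        int need_quote = !islower((unsigned char) ident[0]) && ident[0] != '_';

        for (const char *p = ident; *p && !need_quote; p++)
            if (!islower((unsigned char) *p) &&
                !isdigit((unsigned char) *p) && *p != '_')
                need_quote = 1;

        if (!need_quote)
        {
            fputs(ident, stdout);
            return;
        }
        putchar('"');
        for (const char *p = ident; *p; p++)
        {
            if (*p == '"')
                putchar('"');
            putchar(*p);
        }
        putchar('"');
    }

    int
    main(void)
    {
        /* s1.abc from the test case, plus a name that needs quoting */
        print_quoted("s1"); putchar('.'); print_quoted("abc"); putchar('\n');
        print_quoted("My Schema"); putchar('.'); print_quoted("abc"); putchar('\n');
        return 0;
    }

The view-dump caveat is why the rule cannot simply be applied
everywhere: pg_dump reproduces deparsed view definitions, which should
not suddenly come out fully qualified.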
From: Abbas B. <abb...@en...> - 2013-06-03 02:10:49
|
Attached please find updated patch to fix the bug. The patch takes care of the bug and the regression issues resulting from the changes done in the patch. Please note that the issue in test case plancache still stands unsolved because of the following test case (simplified but taken from plancache.sql) create schema s1 create table abc (f1 int); create schema s2 create table abc (f1 int); insert into s1.abc values(123); insert into s2.abc values(456); set search_path = s1; prepare p1 as select f1 from abc; execute p1; -- works fine, results in 123 set search_path = s2; execute p1; -- works fine after the patch, results in 123 alter table s1.abc add column f2 float8; -- force replan execute p1; -- fails The last execute should result in 123, whereas it results in 456. The reason is that the search path has already been changed at the datanode and a replan would mean select from abc in s2. On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Abbas, > I think the fix is on the right track. There are couple of improvements > that we need to do here (but you may not do those if the time doesn't > permit). > > 1. We should have a status in RemoteQuery node, as to whether the query in > the node should use extended protocol or not, rather than relying on the > presence of statement name and parameters etc. Amit has already added a > status with that effect. We need to leverage it. > > > On Tue, May 28, 2013 at 9:04 AM, Abbas Butt <abb...@en...>wrote: > >> The patch fixes the dead code issue, that I described earlier. The code >> was dead because of two issues: >> >> 1. The function CompleteCachedPlan was wrongly setting stmt_name to NULL >> and this was the main reason ActivateDatanodeStatementOnNode was not being >> called in the function pgxc_start_command_on_connection. >> 2. The function SetRemoteStatementName was wrongly assuming that a >> prepared statement must have some parameters. >> >> Fixing these two issues makes sure that the function >> ActivateDatanodeStatementOnNode is now called and statements get prepared >> on the datanode. >> This patch would fix bug 3607975. It would however not fix the test case >> I described in my previous email because of reasons I described. >> >> >> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> Can you please explain what this fix does? It would help to have an >>> elaborate explanation with code snippets. >>> >>> >>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt < >>> abb...@en...> wrote: >>> >>>> >>>> >>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> >>>>> >>>>> >>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt < >>>>>>> abb...@en...> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> While working on test case plancache it was brought up as a review >>>>>>>> comment that solving bug id 3607975 should solve the problem of the test >>>>>>>> case. >>>>>>>> However there is some confusion in the statement of bug id 3607975. >>>>>>>> >>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple times, the >>>>>>>> coordinator keeps on preparing and executing the query on datanode al >>>>>>>> times, as against preparing once and executing multiple times. This is >>>>>>>> because somehow the remote query is being prepared as an unnamed statement." 
>>>>>>>> >>>>>>>> Consider this test case >>>>>>>> >>>>>>>> A. create table abc(a int, b int); >>>>>>>> B. insert into abc values(11, 22); >>>>>>>> C. prepare p1 as select * from abc; >>>>>>>> D. execute p1; >>>>>>>> E. execute p1; >>>>>>>> F. execute p1; >>>>>>>> >>>>>>>> Here are the confusions >>>>>>>> >>>>>>>> 1. The coordinator never prepares on datanode in response to a >>>>>>>> prepare issued by a user. >>>>>>>> In fact step C does nothing on the datanodes. >>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all datanodes. >>>>>>>> >>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a new >>>>>>>> generic plan, >>>>>>>> and steps E and F use the already built generic plan. >>>>>>>> For details see function GetCachedPlan. >>>>>>>> This means that executing a prepared statement again and again >>>>>>>> does use cached plans >>>>>>>> and does not prepare again and again every time we issue an >>>>>>>> execute. >>>>>>>> >>>>>>>> >>>>>>> The problem is not here. The problem is in do_query() where somehow >>>>>>> the name of prepared statement gets wiped out and we keep on preparing >>>>>>> unnamed statements at the datanode. >>>>>>> >>>>>> >>>>>> We never prepare any named/unnamed statements on the datanode. I >>>>>> spent time looking at the code written in do_query and functions called >>>>>> from with in do_query to handle prepared statements but the code written in >>>>>> pgxc_start_command_on_connection to handle statements prepared on datanodes >>>>>> is dead as of now. It is never called during the complete regression run. >>>>>> The function ActivateDatanodeStatementOnNode is never called. The way >>>>>> prepared statements are being handled now is the same as I described >>>>>> earlier in the mail chain with the help of an example. >>>>>> The code that is dead was originally added by Mason through commit >>>>>> d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This code >>>>>> has been changed a lot over the last two years. This commit does not >>>>>> contain any test cases so I am not sure how did it use to work back then. >>>>>> >>>>>> >>>>> >>>>> This code wasn't dead, when I worked on prepared statements. So, >>>>> something has gone wrong in-between. That's what we need to find out and >>>>> fix. Not preparing statements on the datanode is not good for performance >>>>> either. >>>>> >>>> >>>> I was able to find the reason why the code was dead and the attached >>>> patch (WIP) fixes the problem. This would now ensure that statements are >>>> prepared on datanodes whenever required. However there is a problem in the >>>> way prepared statements are handled. The problem is that unless a prepared >>>> statement is executed it is never prepared on datanodes, hence changing the >>>> path before executing the statement gives us incorrect results. For Example >>>> >>>> create schema s1 create table abc (f1 int) distribute by replication; >>>> create schema s2 create table abc (f1 int) distribute by replication; >>>> >>>> insert into s1.abc values(123); >>>> insert into s2.abc values(456); >>>> set search_path = s2; >>>> prepare p1 as select f1 from abc; >>>> set search_path = s1; >>>> execute p1; >>>> >>>> The last execute results in 123, where as it should have resulted in >>>> 456. >>>> I can finalize the attached patch by fixing any regression issues that >>>> may result and that would fix 3607975 and improve performance however the >>>> above test case would still fail. 
>>>> >>>> >>>>> >>>>> >>>>>> >>>>>>> >>>>>>>> My conclusion is that the bug ID 3607975 is not reproducible. >>>>>>>> >>>>>>>> >>>>>>> Did you verify it under the debugger? If that would have been the >>>>>>> case, we would not have seen this problem if search_path changed in between >>>>>>> steps D and E. >>>>>>> >>>>>> >>>>>> If search path is changed between steps D & E, the problem occurs >>>>>> because when the remote query node is created, schema qualification is not >>>>>> added in the sql statement to be sent to the datanode, but changes in >>>>>> search path do get communicated to the datanode. The sql statement is built >>>>>> when execute is issued for the first time and is reused on subsequent >>>>>> executes. The datanode is totally unaware that the select that it just >>>>>> received is due to an execute of a prepared statement that was prepared >>>>>> when search path was some thing else. >>>>>> >>>>>> >>>>> Fixing the prepared statements the way I suggested, would fix the >>>>> problem, since the statement will get prepared at the datanode, with the >>>>> same search path settings, as it would on the coordinator. >>>>> >>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>>> Comments are welcome. >>>>>>>> >>>>>>>> -- >>>>>>>> *Abbas* >>>>>>>> Architect >>>>>>>> >>>>>>>> Ph: 92.334.5100153 >>>>>>>> Skype ID: gabbasb >>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>> * >>>>>>>> Follow us on Twitter* >>>>>>>> @EnterpriseDB >>>>>>>> >>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>> >>>>>>>> >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>>>>> New Relic is the only SaaS-based application performance monitoring >>>>>>>> service >>>>>>>> that delivers powerful full stack analytics. Optimize and monitor >>>>>>>> your >>>>>>>> browser, app, & servers with just a few lines of code. Try New Relic >>>>>>>> and get this awesome Nerd Life shirt! >>>>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>>>>> _______________________________________________ >>>>>>>> Postgres-xc-developers mailing list >>>>>>>> Pos...@li... 
|
From: Koichi S. <koi...@gm...> - 2013-06-03 01:45:30
|
Okay, I got it: it was a wrong merge.

Regards;
----------
Koichi Suzuki

2013/6/2 Ashutosh Bapat <ash...@en...>
> This line is deleted in PG. While merging, it was not deleted in XC during manual conflict resolution.
|
From: Abbas B. <abb...@en...> - 2013-06-03 00:59:10
|
Hi,
Attached please find a revised patch that removes the variable non_fqs_dml from RemoteQueryState, which is no longer needed. With this small change included, the patch is good to go.

On Thu, May 23, 2013 at 9:41 AM, Abbas Butt <abb...@en...> wrote:
> I will try to spare some time for this over the weekend.
>
> On Thu, May 23, 2013 at 1:09 AM, Ahsan Hadi <ahs...@en...> wrote:
>> Abbas,
>> Can you please review this patch this week?
>>
>> On Tue, May 21, 2013 at 3:55 AM, Amit Khandekar <ami...@en...> wrote:
>>> Currently the number of tuples processed is updated in both HandleCommandComplete and ExecInsert/Update/Delete.
>>>
>>> In HandleCommandComplete() the count comes from the command tag returned by the datanode, i.e. INSERT 0 2, UPDATE 5, and the like, and estate->es_processed is updated from it. But this is done only for FQS. For non-FQS, in ExecInsert/Update, the count is simply incremented by 1. So if a trigger function skips one row on a datanode, the command tag returned from the datanode is INSERT 0 0, but ExecInsert() still increments the row count.
>>>
>>> I have added a new field, RemoteQueryState->rqs_processed, which is updated in HandleCommandComplete(). It is then used in ExecInsert/Update/Delete() for non-FQS, and in RemoteQueryNext() for FQS.
>>>
>>> While fixing this issue, I noticed what seems to be an issue with combiner->command_complete_count. Currently it checks the consistency of the number of tuples returned for replicated tables, but only for FQS. We need to completely remove the dependency on whether it's an FQS or non-FQS DML query, and for that, command_complete_count needs to be handled better. I felt it needs some refactoring, which I did not want to do in this release. Currently this field is updated on each iteration of FetchTuple, by re-using the same combiner for each iteration, whereas it seems it should be updated only once per node execution, not once per tuple fetched. I haven't touched this part, but I added a TODO and opened 3613645.
>>>
>>> Added some test cases to the existing tests xc_trigship and xc_returning.
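Since the affected-row count only travels inside the command tag string, the extraction described above boils down to something like the following standalone sketch (a hypothetical helper, not the actual HandleCommandComplete code):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Pull the row count out of a command tag such as "INSERT 0 2" or
     * "UPDATE 5"; the count is always the last space-separated token.
     */
    static uint64_t
    command_tag_rows(const char *tag)
    {
        const char *sp = strrchr(tag, ' ');

        return sp ? strtoull(sp + 1, NULL, 10) : 0;
    }

    int
    main(void)
    {
        printf("%llu\n", (unsigned long long) command_tag_rows("INSERT 0 2")); /* 2 */
        printf("%llu\n", (unsigned long long) command_tag_rows("UPDATE 5"));   /* 5 */
        return 0;
    }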
-- 
Abbas
Architect, EnterpriseDB
|
From: Ashutosh B. <ash...@en...> - 2013-06-02 09:01:41
|
This line is deleted in PG. While merging, it was not deleted in XC during manual conflict resolution.

On Sun, Jun 2, 2013 at 6:16 AM, Koichi Suzuki <koi...@gm...> wrote:
> Removed line is vanilla PG code part. If we close twice, isn't it better to remove the code in the XC part, if that is where it is?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
From: Michael P. <mic...@gm...> - 2013-06-02 07:59:06
|
On Fri, May 31, 2013 at 10:20 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi All,
> I am curious as to why we don't want PostgreSQL tags in XC. While debugging some merge problems, I found that it would be helpful to have those tags in the XC repository as well.

Because you don't need them, and minimizing the number of tags in the XC-only code makes it more understandable.

If you want the vanilla tags: since the Postgres and XC repositories share 99% of their history, what developers should do when working on PG or XC code is use a single git repository containing two remote definitions, one for vanilla Postgres and one for XC, so that you can create local branches based on the two remotes and do direct comparisons between them for a given object, branch, or tag. If you have the two remotes defined and fetched, the vanilla tags will be included. Having such a development model for a fork is a huge advantage, and the model gets even better thanks to git. Having two remotes pointing to PG and XC is also the method to use when merging code from one remote to the other, through local branches based on the different remotes.
-- 
Michael
|
From: Koichi S. <koi...@gm...> - 2013-06-02 00:46:50
|
The removed line is in the vanilla PG code part. If we close twice, isn't it better to remove the close in the XC part, if that is where it is? At least, we should maintain the #if directive as:

#ifdef PGXC
...
#else
relation_close(...)
#endif

----------
Koichi Suzuki

2013/5/31 Ashutosh Bapat <ash...@en...>
> Hi All,
> If we execute the following set of commands in sequence, we get an assertion failure on the datanode(s) and an "Unexpected EOF" error message on the coordinator:
>
> drop schema alter1, alter2 cascade;
> create schema alter1;
> create schema alter2;
> create table alter1.t1(val int);
> alter table alter1.t1 set schema alter2;
>
> The reason is that we are closing a relation twice, once in AlterTableNamespaceInternal and a second time in AlterTableNamespace. The first call is only in XC and not in PG. Looks like a merge error.
>
> Here's a patch to fix this.
>
> -- 
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
|
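Spelled out as a compilable sketch (the wrapper function and its context are hypothetical; only the directive layout matters):

    #include "postgres.h"
    #include "access/heapam.h"
    #include "storage/lock.h"
    #include "utils/rel.h"

    /*
     * Under PGXC the relation has already been closed inside
     * AlterTableNamespaceInternal(), so a second close would trip the
     * Assert; vanilla PG still owns the close here.
     */
    static void
    finish_alter_table_namespace(Relation rel)
    {
    #ifdef PGXC
        (void) rel;                 /* already closed by the Internal call */
    #else
        relation_close(rel, NoLock);
    #endif
    }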
From: Ashutosh B. <ash...@en...> - 2013-05-31 13:21:03
|
Hi All,
I am curious as to why we don't want PostgreSQL tags in XC. While debugging some merge problems, I found that it would be helpful to have those tags in the XC repository as well, to know where and what happened in the PG repository and relate it to the XC branching and tagging. So, why don't we have PG tags in XC? Can somebody answer this?

On Thu, Feb 7, 2013 at 2:06 PM, Michael Paquier <mic...@gm...> wrote:
> Hi all,
>
> I just noticed that all the vanilla Postgres tags are included in the GIT repository of XC on SourceForge.
> Those tags are not related to XC and have never been there until recently.
> Barring objections, I will clean that up, as there is no point in maintaining more than necessary.
> Thanks,
> -- 
> Michael

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
From: Abbas B. <abb...@en...> - 2013-05-30 06:17:39
|
EXPLAIN throws the same error.

On Thu, May 30, 2013 at 11:14 AM, Ashutosh Bapat <ash...@en...> wrote:
> This isn't a syntax error. It looks to be a problem with WITH support. Somewhere it's not resolving innermost correctly (probably while sending the query to the datanodes). Can you please check the EXPLAIN output? Amit added support for WITH (I guess). Can you please assign it to Amit, if that's correct?

-- 
Abbas
Architect, EnterpriseDB
|
From: Ashutosh B. <ash...@en...> - 2013-05-30 06:15:05
|
On Thu, May 30, 2013 at 2:10 AM, Abbas Butt <abb...@en...> wrote:
> I have compared the expected output changes with the expected output file in PG and found one more case where we needed to exchange command ids, accommodated that case as well, and committed the patch.
>
> I compared the sql files of PG and XC and found that there are some test cases missing in XC; however, for some reason the syntax used in the new statements is not yet supported by XC. For example, on XC we get:
>
> test=# WITH outermost(x) AS (
> test(#   SELECT 1
> test(#   UNION (WITH innermost as (SELECT 2)
> test(#          SELECT * FROM innermost
> test(#          UNION SELECT 3)
> test(# )
> test-# SELECT * FROM outermost;
> ERROR:  relation "innermost" does not exist
> LINE 4:          SELECT * FROM innermost
>                                ^
>
> I have added a bug ID (3614136) in SF to track this issue.

This isn't a syntax error. It looks to be a problem with WITH support. Somewhere it's not resolving innermost correctly (probably while sending the query to the datanodes). Can you please check the EXPLAIN output? Amit added support for WITH (I guess). Can you please assign it to Amit, if that's correct?

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
From: Abbas B. <abb...@en...> - 2013-05-30 06:10:48
|
I have compared the expected output changes with the expected output file in PG and found one more case where we needed to exchange command ids, accommodated that case as well, and committed the patch.

I compared the sql files of PG and XC and found that there are some test cases missing in XC; however, for some reason the syntax used in the new statements is not yet supported by XC. For example, on PG we get:

test=# WITH outermost(x) AS (
test(#   SELECT 1
test(#   UNION (WITH innermost as (SELECT 2)
test(#          SELECT * FROM innermost
test(#          UNION SELECT 3)
test(# )
test-# SELECT * FROM outermost;
 x
---
 1
 2
 3
(3 rows)

whereas on XC we get an error:

test=# WITH outermost(x) AS (
test(#   SELECT 1
test(#   UNION (WITH innermost as (SELECT 2)
test(#          SELECT * FROM innermost
test(#          UNION SELECT 3)
test(# )
test-# SELECT * FROM outermost;
ERROR:  relation "innermost" does not exist
LINE 4:          SELECT * FROM innermost
                               ^

I have added a bug ID (3614136) in SF to track this issue.

On Mon, May 27, 2013 at 7:53 AM, Abbas Butt <abb...@en...> wrote:
> On Fri, May 24, 2013 at 7:54 AM, Ashutosh Bapat <ash...@en...> wrote:
>> On Thu, May 16, 2013 at 2:55 PM, Ashutosh Bapat <ash...@en...> wrote:
>>> Hi Abbas,
>>> Instead of fixing the first issue in pgxc_build_dml_statement(), is it possible to traverse the Query in validate_part_col_updatable() recursively to find UPDATE statements and apply the partition column check? That would cover all the possibilities, I guess. That also saves us much effort in case we come to support distribution column updates.
>>>
>>> I think we need a generic solution to solve this command id issue, e.g. punching the command id always and efficiently. But for now this suffices. Please log a bug/feature and put it in the 1.2 bucket.
>>>
>>> On Wed, May 15, 2013 at 4:57 AM, Abbas Butt <abb...@en...> wrote:
>>>> Hi,
>>>> Attached please find a patch to fix the test case "with".
>>>> There were two issues making the test fail.
>>>> 1. Updates to the partition column were possible using syntax like
>>>>      WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t
>>>>    The patch blocks this syntax.
>>>>
>>>> 2. For a WITH query that updates a table in the main query and inserts a row into the same table in the WITH query, we need to use command ID communication to remote nodes in order to maintain global data visibility.
>>>>    For example:
>>>>      CREATE TEMP TABLE tab (id int, val text) DISTRIBUTE BY REPLICATION;
>>>>      INSERT INTO tab VALUES (1, 'p1');
>>>>      WITH wcte AS (INSERT INTO tab VALUES (42, 'new') RETURNING id AS newid)
>>>>        UPDATE tab SET id = id + newid FROM wcte;
>>>>    The last query gets translated into the following multi-statement transaction on the primary datanode:
>>>>      (a) START TRANSACTION ISOLATION LEVEL read committed READ WRITE
>>>>      (b) INSERT INTO tab (id, val) VALUES ($1, $2) RETURNING id -- (42, 'new')
>>>>      (c) SELECT id, val, ctid FROM ONLY tab WHERE true
>>>>      (d) UPDATE ONLY tab tab SET id = $1 WHERE (tab.ctid = $3) -- (43, (0,1))
>>>>      (e) COMMIT TRANSACTION
>>>>    The command id of the select in step (c) should be such that it does not see the insert of step (b).
>>>>
>>>> Comments are welcome.
>>>
>>> I am also seeing a lot of changes in the expected output where the rows output have changed. What are these changes?
>>>
>>>> These changes are a result of blocking partition column updates and changing the distribution of tables to replication.
>>>
>>> Changing the distribution is acceptable. But are those in sync with the PG expected output? Why did we change the original expected output in the first place?
>>>
>>>> No, in PG the update does not fail; in XC it fails.
>>>> Do you mean that the changes in expected output due to the blocking of partition column updates should only be done in an alternate expected output file?
>>>
>>> Yes, of course.
>>
>> This response can be confusing. If you are talking about changing the table distribution, then that has to be changed everywhere. But I do not understand why we should see that many changes. The original output file must have preserved the correct output, right?
>
> Unfortunately, the results in the original output were incorrect, especially for the cases where it was possible to update the partition column using the WITH syntax.
> The patch fixes two issues:
> 1. Block partition column updates using the WITH syntax.
> 2. Handle a WITH query that updates a table in the main query and inserts a row into the same table in the WITH query.
> Hence there are more changes in the output files.
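The visibility requirement in step (c) fits in a few lines. A standalone, simplified sketch (same transaction only; combo command ids and aborted subtransactions are ignored): a row inserted by command id cmin is visible to a scan carrying curcid only when cmin < curcid, which is why the coordinator must hand step (c) a command id no greater than the one step (b) ran with:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t CommandId;

    /* Same-transaction MVCC visibility rule, simplified. */
    static bool
    own_insert_visible(CommandId cmin, CommandId curcid)
    {
        return cmin < curcid;
    }

    int
    main(void)
    {
        CommandId insert_cid = 1;   /* step (b): the INSERT from the WITH clause */

        /* step (c) scans with curcid == insert_cid: the new row stays hidden */
        printf("%d\n", own_insert_visible(insert_cid, insert_cid));     /* 0 */
        /* a later command id would wrongly see (and then update) the new row */
        printf("%d\n", own_insert_visible(insert_cid, insert_cid + 1)); /* 1 */
        return 0;
    }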
-- 
Abbas
Architect, EnterpriseDB
|
From: Koichi S. <koi...@gm...> - 2013-05-29 10:44:32
|
Yes, I think so. The only thing is that it may bring some difficulty in maintaining the code if we release this connection here.

Best;
----------
Koichi Suzuki

2013/5/29 Andrei Martsinchyk <and...@gm...>
> 2013/5/29 鈴木 幸市 <ko...@in...>
>> I see. I don't feel comfortable closing a GTM connection which has been opened elsewhere. Other than that, the patch looks reasonable.
>
> I am OK with removing the line. In the scenario I am thinking about, the connection will be closed anyway at session end.
|
From: Andrei M. <and...@gm...> - 2013-05-29 10:42:31
|
2013/5/29 鈴木 幸市 <ko...@in...>

> I see. I don't feel comfortable closing a GTM connection which has been
> opened elsewhere. Other than that, the patch looks reasonable.

I am OK with removing the line. In the scenario I am thinking about, the
connection will be closed anyway when the session ends.

> Any further feedback?
> ---
> Koichi Suzuki

--
Andrei Martsinchyk

StormDB - http://www.stormdb.com
The Database Cloud
 |
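A minimal sketch of the behaviour Andrei relies on above, namely that the connection is released when the session ends, assuming a proc-exit style callback of the kind PostgreSQL backends use. GtmConn, CloseGTM, and AtProcExit_GTM are illustrative stand-ins, not the actual XC symbols:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative stand-ins for the real GTM connection state. */
    typedef struct GtmConn { int sock; } GtmConn;
    static GtmConn *current_gtm_conn = NULL;

    static void CloseGTM(GtmConn *conn)
    {
        if (conn == NULL)
            return;
        printf("closing GTM connection on socket %d\n", conn->sock);
        free(conn);
    }

    /* Registered once at backend start; runs when the session exits, so an
     * open GTM connection is released even without an explicit close. */
    static void AtProcExit_GTM(void)
    {
        CloseGTM(current_gtm_conn);
        current_gtm_conn = NULL;
    }

    int main(void)
    {
        current_gtm_conn = malloc(sizeof(GtmConn));
        current_gtm_conn->sock = 42;
        atexit(AtProcExit_GTM);   /* stand-in for on_proc_exit() */
        return 0;                 /* session ends; callback closes it */
    }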
From: 鈴木 幸市 <ko...@in...> - 2013-05-29 02:08:52
|
I see. I don't feel comfortable closing a GTM connection which has been
opened elsewhere. Other than that, the patch looks reasonable.

Any further feedback?
---
Koichi Suzuki

On 2013/05/28, at 17:27, Andrei Martsinchyk <and...@gm...> wrote:

> Correct. However, even if the session lasts and runs another transaction,
> the only problem is the overhead of re-establishing the GTM connection.
 |
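Koichi's reservation about closing a GTM connection that was opened elsewhere is commonly addressed with an ownership flag: close only what you opened. A hedged sketch, not the actual patch; GtmConn and its fields are invented for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct GtmConn {
        int  sock;
        bool opened_here;   /* true only if this code path established it */
    } GtmConn;

    /* Close the connection only if we are the ones who opened it; a
     * connection handed to us from elsewhere is left for its owner. */
    static void MaybeCloseGTM(GtmConn **conn)
    {
        if (*conn == NULL)
            return;
        if ((*conn)->opened_here)
        {
            free(*conn);        /* stand-in for the real disconnect */
            *conn = NULL;
        }
        /* else: leave it open; the opener remains responsible */
    }

With this convention the contested close becomes unconditional only for connections the direct-connection path itself created.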
From: Ashutosh B. <ash...@en...> - 2013-05-28 14:17:22
|
Hi Abbas,
I think the fix is on the right track. There are a couple of improvements
that we need to do here (but you may not do those if time doesn't permit).

1. We should have a status in the RemoteQuery node indicating whether the
query in the node should use the extended protocol, rather than relying on
the presence of a statement name, parameters, etc. Amit has already added a
status to that effect. We need to leverage it. (A sketch of this idea
follows this message.)

On Tue, May 28, 2013 at 9:04 AM, Abbas Butt <abb...@en...> wrote:

> The patch fixes the dead code issue that I described earlier. The code
> was dead because of two issues:
>
> 1. The function CompleteCachedPlan was wrongly setting stmt_name to NULL,
> and this was the main reason ActivateDatanodeStatementOnNode was not being
> called in the function pgxc_start_command_on_connection.
> 2. The function SetRemoteStatementName was wrongly assuming that a
> prepared statement must have some parameters.
>
> Fixing these two issues makes sure that the function
> ActivateDatanodeStatementOnNode is now called and statements get prepared
> on the datanode. This patch would fix bug 3607975. It would, however, not
> fix the test case I described in my previous email, for the reasons I
> described.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
 |
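A sketch of review point 1 above: carry an explicit protocol status on the RemoteQuery node instead of inferring extended-protocol use from the presence of a statement name or parameters. The enum and struct below are illustrative, not the actual XC definitions:

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative subset of a RemoteQuery plan node. */
    typedef enum ExecProtocol {
        PROTO_SIMPLE,     /* plain query: send SQL text, no datanode prepare */
        PROTO_EXTENDED    /* parse/bind/execute: prepare a named statement  */
    } ExecProtocol;

    typedef struct RemoteQuery {
        const char  *sql_statement;
        const char  *statement;     /* datanode statement name, may be NULL */
        int          num_params;
        ExecProtocol protocol;      /* set once at plan time, then trusted  */
    } RemoteQuery;

    /* Decide the protocol when the plan is built, not at execution time. */
    static bool UseExtendedProtocol(const RemoteQuery *rq)
    {
        /* A single explicit status avoids the failure mode where a missing
         * statement name silently downgraded prepared statements to
         * unnamed, re-parsed-every-time statements. */
        return rq->protocol == PROTO_EXTENDED;
    }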
From: Abbas B. <abb...@en...> - 2013-05-28 14:16:44
|
On Tue, May 28, 2013 at 6:04 PM, Abbas Butt <abb...@en...> wrote:

> The patch fixes the dead code issue that I described earlier. The code
> was dead because of two issues:
>
> 1. The function CompleteCachedPlan was wrongly setting stmt_name to NULL,
> and this was the main reason ActivateDatanodeStatementOnNode was not being
> called in the function pgxc_start_command_on_connection.

Some more explanation of this point is as follows:

The function CompleteCachedPlan should not touch stmt_name because it has
already been set to its proper value before the call, by the function
CreateCachedPlan, where we have added another parameter specifically for
this purpose. CompleteCachedPlan was setting stmt_name to NULL because of an
error in merge commit c1dd6cb5fdea86bbddfb471b1da56bb54b604c45.

The purpose of stmt_name in a cached plan is as follows: the name that the
user mentions while issuing a PREPARE is stored in it; later, when the plan
is created from the cached plan, it gets copied to the RemoteQuery node; and
when the time comes to execute the node, it is used to decide whether we
need to prepare the statement on the datanode or not. (A sketch of this flow
follows this message.)

> 2. The function SetRemoteStatementName was wrongly assuming that a
> prepared statement must have some parameters.
>
> Fixing these two issues makes sure that the function
> ActivateDatanodeStatementOnNode is now called and statements get prepared
> on the datanode. This patch would fix bug 3607975.

--
--
*Abbas*
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com <http://www.enterprisedb.com/>

*Follow us on Twitter*
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more
<http://www.enterprisedb.com/resources-community>
 |
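A compressed sketch of the stmt_name flow described above: PREPARE records the name, plan creation copies it into the RemoteQuery node, and execution consults it to decide whether the datanode still needs a prepare. All types and helpers here are illustrative stand-ins for the real CreateCachedPlan/RemoteQuery machinery:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct CachedPlanSource { char stmt_name[64]; } CachedPlanSource;
    typedef struct RemoteQuery      { char statement[64]; } RemoteQuery;

    /* PREPARE p1 AS ...: the user-visible name is recorded once, here.
     * (The fix: CompleteCachedPlan must not overwrite it with NULL later.) */
    static void CreateCachedPlanSketch(CachedPlanSource *ps, const char *name)
    {
        snprintf(ps->stmt_name, sizeof(ps->stmt_name), "%s", name);
    }

    /* Building the executable plan copies the name into the RemoteQuery node. */
    static void BuildRemoteQuerySketch(const CachedPlanSource *ps, RemoteQuery *rq)
    {
        memcpy(rq->statement, ps->stmt_name, sizeof(rq->statement));
    }

    /* At execution, a non-empty name means: make sure the datanode has this
     * statement prepared (ActivateDatanodeStatementOnNode in the real code). */
    static bool NeedDatanodePrepare(const RemoteQuery *rq, bool already_active)
    {
        return rq->statement[0] != '\0' && !already_active;
    }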
From: Abbas B. <abb...@en...> - 2013-05-28 13:04:32
|
The patch fixes the dead code issue that I described earlier. The code was
dead because of two issues:

1. The function CompleteCachedPlan was wrongly setting stmt_name to NULL,
and this was the main reason ActivateDatanodeStatementOnNode was not being
called in the function pgxc_start_command_on_connection.
2. The function SetRemoteStatementName was wrongly assuming that a prepared
statement must have some parameters.

Fixing these two issues makes sure that the function
ActivateDatanodeStatementOnNode is now called and statements get prepared on
the datanode. This patch would fix bug 3607975. It would, however, not fix
the test case I described in my previous email, for the reasons I described.
(A sketch of the second fix follows this message.)

On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat
<ash...@en...> wrote:

> Can you please explain what this fix does? It would help to have an
> elaborate explanation with code snippets.

--
--
*Abbas*
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com <http://www.enterprisedb.com/>

*Follow us on Twitter*
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more
<http://www.enterprisedb.com/resources-community>
 |
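The second fix, sketched under the assumption that a prepared statement may legitimately have zero parameters. The signature and field names are invented; the real SetRemoteStatementName may differ:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct RemoteQuery {
        char statement[64];
        int  rq_num_params;
    } RemoteQuery;

    /* Buggy assumption: "no parameters => not really prepared", which left
     * the statement name unset and the datanode statement unnamed. */
    static void SetRemoteStatementNameSketch(RemoteQuery *rq, const char *name,
                                             int num_params)
    {
        if (name == NULL)
            return;                 /* genuinely unnamed statement */
        /* Fixed behaviour: record the name unconditionally... */
        snprintf(rq->statement, sizeof(rq->statement), "%s", name);
        /* ...and record however many parameters there are, including zero. */
        rq->rq_num_params = num_params;
    }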
From: Ashutosh B. <ash...@en...> - 2013-05-28 12:50:39
|
Can you please explain what this fix does? It would help to have an
elaborate explanation with code snippets. (An illustrative sketch of the
suggested remedy follows this message.)

On Sun, May 26, 2013 at 10:18 PM, Abbas Butt <abb...@en...> wrote:

> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat
> <ash...@en...> wrote:
>
>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt <abb...@en...> wrote:
>>
>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat
>>> <ash...@en...> wrote:
>>>
>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt
>>>> <abb...@en...> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> While working on the plancache test case, it was brought up as a
>>>>> review comment that solving bug id 3607975 should solve the problem
>>>>> of the test case. However, there is some confusion in the statement
>>>>> of bug id 3607975:
>>>>>
>>>>> "When a user does a PREPARE and then EXECUTEs multiple times, the
>>>>> coordinator keeps on preparing and executing the query on the
>>>>> datanode all times, as against preparing once and executing multiple
>>>>> times. This is because somehow the remote query is being prepared as
>>>>> an unnamed statement."
>>>>>
>>>>> Consider this test case:
>>>>>
>>>>> A. create table abc(a int, b int);
>>>>> B. insert into abc values(11, 22);
>>>>> C. prepare p1 as select * from abc;
>>>>> D. execute p1;
>>>>> E. execute p1;
>>>>> F. execute p1;
>>>>>
>>>>> Here are the confusions:
>>>>>
>>>>> 1. The coordinator never prepares on the datanode in response to a
>>>>> PREPARE issued by a user. In fact step C does nothing on the
>>>>> datanodes. Step D simply sends "SELECT a, b FROM abc" to all
>>>>> datanodes.
>>>>>
>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a new
>>>>> generic plan, and steps E and F use the already built generic plan.
>>>>> For details see function GetCachedPlan. This means that executing a
>>>>> prepared statement again and again does use cached plans and does
>>>>> not prepare again and again every time we issue an execute.
>>>>>
>>>> The problem is not here. The problem is in do_query(), where somehow
>>>> the name of the prepared statement gets wiped out and we keep on
>>>> preparing unnamed statements at the datanode.
>>>>
>>> We never prepare any named/unnamed statements on the datanode. I spent
>>> time looking at the code written in do_query and the functions called
>>> from within do_query to handle prepared statements, but the code
>>> written in pgxc_start_command_on_connection to handle statements
>>> prepared on datanodes is dead as of now. It is never called during the
>>> complete regression run. The function ActivateDatanodeStatementOnNode
>>> is never called. The way prepared statements are being handled now is
>>> the same as I described earlier in the mail chain with the help of an
>>> example.
>>> The dead code was originally added by Mason through commit
>>> d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This
>>> code has been changed a lot over the last two years. That commit does
>>> not contain any test cases, so I am not sure how it used to work back
>>> then.
>>>
>> This code wasn't dead when I worked on prepared statements. So
>> something has gone wrong in-between. That's what we need to find out
>> and fix. Not preparing statements on the datanode is not good for
>> performance either.
>>
> I was able to find the reason why the code was dead, and the attached
> patch (WIP) fixes the problem. This would now ensure that statements are
> prepared on datanodes whenever required. However, there is a problem in
> the way prepared statements are handled: unless a prepared statement is
> executed, it is never prepared on the datanodes, hence changing the
> search path before executing the statement gives us incorrect results.
> For example:
>
> create schema s1 create table abc (f1 int) distribute by replication;
> create schema s2 create table abc (f1 int) distribute by replication;
>
> insert into s1.abc values(123);
> insert into s2.abc values(456);
> set search_path = s2;
> prepare p1 as select f1 from abc;
> set search_path = s1;
> execute p1;
>
> The last execute results in 123, whereas it should have resulted in 456.
> I can finalize the attached patch by fixing any regression issues that
> may result, and that would fix 3607975 and improve performance; however,
> the above test case would still fail.
>
>>>>> My conclusion is that bug ID 3607975 is not reproducible.
>>>>>
>>>> Did you verify it under the debugger? Had that been the case, we
>>>> would not have seen this problem when search_path changed between
>>>> steps D and E.
>>>>
>>> If the search path is changed between steps D and E, the problem
>>> occurs because when the remote query node is created, schema
>>> qualification is not added in the SQL statement to be sent to the
>>> datanode, but changes in search path do get communicated to the
>>> datanode. The SQL statement is built when execute is issued for the
>>> first time and is reused on subsequent executes. The datanode is
>>> totally unaware that the select it just received is due to an execute
>>> of a prepared statement that was prepared when the search path was
>>> something else.
>>>
>> Fixing the prepared statements the way I suggested would fix the
>> problem, since the statement will get prepared at the datanode with the
>> same search path settings as it would on the coordinator.
>>
>>>>> Comments are welcome.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
 |
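A sketch of the remedy suggested in the quoted thread: ship the PREPARE to the datanodes at PREPARE time, so each datanode resolves unqualified names under the same search_path the coordinator saw. The connection type and helper are hypothetical stand-ins:

    #include <stdio.h>

    typedef struct DatanodeConn { int node_id; } DatanodeConn;

    /* Hypothetical helper: ship a command to one datanode. */
    static void SendToDatanode(DatanodeConn *conn, const char *cmd)
    {
        printf("node %d <- %s\n", conn->node_id, cmd);
    }

    /* On PREPARE, immediately prepare on every involved datanode. Because
     * this happens before any later SET search_path, the statement is bound
     * to the schema that was in effect at PREPARE time, matching the
     * coordinator's view. */
    static void PrepareOnDatanodes(DatanodeConn *conns, int nconns,
                                   const char *stmt_name, const char *query)
    {
        char cmd[256];
        int  i;

        snprintf(cmd, sizeof(cmd), "PREPARE %s AS %s", stmt_name, query);
        for (i = 0; i < nconns; i++)
            SendToDatanode(&conns[i], cmd);
    }

Under this scheme the s1/s2 test case above returns 456, because the datanode, like the coordinator, resolved abc while search_path was still s2.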
From: Andrei M. <and...@gm...> - 2013-05-28 08:28:04
|
2013/5/28 Koichi Suzuki <koi...@gm...>

> If the background terminates after this, yes, we can disconnect GTM. If
> the background keeps running and disconnects this time, it should be
> given another chance to connect to GTM. I was not sure if the current
> code does this. Here's my analysis:
>
> 1. A datanode can connect to GTM directly only for autovacuum and vacuum
> analyze.
> 2. These processes run as a single transaction, and then quit.
> 3. So we're safe to disconnect at the end of the transaction.
>
> Am I correct?

Correct. However, even if the session lasts and runs another transaction,
the only problem is the overhead of re-establishing the GTM connection.

--
Andrei Martsinchyk

StormDB - http://www.stormdb.com
The Database Cloud
 |
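Andrei's point that reconnection is only an overhead suggests a reconnect-on-demand pattern. A minimal sketch with invented names; the real GTM client API is not shown here:

    #include <stdlib.h>

    typedef struct GtmConn { int sock; } GtmConn;
    static GtmConn *gtm_conn = NULL;

    /* Hypothetical connect routine. */
    static GtmConn *ConnectGTM(void)
    {
        GtmConn *c = malloc(sizeof(GtmConn));
        c->sock = 1;
        return c;
    }

    /* Each transaction asks for the connection here: if the previous
     * transaction closed it, we simply pay the cost of reconnecting. */
    static GtmConn *GetGTMConnection(void)
    {
        if (gtm_conn == NULL)
            gtm_conn = ConnectGTM();
        return gtm_conn;
    }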
From: Koichi S. <koi...@gm...> - 2013-05-28 08:21:09
|
If the background terminates after this, yes, we can disconnect GTM. If the
background keeps running and disconnects this time, it should be given
another chance to connect to GTM. I was not sure if the current code does
this. Here's my analysis:

1. A datanode can connect to GTM directly only for autovacuum and vacuum
analyze.
2. These processes run as a single transaction, and then quit.
3. So we're safe to disconnect at the end of the transaction.

Am I correct?

Regards;

----------
Koichi Suzuki

2013/5/28 Andrei Martsinchyk <and...@gm...>

> That block of code is executed when a client is connected to the datanode
> directly. That does not happen during normal operation; it is done just
> to perform a one-time maintenance or monitoring task. So I thought it is
> better to close the connection. There should be no harm if the connection
> is left open.
 |
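Koichi's three steps reduce to: for vacuum-style backends the transaction boundary is a safe disconnect point. A hedged sketch of an end-of-transaction hook that drops the GTM connection; names are illustrative:

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct GtmConn { int sock; } GtmConn;
    static GtmConn *gtm_conn = NULL;

    static void CloseGTM(GtmConn **c)
    {
        free(*c);
        *c = NULL;
    }

    /* End of an (auto)vacuum transaction: since such processes run a single
     * transaction and then quit, releasing the GTM connection here cannot
     * strand any later work, and it frees the transaction handle on GTM. */
    static void AtEOXact_GTM(bool is_direct_datanode_session)
    {
        if (is_direct_datanode_session && gtm_conn != NULL)
            CloseGTM(&gtm_conn);
    }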
From: Andrei M. <and...@gm...> - 2013-05-28 08:08:42
|
2013/5/28 Koichi Suzuki <koi...@gm...>

> I have a question on the patch. Why do we close the GTM connection here?
> Why shouldn't we keep the connection open?

That block of code is executed when a client is connected to the datanode
directly. That does not happen during normal operation; it is done just to
perform a one-time maintenance or monitoring task. So I thought it is
better to close the connection. There should be no harm if the connection
is left open.

> Regards;
>
> ----------
> Koichi Suzuki
>
> 2013/5/28 Koichi Suzuki <koi...@gm...>
>
>> Thank you Andrei for the patch. I took a glance at it and will review
>> it before commit.
>>
>> Best;
>>
>> ----------
>> Koichi Suzuki
>>
>> 2013/5/27 Andrei Martsinchyk <and...@gm...>
>>
>>> We noticed that transaction handles are not released after direct
>>> connections to datanodes, if they are connecting to GTM through GTM
>>> proxy. So if a datanode is periodically connected directly (e.g. for
>>> monitoring), GTM eventually starts throwing the error "Max transaction
>>> limit reached". Please find the fix attached.
>>>
>>> --
>>> Andrei Martsinchyk
>>>
>>> StormDB - http://www.stormdb.com
>>> The Database Cloud

--
Andrei Martsinchyk

StormDB - http://www.stormdb.com
The Database Cloud
 |
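To make the reported failure concrete: GTM, reached through the proxy, keeps one handle per open transaction, and a direct datanode session that never disconnects leaves its handle allocated. A toy model of the leak, with an invented API and an arbitrarily small limit:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_TXN_HANDLES 4   /* arbitrary small limit for illustration */
    static bool handle_in_use[MAX_TXN_HANDLES];

    static int BeginTxnHandle(void)
    {
        for (int i = 0; i < MAX_TXN_HANDLES; i++)
            if (!handle_in_use[i]) { handle_in_use[i] = true; return i; }
        printf("Max transaction limit reached\n");   /* the reported error */
        return -1;
    }

    static void ReleaseTxnHandle(int h) { handle_in_use[h] = false; }

    int main(void)
    {
        int i, h = -1;

        /* Without the fix, each direct connection allocates a handle and
         * never releases it, so the fifth attempt already fails: */
        for (i = 0; i < 6; i++)
            h = BeginTxnHandle();

        /* The fix closes the GTM connection when the direct session ends,
         * which releases the handle, e.g. ReleaseTxnHandle(h). */
        (void) h;
        return 0;
    }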