From: Ashutosh B. <ash...@en...> - 2013-05-17 09:26:57
This looks good. Are there other ways where we can have an UPDATE statement somewhere in the query tree list? Do we need to worry about such cases?

On Fri, May 17, 2013 at 2:22 PM, Abbas Butt <abb...@en...> wrote:
> On Thu, May 16, 2013 at 2:25 PM, Ashutosh Bapat <ash...@en...> wrote:
>> Hi Abbas,
>> Instead of fixing the first issue in pgxc_build_dml_statement(), is it
>> possible to traverse the Query in validate_part_col_updatable() recursively
>> to find UPDATE statements and apply the partition column check?
>
> Yes. I have attached that patch for your feedback. If you think it is OK
> I can send the updated patch including the rest of the changes.
>
>> I think we need a generic solution to solve this command id issue, e.g.
>> punching the command id always and efficiently. But for now this suffices.
>> Please log a bug/feature and put it in the 1.2 bucket.
>
> Done. (Artifact 3613498
> <https://sourceforge.net/tracker/?func=detail&aid=3613498&group_id=311227&atid=1310235>)
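The command-id requirement discussed in this thread (the SELECT driving the UPDATE must not see the row the CTE's INSERT just added in the same transaction) can be illustrated outside the database. The following is a minimal Python sketch of the visibility rule only; the class and function names are invented for illustration and are not XC or PostgreSQL code.

```python
# Hypothetical sketch of per-command visibility within one transaction:
# a tuple inserted at command id N is visible only to commands whose
# command id is strictly greater than N.

class Tuple:
    def __init__(self, value, inserted_cid):
        self.value = value
        self.inserted_cid = inserted_cid

def scan(table, scan_cid):
    """Return values of tuples visible to a command with id scan_cid."""
    return [t.value for t in table if t.inserted_cid < scan_cid]

# Replaying steps (b) and (c) of the multi-statement transaction above:
table = [Tuple((1, 'p1'), inserted_cid=0)]        # pre-existing row
table.append(Tuple((42, 'new'), inserted_cid=1))  # step (b): the CTE INSERT

# Step (c): the SELECT runs with the same command id as the INSERT,
# so it must NOT see (42, 'new').
assert scan(table, scan_cid=1) == [(1, 'p1')]

# A later command in the same transaction (cid=2) does see it.
assert scan(table, scan_cid=2) == [(1, 'p1'), (42, 'new')]
```

This is why the coordinator has to communicate command ids to the datanodes: without it, the datanode assigns its own command id to step (c) and the freshly inserted row becomes visible too early.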
From: Ashutosh B. <ash...@en...> - 2013-05-17 09:18:23
Is this failing for you? I do not see it failing on my machine, nor on the build farm.

On Fri, May 17, 2013 at 2:01 PM, Abbas Butt <abb...@en...> wrote:
> Hi,
> Attached please find a patch to add some missing ORDER BY clauses in the
> truncate test case.
From: Ashutosh B. <ash...@en...> - 2013-05-17 09:14:35
Hi Abbas,
The changes you have done affect the general query deparsing logic, which is used for dumping views. I don't think we should affect that. So we should devise a way to qualify objects only when deparsing is being done for a RemoteQuery. This might involve a lot of changes, esp. in function definitions.

Diving deeper into the reasons, we have the following two problems which might (I haven't tested these myself, hence this uncertainty; otherwise I would be 100% sure) be causing this issue.

One reason this problem occurs is that we prepare the statements at the datanodes during the first EXECUTE command. So one way to completely solve this problem is to prepare the statements at the datanodes at the time of preparing the statement. This is possible if the target datanodes are known at the time of planning.

The second reason we see this problem is bug 3607975. Solving this bug would solve the regression diff. Can you please attempt it?

Solving reason 2 would be enough to silence the diffs, I guess. Can you please check?

On Fri, May 17, 2013 at 1:59 PM, Abbas Butt <abb...@en...> wrote:
> Hi,
> Attached please find a fix for test case plancache.
> [...]
> The solution was to schema-qualify remote queries, and for that the
> function generate_relation_name is modified to make sure that relations
> are schema qualified independent of the current search path.
From: Ashutosh B. <ash...@en...> - 2013-05-17 08:58:25
On Fri, May 17, 2013 at 2:23 PM, Abbas Butt <abb...@en...> wrote:
> On Thu, May 16, 2013 at 3:13 PM, Ashutosh Bapat <ash...@en...> wrote:
>> Hi Abbas,
>> I am also seeing a lot of changes in the expected output where the rows
>> output have changed. What are these changes?
>
> These changes are a result of blocking partition column updates

Are those in sync with the PG expected output? Why did we change the original expected output in the first place?

> and changing the distribution of tables to replication.

That's acceptable.
From: Abbas B. <abb...@en...> - 2013-05-17 08:53:46
On Thu, May 16, 2013 at 3:13 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> I am also seeing a lot of changes in the expected output where the rows
> output have changed. What are these changes?

These changes are a result of blocking partition column updates and changing the distribution of tables to replication.
From: Abbas B. <abb...@en...> - 2013-05-17 08:52:09
On Thu, May 16, 2013 at 2:25 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> Instead of fixing the first issue in pgxc_build_dml_statement(), is it
> possible to traverse the Query in validate_part_col_updatable() recursively
> to find UPDATE statements and apply the partition column check?

Yes. I have attached that patch for your feedback. If you think it is OK I can send the updated patch including the rest of the changes.

> That would cover all the possibilities, I guess. That also saves us much
> effort in case we come to support distribution column updates.
>
> I think we need a generic solution to solve this command id issue, e.g.
> punching the command id always and efficiently. But for now this suffices.
> Please log a bug/feature and put it in the 1.2 bucket.

Done. (Artifact 3613498
<https://sourceforge.net/tracker/?func=detail&aid=3613498&group_id=311227&atid=1310235>)

> On Wed, May 15, 2013 at 4:57 AM, Abbas Butt <abb...@en...> wrote:
>> Hi,
>> Attached please find a patch to fix the "with" test case.
>> There were two issues making the test fail.
>>
>> 1. Updates to the partition column were possible using syntax like
>>        WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t
>>    The patch blocks this syntax.
>>
>> 2. For a WITH query that updates a table in the main query and inserts a
>>    row into the same table in the WITH query, we need to use command ID
>>    communication to the remote nodes in order to maintain global data
>>    visibility. For example:
>>        CREATE TEMP TABLE tab (id int, val text) DISTRIBUTE BY REPLICATION;
>>        INSERT INTO tab VALUES (1, 'p1');
>>        WITH wcte AS (INSERT INTO tab VALUES (42, 'new') RETURNING id AS newid)
>>          UPDATE tab SET id = id + newid FROM wcte;
>>    The last query gets translated into the following multi-statement
>>    transaction on the primary datanode:
>>        (a) START TRANSACTION ISOLATION LEVEL read committed READ WRITE
>>        (b) INSERT INTO tab (id, val) VALUES ($1, $2) RETURNING id  -- (42, 'new')
>>        (c) SELECT id, val, ctid FROM ONLY tab WHERE true
>>        (d) UPDATE ONLY tab tab SET id = $1 WHERE (tab.ctid = $3)   -- (43, (0,1))
>>        (e) COMMIT TRANSACTION
>>    The command id of the SELECT in step (c) should be such that it does
>>    not see the insert of step (b).
>>
>> Comments are welcome.
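The recursive check proposed above (walking the Query tree from validate_part_col_updatable() so that UPDATEs hidden inside CTEs are caught) can be sketched over a toy query tree. The node layout, field names, and helper below are assumptions for illustration only, not the actual PostgreSQL-XC structures.

```python
# Toy query-tree walk: reject any UPDATE that targets a distribution
# (partition) column, no matter how deeply it is nested in CTEs or
# subqueries. Node layout is invented for illustration.

def check_part_col_updatable(node, dist_cols):
    """Raise ValueError if any nested UPDATE sets a distribution column."""
    if node is None:
        return
    if node.get("command") == "UPDATE":
        table = node.get("table")
        for col in node.get("set_columns", []):
            if col in dist_cols.get(table, ()):
                raise ValueError(
                    f"cannot update distribution column {col!r} of {table!r}")
    # Recurse into CTEs and subqueries, mirroring what a
    # query_tree_walker-style traversal would do in C.
    for child in node.get("ctes", []) + node.get("subqueries", []):
        check_part_col_updatable(child, dist_cols)

dist_cols = {"y": ("a",)}

# WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t
query = {
    "command": "SELECT",
    "ctes": [{"command": "UPDATE", "table": "y", "set_columns": ["a"]}],
    "subqueries": [],
}

try:
    check_part_col_updatable(query, dist_cols)
    blocked = False
except ValueError:
    blocked = True
assert blocked  # the CTE UPDATE on the partition column is caught
```

Doing the check recursively at validation time, rather than in pgxc_build_dml_statement(), covers every place an UPDATE can appear in the query tree with one traversal.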
From: Abbas B. <abb...@en...> - 2013-05-17 08:35:55
Hi,
Attached please find a patch to change the with.sql test case because of schema qualification in remote queries.
From: Abbas B. <abb...@en...> - 2013-05-17 08:31:07
Hi,
Attached please find a patch to add some missing ORDER BY clauses in the truncate test case.
From: Abbas B. <abb...@en...> - 2013-05-17 08:29:14
Hi,
Attached please find a fix for test case plancache. The test was failing because of the following issue:

    create schema s1 create table abc (f1 int) distribute by replication;
    create schema s2 create table abc (f1 int) distribute by replication;
    insert into s1.abc values(123);
    insert into s2.abc values(456);
    set search_path = s1;
    prepare p1 as select f1 from abc;
    set search_path = s2;
    execute p1;

The last EXECUTE must send "select f1 from s1.abc" to the datanode, despite the fact that the current schema has been set to s2.

The solution was to schema-qualify remote queries, and for that the function generate_relation_name is modified to make sure that relations are schema qualified independent of the current search path.

Expected outputs of many test cases are changed, which makes the size and footprint of this patch large.
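The behaviour the fix above relies on, always emitting a schema-qualified name so the deparsed remote query is independent of search_path, can be sketched as a toy Python analogue. This is a hypothetical illustration of the idea, not the C implementation of generate_relation_name().

```python
# A toy deparser showing why remote queries must schema-qualify names:
# the same prepared statement text must keep resolving to the same table
# even after search_path changes. Names and behaviour are illustrative.

def generate_relation_name(schema, relation, search_path, force_qualify):
    """Deparse a relation reference, optionally schema-qualified."""
    if not force_qualify and search_path and search_path[0] == schema:
        return relation            # unqualified: meaning depends on search_path
    return f"{schema}.{relation}"  # qualified: search_path independent

# Statement prepared while search_path = s1, referring to s1.abc:
unqualified = generate_relation_name("s1", "abc", ["s1"], force_qualify=False)
qualified   = generate_relation_name("s1", "abc", ["s1"], force_qualify=True)

assert unqualified == "abc"     # after SET search_path = s2, this names s2.abc
assert qualified == "s1.abc"    # still names the table the statement was planned for
```

With force-qualification, re-executing the prepared remote statement after SET search_path = s2 still touches s1.abc, which is exactly what the plancache test expects.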
From: 鈴木 幸市 <ko...@in...> - 2013-05-17 06:37:50
I understood. Maybe we must revise the COPY manual to document this restriction from the user's point of view.

Regards;
---
Koichi Suzuki

On 2013/05/17, at 14:44, Amit Khandekar <ami...@en...> wrote:

> On 17 May 2013 10:14, 鈴木 幸市 <ko...@in...> wrote:
>> The background of the restriction is apparently a PG restriction. My
>> point is: will this issue not happen in PG?
>
> In PG, when queries are executed through triggers, those are initiated by
> the backend; there is no client in the picture. So there is no need of
> any client-server message exchange for executing triggers while the COPY
> protocol is in progress.
>
> For XC, the triggers are executed from the client (i.e. the coordinator),
> so there are client-server messages to be exchanged while the COPY is in
> progress. In XC, we catch such a scenario in the coordinator itself; we
> block any messages to be sent to the datanode while the COPY is in
> progress.
>
> I am working on allowing statement triggers to be executed from the
> coordinator. That should be possible.
>
> On 2013/05/17, at 13:36, Ashutosh Bapat <ash...@en...> wrote:
>> Ok, in that case, I don't think we have any other way but to convert
>> COPY into INSERT between coordinator and datanode when the triggers are
>> not shippable. I think this restriction applies only to row triggers;
>> statement triggers should be fine.
>>
>> On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar wrote:
>>> On 17 May 2013 09:36, Ashutosh Bapat wrote:
>>>> Is this an XC restriction or a PG restriction?
>>>
>>> The above is a PG restriction. Not accepting any other client messages
>>> during a COPY protocol is a PG backend requirement. Not accepting
>>> trigger queries from the coordinator has become an XC restriction as a
>>> result of that PG protocol restriction.
>>>
>>> On 17 May 2013 09:27, Amit Khandekar wrote:
>>>> On 15 May 2013 12:53, Amit Khandekar wrote:
>>>>> In XC, the way COPY is implemented is that for each record, we read
>>>>> the whole line into memory and then pass it to the datanode as-is.
>>>>> If there are non-shippable default column expressions, we evaluate
>>>>> the default values, convert them into output form, and append them
>>>>> to the data row.
>>>>>
>>>>> In the presence of BR triggers, currently ExecBRInsertTriggers()
>>>>> does not get called because of the way we skip the whole PG code
>>>>> block; instead we just send the data row as-is, optionally appending
>>>>> default values to it.
>>>>>
>>>>> What we need to do is convert the tuple returned by the BR triggers
>>>>> into a text data row, but the text should be in COPY format. This is
>>>>> because we need to send the data row to the datanode using a COPY
>>>>> command, so it requires correct COPY formatting, such as escape
>>>>> sequences.
>>>>>
>>>>> For this, we need to call the function CopyOneRowTo() that is used
>>>>> by COPY TO. This will make sure it emits the data row in COPY
>>>>> format. But we need to create a temporary CopyState because
>>>>> CopyOneRowTo() needs it. We can derive it from the current CopyState
>>>>> that is already created for COPY FROM. Most of the fields remain the
>>>>> same, except that we need to re-assign CopyState->line_buf and
>>>>> CopyState->rowcontext.
>>>>>
>>>>> This will save us from writing code to make sure the new output data
>>>>> row generated by BR triggers complies with the COPY data format.
>>>>>
>>>>> I had already done a similar thing for appending default values to
>>>>> the data row. We call functions like CopyAttributeOutCSV() and
>>>>> CopyInt32() to append the values to the data row in COPY format.
>>>>> There we did not require CopyOneRowTo() because we did not need the
>>>>> complete row; we needed to append only a subset of columns to the
>>>>> existing data row.
>>>>
>>>> I have hit a dead end in the way I am allowing the BR triggers to
>>>> execute during COPY. It is not possible to send any non-COPY messages
>>>> to the backend when the client-server protocol is in COPY mode, which
>>>> means it is not possible to send any commands to the datanode when a
>>>> connection is in DN_CONNECTION_STATE_COPY_IN state. When a trigger
>>>> function executes an SQL query, that query can't be executed, because
>>>> it's not possible to exchange any non-COPY messages, let alone send a
>>>> query to the backend (i.e. the datanode).
>>>>
>>>> This naturally happens only for non-shippable triggers. If triggers
>>>> are executed on the datanode, this issue does not arise. We need to
>>>> devise some other means to support non-shippable triggers for COPY.
>>>> Maybe we would end up sending INSERT commands to the datanode instead
>>>> of a COPY command if there are non-shippable triggers, with each data
>>>> row sent as parameters to the insert query. This operation would be
>>>> slow, but possible.
From: Amit K. <ami...@en...> - 2013-05-17 05:44:49
|
On 17 May 2013 10:14, 鈴木 幸市 <ko...@in...> wrote: > The background of the restriction is apparently a PG restriction. My point > is whether this issue can also happen in PG. > In PG, when queries are executed through triggers, they are initiated by the backend; there is no client in the picture. So there is no need for any client-server message exchange to execute triggers while the COPY protocol is in progress. For XC, the triggers are executed from the client (i.e. the coordinator), so there are client-server messages to be exchanged while the COPY is in progress. In XC, we catch such a scenario in the coordinator itself; we block any messages to be sent to the datanode while the COPY is in progress. I am working on allowing statement triggers to be executed from the coordinator. That should be possible. > > Regards; > --- > Koichi Suzuki > > > > On 2013/05/17, at 13:36, Ashutosh Bapat <ash...@en...> > wrote: > > Ok, in that case, I don't think we have any other way but to convert COPY > into INSERT between the coordinator and datanode when the triggers are not > shippable. I think this restriction applies only to row triggers; > statement triggers should be fine. > > > On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar < > ami...@en...> wrote: > >> >> >> On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...> wrote: >> >>> >>> >>> >>> On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar < >>> ami...@en...> wrote: >>> >>>> >>>> >>>> On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote: >>>> >>>>> In XC, the way COPY is implemented is that for each record, we read >>>>> the whole line into memory, and then pass it to the datanode as-is. If >>>>> there are non-shippable default column expressions, we evaluate the default >>>>> values, convert them into output form, and append them to the data row. 
>>>>> >>>>> In presence of BR triggers, currently the ExecBRInsertTriggers() do >>>>> not get called because of the way we skip the whole PG code block; instead >>>>> we just send the data row as-is, optionally appending default values into >>>>> the data row. >>>>> >>>>> What we need to do is; convert the tuple returned by ExecBRTriggers >>>>> into text data row, but the text data should be in COPY format. This is >>>>> because we need to send the data row to the datanode using COPY command, so >>>>> it requires correct COPY format, such as escape sequences. >>>>> >>>>> For this, we need to call the function CopyOneRowTo() that is being >>>>> used by COPY TO. This will make sure it will emit the data row in the COPY >>>>> format. But we need to create a temporary CopyState because CopyOneRowTo() >>>>> needs it. We can derive it from the current CopyState that is already >>>>> created for COPY FROM. Most of the fields remain the same, except we need >>>>> to re-assign CopyState->line_buf, and CopyState->rowcontext. >>>>> >>>>> This will save us from writing code to make sure the new output data >>>>> row generated by BR triggers complies with COPY data format. >>>>> >>>>> I had already done similar thing for appending default values into the >>>>> data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to >>>>> append the values to the data row in COPY format. There, we did not require >>>>> CopyOneRow() because we did not require the complete row, we needed to >>>>> append only a subset of columns to the existing data row. >>>>> >>>>> Comments/suggestions welcome. >>>>> >>>> >>>> I have hit a dead end in the way I am allowing the BR triggers to >>>> execute during COPY. >>>> >>>> It is not possible to send any non-COPY messages to the backend when >>>> the client-server protocol is in COPY mode. Which means, it is not possible >>>> to send any commands to the datanode when connection is in >>>> DN_CONNECTION_STATE_COPY_IN state. 
When the trigger function executes an >>>> SQL query, that query can't be executed because it's not possible to >>>> exchange any non-copy messages, let alone sending a query to the backend >>>> (i.e. datanode). >>>> >>> >>> Is this an XC restriction or PG restriction? >>> >> >> The above is a PG restriction. >> >> Not accpepting any other client messages during a COPY protocol is a PG >> backend requirement. Not accepting trigger queries from coordinator has >> become an XC restriction as a result of the above PG protocol restriction. >> >> >>> >>>> >>>> This naturally happens only for non-shippable triggers. If triggers are >>>> executed on datanode, then this issue does not arise. >>>> >>>> We need to device some other means to support non-shippable triggers >>>> for COPY. May be we would end up sending INSERT commands on the datanode >>>> instead of COPY command, if there are non-shippable triggers. Each of the >>>> data row will be sent as parameters to the insert query. This operation >>>> would be slow, but possible. >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> AlienVault Unified Security Management (USM) platform delivers complete >>>> security visibility with the essential security capabilities. Easily and >>>> efficiently configure, manage, and operate all of your security controls >>>> from a single console and one unified framework. Download a free trial. >>>> http://p.sf.net/sfu/alienvault_d2d >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... 
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > > http://p.sf.net/sfu/alienvault_d2d_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > |
From: 鈴木 幸市 <ko...@in...> - 2013-05-17 04:44:22
|
The background of the restriction is apparently a PG restriction. My point is whether this issue can also happen in PG. Regards; --- Koichi Suzuki On 2013/05/17, at 13:36, Ashutosh Bapat <ash...@en...> wrote: > Ok, in that case, I don't think we have any other way but to convert COPY into INSERT between the coordinator and datanode when the triggers are not shippable. I think this restriction applies only to row triggers; statement triggers should be fine. > > > On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar <ami...@en...> wrote: > > > On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...> wrote: > > > > On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar <ami...@en...> wrote: > > > On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote: > In XC, the way COPY is implemented is that for each record, we read the whole line into memory, and then pass it to the datanode as-is. If there are non-shippable default column expressions, we evaluate the default values, convert them into output form, and append them to the data row. > > In presence of BR triggers, currently the ExecBRInsertTriggers() does not get called because of the way we skip the whole PG code block; instead we just send the data row as-is, optionally appending default values into the data row. > > What we need to do is: convert the tuple returned by ExecBRTriggers into a text data row, but the text data should be in COPY format. This is because we need to send the data row to the datanode using the COPY command, so it requires the correct COPY format, such as escape sequences. > > For this, we need to call the function CopyOneRowTo() that is used by COPY TO. This will make sure the data row is emitted in the COPY format. But we need to create a temporary CopyState because CopyOneRowTo() needs it. We can derive it from the current CopyState that is already created for COPY FROM. Most of the fields remain the same, except we need to re-assign CopyState->line_buf and CopyState->rowcontext. 
> > This will save us from writing code to make sure the new output data row generated by BR triggers complies with COPY data format. > > I had already done similar thing for appending default values into the data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to append the values to the data row in COPY format. There, we did not require CopyOneRow() because we did not require the complete row, we needed to append only a subset of columns to the existing data row. > > Comments/suggestions welcome. > > I have hit a dead end in the way I am allowing the BR triggers to execute during COPY. > > It is not possible to send any non-COPY messages to the backend when the client-server protocol is in COPY mode. Which means, it is not possible to send any commands to the datanode when connection is in DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an SQL query, that query can't be executed because it's not possible to exchange any non-copy messages, let alone sending a query to the backend (i.e. datanode). > > Is this an XC restriction or PG restriction? > > The above is a PG restriction. > > Not accpepting any other client messages during a COPY protocol is a PG backend requirement. Not accepting trigger queries from coordinator has become an XC restriction as a result of the above PG protocol restriction. > > > > This naturally happens only for non-shippable triggers. If triggers are executed on datanode, then this issue does not arise. > > We need to device some other means to support non-shippable triggers for COPY. May be we would end up sending INSERT commands on the datanode instead of COPY command, if there are non-shippable triggers. Each of the data row will be sent as parameters to the insert query. This operation would be slow, but possible. 
> > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Ashutosh B. <ash...@en...> - 2013-05-17 04:36:30
|
Ok, in that case, I don't think we have any other way but to convert COPY into INSERT between coordinator and datanode when the triggers are not shippable. I think this restriction applies only to the row triggers; statement triggers should be fine. On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar < ami...@en...> wrote: > > > On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...>wrote: > >> >> >> >> On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar < >> ami...@en...> wrote: >> >>> >>> >>> On 15 May 2013 12:53, Amit Khandekar <ami...@en...>wrote: >>> >>>> In XC, the way COPY is implemented is that for each record, we read the >>>> whole line into memory, and then pass it to the datanode as-is. If there >>>> are non-shippable default column expressions, we evaluate the default >>>> values , convert them into output form, and append them to the data row. >>>> >>>> In presence of BR triggers, currently the ExecBRInsertTriggers() do not >>>> get called because of the way we skip the whole PG code block; instead we >>>> just send the data row as-is, optionally appending default values into the >>>> data row. >>>> >>>> What we need to do is; convert the tuple returned by ExecBRTriggers >>>> into text data row, but the text data should be in COPY format. This is >>>> because we need to send the data row to the datanode using COPY command, so >>>> it requires correct COPY format, such as escape sequences. >>>> >>>> For this, we need to call the function CopyOneRowTo() that is being >>>> used by COPY TO. This will make sure it will emit the data row in the COPY >>>> format. But we need to create a temporary CopyState because CopyOneRowTo() >>>> needs it. We can derive it from the current CopyState that is already >>>> created for COPY FROM. Most of the fields remain the same, except we need >>>> to re-assign CopyState->line_buf, and CopyState->rowcontext. 
>>>> >>>> This will save us from writing code to make sure the new output data >>>> row generated by BR triggers complies with COPY data format. >>>> >>>> I had already done similar thing for appending default values into the >>>> data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to >>>> append the values to the data row in COPY format. There, we did not require >>>> CopyOneRow() because we did not require the complete row, we needed to >>>> append only a subset of columns to the existing data row. >>>> >>>> Comments/suggestions welcome. >>>> >>> >>> I have hit a dead end in the way I am allowing the BR triggers to >>> execute during COPY. >>> >>> It is not possible to send any non-COPY messages to the backend when the >>> client-server protocol is in COPY mode. Which means, it is not possible to >>> send any commands to the datanode when connection is in >>> DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an >>> SQL query, that query can't be executed because it's not possible to >>> exchange any non-copy messages, let alone sending a query to the backend >>> (i.e. datanode). >>> >> >> Is this an XC restriction or PG restriction? >> > > The above is a PG restriction. > > Not accpepting any other client messages during a COPY protocol is a PG > backend requirement. Not accepting trigger queries from coordinator has > become an XC restriction as a result of the above PG protocol restriction. > > >> >>> >>> This naturally happens only for non-shippable triggers. If triggers are >>> executed on datanode, then this issue does not arise. >>> >>> We need to device some other means to support non-shippable triggers for >>> COPY. May be we would end up sending INSERT commands on the datanode >>> instead of COPY command, if there are non-shippable triggers. Each of the >>> data row will be sent as parameters to the insert query. This operation >>> would be slow, but possible. 
>>> >>> >>> >>> ------------------------------------------------------------------------------ >>> AlienVault Unified Security Management (USM) platform delivers complete >>> security visibility with the essential security capabilities. Easily and >>> efficiently configure, manage, and operate all of your security controls >>> from a single console and one unified framework. Download a free trial. >>> http://p.sf.net/sfu/alienvault_d2d >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Amit K. <ami...@en...> - 2013-05-17 04:32:25
|
On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...>wrote: > > > > On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar < > ami...@en...> wrote: > >> >> >> On 15 May 2013 12:53, Amit Khandekar <ami...@en...>wrote: >> >>> In XC, the way COPY is implemented is that for each record, we read the >>> whole line into memory, and then pass it to the datanode as-is. If there >>> are non-shippable default column expressions, we evaluate the default >>> values , convert them into output form, and append them to the data row. >>> >>> In presence of BR triggers, currently the ExecBRInsertTriggers() do not >>> get called because of the way we skip the whole PG code block; instead we >>> just send the data row as-is, optionally appending default values into the >>> data row. >>> >>> What we need to do is; convert the tuple returned by ExecBRTriggers into >>> text data row, but the text data should be in COPY format. This is because >>> we need to send the data row to the datanode using COPY command, so it >>> requires correct COPY format, such as escape sequences. >>> >>> For this, we need to call the function CopyOneRowTo() that is being used >>> by COPY TO. This will make sure it will emit the data row in the COPY >>> format. But we need to create a temporary CopyState because CopyOneRowTo() >>> needs it. We can derive it from the current CopyState that is already >>> created for COPY FROM. Most of the fields remain the same, except we need >>> to re-assign CopyState->line_buf, and CopyState->rowcontext. >>> >>> This will save us from writing code to make sure the new output data row >>> generated by BR triggers complies with COPY data format. >>> >>> I had already done similar thing for appending default values into the >>> data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to >>> append the values to the data row in COPY format. 
There, we did not require >>> CopyOneRow() because we did not require the complete row; we needed to >>> append only a subset of columns to the existing data row. >>> >>> Comments/suggestions welcome. >>> >> >> I have hit a dead end in the way I am allowing the BR triggers to execute >> during COPY. >> >> It is not possible to send any non-COPY messages to the backend when the >> client-server protocol is in COPY mode. Which means it is not possible to >> send any commands to the datanode when the connection is in >> DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an >> SQL query, that query can't be executed because it's not possible to >> exchange any non-copy messages, let alone sending a query to the backend >> (i.e. datanode). >> > > Is this an XC restriction or PG restriction? > The above is a PG restriction. Not accepting any other client messages during a COPY protocol is a PG backend requirement. Not accepting trigger queries from the coordinator has become an XC restriction as a result of the above PG protocol restriction. > >> >> This naturally happens only for non-shippable triggers. If triggers are >> executed on the datanode, then this issue does not arise. >> >> We need to devise some other means to support non-shippable triggers for >> COPY. Maybe we would end up sending INSERT commands to the datanode >> instead of a COPY command, if there are non-shippable triggers. Each data >> row will be sent as parameters to the insert query. This operation >> would be slow, but possible. >> >> >> >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. 
>> http://p.sf.net/sfu/alienvault_d2d >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > |
From: 鈴木 幸市 <ko...@in...> - 2013-05-17 04:08:50
|
I'm afraid it's an XC restriction. Regards; --- Koichi Suzuki On 2013/05/17, at 13:06, Ashutosh Bapat <ash...@en...> wrote: > > > > On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar <ami...@en...> wrote: > > > On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote: > In XC, the way COPY is implemented is that for each record, we read the whole line into memory, and then pass it to the datanode as-is. If there are non-shippable default column expressions, we evaluate the default values , convert them into output form, and append them to the data row. > > In presence of BR triggers, currently the ExecBRInsertTriggers() do not get called because of the way we skip the whole PG code block; instead we just send the data row as-is, optionally appending default values into the data row. > > What we need to do is; convert the tuple returned by ExecBRTriggers into text data row, but the text data should be in COPY format. This is because we need to send the data row to the datanode using COPY command, so it requires correct COPY format, such as escape sequences. > > For this, we need to call the function CopyOneRowTo() that is being used by COPY TO. This will make sure it will emit the data row in the COPY format. But we need to create a temporary CopyState because CopyOneRowTo() needs it. We can derive it from the current CopyState that is already created for COPY FROM. Most of the fields remain the same, except we need to re-assign CopyState->line_buf, and CopyState->rowcontext. > > This will save us from writing code to make sure the new output data row generated by BR triggers complies with COPY data format. > > I had already done similar thing for appending default values into the data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to append the values to the data row in COPY format. There, we did not require CopyOneRow() because we did not require the complete row, we needed to append only a subset of columns to the existing data row. 
> > Comments/suggestions welcome. > > I have hit a dead end in the way I am allowing the BR triggers to execute during COPY. > > It is not possible to send any non-COPY messages to the backend when the client-server protocol is in COPY mode. Which means, it is not possible to send any commands to the datanode when connection is in DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an SQL query, that query can't be executed because it's not possible to exchange any non-copy messages, let alone sending a query to the backend (i.e. datanode). > > Is this an XC restriction or PG restriction? > > > This naturally happens only for non-shippable triggers. If triggers are executed on datanode, then this issue does not arise. > > We need to device some other means to support non-shippable triggers for COPY. May be we would end up sending INSERT commands on the datanode instead of COPY command, if there are non-shippable triggers. Each of the data row will be sent as parameters to the insert query. This operation would be slow, but possible. > > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. 
Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Ashutosh B. <ash...@en...> - 2013-05-17 04:06:47
|
On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar < ami...@en...> wrote: > > > On 15 May 2013 12:53, Amit Khandekar <ami...@en...>wrote: > >> In XC, the way COPY is implemented is that for each record, we read the >> whole line into memory, and then pass it to the datanode as-is. If there >> are non-shippable default column expressions, we evaluate the default >> values , convert them into output form, and append them to the data row. >> >> In presence of BR triggers, currently the ExecBRInsertTriggers() do not >> get called because of the way we skip the whole PG code block; instead we >> just send the data row as-is, optionally appending default values into the >> data row. >> >> What we need to do is; convert the tuple returned by ExecBRTriggers into >> text data row, but the text data should be in COPY format. This is because >> we need to send the data row to the datanode using COPY command, so it >> requires correct COPY format, such as escape sequences. >> >> For this, we need to call the function CopyOneRowTo() that is being used >> by COPY TO. This will make sure it will emit the data row in the COPY >> format. But we need to create a temporary CopyState because CopyOneRowTo() >> needs it. We can derive it from the current CopyState that is already >> created for COPY FROM. Most of the fields remain the same, except we need >> to re-assign CopyState->line_buf, and CopyState->rowcontext. >> >> This will save us from writing code to make sure the new output data row >> generated by BR triggers complies with COPY data format. >> >> I had already done similar thing for appending default values into the >> data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to >> append the values to the data row in COPY format. There, we did not require >> CopyOneRow() because we did not require the complete row, we needed to >> append only a subset of columns to the existing data row. >> >> Comments/suggestions welcome. 
>> > > I have hit a dead end in the way I am allowing the BR triggers to execute > during COPY. > > It is not possible to send any non-COPY messages to the backend when the > client-server protocol is in COPY mode. Which means, it is not possible to > send any commands to the datanode when connection is in > DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an > SQL query, that query can't be executed because it's not possible to > exchange any non-copy messages, let alone sending a query to the backend > (i.e. datanode). > Is this an XC restriction or PG restriction? > > This naturally happens only for non-shippable triggers. If triggers are > executed on datanode, then this issue does not arise. > > We need to device some other means to support non-shippable triggers for > COPY. May be we would end up sending INSERT commands on the datanode > instead of COPY command, if there are non-shippable triggers. Each of the > data row will be sent as parameters to the insert query. This operation > would be slow, but possible. > > > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Amit K. <ami...@en...> - 2013-05-17 03:57:51
|
On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote: > In XC, the way COPY is implemented is that for each record, we read the > whole line into memory, and then pass it to the datanode as-is. If there > are non-shippable default column expressions, we evaluate the default > values, convert them into output form, and append them to the data row. > > In presence of BR triggers, currently the ExecBRInsertTriggers() does not > get called because of the way we skip the whole PG code block; instead we > just send the data row as-is, optionally appending default values into the > data row. > > What we need to do is: convert the tuple returned by ExecBRTriggers into > a text data row, but the text data should be in COPY format. This is because > we need to send the data row to the datanode using the COPY command, so it > requires the correct COPY format, such as escape sequences. > > For this, we need to call the function CopyOneRowTo() that is used > by COPY TO. This will make sure the data row is emitted in the COPY > format. But we need to create a temporary CopyState because CopyOneRowTo() > needs it. We can derive it from the current CopyState that is already > created for COPY FROM. Most of the fields remain the same, except we need > to re-assign CopyState->line_buf and CopyState->rowcontext. > > This will save us from writing code to make sure the new output data row > generated by BR triggers complies with the COPY data format. > > I had already done a similar thing for appending default values into the > data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to > append the values to the data row in COPY format. There, we did not require > CopyOneRow() because we did not require the complete row; we needed to > append only a subset of columns to the existing data row. > > Comments/suggestions welcome. > I have hit a dead end in the way I am allowing the BR triggers to execute during COPY. 
It is not possible to send any non-COPY messages to the backend when the client-server protocol is in COPY mode. Which means it is not possible to send any commands to the datanode when the connection is in DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an SQL query, that query can't be executed because it's not possible to exchange any non-copy messages, let alone sending a query to the backend (i.e. datanode). This naturally happens only for non-shippable triggers. If triggers are executed on the datanode, then this issue does not arise. We need to devise some other means to support non-shippable triggers for COPY. Maybe we would end up sending INSERT commands to the datanode instead of a COPY command, if there are non-shippable triggers. Each data row will be sent as parameters to the insert query. This operation would be slow, but possible. |
From: Ashutosh B. <ash...@en...> - 2013-05-16 10:13:54
|
Hi Abbas, I am also seeing a lot of changes in the expected output where the rows output have changed. What are these changes? On Thu, May 16, 2013 at 2:55 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Abbas, > Instead of fixing the first issue in pgxc_build_dml_statement(), is it > possible to traverse the Query in validate_part_col_updatable() recursively > to find UPDATE statements and apply partition column check? That would > cover all the possibilities, I guess. That also saves us much effort in > case we come to support distribution column updation. > > I think, we need a generic solution to solve this command id issue, e.g. > punching command id always and efficiently. But for now this suffices. > Please log a bug/feature and put it in 1.2 bucket. > > > > > On Wed, May 15, 2013 at 5:31 AM, Abbas Butt <abb...@en...>wrote: > >> Adding developers mailing list. >> >> >> On Wed, May 15, 2013 at 4:57 AM, Abbas Butt <abb...@en...>wrote: >> >>> Hi, >>> Attached please find a patch to fix test case with. >>> There were two issues making the test to fail. >>> 1. Updates to partition column were possible using syntax like >>> WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t >>> The patch blocks this syntax. >>> >>> 2. For a WITH query that updates a table in the main query and >>> inserts a row in the same table in the WITH query we need to use >>> command ID communication to remote nodes in order to >>> maintain global data visibility. 
>>> For example >>> CREATE TEMP TABLE tab (id int,val text) DISTRIBUTE BY REPLICATION; >>> INSERT INTO tab VALUES (1,'p1'); >>> WITH wcte AS (INSERT INTO tab VALUES(42,'new') RETURNING id AS >>> newid) >>> UPDATE tab SET id = id + newid FROM wcte; >>> The last query gets translated into the following multi-statement >>> transaction on the primary datanode >>> (a) START TRANSACTION ISOLATION LEVEL read committed READ WRITE >>> (b) INSERT INTO tab (id, val) VALUES ($1, $2) RETURNING id -- >>> (42,'new)' >>> (c) SELECT id, val, ctid FROM ONLY tab WHERE true >>> (d) UPDATE ONLY tab tab SET id = $1 WHERE (tab.ctid = $3) -- >>> (43,(0,1)] >>> (e) COMMIT TRANSACTION >>> The command id of the select in step (c), should be such that >>> it does not see the insert of step (b) >>> >>> Comments are welcome. >>> >>> Regards >>> >>> -- >>> *Abbas* >>> Architect >>> >>> Ph: 92.334.5100153 >>> Skype ID: gabbasb >>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>> * >>> Follow us on Twitter* >>> @EnterpriseDB >>> >>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>> >> >> >> >> -- >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> >> >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. 
>> http://p.sf.net/sfu/alienvault_d2d >> _______________________________________________ >> Postgres-xc-core mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-core >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
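The visibility requirement in the quoted (a)-(e) example — the SELECT in step (c) must not see the INSERT of step (b) — comes down to command-id visibility inside one transaction. The following is a toy Python sketch of the simplified rule being relied on (real MVCC also consults xmin/xmax and the snapshot; this only models the within-transaction cmin check):

```python
def visible_in_same_xact(tuple_cmin, scan_cid):
    """Simplified within-transaction visibility rule: a scan running with
    command id scan_cid sees only tuples whose inserting command id (cmin)
    is strictly less than scan_cid."""
    return tuple_cmin < scan_cid
```

If the coordinator ships step (c) with the same command id as step (b)'s INSERT, the freshly inserted row stays invisible to the scan, which is the behaviour the command-id exchange mechanism has to reproduce on the datanode.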
From: Ashutosh B. <ash...@en...> - 2013-05-16 09:25:18
Hi Abbas, Instead of fixing the first issue in pgxc_build_dml_statement(), is it possible to traverse the Query in validate_part_col_updatable() recursively to find UPDATE statements and apply partition column check? That would cover all the possibilities, I guess. That also saves us much effort in case we come to support distribution column updation. I think, we need a generic solution to solve this command id issue, e.g. punching command id always and efficiently. But for now this suffices. Please log a bug/feature and put it in 1.2 bucket. On Wed, May 15, 2013 at 5:31 AM, Abbas Butt <abb...@en...>wrote: > Adding developers mailing list. > > > On Wed, May 15, 2013 at 4:57 AM, Abbas Butt <abb...@en...>wrote: > >> Hi, >> Attached please find a patch to fix test case with. >> There were two issues making the test to fail. >> 1. Updates to partition column were possible using syntax like >> WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t >> The patch blocks this syntax. >> >> 2. For a WITH query that updates a table in the main query and >> inserts a row in the same table in the WITH query we need to use >> command ID communication to remote nodes in order to >> maintain global data visibility. 
>> For example >> CREATE TEMP TABLE tab (id int,val text) DISTRIBUTE BY REPLICATION; >> INSERT INTO tab VALUES (1,'p1'); >> WITH wcte AS (INSERT INTO tab VALUES(42,'new') RETURNING id AS newid) >> UPDATE tab SET id = id + newid FROM wcte; >> The last query gets translated into the following multi-statement >> transaction on the primary datanode >> (a) START TRANSACTION ISOLATION LEVEL read committed READ WRITE >> (b) INSERT INTO tab (id, val) VALUES ($1, $2) RETURNING id -- >> (42,'new)' >> (c) SELECT id, val, ctid FROM ONLY tab WHERE true >> (d) UPDATE ONLY tab tab SET id = $1 WHERE (tab.ctid = $3) -- >> (43,(0,1)] >> (e) COMMIT TRANSACTION >> The command id of the select in step (c), should be such that >> it does not see the insert of step (b) >> >> Comments are welcome. >> >> Regards >> >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. 
> http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-core mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-core > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Abbas B. <abb...@en...> - 2013-05-16 07:53:05
On Thu, May 16, 2013 at 12:00 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Abbas, > I didn't check every place that needed replacement. But it will be caught > in compilation I guess. I have few comments > 1. Instead of name has_to_save_cmd_id, can we use save_cmd_id or pass > command id? > Sure we can use the name you suggest. > 2. The comment against this variable is still the old, it needs to change, > with the change in the name? > I had modified the comment against the variable to add some more lines in it. I checked that the change is in the patch I sent earlier. Please see if those changes are sufficient. > > Why that member needs to be in Query structure? I think we have discussed > this when we added the member, but I think, at that time, we added it there > to avoid last minute complications. Is there a better place? > We catch the statements requiring command id exchange in transformation stage e.g. the WITH query needing command id exchange is caught in transformUpdateStmt, and this is required in RemoteQuery, so we had decided that a member is Query structure will be required. Do you see some other way of conveying this information from transformUpdateStmt to create_remotequery_plan? > > > On Wed, May 15, 2013 at 5:30 AM, Abbas Butt <abb...@en...>wrote: > >> Adding developers mailing list. >> >> >> On Wed, May 15, 2013 at 4:51 AM, Abbas Butt <abb...@en...>wrote: >> >>> Hi, >>> PFA a patch to change names of a couple of variable to more general >>> names. >>> This patch touches some of the areas of command id exchange mechanism >>> between nodes of the cluster and we can use this thread to discuss whether >>> we need to enable command id exchange in all cases OR not. >>> For reference see the patch 38b2b79 committed by Michael for the rest of >>> the details of the mechanism. 
>>> >>> -- >>> *Abbas* >>> Architect >>> >>> Ph: 92.334.5100153 >>> Skype ID: gabbasb >>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>> * >>> Follow us on Twitter* >>> @EnterpriseDB >>> >>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>> >> >> >> >> -- >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> >> >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> http://p.sf.net/sfu/alienvault_d2d >> _______________________________________________ >> Postgres-xc-core mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-core >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
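The plumbing Abbas describes — the flag is set in transformUpdateStmt and consumed in create_remotequery_plan, with the Query node as the only structure that travels between the two — can be sketched as follows. This is an illustrative Python mock-up, not the C implementation; the class and function names mirror the ones mentioned in the thread but the bodies are invented:

```python
class Query:
    """Stand-in for the parse-tree Query node carrying the debated member."""
    def __init__(self):
        self.has_to_save_cmd_id = False  # set at transform time, read at plan time

def transform_update_stmt(query, has_modifying_cte):
    # Transformation stage: detect e.g. a WITH query that inserts into the
    # table being updated, and mark the Query so the planner knows later.
    if has_modifying_cte:
        query.has_to_save_cmd_id = True
    return query

def create_remotequery_plan(query):
    # Planning stage: the only context available is the Query node itself,
    # so the flag must ride on it to decide whether to ship command ids.
    return {"ship_command_id": query.has_to_save_cmd_id}
```

This is why the member sits on Query: no other structure survives from the transform stage into remote-query planning.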
From: Ashutosh B. <ash...@en...> - 2013-05-16 07:00:47
Hi Abbas, I didn't check every place that needed replacement. But it will be caught in compilation I guess. I have few comments 1. Instead of name has_to_save_cmd_id, can we use save_cmd_id or pass command id? 2. The comment against this variable is still the old, it needs to change, with the change in the name? Why that member needs to be in Query structure? I think we have discussed this when we added the member, but I think, at that time, we added it there to avoid last minute complications. Is there a better place? On Wed, May 15, 2013 at 5:30 AM, Abbas Butt <abb...@en...>wrote: > Adding developers mailing list. > > > On Wed, May 15, 2013 at 4:51 AM, Abbas Butt <abb...@en...>wrote: > >> Hi, >> PFA a patch to change names of a couple of variable to more general names. >> This patch touches some of the areas of command id exchange mechanism >> between nodes of the cluster and we can use this thread to discuss whether >> we need to enable command id exchange in all cases OR not. >> For reference see the patch 38b2b79 committed by Michael for the rest of >> the details of the mechanism. 
>> >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-core mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-core > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Koichi S. <koi...@gm...> - 2013-05-16 05:34:22
Thank you Ashutosh. ---------- Koichi Suzuki 2013/5/15 Ashutosh Bapat <ash...@en...> > I committed this patch with some changes. > > > On Wed, May 15, 2013 at 1:28 PM, 鈴木 幸市 <ko...@in...> wrote: > >> Here's third try. >> >> I just left all the explain with (….) option as is and just added >> (verbose on, num_nodes off, nodes off, costs off) option to only one >> explain statement which simply issued explain verbose. >> >> Hope this patch makes sense. >> --- >> Koichi Suzuki >> >> >> >> On 2013/05/15, at 16:33, 鈴木 幸市 <ko...@in...> wrote: >> >> Verbose appeared in the old files. Should I remove all the "verbose"? >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/05/15, at 14:58, Ashutosh Bapat <ash...@en...> >> wrote: >> >> Hi Suzuki-san, >> There are syntax errors in expected output in your patch. The right way >> to invoke EXPLAIN is >> EXPLAIN (verbose, num_nodes off, nodes off, costs off) <query> >> >> In your patch it is used as >> EXPLAIN (verbose, num_nodes off, nodes off, costs off) verbose <query> >> that's why it's giving syntax error. >> >> Also, you have use cost off instead of cost*s* off. >> >> >> On Wed, May 15, 2013 at 11:01 AM, 鈴木 幸市 <ko...@in...> wrote: >> >>> Here's the second fix. >>> >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> >>> >>> >>> On 2013/05/15, at 13:22, Ashutosh Bapat <ash...@en...> >>> wrote: >>> >>> Again in this case, the right fix is to use the standard EXPLAIN form >>> EXPLAIN (verbose, num_nodes off, nodes off, costs off). >>> >>> What I don't understand is how come these changes appeared suddenly? >>> >>> >>> On Wed, May 15, 2013 at 7:59 AM, Koichi Suzuki < >>> koi...@gm...> wrote: >>> >>>> Hello; >>>> >>>> This failure was caused by the following: >>>> >>>> 1. Now FQS pushes down order by and there're no sort at coordinator. >>>> Current regression did not reflect this. >>>> >>>> 2. Regression test changed the datanode name. Current regression did >>>> not reflect this. >>>> >>>> Enclosed is a fix. 
>>>> >>>> Regards; >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> AlienVault Unified Security Management (USM) platform delivers complete >>>> security visibility with the essential security capabilities. Easily and >>>> efficiently configure, manage, and operate all of your security controls >>>> from a single console and one unified framework. Download a free trial. >>>> http://p.sf.net/sfu/alienvault_d2d >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> ------------------------------------------------------------------------------ >>> AlienVault Unified Security Management (USM) platform delivers complete >>> security visibility with the essential security capabilities. Easily and >>> efficiently configure, manage, and operate all of your security controls >>> from a single console and one unified framework. Download a free trial. >>> >>> http://p.sf.net/sfu/alienvault_d2d_______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> >> >> >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. 
>> >> http://p.sf.net/sfu/alienvault_d2d_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
From: Koichi S. <koi...@gm...> - 2013-05-16 05:29:47
Thanks a lot; ---------- Koichi Suzuki 2013/5/15 Ashutosh Bapat <ash...@en...> > I committed this patch, with slight changes. I tested it on build farm to > make sure that the test passes there as well. > > > On Wed, May 15, 2013 at 2:07 PM, 鈴木 幸市 <ko...@in...> wrote: > >> Sorry for bothering. This time, I only applied num_nodes off only to >> the statement which is shipped to more than one datanode. There was two of >> them. >> >> Hope this makes sense. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/05/15, at 15:04, Ashutosh Bapat <ash...@en...> >> wrote: >> >> Hi Suzuki-san, >> >> >> >> On Wed, May 15, 2013 at 11:15 AM, 鈴木 幸市 <ko...@in...> wrote: >> >>> I see. PFA the revised patch. Some num_node off was missing in >>> explain statements. They're fixed too. >>> >>> >>> >> You need to apply changes only to the queries which are showing diffs >> because of difference in number of nodes. But it seems you have changed all >> the explain commands in the file. That's not needed. >> >> As I said in my previous mail, there are some explain outputs where the >> query is shipped to only one node and the node count in explain output will >> be always 1 for such queries. That doesn't change with the cluster >> configuration. For such queries, it's important that we have num_nodes off. >> >> >>> --- >>> Koichi Suzuki >>> >>> >>> >>> On 2013/05/15, at 13:17, Ashutosh Bapat <ash...@en...> >>> wrote: >>> >>> That's not a correct fix. We should turn off the node count in Explain >>> output. Use EXPLAIN (verbose, num_nodes off, nodes off, costs off) as a >>> general rule. In case of single node reduction tests, we may use num_nodes >>> on, but in that case num_nodes will always be 1, and will not change >>> depending upon cluster configuration. >>> >>> >>> On Wed, May 15, 2013 at 7:56 AM, Koichi Suzuki < >>> koi...@gm...> wrote: >>> >>>> I found the regression failure was due to the change of explain output. 
>>>> Original expected output uses node count = 3, which is not correct >>>> because regression test is configured with only two datanode. Actual >>>> result says node count = 2 which is correct. >>>> >>>> Attached is a patch for the correction. >>>> >>>> Regards; >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> AlienVault Unified Security Management (USM) platform delivers complete >>>> security visibility with the essential security capabilities. Easily and >>>> efficiently configure, manage, and operate all of your security controls >>>> from a single console and one unified framework. Download a free trial. >>>> http://p.sf.net/sfu/alienvault_d2d >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> ------------------------------------------------------------------------------ >>> AlienVault Unified Security Management (USM) platform delivers complete >>> security visibility with the essential security capabilities. Easily and >>> efficiently configure, manage, and operate all of your security controls >>> from a single console and one unified framework. Download a free trial. >>> >>> http://p.sf.net/sfu/alienvault_d2d_______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> >> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > |
From: Ashutosh B. <ash...@en...> - 2013-05-15 12:16:37
I committed this patch with some changes. On Wed, May 15, 2013 at 1:28 PM, 鈴木 幸市 <ko...@in...> wrote: > Here's third try. > > I just left all the explain with (….) option as is and just added (verbose > on, num_nodes off, nodes off, costs off) option to only one explain > statement which simply issued explain verbose. > > Hope this patch makes sense. > --- > Koichi Suzuki > > > > On 2013/05/15, at 16:33, 鈴木 幸市 <ko...@in...> wrote: > > Verbose appeared in the old files. Should I remove all the "verbose"? > > Regards; > --- > Koichi Suzuki > > > > On 2013/05/15, at 14:58, Ashutosh Bapat <ash...@en...> > wrote: > > Hi Suzuki-san, > There are syntax errors in expected output in your patch. The right way to > invoke EXPLAIN is > EXPLAIN (verbose, num_nodes off, nodes off, costs off) <query> > > In your patch it is used as > EXPLAIN (verbose, num_nodes off, nodes off, costs off) verbose <query> > that's why it's giving syntax error. > > Also, you have use cost off instead of cost*s* off. > > > On Wed, May 15, 2013 at 11:01 AM, 鈴木 幸市 <ko...@in...> wrote: > >> Here's the second fix. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> >> On 2013/05/15, at 13:22, Ashutosh Bapat <ash...@en...> >> wrote: >> >> Again in this case, the right fix is to use the standard EXPLAIN form >> EXPLAIN (verbose, num_nodes off, nodes off, costs off). >> >> What I don't understand is how come these changes appeared suddenly? >> >> >> On Wed, May 15, 2013 at 7:59 AM, Koichi Suzuki <koi...@gm... >> > wrote: >> >>> Hello; >>> >>> This failure was caused by the following: >>> >>> 1. Now FQS pushes down order by and there're no sort at coordinator. >>> Current regression did not reflect this. >>> >>> 2. Regression test changed the datanode name. Current regression did >>> not reflect this. >>> >>> Enclosed is a fix. 
>>> >>> Regards; >>> ---------- >>> Koichi Suzuki >>> >>> >>> ------------------------------------------------------------------------------ >>> AlienVault Unified Security Management (USM) platform delivers complete >>> security visibility with the essential security capabilities. Easily and >>> efficiently configure, manage, and operate all of your security controls >>> from a single console and one unified framework. Download a free trial. >>> http://p.sf.net/sfu/alienvault_d2d >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> >> http://p.sf.net/sfu/alienvault_d2d_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. 
> > http://p.sf.net/sfu/alienvault_d2d_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Ashutosh B. <ash...@en...> - 2013-05-15 12:14:47
I committed this patch, with slight changes. I tested it on build farm to make sure that the test passes there as well. On Wed, May 15, 2013 at 2:07 PM, 鈴木 幸市 <ko...@in...> wrote: > Sorry for bothering. This time, I only applied num_nodes off only to > the statement which is shipped to more than one datanode. There was two of > them. > > Hope this makes sense. > > Regards; > --- > Koichi Suzuki > > > > On 2013/05/15, at 15:04, Ashutosh Bapat <ash...@en...> > wrote: > > Hi Suzuki-san, > > > > On Wed, May 15, 2013 at 11:15 AM, 鈴木 幸市 <ko...@in...> wrote: > >> I see. PFA the revised patch. Some num_node off was missing in >> explain statements. They're fixed too. >> >> >> > You need to apply changes only to the queries which are showing diffs > because of difference in number of nodes. But it seems you have changed all > the explain commands in the file. That's not needed. > > As I said in my previous mail, there are some explain outputs where the > query is shipped to only one node and the node count in explain output will > be always 1 for such queries. That doesn't change with the cluster > configuration. For such queries, it's important that we have num_nodes off. > > >> --- >> Koichi Suzuki >> >> >> >> On 2013/05/15, at 13:17, Ashutosh Bapat <ash...@en...> >> wrote: >> >> That's not a correct fix. We should turn off the node count in Explain >> output. Use EXPLAIN (verbose, num_nodes off, nodes off, costs off) as a >> general rule. In case of single node reduction tests, we may use num_nodes >> on, but in that case num_nodes will always be 1, and will not change >> depending upon cluster configuration. >> >> >> On Wed, May 15, 2013 at 7:56 AM, Koichi Suzuki <koi...@gm... >> > wrote: >> >>> I found the regression failure was due to the change of explain output. >>> Original expected output uses node count = 3, which is not correct >>> because regression test is configured with only two datanode. Actual >>> result says node count = 2 which is correct. 
>>> >>> Attached is a patch for the correction. >>> >>> Regards; >>> ---------- >>> Koichi Suzuki >>> >>> >>> ------------------------------------------------------------------------------ >>> AlienVault Unified Security Management (USM) platform delivers complete >>> security visibility with the essential security capabilities. Easily and >>> efficiently configure, manage, and operate all of your security controls >>> from a single console and one unified framework. Download a free trial. >>> http://p.sf.net/sfu/alienvault_d2d >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> >> http://p.sf.net/sfu/alienvault_d2d_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |