From: Koichi S. <koi...@gm...> - 2013-06-25 05:47:09
Yeah. The code is not harmful at all. Removing "return" from void functions could be a good refactoring. Although Solaris is not supported officially yet, I think it's a good idea to have it in master. I do hope Matt continues to test XC so that we can tell XC runs on Solaris. Any more inputs?

Regards;
----------
Koichi Suzuki

2013/6/25 Matt Warner <MW...@xi...>
> I'll double check but I thought I'd only removed return from functions
> declaring void as their return type.
>
> ?
>
> Matt
>
> On Jun 23, 2013, at 6:22 PM, "鈴木 幸市" <ko...@in...> wrote:
>
> The patch looks reasonable. One comment: removing "return" from a non-void
> function will cause a Linux gcc warning. For this case, we need an #ifdef
> SOLARIS directive.
>
> You sent two similar patches for proxy_main.c in separate e-mails. The
> later one seems to resolve my comment above. Although the core team
> cannot declare that XC runs on Solaris so far, I think the patch is
> reasonable to be included.
>
> Any other comments?
> ---
> Koichi Suzuki
>
> On 2013/06/22, at 1:26, Matt Warner <MW...@XI...> wrote:
>
> Regarding the other changes, they are specific to Solaris. For example, in
> src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include sys/filio.h.
> I'll be searching to see if I can find a macro already defined for Solaris
> that I can leverage to #ifdef those Solaris-specific items.
>
> Matt
>
> From: Matt Warner
> Sent: Friday, June 21, 2013 9:21 AM
> To: 'Koichi Suzuki'
> Cc: 'pos...@li...'
> Subject: RE: [Postgres-xc-developers] Minor Fixes
>
> First patch.
>
> From: Matt Warner
> Sent: Friday, June 21, 2013 8:50 AM
> To: 'Koichi Suzuki'
> Cc: pos...@li...
> Subject: RE: [Postgres-xc-developers] Minor Fixes
>
> Yes, I'm running XC on Solaris x64.
>
> From: Koichi Suzuki [mailto:koi...@gm...]
> Sent: Thursday, June 20, 2013 6:34 PM
> To: Matt Warner
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-developers] Minor Fixes
>
> Thanks a lot for the patch. As Michael mentioned, you can send a patch
> to the developers mailing list.
>
> BTW, the core team tested current XC on 64-bit Intel CentOS and others
> tested it against RedHat. Did you test XC on Solaris?
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/6/21 Matt Warner <MW...@xi...>
> Just a quick question about contributing fixes. I've had to make some
> minor changes to get XC compiled on Solaris x64.
> What format would you like to see for the changes? Most are very minor,
> such as removing return statements inside void functions (which the Solaris
> compiler flags as incorrect since you can't return from a void function).
> Matt
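For readers following along, here is a minimal sketch of the two kinds of Solaris-specific changes discussed in this thread. The function name is illustrative only, and __sun (predefined by both gcc and Sun Studio on Solaris) is one candidate for the "macro already defined for Solaris" that Matt mentions; the thread does not say which macro was finally used.

    /* Solaris declares FIONREAD and related ioctl constants in
     * sys/filio.h, so the include has to be guarded on other platforms. */
    #ifdef __sun
    #include <sys/filio.h>
    #endif

    /* Sketch of the return-statement cleanup: a trailing "return;" in a
     * void function is what the Solaris compiler complained about, and
     * deleting it is harmless on every platform. */
    static void
    pool_cleanup_sketch(void)
    {
        /* ... actual work ... */
        return;    /* candidate for removal */
    }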
From: Matt W. <MW...@XI...> - 2013-06-24 15:16:15
I'll double check but I thought I'd only removed return from functions declaring void as their return type.

?

Matt

On Jun 23, 2013, at 6:22 PM, "鈴木 幸市" <ko...@in...> wrote:

The patch looks reasonable. One comment: removing "return" from a non-void function will cause a Linux gcc warning. For this case, we need an #ifdef SOLARIS directive.

You sent two similar patches for proxy_main.c in separate e-mails. The later one seems to resolve my comment above. Although the core team cannot declare that XC runs on Solaris so far, I think the patch is reasonable to be included.

Any other comments?
---
Koichi Suzuki
From: Abbas B. <abb...@en...> - 2013-06-24 12:19:07
Attached please find a revised patch that contains the related test cases. The test cases were earlier submitted as part of the patch for bug id 3608374, but since these two patches might not get into the same release, the test cases are now separated.

Regards

On Fri, Mar 8, 2013 at 2:01 PM, Koichi Suzuki <koi...@gm...> wrote:
> Thanks Abbas for the fix.
> ----------
> Koichi Suzuki
>
> 2013/3/8 Abbas Butt <abb...@en...>:
> > Attached please find patch to fix 3607290.
> >
> > Regression shows no extra failure.
> >
> > Test cases for this have already been submitted in email subject [Patch
> > to fix a crash in COPY TO from a replicated table]
> >
> > --
> > Abbas
> > Architect
> > EnterpriseDB Corporation
> > The Enterprise PostgreSQL Company

--
Abbas
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com
Follow us on Twitter @EnterpriseDB
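A sketch of the shape of the fix under review here; the field and function names below are taken from the names used in this thread and are not verified against the XC source:

    /* Before: arbitrarily keep the first node of a replicated table's
     * node list, which is what the list_truncate() call effectively did. */
    exec_nodes->nodeList = list_truncate(exec_nodes->nodeList, 1);

    /* After: let the locator pick the preferred replica, as Ashutosh
     * suggested, so reads go to the configured preferred node. */
    exec_nodes->nodeList = GetPreferredReplicationNode(exec_nodes->nodeList);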
From: Abbas B. <abb...@en...> - 2013-06-24 11:25:37
Hi,
Attached please find a patch to ignore test case xc_for_update.sql in regression. Feature request id 3614569 tracks this missing support.

--
Abbas
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com
Follow us on Twitter @EnterpriseDB
From: Abbas B. <abb...@en...> - 2013-06-24 11:01:15
Hi,
Thanks for the review. Here is the updated patch that contains the reduced test cases. I have also updated the code comments. The failing statement is already in the comments.

Regards

On Mon, Jun 17, 2013 at 12:36 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> I think the patch for this is in the other thread (11_fix ..). I looked at
> the patch. Here are the comments
> 1. There are just too many tests in the patch, without much difference.
> Please add only the tests which are needed, and also add comments about
> the purpose of the statements. Considering the time at hand, I don't think
> I can review all of the tests, so it would be good if it can be reduced to
> a minimal set.
> 2. The code is fine, but the comment need not have specific details of the
> statement failing. Getting the preferred node is general practice
> everywhere and not just in this portion of the code. By the way, we are
> not getting just the first node from the node list; we try to get the
> preferred node.
>
> On Wed, Mar 27, 2013 at 3:55 PM, Abbas Butt <abb...@en...> wrote:
>> Bug ID 3608374
>>
>> On Fri, Mar 8, 2013 at 12:25 PM, Abbas Butt <abb...@en...> wrote:
>>> Attached please find a revised patch that provides the following in
>>> addition to what it did earlier.
>>>
>>> 1. Uses GetPreferredReplicationNode() instead of list_truncate()
>>> 2. Adds test cases to xc_alter_table and xc_copy.
>>>
>>> I tested the following in reasonable detail to find whether any other
>>> caller of GetRelationNodes() needs some fixing or not, and found that
>>> none of the other callers needs any more fixing.
>>> I tested
>>> a) copy
>>> b) alter table redistribute
>>> c) utilities
>>> d) dmls etc
>>>
>>> However, while testing ALTER TABLE, I found that replicated to hash is
>>> not working correctly.
>>>
>>> This test case fails, since only SIX rows are expected in the final
>>> result.
>>>
>>> test=# create table t_r_n12(a int, b int) distribute by replication to
>>> node (DATA_NODE_1, DATA_NODE_2);
>>> CREATE TABLE
>>> test=# insert into t_r_n12 values(1,777),(3,4),(5,6),(20,30),(NULL,999),
>>> (NULL, 999);
>>> INSERT 0 6
>>> test=# -- rep to hash
>>> test=# ALTER TABLE t_r_n12 distribute by hash(a);
>>> ALTER TABLE
>>> test=# SELECT * FROM t_r_n12 order by 1;
>>>  a  |  b
>>> ----+-----
>>>   1 | 777
>>>   3 |   4
>>>   5 |   6
>>>  20 |  30
>>>     | 999
>>>     | 999
>>>     | 999
>>>     | 999
>>> (8 rows)
>>>
>>> test=# drop table t_r_n12;
>>> DROP TABLE
>>>
>>> I have added a source forge bug tracker id to this case (Artifact
>>> 3607290 <https://sourceforge.net/tracker/?func=detail&aid=3607290&group_id=311227&atid=1310232>).
>>> The reason for this error is that the function distrib_delete_hash does
>>> not take into account that the distribution column can be null. I will
>>> provide a separate fix for that one.
>>> Regression shows no extra failure except that test case xc_alter_table
>>> would fail until 3607290 is fixed.
>>>
>>> Regards
>>>
>>> On Mon, Feb 25, 2013 at 10:18 AM, Ashutosh Bapat
>>> <ash...@en...> wrote:
>>>> Thanks a lot Abbas for this quick fix.
>>>>
>>>> I am sorry, it's caused by my refactoring of GetRelationNodes().
>>>>
>>>> If possible, can you please examine the other callers of
>>>> GetRelationNodes() which would face the problems, esp. the ones for
>>>> DML and utilities. This is another instance where deciding the nodes
>>>> to execute on at the time of execution will help.
>>>>
>>>> About the fix:
>>>> Can you please use GetPreferredReplicationNode() instead of
>>>> list_truncate()? It will pick the preferred node instead of the first
>>>> one. If you find more places where we need this fix, it might be
>>>> better to create a wrapper function and use it at those places.
>>>>
>>>> On Sat, Feb 23, 2013 at 2:59 PM, Abbas Butt
>>>> <abb...@en...> wrote:
>>>>> Hi,
>>>>> PFA a patch to fix a crash when COPY TO is used on a replicated table.
>>>>>
>>>>> This test case produces a crash
>>>>>
>>>>> create table tab_rep(a int, b int) distribute by replication;
>>>>> insert into tab_rep values(1,2), (3,4), (5,6), (7,8);
>>>>> COPY tab_rep (a, b) TO stdout;
>>>>>
>>>>> Here is a description of the problem and the fix.
>>>>> In case of a read from a replicated table GetRelationNodes()
>>>>> returns all nodes and expects that the planner can choose
>>>>> one depending on the rest of the join tree.
>>>>> In case of COPY TO we should choose the first one in the node list.
>>>>> This fixes a system crash and makes pg_dump work fine.
>>>>>
>>>>> --
>>>>> Abbas
>>>>> Architect
>>>>> EnterpriseDB Corporation
>>>>> The Enterprise PostgreSQL Company

--
Abbas
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com
Follow us on Twitter @EnterpriseDB
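A quick way to see the misbehaviour in the ALTER TABLE test case above is XC's xc_node_id system column, which is used elsewhere in this thread; a small illustrative query (a sketch, assuming the t_r_n12 table from the test case still exists):

    -- Show which datanode each row of the redistributed table lives on;
    -- the duplicated NULL-key rows show up with distinct xc_node_id values.
    SELECT xc_node_id, a, b FROM t_r_n12 ORDER BY a, xc_node_id;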
From: Abbas B. <abb...@en...> - 2013-06-24 08:41:11
Thanks for the review. I have tried my best to answer all the questions raised in the review, to see if we can take any advantage of them to reduce the impact of our changes.

--------------------
How are views stored?
--------------------
A view is stored in pg_class with relkind 'v'.
The definition of the view is stored in pg_rewrite. The pg_rewrite catalog table has the following important columns:
* ev_class is a FK to the oid of the view in pg_class
* ev_action stores a string representation of the Query node that defines the view, e.g. it can contain

({QUERY :commandType 1 :querySource 0 :canSetTag true :utilityStmt <> :resultRelation 0 :hasAggs false :hasWindowFuncs false :hasSubLinks false :hasDistinctOn false ..... ..... :rowMarks <> :setOperations <> :constraintDeps <>})

-------------------------------------------
How are view definitions shown to the user?
-------------------------------------------
This query is converted from string representation to node representation and is deparsed to get the query that defines the view. This all happens in the function make_viewdef, which is ultimately called from the function pg_get_viewdef. The function make_viewdef calls get_query_def.

---------------------
How are views dumped?
---------------------
pg_dump uses the catalog view pg_views to dump views. The view pg_views calls pg_get_viewdef to get the view definition. This means that pg_dump deparses the Query node using get_query_def to get the view definition. system_views.sql creates pg_views using

CREATE VIEW pg_views AS
    SELECT
        N.nspname AS schemaname,
        C.relname AS viewname,
        pg_get_userbyid(C.relowner) AS viewowner,
        pg_get_viewdef(C.oid) AS definition
    FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
    WHERE C.relkind = 'v';

------------------------------------------------------------
How does PG decide whether to schema qualify objects or not?
------------------------------------------------------------
Let's take the example of tables. The decision to schema qualify or not happens in the function generate_relation_name. The main logic is within RelationIsVisible, which decides whether an unqualified relation name would be found in the current search path or not. The logic for other objects is similar and happens at different places in the code.

Having answered all these questions, I am not sure how I can use the fact that "the view definition displayed changes with the search path" to any advantage here, except that I can test my changes using this fact. As far as I can see, pg_dump is not using any trick; it uses the same query deparsing logic that is available. Please take a look at the test cases file I uploaded; I have used views and \d+ to verify that the changes I have done do not impact non-remote queries.

Best Regards

On Mon, Jun 24, 2013 at 12:30 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> We are changing a lot of PostgreSQL deparsing code, which would create
> problems in future merges. Since this change is in the query deparsing
> logic, any errors here would affect EXPLAIN, pg_dump etc. So this patch
> should again be the last resort.
>
> Please take a look at how view definitions are dumped. That will give a
> good idea as to how PG schema-qualifies (or not) objects. Here's how the
> view definition displayed changes with the search path. Since the code to
> dump views and display definitions is the same, the view definition dumped
> also changes with the search path. Thus pg_dump must be using some trick
> to always dump a consistent view definition (and hence a deparsed query).
> Thanks Amit for the example.
>
> [...]
>
> We need to leverage a similar mechanism here to reduce the PG footprint.

--
Abbas
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com
Follow us on Twitter @EnterpriseDB
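A minimal, self-contained illustration of the catalog plumbing described above (the object names are illustrative):

    CREATE TABLE demo_t (id int);
    CREATE VIEW v_demo AS SELECT id FROM demo_t;

    -- The defining query lives in pg_rewrite as a rule named '_RETURN',
    -- with ev_class pointing back at the view's pg_class row.
    SELECT rulename, ev_class::regclass
    FROM pg_rewrite
    WHERE ev_class = 'v_demo'::regclass;

    -- pg_get_viewdef() deparses the stored Query node back to SQL; names
    -- are schema-qualified only when they would not be found via the
    -- current search_path (the RelationIsVisible test described above).
    SET search_path = '';
    SELECT pg_get_viewdef('public.v_demo'::regclass);  -- FROM public.demo_t
    RESET search_path;
    SELECT pg_get_viewdef('v_demo'::regclass);         -- FROM demo_t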
From: Ashutosh B. <ash...@en...> - 2013-06-24 07:31:01
Hi Abbas,
We are changing a lot of PostgreSQL deparsing code, which would create problems in future merges. Since this change is in the query deparsing logic, any errors here would affect EXPLAIN, pg_dump etc. So this patch should again be the last resort.

Please take a look at how view definitions are dumped. That will give a good idea as to how PG schema-qualifies (or not) objects. Here's how the view definition displayed changes with the search path. Since the code to dump views and display definitions is the same, the view definition dumped also changes with the search path. Thus pg_dump must be using some trick to always dump a consistent view definition (and hence a deparsed query). Thanks Amit for the example.

create table ttt (id int);

postgres=# create domain dd int;
CREATE DOMAIN
postgres=# create view v2 as select id::dd from ttt;
CREATE VIEW
postgres=# set search_path TO '';
SET

postgres=# \d+ public.v2
             View "public.v2"
 Column |   Type    | Modifiers | Storage | Description
--------+-----------+-----------+---------+-------------
 id     | public.dd |           | plain   |
View definition:
 SELECT ttt.id::public.dd AS id
   FROM public.ttt;

postgres=# set search_path TO default;
SET
postgres=# show search_path;
  search_path
----------------
 "$user",public
(1 row)

postgres=# \d+ public.v2
        View "public.v2"
 Column | Type | Modifiers | Storage | Description
--------+------+-----------+---------+-------------
 id     | dd   |           | plain   |
View definition:
 SELECT ttt.id::dd AS id
   FROM ttt;

We need to leverage a similar mechanism here to reduce the PG footprint.

On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <abb...@en...> wrote:
> Hi,
> As discussed in the last F2F meeting, here is an updated patch that
> provides schema qualification of the following objects: Tables, Views,
> Functions, Types and Domains in case of remote queries.
> Sequence functions are never concerned with datanodes, hence schema
> qualification is not required in case of sequences.
> This solves the plancache test case failure issue and does not introduce
> any more failures.
> I have also attached some tests with results to aid in review.
>
> Comments are welcome.
>
> Regards
>
> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> wrote:
>> Hi,
>> Attached please find a WIP patch that provides the functionality of
>> preparing the statement at the datanodes as soon as it is prepared
>> on the coordinator.
>> This is to take care of a test case in plancache that makes sure that
>> a change of search_path is ignored by replans.
>> While the patch fixes this replan test case and the regression works
>> fine, there are still these two problems I have to take care of.
>>
>> 1. This test case fails
>>
>>    CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
>>    INSERT INTO xc_alter_table_3 VALUES (1, 'a');
>>    PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails
>>
>>    test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
>>                               QUERY PLAN
>>    -------------------------------------------------------------------
>>     Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
>>       Node/s: data_node_1, data_node_2, data_node_3, data_node_4
>>       Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE
>>         ((xc_alter_table_3.ctid = $1) AND (xc_alter_table_3.xc_node_id = $2))
>>       ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"
>>           (cost=0.00..0.00 rows=1000 width=14)
>>             Output: xc_alter_table_3.a, xc_alter_table_3.ctid,
>>               xc_alter_table_3.xc_node_id
>>             Node/s: data_node_3
>>             Remote query: SELECT a, ctid, xc_node_id FROM ONLY
>>               xc_alter_table_3 WHERE (a = 1)
>>    (7 rows)
>>
>>    The reason for the failure is that the select query is selecting 3
>>    items, the first of which is an int, whereas the delete query is
>>    comparing $1 with a ctid.
>>    I am not sure how this works without prepare, but it fails when used
>>    with prepare.
>>
>>    The reason for this planning is this section of code in the function
>>    pgxc_build_dml_statement
>>
>>        else if (cmdtype == CMD_DELETE)
>>        {
>>            /*
>>             * Since there is no data to update, the first param is
>>             * going to be ctid.
>>             */
>>            ctid_param_num = 1;
>>        }
>>
>>    Amit/Ashutosh, can you suggest a fix for this problem?
>>    There are a number of possibilities.
>>    a) The select should not have selected column a.
>>    b) The DELETE should have referred to $2 and $3 for ctid and
>>       xc_node_id respectively.
>>    c) Since the query works without PREPARE, we should make PREPARE
>>       work the same way.
>>
>> 2. This test case in plancache fails.
>>
>>    -- Try it with a view, which isn't directly used in the resulting plan
>>    -- but should trigger invalidation anyway
>>    create table tab33 (a int, b int);
>>    insert into tab33 values(1,2);
>>    CREATE VIEW v_tab33 AS SELECT * FROM tab33;
>>    PREPARE vprep AS SELECT * FROM v_tab33;
>>    EXECUTE vprep;
>>    CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
>>    -- does not cause plan invalidation because views are never created
>>    -- on datanodes
>>    EXECUTE vprep;
>>
>>    and the reason for the failure is that views are never created on the
>>    datanodes, hence plan invalidation is not triggered.
>>    This can be documented as an XC limitation.
>>
>> 3. I still have to add comments in the patch, and some ifdefs may be
>>    missing too.
>>
>> In addition to the patch I have also attached some example Java programs
>> that test some basic functionality through JDBC. I found that these
>> programs are working fine after my patch.
>>
>> 1. Prepared.java: Issues parameterized delete, insert and update through
>>    JDBC. These are un-named prepared statements and work fine.
>> 2. NamedPrepared.java: Issues two named prepared statements through JDBC
>>    and works fine.
>> 3. Retrieve.java: Runs a simple select to verify results.
>> The comments on top of the files explain their usage.
>>
>> Comments are welcome.
>>
>> Thanks
>> Regards
>>
>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat <ash...@en...> wrote:
>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt <abb...@en...> wrote:
>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat <ash...@en...> wrote:
>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt <abb...@en...> wrote:
>>>>>> Attached please find an updated patch to fix the bug. The patch
>>>>>> takes care of the bug and the regression issues resulting from the
>>>>>> changes done in the patch. Please note that the issue in test case
>>>>>> plancache still stands unsolved because of the following test case
>>>>>> (simplified but taken from plancache.sql)
>>>>>>
>>>>>> create schema s1 create table abc (f1 int);
>>>>>> create schema s2 create table abc (f1 int);
>>>>>>
>>>>>> insert into s1.abc values(123);
>>>>>> insert into s2.abc values(456);
>>>>>>
>>>>>> set search_path = s1;
>>>>>>
>>>>>> prepare p1 as select f1 from abc;
>>>>>> execute p1; -- works fine, results in 123
>>>>>>
>>>>>> set search_path = s2;
>>>>>> execute p1; -- works fine after the patch, results in 123
>>>>>>
>>>>>> alter table s1.abc add column f2 float8; -- force replan
>>>>>> execute p1; -- fails
>>>>>
>>>>> Huh! The beast bit us.
>>>>>
>>>>> I think the right solution here is either of two
>>>>> 1. Take your previous patch to always use qualified names (but you
>>>>>    need to improve it not to affect the view dumps)
>>>>> 2. Prepare the statements at the datanode at the time of prepare.
>>>>>
>>>>> Is this test added new in 9.2?
>>>>
>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in
>>>> March 2007.
>>>>
>>>>> Why didn't we see this issue the first time prepare was implemented?
>>>>> I don't remember (but it was two years back).
>>>>
>>>> I was unable to locate the exact reason, but since statements were not
>>>> being prepared on datanodes due to a merge issue, this issue just
>>>> surfaced up.
>>>
>>> Well, even though statements were not getting prepared (actually
>>> prepared statements were not being used again and again) on datanodes,
>>> we never prepared them on a datanode at the time of preparing the
>>> statement. So this bug should have shown itself long back.
>>>
>>>>>> The last execute should result in 123, whereas it results in 456.
>>>>>> The reason is that the search path has already been changed at the
>>>>>> datanode, and a replan would mean select from abc in s2.
>>>>>>
>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat <ash...@en...> wrote:
>>>>>>> Hi Abbas,
>>>>>>> I think the fix is on the right track. There are a couple of
>>>>>>> improvements that we need to do here (but you may not do those if
>>>>>>> the time doesn't permit).
>>>>>>>
>>>>>>> 1. We should have a status in the RemoteQuery node, as to whether
>>>>>>> the query in the node should use the extended protocol or not,
>>>>>>> rather than relying on the presence of statement name and
>>>>>>> parameters etc. Amit has already added a status with that effect.
>>>>>>> We need to leverage it.
>>>>>>>
>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt <abb...@en...> wrote:
>>>>>>>> The patch fixes the dead code issue that I described earlier. The
>>>>>>>> code was dead because of two issues:
>>>>>>>>
>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting stmt_name
>>>>>>>> to NULL, and this was the main reason
>>>>>>>> ActivateDatanodeStatementOnNode was not being called in the
>>>>>>>> function pgxc_start_command_on_connection.
>>>>>>>> 2. The function SetRemoteStatementName was wrongly assuming that a
>>>>>>>> prepared statement must have some parameters.
>>>>>>>>
>>>>>>>> Fixing these two issues makes sure that the function
>>>>>>>> ActivateDatanodeStatementOnNode is now called and statements get
>>>>>>>> prepared on the datanode.
>>>>>>>> This patch would fix bug 3607975. It would however not fix the
>>>>>>>> test case I described in my previous email, for the reasons I
>>>>>>>> described.
>>>>>>>>
>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat <ash...@en...> wrote:
>>>>>>>>> Can you please explain what this fix does? It would help to have
>>>>>>>>> an elaborate explanation with code snippets.
>>>>>>>>>
>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt <abb...@en...> wrote:
>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat <ash...@en...> wrote:
>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt <abb...@en...> wrote:
>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat <ash...@en...> wrote:
>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt <abb...@en...> wrote:
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> While working on test case plancache it was brought up as a
>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve the
>>>>>>>>>>>>>> problem of the test case. However, there is some confusion
>>>>>>>>>>>>>> in the statement of bug id 3607975.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> "When a user does a PREPARE and then EXECUTEs multiple
>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing the
>>>>>>>>>>>>>> query on the datanode all times, as against preparing once
>>>>>>>>>>>>>> and executing multiple times. This is because somehow the
>>>>>>>>>>>>>> remote query is being prepared as an unnamed statement."
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Consider this test case
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> A. create table abc(a int, b int);
>>>>>>>>>>>>>> B. insert into abc values(11, 22);
>>>>>>>>>>>>>> C. prepare p1 as select * from abc;
>>>>>>>>>>>>>> D. execute p1;
>>>>>>>>>>>>>> E. execute p1;
>>>>>>>>>>>>>> F. execute p1;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Here are the confusions
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1. The coordinator never prepares on a datanode in response
>>>>>>>>>>>>>> to a prepare issued by a user. In fact step C does nothing
>>>>>>>>>>>>>> on the datanodes. Step D simply sends "SELECT a, b FROM abc"
>>>>>>>>>>>>>> to all datanodes.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a
>>>>>>>>>>>>>> new generic plan, and steps E and F use the already built
>>>>>>>>>>>>>> generic plan. For details see function GetCachedPlan.
>>>>>>>>>>>>>> This means that executing a prepared statement again and
>>>>>>>>>>>>>> again does use cached plans and does not prepare again and
>>>>>>>>>>>>>> again every time we issue an execute.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The problem is not here. The problem is in do_query() where
>>>>>>>>>>>>> somehow the name of the prepared statement gets wiped out and
>>>>>>>>>>>>> we keep on preparing unnamed statements at the datanode.
>>>>>>>>>>>>
>>>>>>>>>>>> We never prepare any named/unnamed statements on the datanode.
>>>>>>>>>>>> I spent time looking at the code written in do_query and the
>>>>>>>>>>>> functions called from within do_query to handle prepared
>>>>>>>>>>>> statements, but the code written in
>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements prepared
>>>>>>>>>>>> on datanodes is dead as of now. It is never called during the
>>>>>>>>>>>> complete regression run. The function
>>>>>>>>>>>> ActivateDatanodeStatementOnNode is never called. The way
>>>>>>>>>>>> prepared statements are being handled now is the same as I
>>>>>>>>>>>> described earlier in the mail chain with the help of an
>>>>>>>>>>>> example.
>>>>>>>>>>>> The code that is dead was originally added by Mason through
>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in
>>>>>>>>>>>> December 2010. This code has been changed a lot over the last
>>>>>>>>>>>> two years. This commit does not contain any test cases, so I
>>>>>>>>>>>> am not sure how it used to work back then.
>>>>>>>>>>>
>>>>>>>>>>> This code wasn't dead when I worked on prepared statements. So,
>>>>>>>>>>> something has gone wrong in-between. That's what we need to
>>>>>>>>>>> find out and fix. Not preparing statements on the datanode is
>>>>>>>>>>> not good for performance either.
>>>>>>>>>>
>>>>>>>>>> I was able to find the reason why the code was dead, and the
>>>>>>>>>> attached patch (WIP) fixes the problem. This would now ensure
>>>>>>>>>> that statements are prepared on datanodes whenever required.
>>>>>>>>>> However, there is a problem in the way prepared statements are
>>>>>>>>>> handled. The problem is that unless a prepared statement is
>>>>>>>>>> executed it is never prepared on datanodes, hence changing the
>>>>>>>>>> path before executing the statement gives us incorrect results.
>>>>>>>>>> For example
>>>>>>>>>>
>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by replication;
>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by replication;
>>>>>>>>>>
>>>>>>>>>> insert into s1.abc values(123);
>>>>>>>>>> insert into s2.abc values(456);
>>>>>>>>>> set search_path = s2;
>>>>>>>>>> prepare p1 as select f1 from abc;
>>>>>>>>>> set search_path = s1;
>>>>>>>>>> execute p1;
>>>>>>>>>>
>>>>>>>>>> The last execute results in 123, whereas it should have resulted
>>>>>>>>>> in 456.
>>>>>>>>>> I can finalize the attached patch by fixing any regression
>>>>>>>>>> issues that may result, and that would fix 3607975 and improve
>>>>>>>>>> performance; however, the above test case would still fail.
>>>>>>>>>>
>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not reproducible.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Did you verify it under the debugger? If that would have been
>>>>>>>>>>>>> the case, we would not have seen this problem if search_path
>>>>>>>>>>>>> changed in between steps D and E.
>>>>>>>>>>>>
>>>>>>>>>>>> If the search path is changed between steps D & E, the problem
>>>>>>>>>>>> occurs because when the remote query node is created, schema
>>>>>>>>>>>> qualification is not added in the sql statement to be sent to
>>>>>>>>>>>> the datanode, but changes in search path do get communicated
>>>>>>>>>>>> to the datanode. The sql statement is built when execute is
>>>>>>>>>>>> issued for the first time and is reused on subsequent
>>>>>>>>>>>> executes. The datanode is totally unaware that the select that
>>>>>>>>>>>> it just received is due to an execute of a prepared statement
>>>>>>>>>>>> that was prepared when the search path was something else.
>>>>>>>>>>>
>>>>>>>>>>> Fixing the prepared statements the way I suggested would fix
>>>>>>>>>>> the problem, since the statement will get prepared at the
>>>>>>>>>>> datanode with the same search path settings as it would on the
>>>>>>>>>>> coordinator.
>>>>>>>>>>>
>>>>>>>>>>>>>> Comments are welcome.
>>>>>>>>>>>>>> >>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> Best Wishes, >>>>>>>>>>>>> Ashutosh Bapat >>>>>>>>>>>>> EntepriseDB Corporation >>>>>>>>>>>>> The Postgres Database Company >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> -- >>>>>>>>>>>> *Abbas* >>>>>>>>>>>> Architect >>>>>>>>>>>> >>>>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>>>> Skype ID: gabbasb >>>>>>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>>>>>> * >>>>>>>>>>>> Follow us on Twitter* >>>>>>>>>>>> @EnterpriseDB >>>>>>>>>>>> >>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Best Wishes, >>>>>>>>>>> Ashutosh Bapat >>>>>>>>>>> EntepriseDB Corporation >>>>>>>>>>> The Postgres Database Company >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> -- >>>>>>>>>> *Abbas* >>>>>>>>>> Architect >>>>>>>>>> >>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>> Skype ID: gabbasb >>>>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>>>> * >>>>>>>>>> Follow us on Twitter* >>>>>>>>>> @EnterpriseDB >>>>>>>>>> >>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Best Wishes, >>>>>>>>> Ashutosh Bapat >>>>>>>>> EntepriseDB Corporation >>>>>>>>> The Postgres Database Company >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> -- >>>>>>>> *Abbas* >>>>>>>> Architect >>>>>>>> >>>>>>>> Ph: 92.334.5100153 >>>>>>>> Skype ID: gabbasb >>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>> * >>>>>>>> Follow us on Twitter* >>>>>>>> @EnterpriseDB >>>>>>>> >>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Best Wishes, >>>>>>> Ashutosh Bapat >>>>>>> EntepriseDB Corporation >>>>>>> The Postgres Database Company >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> -- >>>>>> *Abbas* >>>>>> Architect >>>>>> >>>>>> Ph: 92.334.5100153 >>>>>> Skype ID: gabbasb >>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>> * >>>>>> Follow us on Twitter* >>>>>> @EnterpriseDB >>>>>> >>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Postgres Database Company >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> *Abbas* >>>> Architect >>>> >>>> Ph: 92.334.5100153 >>>> Skype ID: gabbasb >>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>> * >>>> Follow us on Twitter* >>>> @EnterpriseDB >>>> >>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation 
>>> The Postgres Database Company >>> >> >> >> >> -- >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
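The second option in the thread above (preparing the statement at the datanode at PREPARE time) can be made concrete with a short sketch. The session flow below is illustrative only, assuming a coordinator that forwards the PREPARE itself instead of deparsing at first EXECUTE; it is not the actual XC implementation:

    -- The datanode parses the statement under the same search_path the
    -- coordinator had at PREPARE time, so the table reference is captured.
    SET search_path = s1;
    PREPARE p1 AS SELECT f1 FROM abc;   -- datanode resolves abc to s1.abc here

    SET search_path = s2;               -- later changes no longer matter:
    EXECUTE p1;                         -- still reads s1.abc, returning 123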
From: Ashutosh B. <ash...@en...> - 2013-06-24 04:02:32
|
While we are in the last run of beta and GA, I don't think we should include any patches which are not blocking bug fixes or regression fixes. This is especially applicable if we are not declaring XC to be available on Solaris. Any kind of porting needs to be a separate project, done before the beta test cycles. I think it's a candidate for 1.2, for which development will start by mid or end of July.

On Mon, Jun 24, 2013 at 6:52 AM, 鈴木 幸市 <ko...@in...> wrote:

> The patch looks reasonable. One comment: removing "return" for a non-void
> function will cause a Linux gcc warning. For this case, we need an #ifdef
> SOLARIS directive.
>
> You sent two similar patches for proxy_main.c in separate e-mails. The
> later one seems to resolve my comment above. Although the core team
> cannot declare that XC runs on Solaris so far, I think the patch is
> reasonable to be included.
>
> Any other comments?
> ---
> Koichi Suzuki
>
> [...]

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
From: Abbas B. <abb...@en...> - 2013-06-24 03:00:34
|
Hi,
Attached please find a patch to add some missing ORDER BY clauses in the triggers and truncate test cases.

--
Abbas
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

Follow us on Twitter
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more
<http://www.enterprisedb.com/resources-community>
|
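The reason such ORDER BY clauses matter in XC is that a query against a distributed table is shipped to several datanodes, and the rows come back in whatever order the nodes happen to reply, so unordered SELECT output in a regression test is nondeterministic. A minimal sketch with a hypothetical test table (not the actual patch contents):

    -- Unstable: row order depends on which datanode answers first.
    SELECT * FROM t_dist;

    -- Stable: an explicit sort makes the expected output deterministic.
    SELECT * FROM t_dist ORDER BY 1;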
From: Abbas B. <abb...@en...> - 2013-06-24 02:57:34
|
Forgot to mention that this does NOT solve the bug ID in the subject line.

On Mon, Jun 24, 2013 at 7:42 AM, Abbas Butt <abb...@en...> wrote:

> Hi,
> As discussed in the last F2F meeting, here is an updated patch that
> provides schema qualification of the following objects: Tables, Views,
> Functions, Types and Domains in case of remote queries.
> Sequence functions are never concerned with datanodes; hence, schema
> qualification is not required in the case of sequences.
> This solves the plancache test case failure issue and does not introduce
> any more failures.
> I have also attached some tests with results to aid in review.
>
> Comments are welcome.
>
> Regards
>
> [...]

--
Abbas
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

Follow us on Twitter
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more
<http://www.enterprisedb.com/resources-community>
|
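As a sketch of what the schema qualification in the quoted patch buys, consider the earlier s1/s2 example; the deparsed strings below are illustrative, not the patch's exact output:

    -- Unqualified remote query: the datanode resolves abc through its own
    -- current search_path, which may have changed since PREPARE.
    SELECT f1 FROM abc

    -- Qualified remote query: the table is pinned down at deparse time, so
    -- later search_path changes on the datanode cannot redirect it.
    SELECT f1 FROM s1.abc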
From: Abbas B. <abb...@en...> - 2013-06-24 02:43:00
|
Hi,
As discussed in the last F2F meeting, here is an updated patch that provides schema qualification of the following objects: Tables, Views, Functions, Types and Domains in case of remote queries.
Sequence functions are never concerned with datanodes; hence, schema qualification is not required in the case of sequences.
This solves the plancache test case failure issue and does not introduce any more failures.
I have also attached some tests with results to aid in review.

Comments are welcome.

Regards

On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> wrote:

> Hi,
> Attached please find a WIP patch that provides the functionality of
> preparing the statement at the datanodes as soon as it is prepared
> on the coordinator.
> This is to take care of a test case in plancache that makes sure that a
> change of search_path is ignored by replans.
> While the patch fixes this replan test case and the regression works
> fine, there are still these two problems I have to take care of.
>
> 1. This test case fails:
>
>    CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
>    INSERT INTO xc_alter_table_3 VALUES (1, 'a');
>    PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails
>
>    test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
>                               QUERY PLAN
>    -------------------------------------------------------------------
>     Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
>       Node/s: data_node_1, data_node_2, data_node_3, data_node_4
>       Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE
>                     ((xc_alter_table_3.ctid = $1) AND
>                      (xc_alter_table_3.xc_node_id = $2))
>       ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"
>           (cost=0.00..0.00 rows=1000 width=14)
>             Output: xc_alter_table_3.a, xc_alter_table_3.ctid,
>                     xc_alter_table_3.xc_node_id
>             Node/s: data_node_3
>             Remote query: SELECT a, ctid, xc_node_id FROM ONLY
>                           xc_alter_table_3 WHERE (a = 1)
>    (7 rows)
>
>    The reason for the failure is that the select query is selecting 3
>    items, the first of which is an int, whereas the delete query is
>    comparing $1 with a ctid.
>    I am not sure how this works without prepare, but it fails when used
>    with prepare.
>
>    The reason for this planning is this section of code in the function
>    pgxc_build_dml_statement:
>
>    else if (cmdtype == CMD_DELETE)
>    {
>        /*
>         * Since there is no data to update, the first param is going to be
>         * ctid.
>         */
>        ctid_param_num = 1;
>    }
>
>    Amit/Ashutosh, can you suggest a fix for this problem?
>    There are a number of possibilities.
>    a) The select should not have selected column a.
>    b) The DELETE should have referred to $2 and $3 for ctid and
>       xc_node_id respectively.
>    c) Since the query works without PREPARE, we should make PREPARE work
>       the same way.
>
> 2. This test case in plancache fails:
>
>    -- Try it with a view, which isn't directly used in the resulting plan
>    -- but should trigger invalidation anyway
>    create table tab33 (a int, b int);
>    insert into tab33 values(1,2);
>    CREATE VIEW v_tab33 AS SELECT * FROM tab33;
>    PREPARE vprep AS SELECT * FROM v_tab33;
>    EXECUTE vprep;
>    CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
>    -- does not cause plan invalidation because views are never created
>    -- on datanodes
>    EXECUTE vprep;
>
>    The reason for the failure is that views are never created on the
>    datanodes, hence plan invalidation is not triggered.
>    This can be documented as an XC limitation.
>
> 3. I still have to add comments in the patch, and some ifdefs may be
>    missing too.
>
> In addition to the patch I have also attached some example Java programs
> that test some basic functionality through JDBC. I found that these
> programs are working fine after my patch.
>
> 1. Prepared.java: Issues parameterized delete, insert and update through
>    JDBC. These are un-named prepared statements and work fine.
> 2. NamedPrepared.java: Issues two named prepared statements through JDBC
>    and works fine.
> 3. Retrieve.java: Runs a simple select to verify results.
>
> The comments on top of the files explain their usage.
>
> Comments are welcome.
>
> Thanks
> Regards
>
> [...]

--
Abbas
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

Follow us on Twitter
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more
<http://www.enterprisedb.com/resources-community>
|
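For option (b) in the quoted mail, a sketch of the parameter numbering that would make the two halves of the plan agree. The inner scan ships three columns (a, ctid, xc_node_id), which arrive at the remote DELETE as $1, $2 and $3, so the DELETE should bind ctid and xc_node_id to the second and third parameters. The statement below is illustrative, not the actual planner output after a fix:

    DELETE FROM ONLY xc_alter_table_3
     WHERE ((xc_alter_table_3.ctid = $2) AND
            (xc_alter_table_3.xc_node_id = $3));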
From: 鈴木 幸市 <ko...@in...> - 2013-06-24 01:23:05
|
The patch looks reasonable. One comment: removing "return" for a non-void function will cause a Linux gcc warning. For this case, we need an #ifdef SOLARIS directive.

You sent two similar patches for proxy_main.c in separate e-mails. The later one seems to resolve my comment above. Although the core team cannot declare that XC runs on Solaris so far, I think the patch is reasonable to be included.

Any other comments?
---
Koichi Suzuki

On 2013/06/22, at 1:26, Matt Warner <MW...@XI...> wrote:

> Regarding the other changes, they are specific to Solaris. For example,
> in src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include
> sys/filio.h. I'll be searching to see if I can find a macro already
> defined for Solaris that I can leverage to #ifdef those Solaris-specific
> items.
>
> Matt
>
> [...]
|
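For readers following the Solaris discussion, a minimal C sketch of the kind of construct at issue, with hypothetical function names. GCC accepts returning a void expression from a void function (it only warns under -pedantic), while the Solaris Studio compiler rejects it; and since simply deleting a return from a non-void code path would draw a gcc warning, a guard like the SOLARIS directive mentioned above is one way out:

    #include <stdio.h>

    static void
    log_shutdown(void)
    {
        fprintf(stderr, "shutting down\n");
    }

    static void
    shutdown_handler(void)
    {
    #ifdef SOLARIS
        /* Solaris Studio rejects "return expr;" when expr has void type,
         * so call and return separately. */
        log_shutdown();
        return;
    #else
        return log_shutdown();  /* accepted by gcc */
    #endif
    }

    int
    main(void)
    {
        shutdown_handler();
        return 0;
    }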
From: Matt W. <MW...@XI...> - 2013-06-21 16:27:12
|
Regarding the other changes, they are specific to Solaris. For example,
in src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include
sys/filio.h. I'll be searching to see if I can find a macro already
defined for Solaris that I can leverage to #ifdef those
Solaris-specific items.

Matt

From: Matt Warner
Sent: Friday, June 21, 2013 9:21 AM
To: 'Koichi Suzuki'
Cc: 'pos...@li...'
Subject: RE: [Postgres-xc-developers] Minor Fixes

First patch.

From: Matt Warner
Sent: Friday, June 21, 2013 8:50 AM
To: 'Koichi Suzuki'
Cc: pos...@li...
Subject: RE: [Postgres-xc-developers] Minor Fixes

Yes, I'm running XC on Solaris x64.
|
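A minimal sketch of the platform guard under discussion, assuming the
__sun macro (commonly predefined by compilers on Solaris) in place of
whatever project symbol the thread eventually settles on; the
bytes_pending() helper is hypothetical and exists only to show why
sys/filio.h matters:

--------
/*
 * Sketch only: __sun is an assumed platform macro, not a symbol the
 * thread has settled on.  On Solaris, FIONREAD is declared in
 * <sys/filio.h> rather than <sys/ioctl.h>.
 */
#ifdef __sun
#include <sys/filio.h>
#endif
#include <sys/ioctl.h>

/* Hypothetical helper: report how many bytes can be read from a
 * socket without blocking, the kind of call that needs FIONREAD. */
static int
bytes_pending(int sock)
{
    int pending = 0;

    if (ioctl(sock, FIONREAD, &pending) < 0)
        return -1;
    return pending;
}
--------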
From: Matt W. <MW...@XI...> - 2013-06-21 15:50:42
|
Yes, I'm running XC on Solaris x64.

From: Koichi Suzuki [mailto:koi...@gm...]
Sent: Thursday, June 20, 2013 6:34 PM
To: Matt Warner
Cc: pos...@li...
Subject: Re: [Postgres-xc-developers] Minor Fixes

Thanks a lot for the patch. As Michael mentioned, you can send a patch
to the developers mailing list.

BTW, the core team tested current XC on 64-bit Intel CentOS and others
tested it against RedHat. Did you test XC on Solaris?

Regards;
----------
Koichi Suzuki
|
From: Koichi S. <koi...@gm...> - 2013-06-21 01:33:40
|
Thanks a lot for the patch. As Michael mentioned, you can send a patch
to the developers mailing list.

BTW, the core team tested current XC on 64-bit Intel CentOS and others
tested it against RedHat. Did you test XC on Solaris?

Regards;
----------
Koichi Suzuki

2013/6/21 Matt Warner <MW...@xi...>

> Just a quick question about contributing fixes. I've had to make some
> minor changes to get XC compiled on Solaris x64.
>
> What format would you like to see for the changes? Most are very
> minor, such as removing return statements inside void functions
> (which the Solaris compiler flags as incorrect since you can't return
> from a void function).
>
> Matt
|
From: Michael P. <mic...@gm...> - 2013-06-21 01:25:53
|
On Fri, Jun 21, 2013 at 10:24 AM, Michael Paquier
<mic...@gm...> wrote:
> On Fri, Jun 21, 2013 at 5:45 AM, Matt Warner <MW...@xi...> wrote:
>> Just a quick question about contributing fixes. I've had to make
>> some minor changes to get XC compiled on Solaris x64.
>>
>> What format would you like to see for the changes? Most are very
>> minor, such as removing return statements inside void functions
>> (which the Solaris compiler flags as incorrect since you can't
>> return from a void function).
> Please send patches generated by git, based on the branch you want
> the fix applied to (generally master), so that fixes can be easily
> backported to other maintenance branches. As for the format of the
> patches, I personally don't really mind, and I am sure the others
> will agree, if such patches are generated without a context diff, as
> long as they are understandable.
Here are some more guidelines that the postgres community follows;
just be sure to send the patches to the correct ML.
http://wiki.postgresql.org/wiki/Submitting_a_Patch
--
Michael
|
From: Michael P. <mic...@gm...> - 2013-06-21 01:24:46
|
On Fri, Jun 21, 2013 at 5:45 AM, Matt Warner <MW...@xi...> wrote:
> Just a quick question about contributing fixes. I've had to make some
> minor changes to get XC compiled on Solaris x64.
>
> What format would you like to see for the changes? Most are very
> minor, such as removing return statements inside void functions
> (which the Solaris compiler flags as incorrect since you can't return
> from a void function).
Please send patches generated by git, based on the branch you want the
fix applied to (generally master), so that fixes can be easily
backported to other maintenance branches. As for the format of the
patches, I personally don't really mind, and I am sure the others will
agree, if such patches are generated without a context diff, as long
as they are understandable.

Thanks,
--
Michael
|
From: Matt W. <MW...@XI...> - 2013-06-20 20:45:59
|
Just a quick question about contributing fixes. I've had to make some minor changes to get XC compiled on Solaris x64. What format would you like to see for the changes? Most are very minor, such as removing return statements inside void functions (which the Solaris compiler flags as incorrect since you can't return from a void function). Matt |
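To make the class of fix concrete, here is a hypothetical illustration
of the pattern described above (the function names are invented; this
is not the actual patch). gcc tolerates a void function that returns
the result of another void call, while the Sun Studio compiler rejects
it:

--------
#include <stdio.h>

static void
log_event(int code)
{
    fprintf(stderr, "event %d\n", code);
}

static void
notify_pooler(int code)
{
    /*
     * The original form was "return log_event(code);".  gcc accepts
     * returning a void expression from a void function; the Sun
     * Studio compiler rejects it, so the fix is simply to drop the
     * "return" and call the function as a plain statement.
     */
    log_event(code);
}

int
main(void)
{
    notify_pooler(42);
    return 0;
}
--------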
From: Koichi S. <koi...@gm...> - 2013-06-20 07:32:39
|
I agree with Ashutosh's comment, but this may need further analysis
and a cleanup of the code currently blocking distribution column
updates too. That could be time consuming, so I think Amit's approach
is okay for now (we need a cleanup anyway later).

Regards;
----------
Koichi Suzuki

2013/6/20 Amit Khandekar <ami...@en...>

> On 20 June 2013 11:25, Ashutosh Bapat <ash...@en...> wrote:
>> Hi Amit,
>> Can we move this check deep into heap_update or something? That way
>> we can cover all the ways the partition column can be updated.
>
> That needs some analysis of the extent to which these checks in
> heap_update() would impact performance. Currently we disallow
> updating partition columns in SQL, so a user would expect such
> updates to be restricted everywhere else as well, and triggers are
> one area where these updates are silently ignored.
|
From: Amit K. <ami...@en...> - 2013-06-20 06:12:24
|
On 20 June 2013 11:25, Ashutosh Bapat <ash...@en...> wrote:
> Hi Amit,
> Can we move this check deep into heap_update or something? That way
> we can cover all the ways the partition column can be updated.

That needs some analysis of the extent to which these checks in
heap_update() would impact performance. Currently we disallow updating
partition columns in SQL, so a user would expect such updates to be
restricted everywhere else as well, and triggers are one area where
these updates are silently ignored.

> On Thu, Jun 20, 2013 at 10:25 AM, Amit Khandekar
> <ami...@en...> wrote:
>> Currently there is no check whether the trigger function has
>> modified the partition column for UPDATE. The attached patch does
>> the verification in ExecBRUpdateTriggers(). INSERT does not require
>> it, because it is a new row being inserted, and if the trigger
>> function modifies the column values, the INSERT gets executed on the
>> datanode corresponding to the final distribution value of the row.
>>
>> A user can still make the trigger function immutable, and if this
>> trigger runs on a datanode it will be able to do partition column
>> updates; but then that can be done by any other function just by
>> marking it immutable and doing partition column updates via an SQL
>> statement. It is the user's responsibility to mark functions
>> immutable carefully.
>>
>> Added testcases in xc_triggers.sql.
>>
>> -Amit
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
|
From: Ashutosh B. <ash...@en...> - 2013-06-20 05:55:40
|
Hi Amit,
Can we move this check deep into heap_update or something? That way we
can cover all the ways the partition column can be updated.

On Thu, Jun 20, 2013 at 10:25 AM, Amit Khandekar
<ami...@en...> wrote:
> Currently there is no check whether the trigger function has modified
> the partition column for UPDATE. The attached patch does the
> verification in ExecBRUpdateTriggers(). INSERT does not require it,
> because it is a new row being inserted, and if the trigger function
> modifies the column values, the INSERT gets executed on the datanode
> corresponding to the final distribution value of the row.
>
> A user can still make the trigger function immutable, and if this
> trigger runs on a datanode it will be able to do partition column
> updates; but then that can be done by any other function just by
> marking it immutable and doing partition column updates via an SQL
> statement. It is the user's responsibility to mark functions
> immutable carefully.
>
> Added testcases in xc_triggers.sql.
>
> -Amit

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
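For readers following the thread, here is a rough sketch of the kind
of verification being proposed, under assumed names (this is not the
actual patch): after the BEFORE ROW trigger has produced the new
tuple, compare the distribution column of the old and new versions and
raise an error if it changed:

--------
#include "postgres.h"
#include "access/htup.h"
#include "utils/datum.h"

/*
 * Sketch only; the function name and signature are hypothetical.
 * Compares the distribution (partition) column of the pre- and
 * post-trigger tuples and errors out if the trigger changed it.
 */
static void
check_distcol_unchanged(TupleDesc tupdesc, HeapTuple oldtuple,
                        HeapTuple newtuple, AttrNumber distcol,
                        bool typbyval, int16 typlen)
{
    bool    oldnull;
    bool    newnull;
    Datum   oldval = heap_getattr(oldtuple, distcol, tupdesc, &oldnull);
    Datum   newval = heap_getattr(newtuple, distcol, tupdesc, &newnull);

    if (oldnull != newnull ||
        (!oldnull && !datumIsEqual(oldval, newval, typbyval, typlen)))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("distribution column cannot be modified by a trigger")));
}
--------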
From: Koichi S. <koi...@gm...> - 2013-06-19 08:04:56
|
I did something different for this bug, as follows;

----8<-----------8<-------------
koichi=# create table T1
koichi-# (
koichi(#   C1 varchar(128),
koichi(#   C2 numeric(9,2),
koichi(#   C3 int);
CREATE TABLE
koichi=#
koichi=# insert into T1 values
koichi-# ('N1', 1.1, 20121001),
koichi-# ('N1', 2.1, 20121002);
INSERT 0 2
koichi=#
koichi=# select
koichi-#   C1 "NewC1",
koichi-#   avg(C2)::Numeric(9,2) "NewC2"
koichi-# from T1
koichi-# where C3 between 20121001 and 20121231
koichi-# group by C1;
 NewC1 | NewC2
-------+-------
 N1    |  1.60
(1 row)

koichi=#
koichi=# create table newt as
koichi-# select
koichi-#   C1 "NewC1",
koichi-#   avg(C2)::Numeric(9,2) "NewC2"
koichi-# from T1
koichi-# where C3 between 20121001 and 20121231
koichi-# group by C1;
ERROR:  function pg_catalog.numeric_avg(numeric) does not exist
LINE 4:   avg(C2)::Numeric(9,2) "NewC2"
          ^
HINT:  No function matches the given name and argument types. You
might need to add explicit type casts.
koichi=#
koichi=# create table newt1 as
koichi-# select
koichi-#   C1 "NewC1",
koichi-#   C2::Numeric(9,2) "NewC2"
koichi-# from T1
koichi-# where C3 between 20121001 and 20121231;
INSERT 0 2
koichi=#
koichi=# create table newt2 as
koichi-# select
koichi-#   C1 "NewC1",
koichi-#   sum(C2)::Numeric(9,2) "NewC2"
koichi-# from T1
koichi-# where C3 between 20121001 and 20121231
koichi-# group by C1;
INSERT 0 1
koichi=# create table newt4 as
koichi-# select
koichi-#   C1 "NewC1",
koichi-#   count(*)::Numeric(9,2) "NewC2"
koichi-# from T1
koichi-# where C3 between 20121001 and 20121231
koichi-# group by C1;
INSERT 0 1
koichi=#
---->8----------->8-------------

The interesting result: sum() and count() can be used in CREATE TABLE
AS SELECT ..., but avg() cannot. I suspect this issue occurs with
aggregates where we have different functions for two-phase
aggregation. Hope this is a good hint to fix the problem.

Regards;
----------
Koichi Suzuki

2013/6/19 Koichi Suzuki <koi...@gm...>

> I've checked the current master status for the bug 3602898.
> Unfortunately, the bug still exists.
>
> Attached is the script to reproduce it.
>
> Regards;
> ----------
> Koichi Suzuki
|
From: Amit K. <ami...@en...> - 2013-06-18 06:48:05
|
On 17 June 2013 19:16, Ashutosh Bapat <ash...@en...> wrote:
> On Mon, Jun 17, 2013 at 4:36 PM, Amit Khandekar
> <ami...@en...> wrote:
>> On 7 June 2013 15:28, Ashutosh Bapat <ash...@en...> wrote:
>>> Hi All,
>>> If there is a primary key column in the GROUP BY clause, every
>>> group will have only one row. In this case, even if there are
>>> ungrouped columns in the query, PG 9.2 doesn't raise any error. But
>>> this behaviour is only allowed if the functional dependence of the
>>> ungrouped columns does not need to be inferred from subqueries (if
>>> there are any). I tried to provide a patch to extend the functional
>>> dependency to subqueries, but the community is not willing to have
>>> this extension. See the mail thread "Functional dependencies and
>>> GROUP BY - for subqueries" for more details.
>>
>> Instead of a new field Query->has_func_grouping, can we use the
>> Query->constraintDeps field to check the presence of functional
>> dependencies in the query? From looking at the code, it seems this
>> field gives the same information that you need.
>
> That's a very good suggestion. Here's the updated patch.

Ok. This looks all good for commit.

>> Apart from the scenario of the target list containing ungrouped
>> columns, the other scenario is that of the HAVING clause containing
>> ungrouped columns. In the latter case, the remote query contains a
>> corresponding WHERE clause, so the remote query does not fail with
>> an error, unlike the case where the target list contains ungrouped
>> columns. But with your fix, even the query with the HAVING clause is
>> prevented from using a reduced group-by query. I think that might be
>> deemed ok, on the basis that an aggregate query that generates
>> single-row groups is not going to benefit from the reduction anyway.
>> So I am fine with it, but wanted to mention it just in case you are
>> not aware that the fix also affects the HAVING clause in the way
>> mentioned above.
>
> Yes, I am aware of this fact.
>
>>> So, in XC, we have to disable the GROUP BY optimization for such
>>> queries, since the way queries are constructed in XC (to be pushed
>>> to the datanodes), they end up with subqueries.
>>>
>>> Here's the patch for the same. This fixes bug 3604199 and
>>> regression test functional_deps.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
|
From: Amit K. <ami...@en...> - 2013-06-17 12:36:49
|
On 7 June 2013 15:28, Ashutosh Bapat <ash...@en...> wrote:
> Hi All,
> If there is a primary key column in the GROUP BY clause, every group
> will have only one row. In this case, even if there are ungrouped
> columns in the query, PG 9.2 doesn't raise any error. But this
> behaviour is only allowed if the functional dependence of the
> ungrouped columns does not need to be inferred from subqueries (if
> there are any). I tried to provide a patch to extend the functional
> dependency to subqueries, but the community is not willing to have
> this extension. See the mail thread "Functional dependencies and
> GROUP BY - for subqueries" for more details.

Instead of a new field Query->has_func_grouping, can we use the
Query->constraintDeps field to check the presence of functional
dependencies in the query? From looking at the code, it seems this
field gives the same information that you need.

Apart from the scenario of the target list containing ungrouped
columns, the other scenario is that of the HAVING clause containing
ungrouped columns. In the latter case, the remote query contains a
corresponding WHERE clause, so the remote query does not fail with an
error, unlike the case where the target list contains ungrouped
columns. But with your fix, even the query with the HAVING clause is
prevented from using a reduced group-by query. I think that might be
deemed ok, on the basis that an aggregate query that generates
single-row groups is not going to benefit from the reduction anyway.
So I am fine with it, but wanted to mention it just in case you are
not aware that the fix also affects the HAVING clause in the way
mentioned above.

> So, in XC, we have to disable the GROUP BY optimization for such
> queries, since the way queries are constructed in XC (to be pushed to
> the datanodes), they end up with subqueries.
>
> Here's the patch for the same. This fixes bug 3604199 and regression
> test functional_deps.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
|
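A sketch of where such a test could sit in the planner's push-down
decision. The surrounding function name is invented, but
Query->constraintDeps is the field the parser populates when grouping
relies on inferred functional dependencies:

--------
#include "postgres.h"
#include "nodes/parsenodes.h"

/* Sketch only; grouping_is_pushable() is a hypothetical name. */
static bool
grouping_is_pushable(Query *query)
{
    /*
     * constraintDeps lists the pg_constraint OIDs the query depends
     * on.  It is non-empty exactly when ungrouped columns were
     * allowed because of a functional dependency on a primary key,
     * which is the case that cannot be pushed into a subquery.
     */
    if (query->constraintDeps != NIL)
        return false;

    /* ... the other push-down conditions would be checked here ... */
    return true;
}
--------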
From: Ashutosh B. <ash...@en...> - 2013-06-17 09:53:43
|
Hi Tomonari,
In which function did you capture this debug info? What are list1 and
list2?

On Mon, Jun 17, 2013 at 10:13 AM, Tomonari Katsumata
<kat...@po...> wrote:
> Hi Ashutosh,
>
> Sorry for the slow response.
>
> I've watched each of the lists at the list_concat function.
>
> This function is called several times, but the lists before the last
> infinite loop are like below.
>
> [list1]
> (gdb) p *list1->head
> $18 = {data = {ptr_value = 0x17030e8, int_value = 24129768,
>        oid_value = 24129768}, next = 0x170d418}
> (gdb) p *list1->head->next
> $19 = {data = {ptr_value = 0x17033d0, int_value = 24130512,
>        oid_value = 24130512}, next = 0x170fd40}
> (gdb) p *list1->head->next->next
> $20 = {data = {ptr_value = 0x170ae58, int_value = 24161880,
>        oid_value = 24161880}, next = 0x171e6c8}
> (gdb) p *list1->head->next->next->next
> $21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680,
>        oid_value = 24128680}, next = 0x171ed28}
> (gdb) p *list1->head->next->next->next->next
> $22 = {data = {ptr_value = 0x170af68, int_value = 24162152,
>        oid_value = 24162152}, next = 0x171f3a0}
> (gdb) p *list1->head->next->next->next->next->next
> $23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472,
>        oid_value = 24162472}, next = 0x170b7c0}
> (gdb) p *list1->head->next->next->next->next->next->next
> $24 = {data = {ptr_value = 0x17035f0, int_value = 24131056,
>        oid_value = 24131056}, next = 0x1720998}
> ---- from ---
> (gdb) p *list1->head->next->next->next->next->next->next->next
> $25 = {data = {ptr_value = 0x17209b8, int_value = 24250808,
>        oid_value = 24250808}, next = 0x1721190}
> (gdb) p *list1->head->next->next->next->next->next->next->next->next
> $26 = {data = {ptr_value = 0x17211b0, int_value = 24252848,
>        oid_value = 24252848}, next = 0x1721988}
> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next
> $27 = {data = {ptr_value = 0x17219a8, int_value = 24254888,
>        oid_value = 24254888}, next = 0x1722018}
> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next->next
> $28 = {data = {ptr_value = 0x1722038, int_value = 24256568,
>        oid_value = 24256568}, next = 0x1722820}
> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next->next->next
> $29 = {data = {ptr_value = 0x1722840, int_value = 24258624,
>        oid_value = 24258624}, next = 0x0}
> ---- to ----
>
> [list2]
> (gdb) p *list2->head
> $31 = {data = {ptr_value = 0x17209b8, int_value = 24250808,
>        oid_value = 24250808}, next = 0x1721190}
> (gdb) p *list2->head->next
> $32 = {data = {ptr_value = 0x17211b0, int_value = 24252848,
>        oid_value = 24252848}, next = 0x1721988}
> (gdb) p *list2->head->next->next
> $33 = {data = {ptr_value = 0x17219a8, int_value = 24254888,
>        oid_value = 24254888}, next = 0x1722018}
> (gdb) p *list2->head->next->next->next
> $34 = {data = {ptr_value = 0x1722038, int_value = 24256568,
>        oid_value = 24256568}, next = 0x1722820}
> (gdb) p *list2->head->next->next->next->next
> $35 = {data = {ptr_value = 0x1722840, int_value = 24258624,
>        oid_value = 24258624}, next = 0x0}
> ----
>
> list1's last five elements are the same as all of list2's elements
> (in the example above, everything between "from" and "to" in list1
> equals all of list2).
>
> This is the cause of the infinite loop, but I cannot look any deeper,
> because some values from gdb are optimized out and not displayed.
> I tried compiling with CFLAGS=O0, but I couldn't.
>
> What more can I do?
>
> regards,
> ------------------
> NTT Software Corporation
> Tomonari Katsumata
>
> (2013/06/12 21:04), Ashutosh Bapat wrote:
> > Hi Tomonari,
> > Can you please check the list's sanity before calling
> > pgxc_collect_RTE() and at every point in the minions of this
> > function? My primary suspect is the line pgxcplan.c:3094. We should
> > copy the list before concatenating it.
> >
> > On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata
> > <kat...@po...> wrote:
> >> Hi Ashutosh,
> >>
> >> Thank you for the response.
> >>
> >> (2013/06/12 14:43), Ashutosh Bapat wrote:
> >> >> Hi,
> >> >>
> >> >> I've investigated this problem (BUG:3614369).
> >> >>
> >> >> I caught the cause of it, but I cannot find where to fix it.
> >> >>
> >> >> The problem occurs when "pgxc_collect_RTE_walker" is called
> >> >> infinitely. It seems that rtable (a List of RangeTable) becomes
> >> >> a cyclic List. I'm not sure where the List is made.
> >> >>
> >> > I guess we are talking about the EXECUTE DIRECT statement that
> >> > you have mentioned earlier.
> >>
> >> Yes, that's right.
> >> I'm talking about an EXECUTE DIRECT statement like below.
> >> ---
> >> EXECUTE DIRECT ON (data1) $$
> >> SELECT
> >>     count(*)
> >> FROM
> >>     (SELECT * FROM pg_locks l LEFT JOIN
> >>      (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a
> >> $$
> >> ---
> >>
> >> > The function pgxc_collect_RTE_walker() is a recursive function.
> >> > The condition to end the recursion is if the given node is NULL.
> >> > We have to look at whether that condition is met, and if not,
> >> > why.
> >>
> >> I investigated it deeper, and I noticed that the infinite loop
> >> happens in the function "range_table_walker()".
> >>
> >> Please see the trace below.
> >> ===========================
> >> Breakpoint 1, range_table_walker (rtable=0x15e7968,
> >>     walker=0x612c70 <pgxc_collect_RTE_walker>,
> >>     context=0x7fffd2de31c0, flags=0) at nodeFuncs.c:1908
> >> 1908 in nodeFuncs.c
> >>
> >> (gdb) p *rtable
> >> $10 = {type = T_List, length = 5, head = 0x15e7998, tail = 0x15e9820}
> >> (gdb) p *rtable->head
> >> $11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760,
> >>        oid_value = 22968760}, next = 0x15e8190}
> >> (gdb) p *rtable->head->next
> >> $12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800,
> >>        oid_value = 22970800}, next = 0x15e8988}
> >> (gdb) p *rtable->head->next->next
> >> $13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840,
> >>        oid_value = 22972840}, next = 0x15e9018}
> >> (gdb) p *rtable->head->next->next->next
> >> $14 = {data = {ptr_value = 0x15e9038, int_value = 22974520,
> >>        oid_value = 22974520}, next = 0x15e9820}
> >> (gdb) p *rtable->head->next->next->next->next
> >> $15 = {data = {ptr_value = 0x15e9840, int_value = 22976576,
> >>        oid_value = 22976576}, next = 0x15e7998}
> >> ===========================
> >>
> >> The line which starts with "$15" has 0x15e7998 as its next
> >> pointer, but that is the head pointer (see the line which starts
> >> with $10).
> >>
> >> And in range_table_walker(), the function is called recursively:
> >> --------
> >> ...
> >> if (!(flags & QTW_IGNORE_RANGE_TABLE))
> >> {
> >>     if (range_table_walker(query->rtable, walker, context, flags))
> >>         return true;
> >> }
> >> ...
> >> --------
> >>
> >> We should make rtable right or deal with "flags" properly, but I
> >> can't find where to do it...
> >>
> >> What do you think?
> >>
> >> regards,
> >> ---------
> >> NTT Software Corporation
> >> Tomonari Katsumata

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
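Since the symptom above is a List whose cell chain bites its own tail,
a small diagnostic along these lines can confirm the corruption before
the walker recurses. This is an illustrative helper written against
the List/ListCell layout visible in the gdb output, not code from the
tree:

--------
#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Floyd's tortoise-and-hare cycle check over a List's cell chain.
 * Diagnostic sketch only; not part of the XC sources.
 */
static bool
list_is_circular(const List *list)
{
    const ListCell *slow;
    const ListCell *fast;

    if (list == NIL)
        return false;

    slow = fast = list->head;
    while (fast != NULL && fast->next != NULL)
    {
        slow = slow->next;          /* advances one cell per step */
        fast = fast->next->next;    /* advances two cells per step */
        if (slow == fast)
            return true;            /* the chain loops back on itself */
    }
    return false;                   /* reached a NULL next: a proper list */
}
--------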