From: Tomonari K. <t.k...@gm...> - 2013-07-01 11:51:22
Hi,

I've tried the regression test, but I could not get the right result. Both the
patched and the unpatched Postgres-XC give the same result, so I think my
procedure is wrong. I run a gtm, a gtm_proxy, a coordinator, and a datanode on
one server (CentOS 6.4 x86_64 on VMware), and run "make installcheck". The
configure options are [--enable-debug CFLAGS=""]. What is the right way to run
the regression test?

Output is below.
---------------------------------------------------------
test tablespace               ... ok
test boolean                  ... ok
test char                     ... ok
test name                     ... ok
test varchar                  ... ok
test text                     ... ok
test int2                     ... ok
test int4                     ... ok
test int8                     ... ok
test oid                      ... ok
test float4                   ... ok
test float8                   ... ok
test bit                      ... ok
test numeric                  ... ok
test txid                     ... ok
test uuid                     ... ok
test enum                     ... FAILED
test money                    ... ok
test rangetypes               ... ok
test strings                  ... ok
test numerology               ... ok
test point                    ... ok
test lseg                     ... ok
test box                      ... ok
test path                     ... ok
test polygon                  ... ok
test circle                   ... ok
test date                     ... ok
test time                     ... ok
test timetz                   ... ok
test timestamp                ... ok
test timestamptz              ... ok
test interval                 ... ok
test abstime                  ... ok
test reltime                  ... ok
test tinterval                ... ok
test inet                     ... ok
test macaddr                  ... ok
test tstypes                  ... ok
test comments                 ... ok
test geometry                 ... ok
test horology                 ... ok
test regex                    ... ok
test oidjoins                 ... ok
test type_sanity              ... ok
test opr_sanity               ... ok
test insert                   ... ok
test create_function_1        ... ok
test create_type              ... ok
test create_table             ... ok
test create_function_2        ... ok
test create_function_3        ... ok
test copy                     ... ok
test copyselect               ... ok
test create_misc              ... ok
test create_operator          ... ok
test create_index             ... FAILED
test create_view              ... ok
test create_aggregate         ... ok
test create_cast              ... ok
test constraints              ... FAILED
test triggers                 ... ok
test inherit                  ... FAILED
test create_table_like        ... ok
test typed_table              ... ok
test vacuum                   ... ok
test drop_if_exists           ... ok
test sanity_check             ... ok
test errors                   ... ok
test select                   ... ok
test select_into              ... ok
test select_distinct          ... ok
test select_distinct_on       ... ok
test select_implicit          ... ok
test select_having            ... ok
test subselect                ... ok
test union                    ... ok
test case                     ... ok
test join                     ... FAILED
test aggregates               ... FAILED
test transactions             ... ok
test random                   ... ok
test portals                  ... ok
test arrays                   ... FAILED
test btree_index              ... ok
test hash_index               ... ok
test update                   ... ok
test delete                   ... ok
test namespace                ... ok
test prepared_xacts           ... ok
test privileges               ... FAILED
test security_label           ... ok
test collate                  ... FAILED
test misc                     ... ok
test rules                    ... ok
test select_views             ... ok
test portals_p2               ... ok
test foreign_key              ... ok
test cluster                  ... ok
test dependency               ... ok
test guc                      ... ok
test bitmapops                ... ok
test combocid                 ... ok
test tsearch                  ... ok
test tsdicts                  ... ok
test foreign_data             ... ok
test window                   ... ok
test xmlmap                   ... FAILED (test process exited with exit code 2)
test functional_deps          ... FAILED (test process exited with exit code 2)
test advisory_lock            ... FAILED (test process exited with exit code 2)
test json                     ... FAILED (test process exited with exit code 2)
test plancache                ... FAILED (test process exited with exit code 2)
test limit                    ... FAILED (test process exited with exit code 2)
test plpgsql                  ... FAILED (test process exited with exit code 2)
test copy2                    ... FAILED (test process exited with exit code 2)
test temp                     ... FAILED (test process exited with exit code 2)
test domain                   ... FAILED (test process exited with exit code 2)
test rangefuncs               ... FAILED (test process exited with exit code 2)
test prepare                  ... FAILED (test process exited with exit code 2)
test without_oid              ... FAILED (test process exited with exit code 2)
test conversion               ... FAILED (test process exited with exit code 2)
test truncate                 ... FAILED (test process exited with exit code 2)
test alter_table              ... FAILED (test process exited with exit code 2)
test sequence                 ... FAILED (test process exited with exit code 2)
test polymorphism             ... FAILED (test process exited with exit code 2)
test rowtypes                 ... FAILED (test process exited with exit code 2)
test returning                ... FAILED (test process exited with exit code 2)
test largeobject              ... FAILED (test process exited with exit code 2)
test with                     ... FAILED (test process exited with exit code 2)
test xml                      ... FAILED (test process exited with exit code 2)
test stats                    ... FAILED (test process exited with exit code 2)
test xc_create_function       ... FAILED (test process exited with exit code 2)
test xc_groupby               ... FAILED (test process exited with exit code 2)
test xc_distkey               ... FAILED (test process exited with exit code 2)
test xc_having                ... FAILED (test process exited with exit code 2)
test xc_temp                  ... FAILED (test process exited with exit code 2)
test xc_remote                ... FAILED (test process exited with exit code 2)
test xc_node                  ... FAILED (test process exited with exit code 2)
test xc_FQS                   ... FAILED (test process exited with exit code 2)
test xc_FQS_join              ... FAILED (test process exited with exit code 2)
test xc_misc                  ... FAILED (test process exited with exit code 2)
test xc_triggers              ... FAILED (test process exited with exit code 2)
test xc_trigship              ... FAILED (test process exited with exit code 2)
test xc_constraints           ... FAILED (test process exited with exit code 2)
test xc_copy                  ... FAILED (test process exited with exit code 2)
test xc_alter_table           ... FAILED (test process exited with exit code 2)
test xc_sequence              ... FAILED (test process exited with exit code 2)
test xc_prepared_xacts        ... FAILED (test process exited with exit code 2)
test xc_notrans_block         ... FAILED (test process exited with exit code 2)
test xc_limit                 ... FAILED (test process exited with exit code 2)
test xc_sort                  ... FAILED (test process exited with exit code 2)
test xc_returning             ... FAILED (test process exited with exit code 2)
test xc_params                ... FAILED (test process exited with exit code 2)
=========================
 55 of 153 tests failed.
=========================

2013/7/1 Tomonari Katsumata <kat...@po...>

> Hi Ashutosh,
>
> OK, I'll try regression test.
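The message above asks for the right way to run the regression suite, and a
later quote in this thread mentions a failed attempt to rebuild with
"CFLAGS=O0" to get un-optimized values in gdb. A minimal sketch of one common
workflow follows; the prefix path is an assumption for illustration, and the
missing leading dash in "-O0" is only a plausible explanation for the build
failure, not confirmed by the thread.

```shell
# Rebuild with debugging symbols and no optimization.  Note the leading
# dash: CFLAGS=O0 (without the dash) is not a valid compiler flag and is
# a likely reason the earlier -O0 rebuild attempt failed.
CFLAGS="-O0 -g" ./configure --enable-debug --prefix="$HOME/pgxc"
make
make install

# "make installcheck" runs the regression tests against an installed,
# already-running server (the coordinator started beforehand), whereas
# "make check" would build and use a temporary instance instead.
make installcheck
```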
From: Tomonari K. <kat...@po...> - 2013-07-01 08:09:43
Hi Ashutosh,

OK, I'll try the regression test.
Please wait for the result.

regards,
------------
NTT Software Corporation
Tomonari Katsumata

(2013/07/01 17:06), Ashutosh Bapat wrote:
> Hi Tomonori,
>
> On Mon, Jul 1, 2013 at 1:33 PM, Tomonari Katsumata <
> kat...@po...> wrote:
>
>> Hi Ashutosh and all,
>>
>> Sorry for the late response.
>> I made a patch resolving the problem I mentioned before.
>>
>> I think the reason for this problem is that the query is parsed twice.
>> Because of this, the rtable is built from the same Lists and becomes
>> a cyclic List.
>> I fixed this problem by making the former List empty.
>>
>> I'm not sure whether this fix leads to any other problems, but
>> the problem query I mentioned before works fine.
>>
>> This patch is against "a074cac9b6b507e6d4b58c5004673f6cc65fcde1".
>>
> You can check the robustness of the patch by running regression. Please let
> me know what you see.
>
>> regards,
>> ------------------
>> NTT Software Corporation
>> Tomonari Katsumata
>>
>> (2013/06/17 18:53), Ashutosh Bapat wrote:
>>
>>> Hi Tomonari,
>>> In which function have you taken this debug info? What are list1 and
>>> list2?
>>>
>>> On Mon, Jun 17, 2013 at 10:13 AM, Tomonari Katsumata <
>>> kat...@po...> wrote:
>>>
>>>> Hi Ashutosh,
>>>> Sorry for the slow response.
>>>>
>>>> I've watched each of the lists in the list_concat function.
>>>>
>>>> This function is called several times; the lists just before the
>>>> last, infinite loop are like below.
>>>>
>>>> [list1]
>>>> (gdb) p *list1->head
>>>> $18 = {data = {ptr_value = 0x17030e8, int_value = 24129768,
>>>> oid_value = 24129768}, next = 0x170d418}
>>>> (gdb) p *list1->head->next
>>>> $19 = {data = {ptr_value = 0x17033d0, int_value = 24130512,
>>>> oid_value = 24130512}, next = 0x170fd40}
>>>> (gdb) p *list1->head->next->next
>>>> $20 = {data = {ptr_value = 0x170ae58, int_value = 24161880,
>>>> oid_value = 24161880}, next = 0x171e6c8}
>>>> (gdb) p *list1->head->next->next->next
>>>> $21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680,
>>>> oid_value = 24128680}, next = 0x171ed28}
>>>> (gdb) p *list1->head->next->next->next->next
>>>> $22 = {data = {ptr_value = 0x170af68, int_value = 24162152,
>>>> oid_value = 24162152}, next = 0x171f3a0}
>>>> (gdb) p *list1->head->next->next->next->next->next
>>>> $23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472,
>>>> oid_value = 24162472}, next = 0x170b7c0}
>>>> (gdb) p *list1->head->next->next->next->next->next->next
>>>> $24 = {data = {ptr_value = 0x17035f0, int_value = 24131056,
>>>> oid_value = 24131056}, next = 0x1720998}
>>>> ---- from ----
>>>> (gdb) p *list1->head->next->next->next->next->next->next->next
>>>> $25 = {data = {ptr_value = 0x17209b8, int_value = 24250808,
>>>> oid_value = 24250808}, next = 0x1721190}
>>>> (gdb) p *list1->head->next->next->next->next->next->next->next->next
>>>> $26 = {data = {ptr_value = 0x17211b0, int_value = 24252848,
>>>> oid_value = 24252848}, next = 0x1721988}
>>>> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next
>>>> $27 = {data = {ptr_value = 0x17219a8, int_value = 24254888,
>>>> oid_value = 24254888}, next = 0x1722018}
>>>> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next->next
>>>> $28 = {data = {ptr_value = 0x1722038, int_value = 24256568,
>>>> oid_value = 24256568}, next = 0x1722820}
>>>> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next->next->next
>>>> $29 = {data = {ptr_value = 0x1722840, int_value = 24258624,
>>>> oid_value = 24258624}, next = 0x0}
>>>> ---- to ----
>>>>
>>>> [list2]
>>>> (gdb) p *list2->head
>>>> $31 = {data = {ptr_value = 0x17209b8, int_value = 24250808,
>>>> oid_value = 24250808}, next = 0x1721190}
>>>> (gdb) p *list2->head->next
>>>> $32 = {data = {ptr_value = 0x17211b0, int_value = 24252848,
>>>> oid_value = 24252848}, next = 0x1721988}
>>>> (gdb) p *list2->head->next->next
>>>> $33 = {data = {ptr_value = 0x17219a8, int_value = 24254888,
>>>> oid_value = 24254888}, next = 0x1722018}
>>>> (gdb) p *list2->head->next->next->next
>>>> $34 = {data = {ptr_value = 0x1722038, int_value = 24256568,
>>>> oid_value = 24256568}, next = 0x1722820}
>>>> (gdb) p *list2->head->next->next->next->next
>>>> $35 = {data = {ptr_value = 0x1722840, int_value = 24258624,
>>>> oid_value = 24258624}, next = 0x0}
>>>> ----
>>>>
>>>> list1's last five elements are the same cells as list2's elements.
>>>> (in the above example, the part between "from" and "to" in list1
>>>> equals all of list2)
>>>>
>>>> This is the cause of the infinite loop, but I cannot look deeper,
>>>> because some values in gdb are optimized out and not displayed.
>>>> I tried compiling with CFLAGS=O0, but I couldn't.
>>>>
>>>> What more can I do?
>>>>
>>>> regards,
>>>> ------------------
>>>> NTT Software Corporation
>>>> Tomonari Katsumata
>>>>
>>>> (2013/06/12 21:04), Ashutosh Bapat wrote:
>>>> > Hi Tomonari,
>>>> > Can you please check the list's sanity before calling
>>>> > pgxc_collect_RTE() and at every point in the minions of this
>>>> > function. My primary suspect is the line pgxcplan.c:3094. We
>>>> > should copy the list before concatenating it.
>>>> >
>>>> > On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata <
>>>> > kat...@po...> wrote:
>>>> >
>>>> >> Hi Ashutosh,
>>>> >>
>>>> >> Thank you for the response.
>>>> >>
>>>> >> (2013/06/12 14:43), Ashutosh Bapat wrote:
>>>> >> >> Hi,
>>>> >> >> >
>>>> >> >> > I've investigated this problem (BUG:3614369).
>>>> >> >> >
>>>> >> >> > I caught the cause of it, but I cannot find where to fix it.
>>>> >> >> >
>>>> >> >> > The problem occurs when "pgxc_collect_RTE_walker" is called
>>>> >> >> > infinitely.
>>>> >> >> > It seems that rtable (the List of range table entries)
>>>> >> >> > becomes a cyclic List.
>>>> >> >> > I'm not sure where the List is made.
>>>> >> >> >
>>>> >> > I guess we are talking about the EXECUTE DIRECT statement that
>>>> >> > you have mentioned earlier.
>>>> >>
>>>> >> Yes, that's right.
>>>> >> I'm talking about an EXECUTE DIRECT statement like the one below.
>>>> >> ---
>>>> >> EXECUTE DIRECT ON (data1) $$
>>>> >> SELECT
>>>> >>   count(*)
>>>> >> FROM
>>>> >>   (SELECT * FROM pg_locks l LEFT JOIN
>>>> >>    (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a
>>>> >> $$
>>>> >> ---
>>>> >>
>>>> >> > The function pgxc_collect_RTE_walker() is a recursive function.
>>>> >> > The condition to end the recursion is if the given node is NULL.
>>>> >> > We have to look at whether that condition is met, and if not, why.
>>>> >>
>>>> >> I investigated deeper, and I noticed that the infinite loop
>>>> >> happens in the function "range_table_walker()".
>>>> >>
>>>> >> Please see the trace below.
>>>> >> ===========================
>>>> >> Breakpoint 1, range_table_walker (rtable=0x15e7968, walker=0x612c70
>>>> >> <pgxc_collect_RTE_walker>, context=0x7fffd2de31c0,
>>>> >> flags=0) at nodeFuncs.c:1908
>>>> >> 1908 in nodeFuncs.c
>>>> >>
>>>> >> (gdb) p *rtable
>>>> >> $10 = {type = T_List, length = 5, head = 0x15e7998, tail = 0x15e9820}
>>>> >> (gdb) p *rtable->head
>>>> >> $11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760,
>>>> >> oid_value = 22968760}, next = 0x15e8190}
>>>> >> (gdb) p *rtable->head->next
>>>> >> $12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800,
>>>> >> oid_value = 22970800}, next = 0x15e8988}
>>>> >> (gdb) p *rtable->head->next->next
>>>> >> $13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840,
>>>> >> oid_value = 22972840}, next = 0x15e9018}
>>>> >> (gdb) p *rtable->head->next->next->next
>>>> >> $14 = {data = {ptr_value = 0x15e9038, int_value = 22974520,
>>>> >> oid_value = 22974520}, next = 0x15e9820}
>>>> >> (gdb) p *rtable->head->next->next->next->next
>>>> >> $15 = {data = {ptr_value = 0x15e9840, int_value = 22976576,
>>>> >> oid_value = 22976576}, next = 0x15e7998}
>>>> >> ===========================
>>>> >>
>>>> >> The line which starts with "$15" has 0x15e7998 as its next pointer.
>>>> >> But that is the head pointer (see the line which starts with "$10").
>>>> >>
>>>> >> And in range_table_walker(), the function is called recursively.
>>>> >> --------
>>>> >> ...
>>>> >> if (!(flags & QTW_IGNORE_RANGE_TABLE))
>>>> >> {
>>>> >>     if (range_table_walker(query->rtable, walker, context, flags))
>>>> >>         return true;
>>>> >> }
>>>> >> ...
>>>> >> --------
>>>> >>
>>>> >> We should make rtable right or deal with "flags" properly.
>>>> >> But I can't find where to do it...
>>>> >>
>>>> >> What do you think?
>>>> >>
>>>> >> regards,
>>>> >> ---------
>>>> >> NTT Software Corporation
>>>> >> Tomonari Katsumata
From: Ashutosh B. <ash...@en...> - 2013-07-01 08:06:34
Hi Tomonori, On Mon, Jul 1, 2013 at 1:33 PM, Tomonari Katsumata < kat...@po...> wrote: > Hi Ashutosh and all, > > Sorry for late response. > I made a patch for resolving the problem I mentioned before. > > I thought the reason of this problem is parsing query twice. > because of this, the rtable is made from same Lists and become > cycliced List. > I fixed this problem with making former List empty. > > I'm not sure this fix leads any anothre problems but > the problem query I mentioned before works fine. > > This patch is against for "**a074cac9b6b507e6d4b58c5004673f**6cc65fcde1". > > You can check the robustness of patch by running regression. Please let me know what you see. > > regards, > ------------------ > NTT Software Corporation > Tomonari Katsumata > > > (2013/06/17 18:53), Ashutosh Bapat wrote: > >> Hi Tomonari, >> In which function have you taken this debug info? What is list1 and list2? >> >> >> On Mon, Jun 17, 2013 at 10:13 AM, Tomonari Katsumata < >> kat...@po....**jp <kat...@po...>> >> wrote: >> >> Hi Ashtosh, >>> >>> Sorry for slow response. >>> >>> I've watched the each lists at list_concat function. >>> >>> This function is called several times, but the lists before >>> last infinitely roop are like below. 
>>> >>> [list1] >>> (gdb) p *list1->head >>> $18 = {data = {ptr_value = 0x17030e8, int_value = 24129768, >>> oid_value = 24129768}, next = 0x170d418} >>> (gdb) p *list1->head->next >>> $19 = {data = {ptr_value = 0x17033d0, int_value = 24130512, >>> oid_value = 24130512}, next = 0x170fd40} >>> (gdb) p *list1->head->next->next >>> $20 = {data = {ptr_value = 0x170ae58, int_value = 24161880, >>> oid_value = 24161880}, next = 0x171e6c8} >>> (gdb) p *list1->head->next->next->next >>> $21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680, >>> oid_value = 24128680}, next = 0x171ed28} >>> (gdb) p *list1->head->next->next->**next->next >>> $22 = {data = {ptr_value = 0x170af68, int_value = 24162152, >>> oid_value = 24162152}, next = 0x171f3a0} >>> (gdb) p *list1->head->next->next->**next->next->next >>> $23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472, >>> oid_value = 24162472}, next = 0x170b7c0} >>> (gdb) p *list1->head->next->next->**next->next->next->next >>> $24 = {data = {ptr_value = 0x17035f0, int_value = 24131056, >>> oid_value = 24131056}, next = 0x1720998} >>> ---- from --- >>> (gdb) p *list1->head->next->next->**next->next->next->next->next >>> $25 = {data = {ptr_value = 0x17209b8, int_value = 24250808, >>> oid_value = 24250808}, next = 0x1721190} >>> (gdb) p *list1->head->next->next->**next->next->next->next->next->**next >>> $26 = {data = {ptr_value = 0x17211b0, int_value = 24252848, >>> oid_value = 24252848}, next = 0x1721988} >>> (gdb) p *list1->head->next->next->**next->next->next->next->next->** >>> next->next >>> $27 = {data = {ptr_value = 0x17219a8, int_value = 24254888, >>> oid_value = 24254888}, next = 0x1722018} >>> (gdb) p >>> *list1->head->next->next->**next->next->next->next->next->** >>> next->next->next >>> $28 = {data = {ptr_value = 0x1722038, int_value = 24256568, >>> oid_value = 24256568}, next = 0x1722820} >>> (gdb) p >>> >>> *list1->head->next->next->**next->next->next->next->next->** >>> next->next->next->next >>> $29 = {data = 
{ptr_value = 0x1722840, int_value = 24258624, >>> oid_value = 24258624}, next = 0x0} >>> ---- to ---- >>> >>> [list2] >>> (gdb) p *list2->head >>> $31 = {data = {ptr_value = 0x17209b8, int_value = 24250808, >>> oid_value = 24250808}, next = 0x1721190} >>> (gdb) p *list2->head->next >>> $32 = {data = {ptr_value = 0x17211b0, int_value = 24252848, >>> oid_value = 24252848}, next = 0x1721988} >>> (gdb) p *list2->head->next->next >>> $33 = {data = {ptr_value = 0x17219a8, int_value = 24254888, >>> oid_value = 24254888}, next = 0x1722018} >>> (gdb) p *list2->head->next->next->next >>> $34 = {data = {ptr_value = 0x1722038, int_value = 24256568, >>> oid_value = 24256568}, next = 0x1722820} >>> (gdb) p *list2->head->next->next->next->next >>> $35 = {data = {ptr_value = 0x1722840, int_value = 24258624, >>> oid_value = 24258624}, next = 0x0} >>> ---- >>> >>> list1's last five elements are the same as all of list2's elements. >>> (in the above example, the cells between "from" and "to" in list1 equal all of list2) >>> >>> This is the cause of the infinite loop, but I cannot >>> look any deeper. >>> Because some values are optimized out and not displayed by gdb. >>> I tried compiling with CFLAGS=-O0, but I couldn't. >>> >>> What can I do more ? >>> >>> regards, >>> ------------------ >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/06/12 21:04), Ashutosh Bapat wrote: >>> > Hi Tomonari, >>> > Can you please check the list's sanity before calling >>> pgxc_collect_RTE() >>> > and at every point in the minions of this function. My primary >>> suspect >>> is >>> > the line pgxcplan.c:3094. We should copy the list before >>> concatenating it. >>> > >>> > >>> > On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata < >>> > kat...@po...> >>> wrote: >>> > >>> >> Hi Ashutosh, >>> >> >>> >> Thank you for the response. >>> >> >>> >> (2013/06/12 14:43), Ashutosh Bapat wrote: >>> >> >> Hi, >>> >> >> > >>> >> >> > I've investigated this problem (BUG:3614369).
>>> >> >> > >>> >> >> > I caught the cause of it, but I cannot >>> >> >> > find where to fix it. >>> >> >> > >>> >> >> > The problem occurs when "pgxc_collect_RTE_walker" is called >>> >> infinitely. >>> >> >> > It seems that rtable (the List of range table entries) becomes a cyclic List. >>> >> >> > I'm not sure where the List is made. >>> >> >> > >>> >> >> > >>> >> > I guess, we are talking about the EXECUTE DIRECT statement that you >>> have >>> >> > mentioned earlier. >>> >> >>> >> Yes, that's right. >>> >> I'm talking about an EXECUTE DIRECT statement like below. >>> >> --- >>> >> EXECUTE DIRECT ON (data1) $$ >>> >> SELECT >>> >> count(*) >>> >> FROM >>> >> (SELECT * FROM pg_locks l LEFT JOIN >>> >> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> >> $$ >>> >> --- >>> >> >>> >> > The function pgxc_collect_RTE_walker() is a recursive >>> >> > function. The condition to end the recursion is if the given >>> node is >>> >> NULL. >>> >> > We have to look at whether that condition is met, and if not, why. >>> >> > >>> >> I investigated it deeper, and I noticed that >>> >> the infinite loop happens in the function "range_table_walker()". >>> >> >>> >> Please see the trace below.
>>> >> =========================== >>> >> Breakpoint 1, range_table_walker (rtable=0x15e7968, walker=0x612c70 >>> >> <pgxc_collect_RTE_walker>, context=0x7fffd2de31c0, >>> >> flags=0) at nodeFuncs.c:1908 >>> >> 1908 in nodeFuncs.c >>> >> >>> >> (gdb) p *rtable >>> >> $10 = {type = T_List, length = 5, head = 0x15e7998, tail = >>> 0x15e9820} >>> >> (gdb) p *rtable->head >>> >> $11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760, >>> oid_value = >>> >> 22968760}, next = 0x15e8190} >>> >> (gdb) p *rtable->head->next >>> >> $12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800, >>> oid_value = >>> >> 22970800}, next = 0x15e8988} >>> >> (gdb) p *rtable->head->next->next >>> >> $13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840, >>> oid_value = >>> >> 22972840}, next = 0x15e9018} >>> >> (gdb) p *rtable->head->next->next->next >>> >> $14 = {data = {ptr_value = 0x15e9038, int_value = 22974520, >>> oid_value = >>> >> 22974520}, next = 0x15e9820} >>> >> (gdb) p *rtable->head->next->next->next->next >>> >> $15 = {data = {ptr_value = 0x15e9840, int_value = 22976576, >>> oid_value = >>> >> 22976576}, next = 0x15e7998} >>> >> =========================== >>> >> >>> >> The line which starts with "$15" has 0x15e7998 as its next data. >>> >> But that is the head pointer (see the line which starts with $10). >>> >> >>> >> And in range_table_walker(), the function is called recursively. >>> >> -------- >>> >> ... >>> >> if (!(flags & QTW_IGNORE_RANGE_TABLE)) >>> >> { >>> >> if (range_table_walker(query->rtable, walker, >>> context, >>> >> flags)) >>> >> return true; >>> >> } >>> >> ... >>> >> -------- >>> >> >>> >> We should make rtable right or deal with "flags" properly. >>> >> But I can't find where to do it... >>> >> >>> >> What do you think ?
>>> >> >>> >> regards, >>> >> --------- >>> >> NTT Software Corporation >>> >> Tomonari Katsumata >>> >> >>> >> >>> >> >>> >> >>> >> >>> >>> ------------------------------------------------------------------------------ >>> >> This SF.net email is sponsored by Windows: >>> >> >>> >> Build for Windows Store. >>> >> >>> >> http://p.sf.net/sfu/windows-dev2dev >>> >> _______________________________________________ >>> >> Postgres-xc-developers mailing list >>> >> Pos...@li... >>> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >>> > >>> > >>> > >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
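[Editor's note] The gdb dumps in the thread above show list1's last five cells at the same addresses as all of list2's cells, so a second concatenation splices a list onto its own tail. The sketch below reproduces that failure mode with a deliberately simplified stand-in for PostgreSQL's singly linked List (hypothetical types; the real ones live in pg_list.h and cells hold a union, not a bare int), plus a Floyd-style cycle check of the kind that a plain walker such as range_table_walker() lacks.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for PostgreSQL's singly linked List
 * (hypothetical: cells hold an int, no NIL/lfirst macros). */
typedef struct ListCell { int value; struct ListCell *next; } ListCell;
typedef struct List { ListCell *head; ListCell *tail; int length; } List;

static List *lappend_int(List *list, int value)
{
    ListCell *cell = malloc(sizeof(ListCell));
    cell->value = value;
    cell->next = NULL;
    if (list == NULL)
    {
        list = malloc(sizeof(List));
        list->head = cell;
        list->length = 0;
    }
    else
        list->tail->next = cell;
    list->tail = cell;
    list->length++;
    return list;
}

/* Pointer-splicing concat, mimicking list_concat(): list2's cells are
 * linked onto list1 without copying.  If list1 already ends with
 * list2's cells (from an earlier concat), this bends list2's tail back
 * onto list2's head -- producing exactly the cycle seen in the gdb
 * dumps ($25..$29 in list1 == $31..$35 in list2). */
static List *list_concat(List *list1, List *list2)
{
    if (list1 == NULL)
        return list2;
    if (list2 == NULL)
        return list1;
    list1->length += list2->length;
    list1->tail->next = list2->head;
    list1->tail = list2->tail;
    return list1;
}

/* Floyd's tortoise-and-hare: reports whether the cell chain cycles.
 * A walker with no such check simply never terminates on a cyclic
 * list, which is the reported infinite loop. */
static int list_has_cycle(const List *list)
{
    ListCell *slow = list->head;
    ListCell *fast = list->head;
    while (fast != NULL && fast->next != NULL)
    {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast)
            return 1;
    }
    return 0;
}
```

Concatenating the same list twice, as when the query is parsed twice, makes the second splice point the shared tail cell back at the shared head cell.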
From: Tomonari K. <kat...@po...> - 2013-07-01 08:04:07
|
Hi Ashutosh and all, Sorry for the late response. I made a patch for resolving the problem I mentioned before. I think the reason for this problem is that the query is parsed twice. Because of this, the rtable is built from the same Lists and becomes a cyclic List. I fixed this problem by making the former List empty. I'm not sure whether this fix leads to any other problems, but the problem query I mentioned before works fine. This patch is against "a074cac9b6b507e6d4b58c5004673f6cc65fcde1". regards, ------------------ NTT Software Corporation Tomonari Katsumata (2013/06/17 18:53), Ashutosh Bapat wrote: > Hi Tomonari, > In which function have you taken this debug info? What are list1 and list2? > > > On Mon, Jun 17, 2013 at 10:13 AM, Tomonari Katsumata < > kat...@po...> wrote: > >> Hi Ashutosh, >> >> Sorry for the slow response. >> >> I've watched each of the lists at the list_concat function. >> >> This function is called several times, but the lists before the >> last infinite loop are like below. >> >> [list1] >> (gdb) p *list1->head >> $18 = {data = {ptr_value = 0x17030e8, int_value = 24129768, >> oid_value = 24129768}, next = 0x170d418} >> (gdb) p *list1->head->next >> $19 = {data = {ptr_value = 0x17033d0, int_value = 24130512, >> oid_value = 24130512}, next = 0x170fd40} >> (gdb) p *list1->head->next->next >> $20 = {data = {ptr_value = 0x170ae58, int_value = 24161880, >> oid_value = 24161880}, next = 0x171e6c8} >> (gdb) p *list1->head->next->next->next >> $21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680, >> oid_value = 24128680}, next = 0x171ed28} >> (gdb) p *list1->head->next->next->next->next >> $22 = {data = {ptr_value = 0x170af68, int_value = 24162152, >> oid_value = 24162152}, next = 0x171f3a0} >> (gdb) p *list1->head->next->next->next->next->next >> $23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472, >> oid_value = 24162472}, next = 0x170b7c0} >> (gdb) p *list1->head->next->next->next->next->next->next >> $24 = {data = {ptr_value = 0x17035f0, int_value = 24131056, >> oid_value
= 24131056}, next = 0x1720998} >> ---- from --- >> (gdb) p *list1->head->next->next->next->next->next->next->next >> $25 = {data = {ptr_value = 0x17209b8, int_value = 24250808, >> oid_value = 24250808}, next = 0x1721190} >> (gdb) p *list1->head->next->next->next->next->next->next->next->next >> $26 = {data = {ptr_value = 0x17211b0, int_value = 24252848, >> oid_value = 24252848}, next = 0x1721988} >> (gdb) p *list1->head->next->next->next->next->next->next->next->next->next >> $27 = {data = {ptr_value = 0x17219a8, int_value = 24254888, >> oid_value = 24254888}, next = 0x1722018} >> (gdb) p >> *list1->head->next->next->next->next->next->next->next->next->next->next >> $28 = {data = {ptr_value = 0x1722038, int_value = 24256568, >> oid_value = 24256568}, next = 0x1722820} >> (gdb) p >> >> *list1->head->next->next->next->next->next->next->next->next->next->next->next >> $29 = {data = {ptr_value = 0x1722840, int_value = 24258624, >> oid_value = 24258624}, next = 0x0} >> ---- to ---- >> >> [list2] >> (gdb) p *list2->head >> $31 = {data = {ptr_value = 0x17209b8, int_value = 24250808, >> oid_value = 24250808}, next = 0x1721190} >> (gdb) p *list2->head->next >> $32 = {data = {ptr_value = 0x17211b0, int_value = 24252848, >> oid_value = 24252848}, next = 0x1721988} >> (gdb) p *list2->head->next->next >> $33 = {data = {ptr_value = 0x17219a8, int_value = 24254888, >> oid_value = 24254888}, next = 0x1722018} >> (gdb) p *list2->head->next->next->next >> $34 = {data = {ptr_value = 0x1722038, int_value = 24256568, >> oid_value = 24256568}, next = 0x1722820} >> (gdb) p *list2->head->next->next->next->next >> $35 = {data = {ptr_value = 0x1722840, int_value = 24258624, >> oid_value = 24258624}, next = 0x0} >> ---- >> >> list1's last five elements are the same as all of list2's elements. >> (in the above example, the cells between "from" and "to" in list1 equal all of list2) >> >> This is the cause of the infinite loop, but I cannot >> look any deeper.
>> Because some values are optimized out and not displayed by gdb. >> I tried compiling with CFLAGS=-O0, but I couldn't. >> >> What can I do more ? >> >> regards, >> ------------------ >> NTT Software Corporation >> Tomonari Katsumata >> >> (2013/06/12 21:04), Ashutosh Bapat wrote: >> > Hi Tomonari, >> > Can you please check the list's sanity before calling pgxc_collect_RTE() >> > and at every point in the minions of this function. My primary suspect >> is >> > the line pgxcplan.c:3094. We should copy the list before >> concatenating it. >> > >> > >> > On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata < >> > kat...@po...> wrote: >> > >> >> Hi Ashutosh, >> >> >> >> Thank you for the response. >> >> >> >> (2013/06/12 14:43), Ashutosh Bapat wrote: >> >> >> Hi, >> >> >> > >> >> >> > I've investigated this problem (BUG:3614369). >> >> >> > >> >> >> > I caught the cause of it, but I cannot >> >> >> > find where to fix it. >> >> >> > >> >> >> > The problem occurs when "pgxc_collect_RTE_walker" is called >> >> infinitely. >> >> >> > It seems that rtable (the List of range table entries) becomes a cyclic List. >> >> >> > I'm not sure where the List is made. >> >> >> > >> >> >> > >> >> > I guess, we are talking about the EXECUTE DIRECT statement that you have >> >> > mentioned earlier. >> >> >> >> Yes, that's right. >> >> I'm talking about an EXECUTE DIRECT statement like below. >> >> --- >> >> EXECUTE DIRECT ON (data1) $$ >> >> SELECT >> >> count(*) >> >> FROM >> >> (SELECT * FROM pg_locks l LEFT JOIN >> >> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >> >> $$ >> >> --- >> >> >> >> > The function pgxc_collect_RTE_walker() is a recursive >> >> > function. The condition to end the recursion is if the given node is >> >> NULL. >> >> > We have to look at whether that condition is met, and if not, why. >> >> > >> >> I investigated it deeper, and I noticed that >> >> the infinite loop happens in the function "range_table_walker()". >> >> >> >> Please see the trace below.
>> >> =========================== >> >> Breakpoint 1, range_table_walker (rtable=0x15e7968, walker=0x612c70 >> >> <pgxc_collect_RTE_walker>, context=0x7fffd2de31c0, >> >> flags=0) at nodeFuncs.c:1908 >> >> 1908 in nodeFuncs.c >> >> >> >> (gdb) p *rtable >> >> $10 = {type = T_List, length = 5, head = 0x15e7998, tail = 0x15e9820} >> >> (gdb) p *rtable->head >> >> $11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760, oid_value = >> >> 22968760}, next = 0x15e8190} >> >> (gdb) p *rtable->head->next >> >> $12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800, oid_value = >> >> 22970800}, next = 0x15e8988} >> >> (gdb) p *rtable->head->next->next >> >> $13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840, oid_value = >> >> 22972840}, next = 0x15e9018} >> >> (gdb) p *rtable->head->next->next->next >> >> $14 = {data = {ptr_value = 0x15e9038, int_value = 22974520, oid_value = >> >> 22974520}, next = 0x15e9820} >> >> (gdb) p *rtable->head->next->next->next->next >> >> $15 = {data = {ptr_value = 0x15e9840, int_value = 22976576, oid_value = >> >> 22976576}, next = 0x15e7998} >> >> =========================== >> >> >> >> The line which starts with "$15" has 0x15e7998 as its next data. >> >> But it is the head pointer(see the line which starts with $10). >> >> >> >> And in range_table_walker(), the function is called recursively. >> >> -------- >> >> ... >> >> if (!(flags & QTW_IGNORE_RANGE_TABLE)) >> >> { >> >> if (range_table_walker(query->rtable, walker, context, >> >> flags)) >> >> return true; >> >> } >> >> ... >> >> -------- >> >> >> >> We should make rtable right or deal with "flags" properly. >> >> But I can't find where to do it... >> >> >> >> What do you think ? >> >> >> >> regards, >> >> --------- >> >> NTT Software Corporation >> >> Tomonari Katsumata >> >> >> >> >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> This SF.net email is sponsored by Windows: >> >> >> >> Build for Windows Store. 
>> >> >> >> http://p.sf.net/sfu/windows-dev2dev >> >> _______________________________________________ >> >> Postgres-xc-developers mailing list >> >> Pos...@li... >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> > >> > >> > >> >> >> >> >> ------------------------------------------------------------------------------ >> This SF.net email is sponsored by Windows: >> >> Build for Windows Store. >> >> http://p.sf.net/sfu/windows-dev2dev >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > |
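[Editor's note] The thread above suggests copying the list before concatenating it (the suspect line pgxcplan.c:3094). The sketch below shows that defensive pattern on the same simplified, hypothetical List stand-in (not the real pg_list.h API): a copy allocates fresh cells, so concatenating the "same" list twice can never splice a list into itself, and a bounded walk confirms the chain still terminates.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for PostgreSQL's List (hypothetical types). */
typedef struct ListCell { int value; struct ListCell *next; } ListCell;
typedef struct List { ListCell *head; ListCell *tail; int length; } List;

static List *lappend_int(List *list, int value)
{
    ListCell *cell = malloc(sizeof(ListCell));
    cell->value = value;
    cell->next = NULL;
    if (list == NULL)
    {
        list = malloc(sizeof(List));
        list->head = cell;
        list->length = 0;
    }
    else
        list->tail->next = cell;
    list->tail = cell;
    list->length++;
    return list;
}

/* Pointer-splicing concat, mimicking list_concat(). */
static List *list_concat(List *list1, List *list2)
{
    if (list1 == NULL)
        return list2;
    if (list2 == NULL)
        return list1;
    list1->length += list2->length;
    list1->tail->next = list2->head;
    list1->tail = list2->tail;
    return list1;
}

/* Fresh cells for every element, so the result never shares cells with
 * the source list.  (The real list_copy() likewise copies the cells,
 * though it shares the pointed-to node data.) */
static List *list_copy(const List *src)
{
    List *dest = NULL;
    ListCell *lc;
    if (src != NULL)
        for (lc = src->head; lc != NULL; lc = lc->next)
            dest = lappend_int(dest, lc->value);
    return dest;
}

/* Bounded walk: counts cells but gives up past 'limit', so a cyclic
 * chain is detectable instead of hanging the caller. */
static int count_cells(const List *list, int limit)
{
    int n = 0;
    ListCell *lc;
    for (lc = list ? list->head : NULL; lc != NULL && n <= limit; lc = lc->next)
        n++;
    return n;
}
```

With `list_concat(rtable, list_copy(rte_list))` the second concatenation appends a second, independent run of cells instead of bending the shared tail back onto the shared head.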
From: Michael P. <mic...@gm...> - 2013-06-27 22:33:32
|
On Fri, Jun 28, 2013 at 1:25 AM, Koichi Suzuki <koi...@gm...> wrote: > Sorry for taking longer. We're still working aggressively. I'd rather say actively ;) -- Michael |
From: Koichi S. <koi...@gm...> - 2013-06-27 16:26:59
|
Thank you Matt. Please wait a bit until the branch for 1.1 is built. Regards; ---------- Koichi Suzuki 2013/6/28 Matt Warner <MW...@xi...> > I'd be happy to continue testing on Solaris. > > From: Koichi Suzuki [mailto:koi...@gm...] > Sent: Wednesday, June 26, 2013 11:19 PM > To: Matt Warner; Postgres-XC Developers > > Subject: Re: [Postgres-xc-developers] Minor Fixes > > Hi, > > I reviewed this thread again. It may be better to include Matt's patch in > the master after we build REL1_1_STABLE so that he can continue his > Solaris-related work on the master. As Ashutosh suggested, it will be > less confusing not to include this in REL1_1_STABLE. > > Because I'm about to build REL1_1_STABLE for beta work, please let me know > if anybody needs Matt's patch in 1.1. > > Matt, could you let me know your idea and whether you can continue to test XC > on Solaris and declare that XC supports Solaris? > > Best Regards; > > ---------- > Koichi Suzuki > > 2013/6/25 Koichi Suzuki <koi...@gm...> > > I meant that removing a "return" statement which returns another function's > return value would be a good refactoring. Of course, a simple return may not > be removed. > > ---------- > Koichi Suzuki > > 2013/6/25 Koichi Suzuki <koi...@gm...> > > Yeah. The code is not harmful at all. Removing "return" from void > functions could be a good refactoring. Although Solaris is not supported > officially yet, I think it's a good idea to have it in master. I do hope > Matt continues to test XC so that we can tell XC runs on Solaris. > > Any more inputs? > > Regards; > > ---------- > Koichi Suzuki > > 2013/6/25 Matt Warner <MW...@xi...> > > I'll double check but I thought I'd only removed return from functions > declaring void as their return type.
**** > > ** ** > > ?**** > > ** ** > > Matt**** > > > On Jun 23, 2013, at 6:22 PM, "鈴木 幸市" <ko...@in...> wrote:**** > > The patch looks reasonable. One comment: removing "return" for non-void > function will cause Linux gcc warning. For this case, we need #ifdef > SOLARIS directive.**** > > ** ** > > You sent two similar patch for proxy_main.c in separate e-mails. The > later one seems to resolve my comment above. Although the core team > cannot declare that XC runs on Solaris so far, I think the patch is > reasonable to be included.**** > > ** ** > > Any other comments?**** > > ---**** > > Koichi Suzuki**** > > ** ** > > ** ** > > ** ** > > On 2013/06/22, at 1:26, Matt Warner <MW...@XI...> wrote:**** > > > > **** > > Regarding the other changes, they are specific to Solaris. For example, in > src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include sys/filio.h. > I’ll be searching to see if I can find a macro already defined for Solaris > that I can leverage to #ifdef those Solaris-specific items.**** > > **** > > Matt**** > > **** > > *From:* Matt Warner > *Sent:* Friday, June 21, 2013 9:21 AM > *To:* 'Koichi Suzuki' > *Cc:* 'pos...@li...' > *Subject:* RE: [Postgres-xc-developers] Minor Fixes**** > > **** > > First patch.**** > > **** > > *From:* Matt Warner > *Sent:* Friday, June 21, 2013 8:50 AM > *To:* 'Koichi Suzuki' > *Cc:* pos...@li... > *Subject:* RE: [Postgres-xc-developers] Minor Fixes**** > > **** > > Yes, I’m running XC on Solaris x64.**** > > **** > > *From:* Koichi Suzuki [mailto:koi...@gm...<koi...@gm...> > ] > *Sent:* Thursday, June 20, 2013 6:34 PM > *To:* Matt Warner > *Cc:* pos...@li... > *Subject:* Re: [Postgres-xc-developers] Minor Fixes**** > > **** > > Thanks a lot for the patch. As Michael mentioned, you can send a patch > to developers mailing list.**** > > **** > > BTW, core team tested current XC on 64bit Intel CentOS and others tested > it against RedHat. 
Did you test XC on Solaris? > > Regards; > > ---------- > Koichi Suzuki > > 2013/6/21 Matt Warner <MW...@xi...> > > Just a quick question about contributing fixes. I've had to make some > minor changes to get XC compiled on Solaris x64. > > What format would you like to see for the changes? Most are very minor, > such as removing return statements inside void functions (which the Solaris > compiler flags as incorrect since you can't return from a void function). > > Matt > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Koichi S. <koi...@gm...> - 2013-06-27 16:25:49
|
Online server addition/removal is already in the master. It took much longer than we expected to fix all the regression tests. Now we will build a branch for 1.1 and merge with PostgreSQL 9.2.4, which will take another week, and then the beta will be out. Sorry for taking longer. We're still working aggressively. Best Regards; ---------- Koichi Suzuki 2013/6/26 Himpich, Stefan <Ste...@se...> > Hello *, > > first of all: thanks for a great project, taking postgres to the next > level. We are looking forward to the feature "Online server addition and > removal" which was announced for Version 1.1 [1]. > > Is this feature still planned for version 1.1 or was it postponed? > > And as the planned release date [1] is in the past, do you have any new > estimations of the release of Version 1.1? > > > Greetings, > Stefan Himpich > > > [1] http://postgres-xc.sourceforge.net/roadmap.html > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Matt W. <MW...@XI...> - 2013-06-27 16:12:03
|
I'd be happy to continue testing on Solaris. From: Koichi Suzuki [mailto:koi...@gm...] Sent: Wednesday, June 26, 2013 11:19 PM To: Matt Warner; Postgres-XC Developers Subject: Re: [Postgres-xc-developers] Minor Fixes Hi, I reviewed this thread again. It may be better to include Matt's patch in the master after we build REL1_1_STABLE so that he can continue his Solaris-related work on the master. As Ashutosh suggested, it will be less confusing not to include this in REL1_1_STABLE. Because I'm about to build REL1_1_STABLE for beta work, please let me know if anybody needs Matt's patch in 1.1. Matt, could you let me know your idea and whether you can continue to test XC on Solaris and declare that XC supports Solaris? Best Regards; ---------- Koichi Suzuki 2013/6/25 Koichi Suzuki <koi...@gm...> I meant that removing a "return" statement which returns another function's return value would be a good refactoring. Of course, a simple return may not be removed. ---------- Koichi Suzuki 2013/6/25 Koichi Suzuki <koi...@gm...> Yeah. The code is not harmful at all. Removing "return" from void functions could be a good refactoring. Although Solaris is not supported officially yet, I think it's a good idea to have it in master. I do hope Matt continues to test XC so that we can tell XC runs on Solaris. Any more inputs? Regards; ---------- Koichi Suzuki 2013/6/25 Matt Warner <MW...@xi...> I'll double check but I thought I'd only removed return from functions declaring void as their return type. ? Matt On Jun 23, 2013, at 6:22 PM, "鈴木 幸市" <ko...@in...> wrote: The patch looks reasonable. One comment: removing "return" from a non-void function will cause a Linux gcc warning. For this case, we need an #ifdef SOLARIS directive. You sent two similar patches for proxy_main.c in separate e-mails. The latter one seems to resolve my comment above.
Although the core team cannot declare that XC runs on Solaris so far, I think the patch is reasonable to be included. Any other comments? --- Koichi Suzuki On 2013/06/22, at 1:26, Matt Warner <MW...@XI...> wrote: Regarding the other changes, they are specific to Solaris. For example, in src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include sys/filio.h. I'll be searching to see if I can find a macro already defined for Solaris that I can leverage to #ifdef those Solaris-specific items. Matt From: Matt Warner Sent: Friday, June 21, 2013 9:21 AM To: 'Koichi Suzuki' Cc: 'pos...@li...' Subject: RE: [Postgres-xc-developers] Minor Fixes First patch. From: Matt Warner Sent: Friday, June 21, 2013 8:50 AM To: 'Koichi Suzuki' Cc: pos...@li... Subject: RE: [Postgres-xc-developers] Minor Fixes Yes, I'm running XC on Solaris x64. From: Koichi Suzuki [mailto:koi...@gm...] Sent: Thursday, June 20, 2013 6:34 PM To: Matt Warner Cc: pos...@li... Subject: Re: [Postgres-xc-developers] Minor Fixes Thanks a lot for the patch. As Michael mentioned, you can send a patch to the developers mailing list. BTW, the core team tested current XC on 64-bit Intel CentOS and others tested it against RedHat. Did you test XC on Solaris? Regards; ---------- Koichi Suzuki 2013/6/21 Matt Warner <MW...@xi...> Just a quick question about contributing fixes. I've had to make some minor changes to get XC compiled on Solaris x64. What format would you like to see for the changes? Most are very minor, such as removing return statements inside void functions (which the Solaris compiler flags as incorrect since you can't return from a void function). Matt ------------------------------------------------------------------------------ This SF.net email is sponsored by Windows: Build for Windows Store.
http://p.sf.net/sfu/windows-dev2dev _______________________________________________ Postgres-xc-developers mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers ------------------------------------------------------------------------------ This SF.net email is sponsored by Windows: Build for Windows Store. http://p.sf.net/sfu/windows-dev2dev _______________________________________________ Postgres-xc-developers mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Koichi S. <koi...@gm...> - 2013-06-27 06:18:52
|
Hi, I reviewed this thread again. It may be better to include Matt's patch in the master after we build REL1_1_STABLE so that he can continue his Solaris-related work on the master. As Ashutosh suggested, it will be less confusing not to include this in REL1_1_STABLE. Because I'm about to build REL1_1_STABLE for beta work, please let me know if anybody needs Matt's patch in 1.1. Matt, could you let me know your idea and whether you can continue to test XC on Solaris and declare that XC supports Solaris? Best Regards; ---------- Koichi Suzuki 2013/6/25 Koichi Suzuki <koi...@gm...> > I meant that removing a "return" statement which returns another function's > return value would be a good refactoring. Of course, a simple return may not > be removed. > > > ---------- > Koichi Suzuki > > > 2013/6/25 Koichi Suzuki <koi...@gm...> > >> Yeah. The code is not harmful at all. Removing "return" from void >> functions could be a good refactoring. Although Solaris is not supported >> officially yet, I think it's a good idea to have it in master. I do hope >> Matt continues to test XC so that we can tell XC runs on Solaris. >> >> Any more inputs? >> >> Regards; >> >> ---------- >> Koichi Suzuki >> >> >> 2013/6/25 Matt Warner <MW...@xi...> >> >>> I'll double check but I thought I'd only removed return from functions >>> declaring void as their return type. >>> >>> ? >>> >>> Matt >>> >>> On Jun 23, 2013, at 6:22 PM, "鈴木 幸市" <ko...@in...> wrote: >>> >>> The patch looks reasonable. One comment: removing "return" from a >>> non-void function will cause a Linux gcc warning. For this case, we need an >>> #ifdef SOLARIS directive. >>> >>> You sent two similar patches for proxy_main.c in separate e-mails. The >>> latter one seems to resolve my comment above. Although the core team >>> cannot declare that XC runs on Solaris so far, I think the patch is >>> reasonable to be included. >>> >>> Any other comments?
>>> --- >>> Koichi Suzuki >>> >>> >>> >>> On 2013/06/22, at 1:26, Matt Warner <MW...@XI...> wrote: >>> >>> Regarding the other changes, they are specific to Solaris. For example, >>> in src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include >>> sys/filio.h. I'll be searching to see if I can find a macro already defined >>> for Solaris that I can leverage to #ifdef those Solaris-specific items. >>> >>> Matt >>> >>> From: Matt Warner >>> Sent: Friday, June 21, 2013 9:21 AM >>> To: 'Koichi Suzuki' >>> Cc: 'pos...@li...' >>> Subject: RE: [Postgres-xc-developers] Minor Fixes >>> >>> First patch. >>> >>> From: Matt Warner >>> Sent: Friday, June 21, 2013 8:50 AM >>> To: 'Koichi Suzuki' >>> Cc: pos...@li... >>> Subject: RE: [Postgres-xc-developers] Minor Fixes >>> >>> Yes, I'm running XC on Solaris x64. >>> >>> From: Koichi Suzuki [mailto:koi...@gm...] >>> Sent: Thursday, June 20, 2013 6:34 PM >>> To: Matt Warner >>> Cc: pos...@li... >>> Subject: Re: [Postgres-xc-developers] Minor Fixes >>> >>> Thanks a lot for the patch. As Michael mentioned, you can send a patch >>> to the developers mailing list. >>> >>> BTW, the core team tested current XC on 64-bit Intel CentOS and others tested >>> it against RedHat. Did you test XC on Solaris? >>> >>> Regards; >>> >>> ---------- >>> Koichi Suzuki >>> >>> 2013/6/21 Matt Warner <MW...@xi...> >>> Just a quick question about contributing fixes. I've had to make some >>> minor changes to get XC compiled on Solaris x64. >>> What format would you like to see for the changes? Most are very minor, >>> such as removing return statements inside void functions (which the Solaris >>> compiler flags as incorrect since you can't return from a void function).
>>> >>> Matt >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> > |
From: Himpich, S. <Ste...@se...> - 2013-06-26 11:41:21
|
Hello *, first of all: thanks for a great project, taking postgres to the next level. We are looking forward to the feature "Online server addition and removal" which was announced for Version 1.1 [1]. Is this feature still planned for version 1.1 or was it postponed? And as the planned release date [1] is in the past, do you have any new estimates for the release of Version 1.1? Greetings, Stefan Himpich [1] http://postgres-xc.sourceforge.net/roadmap.html |
From: Koichi S. <koi...@gm...> - 2013-06-26 09:25:04
|
Amit; Could you fix parallel_schedule to run inherit in series? This can be committed right away. Regards; ---------- Koichi Suzuki 2013/6/26 Koichi Suzuki <koi...@gm...> > You're right. Should we change parallel_schedule? > > Regards; > > ---------- > Koichi Suzuki > > > 2013/6/26 Amit Khandekar <ami...@en...> > >> Now I see how to reproduce the inherit.sql failure. It fails only in >> the parallel schedule. And I just checked that it also failed 6 days ago >> with the same diff when I ran some regression test on the build farm. >> So it has nothing to do with the patch, and this test needs to be >> analyzed and fixed separately. >> >> On 26 June 2013 13:21, 鈴木 幸市 <ko...@in...> wrote: >> > It seems that the name of the materialized table may be environment-dependent. >> > Any idea how to make it environment-independent? >> > >> > Apparently, the result is correct but just does not match the expected ones. >> > >> > Regards; >> > --- >> > Koichi Suzuki >> > >> > >> > >> > On 2013/06/26, at 16:47, Koichi Suzuki <koi...@gm...> wrote: >> > >> > I tested this patch against the latest master (without any pending patches) and >> > found inherit fails in the linker machine environment. >> > >> > PFA related files. >> > >> > Regards; >> > >> > ---------- >> > Koichi Suzuki >> > >> > >> > 2013/6/26 Amit Khandekar <ami...@en...> >> >> >> >> On 26 June 2013 10:34, Amit Khandekar <ami...@en...> >> >> wrote: >> >> > On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> >> >> > wrote: >> >> >> Hi Amit, >> >> >> From a cursory look, this looks much cleaner than Abbas's patch. >> >> >> So, it >> >> >> looks to be the approach we should take. BTW, you need to update the >> >> >> original outputs as well, instead of just the alternate expected >> >> >> outputs.
>> >> >> Remember, the merge applies changes to the original expected outputs >> >> >> and not >> >> >> alternate ones (added in XC), thus we come to know about conflicts >> only >> >> >> when >> >> >> we apply changes to original expected outputs. >> >> > >> >> > Right. Will look into it. >> >> >> >> All of the expected files except inherit.sql are xc_* tests which >> >> don't have alternate files. I have made the same inherit_1.out changes >> >> onto inherit.out. Attached revised patch. >> >> >> >> Suzuki-san, I was looking at the following diff in the inherit.sql >> >> failure that you had attached from your local regression run: >> >> >> >> ! Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1) >> >> --- 1247,1253 ---- >> >> ! Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1) >> >> >> >> I am not sure if this has started coming after you applied the patch. >> >> Can you please once again run the test after reinitializing the >> >> cluster ? I am not getting this diff although I ran using >> >> serial_schedule, not parallel; and the diff itself looks harmless. As >> >> I mentioned, the actual temp schema name may differ. >> >> >> >> > >> >> >> >> >> >> Regards to temporary namespaces, is it possible to schema-qualify >> the >> >> >> temporary namespaces as always pg_temp irrespective of the actual >> name? >> >> > >> >> > get_namespace_name() is used to deparse the schema name. In order to >> >> > keep the deparsing logic working for both local queries (i.e. for >> view >> >> > for e.g.) and remote queries, we need to push in the context that the >> >> > deparsing is being done for remote queries, and this needs to be done >> >> > all the way from the uppermost function (say pg_get_querydef) upto >> >> > get_namespace_name() which does not look good. >> >> > >> >> > We need to think about some solution in general for the existing >> issue >> >> > of deparsing temp table names. Not sure of any solution. 
Maybe, >> after >> >> > we resolve the bug id in subject, the fix may even not require the >> >> > schema qualification. >> >> >> >> >> >> >> >> >> On Tue, Jun 25, 2013 at 8:02 PM, Amit Khandekar >> >> >> <ami...@en...> wrote: >> >> >>> >> >> >>> On 25 June 2013 19:59, Amit Khandekar >> >> >>> <ami...@en...> >> >> >>> wrote: >> >> >>> > Attached is a patch that does schema qualification by overriding >> the >> >> >>> > search_path just before doing the deparse in deparse_query(). >> >> >>> > PopOverrideSearchPath() in the end pops out the temp path. Also, >> the >> >> >>> > transaction callback function already takes care of popping out >> such >> >> >>> > Push in case of transaction rollback. >> >> >>> > >> >> >>> > Unfortunately we cannot apply this solution to temp tables. The >> >> >>> > problem is, when a pg_temp schema is deparsed, it is deparsed >> into >> >> >>> > pg_temp_1, pg_temp_2 etc. and these names are specific to the >> node. >> >> >>> > An >> >> >>> > object in pg_temp_2 at the coordinator may be present in pg_temp_1 at >> >> >>> > the datanode. So the remote query generated may or may not work on >> >> >>> > the datanode, so it is totally unreliable. >> >> >>> > >> >> >>> > In fact, the issue with pg_temp_1 names in the deparsed remote >> query >> >> >>> > is present even currently. >> >> >>> > >> >> >>> > But wherever it is a correctness issue to *not* schema qualify >> temp >> >> >>> > objects, I have kept the schema qualification. >> >> >>> > For e.g. a user can set search_path to "s1, pg_temp", >> >> >>> > and have obj1 in both s1 and pg_temp, and want to refer to >> >> >>> > pg_temp.obj1. >> >> >>> > In such a case, the remote query should have pg_temp_[1-9].obj1, >> >> >>> > although it may cause errors because of the existing issue. >> >> >>> > >> >> >>> --- >> >> >>> > So, the prepare-execute with search_path would remain there for >> temp >> >> >>> > tables. 
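The per-backend nature of the pg_temp_N schemas discussed above can be observed directly from the catalogs. A sketch, runnable in any PostgreSQL session; which suffix you actually see depends on the backend slot the session happens to get:

```sql
-- Create a temp table, then ask the catalogs which schema it landed in.
CREATE TEMP TABLE tmp_demo (f1 int);

SELECT n.nspname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'tmp_demo';
-- e.g. pg_temp_2 in this session, while another backend (such as a
-- datanode connection) may report pg_temp_1 for its own temp table.
```

Because the suffix is per-backend, a remote query deparsed as `pg_temp_2.tmp_demo` on the coordinator has no guarantee of matching the datanode's temp schema, which is exactly the unreliability described in the thread.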
>> >> >>> I mean, the prepare-execute issue with search_path would remain >> there >> >> >>> for temp tables. >> >> >>> >> >> >>> > >> >> >>> > I tried to run the regression by extracting regression expected >> >> >>> > output >> >> >>> > files from Abbas's patch, and regression passes, including >> >> >>> > plancache. >> >> >>> > >> >> >>> > I think for this release, we should go ahead by keeping this >> issue >> >> >>> > open for temp tables. This solution is an improvement, and does >> not >> >> >>> > cause any new issues. >> >> >>> > >> >> >>> > Comments welcome. >> >> >>> > >> >> >>> > >> >> >>> > On 24 June 2013 13:00, Ashutosh Bapat >> >> >>> > <ash...@en...> >> >> >>> > wrote: >> >> >>> >> Hi Abbas, >> >> >>> >> We are changing a lot of PostgreSQL deparsing code, which would >> >> >>> >> create >> >> >>> >> problems in the future merges. Since this change is in query >> >> >>> >> deparsing >> >> >>> >> logic >> >> >>> >> any errors here would affect EXPLAIN, pg_dump, etc. So, this >> patch >> >> >>> >> should >> >> >>> >> again be the last resort. >> >> >>> >> >> >> >>> >> Please take a look at how view definitions are dumped. That will >> >> >>> >> give a >> >> >>> >> good >> >> >>> >> idea as to how PG schema-qualifies (or not) objects. Here's how >> >> >>> >> the >> >> >>> >> view definition displayed changes with the search path. Since the code >> to >> >> >>> >> dump >> >> >>> >> views >> >> >>> >> and display definitions is the same, the view definition dumped also >> >> >>> >> changes >> >> >>> >> with the search path. Thus pg_dump must be using some trick to >> >> >>> >> always >> >> >>> >> dump a >> >> >>> >> consistent view definition (and hence a deparsed query). Thanks >> >> >>> >> Amit >> >> >>> >> for the >> >> >>> >> example. 
>> >> >>> >> >> >> >>> >> create table ttt (id int); >> >> >>> >> >> >> >>> >> postgres=# create domain dd int; >> >> >>> >> CREATE DOMAIN >> >> >>> >> postgres=# create view v2 as select id::dd from ttt; >> >> >>> >> CREATE VIEW >> >> >>> >> postgres=# set search_path TO ''; >> >> >>> >> SET >> >> >>> >> >> >> >>> >> postgres=# \d+ public.v2 >> >> >>> >> View "public.v2" >> >> >>> >> Column | Type | Modifiers | Storage | Description >> >> >>> >> --------+-----------+--------- >> >> >>> >> --+---------+------------- >> >> >>> >> id | public.dd | | plain | >> >> >>> >> View definition: >> >> >>> >> SELECT ttt.id::public.dd AS id >> >> >>> >> FROM public.ttt; >> >> >>> >> >> >> >>> >> postgres=# set search_path TO default ; >> >> >>> >> SET >> >> >>> >> postgres=# show search_path ; >> >> >>> >> search_path >> >> >>> >> ---------------- >> >> >>> >> "$user",public >> >> >>> >> (1 row) >> >> >>> >> >> >> >>> >> postgres=# \d+ public.v2 >> >> >>> >> View "public.v2" >> >> >>> >> Column | Type | Modifiers | Storage | Description >> >> >>> >> --------+------+-----------+---------+------------- >> >> >>> >> id | dd | | plain | >> >> >>> >> View definition: >> >> >>> >> SELECT ttt.id::dd AS id >> >> >>> >> FROM ttt; >> >> >>> >> >> >> >>> >> We need to leverage similar mechanism here to reduce PG >> footprint. >> >> >>> >> >> >> >>> >> >> >> >>> >> On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt >> >> >>> >> <abb...@en...> >> >> >>> >> wrote: >> >> >>> >>> >> >> >>> >>> Hi, >> >> >>> >>> As discussed in the last F2F meeting, here is an updated patch >> >> >>> >>> that >> >> >>> >>> provides schema qualification of the following objects: Tables, >> >> >>> >>> Views, >> >> >>> >>> Functions, Types and Domains in case of remote queries. >> >> >>> >>> Sequence functions are never concerned with datanodes hence, >> >> >>> >>> schema >> >> >>> >>> qualification is not required in case of sequences. 
>> >> >>> >>> This solves plancache test case failure issue and does not >> >> >>> >>> introduce >> >> >>> >>> any >> >> >>> >>> more failures. >> >> >>> >>> I have also attached some tests with results to aid in review. >> >> >>> >>> >> >> >>> >>> Comments are welcome. >> >> >>> >>> >> >> >>> >>> Regards >> >> >>> >>> >> >> >>> >>> >> >> >>> >>> >> >> >>> >>> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt >> >> >>> >>> <abb...@en...> >> >> >>> >>> wrote: >> >> >>> >>>> >> >> >>> >>>> Hi, >> >> >>> >>>> Attached please find a WIP patch that provides the >> functionality >> >> >>> >>>> of >> >> >>> >>>> preparing the statement at the datanodes as soon as it is >> >> >>> >>>> prepared >> >> >>> >>>> on the coordinator. >> >> >>> >>>> This is to take care of a test case in plancache that makes >> sure >> >> >>> >>>> that >> >> >>> >>>> change of search_path is ignored by replans. >> >> >>> >>>> While the patch fixes this replan test case and the regression >> >> >>> >>>> works >> >> >>> >>>> fine >> >> >>> >>>> there are still these two problems I have to take care of. >> >> >>> >>>> >> >> >>> >>>> 1. 
This test case fails >> >> >>> >>>> >> >> >>> >>>> CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) >> >> >>> >>>> DISTRIBUTE >> >> >>> >>>> BY >> >> >>> >>>> HASH(a); >> >> >>> >>>> INSERT INTO xc_alter_table_3 VALUES (1, 'a'); >> >> >>> >>>> PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; >> -- >> >> >>> >>>> fails >> >> >>> >>>> >> >> >>> >>>> test=# explain verbose DELETE FROM xc_alter_table_3 WHERE >> a = >> >> >>> >>>> 1; >> >> >>> >>>> QUERY PLAN >> >> >>> >>>> >> >> >>> >>>> >> >> >>> >>>> >> ------------------------------------------------------------------- >> >> >>> >>>> Delete on public.xc_alter_table_3 (cost=0.00..0.00 >> >> >>> >>>> rows=1000 >> >> >>> >>>> width=14) >> >> >>> >>>> Node/s: data_node_1, data_node_2, data_node_3, >> data_node_4 >> >> >>> >>>> Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE >> >> >>> >>>> ((xc_alter_table_3.ctid = $1) AND >> >> >>> >>>> (xc_alter_table_3.xc_node_id = $2)) >> >> >>> >>>> -> Data Node Scan on xc_alter_table_3 >> >> >>> >>>> "_REMOTE_TABLE_QUERY_" >> >> >>> >>>> (cost=0.00..0.00 rows=1000 width=14) >> >> >>> >>>> Output: xc_alter_table_3.a, >> xc_alter_table_3.ctid, >> >> >>> >>>> xc_alter_table_3.xc_node_id >> >> >>> >>>> Node/s: data_node_3 >> >> >>> >>>> Remote query: SELECT a, ctid, xc_node_id FROM >> ONLY >> >> >>> >>>> xc_alter_table_3 WHERE (a = 1) >> >> >>> >>>> (7 rows) >> >> >>> >>>> >> >> >>> >>>> The reason of the failure is that the select query is >> >> >>> >>>> selecting 3 >> >> >>> >>>> items, the first of which is an int, >> >> >>> >>>> whereas the delete query is comparing $1 with a ctid. >> >> >>> >>>> I am not sure how this works without prepare, but it fails >> >> >>> >>>> when >> >> >>> >>>> used >> >> >>> >>>> with prepare. 
>> >> >>> >>>> >> >> >>> >>>> The reason of this planning is this section of code in >> >> >>> >>>> function >> >> >>> >>>> pgxc_build_dml_statement >> >> >>> >>>> else if (cmdtype == CMD_DELETE) >> >> >>> >>>> { >> >> >>> >>>> /* >> >> >>> >>>> * Since there is no data to update, the first param >> is >> >> >>> >>>> going >> >> >>> >>>> to >> >> >>> >>>> be >> >> >>> >>>> * ctid. >> >> >>> >>>> */ >> >> >>> >>>> ctid_param_num = 1; >> >> >>> >>>> } >> >> >>> >>>> >> >> >>> >>>> Amit/Ashutosh can you suggest a fix for this problem? >> >> >>> >>>> There are a number of possibilities. >> >> >>> >>>> a) The select should not have selected column a. >> >> >>> >>>> b) The DELETE should have referred to $2 and $3 for ctid >> and >> >> >>> >>>> xc_node_id respectively. >> >> >>> >>>> c) Since the query works without PREPARE, we should make >> >> >>> >>>> PREPARE >> >> >>> >>>> work >> >> >>> >>>> the same way. >> >> >>> >>>> >> >> >>> >>>> >> >> >>> >>>> 2. This test case in plancache fails. >> >> >>> >>>> >> >> >>> >>>> -- Try it with a view, which isn't directly used in the >> >> >>> >>>> resulting >> >> >>> >>>> plan >> >> >>> >>>> -- but should trigger invalidation anyway >> >> >>> >>>> create table tab33 (a int, b int); >> >> >>> >>>> insert into tab33 values(1,2); >> >> >>> >>>> CREATE VIEW v_tab33 AS SELECT * FROM tab33; >> >> >>> >>>> PREPARE vprep AS SELECT * FROM v_tab33; >> >> >>> >>>> EXECUTE vprep; >> >> >>> >>>> CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM >> >> >>> >>>> tab33; >> >> >>> >>>> -- does not cause plan invalidation because views are >> never >> >> >>> >>>> created >> >> >>> >>>> on datanodes >> >> >>> >>>> EXECUTE vprep; >> >> >>> >>>> >> >> >>> >>>> and the reason of the failure is that views are never >> created >> >> >>> >>>> on >> >> >>> >>>> the >> >> >>> >>>> datanodes hence plan invalidation is not triggered. >> >> >>> >>>> This can be documented as an XC limitation. >> >> >>> >>>> >> >> >>> >>>> 3. 
I still have to add comments in the patch and some ifdefs >> may >> >> >>> >>>> be >> >> >>> >>>> missing too. >> >> >>> >>>> >> >> >>> >>>> >> >> >>> >>>> In addition to the patch I have also attached some example >> Java >> >> >>> >>>> programs >> >> >>> >>>> that test the some basic functionality through JDBC. I found >> that >> >> >>> >>>> these >> >> >>> >>>> programs are working fine after my patch. >> >> >>> >>>> >> >> >>> >>>> 1. Prepared.java : Issues parameterized delete, insert and >> update >> >> >>> >>>> through >> >> >>> >>>> JDBC. These are un-named prepared statements and works fine. >> >> >>> >>>> 2. NamedPrepared.java : Issues two named prepared statements >> >> >>> >>>> through >> >> >>> >>>> JDBC >> >> >>> >>>> and works fine. >> >> >>> >>>> 3. Retrieve.java : Runs a simple select to verify results. >> >> >>> >>>> The comments on top of the files explain their usage. >> >> >>> >>>> >> >> >>> >>>> Comments are welcome. >> >> >>> >>>> >> >> >>> >>>> Thanks >> >> >>> >>>> Regards >> >> >>> >>>> >> >> >>> >>>> >> >> >>> >>>> >> >> >>> >>>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat >> >> >>> >>>> <ash...@en...> wrote: >> >> >>> >>>>> >> >> >>> >>>>> >> >> >>> >>>>> >> >> >>> >>>>> >> >> >>> >>>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt >> >> >>> >>>>> <abb...@en...> wrote: >> >> >>> >>>>>> >> >> >>> >>>>>> >> >> >>> >>>>>> >> >> >>> >>>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat >> >> >>> >>>>>> <ash...@en...> wrote: >> >> >>> >>>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt >> >> >>> >>>>>>> <abb...@en...> wrote: >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> Attached please find updated patch to fix the bug. The >> patch >> >> >>> >>>>>>>> takes >> >> >>> >>>>>>>> care of the bug and the regression issues resulting from >> the >> >> >>> >>>>>>>> changes done in >> >> >>> >>>>>>>> the patch. 
Please note that the issue in test case >> plancache >> >> >>> >>>>>>>> still stands >> >> >>> >>>>>>>> unsolved because of the following test case (simplified >> but >> >> >>> >>>>>>>> taken >> >> >>> >>>>>>>> from >> >> >>> >>>>>>>> plancache.sql) >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> create schema s1 create table abc (f1 int); >> >> >>> >>>>>>>> create schema s2 create table abc (f1 int); >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> insert into s1.abc values(123); >> >> >>> >>>>>>>> insert into s2.abc values(456); >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> set search_path = s1; >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> prepare p1 as select f1 from abc; >> >> >>> >>>>>>>> execute p1; -- works fine, results in 123 >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> set search_path = s2; >> >> >>> >>>>>>>> execute p1; -- works fine after the patch, results in 123 >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> alter table s1.abc add column f2 float8; -- force replan >> >> >>> >>>>>>>> execute p1; -- fails >> >> >>> >>>>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> Huh! The beast bit us. >> >> >>> >>>>>>> >> >> >>> >>>>>>> I think the right solution here is either of two >> >> >>> >>>>>>> 1. Take your previous patch to always use qualified names >> (but >> >> >>> >>>>>>> you >> >> >>> >>>>>>> need to improve it not to affect the view dumps) >> >> >>> >>>>>>> 2. Prepare the statements at the datanode at the time of >> >> >>> >>>>>>> prepare. >> >> >>> >>>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> Is this test added new in 9.2? >> >> >>> >>>>>> >> >> >>> >>>>>> >> >> >>> >>>>>> No, it was added by commit >> >> >>> >>>>>> 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c >> >> >>> >>>>>> in >> >> >>> >>>>>> March 2007. >> >> >>> >>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> Why didn't we see this issue the first time prepare was >> >> >>> >>>>>>> implemented? I >> >> >>> >>>>>>> don't remember (but it was two years back). 
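For reference, the behaviour the plancache regression test expects from stock PostgreSQL is that the search_path captured at PREPARE time is restored for replans, so the script quoted above keeps returning 123. A sketch of the same steps against plain PostgreSQL (not XC):

```sql
-- Stock PostgreSQL behaviour for comparison: the search_path in effect
-- at PREPARE time is re-applied when the statement is replanned.
CREATE SCHEMA s1 CREATE TABLE abc (f1 int);
CREATE SCHEMA s2 CREATE TABLE abc (f1 int);
INSERT INTO s1.abc VALUES (123);
INSERT INTO s2.abc VALUES (456);

SET search_path = s1;
PREPARE p1 AS SELECT f1 FROM abc;
EXECUTE p1;                               -- 123

SET search_path = s2;
EXECUTE p1;                               -- still 123

ALTER TABLE s1.abc ADD COLUMN f2 float8;  -- forces a replan
EXECUTE p1;                               -- still 123 in plain PostgreSQL
```

The failing XC run in the quoted mail returns 456 after the forced replan, because the replan happens on the datanode under the datanode's *current* search_path rather than the one captured at PREPARE time.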
>> >> >>> >>>>>> >> >> >>> >>>>>> >> >> >>> >>>>>> I was unable to locate the exact reason but since statements >> >> >>> >>>>>> were >> >> >>> >>>>>> not >> >> >>> >>>>>> being prepared on datanodes due to a merge issue this issue >> >> >>> >>>>>> just >> >> >>> >>>>>> surfaced >> >> >>> >>>>>> up. >> >> >>> >>>>>> >> >> >>> >>>>> >> >> >>> >>>>> >> >> >>> >>>>> Well, even though statements were not getting prepared >> (actually >> >> >>> >>>>> prepared statements were not being used again and again) on >> >> >>> >>>>> datanodes, we >> >> >>> >>>>> never prepared them on datanode at the time of preparing the >> >> >>> >>>>> statement. So, >> >> >>> >>>>> this bug should have shown itself long back. >> >> >>> >>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>> >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> The last execute should result in 123, whereas it results >> in >> >> >>> >>>>>>>> 456. >> >> >>> >>>>>>>> The >> >> >>> >>>>>>>> reason is that the search path has already been changed at >> >> >>> >>>>>>>> the >> >> >>> >>>>>>>> datanode and >> >> >>> >>>>>>>> a replan would mean select from abc in s2. >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> >> >> >>> >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat >> >> >>> >>>>>>>> <ash...@en...> wrote: >> >> >>> >>>>>>>>> >> >> >>> >>>>>>>>> Hi Abbas, >> >> >>> >>>>>>>>> I think the fix is on the right track. There are couple >> of >> >> >>> >>>>>>>>> improvements that we need to do here (but you may not do >> >> >>> >>>>>>>>> those >> >> >>> >>>>>>>>> if the time >> >> >>> >>>>>>>>> doesn't permit). >> >> >>> >>>>>>>>> >> >> >>> >>>>>>>>> 1. We should have a status in RemoteQuery node, as to >> >> >>> >>>>>>>>> whether >> >> >>> >>>>>>>>> the >> >> >>> >>>>>>>>> query in the node should use extended protocol or not, >> >> >>> >>>>>>>>> rather >> >> >>> >>>>>>>>> than relying >> >> >>> >>>>>>>>> on the presence of statement name and parameters etc. 
>> Amit >> >> >>> >>>>>>>>> has >> >> >>> >>>>>>>>> already added >> >> >>> >>>>>>>>> a status with that effect. We need to leverage it. >> >> >>> >>>>>>>>> >> >> >>> >>>>>>>>> >> >> >>> >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt >> >> >>> >>>>>>>>> <abb...@en...> wrote: >> >> >>> >>>>>>>>>> >> >> >>> >>>>>>>>>> The patch fixes the dead code issue, that I described >> >> >>> >>>>>>>>>> earlier. >> >> >>> >>>>>>>>>> The >> >> >>> >>>>>>>>>> code was dead because of two issues: >> >> >>> >>>>>>>>>> >> >> >>> >>>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting >> >> >>> >>>>>>>>>> stmt_name to >> >> >>> >>>>>>>>>> NULL and this was the main reason >> >> >>> >>>>>>>>>> ActivateDatanodeStatementOnNode was not >> >> >>> >>>>>>>>>> being called in the function >> >> >>> >>>>>>>>>> pgxc_start_command_on_connection. >> >> >>> >>>>>>>>>> 2. The function SetRemoteStatementName was wrongly >> assuming >> >> >>> >>>>>>>>>> that a >> >> >>> >>>>>>>>>> prepared statement must have some parameters. >> >> >>> >>>>>>>>>> >> >> >>> >>>>>>>>>> Fixing these two issues makes sure that the function >> >> >>> >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and >> >> >>> >>>>>>>>>> statements >> >> >>> >>>>>>>>>> get prepared on >> >> >>> >>>>>>>>>> the datanode. >> >> >>> >>>>>>>>>> This patch would fix bug 3607975. It would however not >> fix >> >> >>> >>>>>>>>>> the >> >> >>> >>>>>>>>>> test >> >> >>> >>>>>>>>>> case I described in my previous email because of >> reasons I >> >> >>> >>>>>>>>>> described. >> >> >>> >>>>>>>>>> >> >> >>> >>>>>>>>>> >> >> >>> >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat >> >> >>> >>>>>>>>>> <ash...@en...> wrote: >> >> >>> >>>>>>>>>>> >> >> >>> >>>>>>>>>>> Can you please explain what this fix does? It would >> help >> >> >>> >>>>>>>>>>> to >> >> >>> >>>>>>>>>>> have >> >> >>> >>>>>>>>>>> an elaborate explanation with code snippets. 
>> >> >>> >>>>>>>>>>> >> >> >>> >>>>>>>>>>> >> >> >>> >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt >> >> >>> >>>>>>>>>>> <abb...@en...> wrote: >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat >> >> >>> >>>>>>>>>>>> <ash...@en...> wrote: >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt >> >> >>> >>>>>>>>>>>>> <abb...@en...> wrote: >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat >> >> >>> >>>>>>>>>>>>>> <ash...@en...> wrote: >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt >> >> >>> >>>>>>>>>>>>>>> <abb...@en...> wrote: >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> Hi, >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> While working on test case plancache it was >> brought >> >> >>> >>>>>>>>>>>>>>>> up as >> >> >>> >>>>>>>>>>>>>>>> a >> >> >>> >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should >> >> >>> >>>>>>>>>>>>>>>> solve >> >> >>> >>>>>>>>>>>>>>>> the problem of the >> >> >>> >>>>>>>>>>>>>>>> test case. >> >> >>> >>>>>>>>>>>>>>>> However there is some confusion in the statement >> of >> >> >>> >>>>>>>>>>>>>>>> bug >> >> >>> >>>>>>>>>>>>>>>> id >> >> >>> >>>>>>>>>>>>>>>> 3607975. >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs >> >> >>> >>>>>>>>>>>>>>>> multiple >> >> >>> >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and >> >> >>> >>>>>>>>>>>>>>>> executing >> >> >>> >>>>>>>>>>>>>>>> the query on >> >> >>> >>>>>>>>>>>>>>>> datanode al times, as against preparing once and >> >> >>> >>>>>>>>>>>>>>>> executing multiple times. 
>> >> >>> >>>>>>>>>>>>>>>> This is because somehow the remote query is being >> >> >>> >>>>>>>>>>>>>>>> prepared as an unnamed >> >> >>> >>>>>>>>>>>>>>>> statement." >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> Consider this test case >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> A. create table abc(a int, b int); >> >> >>> >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); >> >> >>> >>>>>>>>>>>>>>>> C. prepare p1 as select * from abc; >> >> >>> >>>>>>>>>>>>>>>> D. execute p1; >> >> >>> >>>>>>>>>>>>>>>> E. execute p1; >> >> >>> >>>>>>>>>>>>>>>> F. execute p1; >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> Here are the confusions >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> 1. The coordinator never prepares on datanode in >> >> >>> >>>>>>>>>>>>>>>> response >> >> >>> >>>>>>>>>>>>>>>> to >> >> >>> >>>>>>>>>>>>>>>> a prepare issued by a user. >> >> >>> >>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. >> >> >>> >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to >> >> >>> >>>>>>>>>>>>>>>> all >> >> >>> >>>>>>>>>>>>>>>> datanodes. >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan >> to >> >> >>> >>>>>>>>>>>>>>>> build >> >> >>> >>>>>>>>>>>>>>>> a >> >> >>> >>>>>>>>>>>>>>>> new generic plan, >> >> >>> >>>>>>>>>>>>>>>> and steps E and F use the already built >> generic >> >> >>> >>>>>>>>>>>>>>>> plan. >> >> >>> >>>>>>>>>>>>>>>> For details see function GetCachedPlan. >> >> >>> >>>>>>>>>>>>>>>> This means that executing a prepared statement >> >> >>> >>>>>>>>>>>>>>>> again >> >> >>> >>>>>>>>>>>>>>>> and >> >> >>> >>>>>>>>>>>>>>>> again does use cached plans >> >> >>> >>>>>>>>>>>>>>>> and does not prepare again and again every >> time >> >> >>> >>>>>>>>>>>>>>>> we >> >> >>> >>>>>>>>>>>>>>>> issue >> >> >>> >>>>>>>>>>>>>>>> an execute. >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> The problem is not here. 
The problem is in >> do_query() >> >> >>> >>>>>>>>>>>>>>> where >> >> >>> >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped >> out >> >> >>> >>>>>>>>>>>>>>> and >> >> >>> >>>>>>>>>>>>>>> we keep on >> >> >>> >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the >> >> >>> >>>>>>>>>>>>>> datanode. >> >> >>> >>>>>>>>>>>>>> I spent time looking at the code written in do_query >> >> >>> >>>>>>>>>>>>>> and >> >> >>> >>>>>>>>>>>>>> functions called >> >> >>> >>>>>>>>>>>>>> from with in do_query to handle prepared statements >> but >> >> >>> >>>>>>>>>>>>>> the >> >> >>> >>>>>>>>>>>>>> code written in >> >> >>> >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle >> statements >> >> >>> >>>>>>>>>>>>>> prepared on datanodes >> >> >>> >>>>>>>>>>>>>> is dead as of now. It is never called during the >> >> >>> >>>>>>>>>>>>>> complete >> >> >>> >>>>>>>>>>>>>> regression run. >> >> >>> >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is >> never >> >> >>> >>>>>>>>>>>>>> called. The way >> >> >>> >>>>>>>>>>>>>> prepared statements are being handled now is the >> same >> >> >>> >>>>>>>>>>>>>> as I >> >> >>> >>>>>>>>>>>>>> described earlier >> >> >>> >>>>>>>>>>>>>> in the mail chain with the help of an example. >> >> >>> >>>>>>>>>>>>>> The code that is dead was originally added by Mason >> >> >>> >>>>>>>>>>>>>> through >> >> >>> >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, >> back >> >> >>> >>>>>>>>>>>>>> in >> >> >>> >>>>>>>>>>>>>> December 2010. This >> >> >>> >>>>>>>>>>>>>> code has been changed a lot over the last two years. >> >> >>> >>>>>>>>>>>>>> This >> >> >>> >>>>>>>>>>>>>> commit does not >> >> >>> >>>>>>>>>>>>>> contain any test cases so I am not sure how did it >> use >> >> >>> >>>>>>>>>>>>>> to >> >> >>> >>>>>>>>>>>>>> work back then. 
>> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared >> >> >>> >>>>>>>>>>>>> statements. >> >> >>> >>>>>>>>>>>>> So, >> >> >>> >>>>>>>>>>>>> something has gone wrong in-between. That's what we >> need >> >> >>> >>>>>>>>>>>>> to >> >> >>> >>>>>>>>>>>>> find out and >> >> >>> >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not >> >> >>> >>>>>>>>>>>>> good >> >> >>> >>>>>>>>>>>>> for performance >> >> >>> >>>>>>>>>>>>> either. >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> I was able to find the reason why the code was dead >> and >> >> >>> >>>>>>>>>>>> the >> >> >>> >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now >> >> >>> >>>>>>>>>>>> ensure >> >> >>> >>>>>>>>>>>> that >> >> >>> >>>>>>>>>>>> statements are prepared on datanodes whenever >> required. >> >> >>> >>>>>>>>>>>> However there is a >> >> >>> >>>>>>>>>>>> problem in the way prepared statements are handled. >> The >> >> >>> >>>>>>>>>>>> problem is that >> >> >>> >>>>>>>>>>>> unless a prepared statement is executed it is never >> >> >>> >>>>>>>>>>>> prepared >> >> >>> >>>>>>>>>>>> on datanodes, >> >> >>> >>>>>>>>>>>> hence changing the path before executing the statement >> >> >>> >>>>>>>>>>>> gives >> >> >>> >>>>>>>>>>>> us incorrect >> >> >>> >>>>>>>>>>>> results. 
For Example >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> create schema s1 create table abc (f1 int) >> distribute >> >> >>> >>>>>>>>>>>> by >> >> >>> >>>>>>>>>>>> replication; >> >> >>> >>>>>>>>>>>> create schema s2 create table abc (f1 int) >> distribute >> >> >>> >>>>>>>>>>>> by >> >> >>> >>>>>>>>>>>> replication; >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> insert into s1.abc values(123); >> >> >>> >>>>>>>>>>>> insert into s2.abc values(456); >> >> >>> >>>>>>>>>>>> set search_path = s2; >> >> >>> >>>>>>>>>>>> prepare p1 as select f1 from abc; >> >> >>> >>>>>>>>>>>> set search_path = s1; >> >> >>> >>>>>>>>>>>> execute p1; >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>> The last execute results in 123, where as it should >> have >> >> >>> >>>>>>>>>>>> resulted >> >> >>> >>>>>>>>>>>> in 456. >> >> >>> >>>>>>>>>>>> I can finalize the attached patch by fixing any >> >> >>> >>>>>>>>>>>> regression >> >> >>> >>>>>>>>>>>> issues >> >> >>> >>>>>>>>>>>> that may result and that would fix 3607975 and improve >> >> >>> >>>>>>>>>>>> performance however >> >> >>> >>>>>>>>>>>> the above test case would still fail. >> >> >>> >>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not >> >> >>> >>>>>>>>>>>>>>>> reproducible. >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> Did you verify it under the debugger? If that would >> >> >>> >>>>>>>>>>>>>>> have >> >> >>> >>>>>>>>>>>>>>> been >> >> >>> >>>>>>>>>>>>>>> the case, we would not have seen this problem if >> >> >>> >>>>>>>>>>>>>>> search_path changed in >> >> >>> >>>>>>>>>>>>>>> between steps D and E. 
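Until prepared statements are actually prepared on the datanodes with matching settings, one workaround for the mis-resolution in the replicated-table example above may be to schema-qualify the table inside the PREPARE itself, which removes the dependence on search_path. A sketch using the schemas from that example; whether XC's deparsed remote query preserves the qualification in all cases is an assumption here:

```sql
-- Workaround sketch: qualify the table explicitly so neither the
-- coordinator nor the datanode has to re-resolve it via search_path.
PREPARE p1 AS SELECT f1 FROM s2.abc;
SET search_path = s1;
EXECUTE p1;   -- 456, independent of the current search_path
```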
>> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> If search path is changed between steps D & E, the >> >> >>> >>>>>>>>>>>>>> problem >> >> >>> >>>>>>>>>>>>>> occurs because when the remote query node is >> created, >> >> >>> >>>>>>>>>>>>>> schema qualification >> >> >>> >>>>>>>>>>>>>> is not added in the sql statement to be sent to the >> >> >>> >>>>>>>>>>>>>> datanode, but changes in >> >> >>> >>>>>>>>>>>>>> search path do get communicated to the datanode. The >> >> >>> >>>>>>>>>>>>>> sql >> >> >>> >>>>>>>>>>>>>> statement is built >> >> >>> >>>>>>>>>>>>>> when execute is issued for the first time and is >> reused >> >> >>> >>>>>>>>>>>>>> on >> >> >>> >>>>>>>>>>>>>> subsequent >> >> >>> >>>>>>>>>>>>>> executes. The datanode is totally unaware that the >> >> >>> >>>>>>>>>>>>>> select >> >> >>> >>>>>>>>>>>>>> that it just >> >> >>> >>>>>>>>>>>>>> received is due to an execute of a prepared >> statement >> >> >>> >>>>>>>>>>>>>> that >> >> >>> >>>>>>>>>>>>>> was prepared when >> >> >>> >>>>>>>>>>>>>> search path was some thing else. >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, >> >> >>> >>>>>>>>>>>>> would >> >> >>> >>>>>>>>>>>>> fix >> >> >>> >>>>>>>>>>>>> the problem, since the statement will get prepared at >> >> >>> >>>>>>>>>>>>> the >> >> >>> >>>>>>>>>>>>> datanode, with the >> >> >>> >>>>>>>>>>>>> same search path settings, as it would on the >> >> >>> >>>>>>>>>>>>> coordinator. >> >> >>> >>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> >> >> >>> >>>>>>>>>>>>>>>> Comments are welcome. 
--
Abbas
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

Follow us on Twitter
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company

<inherit.out> <inherit.sql> <regression.diffs> <regression.out>

_______________________________________________
Postgres-xc-developers mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

|
From: Koichi S. <koi...@gm...> - 2013-06-26 08:59:14
|
Unfortunately, this patch does not deal with plancache. We're still waiting for further input for plancache.

Regards;
----------
Koichi Suzuki

2013/6/26 Ahsan Hadi <ahs...@en...>:
> Hi Amit,
>
> So I believe your patch for the plancache regression is ready to be committed? As I understand, this is the only blocker remaining for beta.
>
> We need to freeze the changes for beta by EOD today unless we find another serious regression. The PG 924 merge exercise should begin from tomorrow as per plan.
>
> -- Ahsan

On Wed, Jun 26, 2013 at 1:46 PM, Amit Khandekar <ami...@en...> wrote:
> Now I see how to reproduce the inherit.sql failure. It fails only in the parallel schedule. And I just checked that it also failed 6 days ago with the same diff when I ran some regression tests on the build farm. So it has nothing to do with the patch, and this test needs to be analyzed and fixed separately.

On 26 June 2013 13:21, Koichi Suzuki <ko...@in...> wrote:
> It seems that the name of the materialized table may be environment-dependent. Any idea how to make it environment-independent?
>
> Apparently, the result is correct but just does not match the expected output.
>
> Regards;
> ---
> Koichi Suzuki

On 2013/06/26, at 16:47, Koichi Suzuki <koi...@gm...> wrote:
> I tested this patch on the latest master (without any pending patches) and found that inherit fails in the linker machine environment.
>
> PFA the related files.
>
> Regards;
> ----------
> Koichi Suzuki

2013/6/26 Amit Khandekar <ami...@en...>:
On 26 June 2013 10:34, Amit Khandekar <ami...@en...> wrote:
On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> wrote:
> Hi Amit,
> From a cursory look, this looks much cleaner than Abbas's patch, so it looks to be the approach we should take.
> BTW, you need to update the original outputs as well, instead of just the alternate expected outputs. Remember, the merge applies changes to the original expected outputs and not to the alternate ones (added in XC), so we come to know about conflicts only when we apply changes to the original expected outputs.

Amit Khandekar:
> Right. Will look into it.

Amit Khandekar (follow-up):
> All of the expected files except inherit.sql are xc_* tests which don't have alternate files. I have made the same inherit_1.out changes in inherit.out. Attached is the revised patch.
>
> Suzuki-san, I was looking at the following diff in the inherit.sql failure that you had attached from your local regression run:
>
>     ! Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1)
>     --- 1247,1253 ----
>     ! Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1)
>
> I am not sure whether this diff started appearing after you applied the patch. Can you please run the test once more after reinitializing the cluster? I am not getting this diff, although I ran using serial_schedule, not parallel; and the diff itself looks harmless. As I mentioned, the actual temp schema name may differ.

Ashutosh Bapat:
> Regarding temporary namespaces, is it possible to schema-qualify the temporary namespaces as always pg_temp, irrespective of the actual name?

Amit Khandekar:
> get_namespace_name() is used to deparse the schema name. In order to keep the deparsing logic working for both local queries (e.g. for views) and remote queries, we would need to push in the context that the deparsing is being done for a remote query, and this needs to be done all the way from the uppermost function (say pg_get_querydef) up to get_namespace_name(), which does not look good.
> We need to think about a solution in general for the existing issue of deparsing temp table names. Not sure of any solution. Maybe, after we resolve the bug id in the subject, the fix may not even require the schema qualification.

On 25 June 2013 19:59, Amit Khandekar <ami...@en...> wrote:
> Attached is a patch that does schema qualification by overriding the search_path just before doing the deparse in deparse_query(). PopOverrideSearchPath() at the end pops out the temp path. Also, the transaction callback function already takes care of popping out such a push in case of transaction rollback.
>
> Unfortunately we cannot apply this solution to temp tables. The problem is that when a pg_temp schema is deparsed, it is deparsed into pg_temp_1, pg_temp_2 etc., and these names are specific to the node. An object in pg_temp_2 at the coordinator may be present in pg_temp_1 at a datanode. So the remote query generated may or may not work on the datanode, and is totally unreliable.
>
> In fact, the issue with pg_temp_1 names in the deparsed remote query is present even currently.
>
> But wherever it is a correctness issue to *not* schema-qualify a temp object, I have kept the schema qualification. For example, a user can set search_path to "s1, pg_temp", have obj1 in both s1 and pg_temp, and want to refer to pg_temp.obj1. In such a case, the remote query should have pg_temp_[1-9].obj1, although it may cause errors because of the existing issue.
> So, the prepare-execute issue with search_path would remain there for temp tables.
>
> I tried to run the regression by extracting the regression expected output files from Abbas's patch, and the regression passes, including plancache.
>
> I think for this release, we should go ahead keeping this issue open for temp tables. This solution is an improvement, and does not cause any new issues.
>
> Comments welcome.

On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> We are changing a lot of PostgreSQL deparsing code, which would create problems in future merges. Since this change is in the query deparsing logic, any errors here would affect EXPLAIN, pg_dump etc. So this patch should again be a last resort.
>
> Please take a look at how view definitions are dumped. That will give a good idea as to how PG schema-qualifies (or does not schema-qualify) objects. Here's how the view definition displayed changes with the search path. Since the code to dump views and display definitions is the same, the view definition dumped also changes with the search path. Thus pg_dump must be using some trick to always dump a consistent view definition (and hence a deparsed query). Thanks Amit for the example.
> create table ttt (id int);
>
> postgres=# create domain dd int;
> CREATE DOMAIN
> postgres=# create view v2 as select id::dd from ttt;
> CREATE VIEW
> postgres=# set search_path TO '';
> SET
>
> postgres=# \d+ public.v2
>                 View "public.v2"
>  Column |   Type    | Modifiers | Storage | Description
> --------+-----------+-----------+---------+-------------
>  id     | public.dd |           | plain   |
> View definition:
>  SELECT ttt.id::public.dd AS id
>    FROM public.ttt;
>
> postgres=# set search_path TO default;
> SET
> postgres=# show search_path;
>   search_path
> ----------------
>  "$user",public
> (1 row)
>
> postgres=# \d+ public.v2
>          View "public.v2"
>  Column | Type | Modifiers | Storage | Description
> --------+------+-----------+---------+-------------
>  id     | dd   |           | plain   |
> View definition:
>  SELECT ttt.id::dd AS id
>    FROM ttt;
>
> We need to leverage a similar mechanism here to reduce the PG footprint.

On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <abb...@en...> wrote:
> Hi,
> As discussed in the last F2F meeting, here is an updated patch that provides schema qualification of the following objects in case of remote queries: tables, views, functions, types and domains. Sequence functions are never concerned with datanodes, hence schema qualification is not required in case of sequences.
> This solves the plancache test case failure and does not introduce any new failures. I have also attached some tests with results to aid review.
>
> Comments are welcome.
>
> Regards

On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> wrote:
> Hi,
> Attached please find a WIP patch that provides the functionality of preparing the statement at the datanodes as soon as it is prepared on the coordinator. This is to take care of a test case in plancache that makes sure that a change of search_path is ignored by replans. While the patch fixes this replan test case and the regression works fine, there are still two problems I have to take care of.
>
> 1.
> This test case fails:
>
>     CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
>     INSERT INTO xc_alter_table_3 VALUES (1, 'a');
>     PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails
>
>     test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
>                               QUERY PLAN
>     -------------------------------------------------------------------
>      Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
>        Node/s: data_node_1, data_node_2, data_node_3, data_node_4
>        Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE ((xc_alter_table_3.ctid = $1) AND (xc_alter_table_3.xc_node_id = $2))
>        ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=14)
>              Output: xc_alter_table_3.a, xc_alter_table_3.ctid, xc_alter_table_3.xc_node_id
>              Node/s: data_node_3
>              Remote query: SELECT a, ctid, xc_node_id FROM ONLY xc_alter_table_3 WHERE (a = 1)
>     (7 rows)
>
> The reason for the failure is that the SELECT query selects 3 items, the first of which is an int, whereas the DELETE query compares $1 with a ctid. I am not sure how this works without PREPARE, but it fails when used with PREPARE.
> The reason for this planning is this section of code in function pgxc_build_dml_statement:
>
>     else if (cmdtype == CMD_DELETE)
>     {
>         /*
>          * Since there is no data to update, the first param is going to be
>          * ctid.
>          */
>         ctid_param_num = 1;
>     }
>
> Amit/Ashutosh, can you suggest a fix for this problem? There are a number of possibilities:
> a) The SELECT should not have selected column a.
> b) The DELETE should have referred to $2 and $3 for ctid and xc_node_id respectively.
> c) Since the query works without PREPARE, we should make PREPARE work the same way.
>
> 2. This test case in plancache fails:
>
>     -- Try it with a view, which isn't directly used in the resulting plan
>     -- but should trigger invalidation anyway
>     create table tab33 (a int, b int);
>     insert into tab33 values(1,2);
>     CREATE VIEW v_tab33 AS SELECT * FROM tab33;
>     PREPARE vprep AS SELECT * FROM v_tab33;
>     EXECUTE vprep;
>     CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
>     -- does not cause plan invalidation because views are never created on datanodes
>     EXECUTE vprep;
>
> The reason for the failure is that views are never created on the datanodes, hence plan invalidation is not triggered. This can be documented as an XC limitation.
>
> 3.
> I still have to add comments in the patch, and some ifdefs may be missing too.
>
> In addition to the patch I have also attached some example Java programs that test some basic functionality through JDBC. I found that these programs work fine after my patch.
>
> 1. Prepared.java: issues parameterized DELETE, INSERT and UPDATE through JDBC. These are unnamed prepared statements and work fine.
> 2. NamedPrepared.java: issues two named prepared statements through JDBC and works fine.
> 3. Retrieve.java: runs a simple SELECT to verify results.
>
> The comments at the top of the files explain their usage.
>
> Comments are welcome.
>
> Thanks
> Regards

On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat <ash...@en...> wrote:
On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt <abb...@en...> wrote:
On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat wrote:
On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt wrote:
> Attached please find an updated patch to fix the bug. The patch takes care of the bug and the regression issues resulting from the changes done in the patch.
> Please note that the issue in test case plancache still stands unsolved because of the following test case (simplified, but taken from plancache.sql):
>
>     create schema s1 create table abc (f1 int);
>     create schema s2 create table abc (f1 int);
>
>     insert into s1.abc values(123);
>     insert into s2.abc values(456);
>
>     set search_path = s1;
>
>     prepare p1 as select f1 from abc;
>     execute p1; -- works fine, results in 123
>
>     set search_path = s2;
>     execute p1; -- works fine after the patch, results in 123
>
>     alter table s1.abc add column f2 float8; -- force replan
>     execute p1; -- fails

Ashutosh Bapat:
> Huh! The beast bit us.
>
> I think the right solution here is either of two:
> 1. Take your previous patch to always use qualified names (but you need to improve it not to affect the view dumps).
> 2. Prepare the statements at the datanode at the time of PREPARE.
>
> Is this test added new in 9.2?

Abbas Butt:
> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in March 2007.

Ashutosh Bapat:
> Why didn't we see this issue the first time prepare was implemented? I don't remember (but it was two years back).
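The role schema qualification plays in the failing test above can be sketched in terms of the SQL the coordinator ships to the datanodes; the remote query text below is illustrative, not taken from an actual run:

```sql
-- After: set search_path = s1; prepare p1 as select f1 from abc;

-- Without qualification, the coordinator ships the statement as written,
-- so a later replan on the datanode resolves "abc" against whatever
-- search_path is in effect at that moment:
--   SELECT f1 FROM abc       -- resolves to s2.abc once search_path = s2

-- With qualification, the schema resolved at PREPARE time is pinned into
-- the shipped statement, and later search_path changes cannot redirect it:
--   SELECT f1 FROM s1.abc    -- always reads s1.abc
```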
Abbas Butt:
> I was unable to locate the exact reason, but since statements were not being prepared on datanodes due to a merge issue, this issue just surfaced now.

Ashutosh Bapat:
> Well, even though statements were not getting prepared on datanodes (actually, prepared statements were not being used again and again), we never prepared them on the datanode at the time of preparing the statement. So this bug should have shown itself long back.

Abbas Butt (from the earlier mail):
> The last execute should result in 123, whereas it results in 456. The reason is that the search path has already been changed at the datanode, and a replan would mean a select from abc in s2.

On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> I think the fix is on the right track. There are a couple of improvements that we need to do here (but you may not do those if time doesn't permit).
>
> 1. We should have a status in the RemoteQuery node as to whether the query in the node should use the extended protocol or not, rather than relying on the presence of the statement name and parameters etc.
> Amit has already added a status with that effect. We need to leverage it.

On Tue, May 28, 2013 at 9:04 AM, Abbas Butt <abb...@en...> wrote:
> The patch fixes the dead code issue that I described earlier. The code was dead because of two issues:
>
> 1. The function CompleteCachedPlan was wrongly setting stmt_name to NULL, and this was the main reason ActivateDatanodeStatementOnNode was not being called in the function pgxc_start_command_on_connection.
> 2. The function SetRemoteStatementName was wrongly assuming that a prepared statement must have some parameters.
>
> Fixing these two issues makes sure that the function ActivateDatanodeStatementOnNode is now called and statements get prepared on the datanodes. This patch would fix bug 3607975. It would, however, not fix the test case I described in my previous email, for the reasons I described.

On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat wrote:
> Can you please explain what this fix does? It would help to have an elaborate explanation with code snippets.
On Sun, May 26, 2013 at 10:18 PM, Abbas Butt <abb...@en...> wrote:
On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat <ash...@en...> wrote:
On Fri, May 24, 2013 at 9:01 AM, Abbas Butt wrote:
On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat wrote:
On Thu, May 23, 2013 at 9:21 PM, Abbas Butt wrote:
> Hi,
>
> While working on test case plancache it was brought up as a review comment that solving bug id 3607975 should solve the problem of the test case. However, there is some confusion in the statement of bug id 3607975:
>
> "When a user does a PREPARE and then EXECUTEs multiple times, the coordinator keeps on preparing and executing the query on the datanode all times, as against preparing once and executing multiple times.
> This is because somehow the remote query is being prepared as an unnamed statement."
>
> Consider this test case:
>
>     A. create table abc(a int, b int);
>     B. insert into abc values(11, 22);
>     C. prepare p1 as select * from abc;
>     D. execute p1;
>     E. execute p1;
>     F. execute p1;
>
> Here are the confusions:
>
> 1. The coordinator never prepares on a datanode in response to a PREPARE issued by a user. In fact, step C does nothing on the datanodes. Step D simply sends "SELECT a, b FROM abc" to all datanodes.
>
> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a new generic plan, and steps E and F use the already built generic plan. For details see function GetCachedPlan. This means that executing a prepared statement again and again does use cached plans, and does not prepare again and again every time we issue an execute.

Ashutosh Bapat (Fri, May 24, 7:22 AM):
> The problem is not here.
The problem is in do_query(), where somehow the name of the prepared statement gets wiped out and we keep on preparing unnamed statements at the datanode.

Abbas Butt wrote:
We never prepare any named or unnamed statements on the datanode. I spent time looking at the code written in do_query and the functions called from within do_query to handle prepared statements, but the code written in pgxc_start_command_on_connection to handle statements prepared on datanodes is dead as of now. It is never called during the complete regression run; the function ActivateDatanodeStatementOnNode is never called. The way prepared statements are handled now is the same as I described earlier in the mail chain with the help of an example. The dead code was originally added by Mason through commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This code has been changed a lot over the last two years, and that commit does not contain any test cases, so I am not sure how it used to work back then.
Ashutosh Bapat wrote:
This code wasn't dead when I worked on prepared statements, so something has gone wrong in-between. That's what we need to find out and fix. Not preparing statements on the datanode is not good for performance either.

Abbas Butt wrote:
I was able to find the reason why the code was dead, and the attached patch (WIP) fixes the problem. This would now ensure that statements are prepared on datanodes whenever required. However, there is a problem in the way prepared statements are handled: unless a prepared statement is executed, it is never prepared on the datanodes, hence changing the path before executing the statement gives us incorrect results.
For example:

create schema s1 create table abc (f1 int) distribute by replication;
create schema s2 create table abc (f1 int) distribute by replication;

insert into s1.abc values(123);
insert into s2.abc values(456);
set search_path = s2;
prepare p1 as select f1 from abc;
set search_path = s1;
execute p1;

The last execute results in 123, whereas it should have resulted in 456. I can finalize the attached patch by fixing any regression issues that may result, and that would fix 3607975 and improve performance; however, the above test case would still fail.

Abbas Butt had written earlier:
My conclusion is that the bug ID 3607975 is not reproducible.

Ashutosh Bapat wrote:
Did you verify it under the debugger? Had that been the case, we would not have seen this problem when search_path changed in between steps D and E.
Abbas Butt wrote:
If the search path is changed between steps D and E, the problem occurs because when the remote query node is created, schema qualification is not added to the SQL statement to be sent to the datanode, but changes in the search path do get communicated to the datanode. The SQL statement is built when EXECUTE is issued for the first time and is reused on subsequent executes. The datanode is totally unaware that the SELECT it just received is due to an EXECUTE of a prepared statement that was prepared when the search path was something else.

Ashutosh Bapat wrote:
Fixing the prepared statements the way I suggested would fix the problem, since the statement will get prepared at the datanode with the same search path settings as it would on the coordinator.

Abbas Butt wrote:
Comments are welcome.
--
Abbas
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

Follow us on Twitter
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more

_______________________________________________
Postgres-xc-developers mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
http://p.sf.net/sfu/windows-dev2dev
_______________________________________________
Postgres-xc-developers mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

Attachments: <inherit.out> <inherit.sql> <regression.diffs> <regression.out>
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
Ahsan Hadi
Snr Director Product Development
EnterpriseDB Corporation
The Enterprise Postgres Company

Phone: +92-51-8358874
Mobile: +92-333-5162114

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb
From: Koichi S. <koi...@gm...> - 2013-06-26 08:56:43
You're right. Should we change parallel_schedule?

Regards;
----------
Koichi Suzuki

2013/6/26 Amit Khandekar <ami...@en...>:
Now I see how to reproduce the inherit.sql failure. It fails only in the parallel schedule. And I just checked that it also failed 6 days ago with the same diff when I ran some regression tests on the build farm. So it has nothing to do with the patch, and this test needs to be analyzed and fixed separately.

On 26 June 2013 13:21, 鈴木 幸市 (Koichi Suzuki) <ko...@in...> wrote:
It seems that the name of the materialized table may be environment-dependent. Any idea how to make it environment-independent?

Apparently, the result is correct but just does not match the expected one.

On 2013/06/26, at 16:47, Koichi Suzuki <koi...@gm...> wrote:
I tested this patch against the latest master (without any pending patches) and found that inherit fails in the linker machine environment.

PFA the related files.

On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> wrote:
Hi Amit,
From a cursory look, this looks much cleaner than Abbas's patch, so it looks to be the approach we should take. BTW, you need to update the original expected outputs as well, instead of just the alternate expected outputs. Remember, the merge applies changes to the original expected outputs and not to the alternate ones (added in XC), thus we come to know about conflicts only when we apply changes to the original expected outputs.

Amit Khandekar wrote:
Right. Will look into it.

All of the expected files except inherit.sql are xc_* tests, which don't have alternate files. I have made the same inherit_1.out changes onto inherit.out.
Amit Khandekar wrote:
Attached revised patch.

Suzuki-san, I was looking at the following diff in the inherit.sql failure that you had attached from your local regression run:

!  Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1)
--- 1247,1253 ----
!  Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1)

I am not sure if this started coming after you applied the patch. Can you please run the test once again after reinitializing the cluster? I am not getting this diff, although I ran using serial_schedule, not parallel, and the diff itself looks harmless. As I mentioned, the actual temp schema name may differ.

Ashutosh Bapat wrote:
Regarding temporary namespaces, is it possible to schema-qualify the temporary namespaces as always pg_temp, irrespective of the actual name?

Amit Khandekar wrote:
get_namespace_name() is used to deparse the schema name. In order to keep the deparsing logic working for both local queries (e.g. for views) and remote queries, we would need to push in the context that the deparsing is being done for remote queries, and this would have to be done all the way from the uppermost function (say pg_get_querydef) down to get_namespace_name(), which does not look good.

We need to think about some general solution for the existing issue of deparsing temp table names. Not sure of any solution. Maybe, after we resolve the bug id in the subject, the fix may not even require schema qualification.

On 25 June 2013 19:59, Amit Khandekar <ami...@en...> wrote:
Also, > the > >> >>> > transaction callback function already takes care of popping out > such > >> >>> > Push in case of transaction rollback. > >> >>> > > >> >>> > Unfortunately we cannot apply this solution to temp tables. The > >> >>> > problem is, when a pg_temp schema is deparsed, it is deparsed into > >> >>> > pg_temp_1, pg_temp_2 etc. and these names are specific to the > node. > >> >>> > An > >> >>> > object in pg_temp_2 at coordinator may be present in pg_temp_1 at > >> >>> > datanode. So the remote query generated may or may not work on > >> >>> > datanode, so totally unreliable. > >> >>> > > >> >>> > In fact, the issue with pg_temp_1 names in the deparsed remote > query > >> >>> > is present even currently. > >> >>> > > >> >>> > But wherever it is a correctness issue to *not* schema qualify > temp > >> >>> > object, I have kept the schema qualification. > >> >>> > For e.g. user can set search_path to" s1, pg_temp", > >> >>> > and have obj1 in both s1 and pg_temp, and want to refer to > >> >>> > pg_temp.s1. > >> >>> > In such case, the remote query should have pg_temp_[1-9].obj1, > >> >>> > although it may cause errors because of the existing issue. > >> >>> > > >> >>> --- > >> >>> > So, the prepare-execute with search_path would remain there for > temp > >> >>> > tables. > >> >>> I mean, the prepare-exute issue with search_patch would remain there > >> >>> for temp tables. > >> >>> > >> >>> > > >> >>> > I tried to run the regression by extracting regression expected > >> >>> > output > >> >>> > files from Abbas's patch , and regression passes, including > >> >>> > plancache. > >> >>> > > >> >>> > I think for this release, we should go ahead by keeping this issue > >> >>> > open for temp tables. This solution is an improvement, and does > not > >> >>> > cause any new issues. > >> >>> > > >> >>> > Comments welcome. 
> >> >>> > > >> >>> > > >> >>> > On 24 June 2013 13:00, Ashutosh Bapat > >> >>> > <ash...@en...> > >> >>> > wrote: > >> >>> >> Hi Abbas, > >> >>> >> We are changing a lot of PostgreSQL deparsing code, which would > >> >>> >> create > >> >>> >> problems in the future merges. Since this change is in query > >> >>> >> deparsing > >> >>> >> logic > >> >>> >> any errors here would affect, EXPLAIN/ pg_dump etc. So, this > patch > >> >>> >> should > >> >>> >> again be the last resort. > >> >>> >> > >> >>> >> Please take a look at how view definitions are dumped. That will > >> >>> >> give a > >> >>> >> good > >> >>> >> idea as to how PG schema-qualifies (or not) objects. Here's how > >> >>> >> view > >> >>> >> definitions displayed changes with search path. Since the code to > >> >>> >> dump > >> >>> >> views > >> >>> >> and display definitions is same, the view definition dumped also > >> >>> >> changes > >> >>> >> with the search path. Thus pg_dump must be using some trick to > >> >>> >> always > >> >>> >> dump a > >> >>> >> consistent view definition (and hence a deparsed query). Thanks > >> >>> >> Amit > >> >>> >> for the > >> >>> >> example. 
create table ttt (id int);

postgres=# create domain dd int;
CREATE DOMAIN
postgres=# create view v2 as select id::dd from ttt;
CREATE VIEW
postgres=# set search_path TO '';
SET

postgres=# \d+ public.v2
            View "public.v2"
 Column |   Type    | Modifiers | Storage | Description
--------+-----------+-----------+---------+-------------
 id     | public.dd |           | plain   |
View definition:
 SELECT ttt.id::public.dd AS id
   FROM public.ttt;

postgres=# set search_path TO default;
SET
postgres=# show search_path;
  search_path
----------------
 "$user",public
(1 row)

postgres=# \d+ public.v2
            View "public.v2"
 Column | Type | Modifiers | Storage | Description
--------+------+-----------+---------+-------------
 id     | dd   |           | plain   |
View definition:
 SELECT ttt.id::dd AS id
   FROM ttt;

We need to leverage a similar mechanism here to reduce the PG footprint.

On Mon, Jun 24, 2013, Abbas Butt <abb...@en...> wrote:
Hi,
As discussed in the last F2F meeting, here is an updated patch that provides schema qualification of the following objects in case of remote queries: tables, views, functions, types and domains. Sequence functions are never concerned with datanodes, hence schema qualification is not required in the case of sequences. This solves the plancache test case failure issue and does not introduce any more failures.
I have also attached some tests with results to aid in review.

Comments are welcome.

Regards

On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> wrote:
Hi,
Attached please find a WIP patch that provides the functionality of preparing the statement at the datanodes as soon as it is prepared on the coordinator. This is to take care of a test case in plancache that makes sure that a change of search_path is ignored by replans. While the patch fixes this replan test case and the regression runs fine, there are still these two problems I have to take care of.

1. This test case fails:

   CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
   INSERT INTO xc_alter_table_3 VALUES (1, 'a');
   PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails

   test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
                              QUERY PLAN
   -------------------------------------------------------------------
    Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
      Node/s: data_node_1, data_node_2, data_node_3, data_node_4
      Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE ((xc_alter_table_3.ctid = $1) AND (xc_alter_table_3.xc_node_id = $2))
      ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000
width=14)
            Output: xc_alter_table_3.a, xc_alter_table_3.ctid, xc_alter_table_3.xc_node_id
            Node/s: data_node_3
            Remote query: SELECT a, ctid, xc_node_id FROM ONLY xc_alter_table_3 WHERE (a = 1)
   (7 rows)

   The reason for the failure is that the select query is selecting 3 items, the first of which is an int, whereas the delete query is comparing $1 with a ctid. I am not sure how this works without PREPARE, but it fails when used with PREPARE.

   The reason for this planning is this section of code in function pgxc_build_dml_statement:

   else if (cmdtype == CMD_DELETE)
   {
       /*
        * Since there is no data to update, the first param is going to be
        * the ctid.
        */
       ctid_param_num = 1;
   }

   Amit/Ashutosh, can you suggest a fix for this problem? There are a number of possibilities:
   a) The select should not have selected column a.
   b) The DELETE should have referred to $2 and $3 for ctid and xc_node_id respectively.
   c) Since the query works without PREPARE, we should make PREPARE work the same way.

2. This test case in plancache fails.
   -- Try it with a view, which isn't directly used in the resulting plan
   -- but should trigger invalidation anyway
   create table tab33 (a int, b int);
   insert into tab33 values(1,2);
   CREATE VIEW v_tab33 AS SELECT * FROM tab33;
   PREPARE vprep AS SELECT * FROM v_tab33;
   EXECUTE vprep;
   CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
   -- does not cause plan invalidation because views are never created on datanodes
   EXECUTE vprep;

   The reason for the failure is that views are never created on the datanodes, hence plan invalidation is not triggered. This can be documented as an XC limitation.

3. I still have to add comments in the patch, and some ifdefs may be missing too.

In addition to the patch I have also attached some example Java programs that test some basic functionality through JDBC. I found that these programs work fine after my patch.

1. Prepared.java: issues parameterized delete, insert and update through JDBC. These are un-named prepared statements and work fine.
2. NamedPrepared.java: issues two named prepared statements through JDBC and works fine.
3. Retrieve.java: runs a simple select to verify results.

The comments on top of the files explain their usage.

Comments are welcome.

Thanks
Regards

On Mon, Jun 3, 2013, Abbas Butt wrote:
Attached please find an updated patch to fix the bug. The patch takes care of the bug and of the regression issues resulting from the changes done in the patch. Please note that the issue in test case plancache still stands unsolved because of the following test case (simplified, but taken from plancache.sql):

create schema s1 create table abc (f1 int);
create schema s2 create table abc (f1 int);

insert into s1.abc values(123);
insert into s2.abc values(456);

set search_path = s1;

prepare p1 as select f1 from abc;
execute p1; -- works fine, results in 123

set search_path = s2;
execute p1; -- works fine after the patch, results in 123

alter table s1.abc add column f2 float8; -- force replan
execute p1; -- fails

Ashutosh Bapat wrote:
Huh! The beast bit us.

I think the right solution here is either of two:
1. Take your previous patch to always use qualified names (but you need to improve it not to affect the view dumps).
2. Prepare the statements at the datanode at the time of PREPARE.

Is this test added new in 9.2?

Abbas Butt wrote:
No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in March 2007.

Ashutosh Bapat wrote:
Why didn't we see this issue the first time prepare was implemented? I don't remember (but it was two years back).

Abbas Butt wrote:
I was unable to locate the exact reason, but since statements were not being prepared on datanodes due to a merge issue, this issue just surfaced up.

Ashutosh Bapat wrote:
Well, even though statements were not getting prepared on datanodes (actually, prepared statements were not being used again and again), we never prepared them on the datanode at the time of preparing the statement. So this bug should have shown itself long back.

Abbas Butt wrote:
The last execute should result in 123, whereas it results in 456. The reason is that the search path has already been changed at the datanode, and a replan would mean select from abc in s2.
> >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat > >> >>> >>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> Hi Abbas, > >> >>> >>>>>>>>> I think the fix is on the right track. There are couple of > >> >>> >>>>>>>>> improvements that we need to do here (but you may not do > >> >>> >>>>>>>>> those > >> >>> >>>>>>>>> if the time > >> >>> >>>>>>>>> doesn't permit). > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> 1. We should have a status in RemoteQuery node, as to > >> >>> >>>>>>>>> whether > >> >>> >>>>>>>>> the > >> >>> >>>>>>>>> query in the node should use extended protocol or not, > >> >>> >>>>>>>>> rather > >> >>> >>>>>>>>> than relying > >> >>> >>>>>>>>> on the presence of statement name and parameters etc. Amit > >> >>> >>>>>>>>> has > >> >>> >>>>>>>>> already added > >> >>> >>>>>>>>> a status with that effect. We need to leverage it. > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt > >> >>> >>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> The patch fixes the dead code issue, that I described > >> >>> >>>>>>>>>> earlier. > >> >>> >>>>>>>>>> The > >> >>> >>>>>>>>>> code was dead because of two issues: > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting > >> >>> >>>>>>>>>> stmt_name to > >> >>> >>>>>>>>>> NULL and this was the main reason > >> >>> >>>>>>>>>> ActivateDatanodeStatementOnNode was not > >> >>> >>>>>>>>>> being called in the function > >> >>> >>>>>>>>>> pgxc_start_command_on_connection. > >> >>> >>>>>>>>>> 2. The function SetRemoteStatementName was wrongly > assuming > >> >>> >>>>>>>>>> that a > >> >>> >>>>>>>>>> prepared statement must have some parameters. 
> >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> Fixing these two issues makes sure that the function > >> >>> >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and > >> >>> >>>>>>>>>> statements > >> >>> >>>>>>>>>> get prepared on > >> >>> >>>>>>>>>> the datanode. > >> >>> >>>>>>>>>> This patch would fix bug 3607975. It would however not > fix > >> >>> >>>>>>>>>> the > >> >>> >>>>>>>>>> test > >> >>> >>>>>>>>>> case I described in my previous email because of reasons > I > >> >>> >>>>>>>>>> described. > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat > >> >>> >>>>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> Can you please explain what this fix does? It would help > >> >>> >>>>>>>>>>> to > >> >>> >>>>>>>>>>> have > >> >>> >>>>>>>>>>> an elaborate explanation with code snippets. > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt > >> >>> >>>>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat > >> >>> >>>>>>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt > >> >>> >>>>>>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat > >> >>> >>>>>>>>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt > >> >>> >>>>>>>>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Hi, > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> While working on test case plancache it was brought > >> >>> >>>>>>>>>>>>>>>> 
up as > >> >>> >>>>>>>>>>>>>>>> a > >> >>> >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should > >> >>> >>>>>>>>>>>>>>>> solve > >> >>> >>>>>>>>>>>>>>>> the problem of the > >> >>> >>>>>>>>>>>>>>>> test case. > >> >>> >>>>>>>>>>>>>>>> However there is some confusion in the statement of > >> >>> >>>>>>>>>>>>>>>> bug > >> >>> >>>>>>>>>>>>>>>> id > >> >>> >>>>>>>>>>>>>>>> 3607975. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs > >> >>> >>>>>>>>>>>>>>>> multiple > >> >>> >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and > >> >>> >>>>>>>>>>>>>>>> executing > >> >>> >>>>>>>>>>>>>>>> the query on > >> >>> >>>>>>>>>>>>>>>> datanode al times, as against preparing once and > >> >>> >>>>>>>>>>>>>>>> executing multiple times. > >> >>> >>>>>>>>>>>>>>>> This is because somehow the remote query is being > >> >>> >>>>>>>>>>>>>>>> prepared as an unnamed > >> >>> >>>>>>>>>>>>>>>> statement." > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Consider this test case > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> A. create table abc(a int, b int); > >> >>> >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); > >> >>> >>>>>>>>>>>>>>>> C. prepare p1 as select * from abc; > >> >>> >>>>>>>>>>>>>>>> D. execute p1; > >> >>> >>>>>>>>>>>>>>>> E. execute p1; > >> >>> >>>>>>>>>>>>>>>> F. execute p1; > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Here are the confusions > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> 1. The coordinator never prepares on datanode in > >> >>> >>>>>>>>>>>>>>>> response > >> >>> >>>>>>>>>>>>>>>> to > >> >>> >>>>>>>>>>>>>>>> a prepare issued by a user. > >> >>> >>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. > >> >>> >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to > >> >>> >>>>>>>>>>>>>>>> all > >> >>> >>>>>>>>>>>>>>>> datanodes. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> 2. 
In step D, ExecuteQuery calls BuildCachedPlan to > >> >>> >>>>>>>>>>>>>>>> build > >> >>> >>>>>>>>>>>>>>>> a > >> >>> >>>>>>>>>>>>>>>> new generic plan, > >> >>> >>>>>>>>>>>>>>>> and steps E and F use the already built generic > >> >>> >>>>>>>>>>>>>>>> plan. > >> >>> >>>>>>>>>>>>>>>> For details see function GetCachedPlan. > >> >>> >>>>>>>>>>>>>>>> This means that executing a prepared statement > >> >>> >>>>>>>>>>>>>>>> again > >> >>> >>>>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>>>> again does use cached plans > >> >>> >>>>>>>>>>>>>>>> and does not prepare again and again every time > >> >>> >>>>>>>>>>>>>>>> we > >> >>> >>>>>>>>>>>>>>>> issue > >> >>> >>>>>>>>>>>>>>>> an execute. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> The problem is not here. The problem is in > do_query() > >> >>> >>>>>>>>>>>>>>> where > >> >>> >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped > out > >> >>> >>>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>>> we keep on > >> >>> >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the > >> >>> >>>>>>>>>>>>>> datanode. > >> >>> >>>>>>>>>>>>>> I spent time looking at the code written in do_query > >> >>> >>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>> functions called > >> >>> >>>>>>>>>>>>>> from with in do_query to handle prepared statements > but > >> >>> >>>>>>>>>>>>>> the > >> >>> >>>>>>>>>>>>>> code written in > >> >>> >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements > >> >>> >>>>>>>>>>>>>> prepared on datanodes > >> >>> >>>>>>>>>>>>>> is dead as of now. It is never called during the > >> >>> >>>>>>>>>>>>>> complete > >> >>> >>>>>>>>>>>>>> regression run. > >> >>> >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never > >> >>> >>>>>>>>>>>>>> called. 
The way > >> >>> >>>>>>>>>>>>>> prepared statements are being handled now is the same > >> >>> >>>>>>>>>>>>>> as I > >> >>> >>>>>>>>>>>>>> described earlier > >> >>> >>>>>>>>>>>>>> in the mail chain with the help of an example. > >> >>> >>>>>>>>>>>>>> The code that is dead was originally added by Mason > >> >>> >>>>>>>>>>>>>> through > >> >>> >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back > >> >>> >>>>>>>>>>>>>> in > >> >>> >>>>>>>>>>>>>> December 2010. This > >> >>> >>>>>>>>>>>>>> code has been changed a lot over the last two years. > >> >>> >>>>>>>>>>>>>> This > >> >>> >>>>>>>>>>>>>> commit does not > >> >>> >>>>>>>>>>>>>> contain any test cases so I am not sure how did it > use > >> >>> >>>>>>>>>>>>>> to > >> >>> >>>>>>>>>>>>>> work back then. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared > >> >>> >>>>>>>>>>>>> statements. > >> >>> >>>>>>>>>>>>> So, > >> >>> >>>>>>>>>>>>> something has gone wrong in-between. That's what we > need > >> >>> >>>>>>>>>>>>> to > >> >>> >>>>>>>>>>>>> find out and > >> >>> >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not > >> >>> >>>>>>>>>>>>> good > >> >>> >>>>>>>>>>>>> for performance > >> >>> >>>>>>>>>>>>> either. > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> I was able to find the reason why the code was dead and > >> >>> >>>>>>>>>>>> the > >> >>> >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now > >> >>> >>>>>>>>>>>> ensure > >> >>> >>>>>>>>>>>> that > >> >>> >>>>>>>>>>>> statements are prepared on datanodes whenever required. > >> >>> >>>>>>>>>>>> However there is a > >> >>> >>>>>>>>>>>> problem in the way prepared statements are handled. 
The > >> >>> >>>>>>>>>>>> problem is that > >> >>> >>>>>>>>>>>> unless a prepared statement is executed it is never > >> >>> >>>>>>>>>>>> prepared > >> >>> >>>>>>>>>>>> on datanodes, > >> >>> >>>>>>>>>>>> hence changing the path before executing the statement > >> >>> >>>>>>>>>>>> gives > >> >>> >>>>>>>>>>>> us incorrect > >> >>> >>>>>>>>>>>> results. For Example > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute > >> >>> >>>>>>>>>>>> by > >> >>> >>>>>>>>>>>> replication; > >> >>> >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute > >> >>> >>>>>>>>>>>> by > >> >>> >>>>>>>>>>>> replication; > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> insert into s1.abc values(123); > >> >>> >>>>>>>>>>>> insert into s2.abc values(456); > >> >>> >>>>>>>>>>>> set search_path = s2; > >> >>> >>>>>>>>>>>> prepare p1 as select f1 from abc; > >> >>> >>>>>>>>>>>> set search_path = s1; > >> >>> >>>>>>>>>>>> execute p1; > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> The last execute results in 123, where as it should > have > >> >>> >>>>>>>>>>>> resulted > >> >>> >>>>>>>>>>>> in 456. > >> >>> >>>>>>>>>>>> I can finalize the attached patch by fixing any > >> >>> >>>>>>>>>>>> regression > >> >>> >>>>>>>>>>>> issues > >> >>> >>>>>>>>>>>> that may result and that would fix 3607975 and improve > >> >>> >>>>>>>>>>>> performance however > >> >>> >>>>>>>>>>>> the above test case would still fail. > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not > >> >>> >>>>>>>>>>>>>>>> reproducible. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> Did you verify it under the debugger? 
If that would > >> >>> >>>>>>>>>>>>>>> have > >> >>> >>>>>>>>>>>>>>> been > >> >>> >>>>>>>>>>>>>>> the case, we would not have seen this problem if > >> >>> >>>>>>>>>>>>>>> search_path changed in > >> >>> >>>>>>>>>>>>>>> between steps D and E. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> If search path is changed between steps D & E, the > >> >>> >>>>>>>>>>>>>> problem > >> >>> >>>>>>>>>>>>>> occurs because when the remote query node is created, > >> >>> >>>>>>>>>>>>>> schema qualification > >> >>> >>>>>>>>>>>>>> is not added in the sql statement to be sent to the > >> >>> >>>>>>>>>>>>>> datanode, but changes in > >> >>> >>>>>>>>>>>>>> search path do get communicated to the datanode. The > >> >>> >>>>>>>>>>>>>> sql > >> >>> >>>>>>>>>>>>>> statement is built > >> >>> >>>>>>>>>>>>>> when execute is issued for the first time and is > reused > >> >>> >>>>>>>>>>>>>> on > >> >>> >>>>>>>>>>>>>> subsequent > >> >>> >>>>>>>>>>>>>> executes. The datanode is totally unaware that the > >> >>> >>>>>>>>>>>>>> select > >> >>> >>>>>>>>>>>>>> that it just > >> >>> >>>>>>>>>>>>>> received is due to an execute of a prepared statement > >> >>> >>>>>>>>>>>>>> that > >> >>> >>>>>>>>>>>>>> was prepared when > >> >>> >>>>>>>>>>>>>> search path was some thing else. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, > >> >>> >>>>>>>>>>>>> would > >> >>> >>>>>>>>>>>>> fix > >> >>> >>>>>>>>>>>>> the problem, since the statement will get prepared at > >> >>> >>>>>>>>>>>>> the > >> >>> >>>>>>>>>>>>> datanode, with the > >> >>> >>>>>>>>>>>>> same search path settings, as it would on the > >> >>> >>>>>>>>>>>>> coordinator. > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Comments are welcome. 
> >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>>>>>> Abbas > >> >>> >>>>>>>>>>>>>>>> Architect > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Ph: 92.334.5100153 > >> >>> >>>>>>>>>>>>>>>> Skype ID: gabbasb > >> >>> >>>>>>>>>>>>>>>> www.enterprisedb.com > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Follow us on Twitter > >> >>> >>>>>>>>>>>>>>>> @EnterpriseDB > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, > >> >>> >>>>>>>>>>>>>>>> whitepapers > >> >>> >>>>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>>>> more > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > ------------------------------------------------------------------------------ > >> >>> >>>>>>>>>>>>>>>> Try New Relic Now & We'll Send You this Cool Shirt > >> >>> >>>>>>>>>>>>>>>> New Relic is the only SaaS-based application > >> >>> >>>>>>>>>>>>>>>> performance > >> >>> >>>>>>>>>>>>>>>> monitoring service > >> >>> >>>>>>>>>>>>>>>> that delivers powerful full stack analytics. > Optimize > >> >>> >>>>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>>>> monitor your > >> >>> >>>>>>>>>>>>>>>> browser, app, & servers with just a few lines of > >> >>> >>>>>>>>>>>>>>>> code. > >> >>> >>>>>>>>>>>>>>>> Try > >> >>> >>>>>>>>>>>>>>>> New Relic > >> >>> >>>>>>>>>>>>>>>> and get this awesome Nerd Life shirt! > >> >>> >>>>>>>>>>>>>>>> http://p.sf.net/sfu/newrelic_d2d_may > >> >>> >>>>>>>>>>>>>>>> _______________________________________________ > >> >>> >>>>>>>>>>>>>>>> Postgres-xc-developers mailing list > >> >>> >>>>>>>>>>>>>>>> Pos...@li... 
> >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>>>>> Best Wishes, > >> >>> >>>>>>>>>>>>>>> Ashutosh Bapat > >> >>> >>>>>>>>>>>>>>> EntepriseDB Corporation > >> >>> >>>>>>>>>>>>>>> The Postgres Database Company > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>>>> Abbas > >> >>> >>>>>>>>>>>>>> Architect > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> Ph: 92.334.5100153 > >> >>> >>>>>>>>>>>>>> Skype ID: gabbasb > >> >>> >>>>>>>>>>>>>> www.enterprisedb.com > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> Follow us on Twitter > >> >>> >>>>>>>>>>>>>> @EnterpriseDB > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, > whitepapers > >> >>> >>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>> more > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>>> Best Wishes, > >> >>> >>>>>>>>>>>>> Ashutosh Bapat > >> >>> >>>>>>>>>>>>> EntepriseDB Corporation > >> >>> >>>>>>>>>>>>> The Postgres Database Company > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>> -- > >> >>> >>>>>>>>>>>> Abbas > >> >>> >>>>>>>>>>>> Architect > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> Ph: 92.334.5100153 > >> >>> >>>>>>>>>>>> Skype ID: gabbasb > >> >>> >>>>>>>>>>>> www.enterprisedb.com > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> Follow us on Twitter > >> >>> >>>>>>>>>>>> @EnterpriseDB > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers > >> >>> >>>>>>>>>>>> and > >> >>> 
>>>>>>>>>>>> more > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> -- > >> >>> >>>>>>>>>>> Best Wishes, > >> >>> >>>>>>>>>>> Ashutosh Bapat > >> >>> >>>>>>>>>>> EntepriseDB Corporation > >> >>> >>>>>>>>>>> The Postgres Database Company > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> -- > >> >>> >>>>>>>>>> -- > >> >>> >>>>>>>>>> Abbas > >> >>> >>>>>>>>>> Architect > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> Ph: 92.334.5100153 > >> >>> >>>>>>>>>> Skype ID: gabbasb > >> >>> >>>>>>>>>> www.enterprisedb.com > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> Follow us on Twitter > >> >>> >>>>>>>>>> @EnterpriseDB > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers > and > >> >>> >>>>>>>>>> more > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> -- > >> >>> >>>>>>>>> Best Wishes, > >> >>> >>>>>>>>> Ashutosh Bapat > >> >>> >>>>>>>>> EntepriseDB Corporation > >> >>> >>>>>>>>> The Postgres Database Company > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> -- > >> >>> >>>>>>>> -- > >> >>> >>>>>>>> Abbas > >> >>> >>>>>>>> Architect > >> >>> >>>>>>>> > >> >>> >>>>>>>> Ph: 92.334.5100153 > >> >>> >>>>>>>> Skype ID: gabbasb > >> >>> >>>>>>>> www.enterprisedb.com > >> >>> >>>>>>>> > >> >>> >>>>>>>> Follow us on Twitter > >> >>> >>>>>>>> @EnterpriseDB > >> >>> >>>>>>>> > >> >>> >>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and > >> >>> >>>>>>>> more > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> -- > >> >>> >>>>>>> Best Wishes, > >> >>> >>>>>>> Ashutosh Bapat > >> >>> >>>>>>> EntepriseDB Corporation > >> >>> >>>>>>> The Postgres Database Company > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> -- > >> >>> >>>>>> -- > >> >>> >>>>>> Abbas > >> >>> >>>>>> Architect > >> >>> >>>>>> > 
>> >>> >>>>>> Ph: 92.334.5100153 > >> >>> >>>>>> Skype ID: gabbasb > >> >>> >>>>>> www.enterprisedb.com > >> >>> >>>>>> > >> >>> >>>>>> Follow us on Twitter > >> >>> >>>>>> @EnterpriseDB > >> >>> >>>>>> > >> >>> >>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and > >> >>> >>>>>> more > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> -- > >> >>> >>>>> Best Wishes, > >> >>> >>>>> Ashutosh Bapat > >> >>> >>>>> EntepriseDB Corporation > >> >>> >>>>> The Postgres Database Company > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> -- > >> >>> >>>> -- > >> >>> >>>> Abbas > >> >>> >>>> Architect > >> >>> >>>> > >> >>> >>>> Ph: 92.334.5100153 > >> >>> >>>> Skype ID: gabbasb > >> >>> >>>> www.enterprisedb.com > >> >>> >>>> > >> >>> >>>> Follow us on Twitter > >> >>> >>>> @EnterpriseDB > >> >>> >>>> > >> >>> >>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and > more > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> -- > >> >>> >>> -- > >> >>> >>> Abbas > >> >>> >>> Architect > >> >>> >>> > >> >>> >>> Ph: 92.334.5100153 > >> >>> >>> Skype ID: gabbasb > >> >>> >>> www.enterprisedb.com > >> >>> >>> > >> >>> >>> Follow us on Twitter > >> >>> >>> @EnterpriseDB > >> >>> >>> > >> >>> >>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more > >> >>> >> > >> >>> >> > >> >>> >> > >> >>> >> > >> >>> >> -- > >> >>> >> Best Wishes, > >> >>> >> Ashutosh Bapat > >> >>> >> EntepriseDB Corporation > >> >>> >> The Postgres Database Company > >> >>> >> > >> >>> >> > >> >>> >> > >> >>> >> > ------------------------------------------------------------------------------ > >> >>> >> This SF.net email is sponsored by Windows: > >> >>> >> > >> >>> >> Build for Windows Store. > >> >>> >> > >> >>> >> http://p.sf.net/sfu/windows-dev2dev > >> >>> >> _______________________________________________ > >> >>> >> Postgres-xc-developers mailing list > >> >>> >> Pos...@li... 
> >> >>> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> >>> >> > >> >> > >> >> > >> >> > >> >> > >> >> -- > >> >> Best Wishes, > >> >> Ashutosh Bapat > >> >> EntepriseDB Corporation > >> >> The Postgres Database Company > >> > >> > >> > ------------------------------------------------------------------------------ > >> This SF.net email is sponsored by Windows: > >> > >> Build for Windows Store. > >> > >> http://p.sf.net/sfu/windows-dev2dev > >> _______________________________________________ > >> Postgres-xc-developers mailing list > >> Pos...@li... > >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> > > > > > <inherit.out><inherit.sql><regression.diffs><regression.out>------------------------------------------------------------------------------ > > > > This SF.net email is sponsored by Windows: > > > > Build for Windows Store. > > > > > http://p.sf.net/sfu/windows-dev2dev_______________________________________________ > > Postgres-xc-developers mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > |
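The schema-switch scenario that this thread leaves unsolved, gathered from the quoted mails into one sequence (object names exactly as posted; requires a session on a Postgres-XC coordinator):

```sql
create schema s1 create table abc (f1 int);
create schema s2 create table abc (f1 int);

insert into s1.abc values(123);
insert into s2.abc values(456);

set search_path = s1;
prepare p1 as select f1 from abc;
execute p1;                              -- 123, as expected

set search_path = s2;
execute p1;                              -- still 123 after the patch

alter table s1.abc add column f2 float8; -- forces a replan
execute p1;                              -- fails: the replan resolves abc in s2
```

Per the thread, the last EXECUTE should keep returning 123 (the resolution fixed at PREPARE time); the failure is specific to the remote-query deparse on the datanodes, where the changed search_path has already taken effect.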
From: Ahsan H. <ahs...@en...> - 2013-06-26 08:55:52
Hi Amit, So I believe your patch for the plancache regression is ready to be committed? As I understand, this is the only blocker remaining for beta. We need to freeze the changes for beta by EOD today unless we find another serious regression. The PG 9.2.4 merge exercise should begin from tomorrow as per plan. -- Ahsan On Wed, Jun 26, 2013 at 1:46 PM, Amit Khandekar < ami...@en...> wrote: > Now I see how to reproduce the inherit.sql failure. It fails only in > parallel schedule. And I just checked that it also failed 6 days ago > with the same diff when I ran some regression test on the build farm. > So it has nothing to do with the patch, and this test needs to be > analyzed and fixed separately. > > On 26 June 2013 13:21, 鈴木 幸市 <ko...@in...> wrote: > > It seems that the name of the materialized table may be > environment-dependent. > > Any idea to make it environment-independent? > > > > Apparently, the result is correct but just does not match expected ones. > > > > Regards; > > --- > > Koichi Suzuki > > > > > > > > On 2013/06/26, at 16:47, Koichi Suzuki <koi...@gm...> > wrote: > > > > I tested this patch for the latest master (without any pending patches) > and > > found inherit fails in linker machine environment. > > > > PFA related files. > > > > Regards; > > > > ---------- > > Koichi Suzuki > > > > > > 2013/6/26 Amit Khandekar <ami...@en...> > >> > >> On 26 June 2013 10:34, Amit Khandekar <ami...@en...> > >> wrote: > >> > On 26 June 2013 08:56, Ashutosh Bapat < > ash...@en...> > >> > wrote: > >> >> Hi Amit, > >> >> From a cursory look, this looks much cleaner than Abbas's patch. > >> >> So, it > >> >> looks to be the approach we should take. BTW, you need to update the > >> >> original outputs as well, instead of just the alternate expected > >> >> outputs. 
> >> >> Remember, the merge applies changes to the original expected outputs > >> >> and not > >> >> alternate ones (added in XC), thus we come to know about conflicts > only > >> >> when > >> >> we apply changes to original expected outputs. > >> > > >> > Right. Will look into it. > >> > >> All of the expected files except inherit.sql are xc_* tests which > >> don't have alternate files. I have made the same inherit_1.out changes > >> onto inherit.out. Attached revised patch. > >> > >> Suzuki-san, I was looking at the following diff in the inherit.sql > >> failure that you had attached from your local regression run: > >> > >> ! Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1) > >> --- 1247,1253 ---- > >> ! Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1) > >> > >> I am not sure if this has started coming after you applied the patch. > >> Can you please once again run the test after reinitializing the > >> cluster ? I am not getting this diff although I ran using > >> serial_schedule, not parallel; and the diff itself looks harmless. As > >> I mentioned, the actual temp schema name may differ. > >> > >> > > >> >> > >> >> Regards to temporary namespaces, is it possible to schema-qualify the > >> >> temporary namespaces as always pg_temp irrespective of the actual > name? > >> > > >> > get_namespace_name() is used to deparse the schema name. In order to > >> > keep the deparsing logic working for both local queries (i.e. for view > >> > for e.g.) and remote queries, we need to push in the context that the > >> > deparsing is being done for remote queries, and this needs to be done > >> > all the way from the uppermost function (say pg_get_querydef) upto > >> > get_namespace_name() which does not look good. > >> > > >> > We need to think about some solution in general for the existing issue > >> > of deparsing temp table names. Not sure of any solution. May be, after > >> > we resolve the bug id in subject, the fix may even not require the > >> > schema qualification. 
> >> >> > >> >> > >> >> On Tue, Jun 25, 2013 at 8:02 PM, Amit Khandekar > >> >> <ami...@en...> wrote: > >> >>> > >> >>> On 25 June 2013 19:59, Amit Khandekar > >> >>> <ami...@en...> > >> >>> wrote: > >> >>> > Attached is a patch that does schema qualification by overriding > the > >> >>> > search_path just before doing the deparse in deparse_query(). > >> >>> > PopOverrideSearchPath() in the end pops out the temp path. Also, > the > >> >>> > transaction callback function already takes care of popping out > such > >> >>> > Push in case of transaction rollback. > >> >>> > > >> >>> > Unfortunately we cannot apply this solution to temp tables. The > >> >>> > problem is, when a pg_temp schema is deparsed, it is deparsed into > >> >>> > pg_temp_1, pg_temp_2 etc. and these names are specific to the > node. > >> >>> > An > >> >>> > object in pg_temp_2 at the coordinator may be present in pg_temp_1 at > >> >>> > the datanode. So the remote query generated may or may not work on > >> >>> > the datanode, so it is totally unreliable. > >> >>> > > >> >>> > In fact, the issue with pg_temp_1 names in the deparsed remote > query > >> >>> > is present even currently. > >> >>> > > >> >>> > But wherever it is a correctness issue to *not* schema-qualify a > temp > >> >>> > object, I have kept the schema qualification. > >> >>> > For example, a user can set search_path to "s1, pg_temp", > >> >>> > and have obj1 in both s1 and pg_temp, and want to refer to > >> >>> > pg_temp.obj1. > >> >>> > In such a case, the remote query should have pg_temp_[1-9].obj1, > >> >>> > although it may cause errors because of the existing issue. > >> >>> > > >> >>> --- > >> >>> > So, the prepare-execute with search_path would remain there for > temp > >> >>> > tables. > >> >>> I mean, the prepare-execute issue with search_path would remain there > >> >>> for temp tables. 
> >> >>> > >> >>> > > >> >>> > I tried to run the regression by extracting regression expected > >> >>> > output > >> >>> > files from Abbas's patch , and regression passes, including > >> >>> > plancache. > >> >>> > > >> >>> > I think for this release, we should go ahead by keeping this issue > >> >>> > open for temp tables. This solution is an improvement, and does > not > >> >>> > cause any new issues. > >> >>> > > >> >>> > Comments welcome. > >> >>> > > >> >>> > > >> >>> > On 24 June 2013 13:00, Ashutosh Bapat > >> >>> > <ash...@en...> > >> >>> > wrote: > >> >>> >> Hi Abbas, > >> >>> >> We are changing a lot of PostgreSQL deparsing code, which would > >> >>> >> create > >> >>> >> problems in the future merges. Since this change is in query > >> >>> >> deparsing > >> >>> >> logic > >> >>> >> any errors here would affect, EXPLAIN/ pg_dump etc. So, this > patch > >> >>> >> should > >> >>> >> again be the last resort. > >> >>> >> > >> >>> >> Please take a look at how view definitions are dumped. That will > >> >>> >> give a > >> >>> >> good > >> >>> >> idea as to how PG schema-qualifies (or not) objects. Here's how > >> >>> >> view > >> >>> >> definitions displayed changes with search path. Since the code to > >> >>> >> dump > >> >>> >> views > >> >>> >> and display definitions is same, the view definition dumped also > >> >>> >> changes > >> >>> >> with the search path. Thus pg_dump must be using some trick to > >> >>> >> always > >> >>> >> dump a > >> >>> >> consistent view definition (and hence a deparsed query). Thanks > >> >>> >> Amit > >> >>> >> for the > >> >>> >> example. 
> >> >>> >> > >> >>> >> create table ttt (id int); > >> >>> >> > >> >>> >> postgres=# create domain dd int; > >> >>> >> CREATE DOMAIN > >> >>> >> postgres=# create view v2 as select id::dd from ttt; > >> >>> >> CREATE VIEW > >> >>> >> postgres=# set search_path TO ''; > >> >>> >> SET > >> >>> >> > >> >>> >> postgres=# \d+ public.v2 > >> >>> >> View "public.v2" > >> >>> >> Column | Type | Modifiers | Storage | Description > >> >>> >> --------+-----------+--------- > >> >>> >> --+---------+------------- > >> >>> >> id | public.dd | | plain | > >> >>> >> View definition: > >> >>> >> SELECT ttt.id::public.dd AS id > >> >>> >> FROM public.ttt; > >> >>> >> > >> >>> >> postgres=# set search_path TO default ; > >> >>> >> SET > >> >>> >> postgres=# show search_path ; > >> >>> >> search_path > >> >>> >> ---------------- > >> >>> >> "$user",public > >> >>> >> (1 row) > >> >>> >> > >> >>> >> postgres=# \d+ public.v2 > >> >>> >> View "public.v2" > >> >>> >> Column | Type | Modifiers | Storage | Description > >> >>> >> --------+------+-----------+---------+------------- > >> >>> >> id | dd | | plain | > >> >>> >> View definition: > >> >>> >> SELECT ttt.id::dd AS id > >> >>> >> FROM ttt; > >> >>> >> > >> >>> >> We need to leverage similar mechanism here to reduce PG > footprint. > >> >>> >> > >> >>> >> > >> >>> >> On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt > >> >>> >> <abb...@en...> > >> >>> >> wrote: > >> >>> >>> > >> >>> >>> Hi, > >> >>> >>> As discussed in the last F2F meeting, here is an updated patch > >> >>> >>> that > >> >>> >>> provides schema qualification of the following objects: Tables, > >> >>> >>> Views, > >> >>> >>> Functions, Types and Domains in case of remote queries. > >> >>> >>> Sequence functions are never concerned with datanodes hence, > >> >>> >>> schema > >> >>> >>> qualification is not required in case of sequences. > >> >>> >>> This solves plancache test case failure issue and does not > >> >>> >>> introduce > >> >>> >>> any > >> >>> >>> more failures. 
> >> >>> >>> I have also attached some tests with results to aid in review. > >> >>> >>> > >> >>> >>> Comments are welcome. > >> >>> >>> > >> >>> >>> Regards > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt > >> >>> >>> <abb...@en...> > >> >>> >>> wrote: > >> >>> >>>> > >> >>> >>>> Hi, > >> >>> >>>> Attached please find a WIP patch that provides the > functionality > >> >>> >>>> of > >> >>> >>>> preparing the statement at the datanodes as soon as it is > >> >>> >>>> prepared > >> >>> >>>> on the coordinator. > >> >>> >>>> This is to take care of a test case in plancache that makes > sure > >> >>> >>>> that > >> >>> >>>> change of search_path is ignored by replans. > >> >>> >>>> While the patch fixes this replan test case and the regression > >> >>> >>>> works > >> >>> >>>> fine > >> >>> >>>> there are still these two problems I have to take care of. > >> >>> >>>> > >> >>> >>>> 1. This test case fails > >> >>> >>>> > >> >>> >>>> CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) > >> >>> >>>> DISTRIBUTE > >> >>> >>>> BY > >> >>> >>>> HASH(a); > >> >>> >>>> INSERT INTO xc_alter_table_3 VALUES (1, 'a'); > >> >>> >>>> PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- > >> >>> >>>> fails > >> >>> >>>> > >> >>> >>>> test=# explain verbose DELETE FROM xc_alter_table_3 WHERE > a = > >> >>> >>>> 1; > >> >>> >>>> QUERY PLAN > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> > ------------------------------------------------------------------- > >> >>> >>>> Delete on public.xc_alter_table_3 (cost=0.00..0.00 > >> >>> >>>> rows=1000 > >> >>> >>>> width=14) > >> >>> >>>> Node/s: data_node_1, data_node_2, data_node_3, > data_node_4 > >> >>> >>>> Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE > >> >>> >>>> ((xc_alter_table_3.ctid = $1) AND > >> >>> >>>> (xc_alter_table_3.xc_node_id = $2)) > >> >>> >>>> -> Data Node Scan on xc_alter_table_3 > >> >>> >>>> "_REMOTE_TABLE_QUERY_" > >> >>> >>>> (cost=0.00..0.00 rows=1000 
width=14) > >> >>> >>>> Output: xc_alter_table_3.a, xc_alter_table_3.ctid, > >> >>> >>>> xc_alter_table_3.xc_node_id > >> >>> >>>> Node/s: data_node_3 > >> >>> >>>> Remote query: SELECT a, ctid, xc_node_id FROM ONLY > >> >>> >>>> xc_alter_table_3 WHERE (a = 1) > >> >>> >>>> (7 rows) > >> >>> >>>> > >> >>> >>>> The reason of the failure is that the select query is > >> >>> >>>> selecting 3 > >> >>> >>>> items, the first of which is an int, > >> >>> >>>> whereas the delete query is comparing $1 with a ctid. > >> >>> >>>> I am not sure how this works without prepare, but it fails > >> >>> >>>> when > >> >>> >>>> used > >> >>> >>>> with prepare. > >> >>> >>>> > >> >>> >>>> The reason of this planning is this section of code in > >> >>> >>>> function > >> >>> >>>> pgxc_build_dml_statement > >> >>> >>>> else if (cmdtype == CMD_DELETE) > >> >>> >>>> { > >> >>> >>>> /* > >> >>> >>>> * Since there is no data to update, the first param is > >> >>> >>>> going > >> >>> >>>> to > >> >>> >>>> be > >> >>> >>>> * ctid. > >> >>> >>>> */ > >> >>> >>>> ctid_param_num = 1; > >> >>> >>>> } > >> >>> >>>> > >> >>> >>>> Amit/Ashutosh can you suggest a fix for this problem? > >> >>> >>>> There are a number of possibilities. > >> >>> >>>> a) The select should not have selected column a. > >> >>> >>>> b) The DELETE should have referred to $2 and $3 for ctid and > >> >>> >>>> xc_node_id respectively. > >> >>> >>>> c) Since the query works without PREPARE, we should make > >> >>> >>>> PREPARE > >> >>> >>>> work > >> >>> >>>> the same way. > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> 2. This test case in plancache fails. 
> >> >>> >>>> > >> >>> >>>> -- Try it with a view, which isn't directly used in the > >> >>> >>>> resulting > >> >>> >>>> plan > >> >>> >>>> -- but should trigger invalidation anyway > >> >>> >>>> create table tab33 (a int, b int); > >> >>> >>>> insert into tab33 values(1,2); > >> >>> >>>> CREATE VIEW v_tab33 AS SELECT * FROM tab33; > >> >>> >>>> PREPARE vprep AS SELECT * FROM v_tab33; > >> >>> >>>> EXECUTE vprep; > >> >>> >>>> CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM > >> >>> >>>> tab33; > >> >>> >>>> -- does not cause plan invalidation because views are never > >> >>> >>>> created > >> >>> >>>> on datanodes > >> >>> >>>> EXECUTE vprep; > >> >>> >>>> > >> >>> >>>> and the reason of the failure is that views are never > created > >> >>> >>>> on > >> >>> >>>> the > >> >>> >>>> datanodes hence plan invalidation is not triggered. > >> >>> >>>> This can be documented as an XC limitation. > >> >>> >>>> > >> >>> >>>> 3. I still have to add comments in the patch and some ifdefs > may > >> >>> >>>> be > >> >>> >>>> missing too. > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> In addition to the patch I have also attached some example Java > >> >>> >>>> programs > >> >>> >>>> that test the some basic functionality through JDBC. I found > that > >> >>> >>>> these > >> >>> >>>> programs are working fine after my patch. > >> >>> >>>> > >> >>> >>>> 1. Prepared.java : Issues parameterized delete, insert and > update > >> >>> >>>> through > >> >>> >>>> JDBC. These are un-named prepared statements and works fine. > >> >>> >>>> 2. NamedPrepared.java : Issues two named prepared statements > >> >>> >>>> through > >> >>> >>>> JDBC > >> >>> >>>> and works fine. > >> >>> >>>> 3. Retrieve.java : Runs a simple select to verify results. > >> >>> >>>> The comments on top of the files explain their usage. > >> >>> >>>> > >> >>> >>>> Comments are welcome. 
> >> >>> >>>> > >> >>> >>>> Thanks > >> >>> >>>> Regards > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> > >> >>> >>>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat > >> >>> >>>> <ash...@en...> wrote: > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt > >> >>> >>>>> <abb...@en...> wrote: > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat > >> >>> >>>>>> <ash...@en...> wrote: > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt > >> >>> >>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>> > >> >>> >>>>>>>> Attached please find updated patch to fix the bug. The > patch > >> >>> >>>>>>>> takes > >> >>> >>>>>>>> care of the bug and the regression issues resulting from > the > >> >>> >>>>>>>> changes done in > >> >>> >>>>>>>> the patch. Please note that the issue in test case > plancache > >> >>> >>>>>>>> still stands > >> >>> >>>>>>>> unsolved because of the following test case (simplified but > >> >>> >>>>>>>> taken > >> >>> >>>>>>>> from > >> >>> >>>>>>>> plancache.sql) > >> >>> >>>>>>>> > >> >>> >>>>>>>> create schema s1 create table abc (f1 int); > >> >>> >>>>>>>> create schema s2 create table abc (f1 int); > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> insert into s1.abc values(123); > >> >>> >>>>>>>> insert into s2.abc values(456); > >> >>> >>>>>>>> > >> >>> >>>>>>>> set search_path = s1; > >> >>> >>>>>>>> > >> >>> >>>>>>>> prepare p1 as select f1 from abc; > >> >>> >>>>>>>> execute p1; -- works fine, results in 123 > >> >>> >>>>>>>> > >> >>> >>>>>>>> set search_path = s2; > >> >>> >>>>>>>> execute p1; -- works fine after the patch, results in 123 > >> >>> >>>>>>>> > >> >>> >>>>>>>> alter table s1.abc add column f2 float8; -- force replan > >> >>> >>>>>>>> execute p1; -- fails > >> >>> >>>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> Huh! The beast bit us. 
> >> >>> >>>>>>> > >> >>> >>>>>>> I think the right solution here is either of two > >> >>> >>>>>>> 1. Take your previous patch to always use qualified names > (but > >> >>> >>>>>>> you > >> >>> >>>>>>> need to improve it not to affect the view dumps) > >> >>> >>>>>>> 2. Prepare the statements at the datanode at the time of > >> >>> >>>>>>> prepare. > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> Is this test added new in 9.2? > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> No, it was added by commit > >> >>> >>>>>> 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c > >> >>> >>>>>> in > >> >>> >>>>>> March 2007. > >> >>> >>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> Why didn't we see this issue the first time prepare was > >> >>> >>>>>>> implemented? I > >> >>> >>>>>>> don't remember (but it was two years back). > >> >>> >>>>>> > >> >>> >>>>>> > >> >>> >>>>>> I was unable to locate the exact reason but since statements > >> >>> >>>>>> were > >> >>> >>>>>> not > >> >>> >>>>>> being prepared on datanodes due to a merge issue this issue > >> >>> >>>>>> just > >> >>> >>>>>> surfaced > >> >>> >>>>>> up. > >> >>> >>>>>> > >> >>> >>>>> > >> >>> >>>>> > >> >>> >>>>> Well, even though statements were not getting prepared > (actually > >> >>> >>>>> prepared statements were not being used again and again) on > >> >>> >>>>> datanodes, we > >> >>> >>>>> never prepared them on datanode at the time of preparing the > >> >>> >>>>> statement. So, > >> >>> >>>>> this bug should have shown itself long back. > >> >>> >>>>> > >> >>> >>>>>>> > >> >>> >>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> The last execute should result in 123, whereas it results > in > >> >>> >>>>>>>> 456. > >> >>> >>>>>>>> The > >> >>> >>>>>>>> reason is that the search path has already been changed at > >> >>> >>>>>>>> the > >> >>> >>>>>>>> datanode and > >> >>> >>>>>>>> a replan would mean select from abc in s2. 
> >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> > >> >>> >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat > >> >>> >>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> Hi Abbas, > >> >>> >>>>>>>>> I think the fix is on the right track. There are couple of > >> >>> >>>>>>>>> improvements that we need to do here (but you may not do > >> >>> >>>>>>>>> those > >> >>> >>>>>>>>> if the time > >> >>> >>>>>>>>> doesn't permit). > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> 1. We should have a status in RemoteQuery node, as to > >> >>> >>>>>>>>> whether > >> >>> >>>>>>>>> the > >> >>> >>>>>>>>> query in the node should use extended protocol or not, > >> >>> >>>>>>>>> rather > >> >>> >>>>>>>>> than relying > >> >>> >>>>>>>>> on the presence of statement name and parameters etc. Amit > >> >>> >>>>>>>>> has > >> >>> >>>>>>>>> already added > >> >>> >>>>>>>>> a status with that effect. We need to leverage it. > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> > >> >>> >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt > >> >>> >>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> The patch fixes the dead code issue, that I described > >> >>> >>>>>>>>>> earlier. > >> >>> >>>>>>>>>> The > >> >>> >>>>>>>>>> code was dead because of two issues: > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting > >> >>> >>>>>>>>>> stmt_name to > >> >>> >>>>>>>>>> NULL and this was the main reason > >> >>> >>>>>>>>>> ActivateDatanodeStatementOnNode was not > >> >>> >>>>>>>>>> being called in the function > >> >>> >>>>>>>>>> pgxc_start_command_on_connection. > >> >>> >>>>>>>>>> 2. The function SetRemoteStatementName was wrongly > assuming > >> >>> >>>>>>>>>> that a > >> >>> >>>>>>>>>> prepared statement must have some parameters. 
> >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> Fixing these two issues makes sure that the function > >> >>> >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and > >> >>> >>>>>>>>>> statements > >> >>> >>>>>>>>>> get prepared on > >> >>> >>>>>>>>>> the datanode. > >> >>> >>>>>>>>>> This patch would fix bug 3607975. It would however not > fix > >> >>> >>>>>>>>>> the > >> >>> >>>>>>>>>> test > >> >>> >>>>>>>>>> case I described in my previous email because of reasons > I > >> >>> >>>>>>>>>> described. > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> > >> >>> >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat > >> >>> >>>>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> Can you please explain what this fix does? It would help > >> >>> >>>>>>>>>>> to > >> >>> >>>>>>>>>>> have > >> >>> >>>>>>>>>>> an elaborate explanation with code snippets. > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> > >> >>> >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt > >> >>> >>>>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat > >> >>> >>>>>>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt > >> >>> >>>>>>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat > >> >>> >>>>>>>>>>>>>> <ash...@en...> wrote: > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt > >> >>> >>>>>>>>>>>>>>> <abb...@en...> wrote: > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Hi, > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> While working on test case plancache it was brought > >> >>> >>>>>>>>>>>>>>>> 
up as > >> >>> >>>>>>>>>>>>>>>> a > >> >>> >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve the problem of the > >> >>> >>>>>>>>>>>>>>>> test case. > >> >>> >>>>>>>>>>>>>>>> However there is some confusion in the statement of bug id 3607975. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> "When a user does a PREPARE and then EXECUTEs multiple > >> >>> >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing the query on the > >> >>> >>>>>>>>>>>>>>>> datanode all times, as against preparing once and executing multiple times. > >> >>> >>>>>>>>>>>>>>>> This is because somehow the remote query is being prepared as an unnamed > >> >>> >>>>>>>>>>>>>>>> statement." > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Consider this test case: > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> A. create table abc(a int, b int); > >> >>> >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); > >> >>> >>>>>>>>>>>>>>>> C. prepare p1 as select * from abc; > >> >>> >>>>>>>>>>>>>>>> D. execute p1; > >> >>> >>>>>>>>>>>>>>>> E. execute p1; > >> >>> >>>>>>>>>>>>>>>> F. execute p1; > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Here are the confusions: > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> 1. The coordinator never prepares on the datanode in response to > >> >>> >>>>>>>>>>>>>>>> a prepare issued by a user. > >> >>> >>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. > >> >>> >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all datanodes. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> 2. 
In step D, ExecuteQuery calls BuildCachedPlan to > >> >>> >>>>>>>>>>>>>>>> build > >> >>> >>>>>>>>>>>>>>>> a > >> >>> >>>>>>>>>>>>>>>> new generic plan, > >> >>> >>>>>>>>>>>>>>>> and steps E and F use the already built generic > >> >>> >>>>>>>>>>>>>>>> plan. > >> >>> >>>>>>>>>>>>>>>> For details see function GetCachedPlan. > >> >>> >>>>>>>>>>>>>>>> This means that executing a prepared statement > >> >>> >>>>>>>>>>>>>>>> again > >> >>> >>>>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>>>> again does use cached plans > >> >>> >>>>>>>>>>>>>>>> and does not prepare again and again every time > >> >>> >>>>>>>>>>>>>>>> we > >> >>> >>>>>>>>>>>>>>>> issue > >> >>> >>>>>>>>>>>>>>>> an execute. > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> The problem is not here. The problem is in > do_query() > >> >>> >>>>>>>>>>>>>>> where > >> >>> >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped > out > >> >>> >>>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>>> we keep on > >> >>> >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the > >> >>> >>>>>>>>>>>>>> datanode. > >> >>> >>>>>>>>>>>>>> I spent time looking at the code written in do_query > >> >>> >>>>>>>>>>>>>> and > >> >>> >>>>>>>>>>>>>> functions called > >> >>> >>>>>>>>>>>>>> from with in do_query to handle prepared statements > but > >> >>> >>>>>>>>>>>>>> the > >> >>> >>>>>>>>>>>>>> code written in > >> >>> >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements > >> >>> >>>>>>>>>>>>>> prepared on datanodes > >> >>> >>>>>>>>>>>>>> is dead as of now. It is never called during the > >> >>> >>>>>>>>>>>>>> complete > >> >>> >>>>>>>>>>>>>> regression run. > >> >>> >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never > >> >>> >>>>>>>>>>>>>> called. 
The way > >> >>> >>>>>>>>>>>>>> prepared statements are being handled now is the same > >> >>> >>>>>>>>>>>>>> as I > >> >>> >>>>>>>>>>>>>> described earlier > >> >>> >>>>>>>>>>>>>> in the mail chain with the help of an example. > >> >>> >>>>>>>>>>>>>> The code that is dead was originally added by Mason > >> >>> >>>>>>>>>>>>>> through > >> >>> >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back > >> >>> >>>>>>>>>>>>>> in > >> >>> >>>>>>>>>>>>>> December 2010. This > >> >>> >>>>>>>>>>>>>> code has been changed a lot over the last two years. > >> >>> >>>>>>>>>>>>>> This > >> >>> >>>>>>>>>>>>>> commit does not > >> >>> >>>>>>>>>>>>>> contain any test cases so I am not sure how did it > use > >> >>> >>>>>>>>>>>>>> to > >> >>> >>>>>>>>>>>>>> work back then. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared > >> >>> >>>>>>>>>>>>> statements. > >> >>> >>>>>>>>>>>>> So, > >> >>> >>>>>>>>>>>>> something has gone wrong in-between. That's what we > need > >> >>> >>>>>>>>>>>>> to > >> >>> >>>>>>>>>>>>> find out and > >> >>> >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not > >> >>> >>>>>>>>>>>>> good > >> >>> >>>>>>>>>>>>> for performance > >> >>> >>>>>>>>>>>>> either. > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> I was able to find the reason why the code was dead and > >> >>> >>>>>>>>>>>> the > >> >>> >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now > >> >>> >>>>>>>>>>>> ensure > >> >>> >>>>>>>>>>>> that > >> >>> >>>>>>>>>>>> statements are prepared on datanodes whenever required. > >> >>> >>>>>>>>>>>> However there is a > >> >>> >>>>>>>>>>>> problem in the way prepared statements are handled. 
The > >> >>> >>>>>>>>>>>> problem is that unless a prepared statement is executed it is never prepared > >> >>> >>>>>>>>>>>> on datanodes, hence changing the path before executing the statement gives > >> >>> >>>>>>>>>>>> us incorrect results. For example: > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by replication; > >> >>> >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by replication; > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> insert into s1.abc values(123); > >> >>> >>>>>>>>>>>> insert into s2.abc values(456); > >> >>> >>>>>>>>>>>> set search_path = s2; > >> >>> >>>>>>>>>>>> prepare p1 as select f1 from abc; > >> >>> >>>>>>>>>>>> set search_path = s1; > >> >>> >>>>>>>>>>>> execute p1; > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>> The last execute results in 123, whereas it should have resulted in 456. > >> >>> >>>>>>>>>>>> I can finalize the attached patch by fixing any regression issues > >> >>> >>>>>>>>>>>> that may result, and that would fix 3607975 and improve performance; however, > >> >>> >>>>>>>>>>>> the above test case would still fail. > >> >>> >>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not reproducible. > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> Did you verify it under the debugger? 
If that would > >> >>> >>>>>>>>>>>>>>> have > >> >>> >>>>>>>>>>>>>>> been > >> >>> >>>>>>>>>>>>>>> the case, we would not have seen this problem if > >> >>> >>>>>>>>>>>>>>> search_path changed in > >> >>> >>>>>>>>>>>>>>> between steps D and E. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> If search path is changed between steps D & E, the > >> >>> >>>>>>>>>>>>>> problem > >> >>> >>>>>>>>>>>>>> occurs because when the remote query node is created, > >> >>> >>>>>>>>>>>>>> schema qualification > >> >>> >>>>>>>>>>>>>> is not added in the sql statement to be sent to the > >> >>> >>>>>>>>>>>>>> datanode, but changes in > >> >>> >>>>>>>>>>>>>> search path do get communicated to the datanode. The > >> >>> >>>>>>>>>>>>>> sql > >> >>> >>>>>>>>>>>>>> statement is built > >> >>> >>>>>>>>>>>>>> when execute is issued for the first time and is > reused > >> >>> >>>>>>>>>>>>>> on > >> >>> >>>>>>>>>>>>>> subsequent > >> >>> >>>>>>>>>>>>>> executes. The datanode is totally unaware that the > >> >>> >>>>>>>>>>>>>> select > >> >>> >>>>>>>>>>>>>> that it just > >> >>> >>>>>>>>>>>>>> received is due to an execute of a prepared statement > >> >>> >>>>>>>>>>>>>> that > >> >>> >>>>>>>>>>>>>> was prepared when > >> >>> >>>>>>>>>>>>>> search path was some thing else. > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, > >> >>> >>>>>>>>>>>>> would > >> >>> >>>>>>>>>>>>> fix > >> >>> >>>>>>>>>>>>> the problem, since the statement will get prepared at > >> >>> >>>>>>>>>>>>> the > >> >>> >>>>>>>>>>>>> datanode, with the > >> >>> >>>>>>>>>>>>> same search path settings, as it would on the > >> >>> >>>>>>>>>>>>> coordinator. > >> >>> >>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> > >> >>> >>>>>>>>>>>>>>>> Comments are welcome. 
> >> >>> >>>>>>>>>>>>>>>> -- > Abbas > Architect > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.com > Follow us on Twitter @EnterpriseDB > Visit EnterpriseDB for tutorials, webinars, whitepapers and more > >> >>> >>>>>>>>>>>>>>> -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Postgres Database Company > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... 
> >> >>> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> >>> >> > >> >> > >> >> > >> >> > >> >> > >> >> -- > >> >> Best Wishes, > >> >> Ashutosh Bapat > >> >> EntepriseDB Corporation > >> >> The Postgres Database Company > >> > >> > >> > ------------------------------------------------------------------------------ > >> This SF.net email is sponsored by Windows: > >> > >> Build for Windows Store. > >> > >> http://p.sf.net/sfu/windows-dev2dev > >> _______________________________________________ > >> Postgres-xc-developers mailing list > >> Pos...@li... > >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> > > > > > <inherit.out><inherit.sql><regression.diffs><regression.out>------------------------------------------------------------------------------ > > > > This SF.net email is sponsored by Windows: > > > > Build for Windows Store. > > > > > http://p.sf.net/sfu/windows-dev2dev_______________________________________________ > > Postgres-xc-developers mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- Ahsan Hadi Snr Director Product Development EnterpriseDB Corporation The Enterprise Postgres Company Phone: +92-51-8358874 Mobile: +92-333-5162114 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. 
This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Amit K. <ami...@en...> - 2013-06-26 08:46:57
|
Now I see how to reproduce the inherit.sql failure. It fails only in the parallel schedule. I also checked that it failed six days ago, with the same diff, when I ran a regression test on the build farm. So it has nothing to do with the patch, and this test needs to be analyzed and fixed separately.

On 26 June 2013 13:21, Koichi Suzuki <ko...@in...> wrote:

> It seems that the name of the materialized table may be
> environment-dependent. Any idea how to make it environment-independent?
>
> Apparently, the result is correct but just does not match the expected
> output.
>
> Regards;
> ---
> Koichi Suzuki
>
> On 2013/06/26, at 16:47, Koichi Suzuki <koi...@gm...> wrote:
>
>> I tested this patch against the latest master (without any pending
>> patches) and found that inherit fails in the linker machine environment.
>>
>> PFA the related files.

2013/6/26 Amit Khandekar <ami...@en...>:

> On 26 June 2013 10:34, Amit Khandekar <ami...@en...> wrote:
>
>> On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> wrote:
>>
>>> Hi Amit,
>>> From a cursory look, this looks much cleaner than Abbas's patch, so it
>>> looks to be the approach we should take. BTW, you need to update the
>>> original expected outputs as well, not just the alternate ones.
>>> Remember, the merge applies changes to the original expected outputs
>>> and not the alternate ones (added in XC), so we come to know about
>>> conflicts only when we apply changes to the original expected outputs.
>>
>> Right. Will look into it.
>
> All of the expected files except inherit.sql are xc_* tests, which don't
> have alternate files. I have made the same inherit_1.out changes in
> inherit.out. Attached is the revised patch.
>
> Suzuki-san, I was looking at the following diff in the inherit.sql
> failure that you attached from your local regression run:
>
>     ! Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1)
>     --- 1247,1253 ----
>     ! Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1)
>
> I am not sure whether this diff started appearing after you applied the
> patch. Can you please run the test once more after reinitializing the
> cluster? I am not getting this diff, although I ran serial_schedule, not
> the parallel one, and the diff itself looks harmless. As I mentioned, the
> actual temp schema name may differ.
>
>>> Regarding temporary namespaces, is it possible to schema-qualify them
>>> as always pg_temp, irrespective of the actual name?
>>
>> get_namespace_name() is used to deparse the schema name. To keep the
>> deparsing logic working for both local queries (e.g. for views) and
>> remote queries, we would need to push the fact that the deparsing is
>> being done for a remote query all the way from the uppermost function
>> (say pg_get_querydef) down to get_namespace_name(), which does not look
>> good.
>>
>> We need to think about a general solution for the existing issue of
>> deparsing temp table names. I am not sure of one. Maybe, after we
>> resolve the bug in the subject, the fix will not even require schema
>> qualification.

On Tue, Jun 25, 2013 at 8:02 PM, Amit Khandekar <ami...@en...> wrote:

> On 25 June 2013 19:59, Amit Khandekar <ami...@en...> wrote:
>
>> Attached is a patch that does schema qualification by overriding the
>> search_path just before deparsing in deparse_query().
>> PopOverrideSearchPath() at the end pops the temp path back off. The
>> transaction callback function already takes care of popping such a push
>> in case of transaction rollback.
>>
>> Unfortunately, we cannot apply this solution to temp tables. The problem
>> is that when a pg_temp schema is deparsed, it is deparsed into
>> pg_temp_1, pg_temp_2, etc., and these names are specific to the node. An
>> object in pg_temp_2 on the coordinator may be in pg_temp_1 on a
>> datanode, so the generated remote query may or may not work on the
>> datanode; it is totally unreliable.
>>
>> In fact, the issue of pg_temp_1 names in the deparsed remote query is
>> present even currently.
>>
>> But wherever it is a correctness issue *not* to schema-qualify a temp
>> object, I have kept the schema qualification. For example, a user can
>> set search_path to "s1, pg_temp", have obj1 in both s1 and pg_temp, and
>> want to refer to pg_temp.obj1. In such a case the remote query should
>> contain pg_temp_[1-9].obj1, although it may cause errors because of the
>> existing issue.
>>
>> So, the prepare-execute with search_path would remain there for temp
>> tables.
>
> I mean, the prepare-execute issue with search_path would remain there for
> temp tables.
>
>> I tried to run the regression by extracting the regression expected
>> output files from Abbas's patch, and the regression passes, including
>> plancache.
>>
>> I think for this release we should go ahead, keeping this issue open for
>> temp tables. This solution is an improvement and does not introduce any
>> new issues.
>>
>> Comments welcome.

On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> wrote:

> Hi Abbas,
> We are changing a lot of PostgreSQL deparsing code, which will create
> problems in future merges. Since this change is in the query deparsing
> logic, any errors here would affect EXPLAIN, pg_dump, etc. So this patch
> should again be a last resort.
>
> Please take a look at how view definitions are dumped. That will give a
> good idea of how PG schema-qualifies (or does not qualify) objects. Here
> is how the displayed view definition changes with the search path. Since
> the code that dumps views and displays their definitions is the same, the
> dumped view definition also changes with the search path. Thus pg_dump
> must be using some trick to always dump a consistent view definition (and
> hence a consistent deparsed query). Thanks, Amit, for the example.
>
>     create table ttt (id int);
>
>     postgres=# create domain dd int;
>     CREATE DOMAIN
>     postgres=# create view v2 as select id::dd from ttt;
>     CREATE VIEW
>     postgres=# set search_path TO '';
>     SET
>
>     postgres=# \d+ public.v2
>                     View "public.v2"
>      Column |   Type    | Modifiers | Storage | Description
>     --------+-----------+-----------+---------+-------------
>      id     | public.dd |           | plain   |
>     View definition:
>      SELECT ttt.id::public.dd AS id
>        FROM public.ttt;
>
>     postgres=# set search_path TO default;
>     SET
>     postgres=# show search_path;
>       search_path
>     ----------------
>      "$user",public
>     (1 row)
>
>     postgres=# \d+ public.v2
>                     View "public.v2"
>      Column | Type | Modifiers | Storage | Description
>     --------+------+-----------+---------+-------------
>      id     | dd   |           | plain   |
>     View definition:
>      SELECT ttt.id::dd AS id
>        FROM ttt;
>
> We need to leverage a similar mechanism here to reduce the PG footprint.
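The search_path-dependent qualification discussed above can be sketched in miniature. The following is a hypothetical toy model in plain Python, not PostgreSQL's actual ruleutils.c logic: an object is printed unqualified only when a bare name would already resolve to it under the current search_path. This is why the same view deparses differently as the path changes, and why a deparsed remote query is reliable only if the path is pinned or every name is qualified.

```python
def deparse_name(catalog, schema, name, search_path):
    """Qualify `name` unless a bare `name` already resolves to it.

    `catalog` maps schema -> set of object names it contains (a stand-in
    for the system catalogs); `search_path` is an ordered list of schemas.
    """
    for sp in search_path:
        if name in catalog.get(sp, set()):
            # A bare `name` resolves into schema `sp`; qualify only if
            # that is the wrong schema (i.e. `name` would be shadowed).
            return name if sp == schema else f"{schema}.{name}"
    # Not visible anywhere on the path: must qualify.
    return f"{schema}.{name}"

catalog = {"public": {"ttt", "dd"}}
print(deparse_name(catalog, "public", "ttt", []))          # public.ttt
print(deparse_name(catalog, "public", "ttt", ["public"]))  # ttt
```

This mirrors the \d+ example above: with an empty search_path the view text shows `public.ttt` and `public.dd`; with `"$user",public` it shows bare `ttt` and `dd`.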
On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <abb...@en...> wrote:

> Hi,
> As discussed in the last F2F meeting, here is an updated patch that
> provides schema qualification of the following objects in remote queries:
> tables, views, functions, types, and domains. Sequence functions never
> concern the datanodes, so schema qualification is not required for
> sequences. This solves the plancache test case failure and does not
> introduce any new failures. I have also attached some tests with results
> to aid review.
>
> Comments are welcome.

On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> wrote:

> Hi,
> Attached please find a WIP patch that provides the functionality of
> preparing a statement on the datanodes as soon as it is prepared on the
> coordinator. This is to take care of a test case in plancache that makes
> sure a change of search_path is ignored by replans. While the patch fixes
> this replan test case and the regression runs fine, there are still two
> problems I have to take care of.
>
> 1. This test case fails:
>
>     CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
>     INSERT INTO xc_alter_table_3 VALUES (1, 'a');
>     PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails
>
>     test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
>                                 QUERY PLAN
>     -------------------------------------------------------------------
>      Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
>        Node/s: data_node_1, data_node_2, data_node_3, data_node_4
>        Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE
>            ((xc_alter_table_3.ctid = $1) AND (xc_alter_table_3.xc_node_id = $2))
>        ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"
>            (cost=0.00..0.00 rows=1000 width=14)
>              Output: xc_alter_table_3.a, xc_alter_table_3.ctid,
>                      xc_alter_table_3.xc_node_id
>              Node/s: data_node_3
>              Remote query: SELECT a, ctid, xc_node_id FROM ONLY
>                  xc_alter_table_3 WHERE (a = 1)
>     (7 rows)
>
> The reason for the failure is that the select query selects three items,
> the first of which is an int, whereas the delete query compares $1 with a
> ctid. I am not sure how this works without prepare, but it fails when
> used with prepare.
>
> The reason for this planning is this section of code in the function
> pgxc_build_dml_statement:
>
>     else if (cmdtype == CMD_DELETE)
>     {
>         /*
>          * Since there is no data to update, the first param is going to
>          * be ctid.
>          */
>         ctid_param_num = 1;
>     }
>
> Amit/Ashutosh, can you suggest a fix for this problem? There are a number
> of possibilities:
> a) The select should not have selected column a.
> b) The DELETE should have referred to $2 and $3 for ctid and xc_node_id
>    respectively.
> c) Since the query works without PREPARE, we should make PREPARE work the
>    same way.
>
> 2. This test case in plancache fails:
>
>     -- Try it with a view, which isn't directly used in the resulting plan
>     -- but should trigger invalidation anyway
>     create table tab33 (a int, b int);
>     insert into tab33 values(1,2);
>     CREATE VIEW v_tab33 AS SELECT * FROM tab33;
>     PREPARE vprep AS SELECT * FROM v_tab33;
>     EXECUTE vprep;
>     CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
>     -- does not cause plan invalidation because views are never created
>     -- on datanodes
>     EXECUTE vprep;
>
> The reason for the failure is that views are never created on the
> datanodes, so plan invalidation is not triggered. This can be documented
> as an XC limitation.
>
> 3. I still have to add comments to the patch, and some ifdefs may be
> missing too.
>
> In addition to the patch, I have also attached some example Java programs
> that test some basic functionality through JDBC. I found that these
> programs work fine after my patch.
>
> 1. Prepared.java: issues parameterized delete, insert, and update through
>    JDBC. These are unnamed prepared statements, and they work fine.
> 2. NamedPrepared.java: issues two named prepared statements through JDBC;
>    works fine.
> 3. Retrieve.java: runs a simple select to verify results.
>
> The comments at the top of the files explain their usage.
>
> Comments are welcome.

On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt <abb...@en...> wrote:

> Attached please find an updated patch to fix the bug. The patch takes
> care of the bug and of the regression issues resulting from the changes
> made in the patch.
> Please note that the issue in the plancache test case still stands
> unsolved, because of the following test case (simplified, but taken from
> plancache.sql):
>
>     create schema s1 create table abc (f1 int);
>     create schema s2 create table abc (f1 int);
>
>     insert into s1.abc values(123);
>     insert into s2.abc values(456);
>
>     set search_path = s1;
>
>     prepare p1 as select f1 from abc;
>     execute p1; -- works fine, results in 123
>
>     set search_path = s2;
>     execute p1; -- works fine after the patch, results in 123
>
>     alter table s1.abc add column f2 float8; -- force replan
>     execute p1; -- fails
>
> The last execute should result in 123, whereas it results in 456. The
> reason is that the search path has already been changed at the datanode,
> and a replan means selecting from abc in s2.

On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat <ash...@en...> wrote:

> Huh! The beast bit us.
>
> I think the right solution here is either of two:
> 1. Take your previous patch to always use qualified names (but you need
>    to improve it so it does not affect the view dumps).
> 2. Prepare the statements at the datanode at the time of PREPARE.
>
> Is this test newly added in 9.2?

On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt <abb...@en...> wrote:

>> Is this test newly added in 9.2?
>
> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in
> March 2007.
>
>> Why didn't we see this issue the first time prepare was implemented? I
>> don't remember (but it was two years back).
>
> I was unable to locate the exact reason, but since statements were not
> being prepared on the datanodes due to a merge issue, this issue only
> surfaced now.

On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat <ash...@en...> wrote:

> Well, even though statements were not getting prepared on the datanodes
> (actually, prepared statements were not being used again and again), we
> never prepared them on the datanode at the time of preparing the
> statement. So this bug should have shown itself long back.

On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat <ash...@en...> wrote:

> Hi Abbas,
> I think the fix is on the right track. There are a couple of improvements
> we need to make here (but you may skip those if time doesn't permit).
>
> 1. We should have a status in the RemoteQuery node indicating whether the
>    query in the node should use the extended protocol or not, rather than
>    relying on the presence of a statement name, parameters, etc. Amit has
>    already added a status to that effect. We need to leverage it.
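The behavior being aimed at, preparing a named statement on the datanode once and reusing it on every subsequent execute instead of re-sending the query text, can be illustrated with a small hypothetical model. This is plain Python, not the actual XC code; `DatanodeConn` and the wire-log format are invented for illustration.

```python
# Toy model of a coordinator-to-datanode connection. A named statement is
# prepared on the datanode on first use only; later executes reuse the
# datanode-side plan instead of re-sending the SQL.
class DatanodeConn:
    def __init__(self):
        self.prepared = set()  # statement names already prepared remotely
        self.log = []          # messages "sent" over the wire

    def execute(self, stmt_name, sql):
        if stmt_name not in self.prepared:
            self.log.append(f"PREPARE {stmt_name} AS {sql}")
            self.prepared.add(stmt_name)
        self.log.append(f"EXECUTE {stmt_name}")

conn = DatanodeConn()
for _ in range(3):
    conn.execute("p1", "SELECT a, b FROM abc")
# One PREPARE followed by three EXECUTEs.
print(conn.log)
```

The bug under discussion (3607975) is the degenerate case: if the statement name is lost along the way, every execute falls back to sending the query afresh as an unnamed statement, losing the datanode-side plan reuse.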
On Tue, May 28, 2013 at 9:04 AM, Abbas Butt <abb...@en...> wrote:

> The patch fixes the dead-code issue that I described earlier. The code
> was dead because of two issues:
>
> 1. The function CompleteCachedPlan was wrongly setting stmt_name to NULL,
>    and this was the main reason ActivateDatanodeStatementOnNode was not
>    being called from pgxc_start_command_on_connection.
> 2. The function SetRemoteStatementName was wrongly assuming that a
>    prepared statement must have some parameters.
>
> Fixing these two issues makes sure that ActivateDatanodeStatementOnNode
> is now called and statements get prepared on the datanodes. This patch
> would fix bug 3607975. It would, however, not fix the test case I
> described in my previous email, for the reasons I described.

On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat <ash...@en...> wrote:

> Can you please explain what this fix does? It would help to have an
> elaborate explanation with code snippets.

On Thu, May 23, 2013 at 9:21 PM, Abbas Butt <abb...@en...> wrote:

> Hi,
>
> While working on the plancache test case, it was brought up as a review
> comment that solving bug id 3607975 should solve the problem of the test
> case. However, there is some confusion in the statement of bug id
> 3607975:
>
>     "When a user does a PREPARE and then EXECUTEs multiple times, the
>     coordinator keeps on preparing and executing the query on the
>     datanode every time, as against preparing once and executing multiple
>     times. This is because somehow the remote query is being prepared as
>     an unnamed statement."
> Consider this test case:
>
>     A. create table abc(a int, b int);
>     B. insert into abc values(11, 22);
>     C. prepare p1 as select * from abc;
>     D. execute p1;
>     E. execute p1;
>     F. execute p1;
>
> Here are the confusions:
>
> 1. The coordinator never prepares on a datanode in response to a PREPARE
>    issued by a user. In fact, step C does nothing on the datanodes. Step
>    D simply sends "SELECT a, b FROM abc" to all datanodes.
>
> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a new generic
>    plan, and steps E and F use the already-built generic plan. For
>    details, see the function GetCachedPlan. This means that executing a
>    prepared statement again and again does use cached plans and does not
>    re-prepare every time we issue an EXECUTE.

On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat <ash...@en...> wrote:

> The problem is not here. The problem is in do_query(), where somehow the
> name of the prepared statement gets wiped out and we keep on preparing
> unnamed statements at the datanode.

On Fri, May 24, 2013 at 9:01 AM, Abbas Butt <abb...@en...> wrote:

> We never prepare any named/unnamed statements on the datanode. I spent
> time looking at the code in do_query and the functions called from within
> do_query to handle prepared statements, but the code in
> pgxc_start_command_on_connection that handles statements prepared on
> datanodes is dead as of now. It is never called during a complete
> regression run. The function ActivateDatanodeStatementOnNode is never
> called. The way prepared statements are handled now is the same as I
> described earlier in the mail chain with the help of an example.
>
> The dead code was originally added by Mason in commit
> d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This
> code has changed a lot over the last two years. The commit does not
> contain any test cases, so I am not sure how it used to work back then.

On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat <ash...@en...> wrote:

> This code wasn't dead when I worked on prepared statements, so something
> has gone wrong in between. That's what we need to find out and fix.
> Not preparing statements on the datanode is not good for performance
> either.

On Sun, May 26, 2013 at 10:18 PM, Abbas Butt <abb...@en...> wrote:

> I was able to find the reason why the code was dead, and the attached
> (WIP) patch fixes the problem. It now ensures that statements are
> prepared on the datanodes whenever required. However, there is a problem
> in the way prepared statements are handled: unless a prepared statement
> is executed, it is never prepared on the datanodes, so changing the
> search path before executing the statement gives incorrect results. For
> example:
>
>     create schema s1 create table abc (f1 int) distribute by replication;
>     create schema s2 create table abc (f1 int) distribute by replication;
>
>     insert into s1.abc values(123);
>     insert into s2.abc values(456);
>     set search_path = s2;
>     prepare p1 as select f1 from abc;
>     set search_path = s1;
>     execute p1;
>
> The last execute results in 123, whereas it should have resulted in 456.
>
> I can finalize the attached patch by fixing any regression issues that
> may result, and that would fix 3607975 and improve performance; however,
> the above test case would still fail.

On Thu, May 23, 2013 at 9:21 PM, Abbas Butt <abb...@en...> wrote:

> My conclusion is that bug ID 3607975 is not reproducible.

On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat <ash...@en...> wrote:

> Did you verify it under the debugger? If that were the case, we would
> not have seen this problem when search_path changed between steps D
> and E.

On Fri, May 24, 2013 at 9:01 AM, Abbas Butt <abb...@en...> wrote:

> If the search path is changed between steps D and E, the problem occurs
> because when the RemoteQuery node is created, schema qualification is not
> added to the SQL statement sent to the datanode, but changes in the
> search path do get communicated to the datanode. The SQL statement is
> built when EXECUTE is issued for the first time and is reused on
> subsequent executes. The datanode is totally unaware that the SELECT it
> just received is due to an EXECUTE of a prepared statement that was
> prepared when the search path was something else.
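The failure mode in the example above, name resolution happening under whatever search_path is current at first execute rather than at PREPARE time, can be mimicked with a short hypothetical sketch. This is plain Python with invented names, not XC internals: the "coordinator" stores only the unqualified table name at PREPARE time, and the "datanode" resolves it lazily.

```python
# Two schemas, each with its own `abc`, as in the failing test case.
TABLES = {("s1", "abc"): 123, ("s2", "abc"): 456}

class Session:
    def __init__(self):
        self.search_path = ["public"]
        self.prepared = {}  # stmt name -> unqualified table name

    def prepare(self, name, table):
        # BUG being modeled: the table name is stored unqualified; the
        # search_path in effect *now* is not captured.
        self.prepared[name] = table

    def execute(self, name):
        table = self.prepared[name]
        for schema in self.search_path:  # resolved at execute time
            if (schema, table) in TABLES:
                return TABLES[(schema, table)]
        return None

s = Session()
s.search_path = ["s2"]
s.prepare("p1", "abc")   # user expects s2.abc, i.e. 456
s.search_path = ["s1"]
print(s.execute("p1"))   # 123: resolved under s1, the wrong schema
```

Schema-qualifying the deparsed query (or capturing the path at PREPARE time) removes the dependence on the path current at execute time.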
Comments are welcome.

--
Abbas
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

Follow us on Twitter
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more

_______________________________________________
Postgres-xc-developers mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
<inherit.out><inherit.sql><regression.diffs><regression.out>
From: Koichi Suzuki (鈴木 幸市) <ko...@in...> - 2013-06-26 07:51:51
It seems that the name of the materialized table may be environment-dependent. Any idea to make it environment-independent? Apparently, the result is correct but just does not match the expected ones.

Regards;
---
Koichi Suzuki

On 2013/06/26, at 16:47, Koichi Suzuki <koi...@gm...> wrote:

> I tested this patch for the latest master (without any pending patches) and found that inherit fails in the linker machine environment.
>
> PFA related files.
>
> Regards;
>
> ----------
> Koichi Suzuki
>
> 2013/6/26 Amit Khandekar <ami...@en...>:
>> On 26 June 2013 10:34, Amit Khandekar <ami...@en...> wrote:
>>> On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> wrote:
>>>> Hi Amit,
>>>> From a cursory look, this looks much cleaner than Abbas's patch, so it looks to be the approach we should take. BTW, you need to update the original outputs as well, instead of just the alternate expected outputs. Remember, the merge applies changes to the original expected outputs and not the alternate ones (added in XC); thus we come to know about conflicts only when we apply changes to the original expected outputs.
>>>
>>> Right. Will look into it.
>>
>> All of the expected files except inherit.sql are xc_* tests which don't have alternate files. I have made the same inherit_1.out changes onto inherit.out. Attached revised patch.
>>
>> Suzuki-san, I was looking at the following diff in the inherit.sql failure that you had attached from your local regression run:
>>
>>     ! Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1)
>>     --- 1247,1253 ----
>>     ! Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1)
>>
>> I am not sure if this has started coming after you applied the patch. Can you please once again run the test after reinitializing the cluster? I am not getting this diff, although I ran using serial_schedule, not parallel, and the diff itself looks harmless. As I mentioned, the actual temp schema name may differ.
>>>> Regarding temporary namespaces, is it possible to schema-qualify the temporary namespaces as always pg_temp, irrespective of the actual name?
>>>
>>> get_namespace_name() is used to deparse the schema name. In order to keep the deparsing logic working for both local queries (e.g. for views) and remote queries, we need to push in the context that the deparsing is being done for remote queries, and this needs to be done all the way from the uppermost function (say pg_get_querydef) down to get_namespace_name(), which does not look good.
>>>
>>> We need to think about some solution in general for the existing issue of deparsing temp table names. Not sure of any solution. Maybe, after we resolve the bug id in the subject, the fix may not even require the schema qualification.
>>>
>>> On 25 June 2013 19:59, Amit Khandekar <ami...@en...> wrote:
>>>> Attached is a patch that does schema qualification by overriding the search_path just before doing the deparse in deparse_query(). PopOverrideSearchPath() at the end pops out the temp path. Also, the transaction callback function already takes care of popping out such a push in case of transaction rollback.
>>>>
>>>> Unfortunately we cannot apply this solution to temp tables. The problem is that when a pg_temp schema is deparsed, it is deparsed into pg_temp_1, pg_temp_2 etc., and these names are specific to the node. An object in pg_temp_2 at the coordinator may be present in pg_temp_1 at the datanode, so the remote query generated may or may not work on the datanode; it is totally unreliable.
>>>>
>>>> In fact, the issue with pg_temp_1 names in the deparsed remote query is present even currently.
>>>> But wherever it is a correctness issue to *not* schema-qualify a temp object, I have kept the schema qualification. For example, a user can set search_path to "s1, pg_temp", have obj1 in both s1 and pg_temp, and want to refer to pg_temp.obj1. In such a case the remote query should have pg_temp_[1-9].obj1, although it may cause errors because of the existing issue.
>>>>
>>>> So, the prepare-execute issue with search_path would remain there for temp tables.
>>>>
>>>> I tried to run the regression by extracting regression expected output files from Abbas's patch, and regression passes, including plancache.
>>>>
>>>> I think for this release we should go ahead by keeping this issue open for temp tables. This solution is an improvement, and does not cause any new issues.
>>>>
>>>> Comments welcome.
>>>>
>>>> On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> wrote:
>>>>> Hi Abbas,
>>>>> We are changing a lot of PostgreSQL deparsing code, which would create problems in future merges. Since this change is in query deparsing logic, any errors here would affect EXPLAIN, pg_dump etc., so this patch should again be the last resort.
>>>>>
>>>>> Please take a look at how view definitions are dumped. That will give a good idea as to how PG schema-qualifies (or not) objects. Here's how the displayed view definition changes with the search path. Since the code to dump views and display definitions is the same, the view definition dumped also changes with the search path. Thus pg_dump must be using some trick to always dump a consistent view definition (and hence a deparsed query).
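The push/pop discipline described in this exchange — overriding search_path just before the deparse in deparse_query() and popping it afterwards — can be sketched in Python. This is illustrative only; in the actual patch the work is done in C with PushOverrideSearchPath/PopOverrideSearchPath, and the function names below are stand-ins:

```python
from contextlib import contextmanager

# Hypothetical stand-in for the backend's override-search-path stack.
search_path_stack = ['"$user", public']

@contextmanager
def override_search_path(path):
    """Override search_path around a deparse, restoring it afterwards."""
    search_path_stack.append(path)      # like PushOverrideSearchPath()
    try:
        yield
    finally:
        search_path_stack.pop()         # like PopOverrideSearchPath();
                                        # the transaction-abort callback
                                        # gives the same guarantee

def deparse_query(sql):
    # A real deparser would schema-qualify names; here we just record
    # the search_path in effect while deparsing.
    return (sql, search_path_stack[-1])

with override_search_path(''):          # '' forces full qualification
    stmt, path = deparse_query("select f1 from abc")

print(repr(path))         # '' -- deparsed with the override in effect
print(search_path_stack)  # ['"$user", public'] -- override popped again
```

The context-manager shape mirrors why the approach is robust: the pop happens even if deparsing raises, matching the transaction-callback cleanup mentioned in the patch description.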
Thanks Amit for the example.

    create table ttt (id int);

    postgres=# create domain dd int;
    CREATE DOMAIN
    postgres=# create view v2 as select id::dd from ttt;
    CREATE VIEW
    postgres=# set search_path TO '';
    SET

    postgres=# \d+ public.v2
                       View "public.v2"
     Column |   Type    | Modifiers | Storage | Description
    --------+-----------+-----------+---------+-------------
     id     | public.dd |           | plain   |
    View definition:
     SELECT ttt.id::public.dd AS id
       FROM public.ttt;

    postgres=# set search_path TO default;
    SET
    postgres=# show search_path;
      search_path
    ----------------
     "$user",public
    (1 row)

    postgres=# \d+ public.v2
              View "public.v2"
     Column | Type | Modifiers | Storage | Description
    --------+------+-----------+---------+-------------
     id     | dd   |           | plain   |
    View definition:
     SELECT ttt.id::dd AS id
       FROM ttt;

We need to leverage a similar mechanism here to reduce the PG footprint.

On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <abb...@en...> wrote:
> Hi,
> As discussed in the last F2F meeting, here is an updated patch that provides schema qualification of the following objects: Tables, Views, Functions, Types and Domains in case of remote queries. Sequence functions are never concerned with datanodes; hence schema qualification is not required in case of sequences. This solves the plancache test case failure issue and does not introduce any more failures. I have also attached some tests with results to aid in review.
>
> Comments are welcome.
> Regards
>
> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> wrote:
>> Hi,
>> Attached please find a WIP patch that provides the functionality of preparing the statement at the datanodes as soon as it is prepared on the coordinator. This is to take care of a test case in plancache that makes sure that a change of search_path is ignored by replans. While the patch fixes this replan test case and the regression works fine, there are still these two problems I have to take care of.
>>
>> 1. This test case fails:
>>
>>     CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
>>     INSERT INTO xc_alter_table_3 VALUES (1, 'a');
>>     PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails
>>
>>     test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
>>                                   QUERY PLAN
>>     -------------------------------------------------------------------
>>      Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
>>        Node/s: data_node_1, data_node_2, data_node_3, data_node_4
>>        Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE ((xc_alter_table_3.ctid = $1) AND (xc_alter_table_3.xc_node_id = $2))
>>        ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=14)
>>              Output: xc_alter_table_3.a, xc_alter_table_3.ctid, xc_alter_table_3.xc_node_id
>>              Node/s: data_node_3
>>              Remote query: SELECT a, ctid, xc_node_id FROM ONLY xc_alter_table_3 WHERE (a = 1)
>>     (7 rows)
>>
>> The reason for the failure is that the select query is selecting 3 items, the first of which is an
>> int, whereas the delete query is comparing $1 with a ctid. I am not sure how this works without prepare, but it fails when used with prepare.
>>
>> The reason for this planning is this section of code in function pgxc_build_dml_statement:
>>
>>     else if (cmdtype == CMD_DELETE)
>>     {
>>         /*
>>          * Since there is no data to update, the first param is going to be
>>          * ctid.
>>          */
>>         ctid_param_num = 1;
>>     }
>>
>> Amit/Ashutosh, can you suggest a fix for this problem? There are a number of possibilities:
>> a) The select should not have selected column a.
>> b) The DELETE should have referred to $2 and $3 for ctid and xc_node_id respectively.
>> c) Since the query works without PREPARE, we should make PREPARE work the same way.
>>
>> 2. This test case in plancache fails:
>>
>>     -- Try it with a view, which isn't directly used in the resulting plan
>>     -- but should trigger invalidation anyway
>>     create table tab33 (a int, b int);
>>     insert into tab33 values(1,2);
>>     CREATE VIEW v_tab33 AS SELECT * FROM tab33;
>>     PREPARE vprep AS SELECT * FROM v_tab33;
>>     EXECUTE vprep;
>>     CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
>>     -- does not cause plan invalidation because views are never created on datanodes
>>     EXECUTE vprep;
>>
>> The reason for the failure is that views are never created on the datanodes, hence plan invalidation is not triggered. This can be documented as an XC limitation.
>>
>> 3. I still have to add comments in the patch, and some ifdefs may be missing too.
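For what it's worth, possibility (b) amounts to numbering the internal ctid/xc_node_id parameters after any user-supplied parameters, instead of hard-coding ctid_param_num = 1. A rough sketch of that numbering rule in Python (illustrative only; the real logic is C code in pgxc_build_dml_statement, and this helper name is invented):

```python
def delete_param_numbers(n_user_params):
    """Return (ctid_param_num, node_id_param_num) for a generated
    remote DELETE, placing internal params after user params."""
    ctid_param_num = n_user_params + 1
    node_id_param_num = n_user_params + 2
    return ctid_param_num, node_id_param_num

# Plain DELETE (no user params): matches the current hard-coded behavior.
print(delete_param_numbers(0))  # (1, 2)
# PREPARE d3 AS DELETE ... WHERE a = $1: ctid must become $2, node id $3.
print(delete_param_numbers(1))  # (2, 3)
```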
>> In addition to the patch I have also attached some example Java programs that test some basic functionality through JDBC. I found that these programs are working fine after my patch.
>>
>> 1. Prepared.java: issues parameterized delete, insert and update through JDBC. These are un-named prepared statements and work fine.
>> 2. NamedPrepared.java: issues two named prepared statements through JDBC and works fine.
>> 3. Retrieve.java: runs a simple select to verify results.
>>
>> The comments on top of the files explain their usage.
>>
>> Comments are welcome.
>>
>> Thanks
>> Regards
>>
>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat <ash...@en...> wrote:
>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt <abb...@en...> wrote:
>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat <ash...@en...> wrote:
>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt <abb...@en...> wrote:
>>>>>> Attached please find an updated patch to fix the bug. The patch takes care of the bug and the regression issues resulting from the changes done in the patch.
>>>>>> Please note that the issue in test case plancache still stands unsolved, because of the following test case (simplified but taken from plancache.sql):
>>>>>>
>>>>>>     create schema s1 create table abc (f1 int);
>>>>>>     create schema s2 create table abc (f1 int);
>>>>>>
>>>>>>     insert into s1.abc values(123);
>>>>>>     insert into s2.abc values(456);
>>>>>>
>>>>>>     set search_path = s1;
>>>>>>
>>>>>>     prepare p1 as select f1 from abc;
>>>>>>     execute p1; -- works fine, results in 123
>>>>>>
>>>>>>     set search_path = s2;
>>>>>>     execute p1; -- works fine after the patch, results in 123
>>>>>>
>>>>>>     alter table s1.abc add column f2 float8; -- force replan
>>>>>>     execute p1; -- fails
>>>>>
>>>>> Huh! The beast bit us.
>>>>>
>>>>> I think the right solution here is either of two:
>>>>> 1. Take your previous patch to always use qualified names (but you need to improve it not to affect the view dumps).
>>>>> 2. Prepare the statements at the datanode at the time of prepare.
>>>>>
>>>>> Is this test added new in 9.2?
>>>>
>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in March 2007.
>>>>
>>>>> Why didn't we see this issue the first time prepare was implemented? I don't remember (but it was two years back).
>>>>
>>>> I was unable to locate the exact reason, but since statements were not being prepared on datanodes due to a merge issue, this issue just surfaced up.
>>> Well, even though statements were not getting prepared (actually, prepared statements were not being used again and again) on datanodes, we never prepared them on the datanode at the time of preparing the statement. So this bug should have shown itself long back.
>>>
>>>>>> The last execute should result in 123, whereas it results in 456. The reason is that the search path has already been changed at the datanode, and a replan would mean select from abc in s2.
>>>>>>
>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat <ash...@en...> wrote:
>>>>>>> Hi Abbas,
>>>>>>> I think the fix is on the right track. There are a couple of improvements that we need to do here (but you may not do those if time doesn't permit).
>>>>>>>
>>>>>>> 1. We should have a status in the RemoteQuery node as to whether the query in the node should use the extended protocol or not, rather than relying on the presence of a statement name and parameters etc. Amit has already added a status to that effect. We need to leverage it.
>>>>>>>
>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt <abb...@en...> wrote:
>>>>>>>> The patch fixes the dead code issue that I described earlier. The code was dead because of two issues:
>>>>>>>>
>>>>>>>> 1.
>>>>>>>> The function CompleteCachedPlan was wrongly setting stmt_name to NULL, and this was the main reason ActivateDatanodeStatementOnNode was not being called in the function pgxc_start_command_on_connection.
>>>>>>>> 2. The function SetRemoteStatementName was wrongly assuming that a prepared statement must have some parameters.
>>>>>>>>
>>>>>>>> Fixing these two issues makes sure that the function ActivateDatanodeStatementOnNode is now called and statements get prepared on the datanode. This patch would fix bug 3607975. It would, however, not fix the test case I described in my previous email, for the reasons I described.
>>>>>>>>
>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat <ash...@en...> wrote:
>>>>>>>>> Can you please explain what this fix does? It would help to have an elaborate explanation with code snippets.
On Sun, May 26, 2013 at 10:18 PM, Abbas Butt <abb...@en...> wrote:
> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat <ash...@en...> wrote:
>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt <abb...@en...> wrote:
>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat <ash...@en...> wrote:
>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt <abb...@en...> wrote:
>>>>> Hi,
>>>>> While working on test case plancache, it was brought up as a review comment that solving bug id 3607975 should solve the problem of the test case. However, there is some confusion in the statement of bug id 3607975:
>>>>>
>>>>> "When a user does a PREPARE and then EXECUTEs multiple times, the coordinator keeps on preparing and executing the query on the datanode all the times, as against preparing once and executing multiple times. This is because somehow the remote query is being prepared as an unnamed statement."
>>>>>
>>>>> Consider this test case:
>>>>>
>>>>> A.
create table abc(a int, b int);
>>>>> B. insert into abc values(11, 22);
>>>>> C. prepare p1 as select * from abc;
>>>>> D. execute p1;
>>>>> E. execute p1;
>>>>> F. execute p1;
>>>>>
>>>>> Here are the confusions:
>>>>>
>>>>> 1. The coordinator never prepares on the datanode in response to a prepare issued by a user. In fact, step C does nothing on the datanodes. Step D simply sends "SELECT a, b FROM abc" to all datanodes.
>>>>>
>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a new generic plan, and steps E and F use the already-built generic plan. For details, see function GetCachedPlan. This means that executing a prepared statement again and again does use cached plans, and does not prepare again and again every time we issue an execute.
>>>>
>>>> The problem is not here. The problem is in do_query(), where somehow the name of the prepared statement gets wiped out and we keep on preparing unnamed statements at the datanode.
>>>
>>> We never prepare any named/unnamed statements on the datanode.
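The plan-cache behavior described in point 2 of that message (BuildCachedPlan on the first execute, reuse on later executes) can be checked with a toy model. This is Python and purely illustrative; the real logic lives in PostgreSQL's plancache.c (GetCachedPlan/BuildCachedPlan), and the class below is an invented stand-in:

```python
# Toy model of coordinator-side plan caching for a prepared statement.
class CachedPlanSource:
    def __init__(self, query):
        self.query = query
        self.generic_plan = None
        self.builds = 0

    def get_cached_plan(self):
        if self.generic_plan is None:        # BuildCachedPlan on first use
            self.builds += 1
            self.generic_plan = f"PLAN({self.query})"
        return self.generic_plan             # reused on later EXECUTEs

p1 = CachedPlanSource("select * from abc")
for _ in range(3):                            # steps D, E, F
    p1.get_cached_plan()
print(p1.builds)  # 1 -- planned once, executed three times
```

This matches the observation above: repeated EXECUTEs reuse the cached generic plan on the coordinator, so the repeated-preparation symptom of bug 3607975 must come from elsewhere (the datanode interaction in do_query).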
>>> I spent time looking at the code written in do_query and the functions called from within do_query to handle prepared statements, but the code written in pgxc_start_command_on_connection to handle statements prepared on datanodes is dead as of now. It is never called during the complete regression run. The function ActivateDatanodeStatementOnNode is never called. The way prepared statements are being handled now is the same as I described earlier in the mail chain with the help of an example.
>>>
>>> The code that is dead was originally added by Mason through commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This code has been changed a lot over the last two years. This commit does not contain any test cases, so I am not sure how it used to work back then.
>>
>> This code wasn't dead when I worked on prepared statements. So something has gone wrong in between. That's what we need to find out and fix. Not preparing statements on the datanode is not good for performance either.
>
> I was able to find the reason why the code was dead, and the attached patch (WIP) fixes the problem. This would now ensure that statements are prepared on datanodes whenever required.
> >>> >>>>>>>>>>>> However there is a > >>> >>>>>>>>>>>> problem in the way prepared statements are handled. The > >>> >>>>>>>>>>>> problem is that > >>> >>>>>>>>>>>> unless a prepared statement is executed it is never prepared > >>> >>>>>>>>>>>> on datanodes, > >>> >>>>>>>>>>>> hence changing the path before executing the statement gives > >>> >>>>>>>>>>>> us incorrect > >>> >>>>>>>>>>>> results. For Example > >>> >>>>>>>>>>>> > >>> >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by > >>> >>>>>>>>>>>> replication; > >>> >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by > >>> >>>>>>>>>>>> replication; > >>> >>>>>>>>>>>> > >>> >>>>>>>>>>>> insert into s1.abc values(123); > >>> >>>>>>>>>>>> insert into s2.abc values(456); > >>> >>>>>>>>>>>> set search_path = s2; > >>> >>>>>>>>>>>> prepare p1 as select f1 from abc; > >>> >>>>>>>>>>>> set search_path = s1; > >>> >>>>>>>>>>>> execute p1; > >>> >>>>>>>>>>>> > >>> >>>>>>>>>>>> The last execute results in 123, where as it should have > >>> >>>>>>>>>>>> resulted > >>> >>>>>>>>>>>> in 456. > >>> >>>>>>>>>>>> I can finalize the attached patch by fixing any regression > >>> >>>>>>>>>>>> issues > >>> >>>>>>>>>>>> that may result and that would fix 3607975 and improve > >>> >>>>>>>>>>>> performance however > >>> >>>>>>>>>>>> the above test case would still fail. > >>> >>>>>>>>>>>> > >>> >>>>>>>>>>>>> > >>> >>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not > >>> >>>>>>>>>>>>>>>> reproducible. > >>> >>>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>> Did you verify it under the debugger? If that would have > >>> >>>>>>>>>>>>>>> been > >>> >>>>>>>>>>>>>>> the case, we would not have seen this problem if > >>> >>>>>>>>>>>>>>> search_path changed in > >>> >>>>>>>>>>>>>>> between steps D and E. 
> >>> >>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>> If search path is changed between steps D & E, the problem > >>> >>>>>>>>>>>>>> occurs because when the remote query node is created, > >>> >>>>>>>>>>>>>> schema qualification > >>> >>>>>>>>>>>>>> is not added in the sql statement to be sent to the > >>> >>>>>>>>>>>>>> datanode, but changes in > >>> >>>>>>>>>>>>>> search path do get communicated to the datanode. The sql > >>> >>>>>>>>>>>>>> statement is built > >>> >>>>>>>>>>>>>> when execute is issued for the first time and is reused on > >>> >>>>>>>>>>>>>> subsequent > >>> >>>>>>>>>>>>>> executes. The datanode is totally unaware that the select > >>> >>>>>>>>>>>>>> that it just > >>> >>>>>>>>>>>>>> received is due to an execute of a prepared statement that > >>> >>>>>>>>>>>>>> was prepared when > >>> >>>>>>>>>>>>>> search path was some thing else. > >>> >>>>>>>>>>>>>> > >>> >>>>>>>>>>>>> > >>> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, would > >>> >>>>>>>>>>>>> fix > >>> >>>>>>>>>>>>> the problem, since the statement will get prepared at the > >>> >>>>>>>>>>>>> datanode, with the > >>> >>>>>>>>>>>>> same search path settings, as it would on the coordinator. > >>> >>>>>>>>>>>>> > >>> >>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>>> > >>> >>>>>>>>>>>>>>>> Comments are welcome. 
> >>> >>>>>>>>>>>>>>>> --
> >>> >>>>>>>>>>>>>>>> Abbas
> >>> >>>>>>>>>>>>>>>> Architect
> >>> >>>>>>>>>>>>>>>> Ph: 92.334.5100153
> >>> >>>>>>>>>>>>>>>> Skype ID: gabbasb
> >>> >>>>>>>>>>>>>>>> www.enterprisedb.com
> >>> >>>>>>>>>>>>>>>> _______________________________________________
> >>> >>>>>>>>>>>>>>>> Postgres-xc-developers mailing list
> >>> >>>>>>>>>>>>>>>> Pos...@li...
> >>> >>>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
>
> <inherit.out><inherit.sql><regression.diffs><regression.out>
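The stale-plan failure in the quoted example (PREPARE under search_path s2, then EXECUTE under s1 returning 123 instead of 456) can be sketched outside the server. Everything below is a toy model: the dictionaries and function names are illustrative, not Postgres-XC code.

```python
# Toy model of the bug: the coordinator deparses the remote query lazily at
# first EXECUTE and does not schema-qualify "abc"; the datanode then resolves
# "abc" against whatever search_path is current, not the one at PREPARE time.
datanode = {"s1": {"abc": [123]}, "s2": {"abc": [456]}}

def resolve(table, search_path):
    # Datanode-side name resolution: first schema on the path wins.
    for schema in search_path:
        if table in datanode[schema]:
            return datanode[schema][table]
    raise LookupError(table)

search_path = ["s2"]            # PREPARE p1 AS SELECT f1 FROM abc
cached_sql = None               # remote query text, built lazily

def execute_p1():
    global cached_sql
    if cached_sql is None:
        cached_sql = "SELECT f1 FROM abc"   # no schema qualification
    return resolve("abc", search_path)

search_path = ["s1"]            # SET search_path = s1 before first EXECUTE
assert execute_p1() == [123]    # the user expected 456 (prepared under s2)
```

Preparing the statement on the datanode at PREPARE time, or schema-qualifying the deparsed text, both remove this dependence on the execute-time search_path.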
From: Amit K. <ami...@en...> - 2013-06-26 07:22:06
On 26 June 2013 10:34, Amit Khandekar <ami...@en...> wrote:
> On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> wrote:
>> Hi Amit,
>> From a cursory look, this looks much cleaner than Abbas's patch. So, it
>> looks to be the approach we should take. BTW, you need to update the
>> original outputs as well, instead of just the alternate expected outputs.
>> Remember, the merge applies changes to the original expected outputs and
>> not the alternate ones (added in XC), thus we come to know about conflicts
>> only when we apply changes to the original expected outputs.
>
> Right. Will look into it.

All of the expected files except inherit.sql are xc_* tests, which don't
have alternate files. I have made the same inherit_1.out changes onto
inherit.out. Attached revised patch.

Suzuki-san, I was looking at the following diff in the inherit.sql failure
that you had attached from your local regression run:

!   Hash Cond: (pg_temp_2.patest0.id = int4_tbl.f1)
--- 1247,1253 ----
!   Hash Cond: (pg_temp_7.patest0.id = int4_tbl.f1)

I am not sure whether this diff started appearing after you applied the
patch. Can you please run the test once again after reinitializing the
cluster? I am not getting this diff, although I ran using serial_schedule,
not parallel, and the diff itself looks harmless. As I mentioned, the
actual temp schema name may differ.

>> Regards to temporary namespaces, is it possible to schema-qualify the
>> temporary namespaces as always pg_temp irrespective of the actual name?
>
> get_namespace_name() is used to deparse the schema name. In order to
> keep the deparsing logic working for both local queries (e.g. for views)
> and remote queries, we need to push in the context that the deparsing is
> being done for remote queries, and this needs to be done all the way from
> the uppermost function (say pg_get_querydef) up to get_namespace_name(),
> which does not look good.
> We need to think about some solution in general for the existing issue
> of deparsing temp table names. Not sure of any solution. Maybe, after
> we resolve the bug id in the subject, the fix may not even require
> schema qualification.
>
>> On Tue, Jun 25, 2013 at 8:02 PM, Amit Khandekar
>> <ami...@en...> wrote:
>>>
>>> On 25 June 2013 19:59, Amit Khandekar <ami...@en...> wrote:
>>> > Attached is a patch that does schema qualification by overriding the
>>> > search_path just before doing the deparse in deparse_query().
>>> > PopOverrideSearchPath() in the end pops out the temp path. Also, the
>>> > transaction callback function already takes care of popping out such
>>> > a push in case of transaction rollback.
>>> >
>>> > Unfortunately we cannot apply this solution to temp tables. The
>>> > problem is, when a pg_temp schema is deparsed, it is deparsed into
>>> > pg_temp_1, pg_temp_2 etc., and these names are specific to the node.
>>> > An object in pg_temp_2 at the coordinator may be present in pg_temp_1
>>> > at a datanode. So the remote query generated may or may not work on
>>> > the datanode, which is totally unreliable.
>>> >
>>> > In fact, the issue with pg_temp_1 names in the deparsed remote query
>>> > is present even currently.
>>> >
>>> > But wherever it is a correctness issue to *not* schema-qualify a temp
>>> > object, I have kept the schema qualification.
>>> > For example, a user can set search_path to "s1, pg_temp",
>>> > have obj1 in both s1 and pg_temp, and want to refer to pg_temp.obj1.
>>> > In such a case, the remote query should have pg_temp_[1-9].obj1,
>>> > although it may cause errors because of the existing issue.
>>> >
>>> > So, the prepare-execute with search_path would remain there for temp
>>> > tables.
>>> I mean, the prepare-execute issue with search_path would remain there
>>> for temp tables.
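The per-node temp schema hazard described above can be made concrete with a toy mapping; the node names and pg_temp numbers below are invented for illustration, not taken from a real cluster.

```python
# Each backend gets its own pg_temp_<N> schema, and N is assigned per node,
# so the coordinator's physical temp schema name need not match any datanode's.
physical_temp_schema = {"coord": "pg_temp_2", "dn1": "pg_temp_1", "dn2": "pg_temp_5"}

def deparse_for_remote(table, node="coord"):
    # Deparsing with the local physical name bakes "pg_temp_2" into the
    # remote query even though the datanode backend may own pg_temp_1.
    return "SELECT * FROM {}.{}".format(physical_temp_schema[node], table)

sql = deparse_for_remote("patest0")
assert sql == "SELECT * FROM pg_temp_2.patest0"
assert physical_temp_schema["dn1"] != "pg_temp_2"   # unreliable on dn1
```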
>>> >>> > >>> > I tried to run the regression by extracting regression expected output >>> > files from Abbas's patch , and regression passes, including plancache. >>> > >>> > I think for this release, we should go ahead by keeping this issue >>> > open for temp tables. This solution is an improvement, and does not >>> > cause any new issues. >>> > >>> > Comments welcome. >>> > >>> > >>> > On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> >>> > wrote: >>> >> Hi Abbas, >>> >> We are changing a lot of PostgreSQL deparsing code, which would create >>> >> problems in the future merges. Since this change is in query deparsing >>> >> logic >>> >> any errors here would affect, EXPLAIN/ pg_dump etc. So, this patch >>> >> should >>> >> again be the last resort. >>> >> >>> >> Please take a look at how view definitions are dumped. That will give a >>> >> good >>> >> idea as to how PG schema-qualifies (or not) objects. Here's how view >>> >> definitions displayed changes with search path. Since the code to dump >>> >> views >>> >> and display definitions is same, the view definition dumped also >>> >> changes >>> >> with the search path. Thus pg_dump must be using some trick to always >>> >> dump a >>> >> consistent view definition (and hence a deparsed query). Thanks Amit >>> >> for the >>> >> example. 
>>> >> >>> >> create table ttt (id int); >>> >> >>> >> postgres=# create domain dd int; >>> >> CREATE DOMAIN >>> >> postgres=# create view v2 as select id::dd from ttt; >>> >> CREATE VIEW >>> >> postgres=# set search_path TO ''; >>> >> SET >>> >> >>> >> postgres=# \d+ public.v2 >>> >> View "public.v2" >>> >> Column | Type | Modifiers | Storage | Description >>> >> --------+-----------+--------- >>> >> --+---------+------------- >>> >> id | public.dd | | plain | >>> >> View definition: >>> >> SELECT ttt.id::public.dd AS id >>> >> FROM public.ttt; >>> >> >>> >> postgres=# set search_path TO default ; >>> >> SET >>> >> postgres=# show search_path ; >>> >> search_path >>> >> ---------------- >>> >> "$user",public >>> >> (1 row) >>> >> >>> >> postgres=# \d+ public.v2 >>> >> View "public.v2" >>> >> Column | Type | Modifiers | Storage | Description >>> >> --------+------+-----------+---------+------------- >>> >> id | dd | | plain | >>> >> View definition: >>> >> SELECT ttt.id::dd AS id >>> >> FROM ttt; >>> >> >>> >> We need to leverage similar mechanism here to reduce PG footprint. >>> >> >>> >> >>> >> On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt >>> >> <abb...@en...> >>> >> wrote: >>> >>> >>> >>> Hi, >>> >>> As discussed in the last F2F meeting, here is an updated patch that >>> >>> provides schema qualification of the following objects: Tables, Views, >>> >>> Functions, Types and Domains in case of remote queries. >>> >>> Sequence functions are never concerned with datanodes hence, schema >>> >>> qualification is not required in case of sequences. >>> >>> This solves plancache test case failure issue and does not introduce >>> >>> any >>> >>> more failures. >>> >>> I have also attached some tests with results to aid in review. >>> >>> >>> >>> Comments are welcome. 
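The \d+ transcript above shows the rule the deparser follows: an object is printed unqualified only when a lookup along the current search_path would find that very object, which is why an empty search_path (pg_dump's trick) forces full qualification. A rough sketch of that rule (the function name is ours, not the server's):

```python
def maybe_qualify(schema, name, search_path, visible):
    # visible maps each schema to the names it exposes; print unqualified
    # only if a search along search_path would resolve to this very object.
    for s in search_path:
        if name in visible.get(s, set()):
            return name if s == schema else "{}.{}".format(schema, name)
    return "{}.{}".format(schema, name)

visible = {"public": {"ttt", "dd"}}
# search_path = "$user",public  ->  unqualified, as in the second \d+
assert maybe_qualify("public", "ttt", ["public"], visible) == "ttt"
# search_path = ''  ->  fully qualified, as in the first \d+
assert maybe_qualify("public", "ttt", [], visible) == "public.ttt"
```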
>>> >>> >>> >>> Regards >>> >>> >>> >>> >>> >>> >>> >>> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt >>> >>> <abb...@en...> >>> >>> wrote: >>> >>>> >>> >>>> Hi, >>> >>>> Attached please find a WIP patch that provides the functionality of >>> >>>> preparing the statement at the datanodes as soon as it is prepared >>> >>>> on the coordinator. >>> >>>> This is to take care of a test case in plancache that makes sure that >>> >>>> change of search_path is ignored by replans. >>> >>>> While the patch fixes this replan test case and the regression works >>> >>>> fine >>> >>>> there are still these two problems I have to take care of. >>> >>>> >>> >>>> 1. This test case fails >>> >>>> >>> >>>> CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE >>> >>>> BY >>> >>>> HASH(a); >>> >>>> INSERT INTO xc_alter_table_3 VALUES (1, 'a'); >>> >>>> PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails >>> >>>> >>> >>>> test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1; >>> >>>> QUERY PLAN >>> >>>> >>> >>>> ------------------------------------------------------------------- >>> >>>> Delete on public.xc_alter_table_3 (cost=0.00..0.00 rows=1000 >>> >>>> width=14) >>> >>>> Node/s: data_node_1, data_node_2, data_node_3, data_node_4 >>> >>>> Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE >>> >>>> ((xc_alter_table_3.ctid = $1) AND >>> >>>> (xc_alter_table_3.xc_node_id = $2)) >>> >>>> -> Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_" >>> >>>> (cost=0.00..0.00 rows=1000 width=14) >>> >>>> Output: xc_alter_table_3.a, xc_alter_table_3.ctid, >>> >>>> xc_alter_table_3.xc_node_id >>> >>>> Node/s: data_node_3 >>> >>>> Remote query: SELECT a, ctid, xc_node_id FROM ONLY >>> >>>> xc_alter_table_3 WHERE (a = 1) >>> >>>> (7 rows) >>> >>>> >>> >>>> The reason of the failure is that the select query is selecting 3 >>> >>>> items, the first of which is an int, >>> >>>> whereas the delete query is comparing $1 with a ctid. 
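The mismatch in the quoted EXPLAIN can be stated mechanically: pgxc_build_dml_statement assumes ctid is $1 for a DELETE, while the inner Data Node Scan ships three columns with ctid second. A toy check (the column list is copied from the plan above; nothing here is XC code):

```python
# Columns shipped by the inner "Data Node Scan" in the quoted EXPLAIN:
shipped = ["a", "ctid", "xc_node_id"]

# What pgxc_build_dml_statement assumes for DELETE: ctid is $1.
assumed_ctid_param = 1

# The position ctid actually occupies among shipped columns (1-based):
actual_ctid_param = shipped.index("ctid") + 1

# The mismatch explains why the remote DELETE compares $1 against a ctid
# while $1 is actually bound to the int column "a" at bind time:
assert assumed_ctid_param == 1
assert actual_ctid_param == 2
```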
>>> >>>> I am not sure how this works without prepare, but it fails when >>> >>>> used >>> >>>> with prepare. >>> >>>> >>> >>>> The reason of this planning is this section of code in function >>> >>>> pgxc_build_dml_statement >>> >>>> else if (cmdtype == CMD_DELETE) >>> >>>> { >>> >>>> /* >>> >>>> * Since there is no data to update, the first param is going >>> >>>> to >>> >>>> be >>> >>>> * ctid. >>> >>>> */ >>> >>>> ctid_param_num = 1; >>> >>>> } >>> >>>> >>> >>>> Amit/Ashutosh can you suggest a fix for this problem? >>> >>>> There are a number of possibilities. >>> >>>> a) The select should not have selected column a. >>> >>>> b) The DELETE should have referred to $2 and $3 for ctid and >>> >>>> xc_node_id respectively. >>> >>>> c) Since the query works without PREPARE, we should make PREPARE >>> >>>> work >>> >>>> the same way. >>> >>>> >>> >>>> >>> >>>> 2. This test case in plancache fails. >>> >>>> >>> >>>> -- Try it with a view, which isn't directly used in the resulting >>> >>>> plan >>> >>>> -- but should trigger invalidation anyway >>> >>>> create table tab33 (a int, b int); >>> >>>> insert into tab33 values(1,2); >>> >>>> CREATE VIEW v_tab33 AS SELECT * FROM tab33; >>> >>>> PREPARE vprep AS SELECT * FROM v_tab33; >>> >>>> EXECUTE vprep; >>> >>>> CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33; >>> >>>> -- does not cause plan invalidation because views are never >>> >>>> created >>> >>>> on datanodes >>> >>>> EXECUTE vprep; >>> >>>> >>> >>>> and the reason of the failure is that views are never created on >>> >>>> the >>> >>>> datanodes hence plan invalidation is not triggered. >>> >>>> This can be documented as an XC limitation. >>> >>>> >>> >>>> 3. I still have to add comments in the patch and some ifdefs may be >>> >>>> missing too. >>> >>>> >>> >>>> >>> >>>> In addition to the patch I have also attached some example Java >>> >>>> programs >>> >>>> that test the some basic functionality through JDBC. 
I found that >>> >>>> these >>> >>>> programs are working fine after my patch. >>> >>>> >>> >>>> 1. Prepared.java : Issues parameterized delete, insert and update >>> >>>> through >>> >>>> JDBC. These are un-named prepared statements and works fine. >>> >>>> 2. NamedPrepared.java : Issues two named prepared statements through >>> >>>> JDBC >>> >>>> and works fine. >>> >>>> 3. Retrieve.java : Runs a simple select to verify results. >>> >>>> The comments on top of the files explain their usage. >>> >>>> >>> >>>> Comments are welcome. >>> >>>> >>> >>>> Thanks >>> >>>> Regards >>> >>>> >>> >>>> >>> >>>> >>> >>>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat >>> >>>> <ash...@en...> wrote: >>> >>>>> >>> >>>>> >>> >>>>> >>> >>>>> >>> >>>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt >>> >>>>> <abb...@en...> wrote: >>> >>>>>> >>> >>>>>> >>> >>>>>> >>> >>>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat >>> >>>>>> <ash...@en...> wrote: >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt >>> >>>>>>> <abb...@en...> wrote: >>> >>>>>>>> >>> >>>>>>>> Attached please find updated patch to fix the bug. The patch >>> >>>>>>>> takes >>> >>>>>>>> care of the bug and the regression issues resulting from the >>> >>>>>>>> changes done in >>> >>>>>>>> the patch. 
Please note that the issue in test case plancache >>> >>>>>>>> still stands >>> >>>>>>>> unsolved because of the following test case (simplified but taken >>> >>>>>>>> from >>> >>>>>>>> plancache.sql) >>> >>>>>>>> >>> >>>>>>>> create schema s1 create table abc (f1 int); >>> >>>>>>>> create schema s2 create table abc (f1 int); >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> insert into s1.abc values(123); >>> >>>>>>>> insert into s2.abc values(456); >>> >>>>>>>> >>> >>>>>>>> set search_path = s1; >>> >>>>>>>> >>> >>>>>>>> prepare p1 as select f1 from abc; >>> >>>>>>>> execute p1; -- works fine, results in 123 >>> >>>>>>>> >>> >>>>>>>> set search_path = s2; >>> >>>>>>>> execute p1; -- works fine after the patch, results in 123 >>> >>>>>>>> >>> >>>>>>>> alter table s1.abc add column f2 float8; -- force replan >>> >>>>>>>> execute p1; -- fails >>> >>>>>>>> >>> >>>>>>> >>> >>>>>>> Huh! The beast bit us. >>> >>>>>>> >>> >>>>>>> I think the right solution here is either of two >>> >>>>>>> 1. Take your previous patch to always use qualified names (but you >>> >>>>>>> need to improve it not to affect the view dumps) >>> >>>>>>> 2. Prepare the statements at the datanode at the time of prepare. >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> Is this test added new in 9.2? >>> >>>>>> >>> >>>>>> >>> >>>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c >>> >>>>>> in >>> >>>>>> March 2007. >>> >>>>>> >>> >>>>>>> >>> >>>>>>> Why didn't we see this issue the first time prepare was >>> >>>>>>> implemented? I >>> >>>>>>> don't remember (but it was two years back). >>> >>>>>> >>> >>>>>> >>> >>>>>> I was unable to locate the exact reason but since statements were >>> >>>>>> not >>> >>>>>> being prepared on datanodes due to a merge issue this issue just >>> >>>>>> surfaced >>> >>>>>> up. 
>>> >>>>>> >>> >>>>> >>> >>>>> >>> >>>>> Well, even though statements were not getting prepared (actually >>> >>>>> prepared statements were not being used again and again) on >>> >>>>> datanodes, we >>> >>>>> never prepared them on datanode at the time of preparing the >>> >>>>> statement. So, >>> >>>>> this bug should have shown itself long back. >>> >>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>>> >>> >>>>>>>> The last execute should result in 123, whereas it results in 456. >>> >>>>>>>> The >>> >>>>>>>> reason is that the search path has already been changed at the >>> >>>>>>>> datanode and >>> >>>>>>>> a replan would mean select from abc in s2. >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat >>> >>>>>>>> <ash...@en...> wrote: >>> >>>>>>>>> >>> >>>>>>>>> Hi Abbas, >>> >>>>>>>>> I think the fix is on the right track. There are couple of >>> >>>>>>>>> improvements that we need to do here (but you may not do those >>> >>>>>>>>> if the time >>> >>>>>>>>> doesn't permit). >>> >>>>>>>>> >>> >>>>>>>>> 1. We should have a status in RemoteQuery node, as to whether >>> >>>>>>>>> the >>> >>>>>>>>> query in the node should use extended protocol or not, rather >>> >>>>>>>>> than relying >>> >>>>>>>>> on the presence of statement name and parameters etc. Amit has >>> >>>>>>>>> already added >>> >>>>>>>>> a status with that effect. We need to leverage it. >>> >>>>>>>>> >>> >>>>>>>>> >>> >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt >>> >>>>>>>>> <abb...@en...> wrote: >>> >>>>>>>>>> >>> >>>>>>>>>> The patch fixes the dead code issue, that I described earlier. >>> >>>>>>>>>> The >>> >>>>>>>>>> code was dead because of two issues: >>> >>>>>>>>>> >>> >>>>>>>>>> 1. 
The function CompleteCachedPlan was wrongly setting >>> >>>>>>>>>> stmt_name to >>> >>>>>>>>>> NULL and this was the main reason >>> >>>>>>>>>> ActivateDatanodeStatementOnNode was not >>> >>>>>>>>>> being called in the function pgxc_start_command_on_connection. >>> >>>>>>>>>> 2. The function SetRemoteStatementName was wrongly assuming >>> >>>>>>>>>> that a >>> >>>>>>>>>> prepared statement must have some parameters. >>> >>>>>>>>>> >>> >>>>>>>>>> Fixing these two issues makes sure that the function >>> >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and statements >>> >>>>>>>>>> get prepared on >>> >>>>>>>>>> the datanode. >>> >>>>>>>>>> This patch would fix bug 3607975. It would however not fix the >>> >>>>>>>>>> test >>> >>>>>>>>>> case I described in my previous email because of reasons I >>> >>>>>>>>>> described. >>> >>>>>>>>>> >>> >>>>>>>>>> >>> >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat >>> >>>>>>>>>> <ash...@en...> wrote: >>> >>>>>>>>>>> >>> >>>>>>>>>>> Can you please explain what this fix does? It would help to >>> >>>>>>>>>>> have >>> >>>>>>>>>>> an elaborate explanation with code snippets. 
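The behavior the two fixes are meant to restore is prepare-once-per-connection reuse of named statements. A minimal sketch under that assumption, using a hypothetical connection wrapper (the real work happens in ActivateDatanodeStatementOnNode and friends):

```python
class DatanodeConn:
    def __init__(self):
        self.prepared = set()   # statement names already prepared here
        self.prepare_count = 0

    def ensure_prepared(self, stmt_name, sql):
        # Prepare a named statement at most once per (connection, name);
        # subsequent EXECUTEs reuse the datanode-side plan.
        if stmt_name not in self.prepared:
            self.prepare_count += 1       # would send: PREPARE p1 AS <sql>
            self.prepared.add(stmt_name)

conn = DatanodeConn()
for _ in range(3):                        # EXECUTE p1 three times
    conn.ensure_prepared("p1", "SELECT a, b FROM abc")
assert conn.prepare_count == 1            # prepared once, not three times
```

With stmt_name wiped to NULL, the `stmt_name not in self.prepared` check can never hit, which is exactly the dead-code symptom described above.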
>>> >>>>>>>>>>> >>> >>>>>>>>>>> >>> >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt >>> >>>>>>>>>>> <abb...@en...> wrote: >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat >>> >>>>>>>>>>>> <ash...@en...> wrote: >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt >>> >>>>>>>>>>>>> <abb...@en...> wrote: >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat >>> >>>>>>>>>>>>>> <ash...@en...> wrote: >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt >>> >>>>>>>>>>>>>>> <abb...@en...> wrote: >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Hi, >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> While working on test case plancache it was brought up as >>> >>>>>>>>>>>>>>>> a >>> >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve >>> >>>>>>>>>>>>>>>> the problem of the >>> >>>>>>>>>>>>>>>> test case. >>> >>>>>>>>>>>>>>>> However there is some confusion in the statement of bug >>> >>>>>>>>>>>>>>>> id >>> >>>>>>>>>>>>>>>> 3607975. >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple >>> >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing >>> >>>>>>>>>>>>>>>> the query on >>> >>>>>>>>>>>>>>>> datanode al times, as against preparing once and >>> >>>>>>>>>>>>>>>> executing multiple times. >>> >>>>>>>>>>>>>>>> This is because somehow the remote query is being >>> >>>>>>>>>>>>>>>> prepared as an unnamed >>> >>>>>>>>>>>>>>>> statement." >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Consider this test case >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> A. create table abc(a int, b int); >>> >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); >>> >>>>>>>>>>>>>>>> C. 
prepare p1 as select * from abc; >>> >>>>>>>>>>>>>>>> D. execute p1; >>> >>>>>>>>>>>>>>>> E. execute p1; >>> >>>>>>>>>>>>>>>> F. execute p1; >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Here are the confusions >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> 1. The coordinator never prepares on datanode in response >>> >>>>>>>>>>>>>>>> to >>> >>>>>>>>>>>>>>>> a prepare issued by a user. >>> >>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. >>> >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all >>> >>>>>>>>>>>>>>>> datanodes. >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build >>> >>>>>>>>>>>>>>>> a >>> >>>>>>>>>>>>>>>> new generic plan, >>> >>>>>>>>>>>>>>>> and steps E and F use the already built generic plan. >>> >>>>>>>>>>>>>>>> For details see function GetCachedPlan. >>> >>>>>>>>>>>>>>>> This means that executing a prepared statement again >>> >>>>>>>>>>>>>>>> and >>> >>>>>>>>>>>>>>>> again does use cached plans >>> >>>>>>>>>>>>>>>> and does not prepare again and again every time we >>> >>>>>>>>>>>>>>>> issue >>> >>>>>>>>>>>>>>>> an execute. >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> The problem is not here. The problem is in do_query() >>> >>>>>>>>>>>>>>> where >>> >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped out and >>> >>>>>>>>>>>>>>> we keep on >>> >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the >>> >>>>>>>>>>>>>> datanode. >>> >>>>>>>>>>>>>> I spent time looking at the code written in do_query and >>> >>>>>>>>>>>>>> functions called >>> >>>>>>>>>>>>>> from with in do_query to handle prepared statements but the >>> >>>>>>>>>>>>>> code written in >>> >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements >>> >>>>>>>>>>>>>> prepared on datanodes >>> >>>>>>>>>>>>>> is dead as of now. 
It is never called during the complete >>> >>>>>>>>>>>>>> regression run. >>> >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never >>> >>>>>>>>>>>>>> called. The way >>> >>>>>>>>>>>>>> prepared statements are being handled now is the same as I >>> >>>>>>>>>>>>>> described earlier >>> >>>>>>>>>>>>>> in the mail chain with the help of an example. >>> >>>>>>>>>>>>>> The code that is dead was originally added by Mason through >>> >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in >>> >>>>>>>>>>>>>> December 2010. This >>> >>>>>>>>>>>>>> code has been changed a lot over the last two years. This >>> >>>>>>>>>>>>>> commit does not >>> >>>>>>>>>>>>>> contain any test cases so I am not sure how did it use to >>> >>>>>>>>>>>>>> work back then. >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared statements. >>> >>>>>>>>>>>>> So, >>> >>>>>>>>>>>>> something has gone wrong in-between. That's what we need to >>> >>>>>>>>>>>>> find out and >>> >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not good >>> >>>>>>>>>>>>> for performance >>> >>>>>>>>>>>>> either. >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> I was able to find the reason why the code was dead and the >>> >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now ensure >>> >>>>>>>>>>>> that >>> >>>>>>>>>>>> statements are prepared on datanodes whenever required. >>> >>>>>>>>>>>> However there is a >>> >>>>>>>>>>>> problem in the way prepared statements are handled. The >>> >>>>>>>>>>>> problem is that >>> >>>>>>>>>>>> unless a prepared statement is executed it is never prepared >>> >>>>>>>>>>>> on datanodes, >>> >>>>>>>>>>>> hence changing the path before executing the statement gives >>> >>>>>>>>>>>> us incorrect >>> >>>>>>>>>>>> results. 
For Example >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by >>> >>>>>>>>>>>> replication; >>> >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by >>> >>>>>>>>>>>> replication; >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> insert into s1.abc values(123); >>> >>>>>>>>>>>> insert into s2.abc values(456); >>> >>>>>>>>>>>> set search_path = s2; >>> >>>>>>>>>>>> prepare p1 as select f1 from abc; >>> >>>>>>>>>>>> set search_path = s1; >>> >>>>>>>>>>>> execute p1; >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> The last execute results in 123, where as it should have >>> >>>>>>>>>>>> resulted >>> >>>>>>>>>>>> in 456. >>> >>>>>>>>>>>> I can finalize the attached patch by fixing any regression >>> >>>>>>>>>>>> issues >>> >>>>>>>>>>>> that may result and that would fix 3607975 and improve >>> >>>>>>>>>>>> performance however >>> >>>>>>>>>>>> the above test case would still fail. >>> >>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not >>> >>>>>>>>>>>>>>>> reproducible. >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> Did you verify it under the debugger? If that would have >>> >>>>>>>>>>>>>>> been >>> >>>>>>>>>>>>>>> the case, we would not have seen this problem if >>> >>>>>>>>>>>>>>> search_path changed in >>> >>>>>>>>>>>>>>> between steps D and E. >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> If search path is changed between steps D & E, the problem >>> >>>>>>>>>>>>>> occurs because when the remote query node is created, >>> >>>>>>>>>>>>>> schema qualification >>> >>>>>>>>>>>>>> is not added in the sql statement to be sent to the >>> >>>>>>>>>>>>>> datanode, but changes in >>> >>>>>>>>>>>>>> search path do get communicated to the datanode. 
The sql >>> >>>>>>>>>>>>>> statement is built >>> >>>>>>>>>>>>>> when execute is issued for the first time and is reused on >>> >>>>>>>>>>>>>> subsequent >>> >>>>>>>>>>>>>> executes. The datanode is totally unaware that the select >>> >>>>>>>>>>>>>> that it just >>> >>>>>>>>>>>>>> received is due to an execute of a prepared statement that >>> >>>>>>>>>>>>>> was prepared when >>> >>>>>>>>>>>>>> search path was some thing else. >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, would >>> >>>>>>>>>>>>> fix >>> >>>>>>>>>>>>> the problem, since the statement will get prepared at the >>> >>>>>>>>>>>>> datanode, with the >>> >>>>>>>>>>>>> same search path settings, as it would on the coordinator. >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Comments are welcome. >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> -- >>> >>>>>>>>>>>>>>>> Abbas >>> >>>>>>>>>>>>>>>> Architect >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Ph: 92.334.5100153 >>> >>>>>>>>>>>>>>>> Skype ID: gabbasb >>> >>>>>>>>>>>>>>>> www.enterprisedb.com >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Follow us on Twitter >>> >>>>>>>>>>>>>>>> @EnterpriseDB >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers >>> >>>>>>>>>>>>>>>> and >>> >>>>>>>>>>>>>>>> more >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>> >>>>>>>>>>>>>>>> Try New Relic Now & We'll Send You this Cool Shirt >>> >>>>>>>>>>>>>>>> New Relic is the only SaaS-based application performance >>> >>>>>>>>>>>>>>>> monitoring service >>> >>>>>>>>>>>>>>>> that delivers powerful full stack analytics. Optimize and >>> >>>>>>>>>>>>>>>> monitor your >>> >>>>>>>>>>>>>>>> browser, app, & servers with just a few lines of code. 
>>> >>>>>>>>>>>>>>>> Try >>> >>>>>>>>>>>>>>>> New Relic >>> >>>>>>>>>>>>>>>> and get this awesome Nerd Life shirt! >>> >>>>>>>>>>>>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>> >>>>>>>>>>>>>>>> _______________________________________________ >>> >>>>>>>>>>>>>>>> Postgres-xc-developers mailing list >>> >>>>>>>>>>>>>>>> Pos...@li... >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> -- >>> >>>>>>>>>>>>>>> Best Wishes, >>> >>>>>>>>>>>>>>> Ashutosh Bapat >>> >>>>>>>>>>>>>>> EntepriseDB Corporation >>> >>>>>>>>>>>>>>> The Postgres Database Company >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> -- >>> >>>>>>>>>>>>>> -- >>> >>>>>>>>>>>>>> Abbas >>> >>>>>>>>>>>>>> Architect >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> Ph: 92.334.5100153 >>> >>>>>>>>>>>>>> Skype ID: gabbasb >>> >>>>>>>>>>>>>> www.enterprisedb.com >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> Follow us on Twitter >>> >>>>>>>>>>>>>> @EnterpriseDB >>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and >>> >>>>>>>>>>>>>> more >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> -- >>> >>>>>>>>>>>>> Best Wishes, >>> >>>>>>>>>>>>> Ashutosh Bapat >>> >>>>>>>>>>>>> EntepriseDB Corporation >>> >>>>>>>>>>>>> The Postgres Database Company >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> -- >>> >>>>>>>>>>>> -- >>> >>>>>>>>>>>> Abbas >>> >>>>>>>>>>>> Architect >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> Ph: 92.334.5100153 >>> >>>>>>>>>>>> Skype ID: gabbasb >>> >>>>>>>>>>>> www.enterprisedb.com >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> Follow us on Twitter >>> >>>>>>>>>>>> @EnterpriseDB >>> >>>>>>>>>>>> >>> >>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and >>> >>>>>>>>>>>> more >>> 
Postgres Database Company |
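[Editor's note] The failure mode discussed in the message above — the remote SQL string is deparsed once without schema qualification at the first EXECUTE and then reused after search_path changes — can be simulated with a short Python sketch. All names here are illustrative; none of this is Postgres-XC code.

```python
# Toy model of the coordinator/datanode exchange described in the thread.
# A "prepared" remote query that stores only the bare table name is
# re-resolved under whatever search_path is current at execute time;
# a schema-qualified one is immune to SET search_path.

datanode = {("s1", "abc"): 123, ("s2", "abc"): 456}  # f1 values per schema
search_path = ["s2"]

def resolve(table):
    """Datanode-side lookup using the current search_path."""
    for schema in search_path:
        if (schema, table) in datanode:
            return datanode[(schema, table)]
    raise LookupError(table)

class RemoteQuery:
    def __init__(self, table, qualify):
        # Deparsed once, at first execute, as described in the thread.
        self.schema = search_path[0] if qualify else None
        self.table = table

    def execute(self):
        if self.schema is not None:
            return datanode[(self.schema, self.table)]
        return resolve(self.table)

p_bad = RemoteQuery("abc", qualify=False)   # behaviour reported in the thread
p_good = RemoteQuery("abc", qualify=True)   # schema-qualified deparse

search_path = ["s1"]        # SET search_path = s1 after PREPARE
print(p_bad.execute())      # 123 -- wrong: re-resolved under the new path
print(p_good.execute())     # 456 -- still the row the user prepared against
```

This mirrors the s1/s2 example quoted above: the unqualified statement silently follows the session's new search_path, which is why EXECUTE returns 123 instead of 456.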
From: Amit K. <ami...@en...> - 2013-06-26 05:05:02
|
On 26 June 2013 08:56, Ashutosh Bapat <ash...@en...> wrote: > Hi Amit, > From a cursory look, this looks much more cleaner than Abbas's patch. So, it > looks to be the approach we should take. BTW, you need to update the > original outputs as well, instead of just the alternate expected outputs. > Remember, the merge applies changes to the original expected outputs and not > alternate ones (added in XC), thus we come to know about conflicts only when > we apply changes to original expected outputs. Right. Will look into it. > > Regards to temporary namespaces, is it possible to schema-qualify the > temporary namespaces as always pg_temp irrespective of the actual name? get_namespace_name() is used to deparse the schema name. In order to keep the deparsing logic working for both local queries (i.e. for view for e.g.) and remote queries, we need to push in the context that the deparsing is being done for remote queries, and this needs to be done all the way from the uppermost function (say pg_get_querydef) upto get_namespace_name() which does not look good. We need to think about some solution in general for the existing issue of deparsing temp table names. Not sure of any solution. May be, after we resolve the bug id in subject, the fix may even not require the schema qualification. > > > On Tue, Jun 25, 2013 at 8:02 PM, Amit Khandekar > <ami...@en...> wrote: >> >> On 25 June 2013 19:59, Amit Khandekar <ami...@en...> >> wrote: >> > Attached is a patch that does schema qualification by overriding the >> > search_path just before doing the deparse in deparse_query(). >> > PopOVerrideSearchPath() in the end pops out the temp path. Also, the >> > transaction callback function already takes care of popping out such >> > Push in case of transaction rollback. >> > >> > Unfortunately we cannot apply this solution to temp tables. The >> > problem is, when a pg_temp schema is deparsed, it is deparsed into >> > pg_temp_1, pg_temp_2 etc. 
and these names are specific to the node. An >> > object in pg_temp_2 at coordinator may be present in pg_temp_1 at >> > datanode. So the remote query generated may or may not work on >> > datanode, so totally unreliable. >> > >> > In fact, the issue with pg_temp_1 names in the deparsed remote query >> > is present even currently. >> > >> > But wherever it is a correctness issue to *not* schema qualify temp >> > object, I have kept the schema qualification. >> > For e.g. user can set search_path to" s1, pg_temp", >> > and have obj1 in both s1 and pg_temp, and want to refer to pg_temp.s1. >> > In such case, the remote query should have pg_temp_[1-9].obj1, >> > although it may cause errors because of the existing issue. >> > >> --- >> > So, the prepare-execute with search_path would remain there for temp >> > tables. >> I mean, the prepare-exute issue with search_patch would remain there >> for temp tables. >> >> > >> > I tried to run the regression by extracting regression expected output >> > files from Abbas's patch , and regression passes, including plancache. >> > >> > I think for this release, we should go ahead by keeping this issue >> > open for temp tables. This solution is an improvement, and does not >> > cause any new issues. >> > >> > Comments welcome. >> > >> > >> > On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> >> > wrote: >> >> Hi Abbas, >> >> We are changing a lot of PostgreSQL deparsing code, which would create >> >> problems in the future merges. Since this change is in query deparsing >> >> logic >> >> any errors here would affect, EXPLAIN/ pg_dump etc. So, this patch >> >> should >> >> again be the last resort. >> >> >> >> Please take a look at how view definitions are dumped. That will give a >> >> good >> >> idea as to how PG schema-qualifies (or not) objects. Here's how view >> >> definitions displayed changes with search path. 
Since the code to dump >> >> views >> >> and display definitions is same, the view definition dumped also >> >> changes >> >> with the search path. Thus pg_dump must be using some trick to always >> >> dump a >> >> consistent view definition (and hence a deparsed query). Thanks Amit >> >> for the >> >> example. >> >> >> >> create table ttt (id int); >> >> >> >> postgres=# create domain dd int; >> >> CREATE DOMAIN >> >> postgres=# create view v2 as select id::dd from ttt; >> >> CREATE VIEW >> >> postgres=# set search_path TO ''; >> >> SET >> >> >> >> postgres=# \d+ public.v2 >> >> View "public.v2" >> >> Column | Type | Modifiers | Storage | Description >> >> --------+-----------+--------- >> >> --+---------+------------- >> >> id | public.dd | | plain | >> >> View definition: >> >> SELECT ttt.id::public.dd AS id >> >> FROM public.ttt; >> >> >> >> postgres=# set search_path TO default ; >> >> SET >> >> postgres=# show search_path ; >> >> search_path >> >> ---------------- >> >> "$user",public >> >> (1 row) >> >> >> >> postgres=# \d+ public.v2 >> >> View "public.v2" >> >> Column | Type | Modifiers | Storage | Description >> >> --------+------+-----------+---------+------------- >> >> id | dd | | plain | >> >> View definition: >> >> SELECT ttt.id::dd AS id >> >> FROM ttt; >> >> >> >> We need to leverage similar mechanism here to reduce PG footprint. >> >> >> >> >> >> On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt >> >> <abb...@en...> >> >> wrote: >> >>> >> >>> Hi, >> >>> As discussed in the last F2F meeting, here is an updated patch that >> >>> provides schema qualification of the following objects: Tables, Views, >> >>> Functions, Types and Domains in case of remote queries. >> >>> Sequence functions are never concerned with datanodes hence, schema >> >>> qualification is not required in case of sequences. >> >>> This solves plancache test case failure issue and does not introduce >> >>> any >> >>> more failures. 
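[Editor's note] The pg_dump trick referred to above can be mimicked with a toy deparser: with an empty search_path nothing is visible unqualified, so every name comes out schema-qualified. This is an illustration of the rule the thread describes, not the actual PostgreSQL ruleutils code.

```python
def deparse_name(schema, name, search_path, catalog):
    """Print `name` unqualified only if the current search_path would
    resolve it to the same (schema, name); otherwise qualify it.
    Illustrative sketch only."""
    for s in search_path:
        if (s, name) in catalog:
            return name if s == schema else f"{schema}.{name}"
    return f"{schema}.{name}"        # not visible at all: must qualify

catalog = {("public", "ttt"), ("s1", "ttt")}
print(deparse_name("public", "ttt", ["public"], catalog))  # ttt
print(deparse_name("public", "ttt", [], catalog))          # public.ttt
print(deparse_name("s1", "ttt", ["public"], catalog))      # s1.ttt
```

Emptying the search_path before deparsing therefore yields a remote query string that stays stable no matter what search_path the datanode session happens to have.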
>> >>> I have also attached some tests with results to aid in review. >> >>> >> >>> Comments are welcome. >> >>> >> >>> Regards >> >>> >> >>> >> >>> >> >>> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt >> >>> <abb...@en...> >> >>> wrote: >> >>>> >> >>>> Hi, >> >>>> Attached please find a WIP patch that provides the functionality of >> >>>> preparing the statement at the datanodes as soon as it is prepared >> >>>> on the coordinator. >> >>>> This is to take care of a test case in plancache that makes sure that >> >>>> change of search_path is ignored by replans. >> >>>> While the patch fixes this replan test case and the regression works >> >>>> fine >> >>>> there are still these two problems I have to take care of. >> >>>> >> >>>> 1. This test case fails >> >>>> >> >>>> CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE >> >>>> BY >> >>>> HASH(a); >> >>>> INSERT INTO xc_alter_table_3 VALUES (1, 'a'); >> >>>> PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails >> >>>> >> >>>> test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1; >> >>>> QUERY PLAN >> >>>> >> >>>> ------------------------------------------------------------------- >> >>>> Delete on public.xc_alter_table_3 (cost=0.00..0.00 rows=1000 >> >>>> width=14) >> >>>> Node/s: data_node_1, data_node_2, data_node_3, data_node_4 >> >>>> Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE >> >>>> ((xc_alter_table_3.ctid = $1) AND >> >>>> (xc_alter_table_3.xc_node_id = $2)) >> >>>> -> Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_" >> >>>> (cost=0.00..0.00 rows=1000 width=14) >> >>>> Output: xc_alter_table_3.a, xc_alter_table_3.ctid, >> >>>> xc_alter_table_3.xc_node_id >> >>>> Node/s: data_node_3 >> >>>> Remote query: SELECT a, ctid, xc_node_id FROM ONLY >> >>>> xc_alter_table_3 WHERE (a = 1) >> >>>> (7 rows) >> >>>> >> >>>> The reason of the failure is that the select query is selecting 3 >> >>>> items, the first of which is an int, >> >>>> whereas the delete query is 
comparing $1 with a ctid. >> >>>> I am not sure how this works without prepare, but it fails when >> >>>> used >> >>>> with prepare. >> >>>> >> >>>> The reason of this planning is this section of code in function >> >>>> pgxc_build_dml_statement >> >>>> else if (cmdtype == CMD_DELETE) >> >>>> { >> >>>> /* >> >>>> * Since there is no data to update, the first param is going >> >>>> to >> >>>> be >> >>>> * ctid. >> >>>> */ >> >>>> ctid_param_num = 1; >> >>>> } >> >>>> >> >>>> Amit/Ashutosh can you suggest a fix for this problem? >> >>>> There are a number of possibilities. >> >>>> a) The select should not have selected column a. >> >>>> b) The DELETE should have referred to $2 and $3 for ctid and >> >>>> xc_node_id respectively. >> >>>> c) Since the query works without PREPARE, we should make PREPARE >> >>>> work >> >>>> the same way. >> >>>> >> >>>> >> >>>> 2. This test case in plancache fails. >> >>>> >> >>>> -- Try it with a view, which isn't directly used in the resulting >> >>>> plan >> >>>> -- but should trigger invalidation anyway >> >>>> create table tab33 (a int, b int); >> >>>> insert into tab33 values(1,2); >> >>>> CREATE VIEW v_tab33 AS SELECT * FROM tab33; >> >>>> PREPARE vprep AS SELECT * FROM v_tab33; >> >>>> EXECUTE vprep; >> >>>> CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33; >> >>>> -- does not cause plan invalidation because views are never >> >>>> created >> >>>> on datanodes >> >>>> EXECUTE vprep; >> >>>> >> >>>> and the reason of the failure is that views are never created on >> >>>> the >> >>>> datanodes hence plan invalidation is not triggered. >> >>>> This can be documented as an XC limitation. >> >>>> >> >>>> 3. I still have to add comments in the patch and some ifdefs may be >> >>>> missing too. >> >>>> >> >>>> >> >>>> In addition to the patch I have also attached some example Java >> >>>> programs >> >>>> that test the some basic functionality through JDBC. 
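[Editor's note] Option (b) above amounts to numbering the internally generated ctid/xc_node_id parameters after any user-supplied $1..$n, instead of hard-coding ctid as $1. A hypothetical helper (not the actual pgxc_build_dml_statement logic) makes the renumbering concrete:

```python
def build_remote_delete(table, n_user_params):
    """Generate the ctid-based remote DELETE, placing the two internal
    parameters after the user's $1..$n. Hypothetical sketch of option (b)."""
    ctid, node = n_user_params + 1, n_user_params + 2
    return (f"DELETE FROM ONLY {table} WHERE "
            f"(({table}.ctid = ${ctid}) AND ({table}.xc_node_id = ${node}))")

# Plain DELETE: no user parameters, so ctid stays $1 as today.
print(build_remote_delete("xc_alter_table_3", 0))
# PREPARE d3 AS DELETE ... WHERE a = $1: ctid must shift to $2, node id to $3,
# so the user's int $1 is no longer compared against a ctid.
print(build_remote_delete("xc_alter_table_3", 1))
```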
I found that >> >>>> these >> >>>> programs are working fine after my patch. >> >>>> >> >>>> 1. Prepared.java : Issues parameterized delete, insert and update >> >>>> through >> >>>> JDBC. These are un-named prepared statements and works fine. >> >>>> 2. NamedPrepared.java : Issues two named prepared statements through >> >>>> JDBC >> >>>> and works fine. >> >>>> 3. Retrieve.java : Runs a simple select to verify results. >> >>>> The comments on top of the files explain their usage. >> >>>> >> >>>> Comments are welcome. >> >>>> >> >>>> Thanks >> >>>> Regards >> >>>> >> >>>> >> >>>> >> >>>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat >> >>>> <ash...@en...> wrote: >> >>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt >> >>>>> <abb...@en...> wrote: >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat >> >>>>>> <ash...@en...> wrote: >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt >> >>>>>>> <abb...@en...> wrote: >> >>>>>>>> >> >>>>>>>> Attached please find updated patch to fix the bug. The patch >> >>>>>>>> takes >> >>>>>>>> care of the bug and the regression issues resulting from the >> >>>>>>>> changes done in >> >>>>>>>> the patch. 
Please note that the issue in test case plancache >> >>>>>>>> still stands >> >>>>>>>> unsolved because of the following test case (simplified but taken >> >>>>>>>> from >> >>>>>>>> plancache.sql) >> >>>>>>>> >> >>>>>>>> create schema s1 create table abc (f1 int); >> >>>>>>>> create schema s2 create table abc (f1 int); >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> insert into s1.abc values(123); >> >>>>>>>> insert into s2.abc values(456); >> >>>>>>>> >> >>>>>>>> set search_path = s1; >> >>>>>>>> >> >>>>>>>> prepare p1 as select f1 from abc; >> >>>>>>>> execute p1; -- works fine, results in 123 >> >>>>>>>> >> >>>>>>>> set search_path = s2; >> >>>>>>>> execute p1; -- works fine after the patch, results in 123 >> >>>>>>>> >> >>>>>>>> alter table s1.abc add column f2 float8; -- force replan >> >>>>>>>> execute p1; -- fails >> >>>>>>>> >> >>>>>>> >> >>>>>>> Huh! The beast bit us. >> >>>>>>> >> >>>>>>> I think the right solution here is either of two >> >>>>>>> 1. Take your previous patch to always use qualified names (but you >> >>>>>>> need to improve it not to affect the view dumps) >> >>>>>>> 2. Prepare the statements at the datanode at the time of prepare. >> >>>>>>> >> >>>>>>> >> >>>>>>> Is this test added new in 9.2? >> >>>>>> >> >>>>>> >> >>>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c >> >>>>>> in >> >>>>>> March 2007. >> >>>>>> >> >>>>>>> >> >>>>>>> Why didn't we see this issue the first time prepare was >> >>>>>>> implemented? I >> >>>>>>> don't remember (but it was two years back). >> >>>>>> >> >>>>>> >> >>>>>> I was unable to locate the exact reason but since statements were >> >>>>>> not >> >>>>>> being prepared on datanodes due to a merge issue this issue just >> >>>>>> surfaced >> >>>>>> up. 
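[Editor's note] Ashutosh's suggested fix (2) above — prepare on the datanode at PREPARE time — avoids the stale-path problem because name resolution then happens once, under the search_path in force at PREPARE. A toy model of that behaviour (illustrative only, not XC code):

```python
tables = {("s1", "abc"): [123], ("s2", "abc"): [456]}

class DatanodeSession:
    """Resolves names at PREPARE time; a later SET search_path is harmless."""
    def __init__(self, search_path):
        self.search_path = list(search_path)
        self.plans = {}

    def prepare(self, name, table):
        for schema in self.search_path:
            if (schema, table) in tables:
                self.plans[name] = (schema, table)   # bound now, not at EXECUTE
                return
        raise LookupError(table)

    def execute(self, name):
        return tables[self.plans[name]]

dn = DatanodeSession(["s1"])
dn.prepare("p1", "abc")          # bound to s1.abc while search_path = s1
dn.search_path = ["s2"]          # SET search_path = s2 -- too late to matter
print(dn.execute("p1"))          # [123]
```

What this does not model is the replan forced by ALTER TABLE in the test case above: a replan re-resolves names under the new path, which is exactly the remaining hole the thread identifies.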
>> >>>>>> >> >>>>> >> >>>>> >> >>>>> Well, even though statements were not getting prepared (actually >> >>>>> prepared statements were not being used again and again) on >> >>>>> datanodes, we >> >>>>> never prepared them on datanode at the time of preparing the >> >>>>> statement. So, >> >>>>> this bug should have shown itself long back. >> >>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>>> >> >>>>>>>> The last execute should result in 123, whereas it results in 456. >> >>>>>>>> The >> >>>>>>>> reason is that the search path has already been changed at the >> >>>>>>>> datanode and >> >>>>>>>> a replan would mean select from abc in s2. >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat >> >>>>>>>> <ash...@en...> wrote: >> >>>>>>>>> >> >>>>>>>>> Hi Abbas, >> >>>>>>>>> I think the fix is on the right track. There are couple of >> >>>>>>>>> improvements that we need to do here (but you may not do those >> >>>>>>>>> if the time >> >>>>>>>>> doesn't permit). >> >>>>>>>>> >> >>>>>>>>> 1. We should have a status in RemoteQuery node, as to whether >> >>>>>>>>> the >> >>>>>>>>> query in the node should use extended protocol or not, rather >> >>>>>>>>> than relying >> >>>>>>>>> on the presence of statement name and parameters etc. Amit has >> >>>>>>>>> already added >> >>>>>>>>> a status with that effect. We need to leverage it. >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt >> >>>>>>>>> <abb...@en...> wrote: >> >>>>>>>>>> >> >>>>>>>>>> The patch fixes the dead code issue, that I described earlier. >> >>>>>>>>>> The >> >>>>>>>>>> code was dead because of two issues: >> >>>>>>>>>> >> >>>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting >> >>>>>>>>>> stmt_name to >> >>>>>>>>>> NULL and this was the main reason >> >>>>>>>>>> ActivateDatanodeStatementOnNode was not >> >>>>>>>>>> being called in the function pgxc_start_command_on_connection. >> >>>>>>>>>> 2. 
The function SetRemoteStatementName was wrongly assuming >> >>>>>>>>>> that a >> >>>>>>>>>> prepared statement must have some parameters. >> >>>>>>>>>> >> >>>>>>>>>> Fixing these two issues makes sure that the function >> >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and statements >> >>>>>>>>>> get prepared on >> >>>>>>>>>> the datanode. >> >>>>>>>>>> This patch would fix bug 3607975. It would however not fix the >> >>>>>>>>>> test >> >>>>>>>>>> case I described in my previous email because of reasons I >> >>>>>>>>>> described. >> >>>>>>>>>> >> >>>>>>>>>> >> >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat >> >>>>>>>>>> <ash...@en...> wrote: >> >>>>>>>>>>> >> >>>>>>>>>>> Can you please explain what this fix does? It would help to >> >>>>>>>>>>> have >> >>>>>>>>>>> an elaborate explanation with code snippets. >> >>>>>>>>>>> >> >>>>>>>>>>> >> >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt >> >>>>>>>>>>> <abb...@en...> wrote: >> >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat >> >>>>>>>>>>>> <ash...@en...> wrote: >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt >> >>>>>>>>>>>>> <abb...@en...> wrote: >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat >> >>>>>>>>>>>>>> <ash...@en...> wrote: >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt >> >>>>>>>>>>>>>>> <abb...@en...> wrote: >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> Hi, >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> While working on test case plancache it was brought up as >> >>>>>>>>>>>>>>>> a >> >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve >> >>>>>>>>>>>>>>>> the problem of the >> >>>>>>>>>>>>>>>> test case. 
>> >>>>>>>>>>>>>>>> However there is some confusion in the statement of bug >> >>>>>>>>>>>>>>>> id >> >>>>>>>>>>>>>>>> 3607975. >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple >> >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing >> >>>>>>>>>>>>>>>> the query on >> >>>>>>>>>>>>>>>> datanode al times, as against preparing once and >> >>>>>>>>>>>>>>>> executing multiple times. >> >>>>>>>>>>>>>>>> This is because somehow the remote query is being >> >>>>>>>>>>>>>>>> prepared as an unnamed >> >>>>>>>>>>>>>>>> statement." >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> Consider this test case >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> A. create table abc(a int, b int); >> >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); >> >>>>>>>>>>>>>>>> C. prepare p1 as select * from abc; >> >>>>>>>>>>>>>>>> D. execute p1; >> >>>>>>>>>>>>>>>> E. execute p1; >> >>>>>>>>>>>>>>>> F. execute p1; >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> Here are the confusions >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> 1. The coordinator never prepares on datanode in response >> >>>>>>>>>>>>>>>> to >> >>>>>>>>>>>>>>>> a prepare issued by a user. >> >>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. >> >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all >> >>>>>>>>>>>>>>>> datanodes. >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build >> >>>>>>>>>>>>>>>> a >> >>>>>>>>>>>>>>>> new generic plan, >> >>>>>>>>>>>>>>>> and steps E and F use the already built generic plan. >> >>>>>>>>>>>>>>>> For details see function GetCachedPlan. >> >>>>>>>>>>>>>>>> This means that executing a prepared statement again >> >>>>>>>>>>>>>>>> and >> >>>>>>>>>>>>>>>> again does use cached plans >> >>>>>>>>>>>>>>>> and does not prepare again and again every time we >> >>>>>>>>>>>>>>>> issue >> >>>>>>>>>>>>>>>> an execute. >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> The problem is not here. 
The problem is in do_query() >> >>>>>>>>>>>>>>> where >> >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped out and >> >>>>>>>>>>>>>>> we keep on >> >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the >> >>>>>>>>>>>>>> datanode. >> >>>>>>>>>>>>>> I spent time looking at the code written in do_query and >> >>>>>>>>>>>>>> functions called >> >>>>>>>>>>>>>> from with in do_query to handle prepared statements but the >> >>>>>>>>>>>>>> code written in >> >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements >> >>>>>>>>>>>>>> prepared on datanodes >> >>>>>>>>>>>>>> is dead as of now. It is never called during the complete >> >>>>>>>>>>>>>> regression run. >> >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never >> >>>>>>>>>>>>>> called. The way >> >>>>>>>>>>>>>> prepared statements are being handled now is the same as I >> >>>>>>>>>>>>>> described earlier >> >>>>>>>>>>>>>> in the mail chain with the help of an example. >> >>>>>>>>>>>>>> The code that is dead was originally added by Mason through >> >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in >> >>>>>>>>>>>>>> December 2010. This >> >>>>>>>>>>>>>> code has been changed a lot over the last two years. This >> >>>>>>>>>>>>>> commit does not >> >>>>>>>>>>>>>> contain any test cases so I am not sure how did it use to >> >>>>>>>>>>>>>> work back then. >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared statements. >> >>>>>>>>>>>>> So, >> >>>>>>>>>>>>> something has gone wrong in-between. That's what we need to >> >>>>>>>>>>>>> find out and >> >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not good >> >>>>>>>>>>>>> for performance >> >>>>>>>>>>>>> either. 
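[Editor's note] Point 2 of the quoted test-case analysis — the generic plan is built at the first EXECUTE and reused by later ones (see GetCachedPlan) — can be sketched as follows. This is simplified: the real plancache also tracks invalidation and custom plans.

```python
class CachedPlanSource:
    """Minimal model of the PREPARE/EXECUTE caching behaviour described."""
    def __init__(self, query):
        self.query = query
        self.generic_plan = None
        self.builds = 0            # how many times we actually planned

    def get_cached_plan(self):
        if self.generic_plan is None:      # first EXECUTE: build generic plan
            self.builds += 1
            self.generic_plan = ("PLAN", self.query)
        return self.generic_plan           # later EXECUTEs: reuse it

src = CachedPlanSource("SELECT a, b FROM abc")   # PREPARE builds no plan here
for _ in range(3):                               # EXECUTE p1; three times
    src.get_cached_plan()
print(src.builds)   # 1 -- planned once, not once per EXECUTE
```

This is why, as the message argues, repeated EXECUTEs on the coordinator do not replan every time; the dispute in the thread is only about what gets (or fails to get) prepared on the datanodes.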
>> >>>>>>>>>>>> >> >>>>>>>>>>>> >> >>>>>>>>>>>> I was able to find the reason why the code was dead and the >> >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now ensure >> >>>>>>>>>>>> that >> >>>>>>>>>>>> statements are prepared on datanodes whenever required. >> >>>>>>>>>>>> However there is a >> >>>>>>>>>>>> problem in the way prepared statements are handled. The >> >>>>>>>>>>>> problem is that >> >>>>>>>>>>>> unless a prepared statement is executed it is never prepared >> >>>>>>>>>>>> on datanodes, >> >>>>>>>>>>>> hence changing the path before executing the statement gives >> >>>>>>>>>>>> us incorrect >> >>>>>>>>>>>> results. For Example >> >>>>>>>>>>>> >> >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by >> >>>>>>>>>>>> replication; >> >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by >> >>>>>>>>>>>> replication; >> >>>>>>>>>>>> >> >>>>>>>>>>>> insert into s1.abc values(123); >> >>>>>>>>>>>> insert into s2.abc values(456); >> >>>>>>>>>>>> set search_path = s2; >> >>>>>>>>>>>> prepare p1 as select f1 from abc; >> >>>>>>>>>>>> set search_path = s1; >> >>>>>>>>>>>> execute p1; >> >>>>>>>>>>>> >> >>>>>>>>>>>> The last execute results in 123, where as it should have >> >>>>>>>>>>>> resulted >> >>>>>>>>>>>> in 456. >> >>>>>>>>>>>> I can finalize the attached patch by fixing any regression >> >>>>>>>>>>>> issues >> >>>>>>>>>>>> that may result and that would fix 3607975 and improve >> >>>>>>>>>>>> performance however >> >>>>>>>>>>>> the above test case would still fail. >> >>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not >> >>>>>>>>>>>>>>>> reproducible. >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> Did you verify it under the debugger? 
If that would have >> >>>>>>>>>>>>>>> been >> >>>>>>>>>>>>>>> the case, we would not have seen this problem if >> >>>>>>>>>>>>>>> search_path changed in >> >>>>>>>>>>>>>>> between steps D and E. >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> If search path is changed between steps D & E, the problem >> >>>>>>>>>>>>>> occurs because when the remote query node is created, >> >>>>>>>>>>>>>> schema qualification >> >>>>>>>>>>>>>> is not added in the sql statement to be sent to the >> >>>>>>>>>>>>>> datanode, but changes in >> >>>>>>>>>>>>>> search path do get communicated to the datanode. The sql >> >>>>>>>>>>>>>> statement is built >> >>>>>>>>>>>>>> when execute is issued for the first time and is reused on >> >>>>>>>>>>>>>> subsequent >> >>>>>>>>>>>>>> executes. The datanode is totally unaware that the select >> >>>>>>>>>>>>>> that it just >> >>>>>>>>>>>>>> received is due to an execute of a prepared statement that >> >>>>>>>>>>>>>> was prepared when >> >>>>>>>>>>>>>> search path was some thing else. >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>> >> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, would >> >>>>>>>>>>>>> fix >> >>>>>>>>>>>>> the problem, since the statement will get prepared at the >> >>>>>>>>>>>>> datanode, with the >> >>>>>>>>>>>>> same search path settings, as it would on the coordinator. >> >>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> Comments are welcome. 
>
> --
> Abbas
> Architect
>
> Ph: 92.334.5100153
> Skype ID: gabbasb
> www.enterprisedb.com
>
> Follow us on Twitter
> @EnterpriseDB
>
> Visit EnterpriseDB for tutorials, webinars, whitepapers and more
>
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Postgres Database Company |
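[Editor's note] The staleness problem in the quoted example above can be modeled outside the database. Below is a toy Python model (all names hypothetical; this is not Postgres-XC code) of how a datanode resolves an unqualified table reference against its current search_path, showing why a statement deparsed without schema qualification returns rows from the wrong schema once the path changes, while a qualified one does not:

```python
# Toy model of datanode name resolution. Illustrative only; this is
# not Postgres-XC source code.

class Datanode:
    def __init__(self, data):
        self.data = data            # {schema: {table: rows}}
        self.search_path = []

    def select(self, table, schema=None):
        if schema is not None:      # schema-qualified reference
            return self.data[schema][table]
        # Unqualified reference: resolved against the CURRENT path,
        # not the path in effect when the statement was deparsed.
        for s in self.search_path:
            if table in self.data.get(s, {}):
                return self.data[s][table]
        raise LookupError(table)

node = Datanode({"s1": {"abc": [123]}, "s2": {"abc": [456]}})

node.search_path = ["s2"]
stale = ("abc", None)               # deparsed WITHOUT qualification

node.search_path = ["s1"]           # SET search_path = s1 before EXECUTE
assert node.select(*stale) == [123]       # wrong row: prepared under s2

qualified = ("abc", "s2")           # deparsed WITH qualification
assert node.select(*qualified) == [456]   # correct regardless of path
```

The fixes discussed in the thread (preparing the statement on the datanode under the same search_path, or schema-qualifying the deparsed query) both correspond to the qualified case.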
From: Ashutosh B. <ash...@en...> - 2013-06-26 03:26:56
|
Hi Amit,

From a cursory look, this looks much cleaner than Abbas's patch, so it
looks to be the approach we should take. BTW, you need to update the
original expected outputs as well, instead of just the alternate
expected outputs. Remember, the merge applies changes to the original
expected outputs and not the alternate ones (added in XC), so we only
come to know about conflicts when we apply changes to the original
expected outputs.

Regarding temporary namespaces, is it possible to schema-qualify the
temporary namespaces as always pg_temp, irrespective of the actual
name?

On Tue, Jun 25, 2013 at 8:02 PM, Amit Khandekar <
ami...@en...> wrote:

> On 25 June 2013 19:59, Amit Khandekar <ami...@en...>
> wrote:
> > Attached is a patch that does schema qualification by overriding the
> > search_path just before doing the deparse in deparse_query().
> > PopOverrideSearchPath() at the end pops out the temp path. Also, the
> > transaction callback function already takes care of popping out such
> > a push in case of transaction rollback.
> >
> > Unfortunately we cannot apply this solution to temp tables. The
> > problem is, when a pg_temp schema is deparsed, it is deparsed into
> > pg_temp_1, pg_temp_2 etc., and these names are specific to the node.
> > An object in pg_temp_2 at the coordinator may be present in
> > pg_temp_1 at the datanode, so the remote query generated may or may
> > not work on the datanode; it is totally unreliable.
> >
> > In fact, the issue with pg_temp_1 names in the deparsed remote query
> > is present even currently.
> >
> > But wherever it is a correctness issue to *not* schema-qualify a
> > temp object, I have kept the schema qualification.
> > For example, a user can set search_path to "s1, pg_temp",
> > have obj1 in both s1 and pg_temp, and want to refer to pg_temp.obj1.
> > In such a case, the remote query should have pg_temp_[1-9].obj1,
> > although it may cause errors because of the existing issue.
> > ---
> > So, the prepare-execute with search_path would remain there for temp
> > tables.
> I mean, the prepare-execute issue with search_path would remain there
> for temp tables.
> >
> > I tried to run the regression by extracting regression expected
> > output files from Abbas's patch, and regression passes, including
> > plancache.
> >
> > I think for this release, we should go ahead, keeping this issue
> > open for temp tables. This solution is an improvement, and does not
> > cause any new issues.
> >
> > Comments welcome.
> >
> >
> > On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> wrote:
> >> Hi Abbas,
> >> We are changing a lot of PostgreSQL deparsing code, which would
> >> create problems in future merges. Since this change is in the query
> >> deparsing logic, any errors here would affect EXPLAIN, pg_dump,
> >> etc. So, this patch should again be the last resort.
> >>
> >> Please take a look at how view definitions are dumped. That will
> >> give a good idea of how PG schema-qualifies (or does not qualify)
> >> objects. Here's how the view definition displayed changes with the
> >> search path. Since the code to dump views and display definitions
> >> is the same, the view definition dumped also changes with the
> >> search path. Thus pg_dump must be using some trick to always dump a
> >> consistent view definition (and hence a deparsed query). Thanks
> >> Amit for the example.
> >>
> >> create table ttt (id int);
> >>
> >> postgres=# create domain dd int;
> >> CREATE DOMAIN
> >> postgres=# create view v2 as select id::dd from ttt;
> >> CREATE VIEW
> >> postgres=# set search_path TO '';
> >> SET
> >>
> >> postgres=# \d+ public.v2
> >>                  View "public.v2"
> >>  Column |   Type    | Modifiers | Storage | Description
> >> --------+-----------+-----------+---------+-------------
> >>  id     | public.dd |           | plain   |
> >> View definition:
> >>  SELECT ttt.id::public.dd AS id
> >>    FROM public.ttt;
> >>
> >> postgres=# set search_path TO default ;
> >> SET
> >> postgres=# show search_path ;
> >>   search_path
> >> ----------------
> >>  "$user",public
> >> (1 row)
> >>
> >> postgres=# \d+ public.v2
> >>          View "public.v2"
> >>  Column | Type | Modifiers | Storage | Description
> >> --------+------+-----------+---------+-------------
> >>  id     | dd   |           | plain   |
> >> View definition:
> >>  SELECT ttt.id::dd AS id
> >>    FROM ttt;
> >>
> >> We need to leverage a similar mechanism here to reduce the PG
> >> footprint.
> >>
> >> On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <
> >> abb...@en...> wrote:
> >>>
> >>> Hi,
> >>> As discussed in the last F2F meeting, here is an updated patch
> >>> that provides schema qualification of the following objects:
> >>> Tables, Views, Functions, Types and Domains, in the case of remote
> >>> queries.
> >>> Sequence functions are never concerned with datanodes, hence
> >>> schema qualification is not required in the case of sequences.
> >>> This solves the plancache test case failure and does not introduce
> >>> any more failures.
> >>> I have also attached some tests with results to aid in review.
> >>>
> >>> Comments are welcome.
> >>>
> >>> Regards
> >>>
> >>> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <
> >>> abb...@en...> wrote:
> >>>>
> >>>> Hi,
> >>>> Attached please find a WIP patch that provides the functionality
> >>>> of preparing the statement at the datanodes as soon as it is
> >>>> prepared on the coordinator.
> >>>> This is to take care of a test case in plancache that makes sure
> >>>> that a change of search_path is ignored by replans.
> >>>> While the patch fixes this replan test case and the regression
> >>>> works fine, there are still two problems I have to take care of.
> >>>>
> >>>> 1. This test case fails:
> >>>>
> >>>>    CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
> >>>>    INSERT INTO xc_alter_table_3 VALUES (1, 'a');
> >>>>    PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails
> >>>>
> >>>>    test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
> >>>>                               QUERY PLAN
> >>>>    -------------------------------------------------------------------
> >>>>     Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
> >>>>       Node/s: data_node_1, data_node_2, data_node_3, data_node_4
> >>>>       Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE
> >>>>                     ((xc_alter_table_3.ctid = $1) AND
> >>>>                     (xc_alter_table_3.xc_node_id = $2))
> >>>>       ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"
> >>>>           (cost=0.00..0.00 rows=1000 width=14)
> >>>>             Output: xc_alter_table_3.a, xc_alter_table_3.ctid,
> >>>>                     xc_alter_table_3.xc_node_id
> >>>>             Node/s: data_node_3
> >>>>             Remote query: SELECT a, ctid, xc_node_id FROM ONLY
> >>>>                           xc_alter_table_3 WHERE (a = 1)
> >>>>    (7 rows)
> >>>>
> >>>>    The reason for the failure is that the select query selects
> >>>>    three items, the first of which is an int, whereas the delete
> >>>>    query compares $1 with a ctid.
> >>>>    I am not sure how this works without prepare, but it fails
> >>>>    when used with prepare.
> >>>>
> >>>>    The reason for this plan is this section of code in the
> >>>>    function pgxc_build_dml_statement:
> >>>>
> >>>>    else if (cmdtype == CMD_DELETE)
> >>>>    {
> >>>>        /*
> >>>>         * Since there is no data to update, the first param is
> >>>>         * going to be ctid.
> >>>>         */
> >>>>        ctid_param_num = 1;
> >>>>    }
> >>>>
> >>>>    Amit/Ashutosh, can you suggest a fix for this problem?
> >>>>    There are a number of possibilities:
> >>>>    a) The select should not have selected column a.
> >>>>    b) The DELETE should have referred to $2 and $3 for ctid and
> >>>>       xc_node_id respectively.
> >>>>    c) Since the query works without PREPARE, we should make
> >>>>       PREPARE work the same way.
> >>>>
> >>>> 2. This test case in plancache fails:
> >>>>
> >>>>    -- Try it with a view, which isn't directly used in the resulting plan
> >>>>    -- but should trigger invalidation anyway
> >>>>    create table tab33 (a int, b int);
> >>>>    insert into tab33 values(1,2);
> >>>>    CREATE VIEW v_tab33 AS SELECT * FROM tab33;
> >>>>    PREPARE vprep AS SELECT * FROM v_tab33;
> >>>>    EXECUTE vprep;
> >>>>    CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
> >>>>    -- does not cause plan invalidation because views are never created on datanodes
> >>>>    EXECUTE vprep;
> >>>>
> >>>>    The reason for the failure is that views are never created on
> >>>>    the datanodes, hence plan invalidation is not triggered.
> >>>>    This can be documented as an XC limitation.
> >>>>
> >>>> 3. I still have to add comments in the patch, and some ifdefs may
> >>>>    be missing too.
> >>>>
> >>>> In addition to the patch I have also attached some example Java
> >>>> programs that test some basic functionality through JDBC. I found
> >>>> that these programs work fine after my patch.
> >>>>
> >>>> 1. Prepared.java: issues parameterized delete, insert and update
> >>>>    through JDBC. These are unnamed prepared statements and work
> >>>>    fine.
> >>>> 2. NamedPrepared.java: issues two named prepared statements
> >>>>    through JDBC and works fine.
> >>>> 3. Retrieve.java: runs a simple select to verify results.
> >>>>
> >>>> The comments on top of the files explain their usage.
> >>>>
> >>>> Comments are welcome.
> >>>> > >>>> Thanks > >>>> Regards > >>>> > >>>> > >>>> > >>>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat > >>>> <ash...@en...> wrote: > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt > >>>>> <abb...@en...> wrote: > >>>>>> > >>>>>> > >>>>>> > >>>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat > >>>>>> <ash...@en...> wrote: > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt > >>>>>>> <abb...@en...> wrote: > >>>>>>>> > >>>>>>>> Attached please find updated patch to fix the bug. The patch takes > >>>>>>>> care of the bug and the regression issues resulting from the > changes done in > >>>>>>>> the patch. Please note that the issue in test case plancache > still stands > >>>>>>>> unsolved because of the following test case (simplified but taken > from > >>>>>>>> plancache.sql) > >>>>>>>> > >>>>>>>> create schema s1 create table abc (f1 int); > >>>>>>>> create schema s2 create table abc (f1 int); > >>>>>>>> > >>>>>>>> > >>>>>>>> insert into s1.abc values(123); > >>>>>>>> insert into s2.abc values(456); > >>>>>>>> > >>>>>>>> set search_path = s1; > >>>>>>>> > >>>>>>>> prepare p1 as select f1 from abc; > >>>>>>>> execute p1; -- works fine, results in 123 > >>>>>>>> > >>>>>>>> set search_path = s2; > >>>>>>>> execute p1; -- works fine after the patch, results in 123 > >>>>>>>> > >>>>>>>> alter table s1.abc add column f2 float8; -- force replan > >>>>>>>> execute p1; -- fails > >>>>>>>> > >>>>>>> > >>>>>>> Huh! The beast bit us. > >>>>>>> > >>>>>>> I think the right solution here is either of two > >>>>>>> 1. Take your previous patch to always use qualified names (but you > >>>>>>> need to improve it not to affect the view dumps) > >>>>>>> 2. Prepare the statements at the datanode at the time of prepare. > >>>>>>> > >>>>>>> > >>>>>>> Is this test added new in 9.2? > >>>>>> > >>>>>> > >>>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c > in > >>>>>> March 2007. 
> >>>>>> > >>>>>>> > >>>>>>> Why didn't we see this issue the first time prepare was > implemented? I > >>>>>>> don't remember (but it was two years back). > >>>>>> > >>>>>> > >>>>>> I was unable to locate the exact reason but since statements were > not > >>>>>> being prepared on datanodes due to a merge issue this issue just > surfaced > >>>>>> up. > >>>>>> > >>>>> > >>>>> > >>>>> Well, even though statements were not getting prepared (actually > >>>>> prepared statements were not being used again and again) on > datanodes, we > >>>>> never prepared them on datanode at the time of preparing the > statement. So, > >>>>> this bug should have shown itself long back. > >>>>> > >>>>>>> > >>>>>>> > >>>>>>>> > >>>>>>>> The last execute should result in 123, whereas it results in 456. > The > >>>>>>>> reason is that the search path has already been changed at the > datanode and > >>>>>>>> a replan would mean select from abc in s2. > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat > >>>>>>>> <ash...@en...> wrote: > >>>>>>>>> > >>>>>>>>> Hi Abbas, > >>>>>>>>> I think the fix is on the right track. There are couple of > >>>>>>>>> improvements that we need to do here (but you may not do those > if the time > >>>>>>>>> doesn't permit). > >>>>>>>>> > >>>>>>>>> 1. We should have a status in RemoteQuery node, as to whether the > >>>>>>>>> query in the node should use extended protocol or not, rather > than relying > >>>>>>>>> on the presence of statement name and parameters etc. Amit has > already added > >>>>>>>>> a status with that effect. We need to leverage it. > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt > >>>>>>>>> <abb...@en...> wrote: > >>>>>>>>>> > >>>>>>>>>> The patch fixes the dead code issue, that I described earlier. > The > >>>>>>>>>> code was dead because of two issues: > >>>>>>>>>> > >>>>>>>>>> 1. 
The function CompleteCachedPlan was wrongly setting > stmt_name to > >>>>>>>>>> NULL and this was the main reason > ActivateDatanodeStatementOnNode was not > >>>>>>>>>> being called in the function pgxc_start_command_on_connection. > >>>>>>>>>> 2. The function SetRemoteStatementName was wrongly assuming > that a > >>>>>>>>>> prepared statement must have some parameters. > >>>>>>>>>> > >>>>>>>>>> Fixing these two issues makes sure that the function > >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and statements > get prepared on > >>>>>>>>>> the datanode. > >>>>>>>>>> This patch would fix bug 3607975. It would however not fix the > test > >>>>>>>>>> case I described in my previous email because of reasons I > described. > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat > >>>>>>>>>> <ash...@en...> wrote: > >>>>>>>>>>> > >>>>>>>>>>> Can you please explain what this fix does? It would help to > have > >>>>>>>>>>> an elaborate explanation with code snippets. 
> >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt > >>>>>>>>>>> <abb...@en...> wrote: > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat > >>>>>>>>>>>> <ash...@en...> wrote: > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt > >>>>>>>>>>>>> <abb...@en...> wrote: > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat > >>>>>>>>>>>>>> <ash...@en...> wrote: > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt > >>>>>>>>>>>>>>> <abb...@en...> wrote: > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> Hi, > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> While working on test case plancache it was brought up as > a > >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve > the problem of the > >>>>>>>>>>>>>>>> test case. > >>>>>>>>>>>>>>>> However there is some confusion in the statement of bug id > >>>>>>>>>>>>>>>> 3607975. > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple > >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing > the query on > >>>>>>>>>>>>>>>> datanode al times, as against preparing once and > executing multiple times. > >>>>>>>>>>>>>>>> This is because somehow the remote query is being > prepared as an unnamed > >>>>>>>>>>>>>>>> statement." > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> Consider this test case > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> A. create table abc(a int, b int); > >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); > >>>>>>>>>>>>>>>> C. prepare p1 as select * from abc; > >>>>>>>>>>>>>>>> D. execute p1; > >>>>>>>>>>>>>>>> E. execute p1; > >>>>>>>>>>>>>>>> F. execute p1; > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> Here are the confusions > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> 1. 
The coordinator never prepares on datanode in response > to > >>>>>>>>>>>>>>>> a prepare issued by a user. > >>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. > >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all > >>>>>>>>>>>>>>>> datanodes. > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build > a > >>>>>>>>>>>>>>>> new generic plan, > >>>>>>>>>>>>>>>> and steps E and F use the already built generic plan. > >>>>>>>>>>>>>>>> For details see function GetCachedPlan. > >>>>>>>>>>>>>>>> This means that executing a prepared statement again > and > >>>>>>>>>>>>>>>> again does use cached plans > >>>>>>>>>>>>>>>> and does not prepare again and again every time we > issue > >>>>>>>>>>>>>>>> an execute. > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> The problem is not here. The problem is in do_query() where > >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped out and > we keep on > >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the > datanode. > >>>>>>>>>>>>>> I spent time looking at the code written in do_query and > functions called > >>>>>>>>>>>>>> from with in do_query to handle prepared statements but the > code written in > >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements > prepared on datanodes > >>>>>>>>>>>>>> is dead as of now. It is never called during the complete > regression run. > >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never > called. The way > >>>>>>>>>>>>>> prepared statements are being handled now is the same as I > described earlier > >>>>>>>>>>>>>> in the mail chain with the help of an example. > >>>>>>>>>>>>>> The code that is dead was originally added by Mason through > >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in > December 2010. 
This > >>>>>>>>>>>>>> code has been changed a lot over the last two years. This > commit does not > >>>>>>>>>>>>>> contain any test cases so I am not sure how did it use to > work back then. > >>>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared statements. > So, > >>>>>>>>>>>>> something has gone wrong in-between. That's what we need to > find out and > >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not good > for performance > >>>>>>>>>>>>> either. > >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> I was able to find the reason why the code was dead and the > >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now ensure > that > >>>>>>>>>>>> statements are prepared on datanodes whenever required. > However there is a > >>>>>>>>>>>> problem in the way prepared statements are handled. The > problem is that > >>>>>>>>>>>> unless a prepared statement is executed it is never prepared > on datanodes, > >>>>>>>>>>>> hence changing the path before executing the statement gives > us incorrect > >>>>>>>>>>>> results. For Example > >>>>>>>>>>>> > >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by > >>>>>>>>>>>> replication; > >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by > >>>>>>>>>>>> replication; > >>>>>>>>>>>> > >>>>>>>>>>>> insert into s1.abc values(123); > >>>>>>>>>>>> insert into s2.abc values(456); > >>>>>>>>>>>> set search_path = s2; > >>>>>>>>>>>> prepare p1 as select f1 from abc; > >>>>>>>>>>>> set search_path = s1; > >>>>>>>>>>>> execute p1; > >>>>>>>>>>>> > >>>>>>>>>>>> The last execute results in 123, where as it should have > resulted > >>>>>>>>>>>> in 456. > >>>>>>>>>>>> I can finalize the attached patch by fixing any regression > issues > >>>>>>>>>>>> that may result and that would fix 3607975 and improve > performance however > >>>>>>>>>>>> the above test case would still fail. 
> >>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not > reproducible. > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> Did you verify it under the debugger? If that would have > been > >>>>>>>>>>>>>>> the case, we would not have seen this problem if > search_path changed in > >>>>>>>>>>>>>>> between steps D and E. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> If search path is changed between steps D & E, the problem > >>>>>>>>>>>>>> occurs because when the remote query node is created, > schema qualification > >>>>>>>>>>>>>> is not added in the sql statement to be sent to the > datanode, but changes in > >>>>>>>>>>>>>> search path do get communicated to the datanode. The sql > statement is built > >>>>>>>>>>>>>> when execute is issued for the first time and is reused on > subsequent > >>>>>>>>>>>>>> executes. The datanode is totally unaware that the select > that it just > >>>>>>>>>>>>>> received is due to an execute of a prepared statement that > was prepared when > >>>>>>>>>>>>>> search path was some thing else. > >>>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, would fix > >>>>>>>>>>>>> the problem, since the statement will get prepared at the > datanode, with the > >>>>>>>>>>>>> same search path settings, as it would on the coordinator. > >>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> Comments are welcome. 
>
> --
> Abbas
> Architect
>
> Ph: 92.334.5100153
> Skype ID: gabbasb
> www.enterprisedb.com
>
> Follow us on Twitter
> @EnterpriseDB
>
> Visit EnterpriseDB for tutorials, webinars, whitepapers and more
>
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Postgres Database Company |
From: 鈴木 幸市 <ko...@in...> - 2013-06-26 01:18:17
|
I tested the patch and found that almost everything is okay, except for inherit. PFA the related files. I used the current master but without the pending patches, so inherit may depend upon them. Regards; --- Koichi Suzuki |
From: Amit K. <ami...@en...> - 2013-06-25 14:33:04
|
On 25 June 2013 19:59, Amit Khandekar <ami...@en...> wrote: > Attached is a patch that does schema qualification by overriding the > search_path just before doing the deparse in deparse_query(). > PopOverrideSearchPath() at the end pops out the temporary path. Also, the > transaction callback function already takes care of popping out such a > push in case of transaction rollback. > > Unfortunately we cannot apply this solution to temp tables. The > problem is that when a pg_temp schema is deparsed, it is deparsed into > pg_temp_1, pg_temp_2, etc., and these names are specific to the node. An > object in pg_temp_2 at the coordinator may be present in pg_temp_1 at a > datanode, so the remote query generated may or may not work on the > datanode, which makes it totally unreliable. > > In fact, the issue with pg_temp_1 names in the deparsed remote query > is present even currently. > > But wherever it is a correctness issue to *not* schema-qualify a temp > object, I have kept the schema qualification. > E.g. a user can set search_path to "s1, pg_temp", > have obj1 in both s1 and pg_temp, and want to refer to pg_temp.obj1. > In such a case, the remote query should have pg_temp_[1-9].obj1, > although it may cause errors because of the existing issue. > --- > So, the prepare-execute issue with search_path would remain there for temp tables. > > I tried to run the regression by extracting the regression expected output > files from Abbas's patch, and the regression passes, including plancache. > > I think for this release, we should go ahead by keeping this issue > open for temp tables. This solution is an improvement, and does not > cause any new issues. > > Comments welcome. > > > On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> wrote: >> Hi Abbas, >> We are changing a lot of PostgreSQL deparsing code, which would create >> problems in the future merges. 
Since this change is in query deparsing logic, >> any errors here would affect EXPLAIN, pg_dump, etc. So, this patch should >> again be the last resort. >> >> Please take a look at how view definitions are dumped. That will give a good >> idea as to how PG schema-qualifies (or not) objects. Here's how the >> displayed view definition changes with the search path. Since the code to dump views >> and to display definitions is the same, the view definition dumped also changes >> with the search path. Thus pg_dump must be using some trick to always dump a >> consistent view definition (and hence a deparsed query). Thanks Amit for the >> example. >> >> create table ttt (id int); >> >> postgres=# create domain dd int; >> CREATE DOMAIN >> postgres=# create view v2 as select id::dd from ttt; >> CREATE VIEW >> postgres=# set search_path TO ''; >> SET >> >> postgres=# \d+ public.v2 >> View "public.v2" >> Column | Type | Modifiers | Storage | Description >> --------+-----------+-----------+---------+------------- >> id | public.dd | | plain | >> View definition: >> SELECT ttt.id::public.dd AS id >> FROM public.ttt; >> >> postgres=# set search_path TO default ; >> SET >> postgres=# show search_path ; >> search_path >> ---------------- >> "$user",public >> (1 row) >> >> postgres=# \d+ public.v2 >> View "public.v2" >> Column | Type | Modifiers | Storage | Description >> --------+------+-----------+---------+------------- >> id | dd | | plain | >> View definition: >> SELECT ttt.id::dd AS id >> FROM ttt; >> >> We need to leverage a similar mechanism here to reduce the PG footprint. >> >> >> On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <abb...@en...> >> wrote: >>> >>> Hi, >>> As discussed in the last F2F meeting, here is an updated patch that >>> provides schema qualification of the following objects: Tables, Views, >>> Functions, Types and Domains in case of remote queries. 
>>> Sequence functions are never concerned with datanodes hence, schema >>> qualification is not required in case of sequences. >>> This solves plancache test case failure issue and does not introduce any >>> more failures. >>> I have also attached some tests with results to aid in review. >>> >>> Comments are welcome. >>> >>> Regards >>> >>> >>> >>> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> >>> wrote: >>>> >>>> Hi, >>>> Attached please find a WIP patch that provides the functionality of >>>> preparing the statement at the datanodes as soon as it is prepared >>>> on the coordinator. >>>> This is to take care of a test case in plancache that makes sure that >>>> change of search_path is ignored by replans. >>>> While the patch fixes this replan test case and the regression works fine >>>> there are still these two problems I have to take care of. >>>> >>>> 1. This test case fails >>>> >>>> CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY >>>> HASH(a); >>>> INSERT INTO xc_alter_table_3 VALUES (1, 'a'); >>>> PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails >>>> >>>> test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1; >>>> QUERY PLAN >>>> ------------------------------------------------------------------- >>>> Delete on public.xc_alter_table_3 (cost=0.00..0.00 rows=1000 >>>> width=14) >>>> Node/s: data_node_1, data_node_2, data_node_3, data_node_4 >>>> Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE >>>> ((xc_alter_table_3.ctid = $1) AND >>>> (xc_alter_table_3.xc_node_id = $2)) >>>> -> Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_" >>>> (cost=0.00..0.00 rows=1000 width=14) >>>> Output: xc_alter_table_3.a, xc_alter_table_3.ctid, >>>> xc_alter_table_3.xc_node_id >>>> Node/s: data_node_3 >>>> Remote query: SELECT a, ctid, xc_node_id FROM ONLY >>>> xc_alter_table_3 WHERE (a = 1) >>>> (7 rows) >>>> >>>> The reason of the failure is that the select query is selecting 3 >>>> items, the first of 
which is an int, >>>> whereas the delete query is comparing $1 with a ctid. >>>> I am not sure how this works without prepare, but it fails when used >>>> with prepare. >>>> >>>> The reason of this planning is this section of code in function >>>> pgxc_build_dml_statement >>>> else if (cmdtype == CMD_DELETE) >>>> { >>>> /* >>>> * Since there is no data to update, the first param is going to >>>> be >>>> * ctid. >>>> */ >>>> ctid_param_num = 1; >>>> } >>>> >>>> Amit/Ashutosh can you suggest a fix for this problem? >>>> There are a number of possibilities. >>>> a) The select should not have selected column a. >>>> b) The DELETE should have referred to $2 and $3 for ctid and >>>> xc_node_id respectively. >>>> c) Since the query works without PREPARE, we should make PREPARE work >>>> the same way. >>>> >>>> >>>> 2. This test case in plancache fails. >>>> >>>> -- Try it with a view, which isn't directly used in the resulting >>>> plan >>>> -- but should trigger invalidation anyway >>>> create table tab33 (a int, b int); >>>> insert into tab33 values(1,2); >>>> CREATE VIEW v_tab33 AS SELECT * FROM tab33; >>>> PREPARE vprep AS SELECT * FROM v_tab33; >>>> EXECUTE vprep; >>>> CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33; >>>> -- does not cause plan invalidation because views are never created >>>> on datanodes >>>> EXECUTE vprep; >>>> >>>> and the reason of the failure is that views are never created on the >>>> datanodes hence plan invalidation is not triggered. >>>> This can be documented as an XC limitation. >>>> >>>> 3. I still have to add comments in the patch and some ifdefs may be >>>> missing too. >>>> >>>> >>>> In addition to the patch I have also attached some example Java programs >>>> that test the some basic functionality through JDBC. I found that these >>>> programs are working fine after my patch. >>>> >>>> 1. Prepared.java : Issues parameterized delete, insert and update through >>>> JDBC. 
These are un-named prepared statements and works fine. >>>> 2. NamedPrepared.java : Issues two named prepared statements through JDBC >>>> and works fine. >>>> 3. Retrieve.java : Runs a simple select to verify results. >>>> The comments on top of the files explain their usage. >>>> >>>> Comments are welcome. >>>> >>>> Thanks >>>> Regards >>>> >>>> >>>> >>>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat >>>> <ash...@en...> wrote: >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt >>>>> <abb...@en...> wrote: >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat >>>>>> <ash...@en...> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt >>>>>>> <abb...@en...> wrote: >>>>>>>> >>>>>>>> Attached please find updated patch to fix the bug. The patch takes >>>>>>>> care of the bug and the regression issues resulting from the changes done in >>>>>>>> the patch. Please note that the issue in test case plancache still stands >>>>>>>> unsolved because of the following test case (simplified but taken from >>>>>>>> plancache.sql) >>>>>>>> >>>>>>>> create schema s1 create table abc (f1 int); >>>>>>>> create schema s2 create table abc (f1 int); >>>>>>>> >>>>>>>> >>>>>>>> insert into s1.abc values(123); >>>>>>>> insert into s2.abc values(456); >>>>>>>> >>>>>>>> set search_path = s1; >>>>>>>> >>>>>>>> prepare p1 as select f1 from abc; >>>>>>>> execute p1; -- works fine, results in 123 >>>>>>>> >>>>>>>> set search_path = s2; >>>>>>>> execute p1; -- works fine after the patch, results in 123 >>>>>>>> >>>>>>>> alter table s1.abc add column f2 float8; -- force replan >>>>>>>> execute p1; -- fails >>>>>>>> >>>>>>> >>>>>>> Huh! The beast bit us. >>>>>>> >>>>>>> I think the right solution here is either of two >>>>>>> 1. Take your previous patch to always use qualified names (but you >>>>>>> need to improve it not to affect the view dumps) >>>>>>> 2. 
Prepare the statements at the datanode at the time of prepare. >>>>>>> >>>>>>> >>>>>>> Is this test added new in 9.2? >>>>>> >>>>>> >>>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in >>>>>> March 2007. >>>>>> >>>>>>> >>>>>>> Why didn't we see this issue the first time prepare was implemented? I >>>>>>> don't remember (but it was two years back). >>>>>> >>>>>> >>>>>> I was unable to locate the exact reason but since statements were not >>>>>> being prepared on datanodes due to a merge issue this issue just surfaced >>>>>> up. >>>>>> >>>>> >>>>> >>>>> Well, even though statements were not getting prepared (actually >>>>> prepared statements were not being used again and again) on datanodes, we >>>>> never prepared them on datanode at the time of preparing the statement. So, >>>>> this bug should have shown itself long back. >>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> The last execute should result in 123, whereas it results in 456. The >>>>>>>> reason is that the search path has already been changed at the datanode and >>>>>>>> a replan would mean select from abc in s2. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat >>>>>>>> <ash...@en...> wrote: >>>>>>>>> >>>>>>>>> Hi Abbas, >>>>>>>>> I think the fix is on the right track. There are couple of >>>>>>>>> improvements that we need to do here (but you may not do those if the time >>>>>>>>> doesn't permit). >>>>>>>>> >>>>>>>>> 1. We should have a status in RemoteQuery node, as to whether the >>>>>>>>> query in the node should use extended protocol or not, rather than relying >>>>>>>>> on the presence of statement name and parameters etc. Amit has already added >>>>>>>>> a status with that effect. We need to leverage it. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt >>>>>>>>> <abb...@en...> wrote: >>>>>>>>>> >>>>>>>>>> The patch fixes the dead code issue, that I described earlier. 
The >>>>>>>>>> code was dead because of two issues: >>>>>>>>>> >>>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting stmt_name to >>>>>>>>>> NULL and this was the main reason ActivateDatanodeStatementOnNode was not >>>>>>>>>> being called in the function pgxc_start_command_on_connection. >>>>>>>>>> 2. The function SetRemoteStatementName was wrongly assuming that a >>>>>>>>>> prepared statement must have some parameters. >>>>>>>>>> >>>>>>>>>> Fixing these two issues makes sure that the function >>>>>>>>>> ActivateDatanodeStatementOnNode is now called and statements get prepared on >>>>>>>>>> the datanode. >>>>>>>>>> This patch would fix bug 3607975. It would however not fix the test >>>>>>>>>> case I described in my previous email because of reasons I described. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat >>>>>>>>>> <ash...@en...> wrote: >>>>>>>>>>> >>>>>>>>>>> Can you please explain what this fix does? It would help to have >>>>>>>>>>> an elaborate explanation with code snippets. 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt >>>>>>>>>>> <abb...@en...> wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat >>>>>>>>>>>> <ash...@en...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt >>>>>>>>>>>>> <abb...@en...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat >>>>>>>>>>>>>> <ash...@en...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt >>>>>>>>>>>>>>> <abb...@en...> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> While working on test case plancache it was brought up as a >>>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve the problem of the >>>>>>>>>>>>>>>> test case. >>>>>>>>>>>>>>>> However there is some confusion in the statement of bug id >>>>>>>>>>>>>>>> 3607975. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple >>>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing the query on >>>>>>>>>>>>>>>> datanode al times, as against preparing once and executing multiple times. >>>>>>>>>>>>>>>> This is because somehow the remote query is being prepared as an unnamed >>>>>>>>>>>>>>>> statement." >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Consider this test case >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> A. create table abc(a int, b int); >>>>>>>>>>>>>>>> B. insert into abc values(11, 22); >>>>>>>>>>>>>>>> C. prepare p1 as select * from abc; >>>>>>>>>>>>>>>> D. execute p1; >>>>>>>>>>>>>>>> E. execute p1; >>>>>>>>>>>>>>>> F. execute p1; >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Here are the confusions >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 1. The coordinator never prepares on datanode in response to >>>>>>>>>>>>>>>> a prepare issued by a user. 
>>>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. >>>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all >>>>>>>>>>>>>>>> datanodes. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a >>>>>>>>>>>>>>>> new generic plan, >>>>>>>>>>>>>>>> and steps E and F use the already built generic plan. >>>>>>>>>>>>>>>> For details see function GetCachedPlan. >>>>>>>>>>>>>>>> This means that executing a prepared statement again and >>>>>>>>>>>>>>>> again does use cached plans >>>>>>>>>>>>>>>> and does not prepare again and again every time we issue >>>>>>>>>>>>>>>> an execute. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The problem is not here. The problem is in do_query() where >>>>>>>>>>>>>>> somehow the name of prepared statement gets wiped out and we keep on >>>>>>>>>>>>>>> preparing unnamed statements at the datanode. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> We never prepare any named/unnamed statements on the datanode. >>>>>>>>>>>>>> I spent time looking at the code written in do_query and functions called >>>>>>>>>>>>>> from with in do_query to handle prepared statements but the code written in >>>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements prepared on datanodes >>>>>>>>>>>>>> is dead as of now. It is never called during the complete regression run. >>>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never called. The way >>>>>>>>>>>>>> prepared statements are being handled now is the same as I described earlier >>>>>>>>>>>>>> in the mail chain with the help of an example. >>>>>>>>>>>>>> The code that is dead was originally added by Mason through >>>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This >>>>>>>>>>>>>> code has been changed a lot over the last two years. This commit does not >>>>>>>>>>>>>> contain any test cases so I am not sure how did it use to work back then. 
>>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> This code wasn't dead, when I worked on prepared statements. So, >>>>>>>>>>>>> something has gone wrong in-between. That's what we need to find out and >>>>>>>>>>>>> fix. Not preparing statements on the datanode is not good for performance >>>>>>>>>>>>> either. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I was able to find the reason why the code was dead and the >>>>>>>>>>>> attached patch (WIP) fixes the problem. This would now ensure that >>>>>>>>>>>> statements are prepared on datanodes whenever required. However there is a >>>>>>>>>>>> problem in the way prepared statements are handled. The problem is that >>>>>>>>>>>> unless a prepared statement is executed it is never prepared on datanodes, >>>>>>>>>>>> hence changing the path before executing the statement gives us incorrect >>>>>>>>>>>> results. For Example >>>>>>>>>>>> >>>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by >>>>>>>>>>>> replication; >>>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by >>>>>>>>>>>> replication; >>>>>>>>>>>> >>>>>>>>>>>> insert into s1.abc values(123); >>>>>>>>>>>> insert into s2.abc values(456); >>>>>>>>>>>> set search_path = s2; >>>>>>>>>>>> prepare p1 as select f1 from abc; >>>>>>>>>>>> set search_path = s1; >>>>>>>>>>>> execute p1; >>>>>>>>>>>> >>>>>>>>>>>> The last execute results in 123, where as it should have resulted >>>>>>>>>>>> in 456. >>>>>>>>>>>> I can finalize the attached patch by fixing any regression issues >>>>>>>>>>>> that may result and that would fix 3607975 and improve performance however >>>>>>>>>>>> the above test case would still fail. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not reproducible. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Did you verify it under the debugger? 
If that would have been >>>>>>>>>>>>>>> the case, we would not have seen this problem if search_path changed in >>>>>>>>>>>>>>> between steps D and E. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> If search path is changed between steps D & E, the problem >>>>>>>>>>>>>> occurs because when the remote query node is created, schema qualification >>>>>>>>>>>>>> is not added in the sql statement to be sent to the datanode, but changes in >>>>>>>>>>>>>> search path do get communicated to the datanode. The sql statement is built >>>>>>>>>>>>>> when execute is issued for the first time and is reused on subsequent >>>>>>>>>>>>>> executes. The datanode is totally unaware that the select that it just >>>>>>>>>>>>>> received is due to an execute of a prepared statement that was prepared when >>>>>>>>>>>>>> search path was some thing else. >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Fixing the prepared statements the way I suggested, would fix >>>>>>>>>>>>> the problem, since the statement will get prepared at the datanode, with the >>>>>>>>>>>>> same search path settings, as it would on the coordinator. >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Comments are welcome. 
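[Editor's note] The timing problem debated in the quoted exchange above — a statement that is only prepared on the datanode at first EXECUTE captures whatever search_path is active then, not the one active at PREPARE time — can be illustrated with a minimal Python sketch. This is not Postgres-XC code; the Session class, the eager flag, and the one-schema search_path are illustrative assumptions.

```python
# Illustrative sketch (not Postgres-XC code): lazy vs eager preparation.
# A lazily prepared statement resolves its tables at first EXECUTE, so an
# intervening SET search_path changes which schema it binds to.

class Session:
    def __init__(self):
        self.search_path = ["public"]
        self.prepared = {}            # statement name -> bound schema (None = not yet bound)

    def prepare(self, name, eager):
        # eager=True models preparing on the datanode at PREPARE time,
        # pinning the schema resolution immediately.
        self.prepared[name] = self.search_path[0] if eager else None

    def execute(self, name):
        # A lazy statement binds on first execute and is cached afterwards.
        if self.prepared[name] is None:
            self.prepared[name] = self.search_path[0]
        return self.prepared[name]

s = Session()
s.search_path = ["s2"]
s.prepare("p1", eager=False)    # PREPARE p1 AS SELECT f1 FROM abc;
s.search_path = ["s1"]          # SET search_path = s1;
assert s.execute("p1") == "s1"  # lazy: binds to s1, though s2 was intended

s = Session()
s.search_path = ["s2"]
s.prepare("p1", eager=True)     # prepared on the datanode immediately
s.search_path = ["s1"]
assert s.execute("p1") == "s2"  # eager: schema pinned at PREPARE time
```

This mirrors the s1.abc/s2.abc example in the thread, where EXECUTE returned 123 (s1's row) instead of 456 (s2's row) because the remote statement was built after the search_path had already changed.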
|
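[Editor's note] The core hazard Amit and Ashutosh are debating — an unqualified table name in a deparsed remote query resolving against whatever search_path happens to be current on the datanode — can be sketched in a few lines. This is a minimal Python simulation, not Postgres-XC code; resolve() and the catalog dict are illustrative assumptions standing in for backend name lookup.

```python
# Illustrative sketch (not Postgres-XC code): unqualified names in a deparsed
# remote query are resolved against the *current* search_path, so the same
# query text can hit different tables before and after SET search_path.

def resolve(name, search_path, catalog):
    """Resolve a table name the way the backend would: a schema-qualified
    name is taken as-is; otherwise the first schema on the search_path
    that contains the object wins."""
    if "." in name:                    # already qualified: immune to search_path
        schema, obj = name.split(".")
        return schema, obj
    for schema in search_path:
        if name in catalog.get(schema, set()):
            return schema, name
    raise LookupError(name)

catalog = {"s1": {"abc"}, "s2": {"abc"}}   # same table name in two schemas

# Deparse at PREPARE time under search_path = s1, adding qualification:
unqualified = "abc"
qualified = "%s.%s" % resolve(unqualified, ["s1"], catalog)  # "s1.abc"

# After SET search_path = s2, a replanned unqualified query flips tables,
# while the schema-qualified form stays pinned:
assert resolve(unqualified, ["s2"], catalog) == ("s2", "abc")  # wrong table
assert resolve(qualified, ["s2"], catalog) == ("s1", "abc")    # still correct
```

This is the same trick the thread attributes to pg_dump: with an effectively empty search_path, the deparser is forced to emit fully qualified names, which makes the dumped query text independent of any later search_path setting.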
From: Amit K. <ami...@en...> - 2013-06-25 14:30:04
|
Attached is a patch that does schema qualification by overriding the search_path just before doing the deparse in deparse_query(). PopOverrideSearchPath() at the end pops out the temporary path. Also, the transaction callback function already takes care of popping out such a push in case of transaction rollback. Unfortunately we cannot apply this solution to temp tables. The problem is that when a pg_temp schema is deparsed, it is deparsed into pg_temp_1, pg_temp_2, etc., and these names are specific to the node. An object in pg_temp_2 at the coordinator may be present in pg_temp_1 at a datanode, so the remote query generated may or may not work on the datanode, which makes it totally unreliable. In fact, the issue with pg_temp_1 names in the deparsed remote query is present even currently. But wherever it is a correctness issue to *not* schema-qualify a temp object, I have kept the schema qualification. E.g. a user can set search_path to "s1, pg_temp", have obj1 in both s1 and pg_temp, and want to refer to pg_temp.obj1. In such a case, the remote query should have pg_temp_[1-9].obj1, although it may cause errors because of the existing issue. So, the prepare-execute issue with search_path would remain there for temp tables. I tried to run the regression by extracting the regression expected output files from Abbas's patch, and the regression passes, including plancache. I think for this release, we should go ahead by keeping this issue open for temp tables. This solution is an improvement, and does not cause any new issues. Comments welcome. On 24 June 2013 13:00, Ashutosh Bapat <ash...@en...> wrote: > Hi Abbas, > We are changing a lot of PostgreSQL deparsing code, which would create > problems in the future merges. Since this change is in query deparsing logic, > any errors here would affect EXPLAIN, pg_dump, etc. So, this patch should > again be the last resort. > > Please take a look at how view definitions are dumped. That will give a good > idea as to how PG schema-qualifies (or not) objects. 
Here's how the > displayed view definition changes with the search path. Since the code to dump views > and to display definitions is the same, the view definition dumped also changes > with the search path. Thus pg_dump must be using some trick to always dump a > consistent view definition (and hence a deparsed query). Thanks Amit for the > example. > > create table ttt (id int); > > postgres=# create domain dd int; > CREATE DOMAIN > postgres=# create view v2 as select id::dd from ttt; > CREATE VIEW > postgres=# set search_path TO ''; > SET > > postgres=# \d+ public.v2 > View "public.v2" > Column | Type | Modifiers | Storage | Description > --------+-----------+-----------+---------+------------- > id | public.dd | | plain | > View definition: > SELECT ttt.id::public.dd AS id > FROM public.ttt; > > postgres=# set search_path TO default ; > SET > postgres=# show search_path ; > search_path > ---------------- > "$user",public > (1 row) > > postgres=# \d+ public.v2 > View "public.v2" > Column | Type | Modifiers | Storage | Description > --------+------+-----------+---------+------------- > id | dd | | plain | > View definition: > SELECT ttt.id::dd AS id > FROM ttt; > > We need to leverage a similar mechanism here to reduce the PG footprint. > > > On Mon, Jun 24, 2013 at 8:12 AM, Abbas Butt <abb...@en...> > wrote: >> >> Hi, >> As discussed in the last F2F meeting, here is an updated patch that >> provides schema qualification of the following objects: Tables, Views, >> Functions, Types and Domains in case of remote queries. >> Sequence functions are never concerned with datanodes; hence, schema >> qualification is not required in the case of sequences. >> This solves the plancache test case failure issue and does not introduce any >> more failures. >> I have also attached some tests with results to aid in review. >> >> Comments are welcome. 
>> >> Regards >> >> >> >> On Mon, Jun 10, 2013 at 5:31 PM, Abbas Butt <abb...@en...> >> wrote: >>> >>> Hi, >>> Attached please find a WIP patch that provides the functionality of >>> preparing the statement at the datanodes as soon as it is prepared >>> on the coordinator. >>> This is to take care of a test case in plancache that makes sure that >>> change of search_path is ignored by replans. >>> While the patch fixes this replan test case and the regression works fine >>> there are still these two problems I have to take care of. >>> >>> 1. This test case fails >>> >>> CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY >>> HASH(a); >>> INSERT INTO xc_alter_table_3 VALUES (1, 'a'); >>> PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails >>> >>> test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1; >>> QUERY PLAN >>> ------------------------------------------------------------------- >>> Delete on public.xc_alter_table_3 (cost=0.00..0.00 rows=1000 >>> width=14) >>> Node/s: data_node_1, data_node_2, data_node_3, data_node_4 >>> Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE >>> ((xc_alter_table_3.ctid = $1) AND >>> (xc_alter_table_3.xc_node_id = $2)) >>> -> Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_" >>> (cost=0.00..0.00 rows=1000 width=14) >>> Output: xc_alter_table_3.a, xc_alter_table_3.ctid, >>> xc_alter_table_3.xc_node_id >>> Node/s: data_node_3 >>> Remote query: SELECT a, ctid, xc_node_id FROM ONLY >>> xc_alter_table_3 WHERE (a = 1) >>> (7 rows) >>> >>> The reason of the failure is that the select query is selecting 3 >>> items, the first of which is an int, >>> whereas the delete query is comparing $1 with a ctid. >>> I am not sure how this works without prepare, but it fails when used >>> with prepare. 
>>> >>> The reason of this planning is this section of code in function >>> pgxc_build_dml_statement >>> else if (cmdtype == CMD_DELETE) >>> { >>> /* >>> * Since there is no data to update, the first param is going to >>> be >>> * ctid. >>> */ >>> ctid_param_num = 1; >>> } >>> >>> Amit/Ashutosh can you suggest a fix for this problem? >>> There are a number of possibilities. >>> a) The select should not have selected column a. >>> b) The DELETE should have referred to $2 and $3 for ctid and >>> xc_node_id respectively. >>> c) Since the query works without PREPARE, we should make PREPARE work >>> the same way. >>> >>> >>> 2. This test case in plancache fails. >>> >>> -- Try it with a view, which isn't directly used in the resulting >>> plan >>> -- but should trigger invalidation anyway >>> create table tab33 (a int, b int); >>> insert into tab33 values(1,2); >>> CREATE VIEW v_tab33 AS SELECT * FROM tab33; >>> PREPARE vprep AS SELECT * FROM v_tab33; >>> EXECUTE vprep; >>> CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33; >>> -- does not cause plan invalidation because views are never created >>> on datanodes >>> EXECUTE vprep; >>> >>> and the reason of the failure is that views are never created on the >>> datanodes hence plan invalidation is not triggered. >>> This can be documented as an XC limitation. >>> >>> 3. I still have to add comments in the patch and some ifdefs may be >>> missing too. >>> >>> >>> In addition to the patch I have also attached some example Java programs >>> that test the some basic functionality through JDBC. I found that these >>> programs are working fine after my patch. >>> >>> 1. Prepared.java : Issues parameterized delete, insert and update through >>> JDBC. These are un-named prepared statements and works fine. >>> 2. NamedPrepared.java : Issues two named prepared statements through JDBC >>> and works fine. >>> 3. Retrieve.java : Runs a simple select to verify results. 
>>> The comments on top of the files explain their usage. >>> >>> Comments are welcome. >>> >>> Thanks >>> Regards >>> >>> >>> >>> On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat >>> <ash...@en...> wrote: >>>> >>>> >>>> >>>> >>>> On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt >>>> <abb...@en...> wrote: >>>>> >>>>> >>>>> >>>>> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat >>>>> <ash...@en...> wrote: >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt >>>>>> <abb...@en...> wrote: >>>>>>> >>>>>>> Attached please find updated patch to fix the bug. The patch takes >>>>>>> care of the bug and the regression issues resulting from the changes done in >>>>>>> the patch. Please note that the issue in test case plancache still stands >>>>>>> unsolved because of the following test case (simplified but taken from >>>>>>> plancache.sql) >>>>>>> >>>>>>> create schema s1 create table abc (f1 int); >>>>>>> create schema s2 create table abc (f1 int); >>>>>>> >>>>>>> >>>>>>> insert into s1.abc values(123); >>>>>>> insert into s2.abc values(456); >>>>>>> >>>>>>> set search_path = s1; >>>>>>> >>>>>>> prepare p1 as select f1 from abc; >>>>>>> execute p1; -- works fine, results in 123 >>>>>>> >>>>>>> set search_path = s2; >>>>>>> execute p1; -- works fine after the patch, results in 123 >>>>>>> >>>>>>> alter table s1.abc add column f2 float8; -- force replan >>>>>>> execute p1; -- fails >>>>>>> >>>>>> >>>>>> Huh! The beast bit us. >>>>>> >>>>>> I think the right solution here is either of two >>>>>> 1. Take your previous patch to always use qualified names (but you >>>>>> need to improve it not to affect the view dumps) >>>>>> 2. Prepare the statements at the datanode at the time of prepare. >>>>>> >>>>>> >>>>>> Is this test added new in 9.2? >>>>> >>>>> >>>>> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in >>>>> March 2007. >>>>> >>>>>> >>>>>> Why didn't we see this issue the first time prepare was implemented? 
I >>>>>> don't remember (but it was two years back). >>>>> >>>>> >>>>> I was unable to locate the exact reason but since statements were not >>>>> being prepared on datanodes due to a merge issue this issue just surfaced >>>>> up. >>>>> >>>> >>>> >>>> Well, even though statements were not getting prepared (actually >>>> prepared statements were not being used again and again) on datanodes, we >>>> never prepared them on datanode at the time of preparing the statement. So, >>>> this bug should have shown itself long back. >>>> >>>>>> >>>>>> >>>>>>> >>>>>>> The last execute should result in 123, whereas it results in 456. The >>>>>>> reason is that the search path has already been changed at the datanode and >>>>>>> a replan would mean select from abc in s2. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat >>>>>>> <ash...@en...> wrote: >>>>>>>> >>>>>>>> Hi Abbas, >>>>>>>> I think the fix is on the right track. There are couple of >>>>>>>> improvements that we need to do here (but you may not do those if the time >>>>>>>> doesn't permit). >>>>>>>> >>>>>>>> 1. We should have a status in RemoteQuery node, as to whether the >>>>>>>> query in the node should use extended protocol or not, rather than relying >>>>>>>> on the presence of statement name and parameters etc. Amit has already added >>>>>>>> a status with that effect. We need to leverage it. >>>>>>>> >>>>>>>> >>>>>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt >>>>>>>> <abb...@en...> wrote: >>>>>>>>> >>>>>>>>> The patch fixes the dead code issue, that I described earlier. The >>>>>>>>> code was dead because of two issues: >>>>>>>>> >>>>>>>>> 1. The function CompleteCachedPlan was wrongly setting stmt_name to >>>>>>>>> NULL and this was the main reason ActivateDatanodeStatementOnNode was not >>>>>>>>> being called in the function pgxc_start_command_on_connection. >>>>>>>>> 2. 
The function SetRemoteStatementName was wrongly assuming that a >>>>>>>>> prepared statement must have some parameters. >>>>>>>>> >>>>>>>>> Fixing these two issues makes sure that the function >>>>>>>>> ActivateDatanodeStatementOnNode is now called and statements get prepared on >>>>>>>>> the datanode. >>>>>>>>> This patch would fix bug 3607975. It would however not fix the test >>>>>>>>> case I described in my previous email because of reasons I described. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat >>>>>>>>> <ash...@en...> wrote: >>>>>>>>>> >>>>>>>>>> Can you please explain what this fix does? It would help to have >>>>>>>>>> an elaborate explanation with code snippets. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt >>>>>>>>>> <abb...@en...> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat >>>>>>>>>>> <ash...@en...> wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt >>>>>>>>>>>> <abb...@en...> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat >>>>>>>>>>>>> <ash...@en...> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt >>>>>>>>>>>>>> <abb...@en...> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> While working on test case plancache it was brought up as a >>>>>>>>>>>>>>> review comment that solving bug id 3607975 should solve the problem of the >>>>>>>>>>>>>>> test case. >>>>>>>>>>>>>>> However there is some confusion in the statement of bug id >>>>>>>>>>>>>>> 3607975. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple >>>>>>>>>>>>>>> times, the coordinator keeps on preparing and executing the query on >>>>>>>>>>>>>>> datanode al times, as against preparing once and executing multiple times. >>>>>>>>>>>>>>> This is because somehow the remote query is being prepared as an unnamed >>>>>>>>>>>>>>> statement." >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Consider this test case >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> A. create table abc(a int, b int); >>>>>>>>>>>>>>> B. insert into abc values(11, 22); >>>>>>>>>>>>>>> C. prepare p1 as select * from abc; >>>>>>>>>>>>>>> D. execute p1; >>>>>>>>>>>>>>> E. execute p1; >>>>>>>>>>>>>>> F. execute p1; >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Here are the confusions >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 1. The coordinator never prepares on datanode in response to >>>>>>>>>>>>>>> a prepare issued by a user. >>>>>>>>>>>>>>> In fact step C does nothing on the datanodes. >>>>>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all >>>>>>>>>>>>>>> datanodes. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a >>>>>>>>>>>>>>> new generic plan, >>>>>>>>>>>>>>> and steps E and F use the already built generic plan. >>>>>>>>>>>>>>> For details see function GetCachedPlan. >>>>>>>>>>>>>>> This means that executing a prepared statement again and >>>>>>>>>>>>>>> again does use cached plans >>>>>>>>>>>>>>> and does not prepare again and again every time we issue >>>>>>>>>>>>>>> an execute. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> The problem is not here. The problem is in do_query() where >>>>>>>>>>>>>> somehow the name of prepared statement gets wiped out and we keep on >>>>>>>>>>>>>> preparing unnamed statements at the datanode. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> We never prepare any named/unnamed statements on the datanode. 
>>>>>>>>>>>>> I spent time looking at the code written in do_query and functions called >>>>>>>>>>>>> from with in do_query to handle prepared statements but the code written in >>>>>>>>>>>>> pgxc_start_command_on_connection to handle statements prepared on datanodes >>>>>>>>>>>>> is dead as of now. It is never called during the complete regression run. >>>>>>>>>>>>> The function ActivateDatanodeStatementOnNode is never called. The way >>>>>>>>>>>>> prepared statements are being handled now is the same as I described earlier >>>>>>>>>>>>> in the mail chain with the help of an example. >>>>>>>>>>>>> The code that is dead was originally added by Mason through >>>>>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. This >>>>>>>>>>>>> code has been changed a lot over the last two years. This commit does not >>>>>>>>>>>>> contain any test cases so I am not sure how did it use to work back then. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> This code wasn't dead, when I worked on prepared statements. So, >>>>>>>>>>>> something has gone wrong in-between. That's what we need to find out and >>>>>>>>>>>> fix. Not preparing statements on the datanode is not good for performance >>>>>>>>>>>> either. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I was able to find the reason why the code was dead and the >>>>>>>>>>> attached patch (WIP) fixes the problem. This would now ensure that >>>>>>>>>>> statements are prepared on datanodes whenever required. However there is a >>>>>>>>>>> problem in the way prepared statements are handled. The problem is that >>>>>>>>>>> unless a prepared statement is executed it is never prepared on datanodes, >>>>>>>>>>> hence changing the path before executing the statement gives us incorrect >>>>>>>>>>> results. 
For Example >>>>>>>>>>> >>>>>>>>>>> create schema s1 create table abc (f1 int) distribute by >>>>>>>>>>> replication; >>>>>>>>>>> create schema s2 create table abc (f1 int) distribute by >>>>>>>>>>> replication; >>>>>>>>>>> >>>>>>>>>>> insert into s1.abc values(123); >>>>>>>>>>> insert into s2.abc values(456); >>>>>>>>>>> set search_path = s2; >>>>>>>>>>> prepare p1 as select f1 from abc; >>>>>>>>>>> set search_path = s1; >>>>>>>>>>> execute p1; >>>>>>>>>>> >>>>>>>>>>> The last execute results in 123, where as it should have resulted >>>>>>>>>>> in 456. >>>>>>>>>>> I can finalize the attached patch by fixing any regression issues >>>>>>>>>>> that may result and that would fix 3607975 and improve performance however >>>>>>>>>>> the above test case would still fail. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not reproducible. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Did you verify it under the debugger? If that would have been >>>>>>>>>>>>>> the case, we would not have seen this problem if search_path changed in >>>>>>>>>>>>>> between steps D and E. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> If search path is changed between steps D & E, the problem >>>>>>>>>>>>> occurs because when the remote query node is created, schema qualification >>>>>>>>>>>>> is not added in the sql statement to be sent to the datanode, but changes in >>>>>>>>>>>>> search path do get communicated to the datanode. The sql statement is built >>>>>>>>>>>>> when execute is issued for the first time and is reused on subsequent >>>>>>>>>>>>> executes. The datanode is totally unaware that the select that it just >>>>>>>>>>>>> received is due to an execute of a prepared statement that was prepared when >>>>>>>>>>>>> search path was some thing else. 
>>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Fixing the prepared statements the way I suggested, would fix >>>>>>>>>>>> the problem, since the statement will get prepared at the datanode, with the >>>>>>>>>>>> same search path settings, as it would on the coordinator. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Comments are welcome. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> Abbas >>>>>>>>>>>>>>> Architect >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>>>>>>> Skype ID: gabbasb >>>>>>>>>>>>>>> www.enterprisedb.com >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Follow us on Twitter >>>>>>>>>>>>>>> @EnterpriseDB >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and >>>>>>>>>>>>>>> more >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>>>>>>>>>>>> New Relic is the only SaaS-based application performance >>>>>>>>>>>>>>> monitoring service >>>>>>>>>>>>>>> that delivers powerful full stack analytics. Optimize and >>>>>>>>>>>>>>> monitor your >>>>>>>>>>>>>>> browser, app, & servers with just a few lines of code. Try >>>>>>>>>>>>>>> New Relic >>>>>>>>>>>>>>> and get this awesome Nerd Life shirt! >>>>>>>>>>>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>> Postgres-xc-developers mailing list >>>>>>>>>>>>>>> Pos...@li... 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> Best Wishes, >>>>>>>>>>>>>> Ashutosh Bapat >>>>>>>>>>>>>> EntepriseDB Corporation >>>>>>>>>>>>>> The Postgres Database Company >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> -- >>>>>>>>>>>>> Abbas >>>>>>>>>>>>> Architect >>>>>>>>>>>>> >>>>>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>>>>> Skype ID: gabbasb >>>>>>>>>>>>> www.enterprisedb.com >>>>>>>>>>>>> >>>>>>>>>>>>> Follow us on Twitter >>>>>>>>>>>>> @EnterpriseDB >>>>>>>>>>>>> >>>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and >>>>>>>>>>>>> more >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> Best Wishes, >>>>>>>>>>>> Ashutosh Bapat >>>>>>>>>>>> EntepriseDB Corporation >>>>>>>>>>>> The Postgres Database Company >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> -- >>>>>>>>>>> Abbas >>>>>>>>>>> Architect >>>>>>>>>>> >>>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>>> Skype ID: gabbasb >>>>>>>>>>> www.enterprisedb.com >>>>>>>>>>> >>>>>>>>>>> Follow us on Twitter >>>>>>>>>>> @EnterpriseDB >>>>>>>>>>> >>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Best Wishes, >>>>>>>>>> Ashutosh Bapat >>>>>>>>>> EntepriseDB Corporation >>>>>>>>>> The Postgres Database Company >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> -- >>>>>>>>> Abbas >>>>>>>>> Architect >>>>>>>>> >>>>>>>>> Ph: 92.334.5100153 >>>>>>>>> Skype ID: gabbasb >>>>>>>>> www.enterprisedb.com >>>>>>>>> >>>>>>>>> Follow us on Twitter >>>>>>>>> @EnterpriseDB >>>>>>>>> >>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Best Wishes, >>>>>>>> Ashutosh Bapat >>>>>>>> EntepriseDB Corporation 
>>>>>>>> The Postgres Database Company >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> -- >>>>>>> Abbas >>>>>>> Architect >>>>>>> >>>>>>> Ph: 92.334.5100153 >>>>>>> Skype ID: gabbasb >>>>>>> www.enterprisedb.com >>>>>>> >>>>>>> Follow us on Twitter >>>>>>> @EnterpriseDB >>>>>>> >>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best Wishes, >>>>>> Ashutosh Bapat >>>>>> EntepriseDB Corporation >>>>>> The Postgres Database Company >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Abbas >>>>> Architect >>>>> >>>>> Ph: 92.334.5100153 >>>>> Skype ID: gabbasb >>>>> www.enterprisedb.com >>>>> >>>>> Follow us on Twitter >>>>> @EnterpriseDB >>>>> >>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more >>>> >>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Postgres Database Company >>> >>> >>> >>> >>> -- >>> -- >>> Abbas >>> Architect >>> >>> Ph: 92.334.5100153 >>> Skype ID: gabbasb >>> www.enterprisedb.com >>> >>> Follow us on Twitter >>> @EnterpriseDB >>> >>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more >> >> >> >> >> -- >> -- >> Abbas >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.com >> >> Follow us on Twitter >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers and more > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
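Fix option (b) above can be sketched in a few lines of C. This is a hypothetical illustration, not the actual Postgres-XC code: the helpers `ctid_param_num_for_delete` and `build_remote_delete` are invented names, showing how the ctid parameter number could be derived from the number of real columns the inner Data Node Scan emits, instead of hard-coding `ctid_param_num = 1` as in `pgxc_build_dml_statement`:

```c
#include <stdio.h>

/*
 * Hypothetical sketch: the inner scan for the xc_alter_table_3 example
 * emits (a, ctid, xc_node_id), so ctid is parameter $2, not $1.
 * Parameters are 1-based; ctid follows the real columns.
 */
static int
ctid_param_num_for_delete(int n_scan_cols_before_ctid)
{
    return n_scan_cols_before_ctid + 1;
}

/* Build the remote DELETE text with correctly numbered junk-column params. */
static void
build_remote_delete(char *buf, size_t len, const char *relname,
                    int n_scan_cols_before_ctid)
{
    int ctid_param = ctid_param_num_for_delete(n_scan_cols_before_ctid);

    snprintf(buf, len,
             "DELETE FROM ONLY %s WHERE ((%s.ctid = $%d) AND (%s.xc_node_id = $%d))",
             relname, relname, ctid_param, relname, ctid_param + 1);
}
```

With one real column selected ahead of the junk columns, the generated query compares ctid against $2 and xc_node_id against $3, matching option (b).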
From: Ashutosh B. <ash...@en...> - 2013-06-25 08:55:50
HI Abbas, The testcases are ok now, but please annotate them saying replication tests, round robin tests etc. We don't need both n1 and n2 tests. Either of them would be fine. You may commit the patch, after taking care of these comments. On Mon, Jun 24, 2013 at 4:31 PM, Abbas Butt <abb...@en...>wrote: > Hi, > Thanks for the review. > Here is the updated patch that contains reduced test cases. > Also I have updated code comments. > The statement failing is already in the comments. > Regards > > > > On Mon, Jun 17, 2013 at 12:36 PM, Ashutosh Bapat < > ash...@en...> wrote: > >> Hi Abbas, >> I think the patch for this is in the other thread (11_fix ..). I looked >> at the patch. Here are the comments >> 1. There are just too many tests in the patch, without much difference. >> Please add only the tests which are needed, and also add comments about the >> purpose of the statements. Considering the time at hand, I don't think I >> can review all of the tests, so it would be good if it can be reduced to a >> minimal set. >> 2. The code is fine, but the comment need not have specific details of >> the statement failing. Getting preferred node is general practice >> everywhere and not just this portion of the code. By the way, we are not >> getting "just first node" from the node list but we try to get the >> preferred node. >> >> >> On Wed, Mar 27, 2013 at 3:55 PM, Abbas Butt <abb...@en...>wrote: >> >>> Bug ID 3608374 >>> >>> On Fri, Mar 8, 2013 at 12:25 PM, Abbas Butt <abb...@en... >>> > wrote: >>> >>>> Attached please find revised patch that provides the following in >>>> addition to what it did earlier. >>>> >>>> 1. Uses GetPreferredReplicationNode() instead of list_truncate() >>>> 2. Adds test cases to xc_alter_table and xc_copy. >>>> >>>> I tested the following in reasonable detail to find whether any other >>>> caller of GetRelationNodes() needs some fixing or not and found that none >>>> of the other callers needs any more fixing. 
>>>> I tested >>>> a) copy >>>> b) alter table redistribute >>>> c) utilities >>>> d) dmls etc >>>> >>>> However while testing ALTER TABLE, I found that replicated to hash is >>>> not working correctly. >>>> >>>> This test case fails, since only SIX rows are expected in the final >>>> result. >>>> >>>> test=# create table t_r_n12(a int, b int) distribute by replication to >>>> node (DATA_NODE_1, DATA_NODE_2); >>>> CREATE TABLE >>>> test=# insert into t_r_n12 >>>> values(1,777),(3,4),(5,6),(20,30),(NULL,999), (NULL, 999); >>>> INSERT 0 6 >>>> test=# -- rep to hash >>>> test=# ALTER TABLE t_r_n12 distribute by hash(a); >>>> ALTER TABLE >>>> test=# SELECT * FROM t_r_n12 order by 1; >>>> a | b >>>> ----+----- >>>> 1 | 777 >>>> 3 | 4 >>>> 5 | 6 >>>> 20 | 30 >>>> | 999 >>>> | 999 >>>> | 999 >>>> | 999 >>>> (8 rows) >>>> >>>> test=# drop table t_r_n12; >>>> DROP TABLE >>>> >>>> I have added a source forge bug tracker id to this case (Artifact >>>> 3607290<https://sourceforge.net/tracker/?func=detail&aid=3607290&group_id=311227&atid=1310232>). >>>> The reason for this error is that the function distrib_delete_hash does not >>>> take into account that the distribution column can be null. I will provide >>>> a separate fix for that one. >>>> Regression shows no extra failure except that test case xc_alter_table >>>> would fail until 3607290 is fixed. >>>> >>>> Regards >>>> >>>> >>>> >>>> On Mon, Feb 25, 2013 at 10:18 AM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> Thanks a lot Abbas for this quick fix. >>>>> >>>>> I am sorry, it's caused by my refactoring of GetRelationNodes(). >>>>> >>>>> If possible, can you please examine the other callers of >>>>> GetRelationNodes() which would face the problems, esp. the ones for DML and >>>>> utilities. This is other instance, where deciding the nodes to execute on >>>>> at the time of execution will help. >>>>> >>>>> About the fix >>>>> Can you please use GetPreferredReplicationNode() instead of >>>>> list_truncate()? 
It will pick the preferred node instead of first one. If >>>>> you find more places where we need this fix, it might be better to create a >>>>> wrapper function and use it at those places. >>>>> >>>>> On Sat, Feb 23, 2013 at 2:59 PM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> Hi, >>>>>> PFA a patch to fix a crash when COPY TO is used on a replicated table. >>>>>> >>>>>> This test case produces a crash >>>>>> >>>>>> create table tab_rep(a int, b int) distribute by replication; >>>>>> insert into tab_rep values(1,2), (3,4), (5,6), (7,8); >>>>>> COPY tab_rep (a, b) TO stdout; >>>>>> >>>>>> Here is a description of the problem and the fix >>>>>> In case of a read from a replicated table GetRelationNodes() >>>>>> returns all nodes and expects that the planner can choose >>>>>> one depending on the rest of the join tree. >>>>>> In case of COPY TO we should choose the first one in the node list >>>>>> This fixes a system crash and makes pg_dump work fine. >>>>>> >>>>>> -- >>>>>> Abbas >>>>>> Architect >>>>>> EnterpriseDB Corporation >>>>>> The Enterprise PostgreSQL Company >>>>>> >>>>>> Phone: 92-334-5100153 >>>>>> >>>>>> Website: www.enterprisedb.com >>>>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>>> >>>>>> This e-mail message (and any attachment) is intended for the use of >>>>>> the individual or entity to whom it is addressed. This message >>>>>> contains information from EnterpriseDB Corporation that may be >>>>>> privileged, confidential, or exempt from disclosure under applicable >>>>>> law. If you are not the intended recipient or authorized to receive >>>>>> this for the intended recipient, any use, dissemination, distribution, >>>>>> retention, archiving, or copying of this communication is strictly >>>>>> prohibited. If you have received this e-mail in error, please notify >>>>>> the sender immediately by reply e-mail and delete this message. 
>>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> Everyone hates slow websites. So do we. >>>>>> Make your web apps faster with AppDynamics >>>>>> Download AppDynamics Lite for free today: >>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>> _______________________________________________ >>>>>> Postgres-xc-developers mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Enterprise Postgres Company >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Abbas >>>> Architect >>>> EnterpriseDB Corporation >>>> The Enterprise PostgreSQL Company >>>> >>>> Phone: 92-334-5100153 >>>> >>>> Website: www.enterprisedb.com >>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>> >>>> This e-mail message (and any attachment) is intended for the use of >>>> the individual or entity to whom it is addressed. This message >>>> contains information from EnterpriseDB Corporation that may be >>>> privileged, confidential, or exempt from disclosure under applicable >>>> law. If you are not the intended recipient or authorized to receive >>>> this for the intended recipient, any use, dissemination, distribution, >>>> retention, archiving, or copying of this communication is strictly >>>> prohibited. If you have received this e-mail in error, please notify >>>> the sender immediately by reply e-mail and delete this message. 
>>>> >>> >>> >>> >>> -- >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Koichi S. <koi...@gm...> - 2013-06-25 08:27:11
Now xc_alter_table is successful. Thanks. ---------- Koichi Suzuki 2013/6/24 Abbas Butt <abb...@en...> > Attached please find a revised patch that contains related test cases. > Test cases were earlier submitted as a part of patch for bug id 3608374, > but since these two patches might not get into the same release , the test > cases are now separated. > > Regards > > > > On Fri, Mar 8, 2013 at 2:01 PM, Koichi Suzuki <koi...@gm...>wrote: > >> Thanks Abbas for the fix. >> ---------- >> Koichi Suzuki >> >> >> 2013/3/8 Abbas Butt <abb...@en...>: >> > Attached please find patch to fix 3607290. >> > >> > Regression shows no extra failure. >> > >> > Test cases for this have already been submitted in email subject [Patch >> to >> > fix a crash in COPY TO from a replicated table] >> > >> > -- >> > -- >> > Abbas >> > Architect >> > EnterpriseDB Corporation >> > The Enterprise PostgreSQL Company >> > >> > Phone: 92-334-5100153 >> > >> > Website: www.enterprisedb.com >> > EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> > Follow us on Twitter: http://www.twitter.com/enterprisedb >> > >> > This e-mail message (and any attachment) is intended for the use of >> > the individual or entity to whom it is addressed. This message >> > contains information from EnterpriseDB Corporation that may be >> > privileged, confidential, or exempt from disclosure under applicable >> > law. If you are not the intended recipient or authorized to receive >> > this for the intended recipient, any use, dissemination, distribution, >> > retention, archiving, or copying of this communication is strictly >> > prohibited. If you have received this e-mail in error, please notify >> > the sender immediately by reply e-mail and delete this message. 
>> > >> ------------------------------------------------------------------------------ >> > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >> > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >> > endpoint security space. For insight on selecting the right partner to >> > tackle endpoint security challenges, access the full report. >> > http://p.sf.net/sfu/symantec-dev2dev >> > _______________________________________________ >> > Postgres-xc-developers mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > |
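The NULL-handling bug behind tracker 3607290 can be illustrated with a small routing sketch. This is not the actual `distrib_delete_hash` fix; `hash_target_node` and its toy hash are assumptions made for illustration. The point is that a row whose distribution key is NULL must still map to exactly one node (here, node 0 by convention), otherwise redistribution from replication to hash leaves duplicate NULL rows behind:

```c
#include <stdbool.h>

/*
 * Illustrative sketch: route a row to a datanode by hash of its
 * distribution key. NULL keys cannot be hashed from the column value,
 * so they all collapse to one designated node; a routing function that
 * ignores isnull would scatter or duplicate them.
 */
static int
hash_target_node(bool isnull, int value, int n_nodes)
{
    if (isnull)
        return 0;   /* every NULL key lands on the same single node */

    /* Toy multiplicative hash, stand-in for the real hash function. */
    unsigned int h = (unsigned int) value * 2654435761u;
    return (int) (h % (unsigned int) n_nodes);
}
```

Under this rule the two `(NULL, 999)` rows in the t_r_n12 example are kept on one node and deleted from the other, so the table ends up with six rows, not eight.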
From: Koichi S. <koi...@gm...> - 2013-06-25 06:10:20
I meant that removing a "return" statement which returns another function's return value would be a good refactoring. Of course, a simple return need not be removed.
----------
Koichi Suzuki

2013/6/25 Koichi Suzuki <koi...@gm...>

> Yeah. The code is not harmful at all. Removing "return" from void
> functions could be a good refactoring. Although Solaris is not supported
> officially yet, I think it's a good idea to have it in master. I do hope
> Matt continues to test XC so that we can tell XC runs on Solaris.
>
> Any more inputs?
>
> Regards;
>
> ----------
> Koichi Suzuki
>
> 2013/6/25 Matt Warner <MW...@xi...>
>
>> I'll double check, but I thought I'd only removed return from functions
>> declaring void as their return type.
>>
>> ?
>>
>> Matt
>>
>> On Jun 23, 2013, at 6:22 PM, "鈴木 幸市" <ko...@in...> wrote:
>>
>> The patch looks reasonable. One comment: removing "return" from a non-void
>> function will cause a Linux gcc warning. For this case, we need an #ifdef
>> SOLARIS directive.
>>
>> You sent two similar patches for proxy_main.c in separate e-mails. The
>> later one seems to resolve my comment above. Although the core team
>> cannot declare that XC runs on Solaris so far, I think the patch is
>> reasonable to be included.
>>
>> Any other comments?
>> ---
>> Koichi Suzuki
>>
>> On 2013/06/22, at 1:26, Matt Warner <MW...@XI...> wrote:
>>
>> Regarding the other changes, they are specific to Solaris. For example,
>> in src/backend/pgxc/pool/pgxcnode.c, Solaris requires we include
>> sys/filio.h. I'll be searching to see if I can find a macro already defined
>> for Solaris that I can leverage to #ifdef those Solaris-specific items.
>>
>> Matt
>>
>> *From:* Matt Warner
>> *Sent:* Friday, June 21, 2013 9:21 AM
>> *To:* 'Koichi Suzuki'
>> *Cc:* 'pos...@li...'
>> *Subject:* RE: [Postgres-xc-developers] Minor Fixes
>>
>> First patch.
>>
>> *From:* Matt Warner
>> *Sent:* Friday, June 21, 2013 8:50 AM
>> *To:* 'Koichi Suzuki'
>> *Cc:* pos...@li...
>> *Subject:* RE: [Postgres-xc-developers] Minor Fixes
>>
>> Yes, I'm running XC on Solaris x64.
>>
>> *From:* Koichi Suzuki [mailto:koi...@gm...]
>> *Sent:* Thursday, June 20, 2013 6:34 PM
>> *To:* Matt Warner
>> *Cc:* pos...@li...
>> *Subject:* Re: [Postgres-xc-developers] Minor Fixes
>>
>> Thanks a lot for the patch. As Michael mentioned, you can send a patch
>> to the developers mailing list.
>>
>> BTW, the core team tested current XC on 64-bit Intel CentOS and others
>> tested it against RedHat. Did you test XC on Solaris?
>>
>> Regards;
>>
>> ----------
>> Koichi Suzuki
>>
>> 2013/6/21 Matt Warner <MW...@xi...>
>> Just a quick question about contributing fixes. I've had to make some
>> minor changes to get XC compiled on Solaris x64.
>> What format would you like to see for the changes? Most are very minor,
>> such as removing return statements inside void functions (which the Solaris
>> compiler flags as incorrect since you can't return from a void function).
>> Matt
>>
>> ------------------------------------------------------------------------------
>> This SF.net email is sponsored by Windows:
>>
>> Build for Windows Store.
>>
>> http://p.sf.net/sfu/windows-dev2dev
>> _______________________________________________
>> Postgres-xc-developers mailing list
>> Pos...@li...
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
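The portability point discussed in this thread can be shown in a few lines of C. This is an illustrative sketch, not taken from the XC sources: in C99 (§6.8.6.4), a `return` with an expression is not permitted in a function returning void, so `return log_notice(msg);` is the construct Solaris Studio rejects even though gcc tolerates it on Linux:

```c
#include <stdio.h>

static int messages_logged = 0;

static void
log_notice(const char *msg)
{
    fprintf(stderr, "NOTICE: %s\n", msg);
    messages_logged++;
}

/*
 * Non-portable form (accepted by gcc, rejected by the Solaris compiler):
 *     static void report(const char *msg) { return log_notice(msg); }
 *
 * Portable rewrite: make the call a statement, then return plainly
 * (or omit the return entirely), as Matt's patch does.
 */
static void
report(const char *msg)
{
    log_notice(msg);
    return;
}
```

Because the rewrite compiles identically under both compilers, no `#ifdef SOLARIS` directive is needed for the void-function cases.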