From: Pavan D. <pav...@gm...> - 2013-07-22 17:11:33
|
On Fri, Jul 19, 2013 at 12:15 PM, Ahsan Hadi <ahs...@en...> wrote:
> Hi Pavan,
>
> Can you review the patch from Ashutosh and see if that addresses the issue
> you have raised.

Looks good to me. Thanks. |
From: Tomonari K. <kat...@po...> - 2013-07-22 04:16:20
|
Hi Abbas,

(2013/07/19 20:37), Abbas Butt wrote:
> I had a look at your patch, here is my feedback.

Thank you for the feedback.

> 1. The problem you reported seems to be caused by a missing initialization
> i.e.
> query->rtable = NIL;
> Can you please explain why did the server crash and how it got fixed by
> having the above initialization statement?

The fix has nothing to do with the server crash. The test environment
(2 datanodes and 1 coordinator) avoids the crash.

> It would be nice to add some comment on top of this line explaining it.

To be honest, I don't know why my fix solves my infinite loop problem.
I'll check it deeper and revise the patch. Please wait for it.

> 2. I think your patch unintentionally includes a change in
> contrib/pgxc_ctl/signature.h
> Can you please remove it?

OK, I'll do it.

> 3. Can you please add a test case in say xc_misc.sql?

OK, I'll do it too.

regards,
-------------------
NTT Software Corporation
Tomonari Katsumata |
From: Nikhil S. <ni...@st...> - 2013-07-21 16:39:03
|
> I am working on some case study, and wants to know which one is best for
> Hive Metastore PostgresSQL/Postgres-XC or MySQL. if any one is having any
> document, link or resource please give me the link.

Stock PostgreSQL should be good enough as a Hive metastore. Don't think XC
would be a good fit for it. No comments about the latter :)

Regards,
Nikhils

> --
> ViVek Raghuwanshi
> Mobile -+91-09595950504
> Skype - vivek_raghuwanshi
> IRC vivekraghuwanshi
> http://vivekraghuwanshi.wordpress.com/
> http://in.linkedin.com/in/vivekraghuwanshi
>
> ------------------------------------------------------------------------------
> See everything from the browser to the database with AppDynamics
> Get end-to-end visibility with application monitoring from AppDynamics
> Isolate bottlenecks and diagnose root cause in seconds.
> Start your free trial of AppDynamics Pro today!
> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
StormDB - http://www.stormdb.com
The Database Cloud |
From: Vivek S. R. <viv...@gm...> - 2013-07-21 15:30:00
|
Hi All,

I am not sure if this is the right mailing list; if not, please
suggest/forward. I am working on a case study and want to know which is
best for a Hive metastore: PostgreSQL, Postgres-XC, or MySQL. If anyone
has a document, link, or resource, please give me the link.

--
ViVek Raghuwanshi
Mobile -+91-09595950504
Skype - vivek_raghuwanshi
IRC vivekraghuwanshi
http://vivekraghuwanshi.wordpress.com/
http://in.linkedin.com/in/vivekraghuwanshi |
From: Michael P. <mic...@gm...> - 2013-07-21 11:43:08
|
On Sat, Jul 20, 2013 at 11:50 PM, Tomasz Straszewski <str...@gm...> wrote:
> Hi,
>
> I have a question about transactions. I read that Postgres-XC doesn't
> support savepoints. What happens if a transaction fails?

Same answer as on the other mailing list: all the modifications done by
this transaction are dropped on all the nodes involved as it rolls back.

Thanks,
--
Michael |
From: Tomasz S. <str...@gm...> - 2013-07-20 14:50:40
|
Hi,

I have a question about transactions. I read that Postgres-XC doesn't
support savepoints. What happens if a transaction fails? My application
uses PostgreSQL and implements a savepoint DAO class that creates
savepoints in the database. I want to migrate from PostgreSQL to
Postgres-XC and I don't know what to do in this situation.

Regards |
From: Abbas B. <abb...@en...> - 2013-07-19 11:37:42
|
I had a look at your patch, here is my feedback. 1. The problem you reported seems to be caused by a missing initialization i.e. query->rtable = NIL; Can you please explain why did the server crash and how it got fixed by having the above initialization statement? It would be nice to add some comment on top of this line explaining it. 2. I think your patch unintentionally includes a change in contrib/pgxc_ctl/signature.h Can you please remove it? 3. Can you please add a test case in say xc_misc.sql? Regards On Fri, Jul 19, 2013 at 6:46 AM, Tomonari Katsumata < t.k...@gm...> wrote: > Hi Abbas, > > Thank you for your great suggestion! > I've test it with the enviroment as you said. > (against:c296dfec03f2eec15f79363c4111412c27d24ab5) > > Any crash has occured during regression tests. > I could run all regression tests with both original and patched(*) source. > Although I get one FAILED on a combocid test, it doesn't have > nothing to do with revising the source. > Because original source get same FAILED too. > (please see attached results.) > > (*) proposed by Ashutosh on ML(2013/07/02 15:14). > ------------- > In pgxc_collect_RTE_walker() > 3112 crte_context->crte_rtable = > list_concat(crte_context->crte_rtable, > 3113 query->rtable); > see if copying query->rtable solves the issue. The changed code would look > like > 3112 crte_context->crte_rtable = > list_concat(crte_context->crte_rtable, > 3113 > list_copy(query->rtable)); > ------------------------------------ > > If you don't have any problem, please commit this change. > > regards, > --------------------- > NTT Software Corporation > Tomonari Katsumata > > > > 2013/7/16 Abbas Butt <abb...@en...> > >> Hi Tomonari, >> Can you please continue your work by having a cluster configured in such >> a manner that it has at least two datanodes and one coordinator. >> This way you can avoid the crash in the regression and proceed with your >> work. 
>> Thanks >> Regards >> >> >> >> On Fri, Jul 5, 2013 at 8:53 AM, Tomonari Katsumata < >> kat...@po...> wrote: >> >>> Hi Ashutosh, Abbas, >>> >>> Sorry for slow response.. >>> But still I can not run regression test. >>> >>> >>Ashutosh >>> >>> > see if copying query->rtable solves the issue. The changed code would >>> look >>> > like >>> > >>> > 3112 crte_context->crte_rtable = >>> > list_concat(crte_context->crte_rtable, >>> > 3113 >>> > list_copy(query->rtable)); >>> > >>> I've changed source as you said, and it seems resolving my problem(*). >>> (*) below query falt in endless loop. >>> ------------------------------------------------------------------ >>> >>> EXECUTE DIRECT ON (data1) $$ >>> SELECT >>> count(*) >>> FROM >>> (SELECT * FROM pg_locks l LEFT JOIN >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> $$ >>> ------------------------------------------------------------------ >>> >>> But I couldn't run regression test with this source too. >>> It stoped same test-case(xmlmap.sql). >>> >>> >>Abbas >>> >>> > Since you are getting a crash even without your patch there must be >>> some >>> > thing wrong in the deployment. >>> > Can you try doing >>> > ./configure --enable-debug --enable-cassert CFLAGS='-O0' >>> > and then see whether its an assertion failure or not? >>> > Also can you see which SQL statement is causing the crash in >>> xmlmap.sql? >>> > >>> I've tried the configure you said. >>> I didn't get any assertion failure, but regression test had >>> stoped in "test collate". >>> >>> >>> When Postgres-XC crashed with xmlmap.sql, >>> I found The SQL statement which is causing the crash. >>> ---- >>> DECLARE xc CURSOR WITH HOLD FOR SELECT * FROM testxmlschema.test1 ORDER >>> BY 1, 2; >>> ---- >>> >>> I think these are another problems. >>> Because I get same result with or without patched Postgres-XC. >>> so I want to discuss on another mails. 
>>> >>> >>> >>> regards, >>> ------------ >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> >>> >>> >>> (2013/07/03 12:13), Abbas Butt wrote: >>> >>> Since you are getting a crash even without your patch there must be some >>> thing wrong in the deployment. >>> Can you try doing >>> ./configure --enable-debug --enable-cassert CFLAGS='-O0' >>> and then see whether its an assertion failure or not? >>> Also can you see which SQL statement is causing the crash in xmlmap.sql? >>> >>> On Mon, Jul 1, 2013 at 4:51 PM, Tomonari Katsumata <t.k...@gm...> wrote: >>> >>> >>> Hi, >>> >>> I've tried regression test, but I could not >>> get right result. >>> Both patched and un-patched Postgres-XC get >>> same result, so I think my process is bad. >>> >>> I run a gtm, a gtm_proxy, a coordinator, a datanode >>> in one server(CentOS6.4 x86_64 on VMWare), >>> and hit "make installcheck". >>> The configure option is [--enable-debug CFLAGS=""]. >>> >>> What is right way to run the regression test? >>> >>> output is below. >>> --------------------------------------------------------- >>> test tablespace ... ok >>> test boolean ... ok >>> test char ... ok >>> test name ... ok >>> test varchar ... ok >>> test text ... ok >>> test int2 ... ok >>> test int4 ... ok >>> test int8 ... ok >>> test oid ... ok >>> test float4 ... ok >>> test float8 ... ok >>> test bit ... ok >>> test numeric ... ok >>> test txid ... ok >>> test uuid ... ok >>> test enum ... FAILED >>> test money ... ok >>> test rangetypes ... ok >>> test strings ... ok >>> test numerology ... ok >>> test point ... ok >>> test lseg ... ok >>> test box ... ok >>> test path ... ok >>> test polygon ... ok >>> test circle ... ok >>> test date ... ok >>> test time ... ok >>> test timetz ... ok >>> test timestamp ... ok >>> test timestamptz ... ok >>> test interval ... ok >>> test abstime ... ok >>> test reltime ... ok >>> test tinterval ... ok >>> test inet ... ok >>> test macaddr ... ok >>> test tstypes ... 
ok >>> test comments ... ok >>> test geometry ... ok >>> test horology ... ok >>> test regex ... ok >>> test oidjoins ... ok >>> test type_sanity ... ok >>> test opr_sanity ... ok >>> test insert ... ok >>> test create_function_1 ... ok >>> test create_type ... ok >>> test create_table ... ok >>> test create_function_2 ... ok >>> test create_function_3 ... ok >>> test copy ... ok >>> test copyselect ... ok >>> test create_misc ... ok >>> test create_operator ... ok >>> test create_index ... FAILED >>> test create_view ... ok >>> test create_aggregate ... ok >>> test create_cast ... ok >>> test constraints ... FAILED >>> test triggers ... ok >>> test inherit ... FAILED >>> test create_table_like ... ok >>> test typed_table ... ok >>> test vacuum ... ok >>> test drop_if_exists ... ok >>> test sanity_check ... ok >>> test errors ... ok >>> test select ... ok >>> test select_into ... ok >>> test select_distinct ... ok >>> test select_distinct_on ... ok >>> test select_implicit ... ok >>> test select_having ... ok >>> test subselect ... ok >>> test union ... ok >>> test case ... ok >>> test join ... FAILED >>> test aggregates ... FAILED >>> test transactions ... ok >>> test random ... ok >>> test portals ... ok >>> test arrays ... FAILED >>> test btree_index ... ok >>> test hash_index ... ok >>> test update ... ok >>> test delete ... ok >>> test namespace ... ok >>> test prepared_xacts ... ok >>> test privileges ... FAILED >>> test security_label ... ok >>> test collate ... FAILED >>> test misc ... ok >>> test rules ... ok >>> test select_views ... ok >>> test portals_p2 ... ok >>> test foreign_key ... ok >>> test cluster ... ok >>> test dependency ... ok >>> test guc ... ok >>> test bitmapops ... ok >>> test combocid ... ok >>> test tsearch ... ok >>> test tsdicts ... ok >>> test foreign_data ... ok >>> test window ... ok >>> test xmlmap ... FAILED (test process exited with exit >>> code 2) >>> test functional_deps ... 
FAILED (test process exited with exit >>> code 2) >>> test advisory_lock ... FAILED (test process exited with exit >>> code 2) >>> test json ... FAILED (test process exited with exit >>> code 2) >>> test plancache ... FAILED (test process exited with exit >>> code 2) >>> test limit ... FAILED (test process exited with exit >>> code 2) >>> test plpgsql ... FAILED (test process exited with exit >>> code 2) >>> test copy2 ... FAILED (test process exited with exit >>> code 2) >>> test temp ... FAILED (test process exited with exit >>> code 2) >>> test domain ... FAILED (test process exited with exit >>> code 2) >>> test rangefuncs ... FAILED (test process exited with exit >>> code 2) >>> test prepare ... FAILED (test process exited with exit >>> code 2) >>> test without_oid ... FAILED (test process exited with exit >>> code 2) >>> test conversion ... FAILED (test process exited with exit >>> code 2) >>> test truncate ... FAILED (test process exited with exit >>> code 2) >>> test alter_table ... FAILED (test process exited with exit >>> code 2) >>> test sequence ... FAILED (test process exited with exit >>> code 2) >>> test polymorphism ... FAILED (test process exited with exit >>> code 2) >>> test rowtypes ... FAILED (test process exited with exit >>> code 2) >>> test returning ... FAILED (test process exited with exit >>> code 2) >>> test largeobject ... FAILED (test process exited with exit >>> code 2) >>> test with ... FAILED (test process exited with exit >>> code 2) >>> test xml ... FAILED (test process exited with exit >>> code 2) >>> test stats ... FAILED (test process exited with exit >>> code 2) >>> test xc_create_function ... FAILED (test process exited with exit >>> code 2) >>> test xc_groupby ... FAILED (test process exited with exit >>> code 2) >>> test xc_distkey ... FAILED (test process exited with exit >>> code 2) >>> test xc_having ... FAILED (test process exited with exit >>> code 2) >>> test xc_temp ... 
FAILED (test process exited with exit >>> code 2) >>> test xc_remote ... FAILED (test process exited with exit >>> code 2) >>> test xc_node ... FAILED (test process exited with exit >>> code 2) >>> test xc_FQS ... FAILED (test process exited with exit >>> code 2) >>> test xc_FQS_join ... FAILED (test process exited with exit >>> code 2) >>> test xc_misc ... FAILED (test process exited with exit >>> code 2) >>> test xc_triggers ... FAILED (test process exited with exit >>> code 2) >>> test xc_trigship ... FAILED (test process exited with exit >>> code 2) >>> test xc_constraints ... FAILED (test process exited with exit >>> code 2) >>> test xc_copy ... FAILED (test process exited with exit >>> code 2) >>> test xc_alter_table ... FAILED (test process exited with exit >>> code 2) >>> test xc_sequence ... FAILED (test process exited with exit >>> code 2) >>> test xc_prepared_xacts ... FAILED (test process exited with exit >>> code 2) >>> test xc_notrans_block ... FAILED (test process exited with exit >>> code 2) >>> test xc_limit ... FAILED (test process exited with exit >>> code 2) >>> test xc_sort ... FAILED (test process exited with exit >>> code 2) >>> test xc_returning ... FAILED (test process exited with exit >>> code 2) >>> test xc_params ... FAILED (test process exited with exit >>> code 2) >>> ========================= >>> 55 of 153 tests failed. >>> ========================= >>> >>> >>> 2013/7/1 Tomonari Katsumata <kat...@po...> <kat...@po...> >>> >>> Hi Ashutosh, >>> >>> OK, I'll try regression test. >>> Please wait for the result. >>> >>> regards, >>> ------------ >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/07/01 17:06), Ashutosh Bapat wrote: >>> >>> Hi Tomonori, >>> >>> >>> >>> On Mon, Jul 1, 2013 at 1:33 PM, Tomonari Katsumata <kat...@po...> wrote: >>> >>> >>> Hi Ashutosh and all, >>> >>> Sorry for late response. >>> I made a patch for resolving the problem I mentioned before. 
>>> >>> I thought the reason of this problem is parsing query twice. >>> because of this, the rtable is made from same Lists and become >>> cycliced List. >>> I fixed this problem with making former List empty. >>> >>> I'm not sure this fix leads any anothre problems but >>> the problem query I mentioned before works fine. >>> >>> This patch is against for >>> >>> "**a074cac9b6b507e6d4b58c5004673f**6cc65fcde1". >>> >>> You can check the robustness of patch by running regression. Please let >>> >>> me >>> >>> know what you see. >>> >>> >>> >>> regards, >>> ------------------ >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> >>> (2013/06/17 18:53), Ashutosh Bapat wrote: >>> >>> >>> Hi Tomonari, >>> In which function have you taken this debug info? What is list1 and >>> >>> list2? >>> >>> On Mon, Jun 17, 2013 at 10:13 AM, Tomonari Katsumata <kat...@po....**jp <kat...@po... >>> >>> wrote: >>> >>> Hi Ashtosh, >>> >>> Sorry for slow response. >>> >>> I've watched the each lists at list_concat function. >>> >>> This function is called several times, but the lists before >>> last infinitely roop are like below. 
>>> >>> [list1] >>> (gdb) p *list1->head >>> $18 = {data = {ptr_value = 0x17030e8, int_value = 24129768, >>> oid_value = 24129768}, next = 0x170d418} >>> (gdb) p *list1->head->next >>> $19 = {data = {ptr_value = 0x17033d0, int_value = 24130512, >>> oid_value = 24130512}, next = 0x170fd40} >>> (gdb) p *list1->head->next->next >>> $20 = {data = {ptr_value = 0x170ae58, int_value = 24161880, >>> oid_value = 24161880}, next = 0x171e6c8} >>> (gdb) p *list1->head->next->next->next >>> $21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680, >>> oid_value = 24128680}, next = 0x171ed28} >>> (gdb) p *list1->head->next->next->**next->next >>> $22 = {data = {ptr_value = 0x170af68, int_value = 24162152, >>> oid_value = 24162152}, next = 0x171f3a0} >>> (gdb) p *list1->head->next->next->**next->next->next >>> $23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472, >>> oid_value = 24162472}, next = 0x170b7c0} >>> (gdb) p *list1->head->next->next->**next->next->next->next >>> $24 = {data = {ptr_value = 0x17035f0, int_value = 24131056, >>> oid_value = 24131056}, next = 0x1720998} >>> ---- from --- >>> (gdb) p *list1->head->next->next->**next->next->next->next->next >>> $25 = {data = {ptr_value = 0x17209b8, int_value = 24250808, >>> oid_value = 24250808}, next = 0x1721190} >>> (gdb) p >>> >>> *list1->head->next->next->**next->next->next->next->next->**next >>> >>> $26 = {data = {ptr_value = 0x17211b0, int_value = 24252848, >>> oid_value = 24252848}, next = 0x1721988} >>> (gdb) p *list1->head->next->next->**next->next->next->next->next->** >>> next->next >>> $27 = {data = {ptr_value = 0x17219a8, int_value = 24254888, >>> oid_value = 24254888}, next = 0x1722018} >>> (gdb) p >>> *list1->head->next->next->**next->next->next->next->next->** >>> next->next->next >>> $28 = {data = {ptr_value = 0x1722038, int_value = 24256568, >>> oid_value = 24256568}, next = 0x1722820} >>> (gdb) p >>> >>> *list1->head->next->next->**next->next->next->next->next->** >>> next->next->next->next >>> $29 
= {data = {ptr_value = 0x1722840, int_value = 24258624, >>> oid_value = 24258624}, next = 0x0} >>> ---- to ---- >>> >>> [list2] >>> (gdb) p *list2->head >>> $31 = {data = {ptr_value = 0x17209b8, int_value = 24250808, >>> oid_value = 24250808}, next = 0x1721190} >>> (gdb) p *list2->head->next >>> $32 = {data = {ptr_value = 0x17211b0, int_value = 24252848, >>> oid_value = 24252848}, next = 0x1721988} >>> (gdb) p *list2->head->next->next >>> $33 = {data = {ptr_value = 0x17219a8, int_value = 24254888, >>> oid_value = 24254888}, next = 0x1722018} >>> (gdb) p *list2->head->next->next->next >>> $34 = {data = {ptr_value = 0x1722038, int_value = 24256568, >>> oid_value = 24256568}, next = 0x1722820} >>> (gdb) p *list2->head->next->next->**next->next >>> $35 = {data = {ptr_value = 0x1722840, int_value = 24258624, >>> oid_value = 24258624}, next = 0x0} >>> ---- >>> >>> list1's last five elements are same with list2's all elements. >>> (in above example, between "from" and "to" in list1 equal all of >>> >>> list2) >>> >>> This is cause of infinitely roop, but I can not >>> watch deeper. >>> Because some values from gdb are optimized and un-displayed. >>> I tried compile with CFLAGS=O0, but I can't. >>> >>> What can I do more ? >>> >>> regards, >>> ------------------ >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/06/12 21:04), Ashutosh Bapat wrote: >>> > Hi Tomonari, >>> > Can you please check the list's sanity before calling >>> pgxc_collect_RTE() >>> > and at every point in the minions of this function. My primary >>> suspect >>> is >>> > the line pgxcplan.c:3094. We should copy the list before >>> concatenating it. >>> > >>> > >>> > On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata < >>> > kat...@po....**jp< >>> >>> kat...@po...>> >>> >>> wrote: >>> > >>> >> Hi Ashutosh, >>> >> >>> >> Thank you for the response. >>> >> >>> >> (2013/06/12 14:43), Ashutosh Bapat wrote: >>> >> >> Hi, >>> >> >> > >>> >> >> > I've investigated this problem(BUG:3614369). 
>>> >> >> > >>> >> >> > I caught the cause of it, but I can not >>> >> >> > find where to fix. >>> >> >> > >>> >> >> > The problem occurs when "pgxc_collect_RTE_walker" is >>> >>> called >>> >>> >> infinitely. >>> >> >> > It seems that rtable(List of RangeTable) become cyclic >>> >>> List. >>> >>> >> >> > I'm not sure where the List is made. >>> >> >> > >>> >> >> > >>> >> > I guess, we are talking about EXECUTE DIRECT statement that >>> >>> you >>> >>> have >>> >> > mentioned earlier. >>> >> >>> >> Yes, that's right. >>> >> I'm talking about EXECUTE DIRECT statement like below. >>> >> --- >>> >> EXECUTE DIRECT ON (data1) $$ >>> >> SELECT >>> >> count(*) >>> >> FROM >>> >> (SELECT * FROM pg_locks l LEFT JOIN >>> >> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> >> $$ >>> >> --- >>> >> >>> >> > The function pgxc_collect_RTE_walker() is a recursive >>> >> > function. The condition to end the recursion is if the given >>> node is >>> >> NULL. >>> >> > We have to look at if that condition is met and if not why. >>> >> > >>> >> I investigated it deeper, and I noticed that >>> >> the infinitely loop happens at the function >>> >>> "range_table_walker()". >>> >>> >> >>> >> Please see below trace. 
>>> >> =========================== >>> >> Breakpoint 1, range_table_walker (rtable=0x15e7968, >>> >>> walker=0x612c70 >>> >>> >> <pgxc_collect_RTE_walker>, context=0x7fffd2de31c0, >>> >> flags=0) at nodeFuncs.c:1908 >>> >> 1908 in nodeFuncs.c >>> >> >>> >> (gdb) p *rtable >>> >> $10 = {type = T_List, length = 5, head = 0x15e7998, tail = >>> 0x15e9820} >>> >> (gdb) p *rtable->head >>> >> $11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760, >>> oid_value = >>> >> 22968760}, next = 0x15e8190} >>> >> (gdb) p *rtable->head->next >>> >> $12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800, >>> oid_value = >>> >> 22970800}, next = 0x15e8988} >>> >> (gdb) p *rtable->head->next->next >>> >> $13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840, >>> oid_value = >>> >> 22972840}, next = 0x15e9018} >>> >> (gdb) p *rtable->head->next->next->**next >>> >> $14 = {data = {ptr_value = 0x15e9038, int_value = 22974520, >>> oid_value = >>> >> 22974520}, next = 0x15e9820} >>> >> (gdb) p *rtable->head->next->next->**next->next >>> >> $15 = {data = {ptr_value = 0x15e9840, int_value = 22976576, >>> oid_value = >>> >> 22976576}, next = 0x15e7998} >>> >> =========================== >>> >> >>> >> The line which starts with "$15" has 0x15e7998 as its next >>> >>> data. >>> >>> >> But it is the head pointer(see the line which starts with $10). >>> >> >>> >> And in range_table_walker(), the function is called >>> >>> recursively. >>> >>> >> -------- >>> >> ... >>> >> if (!(flags & QTW_IGNORE_RANGE_TABLE)) >>> >> { >>> >> if (range_table_walker(query->**rtable, >>> >>> walker, >>> >>> context, >>> >> flags)) >>> >> return true; >>> >> } >>> >> ... >>> >> -------- >>> >> >>> >> We should make rtable right or deal with "flags" properly. >>> >> But I can't find where to do it... >>> >> >>> >> What do you think ? 
>>> >> >>> >> regards, >>> >> --------- >>> >> NTT Software Corporation >>> >> Tomonari Katsumata >>> >> >>> >> >>> >> >>> >> >>> >> >>> >>> ------------------------------**------------------------------** >>> ------------------ >>> >> This SF.net email is sponsored by Windows: >>> >> >>> >> Build for Windows Store. >>> >> >>> >> http://p.sf.net/sfu/windows-**dev2dev< >>> >>> http://p.sf.net/sfu/windows-dev2dev> >>> >>> >> ______________________________**_________________ >>> >> Postgres-xc-developers mailing list >>> >> Postgres-xc-developers@lists.**sourceforge.net< >>> >>> Pos...@li...> >>> >>> >> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-** >>> developers< >>> >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> >>> >>> >> >>> > >>> > >>> > >>> >>> >>> >>> >>> ------------------------------**------------------------------** >>> ------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> http://p.sf.net/sfu/windows-**dev2dev< >>> >>> http://p.sf.net/sfu/windows-dev2dev> >>> >>> ______________________________**_________________ >>> Postgres-xc-developers mailing listPostgres-xc-developers@lists.**sourceforge.net< >>> >>> Pos...@li...> >>> >>> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-**developers< >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> <https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> >>> >>> ------------------------------------------------------------------------------ >>> >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. 
>>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> http://p.sf.net/sfu/windows-dev2dev >>> >>> >>> >>> _______________________________________________ >>> Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> >> >> ------------------------------------------------------------------------------ >> See everything from the browser to the database with AppDynamics >> Get end-to-end visibility with application monitoring from AppDynamics >> Isolate bottlenecks and diagnose root cause in seconds. >> Start your free trial of AppDynamics Pro today! >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >> >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
From: Ahsan H. <ahs...@en...> - 2013-07-19 06:45:42
|
Hi Pavan, Can you review the patch from Ashutosh and see if that addresses the issue you have raised. -- Ahsan On Thu, Jul 18, 2013 at 1:01 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi, > There was a small mistake in calling pgxc_is_join_reducible(), which > checks whether a given join is reducible or not; the arguments were passed > in wrong order. Thus a LEFT_JOIN turned into INNER_JOIN. Attached is the > patch to fix the issue. The patch is based on master, and should be > backpatched to stable branch. > > > On Thu, Jul 11, 2013 at 2:18 PM, Pavan Deolasee <pav...@gm...>wrote: > >> Hello All, >> >> The following case shows a bug in the FQS logic. I am using fairly recent >> master branch (may be a few days old). So if this has been fixed recently, >> please let me know: >> >> test=# CREATE TABLE ltab_repl (a int, b char(10)) DISTRIBUTE BY >> REPLICATION; >> CREATE TABLE >> test=# CREATE TABLE rtab (ar int, br char(10)); >> CREATE TABLE >> test=# INSERT INTO ltab_repl SELECT generate_series(1,5), 'foo'; >> INSERT 0 5 >> test=# INSERT INTO rtab SELECT generate_series(1,4), 'bar'; >> INSERT 0 4 >> test=# SELECT * FROM ltab_repl ; >> a | b >> ---+------------ >> 1 | foo >> 2 | foo >> 3 | foo >> 4 | foo >> 5 | foo >> (5 rows) >> >> test=# SELECT * FROM rtab; >> ar | br >> ----+------------ >> 1 | bar >> 2 | bar >> 3 | bar >> 4 | bar >> (4 rows) >> >> >> test=# set enable_fast_query_shipping TO off; >> SET >> test=# SELECT * FROM ltab_repl LEFT JOIN rtab ON (a = ar); >> a | b | ar | br >> ---+------------+----+------------ >> 1 | foo | 1 | bar >> 2 | foo | 2 | bar >> 3 | foo | 3 | bar >> 4 | foo | 4 | bar >> 5 | foo | | >> (5 rows) >> >> test=# set enable_fast_query_shipping TO on; >> SET >> test=# SELECT * FROM ltab_repl LEFT JOIN rtab ON (a = ar); >> a | b | ar | br >> ---+------------+----+------------ >> 1 | foo | 1 | bar >> 2 | foo | 2 | bar >> 3 | foo | | >> 4 | foo | | >> 5 | foo | | >> 1 | foo | | >> 2 | foo | | >> 3 | foo | 3 | bar >> 4 | foo | 4 | bar >> 5 
| foo | | >> (10 rows) >> >> test=# EXPLAIN VERBOSE SELECT * FROM ltab_repl LEFT JOIN rtab ON (a = ar); >> QUERY >> PLAN >> >> ---------------------------------------------------------------------------------------------------------------------------------------------- >> Data Node Scan on "__REMOTE_FQS_QUERY__" (cost=0.00..0.00 rows=0 >> width=0) >> Output: ltab_repl.a, ltab_repl.b, rtab.ar, rtab.br >> Node/s: d1, d2 >> Remote query: SELECT ltab_repl.a, ltab_repl.b, rtab.ar, rtab.br FROM >> (public.ltab_repl LEFT JOIN public.rtab ON ((ltab_repl.a = rtab.ar))) >> (4 rows) >> >> >> As you would see the query is giving a wrong result when FQS is turned >> ON. I don't think its correct to push down the query as it is to the >> datanodes. What's happening really is that each datanode is returning 5 >> rows each (since ltab_repl is a replicated table, contains 5 rows and is at >> the left side of the left join) and they are being appended together. >> >> I think I'd suggested in the past to run regression by turning these >> various optimization GUCs on/off and comparing the results. While we might >> see some row ordering issues when these GUCs are turned on/off, the final >> result should remain the same. Such an exercise will help to uncover such >> bugs. >> >> Thanks, >> Pavan >> >> -- >> Pavan Deolasee >> http://www.linkedin.com/in/pavandeolasee >> >> >> ------------------------------------------------------------------------------ >> See everything from the browser to the database with AppDynamics >> Get end-to-end visibility with application monitoring from AppDynamics >> Isolate bottlenecks and diagnose root cause in seconds. >> Start your free trial of AppDynamics Pro today! >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... 
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Ahsan Hadi Snr Director Product Development EnterpriseDB Corporation The Enterprise Postgres Company Phone: +92-51-8358874 Mobile: +92-333-5162114 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
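Pavan's regression idea above amounts to running the same query with the GUC toggled and comparing sorted output. A minimal sketch using the tables from his report (the ORDER BY is added here only to make the comparison deterministic):

```sql
-- Run the reported join with fast query shipping off and on;
-- the two sorted result sets should be identical.
SET enable_fast_query_shipping TO off;
SELECT * FROM ltab_repl LEFT JOIN rtab ON (a = ar) ORDER BY a, ar;

SET enable_fast_query_shipping TO on;
SELECT * FROM ltab_repl LEFT JOIN rtab ON (a = ar) ORDER BY a, ar;
```

Automating this comparison across the other optimization GUCs in the regression suite would surface duplicated-row bugs like the one reported here.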
From: Koichi S. <koi...@gm...> - 2013-07-18 05:15:02
|
Hi, In terms of GTM, you need the gtm.control file of the master cluster to start the slave cluster as another master. For this purpose, you have two options. 1. Configure a GTM slave. When you start the slave cluster as a master, configure the coordinators/datanodes with the GTM slave, or 2. Copy the original GTM's gtm.control to the new GTM's working directory when you start the slave coordinators/datanodes as masters. You need to configure these slaves with the new GTM. ---------- Koichi Suzuki 2013/7/17 Ashutosh Bapat <ash...@en...> > Hi Afonso, > I am including the hackers mailing list for the best answer. > > AFAIU, you need to replicate your database between two Postgres-XC > clusters? OR one Postgres-XC cluster and another PG instance? Is that right? > > > You may > 1. Set up 1-to-1 streaming replication between all the nodes of Postgres-XC. > I am not sure about GTM to GTM, but they may not be required. Right now > there is no single-channel streaming replication supported by XC, since the > WALs are not available at a central point. > > 2. Use trigger-based replication between the two Postgres-XC instances. > That might be costly. > > > On Wed, Jul 17, 2013 at 4:23 PM, Afonso Bione <aag...@gm...> wrote: > >> Dear friend, >> >> I installed Moodle using the master GTM and I would like to >> replicate the database on another machine using a Coordinator, but when I >> create the database on another machine, I get an error message. >> Could you send me an example of how to use Postgres-XC with the same >> database on several computers with one GTM and Coordinator? >> Thanks again >> > > > > -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Postgres Database Company > > > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds.
> Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
From: Ashutosh B. <ash...@en...> - 2013-07-17 11:11:50
|
Hi Afonso, I am including the hackers mailing list for the best answer. AFAIU, you need to replicate your database between two Postgres-XC clusters? OR one Postgres-XC cluster and another PG instance? Is that right? You may 1. Set up 1-to-1 streaming replication between all the nodes of Postgres-XC. I am not sure about GTM to GTM, but they may not be required. Right now there is no single-channel streaming replication supported by XC, since the WALs are not available at a central point. 2. Use trigger-based replication between the two Postgres-XC instances. That might be costly. On Wed, Jul 17, 2013 at 4:23 PM, Afonso Bione <aag...@gm...> wrote: > Dear friend, > > I installed Moodle using the master GTM and I would like to replicate the > database on another machine using a Coordinator, but when I create the > database on another machine, I get an error message. > Could you send me an example of how to use Postgres-XC with the same > database on several computers with one GTM and Coordinator? > Thanks again > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
From: Michael P. <mic...@gm...> - 2013-07-17 06:35:44
|
On Wed, Jul 17, 2013 at 1:58 PM, 鈴木 幸市 <ko...@in...> wrote: > I think this change is for the help message and is okay to commit. Any other > comments? No more comments. This too should be backpatched. -- Michael |
From: Michael P. <mic...@gm...> - 2013-07-17 06:33:59
|
On Wed, Jul 17, 2013 at 1:59 PM, 鈴木 幸市 <ko...@in...> wrote: > This is a documentation change which describes the current code more closely. > > Any more suggestions? For example, changing the default as described in the > current doc? Nope. This patch is correct and should be backpatched down to 1.0. -- Michael |
From: 鈴木 幸市 <ko...@in...> - 2013-07-17 04:59:46
|
This is a documentation change which describes the current code more closely. Any more suggestions? For example, changing the default as described in the current doc? Regards; --- Koichi Suzuki On 2013/07/17, at 11:16, Masataka Saito <pg...@gm...> wrote: > Hello all. > > I found that creating nodes without the PRIMARY and PREFERRED parameters does not work as the documentation describes. > > The documentation says the default value of PRIMARY and PREFERRED is true. > But omitting them from CREATE NODE creates a non-primary / non-preferred node. > > Documentation: > > PRIMARY > > A boolean value can be specified. In case no value is specified, PRIMARY acts like true. > > PREFERRED > > A boolean value can be specified. In case no value is specified, PREFERRED acts like true. > > cx=# create node data03 WITH (host='localhost',port=5444, type=datanode); > CREATE NODE > cx=# select * from pg_catalog.pgxc_node ; > node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id > -----------+-----------+-----------+-----------+----------------+------------------+------------- > coord01 | C | 5432 | localhost | f | f | -951114102 > data03 | D | 5444 | localhost | f | f | 1867970210 > (2 rows) > > I suspect these are documentation bugs, because specifying true for PRIMARY/PREFERRED marks the node as special, and I think the default value should be false. > > I wrote a patch for the documentation and attached it to this mail. > > Regards. > <create_node_doc_fix.patch>------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today!
> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
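Given the actual behavior described above, a node has to be marked primary/preferred explicitly. A minimal sketch (the node name data04 and port 5445 are assumptions for illustration):

```sql
-- A bare PRIMARY or PREFERRED option acts like "= true".
CREATE NODE data04 WITH (host = 'localhost', port = 5445,
                         type = 'datanode', PRIMARY, PREFERRED);

-- Verify the flags were set:
SELECT node_name, nodeis_primary, nodeis_preferred
  FROM pg_catalog.pgxc_node;
```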
From: 鈴木 幸市 <ko...@in...> - 2013-07-17 04:58:33
|
I think this change is for the help message and is okay to commit. Any other comments? Regards; --- Koichi Suzuki On 2013/07/17, at 11:06, Masataka Saito <pg...@gm...> wrote: > Hello all. > > I found that initdb's handling of a second argument has some problems, and wrote a patch to fix it. > > First, the documentation and the usage output conflict. The documentation says initdb takes only one argument, but the usage says the second argument may be NODENAME. > > -------- > Documentation's synopsis: > initdb [option...] [--pgdata | -D | --nodename nodename] directory > > $ initdb --help > initdb initializes a PostgreSQL database cluster. > > Usage: > initdb [OPTION]... [DATADIR] [NODENAME] > (snip) > -------- > > Second, giving two arguments to initdb produces an error, and the error message is also wrong (it treats the third argument, not the second, as the first extra one, so with only two arguments it prints "(null)".) > > -------- > $ initdb datadir nodename > initdb: too many command-line arguments (first is "(null)") > Try "initdb --help" for more information. > > $ initdb datadir nodename third > initdb: too many command-line arguments (first is "third") > Try "initdb --help" for more information. > -------- > > Judging from the descriptions of the -D and --nodename options in the help message, I suspect the bugs are in the usage string and in the error message. > > Regards. > <initdb_argument.patch>------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Michael P. <mic...@gm...> - 2013-07-16 23:30:29
|
On Tue, Jul 16, 2013 at 9:28 PM, Hans-Jürgen Schönig <pos...@cy...> wrote: > hello … > > there is a little issue in the image on the site: > http://postgres-xc.sourceforge.net > > it should actually say "coordinator" instead of "cordinator". Yes, the mistake is "cordinators" on the picture. It has been here for ages... -- Michael |
From: Nikhil S. <ni...@st...> - 2013-07-16 15:37:16
|
> OK so all i need is to use pg_dumpall in psql in any of the coordinators > (this will store my DDL in the file). > When I want to restore it is just like to connect to anyy of the > coordinator, execute psql and load the DDL file using -f option or \i in > interactive mode? > > Yeah. That should work. Regards, Nikhils > > 2013/7/16 Ashutosh Bapat <ash...@en...> > >> Hi Adam, >> You do not need to follow these steps, if you are configuring your >> cluster the first time. You may follow installation steps there. >> >> >> On Mon, Jul 15, 2013 at 5:44 PM, Adam Dec <ada...@gm...> wrote: >> >>> Hi! >>> My topology is like this: >>> I have to machines. On each machine I have 1 master coordinator and 1 >>> master datanode, >>> >>> I have created a backup (only DDL as I read from here: >>> http://postgres-xc.sourceforge.net/docs/1_1_beta/add-node-datanode.html) >>> process, please advice me if this does make any sense to you: >>> >>> 1. Creating backup >>> >>> - Connect to the any of the coordinators (lets say the first one): >>> >>> ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts >>> >>> - Lock the cluster for backup, do not close this session >>> >>> select pgxc_lock_for_backup(); >>> >>> More: >>> http://postgres-xc.sourceforge.net/docs/1_1_beta/functions-admin.html#FUNCTIONS-PGXC-ADD-NEW-NODE >>> >>> - Connect to the any of the coordinators and backup the data: >>> >>> ./pg_dumpall -p 20011 -s --include-nodes --dump-nodes --file= >>> /opt/backup/sts_ddl_backup.sql >>> >>> Only schema (i.e. no data) is to be dumped. Note the use of >>> --include-nodes, so that the CREATE TABLE contains TO NODE clause. >>> >>> Similarly --dump-nodes ensures that the dump does contain existing nodes >>> and node groups. >>> 2. 
Loading backup >>> >>> - Stop all the coordinators: >>> >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord1/ -Z coordinator -l >>> /opt/postgres-xc-1.1/logs/coord1.log -mf & >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord2/ -Z coordinator -l >>> /opt/postgres-xc-1.1/logs/coord2.log -mf & >>> >>> - Stop all the datanodes: >>> >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode1/ -Z datanode -l >>> /opt/postgres-xc-1.1/logs/datanode1.log -mf & >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode2/ -Z datanode -l >>> /opt/postgres-xc-1.1/logs/datanode2.log -mf & >>> >>> - Start the coordinators in restore mode: >>> >>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/coord1/ -l >>> /opt/postgres-xc-1.1/logs/coord1.log & >>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/coord2/ -l >>> /opt/postgres-xc-1.1/logs/coord2.log & >>> >>> - Start the datanodes in restore mode: >>> >>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/datanode1/ -l >>> /opt/postgres-xc-1.1/logs/datanode1.log & >>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/datanode2/ -l >>> /opt/postgres-xc-1.1/logs/datanode2.log & >>> >>> - Connect to the any of the coordinators (lets say the first one): >>> >>> ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts >>> >>> - Restore DDL data: >>> >>> ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts -f >>> /opt/backup/sts_ddl_backup.sql >>> >>> - Stop the coordinators ( this will unlock the cluster -> select >>> pgxc_lock_for_backup(); ): >>> >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord1/ -Z coordinator -l >>> /opt/postgres-xc-1.1/logs/coord1.log -mf & >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord2/ -Z coordinator -l >>> /opt/postgres-xc-1.1/logs/coord2.log -mf & >>> >>> - Stop the datanodes >>> >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode1/ -Z datanode -l >>> /opt/postgres-xc-1.1/logs/datanode1.log -mf & >>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode2/ -Z datanode -l >>> 
/opt/postgres-xc-1.1/logs/datanode2.log -mf & >>> >>> - Start the coordinators: >>> >>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/coord1/ -Z coordinator -l >>> /opt/postgres-xc-1.1/logs/coord1.log & >>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/coord2/ -Z coordinator -l >>> /opt/postgres-xc-1.1/logs/coord2.log & >>> >>> - Start the datanodes: >>> >>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/datanode1/ -Z datanode -l >>> /opt/postgres-xc-1.1/logs/datanode1.log & >>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/datanode2/ -Z datanode -l >>> /opt/postgres-xc-1.1/logs/datanode2.log & >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> See everything from the browser to the database with AppDynamics >>> Get end-to-end visibility with application monitoring from AppDynamics >>> Isolate bottlenecks and diagnose root cause in seconds. >>> Start your free trial of AppDynamics Pro today! >>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> > > > > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- StormDB - http://www.stormdb.com The Database Cloud |
From: Adam D. <ada...@gm...> - 2013-07-16 15:33:55
|
So this some kind of strategy fos situation: Table A join Table B join Table C ? 2013/7/16 Mason Sharp <ma...@st...> > > > > On Tue, Jul 16, 2013 at 11:00 AM, Adam Dec <ada...@gm...> wrote: > >> So the very basic strategy for data distribution (as I understand it >> right now) is to distribute data by hash (table with a lot of writes) using >> primary key? Are there any other strategies/scenarios? >> >> > If you have fairly static tables, use "distribute by replication" for > them. > > For some other scenarios you could do a bit of denormalization and pull in > a foreign key further. For example, for customer-order-lineitem, with > cust_id in customer and order (as a FK), you could pull cust_id down into > lineitem as well and distribute by it. You would have to make sure that the > application always includes cust_id in lineitem in the WHERE clause when > joining with order to ensure that the query gets pushed down to a single > node. > > > > > >> >> 2013/7/16 Mason Sharp <ma...@st...> >> >>> >>> >>> >>> On Tue, Jul 16, 2013 at 9:30 AM, Adam Dec <ada...@gm...> wrote: >>> >>>> Yes A is like a parent and all the others are like a children (joined >>>> with foreign key)...so using parent primary key (id) I will be sure that >>>> the data (that are joined) will all stay at the same node? What about the >>>> situation when table B is also joined (using foreign key) with some other >>>> table? Is that the same? >>>> >>>> >>> If B is distributed on id but also contains column "id2" that is joined >>> with table E, then it can only push down some of the joins. It will end up >>> needing to join on one single coordinator and shipping all of the data >>> there (This is one of the issues that StormDB addresses in our version). >>> >>> >>> >>>> >>>> 2013/7/16 Mason Sharp <ma...@st...> >>>> >>>>> Adam, >>>>> >>>>> Is "id" present in all of A,B,C, and D? Is A the parent and the other >>>>> children and used as a foreign key from B,C, and D to A? 
If so, yes, on the >>>>> surface it sounds like you can do that and be able to take advantage of >>>>> pushing down joins to the local data nodes. >>>>> >>>>> Regards, >>>>> >>>>> Mason >>>>> >>>>> >>>>> On Mon, Jul 15, 2013 at 9:03 AM, Adam Dec <ada...@gm...> wrote: >>>>> >>>>>> Hi! >>>>>> >>>>>> My topology: 2 machines (on each machine 1 master coordinator and 1 >>>>>> master datannode) >>>>>> >>>>>> Lets say that I have a table A which has >>>>>> - one-to-many relation with table B >>>>>> - one-to-many relation to table C >>>>>> - one-to-one relation with table D >>>>>> >>>>>> I would like to distribute it in the cluster. All I have to do is to >>>>>> put >>>>>> DISTRIBUTE BY HASH(id); in each of the tables while creating them? >>>>>> id - primary key >>>>>> >>>>>> In my example Table A is like a root of the graph. How to create such >>>>>> a "graph of tables" to be shure >>>>>> that when I will invoke a select with joins all the proceesin will be >>>>>> done only on the one node. >>>>>> I do not want to replicate all the data. >>>>>> >>>>>> Where can I read about data distribution in Postgres XC? Do you have >>>>>> any examples that I could look at? >>>>>> >>>>>> >>>>>> Regards, >>>>>> >>>>>> Adam Dec >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> See everything from the browser to the database with AppDynamics >>>>>> Get end-to-end visibility with application monitoring from AppDynamics >>>>>> Isolate bottlenecks and diagnose root cause in seconds. >>>>>> Start your free trial of AppDynamics Pro today! >>>>>> >>>>>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>>>>> _______________________________________________ >>>>>> Postgres-xc-developers mailing list >>>>>> Pos...@li... 
>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Mason Sharp >>>>> >>>>> StormDB - http://www.stormdb.com >>>>> The Database Cloud >>>>> Postgres-XC Support and Services >>>>> >>>> >>>> >>> >>> >>> -- >>> Mason Sharp >>> >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> Postgres-XC Support and Services >>> >> >> > > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services > |
From: Mason S. <ma...@st...> - 2013-07-16 15:09:15
|
On Tue, Jul 16, 2013 at 11:00 AM, Adam Dec <ada...@gm...> wrote: > So the very basic strategy for data distribution (as I understand it right > now) is to distribute data by hash (table with a lot of writes) using > primary key? Are there any other strategies/scenarios? > > If you have fairly static tables, use "distribute by replication" for them. For some other scenarios you could do a bit of denormalization and pull in a foreign key further. For example, for customer-order-lineitem, with cust_id in customer and order (as a FK), you could pull cust_id down into lineitem as well and distribute by it. You would have to make sure that the application always includes cust_id in lineitem in the WHERE clause when joining with order to ensure that the query gets pushed down to a single node. > > 2013/7/16 Mason Sharp <ma...@st...> > >> >> >> >> On Tue, Jul 16, 2013 at 9:30 AM, Adam Dec <ada...@gm...> wrote: >> >>> Yes A is like a parent and all the others are like a children (joined >>> with foreign key)...so using parent primary key (id) I will be sure that >>> the data (that are joined) will all stay at the same node? What about the >>> situation when table B is also joined (using foreign key) with some other >>> table? Is that the same? >>> >>> >> If B is distributed on id but also contains column "id2" that is joined >> with table E, then it can only push down some of the joins. It will end up >> needing to join on one single coordinator and shipping all of the data >> there (This is one of the issues that StormDB addresses in our version). >> >> >> >>> >>> 2013/7/16 Mason Sharp <ma...@st...> >>> >>>> Adam, >>>> >>>> Is "id" present in all of A,B,C, and D? Is A the parent and the other >>>> children and used as a foreign key from B,C, and D to A? If so, yes, on the >>>> surface it sounds like you can do that and be able to take advantage of >>>> pushing down joins to the local data nodes. 
>>>> >>>> Regards, >>>> >>>> Mason >>>> >>>> >>>> On Mon, Jul 15, 2013 at 9:03 AM, Adam Dec <ada...@gm...> wrote: >>>> >>>>> Hi! >>>>> >>>>> My topology: 2 machines (on each machine 1 master coordinator and 1 >>>>> master datannode) >>>>> >>>>> Lets say that I have a table A which has >>>>> - one-to-many relation with table B >>>>> - one-to-many relation to table C >>>>> - one-to-one relation with table D >>>>> >>>>> I would like to distribute it in the cluster. All I have to do is to >>>>> put >>>>> DISTRIBUTE BY HASH(id); in each of the tables while creating them? >>>>> id - primary key >>>>> >>>>> In my example Table A is like a root of the graph. How to create such >>>>> a "graph of tables" to be shure >>>>> that when I will invoke a select with joins all the proceesin will be >>>>> done only on the one node. >>>>> I do not want to replicate all the data. >>>>> >>>>> Where can I read about data distribution in Postgres XC? Do you have >>>>> any examples that I could look at? >>>>> >>>>> >>>>> Regards, >>>>> >>>>> Adam Dec >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> See everything from the browser to the database with AppDynamics >>>>> Get end-to-end visibility with application monitoring from AppDynamics >>>>> Isolate bottlenecks and diagnose root cause in seconds. >>>>> Start your free trial of AppDynamics Pro today! >>>>> >>>>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>>>> _______________________________________________ >>>>> Postgres-xc-developers mailing list >>>>> Pos...@li... 
>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>> >>>>> >>>> >>>> >>>> -- >>>> Mason Sharp >>>> >>>> StormDB - http://www.stormdb.com >>>> The Database Cloud >>>> Postgres-XC Support and Services >>>> >>> >>> >> >> >> -- >> Mason Sharp >> >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Services >> > > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services |
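The customer-order-lineitem denormalization Mason describes can be sketched as DDL (table and column names are illustrative assumptions, not from an actual schema):

```sql
-- cust_id is pulled down into lineitem so that all three tables
-- share the same distribution key.
CREATE TABLE customer (cust_id int, name text)
    DISTRIBUTE BY HASH (cust_id);
CREATE TABLE orders (order_id int, cust_id int)
    DISTRIBUTE BY HASH (cust_id);
CREATE TABLE lineitem (item_id int, order_id int, cust_id int)
    DISTRIBUTE BY HASH (cust_id);

-- Including cust_id in the join and WHERE clause keeps all matching
-- rows on one datanode, so the whole query can be pushed down:
SELECT o.order_id, l.item_id
  FROM orders o
  JOIN lineitem l ON l.order_id = o.order_id
                 AND l.cust_id = o.cust_id
 WHERE o.cust_id = 42;
```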
From: Adam D. <ada...@gm...> - 2013-07-16 15:08:57
|
My current setup: 1. Creating backup - Connect to the any of the coordinators (lets say the first one): ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts - Lock the cluster for backup, do not close this session select pgxc_lock_for_backup(); More: http://postgres-xc.sourceforge.net/docs/1_1_beta/functions-admin.html#FUNCTIONS-PGXC-ADD-NEW-NODE - Connect to the any of the coordinators and backup the data: ./pg_dumpall -p 20011 -s --include-nodes --dump-nodes --file= /opt/backup/sts_ddl_backup.sql Only schema (i.e. no data) is to be dumped. Note the use of --include-nodes, so that the CREATE TABLE contains TO NODE clause. Similarly --dump-nodes ensures that the dump does contain existing nodes and node groups. 2. Loading backup - Connect to the any of the coordinators (lets say the first one): ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts - Restore DDL data: ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts -f /opt/backup/sts_ddl_backup.sql 3. Referene More: http://postgres-xc.sourceforge.net/docs/1_1_beta/upgrading.html 2013/7/16 Nikhil Sontakke <ni...@st...> > > OK so all i need is to use pg_dumpall in psql in any of the coordinators >> (this will store my DDL in the file). >> When I want to restore it is just like to connect to anyy of the >> coordinator, execute psql and load the DDL file using -f option or \i in >> interactive mode? >> >> Yeah. That should work. > > Regards, > Nikhils > > >> >> 2013/7/16 Ashutosh Bapat <ash...@en...> >> >>> Hi Adam, >>> You do not need to follow these steps, if you are configuring your >>> cluster the first time. You may follow installation steps there. >>> >>> >>> On Mon, Jul 15, 2013 at 5:44 PM, Adam Dec <ada...@gm...> wrote: >>> >>>> Hi! >>>> My topology is like this: >>>> I have to machines. 
On each machine I have 1 master coordinator and 1 >>>> master datanode, >>>> >>>> I have created a backup (only DDL as I read from here: >>>> http://postgres-xc.sourceforge.net/docs/1_1_beta/add-node-datanode.html) >>>> process, please advice me if this does make any sense to you: >>>> >>>> 1. Creating backup >>>> >>>> - Connect to the any of the coordinators (lets say the first one): >>>> >>>> ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts >>>> >>>> - Lock the cluster for backup, do not close this session >>>> >>>> select pgxc_lock_for_backup(); >>>> >>>> More: >>>> http://postgres-xc.sourceforge.net/docs/1_1_beta/functions-admin.html#FUNCTIONS-PGXC-ADD-NEW-NODE >>>> >>>> - Connect to the any of the coordinators and backup the data: >>>> >>>> ./pg_dumpall -p 20011 -s --include-nodes --dump-nodes --file= >>>> /opt/backup/sts_ddl_backup.sql >>>> >>>> Only schema (i.e. no data) is to be dumped. Note the use of >>>> --include-nodes, so that the CREATE TABLE contains TO NODE clause. >>>> >>>> Similarly --dump-nodes ensures that the dump does contain existing >>>> nodes and node groups. >>>> 2. 
Loading backup >>>> >>>> - Stop all the coordinators: >>>> >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord1/ -Z coordinator -l >>>> /opt/postgres-xc-1.1/logs/coord1.log -mf & >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord2/ -Z coordinator -l >>>> /opt/postgres-xc-1.1/logs/coord2.log -mf & >>>> >>>> - Stop all the datanodes: >>>> >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode1/ -Z datanode -l >>>> /opt/postgres-xc-1.1/logs/datanode1.log -mf & >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode2/ -Z datanode -l >>>> /opt/postgres-xc-1.1/logs/datanode2.log -mf & >>>> >>>> - Start the coordinators in restore mode: >>>> >>>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/coord1/ -l >>>> /opt/postgres-xc-1.1/logs/coord1.log & >>>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/coord2/ -l >>>> /opt/postgres-xc-1.1/logs/coord2.log & >>>> >>>> - Start the datanodes in restore mode: >>>> >>>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/datanode1/ -l >>>> /opt/postgres-xc-1.1/logs/datanode1.log & >>>> ./bin/pg_ctl start -Z restoremode -D /opt/postgres-xc-1.1/datanode2/ -l >>>> /opt/postgres-xc-1.1/logs/datanode2.log & >>>> >>>> - Connect to the any of the coordinators (lets say the first one): >>>> >>>> ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts >>>> >>>> - Restore DDL data: >>>> >>>> ./bin/psql -p 20011 -h 192.168.123.195 -d sts -U sts -f >>>> /opt/backup/sts_ddl_backup.sql >>>> >>>> - Stop the coordinators ( this will unlock the cluster -> select >>>> pgxc_lock_for_backup(); ): >>>> >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord1/ -Z coordinator -l >>>> /opt/postgres-xc-1.1/logs/coord1.log -mf & >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/coord2/ -Z coordinator -l >>>> /opt/postgres-xc-1.1/logs/coord2.log -mf & >>>> >>>> - Stop the datanodes >>>> >>>> ./bin/pg_ctl stop -D /opt/postgres-xc-1.1/datanode1/ -Z datanode -l >>>> /opt/postgres-xc-1.1/logs/datanode1.log -mf & >>>> ./bin/pg_ctl stop -D 
/opt/postgres-xc-1.1/datanode2/ -Z datanode -l >>>> /opt/postgres-xc-1.1/logs/datanode2.log -mf & >>>> >>>> - Start the coordinators: >>>> >>>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/coord1/ -Z coordinator -l >>>> /opt/postgres-xc-1.1/logs/coord1.log & >>>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/coord2/ -Z coordinator -l >>>> /opt/postgres-xc-1.1/logs/coord2.log & >>>> >>>> - Start the datanodes: >>>> >>>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/datanode1/ -Z datanode -l >>>> /opt/postgres-xc-1.1/logs/datanode1.log & >>>> ./bin/pg_ctl start -D /opt/postgres-xc-1.1/datanode2/ -Z datanode -l >>>> /opt/postgres-xc-1.1/logs/datanode2.log & >>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> See everything from the browser to the database with AppDynamics >>>> Get end-to-end visibility with application monitoring from AppDynamics >>>> Isolate bottlenecks and diagnose root cause in seconds. >>>> Start your free trial of AppDynamics Pro today! >>>> >>>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> >> >> >> >> ------------------------------------------------------------------------------ >> See everything from the browser to the database with AppDynamics >> Get end-to-end visibility with application monitoring from AppDynamics >> Isolate bottlenecks and diagnose root cause in seconds. >> Start your free trial of AppDynamics Pro today! >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... 
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > StormDB - http://www.stormdb.com > The Database Cloud |
From: Adam D. <ada...@gm...> - 2013-07-16 15:00:40
|
So the very basic strategy for data distribution (as I understand it right now) is to distribute data by hash (table with a lot of writes) using primary key? Are there any other strategies/scenarios? 2013/7/16 Mason Sharp <ma...@st...> > > > > On Tue, Jul 16, 2013 at 9:30 AM, Adam Dec <ada...@gm...> wrote: > >> Yes A is like a parent and all the others are like a children (joined >> with foreign key)...so using parent primary key (id) I will be sure that >> the data (that are joined) will all stay at the same node? What about the >> situation when table B is also joined (using foreign key) with some other >> table? Is that the same? >> >> > If B is distributed on id but also contains column "id2" that is joined > with table E, then it can only push down some of the joins. It will end up > needing to join on one single coordinator and shipping all of the data > there (This is one of the issues that StormDB addresses in our version). > > > >> >> 2013/7/16 Mason Sharp <ma...@st...> >> >>> Adam, >>> >>> Is "id" present in all of A,B,C, and D? Is A the parent and the other >>> children and used as a foreign key from B,C, and D to A? If so, yes, on the >>> surface it sounds like you can do that and be able to take advantage of >>> pushing down joins to the local data nodes. >>> >>> Regards, >>> >>> Mason >>> >>> >>> On Mon, Jul 15, 2013 at 9:03 AM, Adam Dec <ada...@gm...> wrote: >>> >>>> Hi! >>>> >>>> My topology: 2 machines (on each machine 1 master coordinator and 1 >>>> master datannode) >>>> >>>> Lets say that I have a table A which has >>>> - one-to-many relation with table B >>>> - one-to-many relation to table C >>>> - one-to-one relation with table D >>>> >>>> I would like to distribute it in the cluster. All I have to do is to put >>>> DISTRIBUTE BY HASH(id); in each of the tables while creating them? >>>> id - primary key >>>> >>>> In my example Table A is like a root of the graph. 
How to create such a >>>> "graph of tables" to be shure >>>> that when I will invoke a select with joins all the proceesin will be >>>> done only on the one node. >>>> I do not want to replicate all the data. >>>> >>>> Where can I read about data distribution in Postgres XC? Do you have >>>> any examples that I could look at? >>>> >>>> >>>> Regards, >>>> >>>> Adam Dec >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> See everything from the browser to the database with AppDynamics >>>> Get end-to-end visibility with application monitoring from AppDynamics >>>> Isolate bottlenecks and diagnose root cause in seconds. >>>> Start your free trial of AppDynamics Pro today! >>>> >>>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Mason Sharp >>> >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> Postgres-XC Support and Services >>> >> >> > > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services > |
From: Mason S. <ma...@st...> - 2013-07-16 14:45:51
|
On Tue, Jul 16, 2013 at 9:30 AM, Adam Dec <ada...@gm...> wrote: > Yes A is like a parent and all the others are like a children (joined with > foreign key)...so using parent primary key (id) I will be sure that the > data (that are joined) will all stay at the same node? What about the > situation when table B is also joined (using foreign key) with some other > table? Is that the same? > > If B is distributed on id but also contains column "id2" that is joined with table E, then it can only push down some of the joins. It will end up needing to join on one single coordinator and shipping all of the data there (This is one of the issues that StormDB addresses in our version). > > 2013/7/16 Mason Sharp <ma...@st...> > >> Adam, >> >> Is "id" present in all of A,B,C, and D? Is A the parent and the other >> children and used as a foreign key from B,C, and D to A? If so, yes, on the >> surface it sounds like you can do that and be able to take advantage of >> pushing down joins to the local data nodes. >> >> Regards, >> >> Mason >> >> >> On Mon, Jul 15, 2013 at 9:03 AM, Adam Dec <ada...@gm...> wrote: >> >>> Hi! >>> >>> My topology: 2 machines (on each machine 1 master coordinator and 1 >>> master datannode) >>> >>> Lets say that I have a table A which has >>> - one-to-many relation with table B >>> - one-to-many relation to table C >>> - one-to-one relation with table D >>> >>> I would like to distribute it in the cluster. All I have to do is to put >>> DISTRIBUTE BY HASH(id); in each of the tables while creating them? >>> id - primary key >>> >>> In my example Table A is like a root of the graph. How to create such a >>> "graph of tables" to be shure >>> that when I will invoke a select with joins all the proceesin will be >>> done only on the one node. >>> I do not want to replicate all the data. >>> >>> Where can I read about data distribution in Postgres XC? Do you have any >>> examples that I could look at? 
>>> >>> >>> Regards, >>> >>> Adam Dec >>> >>> >>> ------------------------------------------------------------------------------ >>> See everything from the browser to the database with AppDynamics >>> Get end-to-end visibility with application monitoring from AppDynamics >>> Isolate bottlenecks and diagnose root cause in seconds. >>> Start your free trial of AppDynamics Pro today! >>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Mason Sharp >> >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Services >> > > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services |
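Mason's rule above (a join can ship to the datanodes only when both sides are distributed on the columns being joined) can be sketched as follows. This is a hypothetical helper for illustration, not Postgres-XC planner code; the function name and parameters are assumptions.

```python
# Hypothetical sketch of the shippability rule from this thread, not the
# actual Postgres-XC planner logic.  A two-table join is node-local only
# when both sides are hash-distributed on the columns being joined, so
# matching rows are guaranteed to be colocated on the same datanode.

def join_is_shippable(dist_a, join_col_a, dist_b, join_col_b):
    """dist_* is the column each table is hash-distributed on;
    join_col_* is the column it is joined on.  Returns True when the
    join can run entirely on the datanodes."""
    return dist_a == join_col_a and dist_b == join_col_b

# B distributed on "id", joined to A on "id": pushable to the datanodes.
assert join_is_shippable("id", "id", "id", "id")

# B distributed on "id" but joined to E on "id2": not pushable, so the
# rows must be shipped and joined on a single coordinator, as Mason notes.
assert not join_is_shippable("id", "id2", "id2", "id2")
```

This is why distributing the whole "graph of tables" on the shared parent key keeps all the joins in the graph shippable, while a join on any other column falls back to the coordinator.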
From: Adam D. <ada...@gm...> - 2013-07-16 13:30:53
|
Yes A is like a parent and all the others are like a children (joined with foreign key)...so using parent primary key (id) I will be sure that the data (that are joined) will all stay at the same node? What about the situation when table B is also joined (using foreign key) with some other table? Is that the same? 2013/7/16 Mason Sharp <ma...@st...> > Adam, > > Is "id" present in all of A,B,C, and D? Is A the parent and the other > children and used as a foreign key from B,C, and D to A? If so, yes, on the > surface it sounds like you can do that and be able to take advantage of > pushing down joins to the local data nodes. > > Regards, > > Mason > > > On Mon, Jul 15, 2013 at 9:03 AM, Adam Dec <ada...@gm...> wrote: > >> Hi! >> >> My topology: 2 machines (on each machine 1 master coordinator and 1 >> master datannode) >> >> Lets say that I have a table A which has >> - one-to-many relation with table B >> - one-to-many relation to table C >> - one-to-one relation with table D >> >> I would like to distribute it in the cluster. All I have to do is to put >> DISTRIBUTE BY HASH(id); in each of the tables while creating them? >> id - primary key >> >> In my example Table A is like a root of the graph. How to create such a >> "graph of tables" to be shure >> that when I will invoke a select with joins all the proceesin will be >> done only on the one node. >> I do not want to replicate all the data. >> >> Where can I read about data distribution in Postgres XC? Do you have any >> examples that I could look at? >> >> >> Regards, >> >> Adam Dec >> >> >> ------------------------------------------------------------------------------ >> See everything from the browser to the database with AppDynamics >> Get end-to-end visibility with application monitoring from AppDynamics >> Isolate bottlenecks and diagnose root cause in seconds. >> Start your free trial of AppDynamics Pro today! 
>> >> http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services > |
From: Hans-Jürgen S. <pos...@cy...> - 2013-07-16 12:45:15
|
hello … there is a little issue in the image on the site http://postgres-xc.sourceforge.net: the word "coordinator" is misspelled there and should be corrected. many thanks, hans -- Cybertec Schönig & Schönig GmbH Gröhrmühlgasse 26 A-2700 Wiener Neustadt Web: http://www.postgresql-support.de
From: Mason S. <ma...@st...> - 2013-07-16 12:32:41
|
Adam, Is "id" present in all of A, B, C, and D? Is A the parent, with the others as children that reference A via foreign keys from B, C, and D? If so, yes, on the surface it sounds like you can do that and be able to take advantage of pushing down joins to the local data nodes. Regards, Mason On Mon, Jul 15, 2013 at 9:03 AM, Adam Dec <ada...@gm...> wrote: > Hi! > > My topology: 2 machines (on each machine 1 master coordinator and 1 master > datanode) > > Let's say that I have a table A which has > - a one-to-many relation with table B > - a one-to-many relation with table C > - a one-to-one relation with table D > > I would like to distribute it in the cluster. All I have to do is to put > DISTRIBUTE BY HASH(id); in each of the tables while creating them? > id - primary key > > In my example, table A is like the root of the graph. How do I create such a > "graph of tables" to be sure > that when I invoke a select with joins, all the processing will be done > on only one node? > I do not want to replicate all the data. > > Where can I read about data distribution in Postgres XC? Do you have any > examples that I could look at? > > > Regards, > > Adam Dec > > > ------------------------------------------------------------------------------ > See everything from the browser to the database with AppDynamics > Get end-to-end visibility with application monitoring from AppDynamics > Isolate bottlenecks and diagnose root cause in seconds. > Start your free trial of AppDynamics Pro today! > http://pubads.g.doubleclick.net/gampad/clk?id=48808831&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services
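The colocation property Mason describes can be illustrated with a small sketch. The node count and hash function below are assumptions for illustration only; Postgres-XC's actual hash-to-node mapping differs in detail.

```python
# Illustrative sketch (assumed node count and hash function, not the real
# Postgres-XC routing): rows of tables A and B that share the same "id"
# hash to the same datanode, which is why a join on the distribution
# column can execute entirely node-locally.

NUM_DATANODES = 2

def node_for(key):
    # Placeholder for XC's hash routing: any deterministic hash works
    # for the colocation argument.
    return hash(key) % NUM_DATANODES

a_rows = [{"id": i} for i in range(10)]
b_rows = [{"id": 100 + i, "a_id": i} for i in range(10)]  # FK to A.id

# Every pair of rows joined on A.id = B.a_id lands on the same node,
# so no cross-node data shipping is needed for that join.
assert all(node_for(a["id"]) == node_for(b["a_id"])
           for a, b in zip(a_rows, b_rows))
```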
From: Abbas B. <abb...@en...> - 2013-07-16 06:54:14
|
Hi Tomonari, Can you please continue your work with a cluster configured so that it has at least two datanodes and one coordinator? This way you can avoid the crash in the regression tests and proceed with your work. Thanks Regards On Fri, Jul 5, 2013 at 8:53 AM, Tomonari Katsumata < kat...@po...> wrote: > Hi Ashutosh, Abbas, > > Sorry for the slow response. > But I still cannot run the regression tests. > > >>Ashutosh > > > see if copying query->rtable solves the issue. The changed code would > look > > like > > > > 3112 crte_context->crte_rtable = > > list_concat(crte_context->crte_rtable, > > 3113 > > list_copy(query->rtable)); > > > I've changed the source as you said, and it seems to resolve my problem(*). > (*) the query below fell into an endless loop. > ------------------------------------------------------------------ > > EXECUTE DIRECT ON (data1) $$ > SELECT > count(*) > FROM > (SELECT * FROM pg_locks l LEFT JOIN > (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > $$ > ------------------------------------------------------------------ > > But I still couldn't run the regression tests with this source. > They stopped at the same test case (xmlmap.sql). > > >>Abbas > > > Since you are getting a crash even without your patch, there must be some > > thing wrong in the deployment. > > Can you try doing > > ./configure --enable-debug --enable-cassert CFLAGS='-O0' > > and then see whether it's an assertion failure or not? > > Also, can you see which SQL statement is causing the crash in xmlmap.sql? > > > I've tried the configure options you suggested. > I didn't get any assertion failure, but the regression tests > stopped at "test collate". > > > When Postgres-XC crashed with xmlmap.sql, > I found the SQL statement that is causing the crash: > ---- > DECLARE xc CURSOR WITH HOLD FOR SELECT * FROM testxmlschema.test1 ORDER BY > 1, 2; > ---- > > I think these are separate problems, > because I get the same result with and without the patch applied, > so I would like to discuss them in separate mails.
> > > > regards, > ------------ > NTT Software Corporation > Tomonari Katsumata > > > > > (2013/07/03 12:13), Abbas Butt wrote: > > Since you are getting a crash even without your patch there must be some > thing wrong in the deployment. > Can you try doing > ./configure --enable-debug --enable-cassert CFLAGS='-O0' > and then see whether its an assertion failure or not? > Also can you see which SQL statement is causing the crash in xmlmap.sql? > > On Mon, Jul 1, 2013 at 4:51 PM, Tomonari Katsumata <t.k...@gm...> wrote: > > > Hi, > > I've tried regression test, but I could not > get right result. > Both patched and un-patched Postgres-XC get > same result, so I think my process is bad. > > I run a gtm, a gtm_proxy, a coordinator, a datanode > in one server(CentOS6.4 x86_64 on VMWare), > and hit "make installcheck". > The configure option is [--enable-debug CFLAGS=""]. > > What is right way to run the regression test? > > output is below. > --------------------------------------------------------- > test tablespace ... ok > test boolean ... ok > test char ... ok > test name ... ok > test varchar ... ok > test text ... ok > test int2 ... ok > test int4 ... ok > test int8 ... ok > test oid ... ok > test float4 ... ok > test float8 ... ok > test bit ... ok > test numeric ... ok > test txid ... ok > test uuid ... ok > test enum ... FAILED > test money ... ok > test rangetypes ... ok > test strings ... ok > test numerology ... ok > test point ... ok > test lseg ... ok > test box ... ok > test path ... ok > test polygon ... ok > test circle ... ok > test date ... ok > test time ... ok > test timetz ... ok > test timestamp ... ok > test timestamptz ... ok > test interval ... ok > test abstime ... ok > test reltime ... ok > test tinterval ... ok > test inet ... ok > test macaddr ... ok > test tstypes ... ok > test comments ... ok > test geometry ... ok > test horology ... ok > test regex ... ok > test oidjoins ... ok > test type_sanity ... ok > test opr_sanity ... 
ok > test insert ... ok > test create_function_1 ... ok > test create_type ... ok > test create_table ... ok > test create_function_2 ... ok > test create_function_3 ... ok > test copy ... ok > test copyselect ... ok > test create_misc ... ok > test create_operator ... ok > test create_index ... FAILED > test create_view ... ok > test create_aggregate ... ok > test create_cast ... ok > test constraints ... FAILED > test triggers ... ok > test inherit ... FAILED > test create_table_like ... ok > test typed_table ... ok > test vacuum ... ok > test drop_if_exists ... ok > test sanity_check ... ok > test errors ... ok > test select ... ok > test select_into ... ok > test select_distinct ... ok > test select_distinct_on ... ok > test select_implicit ... ok > test select_having ... ok > test subselect ... ok > test union ... ok > test case ... ok > test join ... FAILED > test aggregates ... FAILED > test transactions ... ok > test random ... ok > test portals ... ok > test arrays ... FAILED > test btree_index ... ok > test hash_index ... ok > test update ... ok > test delete ... ok > test namespace ... ok > test prepared_xacts ... ok > test privileges ... FAILED > test security_label ... ok > test collate ... FAILED > test misc ... ok > test rules ... ok > test select_views ... ok > test portals_p2 ... ok > test foreign_key ... ok > test cluster ... ok > test dependency ... ok > test guc ... ok > test bitmapops ... ok > test combocid ... ok > test tsearch ... ok > test tsdicts ... ok > test foreign_data ... ok > test window ... ok > test xmlmap ... FAILED (test process exited with exit > code 2) > test functional_deps ... FAILED (test process exited with exit > code 2) > test advisory_lock ... FAILED (test process exited with exit > code 2) > test json ... FAILED (test process exited with exit > code 2) > test plancache ... FAILED (test process exited with exit > code 2) > test limit ... FAILED (test process exited with exit > code 2) > test plpgsql ... 
FAILED (test process exited with exit > code 2) > test copy2 ... FAILED (test process exited with exit > code 2) > test temp ... FAILED (test process exited with exit > code 2) > test domain ... FAILED (test process exited with exit > code 2) > test rangefuncs ... FAILED (test process exited with exit > code 2) > test prepare ... FAILED (test process exited with exit > code 2) > test without_oid ... FAILED (test process exited with exit > code 2) > test conversion ... FAILED (test process exited with exit > code 2) > test truncate ... FAILED (test process exited with exit > code 2) > test alter_table ... FAILED (test process exited with exit > code 2) > test sequence ... FAILED (test process exited with exit > code 2) > test polymorphism ... FAILED (test process exited with exit > code 2) > test rowtypes ... FAILED (test process exited with exit > code 2) > test returning ... FAILED (test process exited with exit > code 2) > test largeobject ... FAILED (test process exited with exit > code 2) > test with ... FAILED (test process exited with exit > code 2) > test xml ... FAILED (test process exited with exit > code 2) > test stats ... FAILED (test process exited with exit > code 2) > test xc_create_function ... FAILED (test process exited with exit > code 2) > test xc_groupby ... FAILED (test process exited with exit > code 2) > test xc_distkey ... FAILED (test process exited with exit > code 2) > test xc_having ... FAILED (test process exited with exit > code 2) > test xc_temp ... FAILED (test process exited with exit > code 2) > test xc_remote ... FAILED (test process exited with exit > code 2) > test xc_node ... FAILED (test process exited with exit > code 2) > test xc_FQS ... FAILED (test process exited with exit > code 2) > test xc_FQS_join ... FAILED (test process exited with exit > code 2) > test xc_misc ... FAILED (test process exited with exit > code 2) > test xc_triggers ... FAILED (test process exited with exit > code 2) > test xc_trigship ... 
FAILED (test process exited with exit > code 2) > test xc_constraints ... FAILED (test process exited with exit > code 2) > test xc_copy ... FAILED (test process exited with exit > code 2) > test xc_alter_table ... FAILED (test process exited with exit > code 2) > test xc_sequence ... FAILED (test process exited with exit > code 2) > test xc_prepared_xacts ... FAILED (test process exited with exit > code 2) > test xc_notrans_block ... FAILED (test process exited with exit > code 2) > test xc_limit ... FAILED (test process exited with exit > code 2) > test xc_sort ... FAILED (test process exited with exit > code 2) > test xc_returning ... FAILED (test process exited with exit > code 2) > test xc_params ... FAILED (test process exited with exit > code 2) > ========================= > 55 of 153 tests failed. > ========================= > > > 2013/7/1 Tomonari Katsumata <kat...@po...> <kat...@po...> > > Hi Ashutosh, > > OK, I'll try regression test. > Please wait for the result. > > regards, > ------------ > NTT Software Corporation > Tomonari Katsumata > > (2013/07/01 17:06), Ashutosh Bapat wrote: > > Hi Tomonori, > > > > On Mon, Jul 1, 2013 at 1:33 PM, Tomonari Katsumata <kat...@po...> wrote: > > > Hi Ashutosh and all, > > Sorry for late response. > I made a patch for resolving the problem I mentioned before. > > I thought the reason of this problem is parsing query twice. > because of this, the rtable is made from same Lists and become > cycliced List. > I fixed this problem with making former List empty. > > I'm not sure this fix leads any anothre problems but > the problem query I mentioned before works fine. > > This patch is against for > > "**a074cac9b6b507e6d4b58c5004673f**6cc65fcde1". > > You can check the robustness of patch by running regression. Please let > > me > > know what you see. 
> > > > regards, > ------------------ > NTT Software Corporation > Tomonari Katsumata > > > (2013/06/17 18:53), Ashutosh Bapat wrote: > > > Hi Tomonari, > In which function have you taken this debug info? What is list1 and > > list2? > > On Mon, Jun 17, 2013 at 10:13 AM, Tomonari Katsumata <kat...@po....**jp <kat...@po... > > wrote: > > Hi Ashtosh, > > Sorry for slow response. > > I've watched the each lists at list_concat function. > > This function is called several times, but the lists before > last infinitely roop are like below. > > [list1] > (gdb) p *list1->head > $18 = {data = {ptr_value = 0x17030e8, int_value = 24129768, > oid_value = 24129768}, next = 0x170d418} > (gdb) p *list1->head->next > $19 = {data = {ptr_value = 0x17033d0, int_value = 24130512, > oid_value = 24130512}, next = 0x170fd40} > (gdb) p *list1->head->next->next > $20 = {data = {ptr_value = 0x170ae58, int_value = 24161880, > oid_value = 24161880}, next = 0x171e6c8} > (gdb) p *list1->head->next->next->next > $21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680, > oid_value = 24128680}, next = 0x171ed28} > (gdb) p *list1->head->next->next->**next->next > $22 = {data = {ptr_value = 0x170af68, int_value = 24162152, > oid_value = 24162152}, next = 0x171f3a0} > (gdb) p *list1->head->next->next->**next->next->next > $23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472, > oid_value = 24162472}, next = 0x170b7c0} > (gdb) p *list1->head->next->next->**next->next->next->next > $24 = {data = {ptr_value = 0x17035f0, int_value = 24131056, > oid_value = 24131056}, next = 0x1720998} > ---- from --- > (gdb) p *list1->head->next->next->**next->next->next->next->next > $25 = {data = {ptr_value = 0x17209b8, int_value = 24250808, > oid_value = 24250808}, next = 0x1721190} > (gdb) p > > *list1->head->next->next->**next->next->next->next->next->**next > > $26 = {data = {ptr_value = 0x17211b0, int_value = 24252848, > oid_value = 24252848}, next = 0x1721988} > (gdb) p 
*list1->head->next->next->**next->next->next->next->next->** > next->next > $27 = {data = {ptr_value = 0x17219a8, int_value = 24254888, > oid_value = 24254888}, next = 0x1722018} > (gdb) p > *list1->head->next->next->**next->next->next->next->next->** > next->next->next > $28 = {data = {ptr_value = 0x1722038, int_value = 24256568, > oid_value = 24256568}, next = 0x1722820} > (gdb) p > > *list1->head->next->next->**next->next->next->next->next->** > next->next->next->next > $29 = {data = {ptr_value = 0x1722840, int_value = 24258624, > oid_value = 24258624}, next = 0x0} > ---- to ---- > > [list2] > (gdb) p *list2->head > $31 = {data = {ptr_value = 0x17209b8, int_value = 24250808, > oid_value = 24250808}, next = 0x1721190} > (gdb) p *list2->head->next > $32 = {data = {ptr_value = 0x17211b0, int_value = 24252848, > oid_value = 24252848}, next = 0x1721988} > (gdb) p *list2->head->next->next > $33 = {data = {ptr_value = 0x17219a8, int_value = 24254888, > oid_value = 24254888}, next = 0x1722018} > (gdb) p *list2->head->next->next->next > $34 = {data = {ptr_value = 0x1722038, int_value = 24256568, > oid_value = 24256568}, next = 0x1722820} > (gdb) p *list2->head->next->next->**next->next > $35 = {data = {ptr_value = 0x1722840, int_value = 24258624, > oid_value = 24258624}, next = 0x0} > ---- > > list1's last five elements are same with list2's all elements. > (in above example, between "from" and "to" in list1 equal all of > > list2) > > This is cause of infinitely roop, but I can not > watch deeper. > Because some values from gdb are optimized and un-displayed. > I tried compile with CFLAGS=O0, but I can't. > > What can I do more ? > > regards, > ------------------ > NTT Software Corporation > Tomonari Katsumata > > (2013/06/12 21:04), Ashutosh Bapat wrote: > > Hi Tomonari, > > Can you please check the list's sanity before calling > pgxc_collect_RTE() > > and at every point in the minions of this function. My primary > suspect > is > > the line pgxcplan.c:3094. 
We should copy the list before > concatenating it. > > > > > > On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata < > > kat...@po....**jp< > > kat...@po...>> > > wrote: > > > >> Hi Ashutosh, > >> > >> Thank you for the response. > >> > >> (2013/06/12 14:43), Ashutosh Bapat wrote: > >> >> Hi, > >> >> > > >> >> > I've investigated this problem(BUG:3614369). > >> >> > > >> >> > I caught the cause of it, but I can not > >> >> > find where to fix. > >> >> > > >> >> > The problem occurs when "pgxc_collect_RTE_walker" is > > called > > >> infinitely. > >> >> > It seems that rtable(List of RangeTable) become cyclic > > List. > > >> >> > I'm not sure where the List is made. > >> >> > > >> >> > > >> > I guess, we are talking about EXECUTE DIRECT statement that > > you > > have > >> > mentioned earlier. > >> > >> Yes, that's right. > >> I'm talking about EXECUTE DIRECT statement like below. > >> --- > >> EXECUTE DIRECT ON (data1) $$ > >> SELECT > >> count(*) > >> FROM > >> (SELECT * FROM pg_locks l LEFT JOIN > >> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > >> $$ > >> --- > >> > >> > The function pgxc_collect_RTE_walker() is a recursive > >> > function. The condition to end the recursion is if the given > node is > >> NULL. > >> > We have to look at if that condition is met and if not why. > >> > > >> I investigated it deeper, and I noticed that > >> the infinitely loop happens at the function > > "range_table_walker()". > > >> > >> Please see below trace. 
> >> =========================== > >> Breakpoint 1, range_table_walker (rtable=0x15e7968, > > walker=0x612c70 > > >> <pgxc_collect_RTE_walker>, context=0x7fffd2de31c0, > >> flags=0) at nodeFuncs.c:1908 > >> 1908 in nodeFuncs.c > >> > >> (gdb) p *rtable > >> $10 = {type = T_List, length = 5, head = 0x15e7998, tail = > 0x15e9820} > >> (gdb) p *rtable->head > >> $11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760, > oid_value = > >> 22968760}, next = 0x15e8190} > >> (gdb) p *rtable->head->next > >> $12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800, > oid_value = > >> 22970800}, next = 0x15e8988} > >> (gdb) p *rtable->head->next->next > >> $13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840, > oid_value = > >> 22972840}, next = 0x15e9018} > >> (gdb) p *rtable->head->next->next->**next > >> $14 = {data = {ptr_value = 0x15e9038, int_value = 22974520, > oid_value = > >> 22974520}, next = 0x15e9820} > >> (gdb) p *rtable->head->next->next->**next->next > >> $15 = {data = {ptr_value = 0x15e9840, int_value = 22976576, > oid_value = > >> 22976576}, next = 0x15e7998} > >> =========================== > >> > >> The line which starts with "$15" has 0x15e7998 as its next > > data. > > >> But it is the head pointer(see the line which starts with $10). > >> > >> And in range_table_walker(), the function is called > > recursively. > > >> -------- > >> ... > >> if (!(flags & QTW_IGNORE_RANGE_TABLE)) > >> { > >> if (range_table_walker(query->**rtable, > > walker, > > context, > >> flags)) > >> return true; > >> } > >> ... > >> -------- > >> > >> We should make rtable right or deal with "flags" properly. > >> But I can't find where to do it... > >> > >> What do you think ? > >> > >> regards, > >> --------- > >> NTT Software Corporation > >> Tomonari Katsumata > >> > >> > >> > >> > >> > > ------------------------------**------------------------------** > ------------------ > >> This SF.net email is sponsored by Windows: > >> > >> Build for Windows Store. 
> >> > >> http://p.sf.net/sfu/windows-**dev2dev< > > http://p.sf.net/sfu/windows-dev2dev> > > >> ______________________________**_________________ > >> Postgres-xc-developers mailing list > >> Postgres-xc-developers@lists.**sourceforge.net< > > Pos...@li...> > > >> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-** > developers< > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> > > >> > > > > > > > > > > > ------------------------------**------------------------------** > ------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > http://p.sf.net/sfu/windows-**dev2dev< > > http://p.sf.net/sfu/windows-dev2dev> > > ______________________________**_________________ > Postgres-xc-developers mailing listPostgres-xc-developers@lists.**sourceforge.net< > > Pos...@li...> > > https://lists.sourceforge.net/**lists/listinfo/postgres-xc-**developers< > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> <https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> > > ------------------------------------------------------------------------------ > > This SF.net email is sponsored by Windows: > > Build for Windows Store. > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. 
> http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > http://p.sf.net/sfu/windows-dev2dev > > > > _______________________________________________ > Postgres-xc-developers mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
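The cyclic-list failure mode debugged in this thread can be modeled outside PostgreSQL. The sketch below uses a toy singly linked list, not pg_list's actual List/ListCell structs; it shows how a destructive concat that relinks tail pointers produces a cycle when the two lists alias (which is what made pgxc_collect_RTE_walker loop forever), and how copying the second list first, as in the proposed list_copy fix, avoids it.

```python
# Toy model of the bug in this thread (plain Python, not pg_list).
# list_concat appends destructively by relinking the tail pointer, so
# concatenating a list with itself (or with a list sharing its cells)
# creates a cycle; a walker over the result never terminates.

class Cell:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def concat(head_a, head_b):
    """Destructively append list b to list a, like list_concat."""
    tail = head_a
    while tail.next is not None:
        tail = tail.next
    tail.next = head_b
    return head_a

def copy_list(head):
    """Rebuild a list with fresh cells, like list_copy."""
    if head is None:
        return None
    return Cell(head.value, copy_list(head.next))

def is_cyclic(head):
    """Floyd's tortoise-and-hare cycle detection."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False

rtable = Cell(1, Cell(2))
assert is_cyclic(concat(rtable, rtable))          # aliased concat: cycle

rtable2 = Cell(1, Cell(2))
assert not is_cyclic(concat(rtable2, copy_list(rtable2)))  # copy first: safe
```

This matches the gdb trace earlier in the thread, where the last cell's next pointer looped back to the head of rtable, and it is consistent with Ashutosh's suggestion to wrap query->rtable in list_copy() before list_concat() at pgxcplan.c:3094.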