From: Ashutosh B. <ash...@en...> - 2013-06-17 07:36:16
Hi Abbas,

I think the patch for this is in the other thread (11_fix ..). I looked at the patch. Here are the comments:

1. There are just too many tests in the patch, without much difference. Please add only the tests which are needed, and also add comments about the purpose of the statements. Considering the time at hand, I don't think I can review all of the tests, so it would be good if they can be reduced to a minimal set.

2. The code is fine, but the comment need not have specific details of the failing statement. Getting the preferred node is general practice everywhere, not just in this portion of the code. By the way, we are not getting "just the first node" from the node list; we try to get the preferred node.

On Wed, Mar 27, 2013 at 3:55 PM, Abbas Butt <abb...@en...> wrote:

> Bug ID 3608374
>
> On Fri, Mar 8, 2013 at 12:25 PM, Abbas Butt <abb...@en...> wrote:
>
>> Attached please find the revised patch that provides the following in
>> addition to what it did earlier.
>>
>> 1. Uses GetPreferredReplicationNode() instead of list_truncate()
>> 2. Adds test cases to xc_alter_table and xc_copy.
>>
>> I tested the following in reasonable detail to find whether any other
>> caller of GetRelationNodes() needs some fixing or not, and found that none
>> of the other callers needs any more fixing.
>> I tested
>> a) copy
>> b) alter table redistribute
>> c) utilities
>> d) dmls etc
>>
>> However, while testing ALTER TABLE, I found that replicated to hash is not
>> working correctly.
>>
>> This test case fails, since only SIX rows are expected in the final
>> result.
>>
>> test=# create table t_r_n12(a int, b int) distribute by replication to
>> node (DATA_NODE_1, DATA_NODE_2);
>> CREATE TABLE
>> test=# insert into t_r_n12 values(1,777),(3,4),(5,6),(20,30),(NULL,999),
>> (NULL, 999);
>> INSERT 0 6
>> test=# -- rep to hash
>> test=# ALTER TABLE t_r_n12 distribute by hash(a);
>> ALTER TABLE
>> test=# SELECT * FROM t_r_n12 order by 1;
>>  a  |  b
>> ----+-----
>>   1 | 777
>>   3 |   4
>>   5 |   6
>>  20 |  30
>>     | 999
>>     | 999
>>     | 999
>>     | 999
>> (8 rows)
>>
>> test=# drop table t_r_n12;
>> DROP TABLE
>>
>> I have added a source forge bug tracker id to this case (Artifact 3607290
>> <https://sourceforge.net/tracker/?func=detail&aid=3607290&group_id=311227&atid=1310232>).
>> The reason for this error is that the function distrib_delete_hash does not
>> take into account that the distribution column can be NULL. I will provide
>> a separate fix for that one.
>> Regression shows no extra failure except that test case xc_alter_table
>> would fail until 3607290 is fixed.
>>
>> Regards
>>
>> On Mon, Feb 25, 2013 at 10:18 AM, Ashutosh Bapat <ash...@en...> wrote:
>>
>>> Thanks a lot Abbas for this quick fix.
>>>
>>> I am sorry, it's caused by my refactoring of GetRelationNodes().
>>>
>>> If possible, can you please examine the other callers of
>>> GetRelationNodes() which would face the problems, esp. the ones for DML and
>>> utilities. This is another instance where deciding the nodes to execute on
>>> at the time of execution will help.
>>>
>>> About the fix:
>>> Can you please use GetPreferredReplicationNode() instead of
>>> list_truncate()? It will pick the preferred node instead of the first one. If
>>> you find more places where we need this fix, it might be better to create a
>>> wrapper function and use it at those places.
>>>
>>> On Sat, Feb 23, 2013 at 2:59 PM, Abbas Butt <abb...@en...> wrote:
>>>
>>>> Hi,
>>>> PFA a patch to fix a crash when COPY TO is used on a replicated table.
>>>>
>>>> This test case produces a crash:
>>>>
>>>> create table tab_rep(a int, b int) distribute by replication;
>>>> insert into tab_rep values(1,2), (3,4), (5,6), (7,8);
>>>> COPY tab_rep (a, b) TO stdout;
>>>>
>>>> Here is a description of the problem and the fix.
>>>> In case of a read from a replicated table, GetRelationNodes()
>>>> returns all nodes and expects that the planner can choose
>>>> one depending on the rest of the join tree.
>>>> In case of COPY TO we should choose the first one in the node list.
>>>> This fixes a system crash and makes pg_dump work fine.
>>>>
>>>> --
>>>> Abbas
>>>> Architect
>>>> EnterpriseDB Corporation
>>>> The Enterprise PostgreSQL Company
>>>
>>> --
>>> Best Wishes,
>>> Ashutosh Bapat
>>> EnterpriseDB Corporation
>>> The Enterprise Postgres Company
>>
>> --
>> Abbas
>> Architect
>> EnterpriseDB Corporation
>> The Enterprise PostgreSQL Company
>
> --
> Abbas
> Architect
> EnterpriseDB Corporation
> The Enterprise PostgreSQL Company

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
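For readers following the fix discussed above, the shape of the change is roughly as follows. This is only an illustrative sketch, not the committed patch: the helper name is made up, and the prototype of GetPreferredReplicationNode() is assumed from the thread (a node list in, a one-element list out).

```c
#include "postgres.h"
#include "nodes/pg_list.h"

/* Assumed prototype, based on the discussion above; not copied from the source. */
extern List *GetPreferredReplicationNode(List *relNodes);

/*
 * For a replicated table, GetRelationNodes() returns every node holding the
 * data and leaves the final choice to the planner.  COPY TO has no join tree
 * to drive that choice, so it must read from exactly one node; the review
 * asks for the preferred node rather than blindly truncating the list to its
 * first element.
 */
static List *
choose_copy_to_node(List *rel_node_list)
{
    if (list_length(rel_node_list) > 1)
        return GetPreferredReplicationNode(rel_node_list);

    return rel_node_list;       /* already a single node */
}
```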
From: Tomonari K. <kat...@po...> - 2013-06-17 06:13:37
Hi Ashutosh,

Sorry for the slow response.

I've looked at each of the lists in the list_concat function. This function is called several times; the lists just before the final infinite loop look like below.

[list1]
(gdb) p *list1->head
$18 = {data = {ptr_value = 0x17030e8, int_value = 24129768, oid_value = 24129768}, next = 0x170d418}
(gdb) p *list1->head->next
$19 = {data = {ptr_value = 0x17033d0, int_value = 24130512, oid_value = 24130512}, next = 0x170fd40}
(gdb) p *list1->head->next->next
$20 = {data = {ptr_value = 0x170ae58, int_value = 24161880, oid_value = 24161880}, next = 0x171e6c8}
(gdb) p *list1->head->next->next->next
$21 = {data = {ptr_value = 0x1702ca8, int_value = 24128680, oid_value = 24128680}, next = 0x171ed28}
(gdb) p *list1->head->next->next->next->next
$22 = {data = {ptr_value = 0x170af68, int_value = 24162152, oid_value = 24162152}, next = 0x171f3a0}
(gdb) p *list1->head->next->next->next->next->next
$23 = {data = {ptr_value = 0x170b0a8, int_value = 24162472, oid_value = 24162472}, next = 0x170b7c0}
(gdb) p *list1->head->next->next->next->next->next->next
$24 = {data = {ptr_value = 0x17035f0, int_value = 24131056, oid_value = 24131056}, next = 0x1720998}
---- from ---
(gdb) p *list1->head->next->next->next->next->next->next->next
$25 = {data = {ptr_value = 0x17209b8, int_value = 24250808, oid_value = 24250808}, next = 0x1721190}
(gdb) p *list1->head->next->next->next->next->next->next->next->next
$26 = {data = {ptr_value = 0x17211b0, int_value = 24252848, oid_value = 24252848}, next = 0x1721988}
(gdb) p *list1->head->next->next->next->next->next->next->next->next->next
$27 = {data = {ptr_value = 0x17219a8, int_value = 24254888, oid_value = 24254888}, next = 0x1722018}
(gdb) p *list1->head->next->next->next->next->next->next->next->next->next->next
$28 = {data = {ptr_value = 0x1722038, int_value = 24256568, oid_value = 24256568}, next = 0x1722820}
(gdb) p *list1->head->next->next->next->next->next->next->next->next->next->next->next
$29 = {data = {ptr_value = 0x1722840, int_value = 24258624, oid_value = 24258624}, next = 0x0}
---- to ----

[list2]
(gdb) p *list2->head
$31 = {data = {ptr_value = 0x17209b8, int_value = 24250808, oid_value = 24250808}, next = 0x1721190}
(gdb) p *list2->head->next
$32 = {data = {ptr_value = 0x17211b0, int_value = 24252848, oid_value = 24252848}, next = 0x1721988}
(gdb) p *list2->head->next->next
$33 = {data = {ptr_value = 0x17219a8, int_value = 24254888, oid_value = 24254888}, next = 0x1722018}
(gdb) p *list2->head->next->next->next
$34 = {data = {ptr_value = 0x1722038, int_value = 24256568, oid_value = 24256568}, next = 0x1722820}
(gdb) p *list2->head->next->next->next->next
$35 = {data = {ptr_value = 0x1722840, int_value = 24258624, oid_value = 24258624}, next = 0x0}
----

list1's last five elements are the same as all of list2's elements
(in the example above, the cells between "from" and "to" in list1 equal all of list2).

This is the cause of the infinite loop, but I cannot look any deeper,
because some values from gdb are optimized out and not displayed.
I tried to compile with CFLAGS=O0, but I could not.

What more can I do?

regards,
------------------
NTT Software Corporation
Tomonari Katsumata

(2013/06/12 21:04), Ashutosh Bapat wrote:
> Hi Tomonari,
> Can you please check the list's sanity before calling pgxc_collect_RTE()
> and at every point in the minions of this function. My primary suspect is
> the line pgxcplan.c:3094. We should copy the list before concatenating it.
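The defensive pattern Ashutosh suggests for pgxcplan.c:3094 would look roughly like the following. This is a minimal sketch against the PostgreSQL list API of that era, not the actual Postgres-XC code; the function and variable names are hypothetical.

```c
#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Sketch of the suggested fix.  list_concat() is destructive: it splices the
 * second list's cells onto the tail of the first.  The gdb trace above shows
 * that list1's last five cells *are* list2's cells, so a later concatenation
 * touching the same cells can link a cell back to an earlier one, the chain
 * becomes circular, and every list walker then loops forever.  Concatenating
 * a copy keeps the source list's cells private to the accumulated list.
 */
static List *
append_rtable_copy(List *collected_rtes, List *query_rtable)
{
    /* list_copy() allocates fresh ListCells, so no cell is ever shared
     * between collected_rtes and query_rtable. */
    return list_concat(collected_rtes, list_copy(query_rtable));
}
```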
From: Abbas B. <abb...@en...> - 2013-06-13 08:21:58
ant test Buildfile: /home/edb/Desktop/plan/pgjdbc/build.xml prepare: check_versions: check_driver: driver: compile: [javac] /home/edb/Desktop/plan/pgjdbc/build.xml:127: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds jar: testjar: [javac] /home/edb/Desktop/plan/pgjdbc/build.xml:399: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds [javac] Compiling 1 source file to /home/edb/Desktop/plan/pgjdbc/build/tests [jar] Building jar: /home/edb/Desktop/plan/pgjdbc/jars/postgresql-tests.jar runtest: [junit] Testsuite: org.postgresql.test.jdbc2.Jdbc2TestSuite [junit] Tests run: 292, Failures: 21, Errors: 31, Time elapsed: 149.41 sec [junit] [junit] Testcase: testTransactionIsolation(org.postgresql.test.jdbc2.ConnectionTest): FAILED [junit] expected:<8> but was:<4> [junit] junit.framework.AssertionFailedError: expected:<8> but was:<4> [junit] at org.postgresql.test.jdbc2.ConnectionTest.testTransactionIsolation(ConnectionTest.java:221) [junit] [junit] [junit] Testcase: testMaxFieldSize(org.postgresql.test.jdbc2.ResultSetTest): FAILED [junit] expected:<[12345]> but was:<[825373492]> [junit] junit.framework.ComparisonFailure: expected:<[12345]> but was:<[825373492]> [junit] at org.postgresql.test.jdbc2.ResultSetTest.testMaxFieldSize(ResultSetTest.java:171) [junit] [junit] [junit] Testcase: testBoolean(org.postgresql.test.jdbc2.ResultSetTest): FAILED [junit] expected:<false> but was:<true> [junit] junit.framework.AssertionFailedError: expected:<false> but was:<true> [junit] at org.postgresql.test.jdbc2.ResultSetTest.booleanTests(ResultSetTest.java:205) [junit] at org.postgresql.test.jdbc2.ResultSetTest.testBoolean(ResultSetTest.java:249) [junit] [junit] [junit] Testcase: testgetByte(org.postgresql.test.jdbc2.ResultSetTest): FAILED [junit] expected:<0> but was:<-1> [junit] junit.framework.AssertionFailedError: expected:<0> but was:<-1> [junit] at org.postgresql.test.jdbc2.ResultSetTest.testgetByte(ResultSetTest.java:262) [junit] [junit] [junit] Testcase: testgetShort(org.postgresql.test.jdbc2.ResultSetTest): FAILED [junit] expected:<0> but was:<-1> [junit] junit.framework.AssertionFailedError: expected:<0> but was:<-1> [junit] at org.postgresql.test.jdbc2.ResultSetTest.testgetShort(ResultSetTest.java:296) [junit] [junit] [junit] Testcase: testgetInt(org.postgresql.test.jdbc2.ResultSetTest): FAILED [junit] expected:<0> but was:<-1> [junit] junit.framework.AssertionFailedError: expected:<0> but was:<-1> [junit] at org.postgresql.test.jdbc2.ResultSetTest.testgetInt(ResultSetTest.java:330) [junit] [junit] [junit] Testcase: testgetLong(org.postgresql.test.jdbc2.ResultSetTest): FAILED [junit] expected:<0> but was:<-1> [junit] junit.framework.AssertionFailedError: expected:<0> but was:<-1> [junit] at org.postgresql.test.jdbc2.ResultSetTest.testgetLong(ResultSetTest.java:382) [junit] [junit] [junit] Testcase: testTurkishLocale(org.postgresql.test.jdbc2.ResultSetTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2588) [junit] at org.postgresql.test.jdbc2.ResultSetTest.testTurkishLocale(ResultSetTest.java:718) [junit] [junit] [junit] Testcase: 
testClassesMatch(org.postgresql.test.jdbc2.ResultSetMetaDataTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int2(ByteConverter.java:63) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.readLongValue(AbstractJdbc2ResultSet.java:3159) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2126) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.internalGetObject(AbstractJdbc2ResultSet.java:140) [junit] at org.postgresql.jdbc3.AbstractJdbc3ResultSet.internalGetObject(AbstractJdbc3ResultSet.java:36) [junit] at org.postgresql.jdbc4.AbstractJdbc4ResultSet.internalGetObject(AbstractJdbc4ResultSet.java:300) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getObject(AbstractJdbc2ResultSet.java:2703) [junit] at org.postgresql.test.jdbc2.ResultSetMetaDataTest.testClassesMatch(ResultSetMetaDataTest.java:214) [junit] [junit] [junit] Testcase: testRetrieveArrays(org.postgresql.test.jdbc2.ArrayTest): Caused an ERROR [junit] 8 [junit] java.lang.ArrayIndexOutOfBoundsException: 8 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.readBinaryArray(AbstractJdbc2Array.java:177) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getArrayImpl(AbstractJdbc2Array.java:157) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getArray(AbstractJdbc2Array.java:128) [junit] at org.postgresql.test.jdbc2.ArrayTest.testRetrieveArrays(ArrayTest.java:73) [junit] [junit] [junit] Testcase: testRetrieveResultSets(org.postgresql.test.jdbc2.ArrayTest): Caused an ERROR [junit] 8 [junit] java.lang.ArrayIndexOutOfBoundsException: 8 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.readBinaryResultSet(AbstractJdbc2Array.java:250) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getResultSetImpl(AbstractJdbc2Array.java:794) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getResultSet(AbstractJdbc2Array.java:765) [junit] at org.postgresql.test.jdbc2.ArrayTest.testRetrieveResultSets(ArrayTest.java:109) [junit] [junit] [junit] Testcase: testSetArray(org.postgresql.test.jdbc2.ArrayTest): Caused an ERROR [junit] 8 [junit] java.lang.ArrayIndexOutOfBoundsException: 8 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.readBinaryArray(AbstractJdbc2Array.java:177) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getArrayImpl(AbstractJdbc2Array.java:157) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getArray(AbstractJdbc2Array.java:128) [junit] at org.postgresql.test.jdbc2.ArrayTest.testSetArray(ArrayTest.java:180) [junit] [junit] [junit] Testcase: testNonStandardBounds(org.postgresql.test.jdbc2.ArrayTest): Caused an ERROR [junit] Java heap space [junit] java.lang.OutOfMemoryError: Java heap space [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.readBinaryArray(AbstractJdbc2Array.java:179) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getArrayImpl(AbstractJdbc2Array.java:157) [junit] at org.postgresql.jdbc2.AbstractJdbc2Array.getArray(AbstractJdbc2Array.java:128) [junit] at org.postgresql.test.jdbc2.ArrayTest.testNonStandardBounds(ArrayTest.java:204) [junit] [junit] [junit] Testcase: testGetDate(org.postgresql.test.jdbc2.DateTest): FAILED [junit] expected:<1950-02-07> but was:<1970-01-01> [junit] junit.framework.AssertionFailedError: expected:<1950-02-07> but was:<1970-01-01> 
[junit] at org.postgresql.test.jdbc2.DateTest.dateTest(DateTest.java:160) [junit] at org.postgresql.test.jdbc2.DateTest.testGetDate(DateTest.java:68) [junit] [junit] [junit] Testcase: testSetDate(org.postgresql.test.jdbc2.DateTest): FAILED [junit] expected:<1950-02-07> but was:<1970-01-01> [junit] junit.framework.AssertionFailedError: expected:<1950-02-07> but was:<1970-01-01> [junit] at org.postgresql.test.jdbc2.DateTest.dateTest(DateTest.java:160) [junit] at org.postgresql.test.jdbc2.DateTest.testSetDate(DateTest.java:139) [junit] [junit] [junit] Testcase: testSetTime(org.postgresql.test.jdbc2.TimeTest): FAILED [junit] expected:<01:02:03> but was:<23:32:34> [junit] junit.framework.AssertionFailedError: expected:<01:02:03> but was:<23:32:34> [junit] at org.postgresql.test.jdbc2.TimeTest.timeTest(TimeTest.java:222) [junit] at org.postgresql.test.jdbc2.TimeTest.testSetTime(TimeTest.java:199) [junit] [junit] [junit] Testcase: testGetTimeZone(org.postgresql.test.jdbc2.TimeTest): Caused an ERROR [junit] Unsupported binary encoding of [B@388a2006. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@388a2006. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimeBin(TimestampUtils.java:693) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTime(AbstractJdbc2ResultSet.java:486) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTime(AbstractJdbc2ResultSet.java:2466) [junit] at org.postgresql.test.jdbc2.TimeTest.testGetTimeZone(TimeTest.java:71) [junit] [junit] [junit] Testcase: testGetTime(org.postgresql.test.jdbc2.TimeTest): FAILED [junit] expected:<01:02:03> but was:<23:32:34> [junit] junit.framework.AssertionFailedError: expected:<01:02:03> but was:<23:32:34> [junit] at org.postgresql.test.jdbc2.TimeTest.timeTest(TimeTest.java:222) [junit] at org.postgresql.test.jdbc2.TimeTest.testGetTime(TimeTest.java:152) [junit] [junit] [junit] Testcase: testCalendarModification(org.postgresql.test.jdbc2.TimestampTest): Caused an ERROR [junit] Unsupported binary encoding of [B@3fb35ece. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@3fb35ece. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getDate(AbstractJdbc2ResultSet.java:460) [junit] at org.postgresql.test.jdbc2.TimestampTest.testCalendarModification(TimestampTest.java:81) [junit] [junit] [junit] Testcase: testInfinity(org.postgresql.test.jdbc2.TimestampTest): FAILED [junit] expected:<[infinity]> but was:<[242743-03-22 14:00:30.609529]> [junit] junit.framework.ComparisonFailure: expected:<[infinity]> but was:<[242743-03-22 14:00:30.609529]> [junit] at org.postgresql.test.jdbc2.TimestampTest.runInfinityTests(TimestampTest.java:135) [junit] at org.postgresql.test.jdbc2.TimestampTest.testInfinity(TimestampTest.java:96) [junit] [junit] [junit] Testcase: testGetTimestampWTZ(org.postgresql.test.jdbc2.TimestampTest): Caused an ERROR [junit] Unsupported binary encoding of [B@6ef36e59. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@6ef36e59. 
[junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:2471) [junit] at org.postgresql.test.jdbc2.TimestampTest.timestampTestWTZ(TimestampTest.java:422) [junit] at org.postgresql.test.jdbc2.TimestampTest.testGetTimestampWTZ(TimestampTest.java:186) [junit] [junit] [junit] Testcase: testSetTimestampWTZ(org.postgresql.test.jdbc2.TimestampTest): Caused an ERROR [junit] Unsupported binary encoding of [B@59a51312. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@59a51312. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:2471) [junit] at org.postgresql.test.jdbc2.TimestampTest.timestampTestWTZ(TimestampTest.java:422) [junit] at org.postgresql.test.jdbc2.TimestampTest.testSetTimestampWTZ(TimestampTest.java:256) [junit] [junit] [junit] Testcase: testGetTimestampWOTZ(org.postgresql.test.jdbc2.TimestampTest): Caused an ERROR [junit] Unsupported binary encoding of [B@b57b39f. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@b57b39f. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:2471) [junit] at org.postgresql.test.jdbc2.TimestampTest.timestampTestWOTZ(TimestampTest.java:505) [junit] at org.postgresql.test.jdbc2.TimestampTest.testGetTimestampWOTZ(TimestampTest.java:307) [junit] [junit] [junit] Testcase: testSetTimestampWOTZ(org.postgresql.test.jdbc2.TimestampTest): Caused an ERROR [junit] Unsupported binary encoding of [B@36db492. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@36db492. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:2471) [junit] at org.postgresql.test.jdbc2.TimestampTest.timestampTestWOTZ(TimestampTest.java:505) [junit] at org.postgresql.test.jdbc2.TimestampTest.testSetTimestampWOTZ(TimestampTest.java:399) [junit] [junit] [junit] Testcase: testGetDate(org.postgresql.test.jdbc2.TimezoneTest): Caused an ERROR [junit] Unsupported binary encoding of [B@31bca1c3. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@31bca1c3. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getDate(AbstractJdbc2ResultSet.java:460) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getDate(AbstractJdbc2ResultSet.java:2461) [junit] at org.postgresql.test.jdbc2.TimezoneTest.testGetDate(TimezoneTest.java:191) [junit] [junit] [junit] Testcase: testGetTime(org.postgresql.test.jdbc2.TimezoneTest): Caused an ERROR [junit] Unsupported binary encoding of [B@55f35e30. 
[junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@55f35e30. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTime(AbstractJdbc2ResultSet.java:489) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTime(AbstractJdbc2ResultSet.java:2466) [junit] at org.postgresql.test.jdbc2.TimezoneTest.testGetTime(TimezoneTest.java:246) [junit] [junit] [junit] Testcase: testGetTimestamp(org.postgresql.test.jdbc2.TimezoneTest): Caused an ERROR [junit] Unsupported binary encoding of [B@73d4f355. [junit] org.postgresql.util.PSQLException: Unsupported binary encoding of [B@73d4f355. [junit] at org.postgresql.jdbc2.TimestampUtils.toTimestampBin(TimestampUtils.java:740) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:516) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getTimestamp(AbstractJdbc2ResultSet.java:2471) [junit] at org.postgresql.test.jdbc2.TimezoneTest.testGetTimestamp(TimezoneTest.java:114) [junit] [junit] [junit] Testcase: testDouble(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] null [junit] junit.framework.AssertionFailedError [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testDouble(PreparedStatementTest.java:466) [junit] [junit] [junit] Testcase: testFloat(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] expected 1.0E37,received 1.661525E-4 [junit] junit.framework.AssertionFailedError: expected 1.0E37,received 1.661525E-4 [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testFloat(PreparedStatementTest.java:492) [junit] [junit] [junit] Testcase: testSetFloatInteger(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] expected 2.147483647E9 ,received 6.381306149573078E-67 [junit] junit.framework.AssertionFailedError: expected 2.147483647E9 ,received 6.381306149573078E-67 [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetFloatInteger(PreparedStatementTest.java:551) [junit] [junit] [junit] Testcase: testSetFloatString(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] expected true,received 2.9104200159592993E-33 [junit] junit.framework.AssertionFailedError: expected true,received 2.9104200159592993E-33 [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetFloatString(PreparedStatementTest.java:579) [junit] [junit] [junit] Testcase: testSetFloatBigDecimal(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] expected 1.0E37 ,received 2.9104200159592993E-33 [junit] junit.framework.AssertionFailedError: expected 1.0E37 ,received 2.9104200159592993E-33 [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetFloatBigDecimal(PreparedStatementTest.java:608) [junit] [junit] [junit] Testcase: testSetTinyIntFloat(org.postgresql.test.jdbc2.PreparedStatementTest): Caused an ERROR [junit] 3 [junit] java.lang.ArrayIndexOutOfBoundsException: 3 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.internalGetObject(AbstractJdbc2ResultSet.java:140) [junit] at org.postgresql.jdbc3.AbstractJdbc3ResultSet.internalGetObject(AbstractJdbc3ResultSet.java:36) [junit] at 
org.postgresql.jdbc4.AbstractJdbc4ResultSet.internalGetObject(AbstractJdbc4ResultSet.java:300) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getObject(AbstractJdbc2ResultSet.java:2703) [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetTinyIntFloat(PreparedStatementTest.java:636) [junit] [junit] [junit] Testcase: testSetSmallIntFloat(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] expected 32767 ,received 858928950 [junit] junit.framework.AssertionFailedError: expected 32767 ,received 858928950 [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetSmallIntFloat(PreparedStatementTest.java:665) [junit] [junit] [junit] Testcase: testSetIntFloat(org.postgresql.test.jdbc2.PreparedStatementTest): FAILED [junit] expected 1000 ,received 825241648 [junit] junit.framework.AssertionFailedError: expected 1000 ,received 825241648 [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetIntFloat(PreparedStatementTest.java:693) [junit] [junit] [junit] Testcase: testSetBooleanDouble(org.postgresql.test.jdbc2.PreparedStatementTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int8(ByteConverter.java:29) [junit] at org.postgresql.util.ByteConverter.float8(ByteConverter.java:87) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getDouble(AbstractJdbc2ResultSet.java:2377) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.internalGetObject(AbstractJdbc2ResultSet.java:151) [junit] at org.postgresql.jdbc3.AbstractJdbc3ResultSet.internalGetObject(AbstractJdbc3ResultSet.java:36) [junit] at org.postgresql.jdbc4.AbstractJdbc4ResultSet.internalGetObject(AbstractJdbc4ResultSet.java:300) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getObject(AbstractJdbc2ResultSet.java:2703) [junit] at org.postgresql.test.jdbc2.PreparedStatementTest.testSetBooleanDouble(PreparedStatementTest.java:721) [junit] [junit] [junit] Testcase: testParsingSemiColons(org.postgresql.test.jdbc2.StatementTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.StatementTest.testParsingSemiColons(StatementTest.java:401) [junit] [junit] [junit] Testcase: testPreparedStatementsNoBinds(org.postgresql.test.jdbc2.ServerPreparedStmtTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.ServerPreparedStmtTest.testPreparedStatementsNoBinds(ServerPreparedStmtTest.java:87) [junit] [junit] [junit] Testcase: testPreparedStatementsWithOneBind(org.postgresql.test.jdbc2.ServerPreparedStmtTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.ServerPreparedStmtTest.testPreparedStatementsWithOneBind(ServerPreparedStmtTest.java:121) [junit] [junit] [junit] Testcase: testBooleanObjectBind(org.postgresql.test.jdbc2.ServerPreparedStmtTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 
[junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.ServerPreparedStmtTest.testBooleanObjectBind(ServerPreparedStmtTest.java:157) [junit] [junit] [junit] Testcase: testBooleanIntegerBind(org.postgresql.test.jdbc2.ServerPreparedStmtTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.ServerPreparedStmtTest.testBooleanIntegerBind(ServerPreparedStmtTest.java:171) [junit] [junit] [junit] Testcase: testBooleanBind(org.postgresql.test.jdbc2.ServerPreparedStmtTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.ServerPreparedStmtTest.testBooleanBind(ServerPreparedStmtTest.java:185) [junit] [junit] [junit] Testcase: testPreparedStatementsWithBinds(org.postgresql.test.jdbc2.ServerPreparedStmtTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.jdbc2.ServerPreparedStmtTest.testPreparedStatementsWithBinds(ServerPreparedStmtTest.java:201) [junit] [junit] [junit] Testcase: testWarningsAreCleared(org.postgresql.test.jdbc2.BatchExecuteTest): Caused an ERROR [junit] Batch entry 0 CREATE TABLE unused (a int primary key) was aborted. Call getNextException to see the cause. [junit] java.sql.BatchUpdateException: Batch entry 0 CREATE TABLE unused (a int primary key) was aborted. Call getNextException to see the cause. 
[junit] at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2753) [junit] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1891) [junit] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405) [junit] at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2900) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.testWarningsAreCleared(BatchExecuteTest.java:252) [junit] [junit] [junit] Testcase: testClearBatch(org.postgresql.test.jdbc2.BatchExecuteTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2588) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.assertCol1HasValue(BatchExecuteTest.java:87) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.testClearBatch(BatchExecuteTest.java:115) [junit] [junit] [junit] Testcase: testPreparedStatement(org.postgresql.test.jdbc2.BatchExecuteTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2588) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.assertCol1HasValue(BatchExecuteTest.java:87) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.testPreparedStatement(BatchExecuteTest.java:187) [junit] [junit] [junit] Testcase: testTransactionalBehaviour(org.postgresql.test.jdbc2.BatchExecuteTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2588) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.assertCol1HasValue(BatchExecuteTest.java:87) [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.testTransactionalBehaviour(BatchExecuteTest.java:225) [junit] [junit] [junit] Testcase: testBatchEscapeProcessing(org.postgresql.test.jdbc2.BatchExecuteTest): FAILED [junit] expected:<[2007-11-20]> but was:<[1970-01-01]> [junit] junit.framework.ComparisonFailure: expected:<[2007-11-20]> but was:<[1970-01-01]> [junit] at org.postgresql.test.jdbc2.BatchExecuteTest.testBatchEscapeProcessing(BatchExecuteTest.java:274) [junit] [junit] [junit] Testcase: testUpdateable(org.postgresql.test.jdbc2.UpdateableResultTest): Caused an ERROR [junit] ERROR: Write to replicated table returneddifferent results from the Datanodes [junit] org.postgresql.util.PSQLException: ERROR: Write to replicated table returneddifferent results from the Datanodes [junit] at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161) [junit] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890) [junit] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) [junit] at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559) [junit] at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417) [junit] at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:363) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.updateRow(AbstractJdbc2ResultSet.java:1386) [junit] at org.postgresql.test.jdbc2.UpdateableResultTest.testUpdateable(UpdateableResultTest.java:349) [junit] [junit] [junit] Testcase: testPGbox(org.postgresql.test.jdbc2.GeometricTest): Caused an ERROR [junit] Failed to create object for: box. [junit] org.postgresql.util.PSQLException: Failed to create object for: box. [junit] at org.postgresql.jdbc2.AbstractJdbc2Connection.getObject(AbstractJdbc2Connection.java:573) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getObject(AbstractJdbc2ResultSet.java:2708) [junit] at org.postgresql.test.jdbc2.GeometricTest.checkReadWrite(GeometricTest.java:54) [junit] at org.postgresql.test.jdbc2.GeometricTest.testPGbox(GeometricTest.java:62) [junit] Caused by: java.lang.ArrayIndexOutOfBoundsException: 11 [junit] at org.postgresql.util.ByteConverter.int8(ByteConverter.java:29) [junit] at org.postgresql.util.ByteConverter.float8(ByteConverter.java:87) [junit] at org.postgresql.geometric.PGpoint.setByteValue(PGpoint.java:94) [junit] at org.postgresql.geometric.PGbox.setByteValue(PGbox.java:94) [junit] at org.postgresql.jdbc2.AbstractJdbc2Connection.getObject(AbstractJdbc2Connection.java:550) [junit] [junit] [junit] Testcase: testPGpoint(org.postgresql.test.jdbc2.GeometricTest): Caused an ERROR [junit] Failed to create object for: point. [junit] org.postgresql.util.PSQLException: Failed to create object for: point. [junit] at org.postgresql.jdbc2.AbstractJdbc2Connection.getObject(AbstractJdbc2Connection.java:573) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getObject(AbstractJdbc2ResultSet.java:2708) [junit] at org.postgresql.test.jdbc2.GeometricTest.checkReadWrite(GeometricTest.java:54) [junit] at org.postgresql.test.jdbc2.GeometricTest.testPGpoint(GeometricTest.java:111) [junit] Caused by: java.lang.ArrayIndexOutOfBoundsException: 5 [junit] at org.postgresql.util.ByteConverter.int8(ByteConverter.java:29) [junit] at org.postgresql.util.ByteConverter.float8(ByteConverter.java:87) [junit] at org.postgresql.geometric.PGpoint.setByteValue(PGpoint.java:93) [junit] at org.postgresql.jdbc2.AbstractJdbc2Connection.getObject(AbstractJdbc2Connection.java:550) [junit] [junit] [junit] Testcase: testCopyOut(org.postgresql.test.jdbc2.CopyTest): FAILED [junit] content changed at byte#0: 8392 expected:<83> but was:<92> [junit] junit.framework.AssertionFailedError: content changed at byte#0: 8392 expected:<83> but was:<92> [junit] at org.postgresql.test.jdbc2.CopyTest.testCopyOut(CopyTest.java:206) [junit] [junit] [junit] Test org.postgresql.test.jdbc2.Jdbc2TestSuite FAILED [junit] Testsuite: org.postgresql.test.jdbc2.optional.OptionalTestSuite [junit] Tests run: 40, Failures: 0, Errors: 0, Time elapsed: 7.217 sec [junit] [junit] Testsuite: org.postgresql.test.jdbc3.Jdbc3TestSuite [junit] Tests run: 48, Failures: 2, Errors: 0, Time elapsed: 22.698 sec [junit] [junit] Testcase: testUpdateReal(org.postgresql.test.jdbc3.Jdbc3CallableStatementTest): FAILED [junit] null [junit] junit.framework.AssertionFailedError [junit] at org.postgresql.test.jdbc3.Jdbc3CallableStatementTest.testUpdateReal(Jdbc3CallableStatementTest.java:607) [junit] [junit] [junit] Testcase: testUpdateDecimal(org.postgresql.test.jdbc3.Jdbc3CallableStatementTest): FAILED [junit] null [junit] 
junit.framework.AssertionFailedError [junit] at org.postgresql.test.jdbc3.Jdbc3CallableStatementTest.testUpdateDecimal(Jdbc3CallableStatementTest.java:658) [junit] [junit] [junit] Test org.postgresql.test.jdbc3.Jdbc3TestSuite FAILED [junit] Testsuite: org.postgresql.test.xa.XATestSuite [junit] Tests run: 9, Failures: 0, Errors: 1, Time elapsed: 2.333 sec [junit] [junit] Testcase: testCloseBeforeCommit(org.postgresql.test.xa.XADataSourceTest): Caused an ERROR [junit] 1 [junit] java.lang.ArrayIndexOutOfBoundsException: 1 [junit] at org.postgresql.util.ByteConverter.int4(ByteConverter.java:48) [junit] at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:2124) [junit] at org.postgresql.test.xa.XADataSourceTest.testCloseBeforeCommit(XADataSourceTest.java:157) [junit] [junit] [junit] Test org.postgresql.test.xa.XATestSuite FAILED [junit] Testsuite: org.postgresql.test.extensions.ExtensionsSuite [junit] Tests run: 0, Failures: 0, Errors: 0, Time elapsed: 0.383 sec [junit] [junit] Testsuite: org.postgresql.test.jdbc4.Jdbc4TestSuite [junit] Tests run: 26, Failures: 1, Errors: 0, Time elapsed: 2.032 sec [junit] [junit] Testcase: testUUID(org.postgresql.test.jdbc4.UUIDTest): FAILED [junit] expected:<7e535902-a704-4c73-96c7-710df47b5035> but was:<37653533-3539-3032-2d61-3730342d3463> [junit] junit.framework.AssertionFailedError: expected:<7e535902-a704-4c73-96c7-710df47b5035> but was:<37653533-3539-3032-2d61-3730342d3463> [junit] at org.postgresql.test.jdbc4.UUIDTest.testUUID(UUIDTest.java:51) [junit] [junit] [junit] Test org.postgresql.test.jdbc4.Jdbc4TestSuite FAILED [junit] Testsuite: org.postgresql.test.ssl.SslTestSuite [junit] Tests run: 0, Failures: 0, Errors: 0, Time elapsed: 0.417 sec [junit] [junit] ------------- Standard Output --------------- [junit] Skipping ssloff8. [junit] Skipping sslhostnossl8. [junit] Skipping ssloff9. [junit] Skipping sslhostnossl9. [junit] Skipping sslhostgh8. [junit] Skipping sslhostgh9. [junit] Skipping sslhostbh8. [junit] Skipping sslhostbh9. [junit] Skipping sslhostsslgh8. [junit] Skipping sslhostsslgh9. [junit] Skipping sslhostsslbh8. [junit] Skipping sslhostsslbh9. [junit] Skipping sslhostsslcertgh8. [junit] Skipping sslhostsslcertgh9. [junit] Skipping sslhostsslcertbh8. [junit] Skipping sslhostsslcertbh9. [junit] Skipping sslcertgh8. [junit] Skipping sslcertgh9. [junit] Skipping sslcertbh8. [junit] Skipping sslcertbh9. [junit] ------------- ---------------- --------------- test: BUILD SUCCESSFUL Total time: 3 minutes 8 seconds |
From: Ashutosh B. <ash...@en...> - 2013-06-12 12:04:45
Hi Tomonari,

Can you please check the list's sanity before calling pgxc_collect_RTE()
and at every point in the minions of this function? My primary suspect is
the line pgxcplan.c:3094. We should copy the list before concatenating it.

On Wed, Jun 12, 2013 at 2:26 PM, Tomonari Katsumata <kat...@po...> wrote:

> We should make rtable right or deal with "flags" properly.
> But I can't find where to do it...
>
> What do you think?

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
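As a concrete way to "check the list's sanity" before each pgxc_collect_RTE_walker() call, a temporary helper along these lines could be dropped in. It is only a debugging sketch (the helper name is invented), written against the 2013-era list API where lnext() takes a single cell; it relies on the fact that a healthy List terminates within its declared length.

```c
#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Debugging sketch: returns false if the cell chain does not terminate
 * within the List's declared length, i.e. the suspected cyclic rtable.
 * A cyclic list never reaches a NULL 'next', so walking one cell past
 * list_length() without hitting the end means the chain loops.
 */
static bool
list_is_sane(List *list)
{
    ListCell   *cell;
    int         seen = 0;

    if (list == NIL)
        return true;

    for (cell = list_head(list); cell != NULL; cell = lnext(cell))
    {
        if (++seen > list_length(list))
            return false;       /* more cells than length => cycle */
    }

    return seen == list_length(list);
}
```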
From: Tomonari K. <kat...@po...> - 2013-06-12 08:56:42
Hi Ashutosh,

Thank you for the response.

(2013/06/12 14:43), Ashutosh Bapat wrote:
>> Hi,
>>
>> I've investigated this problem (BUG:3614369).
>>
>> I caught the cause of it, but I can not
>> find where to fix.
>>
>> The problem occurs when "pgxc_collect_RTE_walker" is called infinitely.
>> It seems that rtable (List of RangeTable) becomes a cyclic List.
>> I'm not sure where the List is made.
>>
> I guess, we are talking about the EXECUTE DIRECT statement that you have
> mentioned earlier.

Yes, that's right.
I'm talking about an EXECUTE DIRECT statement like below.
---
EXECUTE DIRECT ON (data1) $$
SELECT
  count(*)
FROM
  (SELECT * FROM pg_locks l LEFT JOIN
  (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a
$$
---

> The function pgxc_collect_RTE_walker() is a recursive
> function. The condition to end the recursion is if the given node is NULL.
> We have to look at if that condition is met and if not why.
>
I investigated it deeper, and I noticed that
the infinite loop happens in the function "range_table_walker()".

Please see the trace below.
===========================
Breakpoint 1, range_table_walker (rtable=0x15e7968, walker=0x612c70 <pgxc_collect_RTE_walker>, context=0x7fffd2de31c0,
flags=0) at nodeFuncs.c:1908
1908 in nodeFuncs.c

(gdb) p *rtable
$10 = {type = T_List, length = 5, head = 0x15e7998, tail = 0x15e9820}
(gdb) p *rtable->head
$11 = {data = {ptr_value = 0x15e79b8, int_value = 22968760, oid_value = 22968760}, next = 0x15e8190}
(gdb) p *rtable->head->next
$12 = {data = {ptr_value = 0x15e81b0, int_value = 22970800, oid_value = 22970800}, next = 0x15e8988}
(gdb) p *rtable->head->next->next
$13 = {data = {ptr_value = 0x15e89a8, int_value = 22972840, oid_value = 22972840}, next = 0x15e9018}
(gdb) p *rtable->head->next->next->next
$14 = {data = {ptr_value = 0x15e9038, int_value = 22974520, oid_value = 22974520}, next = 0x15e9820}
(gdb) p *rtable->head->next->next->next->next
$15 = {data = {ptr_value = 0x15e9840, int_value = 22976576, oid_value = 22976576}, next = 0x15e7998}
===========================

The line which starts with "$15" has 0x15e7998 as its next data.
But that is the head pointer (see the line which starts with "$10").

And in range_table_walker(), the function is called recursively.
--------
...
if (!(flags & QTW_IGNORE_RANGE_TABLE))
{
    if (range_table_walker(query->rtable, walker, context, flags))
        return true;
}
...
--------

We should make rtable right or deal with "flags" properly.
But I can't find where to do it...

What do you think?

regards,
---------
NTT Software Corporation
Tomonari Katsumata
From: Ashutosh B. <ash...@en...> - 2013-06-12 05:43:44
Hi Tomonari, On Wed, Jun 12, 2013 at 10:17 AM, Tomonari Katsumata < kat...@po...> wrote: > Hi, > > I've investigated this problem(BUG:3614369). > > I caught the cause of it, but I can not > find where to fix. > > The problem occurs when "pgxc_collect_RTE_walker" is called infinitely. > It seems that rtable(List of RangeTable) become cyclic List. > I'm not sure where the List is made. > > I guess, we are talking about EXECUTE DIRECT statement that you have mentioned earlier. The function pgxc_collect_RTE_walker() is a recursive function. The condition to end the recursion is if the given node is NULL. We have to look at if that condition is met and if not why. > Anybody, please give me your help ? > If not so difficult to fix it, I want to fix soon. > > -------- > NTT Software Corporation > Tomonari Katsumata > > (2013/06/06 13:55), Koichi Suzuki wrote: > > I added this to the bug tracker with the ID 3614369 > > > > Regards; > > > > ---------- > > Koichi Suzuki > > > > > > 2013/6/5 鈴木 幸市 <ko...@in...> > > > >> Yeah, I found that this command stuck and doing this by direct > connection > >> to datanode works. > >> > >> Regards; > >> --- > >> Koichi Suzuki > >> > >> > >> > >> On 2013/06/05, at 18:12, Tomonari Katsumata < > >> kat...@po...> wrote: > >> > >>> Hi, > >>> > >>> The queries sent by Suzuki-san work fine, > >>> but my problem is still there. > >>> Could you try execute the query I sent before? > >>> > >>> --- > >>> EXECUTE DIRECT ON (data1) $$ > >>> SELECT > >>> count(*) > >>> FROM > >>> (SELECT * FROM pg_locks l LEFT JOIN > >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > >>> $$ > >>> --- > >>> > >>> I don't change this query because it work with Postgres-XC v1.0. > >>> > >>> regards, > >>> ------- > >>> NTT Software Corporation > >>> Tomonari Katsumata > >>> > >>> (2013/06/05 16:20), Tomonari Katsumata wrote: > >>>> Hi, all > >>>> > >>>> thank you for many responses! > >>>> > >>>> OK, I'll try it with the current master. > >>>> > >>>> It seems that it'll work fine... > >>>> > >>>> Sorry for bothering you. > >>>> > >>>> regards, > >>>> -------- > >>>> NTT Software Corporation > >>>> Tomonari Katsumata > >>>> > >>>> > >>>> (2013/06/05 13:48), 鈴木 幸市 wrote: > >>>>> Now snapshot warning is disabled. Michael committed this patch. > >>>>> > >>>>> I tested the query with the current master as of this noon and I got > >>>> (probably) correct result. > >>>>> Here's the result: > >>>>> > >>>>> koichi=# execute direct on (datanode1) $$ > >>>>> select count(*) from (select * from pg_locks) l left join > >>>>> (select * from pg_stat_activity) s on (l.database=s.datid); > >>>>> $$; > >>>>> count > >>>>> ------- > >>>>> 9 > >>>>> (1 row) > >>>>> > >>>>> koichi=# \q > >>>>> … > >>>>> > >>>>> koichi=# execute direct on (datanode1) $$ > >>>>> koichi$# select count(*) from pg_locks l left join pg_stat_activity > s > >>>>> koichi$# on (l.database=s.datid); > >>>>> koichi$# $$; > >>>>> count > >>>>> ------- > >>>>> 11 > >>>>> (1 row) > >>>>> > >>>>> koichi=# > >>>>> > >>>>> Second statement is simpler version. Anyway, they seem to work find. > >>>>> > >>>>> Katsumata-san, could you try this with the latest head? It is > >>>> available both from sourceforge and github. 
> >>>>> Regards; > >>>>> --- > >>>>> Koichi Suzuki > >>>>> > >>>>> > >>>>> > >>>>> On 2013/06/05, at 13:39, Ashutosh Bapat > >>>> <ash...@en...> wrote: > >>>>>> > >>>>>> > >>>>>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata > >>>> <kat...@po...> wrote: > >>>>>> Hi Suzuki-san, Ashutosh, > >>>>>> > >>>>>>> Suzuki-san > >>>>>> I don't make any user tables. > >>>>>> As the simple example I sent before, I use only system-catalogs. > >>>>>> > >>>>>>> Ashtosh > >>>>>> I'm developing database monitor tool and > >>>>>> I use "EXECUTE DIRECT" to get database statistics data from > >>>>>> particular coordinator/datanode. > >>>>>> > >>>>>> :) huh > >>>>>> > >>>>>> I think, monitoring tools should directly query the datanodes or > >>>> coordinators. You will get snapshot warning, but that can be ignored > I > >>>> guess. If they start querying coordinators, there will be performance > >>>> drop since coordinators directly handle the clients. > >>>>>> Any other thoughts? > >>>>>> regard, > >>>>>> > >>>>>> --------- > >>>>>> NTT Software Corporation > >>>>>> Tomonari Katsumata > >>>>>> > >>>>>> (2013/06/04 13:10), Ashutosh Bapat wrote: > >>>>>> Hi Tomonari, > >>>>>> > >>>>>> Thanks for the bug report. > >>>>>> > >>>>>> I am curious to know, what's the purpose of using EXECUTE DIRECT? > We > >>>>>> discourage using Execute Direct in the applications. It's only for > >>>>>> debugging purposes. > >>>>>> > >>>>>> > >>>>>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki > >>>> <koi...@gm...>wrote: > >>>>>> Thank you Katsumata-san for the report. > >>>>>> > >>>>>> Could you provide CREATE TABLE statement for each table involved > with > >>>> some > >>>>>> of the data? > >>>>>> > >>>>>> I will ad this to the bug tracker after I recreate the issue. > >>>>>> > >>>>>> Best Regards; > >>>>>> > >>>>>> ---------- > >>>>>> Koichi Suzuki > >>>>>> > >>>>>> > >>>>>> 2013/6/4 Tomonari Katsumata <kat...@po...> > >>>>>> > >>>>>> Hi, I have a problem with query executing. > >>>>>> > >>>>>> I cant't have any response when I execute a query. > >>>>>> This problem occurs when some conditions are met. > >>>>>> > >>>>>> The conditions are below. > >>>>>> > --------------------------------------------------------------------- > >>>>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). > >>>>>> > >>>>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. > >>>>>> > >>>>>> 3. In the subquery, it has a JOIN clause. > >>>>>> > >>>>>> 4. The Join clause is consisted with another subquery. > >>>>>> > --------------------------------------------------------------------- > >>>>>> > >>>>>> > >>>>>> Simple example query is below. > >>>>>> --------------------------------------------------------------- > >>>>>> EXECUTE DIRECT ON (data1) $$ > >>>>>> SELECT > >>>>>> count(*) > >>>>>> FROM > >>>>>> (SELECT * FROM pg_locks l LEFT JOIN > >>>>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > >>>>>> $$ > >>>>>> --------------------------------------------------------------- > >>>>>> > >>>>>> FYI: > >>>>>> This query works fine with Postgres-XC 1.0.3. > >>>>>> Is this already known bug ? > >>>>>> > >>>>>> > >>>>>> How can I avoid this problem ? > >>>>>> And what kind of info do you need to investigate it ? > >>>>>> > >>>>>> ---------- > >>>>>> NTT Software Corporation > >>>>>> Tomonari Katsumata > > > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. 
> > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
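For reference, the recursion described above follows the standard PostgreSQL node-walker idiom sketched below; this is only the generic shape, not the actual body of pgxc_collect_RTE_walker(). Its only built-in stop condition is the NULL check, so if query_tree_walker()/range_table_walker() are handed a range table whose cell chain loops back on itself, the walk never reaches the end of the list and never returns, which matches the hang reported here.
--------
#include "postgres.h"
#include "nodes/nodeFuncs.h"
#include "nodes/parsenodes.h"

/*
 * Generic shape of a node walker (standard PostgreSQL idiom; the real
 * pgxc_collect_RTE_walker may differ).  The NULL check is the only
 * termination condition, so it cannot protect against a cyclic rtable.
 */
static bool
collect_rte_walker_shape(Node *node, void *context)
{
    if (node == NULL)
        return false;           /* end of this branch of the tree */

    if (IsA(node, Query))
    {
        /* recurse into the sub-Query, including its range table */
        return query_tree_walker((Query *) node,
                                 collect_rte_walker_shape,
                                 context, 0);
    }

    return expression_tree_walker(node, collect_rte_walker_shape, context);
}
--------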
From: Tomonari K. <kat...@po...> - 2013-06-12 05:11:16
|
Hi, I've investigated this problem(BUG:3614369). I caught the cause of it, but I can not find where to fix. The problem occurs when "pgxc_collect_RTE_walker" is called infinitely. It seems that rtable(List of RangeTable) become cyclic List. I'm not sure where the List is made. Anybody, please give me your help ? If not so difficult to fix it, I want to fix soon. -------- NTT Software Corporation Tomonari Katsumata (2013/06/06 13:55), Koichi Suzuki wrote: > I added this to the bug tracker with the ID 3614369 > > Regards; > > ---------- > Koichi Suzuki > > > 2013/6/5 鈴木 幸市 <ko...@in...> > >> Yeah, I found that this command stuck and doing this by direct connection >> to datanode works. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/06/05, at 18:12, Tomonari Katsumata < >> kat...@po...> wrote: >> >>> Hi, >>> >>> The queries sent by Suzuki-san work fine, >>> but my problem is still there. >>> Could you try execute the query I sent before? >>> >>> --- >>> EXECUTE DIRECT ON (data1) $$ >>> SELECT >>> count(*) >>> FROM >>> (SELECT * FROM pg_locks l LEFT JOIN >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> $$ >>> --- >>> >>> I don't change this query because it work with Postgres-XC v1.0. >>> >>> regards, >>> ------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/06/05 16:20), Tomonari Katsumata wrote: >>>> Hi, all >>>> >>>> thank you for many responses! >>>> >>>> OK, I'll try it with the current master. >>>> >>>> It seems that it'll work fine... >>>> >>>> Sorry for bothering you. >>>> >>>> regards, >>>> -------- >>>> NTT Software Corporation >>>> Tomonari Katsumata >>>> >>>> >>>> (2013/06/05 13:48), 鈴木 幸市 wrote: >>>>> Now snapshot warning is disabled. Michael committed this patch. >>>>> >>>>> I tested the query with the current master as of this noon and I got >>>> (probably) correct result. >>>>> Here's the result: >>>>> >>>>> koichi=# execute direct on (datanode1) $$ >>>>> select count(*) from (select * from pg_locks) l left join >>>>> (select * from pg_stat_activity) s on (l.database=s.datid); >>>>> $$; >>>>> count >>>>> ------- >>>>> 9 >>>>> (1 row) >>>>> >>>>> koichi=# \q >>>>> … >>>>> >>>>> koichi=# execute direct on (datanode1) $$ >>>>> koichi$# select count(*) from pg_locks l left join pg_stat_activity s >>>>> koichi$# on (l.database=s.datid); >>>>> koichi$# $$; >>>>> count >>>>> ------- >>>>> 11 >>>>> (1 row) >>>>> >>>>> koichi=# >>>>> >>>>> Second statement is simpler version. Anyway, they seem to work find. >>>>> >>>>> Katsumata-san, could you try this with the latest head? It is >>>> available both from sourceforge and github. >>>>> Regards; >>>>> --- >>>>> Koichi Suzuki >>>>> >>>>> >>>>> >>>>> On 2013/06/05, at 13:39, Ashutosh Bapat >>>> <ash...@en...> wrote: >>>>>> >>>>>> >>>>>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata >>>> <kat...@po...> wrote: >>>>>> Hi Suzuki-san, Ashutosh, >>>>>> >>>>>>> Suzuki-san >>>>>> I don't make any user tables. >>>>>> As the simple example I sent before, I use only system-catalogs. >>>>>> >>>>>>> Ashtosh >>>>>> I'm developing database monitor tool and >>>>>> I use "EXECUTE DIRECT" to get database statistics data from >>>>>> particular coordinator/datanode. >>>>>> >>>>>> :) huh >>>>>> >>>>>> I think, monitoring tools should directly query the datanodes or >>>> coordinators. You will get snapshot warning, but that can be ignored I >>>> guess. If they start querying coordinators, there will be performance >>>> drop since coordinators directly handle the clients. >>>>>> Any other thoughts? 
>>>>>> regard, >>>>>> >>>>>> --------- >>>>>> NTT Software Corporation >>>>>> Tomonari Katsumata >>>>>> >>>>>> (2013/06/04 13:10), Ashutosh Bapat wrote: >>>>>> Hi Tomonari, >>>>>> >>>>>> Thanks for the bug report. >>>>>> >>>>>> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >>>>>> discourage using Execute Direct in the applications. It's only for >>>>>> debugging purposes. >>>>>> >>>>>> >>>>>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki >>>> <koi...@gm...>wrote: >>>>>> Thank you Katsumata-san for the report. >>>>>> >>>>>> Could you provide CREATE TABLE statement for each table involved with >>>> some >>>>>> of the data? >>>>>> >>>>>> I will ad this to the bug tracker after I recreate the issue. >>>>>> >>>>>> Best Regards; >>>>>> >>>>>> ---------- >>>>>> Koichi Suzuki >>>>>> >>>>>> >>>>>> 2013/6/4 Tomonari Katsumata <kat...@po...> >>>>>> >>>>>> Hi, I have a problem with query executing. >>>>>> >>>>>> I cant't have any response when I execute a query. >>>>>> This problem occurs when some conditions are met. >>>>>> >>>>>> The conditions are below. >>>>>> --------------------------------------------------------------------- >>>>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>>>>> >>>>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>>>>> >>>>>> 3. In the subquery, it has a JOIN clause. >>>>>> >>>>>> 4. The Join clause is consisted with another subquery. >>>>>> --------------------------------------------------------------------- >>>>>> >>>>>> >>>>>> Simple example query is below. >>>>>> --------------------------------------------------------------- >>>>>> EXECUTE DIRECT ON (data1) $$ >>>>>> SELECT >>>>>> count(*) >>>>>> FROM >>>>>> (SELECT * FROM pg_locks l LEFT JOIN >>>>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>>>>> $$ >>>>>> --------------------------------------------------------------- >>>>>> >>>>>> FYI: >>>>>> This query works fine with Postgres-XC 1.0.3. >>>>>> Is this already known bug ? >>>>>> >>>>>> >>>>>> How can I avoid this problem ? >>>>>> And what kind of info do you need to investigate it ? >>>>>> >>>>>> ---------- >>>>>> NTT Software Corporation >>>>>> Tomonari Katsumata |
From: Abbas B. <abb...@en...> - 2013-06-10 12:31:57
|
Hi,

Attached please find a WIP patch that provides the functionality of preparing the statement at the datanodes as soon as it is prepared on the coordinator. This is to take care of a test case in plancache that makes sure that a change of search_path is ignored by replans. While the patch fixes this replan test case and the regression suite works fine, there are still two problems I have to take care of.

1. This test case fails:

CREATE TABLE xc_alter_table_3 (a int, b varchar(10)) DISTRIBUTE BY HASH(a);
INSERT INTO xc_alter_table_3 VALUES (1, 'a');
PREPARE d3 AS DELETE FROM xc_alter_table_3 WHERE a = $1; -- fails

test=# explain verbose DELETE FROM xc_alter_table_3 WHERE a = 1;
                             QUERY PLAN
-------------------------------------------------------------------
 Delete on public.xc_alter_table_3  (cost=0.00..0.00 rows=1000 width=14)
   Node/s: data_node_1, data_node_2, data_node_3, data_node_4
   Remote query: DELETE FROM ONLY xc_alter_table_3 WHERE ((xc_alter_table_3.ctid = $1) AND (xc_alter_table_3.xc_node_id = $2))
   ->  Data Node Scan on xc_alter_table_3 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=14)
         Output: xc_alter_table_3.a, xc_alter_table_3.ctid, xc_alter_table_3.xc_node_id
         Node/s: data_node_3
         Remote query: SELECT a, ctid, xc_node_id FROM ONLY xc_alter_table_3 WHERE (a = 1)
(7 rows)

The reason for the failure is that the select query is selecting 3 items, the first of which is an int, whereas the delete query is comparing $1 with a ctid. I am not sure how this works without prepare, but it fails when used with prepare. The reason for this plan is this section of code in the function pgxc_build_dml_statement:

else if (cmdtype == CMD_DELETE)
{
    /*
     * Since there is no data to update, the first param is going to be
     * ctid.
     */
    ctid_param_num = 1;
}

Amit/Ashutosh, can you suggest a fix for this problem? There are a number of possibilities.
a) The select should not have selected column a.
b) The DELETE should have referred to $2 and $3 for ctid and xc_node_id respectively.
c) Since the query works without PREPARE, we should make PREPARE work the same way.

2. This test case in plancache fails:

-- Try it with a view, which isn't directly used in the resulting plan
-- but should trigger invalidation anyway
create table tab33 (a int, b int);
insert into tab33 values(1,2);
CREATE VIEW v_tab33 AS SELECT * FROM tab33;
PREPARE vprep AS SELECT * FROM v_tab33;
EXECUTE vprep;
CREATE OR REPLACE VIEW v_tab33 AS SELECT a, b/2 AS q2 FROM tab33;
-- does not cause plan invalidation because views are never created on datanodes
EXECUTE vprep;

The reason for the failure is that views are never created on the datanodes, hence plan invalidation is not triggered. This can be documented as an XC limitation.

3. I still have to add comments in the patch, and some ifdefs may be missing too.

In addition to the patch I have also attached some example Java programs that test some basic functionality through JDBC. I found that these programs work fine after my patch.
1. Prepared.java : Issues parameterized delete, insert and update through JDBC. These are unnamed prepared statements and they work fine.
2. NamedPrepared.java : Issues two named prepared statements through JDBC and works fine.
3. Retrieve.java : Runs a simple select to verify results.
The comments on top of the files explain their usage.

Comments are welcome.
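To make option (b) above concrete, here is a minimal arithmetic illustration (these helpers are hypothetical and are not code from pgxc_build_dml_statement): if the inner Data Node Scan emits n_user_cols user columns followed by the junk columns ctid and xc_node_id, then in the outer remote DELETE those junk columns arrive as parameters n_user_cols + 1 and n_user_cols + 2, not $1 and $2. In the failing plan above n_user_cols is 1 (column a), so the remote query should reference $2 and $3.
--------
/*
 * Hypothetical helpers, for illustration only: parameter numbers of the
 * junk columns when the inner scan emits n_user_cols user columns first,
 * then ctid, then xc_node_id.
 */
static int
remote_ctid_param_num(int n_user_cols)
{
    return n_user_cols + 1;     /* ctid comes right after the user columns */
}

static int
remote_node_id_param_num(int n_user_cols)
{
    return n_user_cols + 2;     /* xc_node_id is the last junk column */
}
--------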
Thanks Regards On Mon, Jun 3, 2013 at 10:54 AM, Ashutosh Bapat < ash...@en...> wrote: > > > > On Mon, Jun 3, 2013 at 10:51 AM, Abbas Butt <abb...@en...>wrote: > >> >> >> On Mon, Jun 3, 2013 at 8:43 AM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> >>> >>> >>> On Mon, Jun 3, 2013 at 7:40 AM, Abbas Butt <abb...@en...>wrote: >>> >>>> Attached please find updated patch to fix the bug. The patch takes care >>>> of the bug and the regression issues resulting from the changes done in the >>>> patch. Please note that the issue in test case plancache still stands >>>> unsolved because of the following test case (simplified but taken from >>>> plancache.sql) >>>> >>>> create schema s1 create table abc (f1 int); >>>> create schema s2 create table abc (f1 int); >>>> >>>> >>>> insert into s1.abc values(123); >>>> insert into s2.abc values(456); >>>> >>>> set search_path = s1; >>>> >>>> prepare p1 as select f1 from abc; >>>> execute p1; -- works fine, results in 123 >>>> >>>> set search_path = s2; >>>> execute p1; -- works fine after the patch, results in 123 >>>> >>>> alter table s1.abc add column f2 float8; -- force replan >>>> execute p1; -- fails >>>> >>>> >>> Huh! The beast bit us. >>> >>> I think the right solution here is either of two >>> 1. Take your previous patch to always use qualified names (but you need >>> to improve it not to affect the view dumps) >>> 2. Prepare the statements at the datanode at the time of prepare. >>> >>> >>> Is this test added new in 9.2? >>> >> >> No, it was added by commit 547b6e537aa8bbae83a8a4c4d0d7f216390bdb9c in >> March 2007. >> >> >>> Why didn't we see this issue the first time prepare was implemented? I >>> don't remember (but it was two years back). >>> >> >> I was unable to locate the exact reason but since statements were not >> being prepared on datanodes due to a merge issue this issue just surfaced >> up. >> >> > > Well, even though statements were not getting prepared (actually prepared > statements were not being used again and again) on datanodes, we never > prepared them on datanode at the time of preparing the statement. So, this > bug should have shown itself long back. > > >> >>> >>>> The last execute should result in 123, whereas it results in 456. The >>>> reason is that the search path has already been changed at the datanode and >>>> a replan would mean select from abc in s2. >>>> >>>> >>>> >>>> >>>> On Tue, May 28, 2013 at 7:17 PM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> Hi Abbas, >>>>> I think the fix is on the right track. There are couple of >>>>> improvements that we need to do here (but you may not do those if the time >>>>> doesn't permit). >>>>> >>>>> 1. We should have a status in RemoteQuery node, as to whether the >>>>> query in the node should use extended protocol or not, rather than relying >>>>> on the presence of statement name and parameters etc. Amit has already >>>>> added a status with that effect. We need to leverage it. >>>>> >>>>> >>>>> On Tue, May 28, 2013 at 9:04 AM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> The patch fixes the dead code issue, that I described earlier. The >>>>>> code was dead because of two issues: >>>>>> >>>>>> 1. The function CompleteCachedPlan was wrongly setting stmt_name to >>>>>> NULL and this was the main reason ActivateDatanodeStatementOnNode was not >>>>>> being called in the function pgxc_start_command_on_connection. >>>>>> 2. The function SetRemoteStatementName was wrongly assuming that a >>>>>> prepared statement must have some parameters. 
>>>>>> >>>>>> Fixing these two issues makes sure that the function >>>>>> ActivateDatanodeStatementOnNode is now called and statements get prepared >>>>>> on the datanode. >>>>>> This patch would fix bug 3607975. It would however not fix the test >>>>>> case I described in my previous email because of reasons I described. >>>>>> >>>>>> >>>>>> On Tue, May 28, 2013 at 5:50 PM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>>> Can you please explain what this fix does? It would help to have an >>>>>>> elaborate explanation with code snippets. >>>>>>> >>>>>>> >>>>>>> On Sun, May 26, 2013 at 10:18 PM, Abbas Butt < >>>>>>> abb...@en...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Fri, May 24, 2013 at 7:04 PM, Ashutosh Bapat < >>>>>>>> ash...@en...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Fri, May 24, 2013 at 9:01 AM, Abbas Butt < >>>>>>>>> abb...@en...> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Fri, May 24, 2013 at 7:22 AM, Ashutosh Bapat < >>>>>>>>>> ash...@en...> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Thu, May 23, 2013 at 9:21 PM, Abbas Butt < >>>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> While working on test case plancache it was brought up as a >>>>>>>>>>>> review comment that solving bug id 3607975 should solve the problem of the >>>>>>>>>>>> test case. >>>>>>>>>>>> However there is some confusion in the statement of bug id >>>>>>>>>>>> 3607975. >>>>>>>>>>>> >>>>>>>>>>>> "When a user does and PREPARE and then EXECUTEs multiple times, >>>>>>>>>>>> the coordinator keeps on preparing and executing the query on datanode al >>>>>>>>>>>> times, as against preparing once and executing multiple times. This is >>>>>>>>>>>> because somehow the remote query is being prepared as an unnamed statement." >>>>>>>>>>>> >>>>>>>>>>>> Consider this test case >>>>>>>>>>>> >>>>>>>>>>>> A. create table abc(a int, b int); >>>>>>>>>>>> B. insert into abc values(11, 22); >>>>>>>>>>>> C. prepare p1 as select * from abc; >>>>>>>>>>>> D. execute p1; >>>>>>>>>>>> E. execute p1; >>>>>>>>>>>> F. execute p1; >>>>>>>>>>>> >>>>>>>>>>>> Here are the confusions >>>>>>>>>>>> >>>>>>>>>>>> 1. The coordinator never prepares on datanode in response to a >>>>>>>>>>>> prepare issued by a user. >>>>>>>>>>>> In fact step C does nothing on the datanodes. >>>>>>>>>>>> Step D simply sends "SELECT a, b FROM abc" to all >>>>>>>>>>>> datanodes. >>>>>>>>>>>> >>>>>>>>>>>> 2. In step D, ExecuteQuery calls BuildCachedPlan to build a new >>>>>>>>>>>> generic plan, >>>>>>>>>>>> and steps E and F use the already built generic plan. >>>>>>>>>>>> For details see function GetCachedPlan. >>>>>>>>>>>> This means that executing a prepared statement again and >>>>>>>>>>>> again does use cached plans >>>>>>>>>>>> and does not prepare again and again every time we issue an >>>>>>>>>>>> execute. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> The problem is not here. The problem is in do_query() where >>>>>>>>>>> somehow the name of prepared statement gets wiped out and we keep on >>>>>>>>>>> preparing unnamed statements at the datanode. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> We never prepare any named/unnamed statements on the datanode. I >>>>>>>>>> spent time looking at the code written in do_query and functions called >>>>>>>>>> from with in do_query to handle prepared statements but the code written in >>>>>>>>>> pgxc_start_command_on_connection to handle statements prepared on datanodes >>>>>>>>>> is dead as of now. 
It is never called during the complete regression run. >>>>>>>>>> The function ActivateDatanodeStatementOnNode is never called. The way >>>>>>>>>> prepared statements are being handled now is the same as I described >>>>>>>>>> earlier in the mail chain with the help of an example. >>>>>>>>>> The code that is dead was originally added by Mason through >>>>>>>>>> commit d6d2d3d925f571b0b58ff6b4f6504d88e96bb342, back in December 2010. >>>>>>>>>> This code has been changed a lot over the last two years. This commit does >>>>>>>>>> not contain any test cases so I am not sure how did it use to work back >>>>>>>>>> then. >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> This code wasn't dead, when I worked on prepared statements. So, >>>>>>>>> something has gone wrong in-between. That's what we need to find out and >>>>>>>>> fix. Not preparing statements on the datanode is not good for performance >>>>>>>>> either. >>>>>>>>> >>>>>>>> >>>>>>>> I was able to find the reason why the code was dead and the >>>>>>>> attached patch (WIP) fixes the problem. This would now ensure that >>>>>>>> statements are prepared on datanodes whenever required. However there is a >>>>>>>> problem in the way prepared statements are handled. The problem is that >>>>>>>> unless a prepared statement is executed it is never prepared on datanodes, >>>>>>>> hence changing the path before executing the statement gives us incorrect >>>>>>>> results. For Example >>>>>>>> >>>>>>>> create schema s1 create table abc (f1 int) distribute by >>>>>>>> replication; >>>>>>>> create schema s2 create table abc (f1 int) distribute by >>>>>>>> replication; >>>>>>>> >>>>>>>> insert into s1.abc values(123); >>>>>>>> insert into s2.abc values(456); >>>>>>>> set search_path = s2; >>>>>>>> prepare p1 as select f1 from abc; >>>>>>>> set search_path = s1; >>>>>>>> execute p1; >>>>>>>> >>>>>>>> The last execute results in 123, where as it should have resulted >>>>>>>> in 456. >>>>>>>> I can finalize the attached patch by fixing any regression issues >>>>>>>> that may result and that would fix 3607975 and improve performance however >>>>>>>> the above test case would still fail. >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> My conclusion is that the bug ID 3607975 is not reproducible. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> Did you verify it under the debugger? If that would have been >>>>>>>>>>> the case, we would not have seen this problem if search_path changed in >>>>>>>>>>> between steps D and E. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> If search path is changed between steps D & E, the problem occurs >>>>>>>>>> because when the remote query node is created, schema qualification is not >>>>>>>>>> added in the sql statement to be sent to the datanode, but changes in >>>>>>>>>> search path do get communicated to the datanode. The sql statement is built >>>>>>>>>> when execute is issued for the first time and is reused on subsequent >>>>>>>>>> executes. The datanode is totally unaware that the select that it just >>>>>>>>>> received is due to an execute of a prepared statement that was prepared >>>>>>>>>> when search path was some thing else. >>>>>>>>>> >>>>>>>>>> >>>>>>>>> Fixing the prepared statements the way I suggested, would fix the >>>>>>>>> problem, since the statement will get prepared at the datanode, with the >>>>>>>>> same search path settings, as it would on the coordinator. >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Comments are welcome. 
>>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> *Abbas* >>>>>>>>>>>> Architect >>>>>>>>>>>> >>>>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>>>> Skype ID: gabbasb >>>>>>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>>>>>> * >>>>>>>>>>>> Follow us on Twitter* >>>>>>>>>>>> @EnterpriseDB >>>>>>>>>>>> >>>>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>>>>>>>>> New Relic is the only SaaS-based application performance >>>>>>>>>>>> monitoring service >>>>>>>>>>>> that delivers powerful full stack analytics. Optimize and >>>>>>>>>>>> monitor your >>>>>>>>>>>> browser, app, & servers with just a few lines of code. Try New >>>>>>>>>>>> Relic >>>>>>>>>>>> and get this awesome Nerd Life shirt! >>>>>>>>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Postgres-xc-developers mailing list >>>>>>>>>>>> Pos...@li... >>>>>>>>>>>> >>>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Best Wishes, >>>>>>>>>>> Ashutosh Bapat >>>>>>>>>>> EntepriseDB Corporation >>>>>>>>>>> The Postgres Database Company >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> -- >>>>>>>>>> *Abbas* >>>>>>>>>> Architect >>>>>>>>>> >>>>>>>>>> Ph: 92.334.5100153 >>>>>>>>>> Skype ID: gabbasb >>>>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>>>> * >>>>>>>>>> Follow us on Twitter* >>>>>>>>>> @EnterpriseDB >>>>>>>>>> >>>>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Best Wishes, >>>>>>>>> Ashutosh Bapat >>>>>>>>> EntepriseDB Corporation >>>>>>>>> The Postgres Database Company >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> -- >>>>>>>> *Abbas* >>>>>>>> Architect >>>>>>>> >>>>>>>> Ph: 92.334.5100153 >>>>>>>> Skype ID: gabbasb >>>>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>>>> * >>>>>>>> Follow us on Twitter* >>>>>>>> @EnterpriseDB >>>>>>>> >>>>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Best Wishes, >>>>>>> Ashutosh Bapat >>>>>>> EntepriseDB Corporation >>>>>>> The Postgres Database Company >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> -- >>>>>> *Abbas* >>>>>> Architect >>>>>> >>>>>> Ph: 92.334.5100153 >>>>>> Skype ID: gabbasb >>>>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>>>> * >>>>>> Follow us on Twitter* >>>>>> @EnterpriseDB >>>>>> >>>>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Postgres Database Company >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> *Abbas* >>>> Architect >>>> >>>> Ph: 
92.334.5100153 >>>> Skype ID: gabbasb >>>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>>> * >>>> Follow us on Twitter* >>>> @EnterpriseDB >>>> >>>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> >> >> >> >> -- >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
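On the search_path problem discussed in this thread (option 1 in Ashutosh's earlier reply: always use qualified names), the sketch below shows the standard backend helpers such a fix could use, so that the deparsed remote query says s1.abc rather than abc and keeps referring to the same table after SET search_path on the datanode. deparse_qualified_relname() is a hypothetical function, not the actual Postgres-XC deparse code, and as noted earlier any such change would have to be kept out of view dumps.
--------
#include "postgres.h"
#include "utils/builtins.h"
#include "utils/lsyscache.h"

/*
 * Hypothetical sketch: build a schema-qualified relation name for use in
 * a deparsed remote query, so that later search_path changes on the
 * datanode cannot redirect the statement to a different table.
 */
static char *
deparse_qualified_relname(Oid relid)
{
    char   *nspname = get_namespace_name(get_rel_namespace(relid));
    char   *relname = get_rel_name(relid);

    return quote_qualified_identifier(nspname, relname);
}
--------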
From: Tomonari K. <kat...@po...> - 2013-06-06 09:51:18
|
Hi Suzuki-sun Thanks a lot. If I notice something, I'll report again! regards, -------- NTT Software Corporation Tomonari Katsumata (2013/06/06 13:55), Koichi Suzuki wrote: > I added this to the bug tracker with the ID 3614369 > > Regards; > > ---------- > Koichi Suzuki > > > 2013/6/5 鈴木 幸市 <ko...@in...> > >> Yeah, I found that this command stuck and doing this by direct connection >> to datanode works. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/06/05, at 18:12, Tomonari Katsumata < >> kat...@po...> wrote: >> >>> Hi, >>> >>> The queries sent by Suzuki-san work fine, >>> but my problem is still there. >>> Could you try execute the query I sent before? >>> >>> --- >>> EXECUTE DIRECT ON (data1) $$ >>> SELECT >>> count(*) >>> FROM >>> (SELECT * FROM pg_locks l LEFT JOIN >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> $$ >>> --- >>> >>> I don't change this query because it work with Postgres-XC v1.0. >>> >>> regards, >>> ------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/06/05 16:20), Tomonari Katsumata wrote: >>>> Hi, all >>>> >>>> thank you for many responses! >>>> >>>> OK, I'll try it with the current master. >>>> >>>> It seems that it'll work fine... >>>> >>>> Sorry for bothering you. >>>> >>>> regards, >>>> -------- >>>> NTT Software Corporation >>>> Tomonari Katsumata >>>> >>>> >>>> (2013/06/05 13:48), 鈴木 幸市 wrote: >>>>> Now snapshot warning is disabled. Michael committed this patch. >>>>> >>>>> I tested the query with the current master as of this noon and I got >>>> (probably) correct result. >>>>> Here's the result: >>>>> >>>>> koichi=# execute direct on (datanode1) $$ >>>>> select count(*) from (select * from pg_locks) l left join >>>>> (select * from pg_stat_activity) s on (l.database=s.datid); >>>>> $$; >>>>> count >>>>> ------- >>>>> 9 >>>>> (1 row) >>>>> >>>>> koichi=# \q >>>>> … >>>>> >>>>> koichi=# execute direct on (datanode1) $$ >>>>> koichi$# select count(*) from pg_locks l left join pg_stat_activity s >>>>> koichi$# on (l.database=s.datid); >>>>> koichi$# $$; >>>>> count >>>>> ------- >>>>> 11 >>>>> (1 row) >>>>> >>>>> koichi=# >>>>> >>>>> Second statement is simpler version. Anyway, they seem to work find. >>>>> >>>>> Katsumata-san, could you try this with the latest head? It is >>>> available both from sourceforge and github. >>>>> Regards; >>>>> --- >>>>> Koichi Suzuki >>>>> >>>>> >>>>> >>>>> On 2013/06/05, at 13:39, Ashutosh Bapat >>>> <ash...@en...> wrote: >>>>>> >>>>>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata >>>> <kat...@po...> wrote: >>>>>> Hi Suzuki-san, Ashutosh, >>>>>> >>>>>>> Suzuki-san >>>>>> I don't make any user tables. >>>>>> As the simple example I sent before, I use only system-catalogs. >>>>>> >>>>>>> Ashtosh >>>>>> I'm developing database monitor tool and >>>>>> I use "EXECUTE DIRECT" to get database statistics data from >>>>>> particular coordinator/datanode. >>>>>> >>>>>> :) huh >>>>>> >>>>>> I think, monitoring tools should directly query the datanodes or >>>> coordinators. You will get snapshot warning, but that can be ignored I >>>> guess. If they start querying coordinators, there will be performance >>>> drop since coordinators directly handle the clients. >>>>>> Any other thoughts? >>>>>> regard, >>>>>> >>>>>> --------- >>>>>> NTT Software Corporation >>>>>> Tomonari Katsumata >>>>>> >>>>>> (2013/06/04 13:10), Ashutosh Bapat wrote: >>>>>> Hi Tomonari, >>>>>> >>>>>> Thanks for the bug report. >>>>>> >>>>>> I am curious to know, what's the purpose of using EXECUTE DIRECT? 
We >>>>>> discourage using Execute Direct in the applications. It's only for >>>>>> debugging purposes. >>>>>> >>>>>> >>>>>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki >>>> <koi...@gm...>wrote: >>>>>> Thank you Katsumata-san for the report. >>>>>> >>>>>> Could you provide CREATE TABLE statement for each table involved with >>>> some >>>>>> of the data? >>>>>> >>>>>> I will ad this to the bug tracker after I recreate the issue. >>>>>> >>>>>> Best Regards; >>>>>> >>>>>> ---------- >>>>>> Koichi Suzuki >>>>>> >>>>>> >>>>>> 2013/6/4 Tomonari Katsumata <kat...@po...> >>>>>> >>>>>> Hi, I have a problem with query executing. >>>>>> >>>>>> I cant't have any response when I execute a query. >>>>>> This problem occurs when some conditions are met. >>>>>> >>>>>> The conditions are below. >>>>>> --------------------------------------------------------------------- >>>>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>>>>> >>>>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>>>>> >>>>>> 3. In the subquery, it has a JOIN clause. >>>>>> >>>>>> 4. The Join clause is consisted with another subquery. >>>>>> --------------------------------------------------------------------- >>>>>> >>>>>> >>>>>> Simple example query is below. >>>>>> --------------------------------------------------------------- >>>>>> EXECUTE DIRECT ON (data1) $$ >>>>>> SELECT >>>>>> count(*) >>>>>> FROM >>>>>> (SELECT * FROM pg_locks l LEFT JOIN >>>>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>>>>> $$ >>>>>> --------------------------------------------------------------- >>>>>> >>>>>> FYI: >>>>>> This query works fine with Postgres-XC 1.0.3. >>>>>> Is this already known bug ? >>>>>> >>>>>> >>>>>> How can I avoid this problem ? >>>>>> And what kind of info do you need to investigate it ? >>>>>> >>>>>> ---------- >>>>>> NTT Software Corporation >>>>>> Tomonari Katsumata >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >> ------------------------------------------------------------------------------ >>>>>> How ServiceNow helps IT people transform IT departments: >>>>>> 1. A cloud service to automate IT design, transition and operations >>>>>> 2. Dashboards that offer high-level views of enterprise services >>>>>> 3. A single system of record for all IT processes >>>>>> http://p.sf.net/sfu/servicenow-d2d-j >>>>>> _______________________________________________ >>>>>> Postgres-xc-developers mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> >>>>>> >>>>>> >>>>>> >> ------------------------------------------------------------------------------ >>>>>> How ServiceNow helps IT people transform IT departments: >>>>>> 1. A cloud service to automate IT design, transition and operations >>>>>> 2. Dashboards that offer high-level views of enterprise services >>>>>> 3. A single system of record for all IT processes >>>>>> http://p.sf.net/sfu/servicenow-d2d-j >>>>>> _______________________________________________ >>>>>> Postgres-xc-developers mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> -------------------------------------------- >>>>>> NTTソフトウェア株式会社 >>>>>> 技術開発センター OSS基盤技術部門 >>>>>> 勝俣 智成 >>>>>> TEL:045-212-7665 >>>>>> FAX:045-662-7856 >>>>>> E-Mail: kat...@po... 
>>>>>> -------------------------------------------- >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best Wishes, >>>>>> Ashutosh Bapat >>>>>> EntepriseDB Corporation >>>>>> The Postgres Database Company >>>>>> >> ------------------------------------------------------------------------------ >>>>>> How ServiceNow helps IT people transform IT departments: >>>>>> 1. A cloud service to automate IT design, transition and operations >>>>>> 2. Dashboards that offer high-level views of enterprise services >>>>>> 3. A single system of record for all IT processes >>>>>> >> http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ >>>>>> Postgres-xc-developers mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>>> >> ------------------------------------------------------------------------------ >>>> How ServiceNow helps IT people transform IT departments: >>>> 1. A cloud service to automate IT design, transition and operations >>>> 2. Dashboards that offer high-level views of enterprise services >>>> 3. A single system of record for all IT processes >>>> http://p.sf.net/sfu/servicenow-d2d-j >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>> >>> -- >>> -------------------------------------------- >>> NTTソフトウェア株式会社 >>> 技術開発センター OSS基盤技術部門 >>> 勝俣 智成 >>> TEL:045-212-7665 >>> FAX:045-662-7856 >>> E-Mail: kat...@po... >>> -------------------------------------------- >>> >>> >>> >>> >> ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/servicenow-d2d-j >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> -- -------------------------------------------- NTTソフトウェア株式会社 技術開発センター OSS基盤技術部門 勝俣 智成 TEL:045-212-7665 FAX:045-662-7856 E-Mail: kat...@po... -------------------------------------------- |
From: Koichi S. <koi...@gm...> - 2013-06-06 04:55:49
|
I added this to the bug tracker with the ID 3614369 Regards; ---------- Koichi Suzuki 2013/6/5 鈴木 幸市 <ko...@in...> > Yeah, I found that this command stuck and doing this by direct connection > to datanode works. > > Regards; > --- > Koichi Suzuki > > > > On 2013/06/05, at 18:12, Tomonari Katsumata < > kat...@po...> wrote: > > > Hi, > > > > The queries sent by Suzuki-san work fine, > > but my problem is still there. > > Could you try execute the query I sent before? > > > > --- > > EXECUTE DIRECT ON (data1) $$ > > SELECT > > count(*) > > FROM > > (SELECT * FROM pg_locks l LEFT JOIN > > (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > > $$ > > --- > > > > I don't change this query because it work with Postgres-XC v1.0. > > > > regards, > > ------- > > NTT Software Corporation > > Tomonari Katsumata > > > > (2013/06/05 16:20), Tomonari Katsumata wrote: > >> Hi, all > >> > >> thank you for many responses! > >> > >> OK, I'll try it with the current master. > >> > >> It seems that it'll work fine... > >> > >> Sorry for bothering you. > >> > >> regards, > >> -------- > >> NTT Software Corporation > >> Tomonari Katsumata > >> > >> > >> (2013/06/05 13:48), 鈴木 幸市 wrote: > >>> Now snapshot warning is disabled. Michael committed this patch. > >>> > >>> I tested the query with the current master as of this noon and I got > >> (probably) correct result. > >>> Here's the result: > >>> > >>> koichi=# execute direct on (datanode1) $$ > >>> select count(*) from (select * from pg_locks) l left join > >>> (select * from pg_stat_activity) s on (l.database=s.datid); > >>> $$; > >>> count > >>> ------- > >>> 9 > >>> (1 row) > >>> > >>> koichi=# \q > >>> … > >>> > >>> koichi=# execute direct on (datanode1) $$ > >>> koichi$# select count(*) from pg_locks l left join pg_stat_activity s > >>> koichi$# on (l.database=s.datid); > >>> koichi$# $$; > >>> count > >>> ------- > >>> 11 > >>> (1 row) > >>> > >>> koichi=# > >>> > >>> Second statement is simpler version. Anyway, they seem to work find. > >>> > >>> Katsumata-san, could you try this with the latest head? It is > >> available both from sourceforge and github. > >>> Regards; > >>> --- > >>> Koichi Suzuki > >>> > >>> > >>> > >>> On 2013/06/05, at 13:39, Ashutosh Bapat > >> <ash...@en...> wrote: > >>>> > >>>> > >>>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata > >> <kat...@po...> wrote: > >>>> Hi Suzuki-san, Ashutosh, > >>>> > >>>>> Suzuki-san > >>>> I don't make any user tables. > >>>> As the simple example I sent before, I use only system-catalogs. > >>>> > >>>>> Ashtosh > >>>> I'm developing database monitor tool and > >>>> I use "EXECUTE DIRECT" to get database statistics data from > >>>> particular coordinator/datanode. > >>>> > >>>> :) huh > >>>> > >>>> I think, monitoring tools should directly query the datanodes or > >> coordinators. You will get snapshot warning, but that can be ignored I > >> guess. If they start querying coordinators, there will be performance > >> drop since coordinators directly handle the clients. > >>>> Any other thoughts? > >>>> regard, > >>>> > >>>> --------- > >>>> NTT Software Corporation > >>>> Tomonari Katsumata > >>>> > >>>> (2013/06/04 13:10), Ashutosh Bapat wrote: > >>>> Hi Tomonari, > >>>> > >>>> Thanks for the bug report. > >>>> > >>>> I am curious to know, what's the purpose of using EXECUTE DIRECT? We > >>>> discourage using Execute Direct in the applications. It's only for > >>>> debugging purposes. 
> >>>> > >>>> > >>>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki > >> <koi...@gm...>wrote: > >>>> Thank you Katsumata-san for the report. > >>>> > >>>> Could you provide CREATE TABLE statement for each table involved with > >> some > >>>> of the data? > >>>> > >>>> I will ad this to the bug tracker after I recreate the issue. > >>>> > >>>> Best Regards; > >>>> > >>>> ---------- > >>>> Koichi Suzuki > >>>> > >>>> > >>>> 2013/6/4 Tomonari Katsumata <kat...@po...> > >>>> > >>>> Hi, I have a problem with query executing. > >>>> > >>>> I cant't have any response when I execute a query. > >>>> This problem occurs when some conditions are met. > >>>> > >>>> The conditions are below. > >>>> --------------------------------------------------------------------- > >>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). > >>>> > >>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. > >>>> > >>>> 3. In the subquery, it has a JOIN clause. > >>>> > >>>> 4. The Join clause is consisted with another subquery. > >>>> --------------------------------------------------------------------- > >>>> > >>>> > >>>> Simple example query is below. > >>>> --------------------------------------------------------------- > >>>> EXECUTE DIRECT ON (data1) $$ > >>>> SELECT > >>>> count(*) > >>>> FROM > >>>> (SELECT * FROM pg_locks l LEFT JOIN > >>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > >>>> $$ > >>>> --------------------------------------------------------------- > >>>> > >>>> FYI: > >>>> This query works fine with Postgres-XC 1.0.3. > >>>> Is this already known bug ? > >>>> > >>>> > >>>> How can I avoid this problem ? > >>>> And what kind of info do you need to investigate it ? > >>>> > >>>> ---------- > >>>> NTT Software Corporation > >>>> Tomonari Katsumata > >>>> > >>>> > >>>> > >>>> > >>>> > >> > ------------------------------------------------------------------------------ > >>>> How ServiceNow helps IT people transform IT departments: > >>>> 1. A cloud service to automate IT design, transition and operations > >>>> 2. Dashboards that offer high-level views of enterprise services > >>>> 3. A single system of record for all IT processes > >>>> http://p.sf.net/sfu/servicenow-d2d-j > >>>> _______________________________________________ > >>>> Postgres-xc-developers mailing list > >>>> Pos...@li... > >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >>>> > >>>> > >>>> > >>>> > >> > ------------------------------------------------------------------------------ > >>>> How ServiceNow helps IT people transform IT departments: > >>>> 1. A cloud service to automate IT design, transition and operations > >>>> 2. Dashboards that offer high-level views of enterprise services > >>>> 3. A single system of record for all IT processes > >>>> http://p.sf.net/sfu/servicenow-d2d-j > >>>> _______________________________________________ > >>>> Postgres-xc-developers mailing list > >>>> Pos...@li... > >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> -- > >>>> -------------------------------------------- > >>>> NTTソフトウェア株式会社 > >>>> 技術開発センター OSS基盤技術部門 > >>>> 勝俣 智成 > >>>> TEL:045-212-7665 > >>>> FAX:045-662-7856 > >>>> E-Mail: kat...@po... 
> >>>> -------------------------------------------- > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> -- > >>>> Best Wishes, > >>>> Ashutosh Bapat > >>>> EntepriseDB Corporation > >>>> The Postgres Database Company > >>>> > >> > ------------------------------------------------------------------------------ > >>>> How ServiceNow helps IT people transform IT departments: > >>>> 1. A cloud service to automate IT design, transition and operations > >>>> 2. Dashboards that offer high-level views of enterprise services > >>>> 3. A single system of record for all IT processes > >>>> > >> > http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ > >>>> Postgres-xc-developers mailing list > >>>> Pos...@li... > >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >>> > >> > >> > >> > >> > ------------------------------------------------------------------------------ > >> How ServiceNow helps IT people transform IT departments: > >> 1. A cloud service to automate IT design, transition and operations > >> 2. Dashboards that offer high-level views of enterprise services > >> 3. A single system of record for all IT processes > >> http://p.sf.net/sfu/servicenow-d2d-j > >> _______________________________________________ > >> Postgres-xc-developers mailing list > >> Pos...@li... > >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> > > > > > > -- > > -------------------------------------------- > > NTTソフトウェア株式会社 > > 技術開発センター OSS基盤技術部門 > > 勝俣 智成 > > TEL:045-212-7665 > > FAX:045-662-7856 > > E-Mail: kat...@po... > > -------------------------------------------- > > > > > > > > > ------------------------------------------------------------------------------ > > How ServiceNow helps IT people transform IT departments: > > 1. A cloud service to automate IT design, transition and operations > > 2. Dashboards that offer high-level views of enterprise services > > 3. A single system of record for all IT processes > > http://p.sf.net/sfu/servicenow-d2d-j > > _______________________________________________ > > Postgres-xc-developers mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: 鈴木 幸市 <ko...@in...> - 2013-06-05 09:23:40
|
Yeah, I found that this command stuck and doing this by direct connection to datanode works. Regards; --- Koichi Suzuki On 2013/06/05, at 18:12, Tomonari Katsumata <kat...@po...> wrote: > Hi, > > The queries sent by Suzuki-san work fine, > but my problem is still there. > Could you try execute the query I sent before? > > --- > EXECUTE DIRECT ON (data1) $$ > SELECT > count(*) > FROM > (SELECT * FROM pg_locks l LEFT JOIN > (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > $$ > --- > > I don't change this query because it work with Postgres-XC v1.0. > > regards, > ------- > NTT Software Corporation > Tomonari Katsumata > > (2013/06/05 16:20), Tomonari Katsumata wrote: >> Hi, all >> >> thank you for many responses! >> >> OK, I'll try it with the current master. >> >> It seems that it'll work fine... >> >> Sorry for bothering you. >> >> regards, >> -------- >> NTT Software Corporation >> Tomonari Katsumata >> >> >> (2013/06/05 13:48), 鈴木 幸市 wrote: >>> Now snapshot warning is disabled. Michael committed this patch. >>> >>> I tested the query with the current master as of this noon and I got >> (probably) correct result. >>> Here's the result: >>> >>> koichi=# execute direct on (datanode1) $$ >>> select count(*) from (select * from pg_locks) l left join >>> (select * from pg_stat_activity) s on (l.database=s.datid); >>> $$; >>> count >>> ------- >>> 9 >>> (1 row) >>> >>> koichi=# \q >>> … >>> >>> koichi=# execute direct on (datanode1) $$ >>> koichi$# select count(*) from pg_locks l left join pg_stat_activity s >>> koichi$# on (l.database=s.datid); >>> koichi$# $$; >>> count >>> ------- >>> 11 >>> (1 row) >>> >>> koichi=# >>> >>> Second statement is simpler version. Anyway, they seem to work find. >>> >>> Katsumata-san, could you try this with the latest head? It is >> available both from sourceforge and github. >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> >>> >>> On 2013/06/05, at 13:39, Ashutosh Bapat >> <ash...@en...> wrote: >>>> >>>> >>>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata >> <kat...@po...> wrote: >>>> Hi Suzuki-san, Ashutosh, >>>> >>>>> Suzuki-san >>>> I don't make any user tables. >>>> As the simple example I sent before, I use only system-catalogs. >>>> >>>>> Ashtosh >>>> I'm developing database monitor tool and >>>> I use "EXECUTE DIRECT" to get database statistics data from >>>> particular coordinator/datanode. >>>> >>>> :) huh >>>> >>>> I think, monitoring tools should directly query the datanodes or >> coordinators. You will get snapshot warning, but that can be ignored I >> guess. If they start querying coordinators, there will be performance >> drop since coordinators directly handle the clients. >>>> Any other thoughts? >>>> regard, >>>> >>>> --------- >>>> NTT Software Corporation >>>> Tomonari Katsumata >>>> >>>> (2013/06/04 13:10), Ashutosh Bapat wrote: >>>> Hi Tomonari, >>>> >>>> Thanks for the bug report. >>>> >>>> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >>>> discourage using Execute Direct in the applications. It's only for >>>> debugging purposes. >>>> >>>> >>>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki >> <koi...@gm...>wrote: >>>> Thank you Katsumata-san for the report. >>>> >>>> Could you provide CREATE TABLE statement for each table involved with >> some >>>> of the data? >>>> >>>> I will ad this to the bug tracker after I recreate the issue. 
>>>> >>>> Best Regards; >>>> >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> 2013/6/4 Tomonari Katsumata <kat...@po...> >>>> >>>> Hi, I have a problem with query executing. >>>> >>>> I cant't have any response when I execute a query. >>>> This problem occurs when some conditions are met. >>>> >>>> The conditions are below. >>>> --------------------------------------------------------------------- >>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>>> >>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>>> >>>> 3. In the subquery, it has a JOIN clause. >>>> >>>> 4. The Join clause is consisted with another subquery. >>>> --------------------------------------------------------------------- >>>> >>>> >>>> Simple example query is below. >>>> --------------------------------------------------------------- >>>> EXECUTE DIRECT ON (data1) $$ >>>> SELECT >>>> count(*) >>>> FROM >>>> (SELECT * FROM pg_locks l LEFT JOIN >>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>>> $$ >>>> --------------------------------------------------------------- >>>> >>>> FYI: >>>> This query works fine with Postgres-XC 1.0.3. >>>> Is this already known bug ? >>>> >>>> >>>> How can I avoid this problem ? >>>> And what kind of info do you need to investigate it ? >>>> >>>> ---------- >>>> NTT Software Corporation >>>> Tomonari Katsumata >>>> >>>> >>>> >>>> >>>> >> ------------------------------------------------------------------------------ >>>> How ServiceNow helps IT people transform IT departments: >>>> 1. A cloud service to automate IT design, transition and operations >>>> 2. Dashboards that offer high-level views of enterprise services >>>> 3. A single system of record for all IT processes >>>> http://p.sf.net/sfu/servicenow-d2d-j >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>>> >>>> >> ------------------------------------------------------------------------------ >>>> How ServiceNow helps IT people transform IT departments: >>>> 1. A cloud service to automate IT design, transition and operations >>>> 2. Dashboards that offer high-level views of enterprise services >>>> 3. A single system of record for all IT processes >>>> http://p.sf.net/sfu/servicenow-d2d-j >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> -------------------------------------------- >>>> NTTソフトウェア株式会社 >>>> 技術開発センター OSS基盤技術部門 >>>> 勝俣 智成 >>>> TEL:045-212-7665 >>>> FAX:045-662-7856 >>>> E-Mail: kat...@po... >>>> -------------------------------------------- >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Postgres Database Company >>>> >> ------------------------------------------------------------------------------ >>>> How ServiceNow helps IT people transform IT departments: >>>> 1. A cloud service to automate IT design, transition and operations >>>> 2. Dashboards that offer high-level views of enterprise services >>>> 3. A single system of record for all IT processes >>>> >> http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... 
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > -- > -------------------------------------------- > NTTソフトウェア株式会社 > 技術開発センター OSS基盤技術部門 > 勝俣 智成 > TEL:045-212-7665 > FAX:045-662-7856 > E-Mail: kat...@po... > -------------------------------------------- > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Tomonari K. <kat...@po...> - 2013-06-05 09:12:50
|
Hi, The queries sent by Suzuki-san work fine, but my problem is still there. Could you try execute the query I sent before? --- EXECUTE DIRECT ON (data1) $$ SELECT count(*) FROM (SELECT * FROM pg_locks l LEFT JOIN (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a $$ --- I don't change this query because it work with Postgres-XC v1.0. regards, ------- NTT Software Corporation Tomonari Katsumata (2013/06/05 16:20), Tomonari Katsumata wrote: > Hi, all > > thank you for many responses! > > OK, I'll try it with the current master. > > It seems that it'll work fine... > > Sorry for bothering you. > > regards, > -------- > NTT Software Corporation > Tomonari Katsumata > > > (2013/06/05 13:48), 鈴木 幸市 wrote: >> Now snapshot warning is disabled. Michael committed this patch. >> >> I tested the query with the current master as of this noon and I got > (probably) correct result. >> Here's the result: >> >> koichi=# execute direct on (datanode1) $$ >> select count(*) from (select * from pg_locks) l left join >> (select * from pg_stat_activity) s on (l.database=s.datid); >> $$; >> count >> ------- >> 9 >> (1 row) >> >> koichi=# \q >> … >> >> koichi=# execute direct on (datanode1) $$ >> koichi$# select count(*) from pg_locks l left join pg_stat_activity s >> koichi$# on (l.database=s.datid); >> koichi$# $$; >> count >> ------- >> 11 >> (1 row) >> >> koichi=# >> >> Second statement is simpler version. Anyway, they seem to work find. >> >> Katsumata-san, could you try this with the latest head? It is > available both from sourceforge and github. >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/06/05, at 13:39, Ashutosh Bapat > <ash...@en...> wrote: >>> >>> >>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata > <kat...@po...> wrote: >>> Hi Suzuki-san, Ashutosh, >>> >>>> Suzuki-san >>> I don't make any user tables. >>> As the simple example I sent before, I use only system-catalogs. >>> >>>> Ashtosh >>> I'm developing database monitor tool and >>> I use "EXECUTE DIRECT" to get database statistics data from >>> particular coordinator/datanode. >>> >>> :) huh >>> >>> I think, monitoring tools should directly query the datanodes or > coordinators. You will get snapshot warning, but that can be ignored I > guess. If they start querying coordinators, there will be performance > drop since coordinators directly handle the clients. >>> Any other thoughts? >>> regard, >>> >>> --------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/06/04 13:10), Ashutosh Bapat wrote: >>> Hi Tomonari, >>> >>> Thanks for the bug report. >>> >>> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >>> discourage using Execute Direct in the applications. It's only for >>> debugging purposes. >>> >>> >>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki > <koi...@gm...>wrote: >>> Thank you Katsumata-san for the report. >>> >>> Could you provide CREATE TABLE statement for each table involved with > some >>> of the data? >>> >>> I will ad this to the bug tracker after I recreate the issue. >>> >>> Best Regards; >>> >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/6/4 Tomonari Katsumata <kat...@po...> >>> >>> Hi, I have a problem with query executing. >>> >>> I cant't have any response when I execute a query. >>> This problem occurs when some conditions are met. >>> >>> The conditions are below. >>> --------------------------------------------------------------------- >>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>> >>> 2. 
The Query Executing on Datanode has subquery on its FROM-clause. >>> >>> 3. In the subquery, it has a JOIN clause. >>> >>> 4. The Join clause is consisted with another subquery. >>> --------------------------------------------------------------------- >>> >>> >>> Simple example query is below. >>> --------------------------------------------------------------- >>> EXECUTE DIRECT ON (data1) $$ >>> SELECT >>> count(*) >>> FROM >>> (SELECT * FROM pg_locks l LEFT JOIN >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> $$ >>> --------------------------------------------------------------- >>> >>> FYI: >>> This query works fine with Postgres-XC 1.0.3. >>> Is this already known bug ? >>> >>> >>> How can I avoid this problem ? >>> And what kind of info do you need to investigate it ? >>> >>> ---------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> >>> >>> >>> > ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/servicenow-d2d-j >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> > ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/servicenow-d2d-j >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> >>> >>> -- >>> -------------------------------------------- >>> NTTソフトウェア株式会社 >>> 技術開発センター OSS基盤技術部門 >>> 勝俣 智成 >>> TEL:045-212-7665 >>> FAX:045-662-7856 >>> E-Mail: kat...@po... >>> -------------------------------------------- >>> >>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> > ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> > http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- -------------------------------------------- NTTソフトウェア株式会社 技術開発センター OSS基盤技術部門 勝俣 智成 TEL:045-212-7665 FAX:045-662-7856 E-Mail: kat...@po... -------------------------------------------- |
From: 鈴木 幸市 <ko...@in...> - 2013-06-05 07:34:24
|
You are very welcome. Please do not hesitate to post your problems/questions. Regards; --- Koichi Suzuki On 2013/06/05, at 16:20, Tomonari Katsumata <kat...@po...> wrote: > Hi, all > > thank you for many responses! > > OK, I'll try it with the current master. > > It seems that it'll work fine... > > Sorry for bothering you. > > regards, > -------- > NTT Software Corporation > Tomonari Katsumata > > > (2013/06/05 13:48), 鈴木 幸市 wrote: >> Now snapshot warning is disabled. Michael committed this patch. >> >> I tested the query with the current master as of this noon and I got > (probably) correct result. >> >> Here's the result: >> >> koichi=# execute direct on (datanode1) $$ >> select count(*) from (select * from pg_locks) l left join >> (select * from pg_stat_activity) s on (l.database=s.datid); >> $$; >> count >> ------- >> 9 >> (1 row) >> >> koichi=# \q >> … >> >> koichi=# execute direct on (datanode1) $$ >> koichi$# select count(*) from pg_locks l left join pg_stat_activity s >> koichi$# on (l.database=s.datid); >> koichi$# $$; >> count >> ------- >> 11 >> (1 row) >> >> koichi=# >> >> Second statement is simpler version. Anyway, they seem to work find. >> >> Katsumata-san, could you try this with the latest head? It is > available both from sourceforge and github. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/06/05, at 13:39, Ashutosh Bapat > <ash...@en...> wrote: >> >>> >>> >>> >>> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata > <kat...@po...> wrote: >>> Hi Suzuki-san, Ashutosh, >>> >>>> Suzuki-san >>> I don't make any user tables. >>> As the simple example I sent before, I use only system-catalogs. >>> >>>> Ashtosh >>> I'm developing database monitor tool and >>> I use "EXECUTE DIRECT" to get database statistics data from >>> particular coordinator/datanode. >>> >>> :) huh >>> >>> I think, monitoring tools should directly query the datanodes or > coordinators. You will get snapshot warning, but that can be ignored I > guess. If they start querying coordinators, there will be performance > drop since coordinators directly handle the clients. >>> >>> Any other thoughts? >>> regard, >>> >>> --------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> (2013/06/04 13:10), Ashutosh Bapat wrote: >>> Hi Tomonari, >>> >>> Thanks for the bug report. >>> >>> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >>> discourage using Execute Direct in the applications. It's only for >>> debugging purposes. >>> >>> >>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki > <koi...@gm...>wrote: >>> >>> Thank you Katsumata-san for the report. >>> >>> Could you provide CREATE TABLE statement for each table involved with > some >>> of the data? >>> >>> I will ad this to the bug tracker after I recreate the issue. >>> >>> Best Regards; >>> >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/6/4 Tomonari Katsumata <kat...@po...> >>> >>> Hi, I have a problem with query executing. >>> >>> I cant't have any response when I execute a query. >>> This problem occurs when some conditions are met. >>> >>> The conditions are below. >>> --------------------------------------------------------------------- >>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>> >>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>> >>> 3. In the subquery, it has a JOIN clause. >>> >>> 4. The Join clause is consisted with another subquery. >>> --------------------------------------------------------------------- >>> >>> >>> Simple example query is below. 
>>> --------------------------------------------------------------- >>> EXECUTE DIRECT ON (data1) $$ >>> SELECT >>> count(*) >>> FROM >>> (SELECT * FROM pg_locks l LEFT JOIN >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> $$ >>> --------------------------------------------------------------- >>> >>> FYI: >>> This query works fine with Postgres-XC 1.0.3. >>> Is this already known bug ? >>> >>> >>> How can I avoid this problem ? >>> And what kind of info do you need to investigate it ? >>> >>> ---------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> >>> >>> >>> > ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/servicenow-d2d-j >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> > ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/servicenow-d2d-j >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> >>> >>> -- >>> -------------------------------------------- >>> NTTソフトウェア株式会社 >>> 技術開発センター OSS基盤技術部門 >>> 勝俣 智成 >>> TEL:045-212-7665 >>> FAX:045-662-7856 >>> E-Mail: kat...@po... >>> -------------------------------------------- >>> >>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> > ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> > http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > > |
From: Tomonari K. <kat...@po...> - 2013-06-05 07:21:25
|
Hi, all thank you for many responses! OK, I'll try it with the current master. It seems that it'll work fine... Sorry for bothering you. regards, -------- NTT Software Corporation Tomonari Katsumata (2013/06/05 13:48), 鈴木 幸市 wrote: > Now snapshot warning is disabled. Michael committed this patch. > > I tested the query with the current master as of this noon and I got (probably) correct result. > > Here's the result: > > koichi=# execute direct on (datanode1) $$ > select count(*) from (select * from pg_locks) l left join > (select * from pg_stat_activity) s on (l.database=s.datid); > $$; > count > ------- > 9 > (1 row) > > koichi=# \q > … > > koichi=# execute direct on (datanode1) $$ > koichi$# select count(*) from pg_locks l left join pg_stat_activity s > koichi$# on (l.database=s.datid); > koichi$# $$; > count > ------- > 11 > (1 row) > > koichi=# > > Second statement is simpler version. Anyway, they seem to work find. > > Katsumata-san, could you try this with the latest head? It is available both from sourceforge and github. > > Regards; > --- > Koichi Suzuki > > > > On 2013/06/05, at 13:39, Ashutosh Bapat <ash...@en...> wrote: > >> >> >> >> On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata <kat...@po...> wrote: >> Hi Suzuki-san, Ashutosh, >> >>> Suzuki-san >> I don't make any user tables. >> As the simple example I sent before, I use only system-catalogs. >> >>> Ashtosh >> I'm developing database monitor tool and >> I use "EXECUTE DIRECT" to get database statistics data from >> particular coordinator/datanode. >> >> :) huh >> >> I think, monitoring tools should directly query the datanodes or coordinators. You will get snapshot warning, but that can be ignored I guess. If they start querying coordinators, there will be performance drop since coordinators directly handle the clients. >> >> Any other thoughts? >> regard, >> >> --------- >> NTT Software Corporation >> Tomonari Katsumata >> >> (2013/06/04 13:10), Ashutosh Bapat wrote: >> Hi Tomonari, >> >> Thanks for the bug report. >> >> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >> discourage using Execute Direct in the applications. It's only for >> debugging purposes. >> >> >> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki <koi...@gm...>wrote: >> >> Thank you Katsumata-san for the report. >> >> Could you provide CREATE TABLE statement for each table involved with some >> of the data? >> >> I will ad this to the bug tracker after I recreate the issue. >> >> Best Regards; >> >> ---------- >> Koichi Suzuki >> >> >> 2013/6/4 Tomonari Katsumata <kat...@po...> >> >> Hi, I have a problem with query executing. >> >> I cant't have any response when I execute a query. >> This problem occurs when some conditions are met. >> >> The conditions are below. >> --------------------------------------------------------------------- >> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >> >> 2. The Query Executing on Datanode has subquery on its FROM-clause. >> >> 3. In the subquery, it has a JOIN clause. >> >> 4. The Join clause is consisted with another subquery. >> --------------------------------------------------------------------- >> >> >> Simple example query is below. 
>> --------------------------------------------------------------- >> EXECUTE DIRECT ON (data1) $$ >> SELECT >> count(*) >> FROM >> (SELECT * FROM pg_locks l LEFT JOIN >> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >> $$ >> --------------------------------------------------------------- >> >> FYI: >> This query works fine with Postgres-XC 1.0.3. >> Is this already known bug ? >> >> >> How can I avoid this problem ? >> And what kind of info do you need to investigate it ? >> >> ---------- >> NTT Software Corporation >> Tomonari Katsumata >> >> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> >> -- >> -------------------------------------------- >> NTTソフトウェア株式会社 >> 技術開発センター OSS基盤技術部門 >> 勝俣 智成 >> TEL:045-212-7665 >> FAX:045-662-7856 >> E-Mail: kat...@po... >> -------------------------------------------- >> >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
From: 鈴木 幸市 <ko...@in...> - 2013-06-05 05:10:14
|
Thanks Michael for the input. This is what I remembered. --- Koichi Suzuki On 2013/06/05, at 14:03, Michael Paquier <mic...@gm...> wrote: > > > > On Wed, Jun 5, 2013 at 1:39 PM, Ashutosh Bapat <ash...@en...> wrote: > > > > On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata <kat...@po...> wrote: > Hi Suzuki-san, Ashutosh, > > >Suzuki-san > I don't make any user tables. > As the simple example I sent before, I use only system-catalogs. > > >Ashtosh > I'm developing database monitor tool and > I use "EXECUTE DIRECT" to get database statistics data from > particular coordinator/datanode. > > :) huh > > I think, monitoring tools should directly query the datanodes or coordinators. You will get snapshot warning, but that can be ignored I guess. > Worth mentioning that it is not the case for 1.1: > https://github.com/postgres-xc/postgres-xc/commit/fe9985c168d85738e5d88ed9407b840449f31b75 > You get a clean snapshot for read queries run directly from Datanodes. > > If they start querying coordinators, there will be performance drop since coordinators directly handle the clients. > Yep. > -- > Michael > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Michael P. <mic...@gm...> - 2013-06-05 05:03:56
|
On Wed, Jun 5, 2013 at 1:39 PM, Ashutosh Bapat < ash...@en...> wrote: > > > > On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata < > kat...@po...> wrote: > >> Hi Suzuki-san, Ashutosh, >> >> >Suzuki-san >> I don't make any user tables. >> As the simple example I sent before, I use only system-catalogs. >> >> >Ashtosh >> I'm developing database monitor tool and >> I use "EXECUTE DIRECT" to get database statistics data from >> particular coordinator/datanode. >> >> :) huh > > I think, monitoring tools should directly query the datanodes or > coordinators. You will get snapshot warning, but that can be ignored I > guess. > Worth mentioning that it is not the case for 1.1: https://github.com/postgres-xc/postgres-xc/commit/fe9985c168d85738e5d88ed9407b840449f31b75 You get a clean snapshot for read queries run directly from Datanodes. If they start querying coordinators, there will be performance drop since > coordinators directly handle the clients. > Yep. -- Michael |
From: Abbas B. <abb...@en...> - 2013-06-05 04:53:40
|
Snapshot warning comes when you connect directly to the datanode and not when an execute direct is issued for a datanode. That warning is still there. if (IS_PGXC_DATANODE && !isRestoreMode && snapshot_source == SNAPSHOT_UNDEFINED && IsPostmasterEnvironment && IsNormalProcessingMode() && !IsAutoVacuumLauncherProcess()) { elog(WARNING, "Do not have a GTM snapshot available"); } On Wed, Jun 5, 2013 at 9:48 AM, 鈴木 幸市 <ko...@in...> wrote: > Now snapshot warning is disabled. Michael committed this patch. > > I tested the query with the current master as of this noon and I got > (probably) correct result. > > Here's the result: > > koichi=# execute direct on (datanode1) $$ > select count(*) from (select * from pg_locks) l left join > (select * from pg_stat_activity) s on (l.database=s.datid); > $$; > count > ------- > 9 > (1 row) > > koichi=# \q > … > > koichi=# execute direct on (datanode1) $$ > koichi$# select count(*) from pg_locks l left join pg_stat_activity s > koichi$# on (l.database=s.datid); > koichi$# $$; > count > ------- > 11 > (1 row) > > koichi=# > > Second statement is simpler version. Anyway, they seem to work find. > > Katsumata-san, could you try this with the latest head? It is available > both from sourceforge and github. > > Regards; > --- > Koichi Suzuki > > > > On 2013/06/05, at 13:39, Ashutosh Bapat <ash...@en...> > wrote: > > > > > On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata < > kat...@po...> wrote: > >> Hi Suzuki-san, Ashutosh, >> >> >Suzuki-san >> I don't make any user tables. >> As the simple example I sent before, I use only system-catalogs. >> >> >Ashtosh >> I'm developing database monitor tool and >> I use "EXECUTE DIRECT" to get database statistics data from >> particular coordinator/datanode. >> >> :) huh > > I think, monitoring tools should directly query the datanodes or > coordinators. You will get snapshot warning, but that can be ignored I > guess. If they start querying coordinators, there will be performance drop > since coordinators directly handle the clients. > > Any other thoughts? > >> regard, >> >> --------- >> NTT Software Corporation >> Tomonari Katsumata >> >> (2013/06/04 13:10), Ashutosh Bapat wrote: >> >>> Hi Tomonari, >>> >>> Thanks for the bug report. >>> >>> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >>> discourage using Execute Direct in the applications. It's only for >>> debugging purposes. >>> >>> >>> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki <koi...@gm... >>> >**wrote: >>> >>> Thank you Katsumata-san for the report. >>>> >>>> Could you provide CREATE TABLE statement for each table involved with >>>> some >>>> of the data? >>>> >>>> I will ad this to the bug tracker after I recreate the issue. >>>> >>>> Best Regards; >>>> >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> 2013/6/4 Tomonari Katsumata <katsumata.tomonari@po.ntts.**co.jp<kat...@po...> >>>> > >>>> >>>> Hi, I have a problem with query executing. >>>>> >>>>> I cant't have any response when I execute a query. >>>>> This problem occurs when some conditions are met. >>>>> >>>>> The conditions are below. >>>>> ------------------------------**------------------------------** >>>>> --------- >>>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>>>> >>>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>>>> >>>>> 3. In the subquery, it has a JOIN clause. >>>>> >>>>> 4. The Join clause is consisted with another subquery. 
>>>>> ------------------------------**------------------------------** >>>>> --------- >>>>> >>>>> >>>>> Simple example query is below. >>>>> ------------------------------**------------------------------**--- >>>>> EXECUTE DIRECT ON (data1) $$ >>>>> SELECT >>>>> count(*) >>>>> FROM >>>>> (SELECT * FROM pg_locks l LEFT JOIN >>>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>>>> $$ >>>>> ------------------------------**------------------------------**--- >>>>> >>>>> FYI: >>>>> This query works fine with Postgres-XC 1.0.3. >>>>> Is this already known bug ? >>>>> >>>>> >>>>> How can I avoid this problem ? >>>>> And what kind of info do you need to investigate it ? >>>>> >>>>> ---------- >>>>> NTT Software Corporation >>>>> Tomonari Katsumata >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------**------------------------------** >>>>> ------------------ >>>>> How ServiceNow helps IT people transform IT departments: >>>>> 1. A cloud service to automate IT design, transition and operations >>>>> 2. Dashboards that offer high-level views of enterprise services >>>>> 3. A single system of record for all IT processes >>>>> http://p.sf.net/sfu/**servicenow-d2d-j<http://p.sf.net/sfu/servicenow-d2d-j> >>>>> ______________________________**_________________ >>>>> Postgres-xc-developers mailing list >>>>> Postgres-xc-developers@lists.**sourceforge.net<Pos...@li...> >>>>> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-** >>>>> developers<https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> >>>>> >>>>> >>>> >>>> ------------------------------**------------------------------** >>>> ------------------ >>>> How ServiceNow helps IT people transform IT departments: >>>> 1. A cloud service to automate IT design, transition and operations >>>> 2. Dashboards that offer high-level views of enterprise services >>>> 3. A single system of record for all IT processes >>>> http://p.sf.net/sfu/**servicenow-d2d-j<http://p.sf.net/sfu/servicenow-d2d-j> >>>> ______________________________**_________________ >>>> Postgres-xc-developers mailing list >>>> Postgres-xc-developers@lists.**sourceforge.net<Pos...@li...> >>>> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-**developers<https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> >>>> >>>> >>>> >>> >> >> -- >> ------------------------------**-------------- >> NTTソフトウェア株式会社 >> 技術開発センター OSS基盤技術部門 >> 勝俣 智成 >> TEL:045-212-7665 >> FAX:045-662-7856 >> E-Mail: kat...@po....**jp<kat...@po...> >> ------------------------------**-------------- >> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > > http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. 
A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
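To illustrate the distinction Abbas describes, a minimal session sketch follows; the datanode port and the node name are placeholders for illustration, not values taken from this thread:
---------------------------------------------------------------
-- connected straight to a datanode (e.g. psql -p 15432 postgres),
-- a read like this can trigger "WARNING: Do not have a GTM snapshot available"
SELECT count(*) FROM pg_locks;
-- the same read issued through a coordinator with EXECUTE DIRECT
-- does not go through that warning path
EXECUTE DIRECT ON (datanode1) $$ SELECT count(*) FROM pg_locks $$;
---------------------------------------------------------------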
From: 鈴木 幸市 <ko...@in...> - 2013-06-05 04:48:50
|
Now snapshot warning is disabled. Michael committed this patch. I tested the query with the current master as of this noon and I got (probably) correct result. Here's the result: koichi=# execute direct on (datanode1) $$ select count(*) from (select * from pg_locks) l left join (select * from pg_stat_activity) s on (l.database=s.datid); $$; count ------- 9 (1 row) koichi=# \q … koichi=# execute direct on (datanode1) $$ koichi$# select count(*) from pg_locks l left join pg_stat_activity s koichi$# on (l.database=s.datid); koichi$# $$; count ------- 11 (1 row) koichi=# Second statement is simpler version. Anyway, they seem to work find. Katsumata-san, could you try this with the latest head? It is available both from sourceforge and github. Regards; --- Koichi Suzuki On 2013/06/05, at 13:39, Ashutosh Bapat <ash...@en...> wrote: > > > > On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata <kat...@po...> wrote: > Hi Suzuki-san, Ashutosh, > > >Suzuki-san > I don't make any user tables. > As the simple example I sent before, I use only system-catalogs. > > >Ashtosh > I'm developing database monitor tool and > I use "EXECUTE DIRECT" to get database statistics data from > particular coordinator/datanode. > > :) huh > > I think, monitoring tools should directly query the datanodes or coordinators. You will get snapshot warning, but that can be ignored I guess. If they start querying coordinators, there will be performance drop since coordinators directly handle the clients. > > Any other thoughts? > regard, > > --------- > NTT Software Corporation > Tomonari Katsumata > > (2013/06/04 13:10), Ashutosh Bapat wrote: > Hi Tomonari, > > Thanks for the bug report. > > I am curious to know, what's the purpose of using EXECUTE DIRECT? We > discourage using Execute Direct in the applications. It's only for > debugging purposes. > > > On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki <koi...@gm...>wrote: > > Thank you Katsumata-san for the report. > > Could you provide CREATE TABLE statement for each table involved with some > of the data? > > I will ad this to the bug tracker after I recreate the issue. > > Best Regards; > > ---------- > Koichi Suzuki > > > 2013/6/4 Tomonari Katsumata <kat...@po...> > > Hi, I have a problem with query executing. > > I cant't have any response when I execute a query. > This problem occurs when some conditions are met. > > The conditions are below. > --------------------------------------------------------------------- > 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). > > 2. The Query Executing on Datanode has subquery on its FROM-clause. > > 3. In the subquery, it has a JOIN clause. > > 4. The Join clause is consisted with another subquery. > --------------------------------------------------------------------- > > > Simple example query is below. > --------------------------------------------------------------- > EXECUTE DIRECT ON (data1) $$ > SELECT > count(*) > FROM > (SELECT * FROM pg_locks l LEFT JOIN > (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > $$ > --------------------------------------------------------------- > > FYI: > This query works fine with Postgres-XC 1.0.3. > Is this already known bug ? > > > How can I avoid this problem ? > And what kind of info do you need to investigate it ? > > ---------- > NTT Software Corporation > Tomonari Katsumata > > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. 
A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > > -- > -------------------------------------------- > NTTソフトウェア株式会社 > 技術開発センター OSS基盤技術部門 > 勝俣 智成 > TEL:045-212-7665 > FAX:045-662-7856 > E-Mail: kat...@po... > -------------------------------------------- > > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Ashutosh B. <ash...@en...> - 2013-06-05 04:40:01
|
On Wed, Jun 5, 2013 at 9:57 AM, Tomonari Katsumata < kat...@po...> wrote: > Hi Suzuki-san, Ashutosh, > > >Suzuki-san > I don't make any user tables. > As the simple example I sent before, I use only system-catalogs. > > >Ashtosh > I'm developing database monitor tool and > I use "EXECUTE DIRECT" to get database statistics data from > particular coordinator/datanode. > > :) huh I think, monitoring tools should directly query the datanodes or coordinators. You will get snapshot warning, but that can be ignored I guess. If they start querying coordinators, there will be performance drop since coordinators directly handle the clients. Any other thoughts? > regard, > > --------- > NTT Software Corporation > Tomonari Katsumata > > (2013/06/04 13:10), Ashutosh Bapat wrote: > >> Hi Tomonari, >> >> Thanks for the bug report. >> >> I am curious to know, what's the purpose of using EXECUTE DIRECT? We >> discourage using Execute Direct in the applications. It's only for >> debugging purposes. >> >> >> On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki <koi...@gm...> >> **wrote: >> >> Thank you Katsumata-san for the report. >>> >>> Could you provide CREATE TABLE statement for each table involved with >>> some >>> of the data? >>> >>> I will ad this to the bug tracker after I recreate the issue. >>> >>> Best Regards; >>> >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/6/4 Tomonari Katsumata <katsumata.tomonari@po.ntts.**co.jp<kat...@po...> >>> > >>> >>> Hi, I have a problem with query executing. >>>> >>>> I cant't have any response when I execute a query. >>>> This problem occurs when some conditions are met. >>>> >>>> The conditions are below. >>>> ------------------------------**------------------------------** >>>> --------- >>>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>>> >>>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>>> >>>> 3. In the subquery, it has a JOIN clause. >>>> >>>> 4. The Join clause is consisted with another subquery. >>>> ------------------------------**------------------------------** >>>> --------- >>>> >>>> >>>> Simple example query is below. >>>> ------------------------------**------------------------------**--- >>>> EXECUTE DIRECT ON (data1) $$ >>>> SELECT >>>> count(*) >>>> FROM >>>> (SELECT * FROM pg_locks l LEFT JOIN >>>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>>> $$ >>>> ------------------------------**------------------------------**--- >>>> >>>> FYI: >>>> This query works fine with Postgres-XC 1.0.3. >>>> Is this already known bug ? >>>> >>>> >>>> How can I avoid this problem ? >>>> And what kind of info do you need to investigate it ? >>>> >>>> ---------- >>>> NTT Software Corporation >>>> Tomonari Katsumata >>>> >>>> >>>> >>>> >>>> ------------------------------**------------------------------** >>>> ------------------ >>>> How ServiceNow helps IT people transform IT departments: >>>> 1. A cloud service to automate IT design, transition and operations >>>> 2. Dashboards that offer high-level views of enterprise services >>>> 3. 
A single system of record for all IT processes >>>> http://p.sf.net/sfu/**servicenow-d2d-j<http://p.sf.net/sfu/servicenow-d2d-j> >>>> ______________________________**_________________ >>>> Postgres-xc-developers mailing list >>>> Postgres-xc-developers@lists.**sourceforge.net<Pos...@li...> >>>> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-**developers<https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> >>>> >>>> >>> >>> ------------------------------**------------------------------** >>> ------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/**servicenow-d2d-j<http://p.sf.net/sfu/servicenow-d2d-j> >>> ______________________________**_________________ >>> Postgres-xc-developers mailing list >>> Postgres-xc-developers@lists.**sourceforge.net<Pos...@li...> >>> https://lists.sourceforge.net/**lists/listinfo/postgres-xc-**developers<https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers> >>> >>> >>> >> > > -- > ------------------------------**-------------- > NTTソフトウェア株式会社 > 技術開発センター OSS基盤技術部門 > 勝俣 智成 > TEL:045-212-7665 > FAX:045-662-7856 > E-Mail: kat...@po....**jp<kat...@po...> > ------------------------------**-------------- > > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Tomonari K. <kat...@po...> - 2013-06-05 04:28:22
|
Hi Suzuki-san, Ashutosh, >Suzuki-san I don't make any user tables. As the simple example I sent before, I use only system-catalogs. >Ashtosh I'm developing database monitor tool and I use "EXECUTE DIRECT" to get database statistics data from particular coordinator/datanode. regard, --------- NTT Software Corporation Tomonari Katsumata (2013/06/04 13:10), Ashutosh Bapat wrote: > Hi Tomonari, > > Thanks for the bug report. > > I am curious to know, what's the purpose of using EXECUTE DIRECT? We > discourage using Execute Direct in the applications. It's only for > debugging purposes. > > > On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki <koi...@gm...>wrote: > >> Thank you Katsumata-san for the report. >> >> Could you provide CREATE TABLE statement for each table involved with some >> of the data? >> >> I will ad this to the bug tracker after I recreate the issue. >> >> Best Regards; >> >> ---------- >> Koichi Suzuki >> >> >> 2013/6/4 Tomonari Katsumata <kat...@po...> >> >>> Hi, I have a problem with query executing. >>> >>> I cant't have any response when I execute a query. >>> This problem occurs when some conditions are met. >>> >>> The conditions are below. >>> --------------------------------------------------------------------- >>> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >>> >>> 2. The Query Executing on Datanode has subquery on its FROM-clause. >>> >>> 3. In the subquery, it has a JOIN clause. >>> >>> 4. The Join clause is consisted with another subquery. >>> --------------------------------------------------------------------- >>> >>> >>> Simple example query is below. >>> --------------------------------------------------------------- >>> EXECUTE DIRECT ON (data1) $$ >>> SELECT >>> count(*) >>> FROM >>> (SELECT * FROM pg_locks l LEFT JOIN >>> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >>> $$ >>> --------------------------------------------------------------- >>> >>> FYI: >>> This query works fine with Postgres-XC 1.0.3. >>> Is this already known bug ? >>> >>> >>> How can I avoid this problem ? >>> And what kind of info do you need to investigate it ? >>> >>> ---------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> How ServiceNow helps IT people transform IT departments: >>> 1. A cloud service to automate IT design, transition and operations >>> 2. Dashboards that offer high-level views of enterprise services >>> 3. A single system of record for all IT processes >>> http://p.sf.net/sfu/servicenow-d2d-j >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > -- -------------------------------------------- NTTソフトウェア株式会社 技術開発センター OSS基盤技術部門 勝俣 智成 TEL:045-212-7665 FAX:045-662-7856 E-Mail: kat...@po... -------------------------------------------- |
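As an illustration of that usage pattern, a statistics pull through EXECUTE DIRECT could look like the sketch below; the node name and the chosen columns are only an example and are not taken from the monitoring tool itself:
---------------------------------------------------------------
EXECUTE DIRECT ON (data1) $$
SELECT datname, numbackends, xact_commit, xact_rollback
FROM pg_stat_database
$$
---------------------------------------------------------------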
From: Ashutosh B. <ash...@en...> - 2013-06-04 04:10:19
|
Hi Tomonari, Thanks for the bug report. I am curious to know, what's the purpose of using EXECUTE DIRECT? We discourage using Execute Direct in the applications. It's only for debugging purposes. On Tue, Jun 4, 2013 at 7:28 AM, Koichi Suzuki <koi...@gm...>wrote: > Thank you Katsumata-san for the report. > > Could you provide CREATE TABLE statement for each table involved with some > of the data? > > I will ad this to the bug tracker after I recreate the issue. > > Best Regards; > > ---------- > Koichi Suzuki > > > 2013/6/4 Tomonari Katsumata <kat...@po...> > >> Hi, I have a problem with query executing. >> >> I cant't have any response when I execute a query. >> This problem occurs when some conditions are met. >> >> The conditions are below. >> --------------------------------------------------------------------- >> 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). >> >> 2. The Query Executing on Datanode has subquery on its FROM-clause. >> >> 3. In the subquery, it has a JOIN clause. >> >> 4. The Join clause is consisted with another subquery. >> --------------------------------------------------------------------- >> >> >> Simple example query is below. >> --------------------------------------------------------------- >> EXECUTE DIRECT ON (data1) $$ >> SELECT >> count(*) >> FROM >> (SELECT * FROM pg_locks l LEFT JOIN >> (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a >> $$ >> --------------------------------------------------------------- >> >> FYI: >> This query works fine with Postgres-XC 1.0.3. >> Is this already known bug ? >> >> >> How can I avoid this problem ? >> And what kind of info do you need to investigate it ? >> >> ---------- >> NTT Software Corporation >> Tomonari Katsumata >> >> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Koichi S. <koi...@gm...> - 2013-06-04 01:59:04
|
Thank you Katsumata-san for the report. Could you provide CREATE TABLE statement for each table involved with some of the data? I will ad this to the bug tracker after I recreate the issue. Best Regards; ---------- Koichi Suzuki 2013/6/4 Tomonari Katsumata <kat...@po...> > Hi, I have a problem with query executing. > > I cant't have any response when I execute a query. > This problem occurs when some conditions are met. > > The conditions are below. > --------------------------------------------------------------------- > 1. Issuing "EXECUTE DIRECT" to Datanode(ofcourse, via Coordinator). > > 2. The Query Executing on Datanode has subquery on its FROM-clause. > > 3. In the subquery, it has a JOIN clause. > > 4. The Join clause is consisted with another subquery. > --------------------------------------------------------------------- > > > Simple example query is below. > --------------------------------------------------------------- > EXECUTE DIRECT ON (data1) $$ > SELECT > count(*) > FROM > (SELECT * FROM pg_locks l LEFT JOIN > (SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a > $$ > --------------------------------------------------------------- > > FYI: > This query works fine with Postgres-XC 1.0.3. > Is this already known bug ? > > > How can I avoid this problem ? > And what kind of info do you need to investigate it ? > > ---------- > NTT Software Corporation > Tomonari Katsumata > > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Tomonari K. <kat...@po...> - 2013-06-04 01:32:19
|
Hi, I have a problem with query execution. I get no response when I execute a query. This problem occurs when the following conditions are met.
---------------------------------------------------------------------
1. "EXECUTE DIRECT" is issued to a Datanode (of course, via a Coordinator).
2. The query executed on the Datanode has a subquery in its FROM clause.
3. That subquery contains a JOIN clause.
4. The JOIN involves another subquery.
---------------------------------------------------------------------
A simple example query is below.
---------------------------------------------------------------
EXECUTE DIRECT ON (data1) $$
SELECT
count(*)
FROM
(SELECT * FROM pg_locks l LEFT JOIN
(SELECT * FROM pg_stat_activity) s ON l.database = s.datid) a
$$
---------------------------------------------------------------
FYI: This query works fine with Postgres-XC 1.0.3. Is this an already known bug?
How can I avoid this problem? And what kind of information do you need to investigate it?
----------
NTT Software Corporation
Tomonari Katsumata |
From: 鈴木 幸市 <ko...@in...> - 2013-06-03 10:37:57
|
Thanks a lot. Another fix is welcome. --- Koichi Suzuki On 2013/06/03, at 17:49, Tomonari Katsumata <kat...@po...> wrote: > Hi, > > I'm testing with pgxc_ctl utility. > This is very helpful tool, > but I noticed two things to fix. > > 1. gtm_proxy.conf written by pgxc_ctl is unavailable. > > Because the value of gtm_host parameter is not quoted. > > 2. misspelled message when deploying. > > Not "deplloying", it's "deploying". > > > I attached a patch against master(*) > (*)5c07b4ec0623dfb78c7472ae28112bd8a84c5c0d > > regards, > -------------- > NTT Software Corporation > Tomonari Katsumata > > <pgxc_ctl_small_fix.patch>------------------------------------------------------------------------------ > Get 100% visibility into Java/.NET code with AppDynamics Lite > It's a free troubleshooting tool designed for production > Get down to code-level detail for bottlenecks, with <2% overhead. > Download for free and get started troubleshooting in minutes. > http://p.sf.net/sfu/appdyn_d2d_ap2_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Tomonari K. <kat...@po...> - 2013-06-03 08:50:18
|
Hi, I'm testing with the pgxc_ctl utility. It is a very helpful tool, but I noticed two things to fix.
1. The gtm_proxy.conf written by pgxc_ctl is not usable, because the value of the gtm_host parameter is not quoted.
2. A misspelled message when deploying: it should be "deploying", not "deplloying".
I attached a patch against master(*).
(*) 5c07b4ec0623dfb78c7472ae28112bd8a84c5c0d
regards,
--------------
NTT Software Corporation
Tomonari Katsumata |
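For reference, a gtm_proxy.conf fragment with the host value quoted would look something like the lines below; the host name and port are placeholders and are not taken from the attached patch:
---------------
gtm_host = 'localhost'   # quoted string value, as the fix requires
gtm_port = 20001
---------------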