From: Michael P. <mic...@gm...> - 2013-08-21 00:10:03
On Wed, Aug 21, 2013 at 9:05 AM, West, William <ww...@uc...> wrote:
> Michael,
>
> I ran the following query successfully on a remote node coord2 from node
> coord1:
>
> postgres=# EXECUTE DIRECT on coord2 'select clock_timestamp()';
>         clock_timestamp
> -------------------------------
>  2013-08-20 16:42:16.647818-07
> (1 row)
>
> And the reverse works as well:
>
> postgres=# EXECUTE DIRECT on coord2 'select clock_timestamp()';
>         clock_timestamp
> -------------------------------
>  2013-08-20 16:50:25.552218-07
> (1 row)
>
> So they are capable of communication. Does this give you an idea of why
> can't I do DML statements across nodes?

Did you try the Datanodes?
--
Michael
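[Editor's note: following Michael's question, checking the Datanodes the same way would look like the sketch below. The node names data1/data2 are taken from the pgxc_node listings later in the thread; a failure here, rather than a timestamp, would point at the Coordinator-to-Datanode connections specifically.]

```sql
-- Run from a Coordinator session; each statement should return a
-- timestamp if this Coordinator can reach the named Datanode.
EXECUTE DIRECT on data1 'select clock_timestamp()';
EXECUTE DIRECT on data2 'select clock_timestamp()';
```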
From: Michael P. <mic...@gm...> - 2013-08-20 23:24:09
On Wed, Aug 21, 2013 at 2:09 AM, West, William <ww...@uc...> wrote:
> Thanks Michael,
>
> The value for coord1 is: pgxc_node_name = 'coord1' and for coord2 it is:
> pgxc_node_name = 'coord2'
>
> When I run the function to reload the pool I get the following results on
> both nodes:
>
> postgres=# SELECT pgxc_pool_reload();
>  pgxc_pool_reload
> ------------------
>  t
> (1 row)
>
> Here are the nodes set up in each VM database:
>
> VM1:
>
> postgres=# select * from pgxc_node;
>  node_name | node_type | node_port |     node_host      | nodeis_primary | nodeis_preferred |   node_id
> -----------+-----------+-----------+--------------------+----------------+------------------+-------------
>  coord1    | C         |      5432 | localhost          | f              | f                |  1885696643
>  coord2    | C         |      5432 | bigonc-db.sdsc.edu | f              | f                | -1197102633
>  data1     | D         |     15432 | localhost          | t              | t                | -1008673296
>  data2     | D         |     15432 | bigonc-db.sdsc.edu | f              | t                | -1370618993
> (4 rows)
>
> VM2:
>
>  node_name | node_type | node_port |    node_host    | nodeis_primary | nodeis_preferred |   node_id
> -----------+-----------+-----------+-----------------+----------------+------------------+-------------
>  coord2    | C         |      5432 | localhost       | f              | f                | -1197102633
>  coord1    | C         |      5432 | bigonc.sdsc.edu | f              | f                |  1885696643
>  data2     | D         |     15432 | localhost       | f              | t                | -1370618993
>  data1     | D         |     15432 | bigonc.sdsc.edu | t              | t                | -1008673296
> (4 rows)
>
> However I still get the following error using the CREATE TABLE statement:
>
> ERROR: Failed to get pooled connections
> SQL state: 53000
>
> Do you have any other ideas on which configuration might be incorrect? Is
> there any setting for pooled connections I might have missed? (I didn't see
> this in the documentation, but I am thinking there is something set
> incorrectly regarding this.)

OK. Can you run EXECUTE DIRECT when connecting on a Coordinator to a remote node?
http://postgres-xc.sourceforge.net/docs/1_1/sql-executedirect.html
(Note that the grammar has changed a bit between 1.0 and 1.1 to accommodate ALTER TABLE support for data redistribution.)

With your problem you shouldn't be able to connect to them, so there are two possibilities:
1) Did you set up pg_hba.conf to allow connections from remote nodes?
2) Disable firewalls and retry; connections might be stopped because of that.

Regards,
--
Michael
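[Editor's note: Michael's two checks (pg_hba.conf and firewalls) can be narrowed down with a plain TCP probe before touching the database layer. This is a generic sketch, not part of the thread — the host names are the ones from the pgxc_node listings, and whether they resolve from your machine is an assumption.]

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established.

    A False result means the port is closed, filtered by a firewall,
    or the host is unreachable/unresolvable -- i.e. the problem is
    below PostgreSQL, not in pgxc_node or pg_hba.conf.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example probe for this thread's layout (Coordinators on 5432,
# Datanodes on 15432, hosts assumed reachable from each VM):
# for host in ("bigonc.sdsc.edu", "bigonc-db.sdsc.edu"):
#     for port in (5432, 15432):
#         print(host, port, port_open(host, port))
```

If the probe succeeds but pooled connections still fail, the problem is most likely in pg_hba.conf rather than the network.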
From: West, W. <ww...@uc...> - 2013-08-20 17:09:57
Thanks Michael,

The value for coord1 is: pgxc_node_name = 'coord1' and for coord2 it is:
pgxc_node_name = 'coord2'

When I run the function to reload the pool I get the following results on both nodes:

postgres=# SELECT pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

Here are the nodes set up in each VM database:

VM1:

postgres=# select * from pgxc_node;
 node_name | node_type | node_port |     node_host      | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+--------------------+----------------+------------------+-------------
 coord1    | C         |      5432 | localhost          | f              | f                |  1885696643
 coord2    | C         |      5432 | bigonc-db.sdsc.edu | f              | f                | -1197102633
 data1     | D         |     15432 | localhost          | t              | t                | -1008673296
 data2     | D         |     15432 | bigonc-db.sdsc.edu | f              | t                | -1370618993
(4 rows)

VM2:

 node_name | node_type | node_port |    node_host    | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+-----------------+----------------+------------------+-------------
 coord2    | C         |      5432 | localhost       | f              | f                | -1197102633
 coord1    | C         |      5432 | bigonc.sdsc.edu | f              | f                |  1885696643
 data2     | D         |     15432 | localhost       | f              | t                | -1370618993
 data1     | D         |     15432 | bigonc.sdsc.edu | t              | t                | -1008673296
(4 rows)

However I still get the following error using the CREATE TABLE statement:

ERROR: Failed to get pooled connections
SQL state: 53000

Do you have any other ideas on which configuration might be incorrect? Is there any setting for pooled connections I might have missed? (I didn't see this in the documentation, but I am thinking there is something set incorrectly regarding this.)

Thanks again,
Bill West

On 8/19/13 6:28 PM, "Michael Paquier" <mic...@gm...> wrote:
>On Tue, Aug 20, 2013 at 9:17 AM, West, William <ww...@uc...> wrote:
>> ERROR: Failed to get pooled connections
>>
>> Queries and CREATE NODE DML statements work fine.
>>
>> The configuration looks like this: VM 1 = 1 GTM, 1 Coordinator, 1 DataNode
>> (Primary, Preferred) - VM 2 = 1 Coordinator, 1 Datanode (Preferred).
>> Does this error indicate any configuration setting that might be off?
>Check that the content of pgxc_node is correct on each Coordinator and
>be sure to have run SELECT pgxc_pool_reload() on each Coordinator.
>
>Regards,
>--
>Michael
From: Himpich, S. <Ste...@se...> - 2013-08-20 12:48:38
Hi all,

I thought the postgres-xc database is needed for XC operation and therefore was worried. If it is optional (or just the default db of psql), everything is fine.

Thanks for your feedback!

Greetings,
Stefan

-----Original Message-----
From: Koichi Suzuki [mailto:koi...@gm...]
Sent: Tue 8/20/2013 3:06 AM
To: Himpich, Stefan
Cc: pgxc-hackers mailing list
Subject: Re: [Postgres-xc-developers] Database postgres-xc is not created anymore - please help

I suppose that the "init all" command is successful first, then you tried to run psql and found this message. Your pgxc_ctl log shows this.

The database "postgres-xc" is the default database when you run psql (or pgxc_ctl's Psql). Unfortunately, this database is not created automatically by pgxc_ctl's init command. You should run the createdb (or pgxc_ctl's Createdb) command to create it, or you should supply the database name to use to the psql or Psql command, such as:

pgxc# Createdb postgres-xc

Or you can supply another existing database name to psql or Psql as:

pgxc$ Psql postgres

Good luck;
---
Koichi Suzuki

2013/8/12 Himpich, Stefan <Ste...@se...>
> Hi Guys,
>
> I have a problem. While upgrading to latest sources last week, my testbed
> stopped working. I get the error message:
>
> FATAL: 3D000: database "postgres-xc" does not exist
>
> I get this message on all datanodes and all coordinators. GTM Log shows no
> errors.
>
> I have no idea where this problem comes from and would be grateful for any
> hints!
>
> Attached you find the pgxc_ctl.conf, the logfile from the init all run and
> the output of show configuration all.
>
> I included the full Logfile from the datanode-master (node dbms181 on
> server dbms181) in this message.
>
> Greetings,
> Stefan
>
> Full Logfile from Datanode-Master:
> LOG: 00000: database system was shut down at 2013-08-12 13:25:44 UTC
> LOCATION: StartupXLOG, xlog.c:6344
> LOG: 00000: database system is ready to accept connections
> LOCATION: reaper, postmaster.c:2560
> LOG: 00000: autovacuum launcher started
> LOCATION: AutoVacLauncherMain, autovacuum.c:407
> LOG: 00000: connection received: host=gtm81 port=60914
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: connection authorized: user=postgres-xc database=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:229
> FATAL: 3D000: database "postgres-xc" does not exist
> LOCATION: InitPostgres, postinit.c:716
> LOG: 00000: connection received: host=dbms297 port=38572
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: replication connection authorized: user=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:225
> LOG: 00000: checkpoint starting: force wait
> LOCATION: LogCheckpointStart, xlog.c:7902
> LOG: 00000: checkpoint complete: wrote 0 buffers (0.0%); 0 transaction
> log file(s) added, 0 removed, 0 recycled; write=0.000 s, sync=0.000 s,
> total=0.365 s; sync files=0, longest=0.000 s, average=0.000 s
> LOCATION: LogCheckpointEnd, xlog.c:7990
> Warning: Permanently added 'dbms297,10.182.201.109' (ECDSA) to the list of
> known hosts.
> Warning: Permanently added 'dbms297,10.182.201.109' (ECDSA) to the list of
> known hosts.
> LOG: 00000: disconnection: session time: 0:00:04.583 user=postgres-xc
> database= host=dbms297 port=38572
> LOCATION: log_disconnections, postgres.c:4707
> Warning: Permanently added 'dbms297,10.182.201.109' (ECDSA) to the list of
> known hosts.
> LOG: 00000: connection received: host=gtm81 port=60950
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: connection authorized: user=postgres-xc database=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:229
> FATAL: 3D000: database "postgres-xc" does not exist
> LOCATION: InitPostgres, postinit.c:716
> LOG: 00000: connection received: host=dbms297 port=38573
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: replication connection authorized: user=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:225
> LOG: 00000: connection received: host=gtm81 port=60983
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: connection authorized: user=postgres-xc database=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:229
> FATAL: 3D000: database "postgres-xc" does not exist
> LOCATION: InitPostgres, postinit.c:716
>
> ------------------------------------------------------------------------------
> Get 100% visibility into Java/.NET code with AppDynamics Lite!
> It's a free troubleshooting tool designed for production.
> Get down to code-level detail for bottlenecks, with <2% overhead.
> Download for free and get started troubleshooting in minutes.
> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
From: Masataka S. <pg...@gm...> - 2013-08-20 09:34:37
On Sat, Aug 10, 2013 at 2:57 PM, Abbas Butt <abb...@en...> wrote:
>>> With this patch the JDBC regression runs without any failures or
>>> errors on my machine.
>>> Strangely I did not have to change anything on the server side.
>>
>> What do you experience the strangeness of?
>
> I meant that none of the tests failing in the JDBC test suite require any
> change in the server; when this work was started we thought the JDBC test
> suite might uncover some bug in the server.

I got your point. The patch seems good.

>> Your patch eliminated the "oid" column from the selection, therefore the
>> UPDATE query does not contain "oid" anymore. I think it is
>> straightforward.
>>
>> Regards.
>
> --
> Abbas
> Architect
>
> Ph: 92.334.5100153
> Skype ID: gabbasb
> www.enterprisedb.com
>
> Follow us on Twitter
> @EnterpriseDB
>
> Visit EnterpriseDB for tutorials, webinars, whitepapers and more
From: Masataka S. <pg...@gm...> - 2013-08-20 09:27:38
It's fine.

On Sat, Aug 10, 2013 at 2:49 PM, Abbas Butt <abb...@en...> wrote:
> PFA revised patch.
>
> On Fri, Aug 9, 2013 at 9:47 AM, Masataka Saito <pg...@gm...> wrote:
>> My comment is the same as the one for 10_1_date.patch.
>>
>> On Thu, Aug 8, 2013 at 6:32 PM, Abbas Butt <abb...@en...> wrote:
>>> Hi,
>>> PFA patch to fix time tests needing order by.
>
> --
> Abbas
> Architect
>
> Ph: 92.334.5100153
> Skype ID: gabbasb
> www.enterprisedb.com
>
> Follow us on Twitter
> @EnterpriseDB
>
> Visit EnterpriseDB for tutorials, webinars, whitepapers and more
From: Masataka S. <pg...@gm...> - 2013-08-20 05:55:35
It's fine.

On Sat, Aug 10, 2013 at 3:04 PM, Abbas Butt <abb...@en...> wrote:
> PFA revised patch.
>
> On Fri, Aug 9, 2013 at 9:31 AM, Masataka Saito <pg...@gm...> wrote:
>> It will work. But it took a bit of time to understand that
>> selectSQL("testdate order by i", "dt") is expanded to "SELECT dt FROM
>> testdate order by i", because the first argument name of selectSQL is
>> "table".
>> You'd better use selectSQL(String table, String columns, String
>> where, String other) rather than selectSQL(String table, String
>> columns).
>>
>> On Thu, Aug 8, 2013 at 6:31 PM, Abbas Butt <abb...@en...> wrote:
>>> Hi,
>>> PFA patch to fix date tests needing order by.
>
> --
> Abbas
> Architect
>
> Ph: 92.334.5100153
> Skype ID: gabbasb
> www.enterprisedb.com
>
> Follow us on Twitter
> @EnterpriseDB
>
> Visit EnterpriseDB for tutorials, webinars, whitepapers and more
From: Masataka S. <pg...@gm...> - 2013-08-20 05:53:37
It's OK.

On Sat, Aug 10, 2013 at 2:41 PM, Abbas Butt <abb...@en...> wrote:
> PFA revised patch.
>
> On Fri, Aug 9, 2013 at 11:32 AM, Masataka Saito <pg...@gm...> wrote:
>> I think "order by" in runInfinityTests is unnecessary.
>>
>> On Thu, Aug 8, 2013 at 6:33 PM, Abbas Butt <abb...@en...> wrote:
>>> Hi,
>>> PFA patch to fix time-stamp tests needing order by.
>
> --
> Abbas
> Architect
>
> Ph: 92.334.5100153
> Skype ID: gabbasb
> www.enterprisedb.com
>
> Follow us on Twitter
> @EnterpriseDB
>
> Visit EnterpriseDB for tutorials, webinars, whitepapers and more
From: Michael P. <mic...@gm...> - 2013-08-20 01:29:01
On Tue, Aug 20, 2013 at 9:17 AM, West, William <ww...@uc...> wrote:
> ERROR: Failed to get pooled connections
>
> Queries and CREATE NODE DML statements work fine.
>
> The configuration looks like this: VM 1 = 1 GTM, 1 Coordinator, 1 DataNode
> (Primary, Preferred) - VM 2 = 1 Coordinator, 1 Datanode (Preferred).
> Does this error indicate any configuration setting that might be off?

Check that the content of pgxc_node is correct on each Coordinator and
be sure to have run SELECT pgxc_pool_reload() on each Coordinator.

Regards,
--
Michael
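[Editor's note: the checks Michael describes can be sketched as the psql session below, run on each Coordinator in turn. The host/port values are the ones from this thread; the ALTER NODE line is an illustration of fixing a wrong entry, not something the thread actually ran.]

```sql
-- On each Coordinator: the rows must describe every node in the cluster,
-- with host/port values that are reachable from *this* machine.
SELECT node_name, node_type, node_host, node_port FROM pgxc_node;

-- If an entry is wrong, correct it, then refresh the connection pool so
-- the change is picked up without a restart.
ALTER NODE data2 WITH (HOST = 'bigonc-db.sdsc.edu', PORT = 15432);
SELECT pgxc_pool_reload();
```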
From: Koichi S. <koi...@gm...> - 2013-08-20 01:06:57
I suppose that the "init all" command is successful first, then you tried to run psql and found this message. Your pgxc_ctl log shows this.

The database "postgres-xc" is the default database when you run psql (or pgxc_ctl's Psql). Unfortunately, this database is not created automatically by pgxc_ctl's init command. You should run the createdb (or pgxc_ctl's Createdb) command to create it, or you should supply the database name to use to the psql or Psql command, such as:

pgxc# Createdb postgres-xc

Or you can supply another existing database name to psql or Psql as:

pgxc$ Psql postgres

Good luck;
---
Koichi Suzuki

2013/8/12 Himpich, Stefan <Ste...@se...>
> Hi Guys,
>
> I have a problem. While upgrading to latest sources last week, my testbed
> stopped working. I get the error message:
>
> FATAL: 3D000: database "postgres-xc" does not exist
>
> I get this message on all datanodes and all coordinators. GTM Log shows no
> errors.
>
> I have no idea where this problem comes from and would be grateful for any
> hints!
>
> Attached you find the pgxc_ctl.conf, the logfile from the init all run and
> the output of show configuration all.
>
> I included the full Logfile from the datanode-master (node dbms181 on
> server dbms181) in this message.
>
> Greetings,
> Stefan
>
> Full Logfile from Datanode-Master:
> LOG: 00000: database system was shut down at 2013-08-12 13:25:44 UTC
> LOCATION: StartupXLOG, xlog.c:6344
> LOG: 00000: database system is ready to accept connections
> LOCATION: reaper, postmaster.c:2560
> LOG: 00000: autovacuum launcher started
> LOCATION: AutoVacLauncherMain, autovacuum.c:407
> LOG: 00000: connection received: host=gtm81 port=60914
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: connection authorized: user=postgres-xc database=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:229
> FATAL: 3D000: database "postgres-xc" does not exist
> LOCATION: InitPostgres, postinit.c:716
> LOG: 00000: connection received: host=dbms297 port=38572
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: replication connection authorized: user=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:225
> LOG: 00000: checkpoint starting: force wait
> LOCATION: LogCheckpointStart, xlog.c:7902
> LOG: 00000: checkpoint complete: wrote 0 buffers (0.0%); 0 transaction
> log file(s) added, 0 removed, 0 recycled; write=0.000 s, sync=0.000 s,
> total=0.365 s; sync files=0, longest=0.000 s, average=0.000 s
> LOCATION: LogCheckpointEnd, xlog.c:7990
> Warning: Permanently added 'dbms297,10.182.201.109' (ECDSA) to the list of
> known hosts.
> Warning: Permanently added 'dbms297,10.182.201.109' (ECDSA) to the list of
> known hosts.
> LOG: 00000: disconnection: session time: 0:00:04.583 user=postgres-xc
> database= host=dbms297 port=38572
> LOCATION: log_disconnections, postgres.c:4707
> Warning: Permanently added 'dbms297,10.182.201.109' (ECDSA) to the list of
> known hosts.
> LOG: 00000: connection received: host=gtm81 port=60950
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: connection authorized: user=postgres-xc database=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:229
> FATAL: 3D000: database "postgres-xc" does not exist
> LOCATION: InitPostgres, postinit.c:716
> LOG: 00000: connection received: host=dbms297 port=38573
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: replication connection authorized: user=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:225
> LOG: 00000: connection received: host=gtm81 port=60983
> LOCATION: BackendInitialize, postmaster.c:3666
> LOG: 00000: connection authorized: user=postgres-xc database=postgres-xc
> LOCATION: PerformAuthentication, postinit.c:229
> FATAL: 3D000: database "postgres-xc" does not exist
> LOCATION: InitPostgres, postinit.c:716
From: West, W. <ww...@uc...> - 2013-08-20 00:17:55
Thanks Michael,

That was the problem. One of the coordinators was misnamed in the conf file. This is fixed now and I am running a server on two different VMs.

I am still having an issue where these servers are not communicating with one another. I am getting this error only when attempting to run CREATE TABLE DML statements:

ERROR: Failed to get pooled connections

Queries and CREATE NODE DML statements work fine.

The configuration looks like this: VM 1 = 1 GTM, 1 Coordinator, 1 DataNode (Primary, Preferred) - VM 2 = 1 Coordinator, 1 Datanode (Preferred). Does this error indicate any configuration setting that might be off?

Thanks,
Bill West

On 8/18/13 6:50 PM, "Michael Paquier" <mic...@gm...> wrote:
>On Sat, Aug 17, 2013 at 8:42 AM, West, William <ww...@uc...> wrote:
>> Hi all,
>>
>> I have been trying for the last week to install postgresql-xc on 2 VMs I
>> have. Based on what I have read, I am setting up a GTM, coordinator and
>> datanode on one VM and a coordinator and datanode on the other VM. The
>> instructions are pretty straightforward for a single node and I was
>> successful at getting it up and running. Now with services running on both
>> nodes I am receiving this error on each node:
>>
>> psql: FATAL: Coordinator cannot identify itself
>>
>> So, two questions. 1. Does anyone know what is causing the error? 2. I am
>> guessing that in the many conf files that need to be configured, I have
>> missed something. My reading of the documentation suggests that changes to
>> the postgres.conf and pg_hba.conf should be enough to do the trick, but I
>> suspect this is wrong for more complicated layouts (clusters with 1+n
>> nodes). Is there any more comprehensive documentation for all the required
>> settings in the various conf files/directories for larger scale deployments?
>Something incorrect with node_name perhaps? On the top of my mind, the
>error you are finding means that a given Coordinator is not able to
>find its own name defined by the GUC parameter in pgxc_node. The value
>of this parameter is enforced by initdb.
>--
>Michael
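[Editor's note: the "Failed to get pooled connections" error above turns out, later in the thread, to hinge on pg_hba.conf and firewall settings. A hedged example of the pg_hba.conf entries involved — every Coordinator and Datanode on each VM needs a line allowing the nodes on the other VM to connect. The subnet and auth method below are illustrative, not taken from the thread.]

```
# pg_hba.conf on every Coordinator and Datanode:
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   10.0.0.0/24    trust   # replace with the real
                                               # subnet of the other VM and
                                               # a proper auth method (md5)
```

After editing, reload the configuration (pg_ctl reload) and run SELECT pgxc_pool_reload() on each Coordinator.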
From: Koichi S. <koi...@gm...> - 2013-08-19 02:30:36
The database is not created automatically by pgxc_ctl. You need to run createdb to have a non-default database even with pgxc_ctl.

Let me find a time to re-run the attached pgxc_ctl.conf.

Regards;
---
Koichi Suzuki

2013/8/19 Michael Paquier <mic...@gm...>
> On Mon, Aug 12, 2013 at 8:39 PM, Himpich, Stefan
> <Ste...@se...> wrote:
> > Hi Guys,
> >
> > I have a problem. While upgrading to latest sources last week, my
> > testbed stopped working. I get the error message:
> >
> > FATAL: 3D000: database "postgres-xc" does not exist
> >
> > I get this message on all datanodes and all coordinators. GTM Log shows
> > no errors.
> >
> > I have no idea where this problem comes from and would be grateful for
> > any hints!
> >
> > Attached you find the pgxc_ctl.conf, the logfile from the init all run
> > and the output of show configuration all.
> >
> > I included the full Logfile from the datanode-master (node dbms181 on
> > server dbms181) in this message.
> Smells like a bug in pgxc_ctl, and perhaps with double-quotes, as this
> database name has a "-" character embedded in it... But this is only an
> intuition. Is the incriminated database postgres-xc created
> automatically with pgxc_ctl?
> --
> Michael
From: Koichi S. <koi...@gm...> - 2013-08-19 02:28:03
Thanks. They will be included in the next minor, if it is available as a part of a PG minor release.

Regards;
---
Koichi Suzuki

2013/8/19 Michael Paquier <mic...@gm...>
> On Mon, Aug 19, 2013 at 10:54 AM, Koichi Suzuki <koi...@gm...> wrote:
> > Hmm... As Michael suggested, applying this patch only locally to XC will
> > make it difficult to keep XC code synchronized with vanilla PG. We should
> > wait until this patch is included in vanilla PG.
> Actually it has been committed a couple of weeks ago and back-patched
> to all the supported branches:
> - master: 55cbfa5
> - REL9_3_STABLE: 8cbf8df
> - REL9_2_STABLE: 9822dc3
> - REL9_1_STABLE: aa49821
>
> Regards,
> --
> Michael
From: Koichi S. <koi...@gm...> - 2013-08-19 02:26:25
gtm_ctl handles the -m option in the same way as pg_ctl. With -m i, it sends SIGQUIT to the target GTM process. Unfortunately, this signal can be blocked (and is actually blocked in some GTM situations). This is very similar to the other background processes, and I suspect the gtm standby is in such a situation.

gtm_ctl keeps checking if the pid file exists for a while (the default is 60 sec, and you can specify this value with the -t option of gtm_ctl). Could you try to give a larger value for the -t option?

At present, gtm_ctl checks if gtm.pid is available but does not check if the process is alive. Checking if the process is alive, not the gtm.pid file, would be more straightforward. Please understand that the current GTM implementation is very similar to pg_ctl and I don't think we have to change this code.

Any more input on this?

---
Koichi Suzuki

2013/8/12 Tomonari Katsumata <t.k...@gm...>
> Hi,
>
> I found an odd behavior with gtm_standby (gtm_sby).
> When the network to gtm_master is broken, gtm_ctl couldn't stop gtm_sby
> even if I add the option "-m i".
>
> I simulated the situation with the iptables command.
> (gtm_sby accesses gtm_master via eth1)
> # iptables -I INPUT -i eth1 -j DROP
> # iptables -I OUTPUT -o eth1 -j DROP
>
> And then, I start and stop immediately with these commands.
> $ gtm_ctl -D <gtm_sby-PATH> -Z gtm start
> $ gtm_ctl -D <gtm_sby-PATH> -Z gtm stop -m i
>
> The last command left the message below.
> -------
> waiting for server to shut down............................................................... failed
> gtm_ctl: server does not shut down
> --------
>
> But actually gtm_sby has stopped during the last command.
>
> In my opinion, stopping with "-m i" should stop gtm_sby immediately,
> because vanilla PostgreSQL does so.
>
> Should we resolve this problem?
>
> regards,
> --------
> NTT Software Corporation
> Tomonari Katsumata
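[Editor's note: Koichi's suggestion of a larger -t value, applied to Tomonari's reproduction commands, would look like the sketch below. The 120-second value is illustrative, and the elided <gtm_sby-PATH> placeholder is kept from the original message.]

```console
$ gtm_ctl -D <gtm_sby-PATH> -Z gtm stop -m i -t 120
```

This only extends how long gtm_ctl waits for gtm.pid to disappear; as Koichi notes, it does not change the underlying check from "pid file gone" to "process gone".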
From: Michael P. <mic...@gm...> - 2013-08-19 02:02:52
On Mon, Aug 19, 2013 at 10:54 AM, Koichi Suzuki <koi...@gm...> wrote:
> Hmm... As Michael suggested, applying this patch only locally to XC will
> make it difficult to keep XC code synchronized with vanilla PG. We should
> wait until this patch is included in vanilla PG.

Actually it has been committed a couple of weeks ago and back-patched to all the supported branches:
- master: 55cbfa5
- REL9_3_STABLE: 8cbf8df
- REL9_2_STABLE: 9822dc3
- REL9_1_STABLE: aa49821

Regards,
--
Michael
From: Michael P. <mic...@gm...> - 2013-08-19 01:57:15
On Mon, Aug 12, 2013 at 8:39 PM, Himpich, Stefan <Ste...@se...> wrote:
> Hi Guys,
>
> I have a problem. While upgrading to latest sources last week, my testbed
> stopped working. I get the error message:
>
> FATAL: 3D000: database "postgres-xc" does not exist
>
> I get this message on all datanodes and all coordinators. GTM Log shows no
> errors.
>
> I have no idea where this problem comes from and would be grateful for any
> hints!
>
> Attached you find the pgxc_ctl.conf, the logfile from the init all run and
> the output of show configuration all.
>
> I included the full Logfile from the datanode-master (node dbms181 on
> server dbms181) in this message.

Smells like a bug in pgxc_ctl, and perhaps with double-quotes, as this database name has a "-" character embedded in it... But this is only an intuition. Is the incriminated database postgres-xc created automatically with pgxc_ctl?
--
Michael
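[Editor's note: Michael's double-quoting hypothesis rests on a standard SQL rule — "-" is not legal in an unquoted identifier, so a tool that builds a CREATE DATABASE statement without quoting the name would fail on "postgres-xc". A minimal illustration, not code from pgxc_ctl:]

```sql
-- A database name containing "-" must be double-quoted as an identifier:
CREATE DATABASE "postgres-xc";

-- Without the quotes this is a syntax error, since the parser stops
-- the identifier at the hyphen:
-- CREATE DATABASE postgres-xc;
```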
From: Koichi S. <koi...@gm...> - 2013-08-19 01:54:16
Hmm... As Michael suggested, applying this patch only locally to XC will make it difficult to keep XC code synchronized with vanilla PG. We should wait until this patch is included in vanilla PG.

Regards;
---
Koichi Suzuki

2013/8/19 Michael Paquier <mic...@gm...>
> On Tue, Aug 13, 2013 at 5:28 AM, Jure Kobal <j....@gm...> wrote:
> > Since Bison 3.0 they stopped support for YYPARSE_PARAM, which is used in
> > contrib/cube and contrib/seg. More can be read at:
> > http://www.postgresql.org/message-id/736...@ss...
> >
> > There is a patch for PostgreSQL which fixes this but doesn't apply cleanly on
> > Postgres-XC 1.0.3 without some minor changes.
> >
> > Attached is the patch for 1.0.3. Tested with bison 3.0 and 2.7.1. Both compile
> > without errors.
> Yes, this is definitely something to be aware of, but -1 for the patch
> as this fix has been committed in the Postgres code tree, and XC
> should directly pick it up from there by merging all its active
> branches with the latest commits of postgres.
> --
> Michael
From: Michael P. <mic...@gm...> - 2013-08-19 01:50:39
|
On Sat, Aug 17, 2013 at 8:42 AM, West, William <ww...@uc...> wrote: > Hi all, > > I have been trying for the last week to install postgresql-xc on 2 VMs I > have. Based on what I have read, I am setting up a GTM, coordinator and > datanode on one VM and a coordinator and datanode on the other VM. The > instructions are pretty straightforward for a single node and I was > successful at getting it up and running. Now with services running on both > nodes I am receiving this error on each node: > > psql: FATAL: Coordinator cannot identify itself > > So, two questions. 1. Does anyone know what is causing the error? 2. I am > guessing that in the many conf files that need to be configured, I have > missed something. My reading of the documentation suggests that changes to > the postgres.conf and pg_hba.conf should be enough to do the trick but I > suspect this is wrong for more complicated layouts (clusters with 1+n > nodes). Is there any more comprehensive documentation for all the required > settings in the various conf files/directories for larger scale deployments? Something incorrect with pgxc_node_name, perhaps? Off the top of my head, the error you are seeing means that a given Coordinator is unable to find its own name, as defined by the GUC parameter pgxc_node_name, in pgxc_node. The value of this parameter is set at initdb time. -- Michael |
From: Michael P. <mic...@gm...> - 2013-08-19 01:48:20
|
On Tue, Aug 13, 2013 at 5:28 AM, Jure Kobal <j....@gm...> wrote: > Since Bison 3.0 they stopped support for YYPARSE_PARAM, which is used in > contrib/cube and contrib/seg. More can be read at: > http://www.postgresql.org/message-id/736...@ss... > > There is a patch for PostgreSQL which fixes this but doesn't apply clean on > Postgres-XC 1.0.3 without some minor changes. > > Attached is the patch for 1.0.3. Tested with bison 3.0 and 2.7.1. Both compile > without errors. Yes, this is definitely something to be aware of, but -1 for the patch as this fix has been committed in the Postgres code tree, and XC should directly pick it up from there by merging all its active branches with the latest commits of postgres. -- Michael |
From: West, W. <ww...@uc...> - 2013-08-16 23:42:45
|
Hi all, I have been trying for the last week to install postgresql-xc on 2 VMs I have. Based on what I have read, I am setting up a GTM, coordinator and datanode on one VM and a coordinator and datanode on the other VM. The instructions are pretty straightforward for a single node and I was successful at getting it up and running. Now with services running on both nodes I am receiving this error on each node: psql: FATAL: Coordinator cannot identify itself So, two questions. 1. Does anyone know what is causing the error? 2. I am guessing that in the many conf files that need to be configured, I have missed something. My reading of the documentation suggests that changes to the postgres.conf and pg_hba.conf should be enough to do the trick but I suspect this is wrong for more complicated layouts (clusters with 1+n nodes). Is there any more comprehensive documentation for all the required settings in the various conf files/directories for larger scale deployments? Thanks, Bill West |
From: Nikhil S. <ni...@st...> - 2013-08-16 12:20:29
|
> Additionally, ISTM, that the tuplestore should only be used for inner > nodes. There's no need to store outer nodes in tuplestores. Also if it's a > single SELECT query, then there's no need to use tuplestore at all as well. > > PFA, a patch which tries to avoid using the tuplestore in the above two cases. During planning we decide if a tuplestore should be used for the RemoteQuery. The default is true, and we set it to false for the above two cases for now. I ran regression test cases with and without the patch and got the exact same set of failures (and more importantly same diffs). To be clear this patch is not specific to COPY TO, but it's a generic change to avoid using tuplestore in certain simple scenarios thereby reducing the memory footprint of the remote query execution. Note that it also does not solve Hitoshi-san's COPY FROM issues. Will submit a separate patch for that. Regards, Nikhils > Looks like if we can pass hints during plan creation as to whether the > remote scan is part of a join (and is inner node) or not, then accordingly > decision can be taken to materialize into the tuplestore. > > Regards, > Nikhils > > On Thu, Aug 15, 2013 at 10:43 PM, Nikhil Sontakke <ni...@st...>wrote: > >> Looks like my theory was wrong, make installcheck is giving more errors >> with this patch applied. Will have to look at a different solution.. >> >> Regards, >> Nikhils >> >> >> On Thu, Aug 15, 2013 at 2:11 PM, Nikhil Sontakke <ni...@st...>wrote: >> >>> So, I looked at this code carefully and ISTM, that because of the way we >>> fetch the data from the connections and return it immediately inside >>> RemoteQueryNext, storing it in the tuplestore using tuplestore_puttupleslot >>> is NOT required at all. >>> >>> So, I have removed the call to tuplestore_puttupleslot and things seem >>> to be ok for me. I guess we should do a FULL test run with this patch just >>> to ensure that it does not cause issues in any scenarios. 
>>> >>> A careful look by new set of eyes will help here. I think, if there are >>> no issues, this plugs a major leak in the RemoteQuery code path which is >>> almost always used in our case. >>> >>> Regards, >>> Nikhils >>> >>> >>> On Wed, Aug 14, 2013 at 7:05 PM, Nikhil Sontakke <ni...@st...>wrote: >>> >>>> Using a tuplestore for data coming from RemoteQuery is kinda wrong and >>>> that's what has introduced this issue. Looks like just changing the memory >>>> context will not work as it interferes with the other functioning of the >>>> tuplestore :-| >>>> >>>> Regards, >>>> Nikhils >>>> >>>> >>>> >>>> On Wed, Aug 14, 2013 at 3:07 PM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> Yes, that's correct. >>>>> >>>>> My patch was not intended to fix this. This was added while fixing a >>>>> bug for parameterised quals on RemoteQuery I think. Check commits by Amit >>>>> in this area. >>>>> >>>>> >>>>> On Wed, Aug 14, 2013 at 3:03 PM, Nikhil Sontakke <ni...@st...>wrote: >>>>> >>>>>> Yeah, but AFAICS, even 1.1 (and head) *still* has a leak in it. >>>>>> >>>>>> Here's the snippet from RemoteQueryNext: >>>>>> >>>>>> if (tuplestorestate && !TupIsNull(scanslot)) >>>>>> tuplestore_puttupleslot(tuplestorestate, >>>>>> scanslot); >>>>>> >>>>>> I printed the current memory context inside this function, it is >>>>>> ""ExecutorState". This means that the tuple will stay around till the query >>>>>> is executing in its entirety! For large COPY queries this is bound to cause >>>>>> issues as is also reported by Hitoshi san on another thread. >>>>>> >>>>>> I propose that in RemoteQueryNext, before calling the >>>>>> tuplestore_puttupleslot we switch into >>>>>> scan_node->ps.ps_ExprContext's ecxt_per_tuple_memory context. It will >>>>>> get reset, when the next tuple has to be returned to the caller and the >>>>>> leak will be curtailed. Thoughts? 
>>>>>> >>>>>> Regards, >>>>>> Nikhils >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Aug 14, 2013 at 11:33 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>>> There has been an overhaul in the planner (and corresponding parts >>>>>>> of executor) in 1.1, so it would be better if they move to 1.1 after GA. >>>>>>> >>>>>>> >>>>>>> On Wed, Aug 14, 2013 at 10:54 AM, Nikhil Sontakke < >>>>>>> ni...@st...> wrote: >>>>>>> >>>>>>>> Ah, I see. >>>>>>>> >>>>>>>> I was looking at REL_1_0 sources. >>>>>>>> >>>>>>>> There are people out there using REL_1_0 as well. >>>>>>>> >>>>>>>> Regards, >>>>>>>> Nikhils >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Aug 13, 2013 at 9:52 AM, Ashutosh Bapat < >>>>>>>> ash...@en...> wrote: >>>>>>>> >>>>>>>>> It should be part of 1.1 as well. It was done to support >>>>>>>>> projection out of RemoteQuery node. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Aug 13, 2013 at 7:17 AM, Nikhil Sontakke < >>>>>>>>> ni...@st...> wrote: >>>>>>>>> >>>>>>>>>> Hi Ashutosh, >>>>>>>>>> >>>>>>>>>> I guess you have changed it in pgxc head? I was looking at 103 >>>>>>>>>> and 11 branches and saw this. In that even ExecRemoteQuery seems to have an >>>>>>>>>> issue wherein it's not using the appropriate context. >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> Nikhils >>>>>>>>>> >>>>>>>>>> Sent from my iPhone >>>>>>>>>> >>>>>>>>>> On Aug 12, 2013, at 9:54 AM, Ashutosh Bapat < >>>>>>>>>> ash...@en...> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Welcome to the mess ;) and enjoy junk food. >>>>>>>>>> >>>>>>>>>> Sometime back, I have changed ExecRemoteQuery to be called in the >>>>>>>>>> same fashion as other Scan nodes. So, you will see ExecRemoteQuery calling >>>>>>>>>> ExecScan with RemoteQueryNext as the iterator. So, I assume your comment >>>>>>>>>> pertains to RemoteQueryNext and its minions and not ExecRemoteQuery per say! 
>>>>>>>>>> >>>>>>>>>> This code needs a lot of rework, removing duplications, using >>>>>>>>>> proper way of materialisation, central response handler and error handler >>>>>>>>>> etc. If we clean up this code, some improvements in planner (like using >>>>>>>>>> MergeAppend plan) for Sort, will be possible. Regarding materialisation, >>>>>>>>>> the code uses a linked list for materialising the rows from datanodes (in >>>>>>>>>> case the same connection needs to be given to other remote query node), >>>>>>>>>> which must be eating a lot of performance. Instead we should be using some >>>>>>>>>> kind of tuplestore there. We actually use tuplestore (as well) in the >>>>>>>>>> RemoteQuery node; the same method can be used. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Sat, Aug 10, 2013 at 10:48 PM, Nikhil Sontakke < >>>>>>>>>> ni...@st...> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> Have a Query about ExecRemoteQuery. >>>>>>>>>>> >>>>>>>>>>> The logic seems to have been modeled after ExecMaterial. ISTM, >>>>>>>>>>> that it should have been modeled after ExecScan because we fetch tuples, >>>>>>>>>>> and those which match the qual should be sent up. ExecMaterial is for >>>>>>>>>>> materializing and collecting and storing tuples. >>>>>>>>>>> >>>>>>>>>>> Can anyone explain? The reason for asking this is I am >>>>>>>>>>> suspecting a big memory leak in this code path. We are not using any >>>>>>>>>>> expression context nor we are freeing up tuples as we scan for the one >>>>>>>>>>> which qualifies. >>>>>>>>>>> >>>>>>>>>>> Regards, >>>>>>>>>>> Nikhils >>>>>>>>>>> -- >>>>>>>>>>> StormDB - http://www.stormdb.com >>>>>>>>>>> The Database Cloud >>>>>>>>>>> >>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>> Get 100% visibility into Java/.NET code with AppDynamics Lite! >>>>>>>>>>> It's a free troubleshooting tool designed for production. >>>>>>>>>>> Get down to code-level detail for bottlenecks, with <2% overhead. 
>>>>>>>>>>> Download for free and get started troubleshooting in minutes. >>>>>>>>>>> >>>>>>>>>>> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Postgres-xc-developers mailing list >>>>>>>>>>> Pos...@li... >>>>>>>>>>> >>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Best Wishes, >>>>>>>>>> Ashutosh Bapat >>>>>>>>>> EntepriseDB Corporation >>>>>>>>>> The Postgres Database Company >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Best Wishes, >>>>>>>>> Ashutosh Bapat >>>>>>>>> EntepriseDB Corporation >>>>>>>>> The Postgres Database Company >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> StormDB - http://www.stormdb.com >>>>>>>> The Database Cloud >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Best Wishes, >>>>>>> Ashutosh Bapat >>>>>>> EntepriseDB Corporation >>>>>>> The Postgres Database Company >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> StormDB - http://www.stormdb.com >>>>>> The Database Cloud >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Postgres Database Company >>>>> >>>> >>>> >>>> >>>> -- >>>> StormDB - http://www.stormdb.com >>>> The Database Cloud >>>> >>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > -- StormDB - http://www.stormdb.com The Database Cloud |
From: Nikhil S. <ni...@st...> - 2013-08-15 18:27:51
|
Looks like we cannot do away with the tuplestore that easily. In particular, if the remote query is an inner scan node, it makes sense to materialize and store the rows in the tuplestore for subsequent rescans. However, this decision should be driven by row statistics. If the query brings in a lot of rows, we are sure to run out of memory, as some reports we have seen show. Additionally, ISTM that the tuplestore should only be used for inner nodes. There's no need to store outer nodes in tuplestores. And if it's a single SELECT query, there's no need to use a tuplestore at all. If we can pass hints during plan creation as to whether the remote scan is the inner node of a join or not, the decision to materialize into the tuplestore can be taken accordingly. Regards, Nikhils On Thu, Aug 15, 2013 at 10:43 PM, Nikhil Sontakke <ni...@st...>wrote: > Looks like my theory was wrong, make installcheck is giving more errors > with this patch applied. Will have to look at a different solution.. > > Regards, > Nikhils > > > On Thu, Aug 15, 2013 at 2:11 PM, Nikhil Sontakke <ni...@st...>wrote: > >> So, I looked at this code carefully and ISTM, that because of the way we >> fetch the data from the connections and return it immediately inside >> RemoteQueryNext, storing it in the tuplestore using tuplestore_puttupleslot >> is NOT required at all. >> >> So, I have removed the call to tuplestore_puttupleslot and things seem to >> be ok for me. I guess we should do a FULL test run with this patch just to >> ensure that it does not cause issues in any scenarios. >> >> A careful look by new set of eyes will help here. I think, if there are >> no issues, this plugs a major leak in the RemoteQuery code path which is >> almost always used in our case.
>> >> Regards, >> Nikhils >> >> >> On Wed, Aug 14, 2013 at 7:05 PM, Nikhil Sontakke <ni...@st...>wrote: >> >>> Using a tuplestore for data coming from RemoteQuery is kinda wrong and >>> that's what has introduced this issue. Looks like just changing the memory >>> context will not work as it interferes with the other functioning of the >>> tuplestore :-| >>> >>> Regards, >>> Nikhils >>> >>> >>> >>> On Wed, Aug 14, 2013 at 3:07 PM, Ashutosh Bapat < >>> ash...@en...> wrote: >>> >>>> Yes, that's correct. >>>> >>>> My patch was not intended to fix this. This was added while fixing a >>>> bug for parameterised quals on RemoteQuery I think. Check commits by Amit >>>> in this area. >>>> >>>> >>>> On Wed, Aug 14, 2013 at 3:03 PM, Nikhil Sontakke <ni...@st...>wrote: >>>> >>>>> Yeah, but AFAICS, even 1.1 (and head) *still* has a leak in it. >>>>> >>>>> Here's the snippet from RemoteQueryNext: >>>>> >>>>> if (tuplestorestate && !TupIsNull(scanslot)) >>>>> tuplestore_puttupleslot(tuplestorestate, scanslot); >>>>> >>>>> I printed the current memory context inside this function, it is >>>>> ""ExecutorState". This means that the tuple will stay around till the query >>>>> is executing in its entirety! For large COPY queries this is bound to cause >>>>> issues as is also reported by Hitoshi san on another thread. >>>>> >>>>> I propose that in RemoteQueryNext, before calling the >>>>> tuplestore_puttupleslot we switch into >>>>> scan_node->ps.ps_ExprContext's ecxt_per_tuple_memory context. It will >>>>> get reset, when the next tuple has to be returned to the caller and the >>>>> leak will be curtailed. Thoughts? >>>>> >>>>> Regards, >>>>> Nikhils >>>>> >>>>> >>>>> >>>>> >>>>> On Wed, Aug 14, 2013 at 11:33 AM, Ashutosh Bapat < >>>>> ash...@en...> wrote: >>>>> >>>>>> There has been an overhaul in the planner (and corresponding parts of >>>>>> executor) in 1.1, so it would be better if they move to 1.1 after GA. 
>>>>>> >>>>>> >>>>>> On Wed, Aug 14, 2013 at 10:54 AM, Nikhil Sontakke < >>>>>> ni...@st...> wrote: >>>>>> >>>>>>> Ah, I see. >>>>>>> >>>>>>> I was looking at REL_1_0 sources. >>>>>>> >>>>>>> There are people out there using REL_1_0 as well. >>>>>>> >>>>>>> Regards, >>>>>>> Nikhils >>>>>>> >>>>>>> >>>>>>> On Tue, Aug 13, 2013 at 9:52 AM, Ashutosh Bapat < >>>>>>> ash...@en...> wrote: >>>>>>> >>>>>>>> It should be part of 1.1 as well. It was done to support projection >>>>>>>> out of RemoteQuery node. >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Aug 13, 2013 at 7:17 AM, Nikhil Sontakke < >>>>>>>> ni...@st...> wrote: >>>>>>>> >>>>>>>>> Hi Ashutosh, >>>>>>>>> >>>>>>>>> I guess you have changed it in pgxc head? I was looking at 103 and >>>>>>>>> 11 branches and saw this. In that even ExecRemoteQuery seems to have an >>>>>>>>> issue wherein it's not using the appropriate context. >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Nikhils >>>>>>>>> >>>>>>>>> Sent from my iPhone >>>>>>>>> >>>>>>>>> On Aug 12, 2013, at 9:54 AM, Ashutosh Bapat < >>>>>>>>> ash...@en...> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> Welcome to the mess ;) and enjoy junk food. >>>>>>>>> >>>>>>>>> Sometime back, I have changed ExecRemoteQuery to be called in the >>>>>>>>> same fashion as other Scan nodes. So, you will see ExecRemoteQuery calling >>>>>>>>> ExecScan with RemoteQueryNext as the iterator. So, I assume your comment >>>>>>>>> pertains to RemoteQueryNext and its minions and not ExecRemoteQuery per say! >>>>>>>>> >>>>>>>>> This code needs a lot of rework, removing duplications, using >>>>>>>>> proper way of materialisation, central response handler and error handler >>>>>>>>> etc. If we clean up this code, some improvements in planner (like using >>>>>>>>> MergeAppend plan) for Sort, will be possible. 
Regarding materialisation, >>>>>>>>> the code uses a linked list for materialising the rows from datanodes (in >>>>>>>>> case the same connection needs to be given to other remote query node), >>>>>>>>> which must be eating a lot of performance. Instead we should be using some >>>>>>>>> kind of tuplestore there. We actually use tuplestore (as well) in the >>>>>>>>> RemoteQuery node; the same method can be used. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sat, Aug 10, 2013 at 10:48 PM, Nikhil Sontakke < >>>>>>>>> ni...@st...> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Have a Query about ExecRemoteQuery. >>>>>>>>>> >>>>>>>>>> The logic seems to have been modeled after ExecMaterial. ISTM, >>>>>>>>>> that it should have been modeled after ExecScan because we fetch tuples, >>>>>>>>>> and those which match the qual should be sent up. ExecMaterial is for >>>>>>>>>> materializing and collecting and storing tuples. >>>>>>>>>> >>>>>>>>>> Can anyone explain? The reason for asking this is I am suspecting >>>>>>>>>> a big memory leak in this code path. We are not using any expression >>>>>>>>>> context nor we are freeing up tuples as we scan for the one which qualifies. >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> Nikhils >>>>>>>>>> -- >>>>>>>>>> StormDB - http://www.stormdb.com >>>>>>>>>> The Database Cloud >>>>>>>>>> >>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>> Get 100% visibility into Java/.NET code with AppDynamics Lite! >>>>>>>>>> It's a free troubleshooting tool designed for production. >>>>>>>>>> Get down to code-level detail for bottlenecks, with <2% overhead. >>>>>>>>>> Download for free and get started troubleshooting in minutes. >>>>>>>>>> >>>>>>>>>> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >>>>>>>>>> _______________________________________________ >>>>>>>>>> Postgres-xc-developers mailing list >>>>>>>>>> Pos...@li... 
>>>>>>>>>> >>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Best Wishes, >>>>>>>>> Ashutosh Bapat >>>>>>>>> EntepriseDB Corporation >>>>>>>>> The Postgres Database Company >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Best Wishes, >>>>>>>> Ashutosh Bapat >>>>>>>> EntepriseDB Corporation >>>>>>>> The Postgres Database Company >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> StormDB - http://www.stormdb.com >>>>>>> The Database Cloud >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best Wishes, >>>>>> Ashutosh Bapat >>>>>> EntepriseDB Corporation >>>>>> The Postgres Database Company >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> StormDB - http://www.stormdb.com >>>>> The Database Cloud >>>>> >>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Postgres Database Company >>>> >>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > -- StormDB - http://www.stormdb.com The Database Cloud |
From: Nikhil S. <ni...@st...> - 2013-08-15 17:14:13
|
Looks like my theory was wrong, make installcheck is giving more errors with this patch applied. Will have to look at a different solution.. Regards, Nikhils On Thu, Aug 15, 2013 at 2:11 PM, Nikhil Sontakke <ni...@st...>wrote: > So, I looked at this code carefully and ISTM, that because of the way we > fetch the data from the connections and return it immediately inside > RemoteQueryNext, storing it in the tuplestore using tuplestore_puttupleslot > is NOT required at all. > > So, I have removed the call to tuplestore_puttupleslot and things seem to > be ok for me. I guess we should do a FULL test run with this patch just to > ensure that it does not cause issues in any scenarios. > > A careful look by new set of eyes will help here. I think, if there are no > issues, this plugs a major leak in the RemoteQuery code path which is > almost always used in our case. > > Regards, > Nikhils > > > On Wed, Aug 14, 2013 at 7:05 PM, Nikhil Sontakke <ni...@st...>wrote: > >> Using a tuplestore for data coming from RemoteQuery is kinda wrong and >> that's what has introduced this issue. Looks like just changing the memory >> context will not work as it interferes with the other functioning of the >> tuplestore :-| >> >> Regards, >> Nikhils >> >> >> >> On Wed, Aug 14, 2013 at 3:07 PM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> Yes, that's correct. >>> >>> My patch was not intended to fix this. This was added while fixing a bug >>> for parameterised quals on RemoteQuery I think. Check commits by Amit in >>> this area. >>> >>> >>> On Wed, Aug 14, 2013 at 3:03 PM, Nikhil Sontakke <ni...@st...>wrote: >>> >>>> Yeah, but AFAICS, even 1.1 (and head) *still* has a leak in it. >>>> >>>> Here's the snippet from RemoteQueryNext: >>>> >>>> if (tuplestorestate && !TupIsNull(scanslot)) >>>> tuplestore_puttupleslot(tuplestorestate, scanslot); >>>> >>>> I printed the current memory context inside this function, it is >>>> ""ExecutorState". 
This means that the tuple will stay around till the query >>>> is executing in its entirety! For large COPY queries this is bound to cause >>>> issues as is also reported by Hitoshi san on another thread. >>>> >>>> I propose that in RemoteQueryNext, before calling the >>>> tuplestore_puttupleslot we switch into >>>> scan_node->ps.ps_ExprContext's ecxt_per_tuple_memory context. It will >>>> get reset, when the next tuple has to be returned to the caller and the >>>> leak will be curtailed. Thoughts? >>>> >>>> Regards, >>>> Nikhils >>>> >>>> >>>> >>>> >>>> On Wed, Aug 14, 2013 at 11:33 AM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> There has been an overhaul in the planner (and corresponding parts of >>>>> executor) in 1.1, so it would be better if they move to 1.1 after GA. >>>>> >>>>> >>>>> On Wed, Aug 14, 2013 at 10:54 AM, Nikhil Sontakke <ni...@st... >>>>> > wrote: >>>>> >>>>>> Ah, I see. >>>>>> >>>>>> I was looking at REL_1_0 sources. >>>>>> >>>>>> There are people out there using REL_1_0 as well. >>>>>> >>>>>> Regards, >>>>>> Nikhils >>>>>> >>>>>> >>>>>> On Tue, Aug 13, 2013 at 9:52 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>>> It should be part of 1.1 as well. It was done to support projection >>>>>>> out of RemoteQuery node. >>>>>>> >>>>>>> >>>>>>> On Tue, Aug 13, 2013 at 7:17 AM, Nikhil Sontakke < >>>>>>> ni...@st...> wrote: >>>>>>> >>>>>>>> Hi Ashutosh, >>>>>>>> >>>>>>>> I guess you have changed it in pgxc head? I was looking at 103 and >>>>>>>> 11 branches and saw this. In that even ExecRemoteQuery seems to have an >>>>>>>> issue wherein it's not using the appropriate context. >>>>>>>> >>>>>>>> Regards, >>>>>>>> Nikhils >>>>>>>> >>>>>>>> Sent from my iPhone >>>>>>>> >>>>>>>> On Aug 12, 2013, at 9:54 AM, Ashutosh Bapat < >>>>>>>> ash...@en...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> Welcome to the mess ;) and enjoy junk food. 
>>>>>>>> >>>>>>>> Sometime back, I have changed ExecRemoteQuery to be called in the >>>>>>>> same fashion as other Scan nodes. So, you will see ExecRemoteQuery calling >>>>>>>> ExecScan with RemoteQueryNext as the iterator. So, I assume your comment >>>>>>>> pertains to RemoteQueryNext and its minions and not ExecRemoteQuery per say! >>>>>>>> >>>>>>>> This code needs a lot of rework, removing duplications, using >>>>>>>> proper way of materialisation, central response handler and error handler >>>>>>>> etc. If we clean up this code, some improvements in planner (like using >>>>>>>> MergeAppend plan) for Sort, will be possible. Regarding materialisation, >>>>>>>> the code uses a linked list for materialising the rows from datanodes (in >>>>>>>> case the same connection needs to be given to other remote query node), >>>>>>>> which must be eating a lot of performance. Instead we should be using some >>>>>>>> kind of tuplestore there. We actually use tuplestore (as well) in the >>>>>>>> RemoteQuery node; the same method can be used. >>>>>>>> >>>>>>>> >>>>>>>> On Sat, Aug 10, 2013 at 10:48 PM, Nikhil Sontakke < >>>>>>>> ni...@st...> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Have a Query about ExecRemoteQuery. >>>>>>>>> >>>>>>>>> The logic seems to have been modeled after ExecMaterial. ISTM, >>>>>>>>> that it should have been modeled after ExecScan because we fetch tuples, >>>>>>>>> and those which match the qual should be sent up. ExecMaterial is for >>>>>>>>> materializing and collecting and storing tuples. >>>>>>>>> >>>>>>>>> Can anyone explain? The reason for asking this is I am suspecting >>>>>>>>> a big memory leak in this code path. We are not using any expression >>>>>>>>> context nor we are freeing up tuples as we scan for the one which qualifies. 
>>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Nikhils >>>>>>>>> -- >>>>>>>>> StormDB - http://www.stormdb.com >>>>>>>>> The Database Cloud >>>>>>>>> >>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>> Get 100% visibility into Java/.NET code with AppDynamics Lite! >>>>>>>>> It's a free troubleshooting tool designed for production. >>>>>>>>> Get down to code-level detail for bottlenecks, with <2% overhead. >>>>>>>>> Download for free and get started troubleshooting in minutes. >>>>>>>>> >>>>>>>>> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >>>>>>>>> _______________________________________________ >>>>>>>>> Postgres-xc-developers mailing list >>>>>>>>> Pos...@li... >>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Best Wishes, >>>>>>>> Ashutosh Bapat >>>>>>>> EntepriseDB Corporation >>>>>>>> The Postgres Database Company >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Best Wishes, >>>>>>> Ashutosh Bapat >>>>>>> EntepriseDB Corporation >>>>>>> The Postgres Database Company >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> StormDB - http://www.stormdb.com >>>>>> The Database Cloud >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Postgres Database Company >>>>> >>>> >>>> >>>> >>>> -- >>>> StormDB - http://www.stormdb.com >>>> The Database Cloud >>>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > -- StormDB - http://www.stormdb.com The Database Cloud |
From: Nikhil S. <ni...@st...> - 2013-08-14 13:36:00
|
Using a tuplestore for data coming from RemoteQuery is kinda wrong and that's what has introduced this issue. Looks like just changing the memory context will not work as it interferes with the other functioning of the tuplestore :-| Regards, Nikhils On Wed, Aug 14, 2013 at 3:07 PM, Ashutosh Bapat < ash...@en...> wrote: > Yes, that's correct. > > My patch was not intended to fix this. This was added while fixing a bug > for parameterised quals on RemoteQuery I think. Check commits by Amit in > this area. > > > On Wed, Aug 14, 2013 at 3:03 PM, Nikhil Sontakke <ni...@st...>wrote: > >> Yeah, but AFAICS, even 1.1 (and head) *still* has a leak in it. >> >> Here's the snippet from RemoteQueryNext: >> >> if (tuplestorestate && !TupIsNull(scanslot)) >> tuplestore_puttupleslot(tuplestorestate, scanslot); >> >> I printed the current memory context inside this function, it is >> ""ExecutorState". This means that the tuple will stay around till the query >> is executing in its entirety! For large COPY queries this is bound to cause >> issues as is also reported by Hitoshi san on another thread. >> >> I propose that in RemoteQueryNext, before calling the >> tuplestore_puttupleslot we switch into >> scan_node->ps.ps_ExprContext's ecxt_per_tuple_memory context. It will get >> reset, when the next tuple has to be returned to the caller and the leak >> will be curtailed. Thoughts? >> >> Regards, >> Nikhils >> >> >> >> >> On Wed, Aug 14, 2013 at 11:33 AM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> There has been an overhaul in the planner (and corresponding parts of >>> executor) in 1.1, so it would be better if they move to 1.1 after GA. >>> >>> >>> On Wed, Aug 14, 2013 at 10:54 AM, Nikhil Sontakke <ni...@st...>wrote: >>> >>>> Ah, I see. >>>> >>>> I was looking at REL_1_0 sources. >>>> >>>> There are people out there using REL_1_0 as well. 
>>>> >>>> Regards, >>>> Nikhils >>>> >>>> >>>> On Tue, Aug 13, 2013 at 9:52 AM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> It should be part of 1.1 as well. It was done to support projection >>>>> out of RemoteQuery node. >>>>> >>>>> >>>>> On Tue, Aug 13, 2013 at 7:17 AM, Nikhil Sontakke <ni...@st...>wrote: >>>>> >>>>>> Hi Ashutosh, >>>>>> >>>>>> I guess you have changed it in pgxc head? I was looking at 103 and 11 >>>>>> branches and saw this. In that even ExecRemoteQuery seems to have an issue >>>>>> wherein it's not using the appropriate context. >>>>>> >>>>>> Regards, >>>>>> Nikhils >>>>>> >>>>>> Sent from my iPhone >>>>>> >>>>>> On Aug 12, 2013, at 9:54 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>> >>>>>> Welcome to the mess ;) and enjoy junk food. >>>>>> >>>>>> Sometime back, I have changed ExecRemoteQuery to be called in the >>>>>> same fashion as other Scan nodes. So, you will see ExecRemoteQuery calling >>>>>> ExecScan with RemoteQueryNext as the iterator. So, I assume your comment >>>>>> pertains to RemoteQueryNext and its minions and not ExecRemoteQuery per say! >>>>>> >>>>>> This code needs a lot of rework, removing duplications, using proper >>>>>> way of materialisation, central response handler and error handler etc. If >>>>>> we clean up this code, some improvements in planner (like using MergeAppend >>>>>> plan) for Sort, will be possible. Regarding materialisation, the code uses >>>>>> a linked list for materialising the rows from datanodes (in case the same >>>>>> connection needs to be given to other remote query node), which must be >>>>>> eating a lot of performance. Instead we should be using some kind of >>>>>> tuplestore there. We actually use tuplestore (as well) in the RemoteQuery >>>>>> node; the same method can be used. >>>>>> >>>>>> >>>>>> On Sat, Aug 10, 2013 at 10:48 PM, Nikhil Sontakke < >>>>>> ni...@st...> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Have a Query about ExecRemoteQuery. 
>>>>>>>
>>>>>>> The logic seems to have been modeled after ExecMaterial. ISTM that it should have been modeled after ExecScan, because we fetch tuples, and those which match the qual should be sent up. ExecMaterial is for materializing, collecting, and storing tuples.
>>>>>>>
>>>>>>> Can anyone explain? The reason for asking is that I suspect a big memory leak in this code path. We are not using any expression context, nor are we freeing up tuples as we scan for the one which qualifies.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Nikhils
>>>>>>> --
>>>>>>> StormDB - http://www.stormdb.com
>>>>>>> The Database Cloud
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Postgres-xc-developers mailing list
>>>>>>> Pos...@li...
>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
>>>>>>
>>>>>> --
>>>>>> Best Wishes,
>>>>>> Ashutosh Bapat
>>>>>> EnterpriseDB Corporation
>>>>>> The Postgres Database Company

--
StormDB - http://www.stormdb.com
The Database Cloud
|
From: Tomonari K. <kat...@po...> - 2013-08-14 10:56:21
|
Hi Ashutosh,

Thanks for your comment!

> The code changes are fine, but the comment in pgxc_collect_RTE_walker() seems too specific. I think it should read, "create a copy of query's range table, so that it can be linked with other RTEs in the collector's context."

I've revised the comment as you suggested.

> In the test file, isn't there any way you can add the offending statement without any wrapper function test_execute_direct_all_xc_nodes()? We should minimize the test code to use only the minimal set of features. Since this statement no longer gets into an infinite loop, please do not use the statement timeout.

I've gotten rid of the statement timeout. But I couldn't come up with any way to test this more simply, so the function remains the same as before the patch. Any ideas?

Here is the new patch. (against dec40008b3d689911566514614c5111c0a61327d)

regards,
-------------
NTT Software Corporation
Tomonari Katsumata
|