|
From: Sandeep G. <gup...@gm...> - 2014-01-31 17:51:21
|
Hi,

I was debugging an outstanding issue with pgxc (http://sourceforge.net/mailarchive/forum.php?thread_name=CABEZHFtr_YoWb22UAnPGQz8M5KqpwzbviYiAgq_%3DY...@ma...&forum_name=postgres-xc-general). I couldn't reproduce that error. But I do get this error.

LOG: database system is ready to accept connections
LOG: autovacuum launcher started
LOG: sending cancel to blocking autovacuum PID 17222
DETAIL: Process 13896 waits for AccessExclusiveLock on relation 16388 of database 12626.
STATEMENT: drop index mdn
ERROR: canceling autovacuum task
CONTEXT: automatic analyze of table "postgres.public.la_directednetwork"
PreAbort Remote

It seems to be a deadlock issue and may be related to the earlier problem as well.
Please let me know your comments.

-Sandeep
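For anyone chasing a lock wait like the one above, the OIDs in the DETAIL line can be resolved with the ordinary PostgreSQL catalogs on the node that logged it. A minimal sketch (database name and connection details are placeholders):

  # Resolve relation OID 16388 from the DETAIL line to a table/index name,
  # then see which backends hold or wait for locks on it.
  psql -d postgres -c "SELECT 16388::regclass;"
  psql -d postgres -c "SELECT pid, mode, granted FROM pg_locks WHERE relation = 16388;"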
|
From: Koichi S. <koi...@gm...> - 2014-02-01 08:22:15
|
Did you configure the XC cluster manually? If so, could you share how you did it?

To save you effort, pgxc_ctl provides a simpler way to configure and run an XC cluster. It is a contrib module, and the documentation can be found at http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html

Regards;
---
Koichi Suzuki
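For readers unfamiliar with the tool, the pgxc_ctl workflow is roughly as follows once the cluster layout has been described in its configuration file. This is only a sketch; the linked documentation is the authoritative reference for the exact commands and the pgxc_ctl.conf contents:

  # Sketch only: assumes pgxc_ctl (from contrib/pgxc_ctl) is installed and that
  # hosts, ports and data directories are described in pgxc_ctl.conf.
  pgxc_ctl init all       # initialize GTM, coordinators and datanodes, then start them
  pgxc_ctl monitor all    # check that every component is running
  pgxc_ctl stop all       # stop the whole cluster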
|
From: Sandeep G. <gup...@gm...> - 2014-02-01 18:02:01
|
Hi Koichi,

Thank you for looking into this. I did set up pgxc manually. I have a script that performs:

1. initdb and initgtm for the coordinator and gtm respectively
2. changes in the config file of gtm to set up the port numbers
3. launching gtm and launching the coordinator
4. then I ssh into the remote machine and launch 4 datanode instances (ports configured appropriately)
5. finally, I add the datanodes to the coordinator, followed by pgxc_pool_reload

I will take a look into pgxc_ctl. I would say that the deadlock happens 1 out of 10 times. Not sure if that is helpful.

-Sandeep
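The bring-up described above corresponds roughly to the following. Every path, host name, node name and port below is a placeholder, and the GTM/pooler settings are the usual ones for an XC 1.1 setup rather than a transcript of the actual script:

  # --- coordinator + GTM host (steps 1-3); placeholder paths and ports ---
  initgtm -Z gtm -D /data/gtm
  initdb -D /data/coord1 --nodename coord1
  echo "port = 6666"          >> /data/gtm/gtm.conf
  echo "pooler_port = 6667"   >> /data/coord1/postgresql.conf
  echo "gtm_host = 'gtmhost'" >> /data/coord1/postgresql.conf
  echo "gtm_port = 6666"      >> /data/coord1/postgresql.conf
  gtm_ctl start -Z gtm -D /data/gtm
  pg_ctl start -Z coordinator -D /data/coord1

  # --- remote host (step 4), repeated for each of the 4 datanodes ---
  initdb -D /data/dn1 --nodename datanode1
  echo "port = 15432"         >> /data/dn1/postgresql.conf
  echo "gtm_host = 'gtmhost'" >> /data/dn1/postgresql.conf
  echo "gtm_port = 6666"      >> /data/dn1/postgresql.conf
  pg_ctl start -Z datanode -D /data/dn1

  # --- back on the coordinator (step 5): register datanodes and reload the pool ---
  psql -d postgres -c "CREATE NODE datanode1 WITH (TYPE = 'datanode', HOST = 'dnhost', PORT = 15432);"
  psql -d postgres -c "SELECT pgxc_pool_reload();"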
|
From: 鈴木 幸市 <ko...@in...> - 2014-02-03 01:03:04
|
You need to import the catalog from an existing coordinator/datanode, depending on what node you are adding. You should run pg_dumpall and psql while the node being added is in a specific mode. The pgxc_ctl source code will show you what it does when adding/removing nodes.

The pgxc_ctl source code can be found in the contrib/pgxc_ctl directory, and the following functions may help:

1) add_coordinatorMaster(), add_coordinatorSlave(), remove_coordinatorMaster(), and remove_coordinatorSlave() in coord_cmd.c,
2) add_datanodeMaster(), add_datanodeSlave(), remove_datanodeMaster() and remove_datanodeSlave() in datanode_cmd.c, and
3) add_gtmSlave(), add_gtmProxy(), remove_gtmSlave(), remove_gtmProxy() and reconnect_gtm_proxy() in gtm_cmd.c

Good luck.
---
Koichi Suzuki
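A heavily abridged sketch of what adding a datanode involves, for orientation only. The node name, hosts and ports are placeholders, the "specific mode" mentioned above is deliberately not spelled out here, and add_datanodeMaster() in contrib/pgxc_ctl/datanode_cmd.c remains the authoritative sequence:

  # Initialize the new datanode (placeholder name/path), then start it in the
  # special mode required for catalog import -- copy the exact pg_ctl options
  # from add_datanodeMaster().
  initdb -D /data/dn5 --nodename datanode5

  # Import the catalog from an existing node, then restart dn5 as a normal datanode.
  pg_dumpall -h coordhost -p 5432 > /tmp/catalog.sql
  psql -h dnhost -p 15433 -d postgres -f /tmp/catalog.sql

  # Register the new node on every coordinator and reload the connection pool.
  psql -h coordhost -p 5432 -d postgres -c "CREATE NODE datanode5 WITH (TYPE = 'datanode', HOST = 'dnhost', PORT = 15433);"
  psql -h coordhost -p 5432 -d postgres -c "SELECT pgxc_pool_reload();"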
|
From: Sandeep G. <gup...@gm...> - 2014-02-03 01:58:35
|
Hi Koichi,

I can try pgxc_ctl as well, but I am not sure how it will help with the current issue I am facing, or with the error I was facing a couple of months back with the "tuple not found" error: http://postgresql.1045698.n5.nabble.com/quot-Tuple-not-found-error-quot-during-Index-creation-td5782462.html

I will post a follow-up on the tuple not found error as well. The problem with debugging the tuple not found error was that we couldn't reproduce it. I can do so now with some consistency, but I am still unsure of any short-term and long-term fixes. Any advice on this would be very helpful.

Thanks
-Sandeep
|
From: 鈴木 幸市 <ko...@in...> - 2014-02-03 02:58:35
|
I'm afraid this is caused by a different reason. Sorry for the late response.

I'm afraid this is an XC-specific problem, not a PG one. It's helpful if you set log_error_verbosity to VERBOSE, which will let you know what source code is involved in such an error.

Best;
---
Koichi Suzuki

# It is not a good idea to post Postgres-XC issues to the Postgres ML.
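Turning this on is a one-line change per node; log_error_verbosity is a standard PostgreSQL setting, and a reload (no restart) is enough. The data-directory path below is a placeholder:

  # Enable verbose logging (adds SQLSTATE codes and source LOCATION lines)
  # on each coordinator/datanode.
  echo "log_error_verbosity = verbose" >> /data/coord1/postgresql.conf
  pg_ctl reload -D /data/coord1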
|
From: Ashutosh B. <ash...@en...> - 2014-02-03 04:49:50
|
Hi Sandeep,

Can you please check if a similar error happens on vanilla PG. It may be an application + auto-vacuum error, which can happen in PG as well and might be harmless. It's auto-vacuum being cancelled; auto-vacuum will run again during the next iteration.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
From: Sandeep G. <gup...@gm...> - 2014-02-03 18:13:23
|
LOG: 00000: database system was shut down at 2014-02-03 11:42:41 EST
LOCATION: StartupXLOG, xlog.c:6348
LOG: 00000: database system is ready to accept connections
LOCATION: reaper, postmaster.c:2560
LOG: 00000: autovacuum launcher started
LOCATION: AutoVacLauncherMain, autovacuum.c:407
WARNING: 01000: unexpected EOF on datanode connection
LOCATION: pgxc_node_receive, pgxcnode.c:463
ERROR: XX000: failed to send PREPARE TRANSACTION command to the node 16384
LOCATION: pgxc_node_remote_prepare, execRemote.c:1629
STATEMENT: create INDEX mdn on la_directednetwork(head)
WARNING: 01000: unexpected EOF on datanode connection
LOCATION: pgxc_node_receive, pgxcnode.c:463
LOG: 00000: Failed to ABORT at node 16384 Detail: unexpected EOF on datanode connection
LOCATION: pgxc_node_remote_abort, execRemote.c:2039
LOG: 00000: Failed to ABORT an implicitly PREPARED transaction status - 7
LOCATION: pgxc_node_remote_abort, execRemote.c:2070
ERROR: 42704: index "mdn" does not exist
LOCATION: DropErrorMsgNonExistent, tablecmds.c:746
STATEMENT: drop index mdn
WARNING: 01000: Unexpected data on connection, cleaning.
LOCATION: acquire_connection, poolmgr.c:2141
LOG: 08006: failed to connect to Datanode
LOCATION: grow_pool, poolmgr.c:2259
WARNING: 01000: can not connect to node 16384
LOCATION: acquire_connection, poolmgr.c:2153
LOG: 53000: failed to acquire connections
LOCATION: pool_recvfds, poolcomm.c:623
STATEMENT: create INDEX mdn on la_directednetwork(head)
ERROR: 53000: Failed to get pooled connections
LOCATION: get_handles, pgxcnode.c:1969
STATEMENT: create INDEX mdn on la_directednetwork(head)
ERROR: 42704: index "mdn" does not exist
LOCATION: DropErrorMsgNonExistent, tablecmds.c:746
STATEMENT: drop index mdn
[the block from "failed to connect to Datanode" through "drop index mdn" repeats for every subsequent CREATE INDEX / DROP INDEX attempt]
WARNING: 25P01: there is no transaction in progress
LOCATION: EndTransactionBlock, xact.c:4086
LOG: 08006: failed to connect to Datanode
LOCATION: grow_pool, poolmgr.c:2259
WARNING: 01000: can not connect to node 16384
LOCATION: acquire_connection, poolmgr.c:2153
LOG: 53000: failed to acquire connections
LOCATION: pool_recvfds, poolcomm.c:623
STATEMENT: CHECKPOINT
ERROR: 53000: Failed to get pooled connections
LOCATION: get_handles, pgxcnode.c:1969
STATEMENT: CHECKPOINT
|
From: Sandeep G. <gup...@gm...> - 2014-02-03 18:20:53
|
Hi Ashutosh,

For us the app + autovacuum interaction is quite harmful. We are not able to run the application because the index creation gets aborted in the middle. The datanodes crash.

We could somehow restart the datanodes and start the index creation again, but my feeling is that it will happen quite often.

I have a related question: is there any way to know if a command has failed? Usually we fire a command using psql and move to the next command. Is there any way to know whether the previous command failed or was a success?

Thanks.
Sandeep

On Mon, Feb 3, 2014 at 1:13 PM, Sandeep Gupta <gup...@gm...> wrote:
> Hi Ashutosh, Koichi,
>
> Initially my feeling was that this was a postgres bug. That is why I posted it in the postgres community. However, I now feel that it is due to the changes made in XC.
>
> I have started the same test on standalone postgres. So far it hasn't crashed. My feeling is that it won't. If in case it does, I will report accordingly.
>
> As requested, I started the test with the verbose log on. Attached are the log files for the coordinator and the datanodes. There are several redundant messages that get printed, such as "checkpoint too often". Please use some filters etc. to view the log file. I thought it was best to send across the whole file.
>
> To clarify, I create a very large table (using copy) and then repeatedly create and drop an index. I understand this is not the actual workload, but that was the only way to reproduce the error.
>
> The other complication is that in the real system we get two kinds of errors, "tuple not found" and this deadlock. I feel that they are connected.
>
> Let me know if the log files help, or if there are any other suggestions you may have.
>
> -Sandeep
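The reproduction described in the quoted message (one large COPY, then CREATE INDEX / DROP INDEX in a loop) can be driven with a few lines of shell. The table and index names match the statements in the logs, while the data file and loop count are placeholders:

  # Load the table once (placeholder CSV file), then hammer create/drop index.
  psql -d postgres -c "\copy la_directednetwork FROM 'la_edges.csv' WITH CSV"
  for i in $(seq 1 100); do
      psql -d postgres -v ON_ERROR_STOP=1 \
           -c "CREATE INDEX mdn ON la_directednetwork(head); DROP INDEX mdn;" \
        || { echo "failed on iteration $i"; break; }
  done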
|
From: Koichi S. <koi...@gm...> - 2014-02-04 00:46:45
|
Unless you invoke the statement with "CONCURRENTLY", psql asking for the next command means the previous command completed. At present, XC does not support "CONCURRENTLY".

Please let us find a time to test "CREATE INDEX" in parallel with autovacuum.

Do you need to create the index on the fly, I mean as a part of usual database operation? If not, there could be some more workarounds for this.

Regards;
---
Koichi Suzuki
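On the question of detecting failures from a script: psql's exit status already reports whether the command it was given failed, and ON_ERROR_STOP makes that reliable for multi-statement files. This is standard psql behaviour, not XC-specific; the file name below is a placeholder:

  # Single command: a non-zero exit status means the statement failed.
  psql -d postgres -c "CREATE INDEX mdn ON la_directednetwork(head);" \
    || echo "index build failed"

  # Script of commands: stop at the first error and return a non-zero status.
  psql -d postgres -v ON_ERROR_STOP=1 -f build_index.sql \
    || echo "build_index.sql failed"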
|
From: Sandeep G. <gup...@gm...> - 2014-02-04 01:49:06
|
Hi Koichi,

We are not invoking the statements concurrently.

On second thought, my question about "how to know if a command failed", etc. doesn't make sense. What is happening right now is that index creation fails with the datanode shutting down.

We did try a workaround by turning off autovacuum, but then the memory usage hits 100% and results in the tuple not found error.

We don't have to create the index on the fly. We would just like to get the index built somehow.

-Sandeep
|
From: Sandeep G. <gup...@gm...> - 2014-02-04 05:33:51
|
Hi Koichi,

Just wanted to add that I have sent across the datanode and coordinator log files in my previous email. My hope is that they may give some insight into what could be amiss, and any ideas for a workaround.

Thanks.
Sandeep
|
From: Koichi S. <koi...@gm...> - 2014-02-04 13:34:34
|
I looked at the log at the datanode and found that checkpoint is running too frequently. The default checkpoint timeout is 5min. In your case, a checkpoint runs almost every five seconds (not minutes) in each datanode. That is extraordinary.

Could you try to tweak each datanode's postgresql.conf as follows?

1. A longer period for checkpoint_timeout. The default is 5min; 15min will be okay.
2. A larger value for checkpoint_completion_target. The default is 0.5, which should be okay. A larger value, such as 0.7, will make checkpoints work more smoothly.
3. A larger value of checkpoint_segments. The default is 3. Because your application updates the database very frequently, this number of checkpoint segments will be exhausted very easily. Increase this to, say, 30 or more. Each checkpoint segment (in fact, a WAL file) consumes 16MB of your file space. I hope this is no problem for you at all.

I'm afraid too-frequent checkpoints cause this kind of error (even with vanilla PostgreSQL), and this situation is what you should avoid both in PG and XC.

I would like to know if things are improved.

Best;
---
Koichi Suzuki
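In postgresql.conf terms, the suggested datanode settings are the following (values taken from the advice above; the data-directory path is a placeholder, and a reload is enough for these parameters to take effect):

  conf=/data/dn1/postgresql.conf
  echo "checkpoint_timeout = 15min"          >> $conf   # default 5min
  echo "checkpoint_completion_target = 0.7"  >> $conf   # default 0.5
  echo "checkpoint_segments = 30"            >> $conf   # default 3; each segment is a 16MB WAL file
  pg_ctl reload -D /data/dn1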
|
From: Sandeep G. <gup...@gm...> - 2014-02-05 00:20:18
|
Hi Koichi,

Thank you for suggesting these parameters. Initially we did play around with these; however, we used significantly higher values, such as checkpoint_timeout = 30min etc. Essentially we were trying parameters that would avoid interference from autovacuum in the first place. The reason for using low values was to recreate the problem in the test setup.

I did the regression tests with the new settings and it is certainly better. It does crash, but not so often. I will try it in the application and see if it runs in the main application.

Also, I am running the same tests over a standalone PG (9.3 version I believe), and so far it has crashed. I haven't been too careful to make sure I use the exact same values for the checkpoint parameters. In the next email I will attach log files for review.

Thanks.
Sandeep
|
From: 鈴木 幸市 <ko...@in...> - 2014-02-05 01:32:38
|
Autovacuum is separate from these checkpoint setups. Because a checkpoint is launched almost every five seconds, it is very likely that the active WAL files fill up very easily with very heavy updates (including CREATE INDEX). Unless you change this parameter, the checkpoint_timeout setting will not take effect. When the WAL files are full, a checkpoint will be launched, and no database update will be allowed until there is room to write to the WAL files.

Regards;
---
Koichi Suzuki
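One way to check whether checkpoints on a datanode are being forced by WAL consumption rather than by the timer, and therefore whether raising checkpoint_segments is helping, is standard PostgreSQL instrumentation. The path and port below are placeholders:

  # Log every checkpoint together with what triggered it ("xlog" = WAL pressure, "time" = timer).
  echo "log_checkpoints = on" >> /data/dn1/postgresql.conf
  pg_ctl reload -D /data/dn1

  # Compare timer-driven vs. requested (forced) checkpoints since the stats were reset.
  psql -p 15432 -d postgres -c "SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;"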
|
From: Sandeep G. <gup...@gm...> - 2014-02-05 00:20:59
|
LOG: 00000: database system was shut down at 2014-02-04 13:31:47 EST
LOCATION: StartupXLOG, xlog.c:6348
LOG: 00000: database system is ready to accept connections
LOCATION: reaper, postmaster.c:2560
LOG: 00000: autovacuum launcher started
LOCATION: AutoVacLauncherMain, autovacuum.c:407
WARNING: 25P01: there is no transaction in progress
LOCATION: EndTransactionBlock, xact.c:4086
ERROR: 42P07: relation "la_directednetwork" already exists
LOCATION: heap_create_with_catalog, heap.c:1408
STATEMENT: create table LA_directednetwork (head int, tail int, tail_type int, duration int) DISTRIBUTE BY HASH(head)
WARNING: 01000: unexpected EOF on datanode connection
LOCATION: pgxc_node_receive, pgxcnode.c:463
ERROR: XX000: failed to send PREPARE TRANSACTION command to the node 16384
LOCATION: pgxc_node_remote_prepare, execRemote.c:1629
STATEMENT: create INDEX mdn on la_directednetwork(head)
WARNING: 01000: unexpected EOF on datanode connection
LOCATION: pgxc_node_receive, pgxcnode.c:463
LOG: 00000: Failed to ABORT at node 16384 Detail: unexpected EOF on datanode connection
LOCATION: pgxc_node_remote_abort, execRemote.c:2039
LOG: 00000: Failed to ABORT an implicitly PREPARED transaction status - 7
LOCATION: pgxc_node_remote_abort, execRemote.c:2070
ERROR: 42704: index "mdn" does not exist
LOCATION: DropErrorMsgNonExistent, tablecmds.c:746
STATEMENT: drop index mdn
WARNING: 01000: Unexpected data on connection, cleaning.
LOCATION: acquire_connection, poolmgr.c:2141
LOG: 08006: failed to connect to Datanode
LOCATION: grow_pool, poolmgr.c:2259
WARNING: 01000: can not connect to node 16384
LOCATION: acquire_connection, poolmgr.c:2153
LOG: 53000: failed to acquire connections
LOCATION: pool_recvfds, poolcomm.c:623
STATEMENT: create INDEX mdn on la_directednetwork(head)
ERROR: 53000: Failed to get pooled connections
LOCATION: get_handles, pgxcnode.c:1969
STATEMENT: create INDEX mdn on la_directednetwork(head)
ERROR: 42704: index "mdn" does not exist
LOCATION: DropErrorMsgNonExistent, tablecmds.c:746
STATEMENT: drop index mdn
[the block from "failed to connect to Datanode" through "drop index mdn" repeats for every subsequent CREATE INDEX / DROP INDEX attempt]
WARNING: 25P01: there is no transaction in progress
LOCATION: EndTransactionBlock, xact.c:4086
LOG: 08006: failed to connect to Datanode
LOCATION: grow_pool, poolmgr.c:2259
WARNING: 01000: can not connect to node 16384
LOCATION: acquire_connection, poolmgr.c:2153
LOG: 53000: failed to acquire connections
LOCATION: pool_recvfds, poolcomm.c:623
STATEMENT: CHECKPOINT
ERROR: 53000: Failed to get pooled connections
LOCATION: get_handles, pgxcnode.c:1969
STATEMENT: CHECKPOINT