From: Koichi Suzuki (鈴木 幸市) <ko...@in...> - 2014-04-18 07:29:25
Do it in the reverse order.
First, exclude the node being removed from the set of nodes your tables are distributed or replicated on. You can do this with ALTER TABLE. For distributed tables, all the rows will be redistributed across the modified set of nodes. For replicated tables, the copy on the node being removed will simply be detached.
Then you can issue DROP NODE before you actually stop the node and clear its resources.
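The steps above might look like the following in SQL; the table and node names (t1, datanode3) are examples, not taken from the thread:

```sql
-- Remove datanode3 from the distribution of each affected table.
-- For a distributed table this redistributes its rows across the
-- remaining nodes; for a replicated table, the copy on datanode3
-- is simply detached.
ALTER TABLE t1 DELETE NODE (datanode3);

-- Once no table references the node any more, drop it from the catalog
-- and reload the connection pool so the change takes effect.
DROP NODE datanode3;
SELECT pgxc_pool_reload();
```

Only after this should the node's server be stopped and its resources released.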
---
Koichi Suzuki
On 2014/04/18 16:13, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote:
What if I just remove the datanode slave and add it again? Will all the data be copied to the slave?
Will it impact the master?
On Fri, Apr 18, 2014 at 6:29 AM, Koichi Suzuki <ko...@in...<mailto:ko...@in...>> wrote:
The impact on the master server is almost the same as building a slave with pg_basebackup in vanilla PostgreSQL. It sends all of the master's database files to the slave, together with its WAL. The advantage over the more primitive pg_start_backup()/pg_stop_backup() approach is that the data is read directly from the cache, so the impact on the I/O workload is smaller.
If you are concerned about the safety of the master's data, there is another option: stop the master, copy everything to the slave, then reconfigure the master to enable WAL shipping and the slave to run as a standby. In principle this should work and keeps the master's data safer, but I have not tested it yet.
Regards;
---
Koichi Suzuki
On 2014/04/17 20:23, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote:
Hi Koichi,
Is there any other quick solution to fix this issue?
1. Run checkpoint and vacuum full at the master.
I found the docs for performing VACUUM FULL, but I'm confused about how to run a checkpoint manually.
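(For reference, a checkpoint can be forced at any time with the plain SQL CHECKPOINT command, run as a superuser; something along these lines:)

```sql
-- Force an immediate checkpoint on the master (superuser required).
CHECKPOINT;

-- Then reclaim space. Note that VACUUM FULL takes an exclusive lock
-- on each table while it rewrites it.
VACUUM FULL;
```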
2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides this means).
Should I run this command from the datanode slave server? Something like:
pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432
And, very importantly, how will it impact the datanode master server? As of now I have only this master running on the server,
so I don't want to take any chances, and I just want to get a slave running for backup somehow.
Please advise
Regards
Juned Khan
On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote:
So I'll have to experiment with this. I really need that many connections.
Thanks for the suggestion @Michael
On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...<mailto:mic...@gm...>> wrote:
On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote:
> And i can use pgpool and pgbouncer with pgxc right ?
In front of the Coordinators, that's fine. But I am not sure about
putting them in front of the Datanodes, as XC has one extra connection
parameter to identify the type of node the connection comes from, and
a couple of additional message types to pass transaction IDs,
timestamps and snapshot data down from Coordinators to Datanodes
(and to other Coordinators as well, for DDL queries). If those message
types and/or connection parameters get filtered out by pgpool or
pgbouncer, you cannot use them. I've personally never tried it, but
the idea is worth an attempt to reduce the lock contention that a
too-high value of max_connections could cause.
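As a rough illustration of the pooling-in-front-of-a-Coordinator idea, a minimal pgbouncer configuration might look like this; the host, port, database and file names are made up, not from this thread:

```ini
; pgbouncer.ini -- pool client connections in front of an XC Coordinator
; (hypothetical addresses and names; tune pool sizes to your workload)
[databases]
mydb = host=192.168.1.10 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; session pooling is the safest mode when extra protocol state is involved
pool_mode = session
max_client_conn = 500
default_pool_size = 50
```

This lets many clients share a smaller number of server connections, so max_connections on the Coordinator can stay low.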
--
Michael
Postgres-xc-general mailing list
Pos...@li...<mailto:Pos...@li...>
https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com<http://www.inextrix.com/>