From: Juned K. <jkh...@gm...> - 2014-04-17 11:24:06
Hi Koichi,
Is there any other quick solution to fix this issue?
1. Run CHECKPOINT and VACUUM FULL at the master.
I found the docs for running VACUUM FULL, but I am confused about how to
run a checkpoint manually; my guess is sketched after point 2 below.
2. Rebuild the slave from scratch using pg_basebackup (pgxc_ctl provides
this means).
Should I run this command from the datanode slave server, something like:
pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17
--port=5432
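For point 1, is running a manual checkpoint just something like this from
psql connected to the datanode master? (The database name "mydb" is only a
placeholder for my own database.)

    # CHECKPOINT is a plain SQL command (superuser only)
    psql -h 192.168.1.17 -p 5432 -U postgres -d mydb -c "CHECKPOINT;"
    # VACUUM FULL rewrites the tables and takes exclusive locks, so I would
    # run it in a maintenance window
    psql -h 192.168.1.17 -p 5432 -U postgres -d mydb -c "VACUUM FULL;"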
And, very importantly, how will it impact the datanode master server? As of
now I have only this master running on that server, so I don't want to take
any chances; I just want to somehow get the slave running so I can take
backups. I have spelled out the full command I have in mind below.
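To be explicit, this is roughly what I would run from the slave server (the
paths and addresses are from my setup, and I am assuming the master already
has max_wal_senders > 0 and a replication entry in pg_hba.conf):

    pg_basebackup -U postgres -R -P -X stream --checkpoint=fast \
        -D /srv/pgsql/standby --host=192.168.1.17 --port=5432
    # -R writes a recovery.conf pointing back at the master
    # -X stream sends WAL over a second connection while the backup runs
    # --checkpoint=fast asks the master for an immediate checkpoint instead
    # of a spread one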
Please advise.
Regards
Juned Khan
On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote:
> So I will have to experiment with this. I really do need that many
> connections.
>
> Thanks for the suggestion @Michael
>
>
> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <
> mic...@gm...> wrote:
>
>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
>> > And I can use pgpool and pgbouncer with pgxc, right?
>> In front of the Coordinators, that's fine. But I am not sure about in
>> front of the Datanodes, as XC has one extra connection parameter to
>> identify the node type the connection comes from, plus a couple of
>> additional message types to pass down transaction ID, timestamp and
>> snapshot data from Coordinator to Datanodes (and actually to Coordinators
>> as well for DDL queries). If those message types and/or connection
>> parameters get filtered out by pgpool or pgbouncer, then you cannot use
>> them. I've never personally given it a try, though, but the idea is worth
>> an attempt to reduce the lock contention that a too-high value of
>> max_connections could cause.
>> --
>> Michael
>>