|
From: Juned K. <jkh...@gm...> - 2014-04-16 12:53:23
|
Hi All,

When I tried to set max_connections to 1000, it gave me this error:

FATAL: could not create semaphores: No space left on device
DETAIL: Failed system call was semget(20008064, 17, 03600).

From the docs I learned that to use that many connections I have to modify the kernel configuration.

But now I am trying to set 500 connections instead of 1000, and it gives the error below:

LOG: database system was interrupted while in recovery at log time 2014-04-16 09:06:06 WAT
HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
LOG: entering standby mode
LOG: restored log file "000000010000000B00000011" from archive
FATAL: hot standby is not possible because max_connections = 500 is a lower setting than on the master server (its value was 1000)
LOG: startup process (PID 16829) exited with exit code 1
LOG: aborting startup due to startup process failure

It says the master value is 1000, but I have set it to 500. It seems to be reading old values, I guess:

database=# show max_connections;
 max_connections
-----------------
 500
(1 row)

I have restarted all the components several times, but no luck.

Can anyone suggest another way to fix this, like clearing a cache or something, so it reads the correct values?

Regards
Juned Khan |
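For context, the semget() failure above is a kernel semaphore limit, not a disk-space problem: PostgreSQL allocates one semaphore set of 17 semaphores (matching the semget() call) per 16 server processes, so raising max_connections can exceed the kernel's SEMMNI/SEMMNS limits. A minimal sizing sketch, assuming the formulas from the PostgreSQL manual's kernel-resources section and a default autovacuum worker count:

```shell
#!/bin/sh
# Rough sizing helper for the kernel semaphore limits behind the semget()
# failure above. Assumes the PostgreSQL manual's formulas: one semaphore set
# of 17 semaphores per 16 server processes. AUTOVACUUM_WORKERS is an
# assumed default; adjust to your configuration.
MAX_CONNECTIONS=1000
AUTOVACUUM_WORKERS=3
PROCS=$((MAX_CONNECTIONS + AUTOVACUUM_WORKERS + 4))
SEMMNI=$(( (PROCS + 15) / 16 ))   # number of semaphore sets (ceil division)
SEMMNS=$(( SEMMNI * 17 ))         # total semaphores across all sets
echo "kernel.sem must allow at least SEMMNI=$SEMMNI and SEMMNS=$SEMMNS"
```

On Linux the current limits can be inspected with `ipcs -ls` and raised via the `kernel.sem` sysctl.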
|
From: 鈴木 幸市 <ko...@in...> - 2014-04-17 01:07:50
|
The setting max_connections = 1000 may have been written to WAL and shipped to the slave; that could be the cause of the issue. You can do the following to recover:

1. Run CHECKPOINT and VACUUM FULL at the master.
2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides this).
3. Start the slave.

Regards;
---
Koichi Suzuki

On 2014/04/16 21:53, Juned Khan <jkh...@gm...> wrote:
> I have restarted all components several times but no luck.
>
> Anyone please suggest if any other way is there to fix this like clearing cache or something so it can read correct values.
> Regards
> Juned Khan
------------------------------------------------------------------------------
Postgres-xc-general mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
|
From: Michael P. <mic...@gm...> - 2014-04-17 01:11:13
|
On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
> When i tried to set max_connection value to 1000 it gave me this error

That's a lot, man! Concurrency between sessions is going to blow up your performance.

> Anyone please suggest if any other way is there to fix this like clearing
> cache or something so it can read correct values.

Update the master first, then the slave.
--
Michael |
|
From: Juned K. <jkh...@gm...> - 2014-04-17 04:13:24
|
Hi Michael,

The reason I had to set max_connections that high is that it was giving me this error:

FATAL: sorry, too many clients already at /ust/local/...

Should I use pgpool to handle that many connections?

On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier <mic...@gm...> wrote:
> Update the master first, then the slave.
> --
> Michael |
|
From: Juned K. <jkh...@gm...> - 2014-04-17 04:16:12
|
Hi Koichi,

Can I find the WAL file that contains this value, edit it, and then try to start the datanode slave again? How would that impact the existing components; I mean, what are the risks?

On Thu, Apr 17, 2014 at 9:43 AM, Juned Khan <jkh...@gm...> wrote:
> should i use pgpool to handle that much connection?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
|
From: Michael P. <mic...@gm...> - 2014-04-17 04:37:21
|
On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote:
> should i use pgpool to handle that much connection?

pgbouncer is a better choice if you just want connection pooling. pgpool is a Swiss army knife with more features than you may need.
--
Michael |
|
From: Juned K. <jkh...@gm...> - 2014-04-17 05:51:57
|
And I can use pgpool and pgbouncer with pgxc, right?

On Thu, Apr 17, 2014 at 10:07 AM, Michael Paquier <mic...@gm...> wrote:
> pgbouncer is a better choice if you just want connection pooling. pgpool
> is a Swiss army knife with more features than you may need.
> --
> Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
|
From: Michael P. <mic...@gm...> - 2014-04-17 06:10:27
|
On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
> And i can use pgpool and pgbouncer with pgxc right ?

In front of the Coordinators, that's fine. But I am not sure about in front of the Datanodes: XC has one extra connection parameter to identify the type of node a connection comes from, and a couple of additional message types to pass down transaction ID, timestamp, and snapshot data from Coordinator to Datanodes (and to Coordinators as well, for DDL queries). If those message types and/or connection parameters get filtered out by pgpool or pgbouncer, you cannot use them. I have personally never given it a try, but the idea is worth an attempt to reduce the lock contention that could be caused by too high a value of max_connections.
--
Michael |
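If pooling in front of a Coordinator is attempted, a minimal pgbouncer configuration might look like the sketch below. The host, port, pool sizes, and database name are all assumptions, and, per the caveat above, whether XC's extra startup parameter and message types survive pgbouncer is untested:

```shell
#!/bin/sh
# Write a hypothetical pgbouncer.ini for pooling in front of an XC
# Coordinator. All names, hosts, ports and pool sizes are illustrative
# assumptions; whether XC's extra connection parameter passes through
# pgbouncer is untested.
cat > /tmp/pgbouncer.ini <<'EOF'
[databases]
; route "database" to the Coordinator, not to a Datanode
database = host=192.168.1.17 port=5432

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
; session pooling is the least invasive mode
pool_mode = session
; clients pgbouncer will accept
max_client_conn = 1000
; actual server connections per database/user pair
default_pool_size = 50
EOF
echo "wrote /tmp/pgbouncer.ini"
```

The point of the split is that clients can open up to max_client_conn sessions while the Coordinator only ever sees default_pool_size connections, instead of raising max_connections on the server.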
|
From: Juned K. <jkh...@gm...> - 2014-04-17 11:13:29
|
So I will have to experiment with this; I really need that many connections.

Thanks for the suggestion, @Michael.

On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...> wrote:
> In front of the Coordinators, that's fine.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
|
From: Juned K. <jkh...@gm...> - 2014-04-17 11:24:06
|
Hi Koichi,

Is there any other, shorter way to fix this issue?

1. Run CHECKPOINT and VACUUM FULL at the master.
I found the docs for VACUUM FULL, but I am not sure how to run a checkpoint manually.

2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides this).
Should I run this command from the datanode slave server, something like:
pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432

And, very importantly, how will it impact the datanode master server? Right now this master is the only one running on the server, so I don't want to take any chances; I just want to get the slave started for backup.

Please advise.

Regards
Juned Khan
On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote:
> so i have to do experiment for this. i really need that much connection.
|
|
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 00:59:43
|
The impact on the master server is almost the same as building a slave with pg_basebackup on vanilla PG: it sends all of the master's database files to the slave, together with its WAL. The advantage over the more primitive pg_start_backup()/pg_stop_backup() approach is that the data is read directly from the cache, so the impact on the I/O workload is smaller.

If you are concerned about the safety of the master's data, there is another option: stop the master, copy everything over to the slave, then reconfigure the master to enable WAL shipping and configure the copy to run as the slave. In principle this should work, and it keeps the master's data safer, but I have not tested it yet.
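To make the earlier recipe concrete, here is a dry sketch of the commands involved; they are only printed, not executed, and the host, port, user, and target directory are the values quoted in this thread, not a recommendation. Note that CHECKPOINT is an ordinary SQL statement, so a manual checkpoint is simply issued through psql:

```shell
#!/bin/sh
# Dry sketch of the rebuild sequence discussed in this thread. The commands
# are assembled and printed only; host, port, user and directories are the
# placeholder values quoted in the thread.
CONN="-h 192.168.1.17 -p 5432 -U postgres -d database"
# CHECKPOINT is a plain SQL statement, so a manual checkpoint is just:
STEP1="psql $CONN -c 'CHECKPOINT;'"
STEP2="psql $CONN -c 'VACUUM FULL;'"
# -R writes a recovery.conf in the target directory so the copy starts
# following the master as a standby
STEP3="pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432"
printf '%s\n' "$STEP1" "$STEP2" "$STEP3"
```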
Regards;
---
Koichi Suzuki
On 2014/04/17 20:23, Juned Khan <jkh...@gm...> wrote:
> And very important thing how it will impact on datanode master server ? as of now i have only this master running on server.
> so as of now i just don't want to take a chance, and somehow want to start slave for backup.
|
|
From: Juned K. <jkh...@gm...> - 2014-04-18 07:14:03
|
What if I just remove the datanode slave and add it again? Will all the data be copied to the slave? Will it impact the master?

On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote:
> The impact to the master server is almost the same as in the case of
> vanilla PG to build a slave using pg_basebackup.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
|
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 07:29:25
|
Reverse order.

First, exclude the node you are removing from the set of nodes your tables are distributed or replicated on. You can do this with ALTER TABLE. For distributed tables, all the rows will be redistributed over the modified set of nodes; for replicated tables, the copy on the removed node will simply be detached.

Then you can issue DROP NODE, before you actually stop the node and clear its resources.
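The order above can be sketched as statements (printed only; "my_table" and "datanode1" are hypothetical names):

```shell
#!/bin/sh
# Dry sketch of the removal order described above: redistribute/detach with
# ALTER TABLE first, then DROP NODE. "my_table" and "datanode1" are
# hypothetical names; the statements are printed, not executed.
SQL1="ALTER TABLE my_table DELETE NODE (datanode1);"
SQL2="DROP NODE datanode1;"
printf '%s\n' "$SQL1" "$SQL2"
```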
---
Koichi Suzuki
On 2014/04/18 16:13, Juned Khan <jkh...@gm...> wrote:
> what if i just remove datanode slave and add it again ? all data will be copied to slave ?
> will it impact on master ?
|
|
From: Juned K. <jkh...@gm...> - 2014-04-18 07:55:05
|
You mean I first have to modify my table structure? Say I have replicated tables; do I then have to remove "DISTRIBUTE BY REPLICATION", right?

And then I have to run the DROP NODE command to remove the datanode slave. Won't pgxc_ctl do it for me? i.e.

PGXC remove datanode slave datanode1
PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave

And how will the database be copied after the datanode slave is added?

On Fri, Apr 18, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote:
> Reverse order.
>
> First, exclude removing node from the node set your tables are distributed
> or replicated. You can do this with ALTER TABLE.

-- |
|
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 09:00:38
|
No, ALTER TABLE tablename DELETE NODE (nodename);

This is very similar to what you should do to add the node:

ALTER TABLE tablename ADD NODE (nodename);

It is included in my demo story at the last PG Open presentation.

If you're not sure, you can test it first on a smaller work table.

Regards;
---
Koichi Suzuki

On 2014/04/18 16:54, Juned Khan <jkh...@gm...> wrote:

> You mean first i have to modify my table structure lets consider i
> have replicated tables so i have to remove "DISTRIBUTE by
> REPLICATION" right ?
>
> And then i have to run DROP node command to remove datanode slave
>
> pgxc_ctl won't do it for me ?
> i.e
> PGXC remove datanode slave datanode1
>
> PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave
>
> And how it will copy database after adding datanode slave ?
>
> On Fri, Apr 18, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote:
>> Reverse order.
>>
>> First, exclude the removing node from the node set your tables are distributed
>> or replicated on. You can do this with ALTER TABLE. All the rows in the
>> table will be redistributed to the modified set of nodes for distributed tables.
>> In the case of replicated tables, tables on the removing node will just be
>> detached.
>>
>> Then, you can issue DROP NODE before you really stop and clear the node
>> resource.
>> ---
>> Koichi Suzuki
>>
>> On 2014/04/18 16:13, Juned Khan <jkh...@gm...> wrote:
>>
>> what if i just remove datanode slave and add it again ? all data will be
>> copied to slave ?
>> will it impact on master ?
>>
>> On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote:
>>>
>>> The impact to the master server is almost the same as in the case of
>>> vanilla PG building a slave using pg_basebackup. It sends all the master
>>> database file resources to the slave together with its WAL. The good thing
>>> compared with the more primitive way of pg_start_backup() and pg_stop_backup()
>>> is that the data are read directly from the cache, so the impact on I/O
>>> workload will be smaller.
>>>
>>> If you're concerned about the safety of the master resource, there could be
>>> another means, for example, to stop the master, copy everything to the slave,
>>> change the master to enable WAL shipping and the slave to run as the slave.
>>> In principle, this should work and you can keep the master resource safer,
>>> but I've not tested this yet.
>>>
>>> Regards;
>>> ---
>>> Koichi Suzuki
>>>
>>> On 2014/04/17 20:23, Juned Khan <jkh...@gm...> wrote:
>>>
>>> Hi koichi,
>>>
>>> Is there any other short solution to fix this issue ?
>>>
>>> 1. Run checkpoint and vacuum full at the master,
>>> Found the docs to perform vacuum full but i have confusion about how
>>> to run checkpoint manually.
>>>
>>> 2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides
>>> this means).
>>> should i run this command from datanode slave server something like (
>>> pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17
>>> --port=5432)
>>>
>>> And very important thing how it will impact on datanode master server ? as
>>> of now i have only this master running on server.
>>> so as of now i just don't want to take a chance, and somehow want to start
>>> slave for backup.
>>>
>>> Please advice
>>>
>>> Regards
>>> Juned Khan
>>>
>>> On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote:
>>>>
>>>> so i have to do experiment for this. i really need that much connection.
>>>>
>>>> Thanks for the suggestion @Michael
>>>>
>>>> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier
>>>> <mic...@gm...> wrote:
>>>>>
>>>>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
>>>>>> And i can use pgpool and pgbouncer with pgxc right ?
>>>>> In front of the Coordinators, that's fine. But I am not sure in front
>>>>> of the Datanodes, as XC has one extra connection parameter to identify
>>>>> the node type a connection is from, and a couple of
>>>>> additional message types to pass down transaction ID, timestamp and
>>>>> snapshot data from Coordinator to Datanodes (actually Coordinators as
>>>>> well for DDL queries). If those message types and/or connection
>>>>> parameters get filtered by pgpool or pgbouncer, well, you cannot use
>>>>> them. I've personally never given it a try, but the idea is worth
>>>>> an attempt to reduce lock contention that could be caused by a too
>>>>> high value of max_connections.
>>>>> --
>>>>> Michael
>>>
>>> ------------------------------------------------------------------------------
>>> Learn Graph Databases - Download FREE O'Reilly Book
>>> "Graph Databases" is the definitive new guide to graph databases and their
>>> applications. Written by three acclaimed leaders in the field,
>>> this first edition is now available. Download your free book today!
>>> http://p.sf.net/sfu/NeoTech
>>> _______________________________________________
>>> Postgres-xc-general mailing list
>>> Pos...@li...
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
|
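The node-removal procedure discussed above, sketched as the SQL you would issue on a Coordinator; `my_table` and the node names are placeholders for illustration, not objects from this thread:

```sql
-- Placeholder names: my_table, datanode1, datanode3. Run on a Coordinator.
-- For a distributed table, DELETE NODE redistributes its rows across the
-- remaining nodes; for a replicated table, the copy on the removed node
-- is simply detached.
ALTER TABLE my_table DELETE NODE (datanode1);

-- Only once no table is distributed or replicated on the node,
-- drop it from the cluster catalog:
DROP NODE datanode1;

-- To use a newly added node, attach the table to it:
ALTER TABLE my_table ADD NODE (datanode3);
```

Testing this on a small work table first, as suggested above, is cheap insurance: DELETE NODE on a large distributed table triggers a full redistribution of its rows.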
|
From: Juned K. <jkh...@gm...> - 2014-04-18 10:38:11
|
Yeah sure, I would like to test it first. You mean this presentation:
http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013 ?

On Fri, Apr 18, 2014 at 2:30 PM, 鈴木 幸市 <ko...@in...> wrote:
> No, ALTER TABLE tablename DELETE NODE (nodename);
>
> This is very similar to what you should do to add the node:
>
> ALTER TABLE tablename ADD NODE (nodename);
>
> It is included in my demo story at the last PG Open presentation.
>
> If you're not sure, you can test it first on a smaller work table.
>
> Regards;
> ---
> Koichi Suzuki
>
> On 2014/04/18 16:54, Juned Khan <jkh...@gm...> wrote:
>
>> You mean first i have to modify my table structure lets consider i
>> have replicated tables so i have to remove "DISTRIBUTE by
>> REPLICATION" right ?
>>
>> And then i have to run DROP node command to remove datanode slave
>>
>> pgxc_ctl won't do it for me ?
>> i.e
>> PGXC remove datanode slave datanode1
>>
>> PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave
>>
>> And how it will copy database after adding datanode slave ?
>>
>> On Fri, Apr 18, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote:
>>> Reverse order.
>>>
>>> First, exclude the removing node from the node set your tables are distributed
>>> or replicated on. You can do this with ALTER TABLE. All the rows in the
>>> table will be redistributed to the modified set of nodes for distributed tables.
>>> In the case of replicated tables, tables on the removing node will just be
>>> detached.
>>>
>>> Then, you can issue DROP NODE before you really stop and clear the node
>>> resource.
>>> ---
>>> Koichi Suzuki
>>>
>>> On 2014/04/18 16:13, Juned Khan <jkh...@gm...> wrote:
>>>
>>> what if i just remove datanode slave and add it again ? all data will be
>>> copied to slave ?
>>> will it impact on master ?
>>>
>>> On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote:
>>>>
>>>> The impact to the master server is almost the same as in the case of
>>>> vanilla PG building a slave using pg_basebackup. It sends all the master
>>>> database file resources to the slave together with its WAL. The good thing
>>>> compared with the more primitive way of pg_start_backup() and pg_stop_backup()
>>>> is that the data are read directly from the cache, so the impact on I/O
>>>> workload will be smaller.
>>>>
>>>> If you're concerned about the safety of the master resource, there could be
>>>> another means, for example, to stop the master, copy everything to the slave,
>>>> change the master to enable WAL shipping and the slave to run as the slave.
>>>> In principle, this should work and you can keep the master resource safer,
>>>> but I've not tested this yet.
>>>>
>>>> Regards;
>>>> ---
>>>> Koichi Suzuki
>>>>
>>>> On 2014/04/17 20:23, Juned Khan <jkh...@gm...> wrote:
>>>>
>>>> Hi koichi,
>>>>
>>>> Is there any other short solution to fix this issue ?
>>>>
>>>> 1. Run checkpoint and vacuum full at the master,
>>>> Found the docs to perform vacuum full but i have confusion about how
>>>> to run checkpoint manually.
>>>>
>>>> 2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides
>>>> this means).
>>>> should i run this command from datanode slave server something like (
>>>> pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17
>>>> --port=5432)
>>>>
>>>> And very important thing how it will impact on datanode master server ? as
>>>> of now i have only this master running on server.
>>>> so as of now i just don't want to take a chance, and somehow want to start
>>>> slave for backup.
>>>>
>>>> Please advice
>>>>
>>>> Regards
>>>> Juned Khan
>>>>
>>>> On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote:
>>>>>
>>>>> so i have to do experiment for this. i really need that much connection.
>>>>>
>>>>> Thanks for the suggestion @Michael
>>>>>
>>>>> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier
>>>>> <mic...@gm...> wrote:
>>>>>>
>>>>>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
>>>>>>> And i can use pgpool and pgbouncer with pgxc right ?
>>>>>> In front of the Coordinators, that's fine. But I am not sure in front
>>>>>> of the Datanodes, as XC has one extra connection parameter to identify
>>>>>> the node type a connection is from, and a couple of
>>>>>> additional message types to pass down transaction ID, timestamp and
>>>>>> snapshot data from Coordinator to Datanodes (actually Coordinators as
>>>>>> well for DDL queries). If those message types and/or connection
>>>>>> parameters get filtered by pgpool or pgbouncer, well, you cannot use
>>>>>> them. I've personally never given it a try, but the idea is worth
>>>>>> an attempt to reduce lock contention that could be caused by a too
>>>>>> high value of max_connections.
>>>>>> --
>>>>>> Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
|
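The two-step recovery suggested earlier (checkpoint and vacuum on the master, then rebuild the slave from scratch) might look like the sketch below. It assumes a running cluster; the host, port, and data directory are the illustrative values from the thread, not verified settings:

```shell
# 1. On the datanode master: force a checkpoint (this is how CHECKPOINT
#    is "run manually" -- it is just a SQL command) and reclaim dead space.
psql -h 192.168.1.17 -p 5432 -U postgres -d postgres -c "CHECKPOINT;"
psql -h 192.168.1.17 -p 5432 -U postgres -d postgres -c "VACUUM FULL;"

# 2. From the slave host: rebuild the standby from a fresh base backup.
#    -R writes a recovery.conf so the node starts as a standby; the
#    postgres user needs a replication entry in the master's pg_hba.conf.
pg_basebackup -U postgres -R -D /srv/pgsql/standby \
    --host=192.168.1.17 --port=5432
```

After the base backup completes, starting the slave should no longer trip the max_connections mismatch, since its control data now comes from the rebuilt copy of the master.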
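On the pooling side-thread: pgbouncer in front of a Coordinator (not a Datanode, per Michael's caveat about XC-specific connection parameters and message types) could be configured along these lines. All names, addresses, ports, and paths here are invented for illustration:

```ini
; Hypothetical pgbouncer.ini fronting a single Coordinator.
[databases]
mydb = host=192.168.1.17 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 1000
default_pool_size = 100
```

With session pooling, 1000 application clients share 100 backend connections to the Coordinator, so max_connections on the Coordinator can stay well below 1000. Session mode is the conservative choice, since transaction-level pooling drops session state such as prepared statements.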