Archive message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2010 |     |     |     | 10  | 17  | 3   |     |     | 8   | 18  | 51  | 74  |
| 2011 | 47  | 44  | 44  | 102 | 35  | 25  | 56  | 69  | 32  | 37  | 31  | 16  |
| 2012 | 34  | 127 | 218 | 252 | 80  | 137 | 205 | 159 | 35  | 50  | 82  | 52  |
| 2013 | 107 | 159 | 118 | 163 | 151 | 89  | 106 | 177 | 49  | 63  | 46  | 7   |
| 2014 | 65  | 128 | 40  | 11  | 4   | 8   | 16  | 11  | 4   | 1   | 5   | 16  |
| 2015 | 5   |     | 2   | 5   | 4   | 12  |     |     |     |     |     | 4   |
| 2019 |     |     |     |     |     |     | 2   |     |     |     |     |     |
From: Michael M. <me...@po...> - 2013-04-08 09:58:56
Hi, I take it you're all aware of the security update PostgreSQL did last week. I'm wondering why there is no security update on Postgres-XC so far. A quick glance suggests we're vulnerable, too, or are we not? At least the patch applies with some manual work. Michael -- Michael Meskes Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org Jabber: michael.meskes at gmail dot com VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL |
From: Koichi S. <koi...@gm...> - 2013-04-08 07:36:25
Thanks Michael for the good advice. I have now recovered our master branch to the state before the 9.2.3 merge. The merge work itself is now in the 923merge branch, which will be used to merge with 9.2.3, as well as 9.2.4, after REL1_1_STABLE is created. Github was recovered as well (in fact, fortunately the 9.2.3 merge had not been pushed to github yet). If any local branch pulled the 9.2.3 merge work, please reset it with git reset --hard to make your local master branch consistent. I will write up how to recover on our Wiki site. Warmest Regards; ---------- Koichi Suzuki 2013/4/5 Michael Paquier <mic...@gm...> > > > > On Fri, Apr 5, 2013 at 2:26 PM, Pavan Deolasee <pav...@gm...> wrote: > >> >> >> >> On Thu, Apr 4, 2013 at 7:13 PM, Michael Paquier < >> mic...@gm...> wrote: >> >>> OK guys you just put the XC master out-of-sync with PG master: >>> >>> http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=52a8aea4290851e5d40c3bb4e3237ad8aeceaf68 >>> >>> On Thu, Apr 4, 2013 at 7:01 PM, Ashutosh Bapat < >>> ash...@en...> wrote: >>> >>>> >>>> >>>> >>>> On Thu, Apr 4, 2013 at 3:20 PM, Ahsan Hadi <ahs...@en... >>>> > wrote: >>>> >>>>> Hi Pavan, >>>>> >>>>> Thanks for raising this. Just to make sure i understand the problem, >>>>> the next release of postgres-xc will be 1.1. The 1.1 release will be based >>>>> on PG 9.2, >>>>> >>>> >>>> and that we should merge from master branch of PostgreSQL upto the >>>> point from where REL_9_2 is cut. >>>> >>> Correcting you here, you will have to merge master branch up to a commit >>> which is the intersection of master and REL9_3_STABLE, the intersection >>> commit determined by: >>> git merge-base master REL9_3_STABLE. >>> >> >> I am sure you mean REL9_2_STABLE because thats the branch we are >> interested in. >> > Oh OK I missed the point. What is aimed here is the stable branch for 1.1. > In this case yes, it is REL9_2_STABLE. > I thought about merging XC-master with future PG-9.3 stable. > > >> >> >>> . >>> >>> Resolving it is possible of course, simply delete the existing master >>> branch and recreate it down to the commit before the merge. >>> >> >> That's not a clean way and I am not sure how it would impact the users >> who are already tracking the current master branch. Somebody need to study >> and experiment carefully before doing more damage. One way I have seen by >> reading docs is to use "git revert -m 1 <merge commit id>". This indeed >> would revert the merge commit, but unfortunately will keep the history >> around. Also, this would cause problems when next time we try to merge the >> REL9_2_STABLE branch to the corresponding XC stable branch. >> > I still vote for cleaning up history and rebasing the master branch. I > recall that you did it once in the past when master was synced with PG-8.4 > stable. > -- > Michael > > > ------------------------------------------------------------------------------ > Minimize network downtime and maximize team effectiveness. > Reduce network management and security costs.Learn how to hire > the most talented Cisco Certified professionals. Visit the > Employer Resources Portal > http://www.cisco.com/web/learning/employer_resources/index.html > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >
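Koichi asks anyone whose local master pulled the 9.2.3 merge to reset it with git reset --hard. A minimal sketch of that recovery, assuming the shared repository is configured as the remote named origin (the remote name is an assumption):

    # fetch the recovered history from the shared repository
    git fetch origin
    git checkout master
    # discard the local 9.2.3 merge commits and match the recovered master
    git reset --hard origin/master

    # the merge-base check discussed in the thread above: find the commit
    # where PostgreSQL's master and REL9_2_STABLE diverge
    git merge-base master REL9_2_STABLE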
From: Abbas B. <abb...@en...> - 2013-04-08 06:24:00
Thanks. I will commit it later today. On Mon, Apr 8, 2013 at 9:52 AM, Amit Khandekar < ami...@en...> wrote: > Hi Abbas, > > The patch looks good to go. > > -Amit > > > On 6 April 2013 01:02, Abbas Butt <abb...@en...> wrote: > >> Hi, >> >> Consider this test case when run on a single coordinator cluster. >> >> From one session acquire a lock >> >> edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres >> psql (PGXC 1.1devel, based on PG 9.2beta2) >> Type "help" for help. >> >> postgres=# select pg_try_advisory_lock(1234,5678); >> pg_try_advisory_lock >> ---------------------- >> t >> (1 row) >> >> >> and from another terminal try to acquire the same lock >> >> edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres >> psql (PGXC 1.1devel, based on PG 9.2beta2) >> Type "help" for help. >> >> postgres=# select pg_try_advisory_lock(1234,5678); >> pg_try_advisory_lock >> ---------------------- >> t >> (1 row) >> >> Note that the second request succeeds where as the lock is already held >> by the first session. >> >> The problem is that pgxc_advisory_lock neglects the return of LockAcquire >> function in case of single coordinator. >> The attached patch corrects the problem. >> >> Comments are welcome. >> >> >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> >> ------------------------------------------------------------------------------ >> Minimize network downtime and maximize team effectiveness. >> Reduce network management and security costs.Learn how to hire >> the most talented Cisco Certified professionals. Visit the >> Employer Resources Portal >> http://www.cisco.com/web/learning/employer_resources/index.html >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Amit K. <ami...@en...> - 2013-04-08 04:53:37
Hi Abbas, The patch looks good to go. -Amit On 6 April 2013 01:02, Abbas Butt <abb...@en...> wrote: > Hi, > > Consider this test case when run on a single coordinator cluster. > > From one session acquire a lock > > edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres > psql (PGXC 1.1devel, based on PG 9.2beta2) > Type "help" for help. > > postgres=# select pg_try_advisory_lock(1234,5678); > pg_try_advisory_lock > ---------------------- > t > (1 row) > > > and from another terminal try to acquire the same lock > > edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres > psql (PGXC 1.1devel, based on PG 9.2beta2) > Type "help" for help. > > postgres=# select pg_try_advisory_lock(1234,5678); > pg_try_advisory_lock > ---------------------- > t > (1 row) > > Note that the second request succeeds where as the lock is already held by > the first session. > > The problem is that pgxc_advisory_lock neglects the return of LockAcquire > function in case of single coordinator. > The attached patch corrects the problem. > > Comments are welcome. > > > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > > ------------------------------------------------------------------------------ > Minimize network downtime and maximize team effectiveness. > Reduce network management and security costs.Learn how to hire > the most talented Cisco Certified professionals. Visit the > Employer Resources Portal > http://www.cisco.com/web/learning/employer_resources/index.html > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
From: Koichi S. <koi...@gm...> - 2013-04-07 14:56:43
Thanks. It looks much better. Regards; ---------- Koichi Suzuki 2013/4/6 Abbas Butt <abb...@en...> > Hi, > > Consider this test case when run on a single coordinator cluster. > > From one session acquire a lock > > edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres > psql (PGXC 1.1devel, based on PG 9.2beta2) > Type "help" for help. > > postgres=# select pg_try_advisory_lock(1234,5678); > pg_try_advisory_lock > ---------------------- > t > (1 row) > > > and from another terminal try to acquire the same lock > > edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres > psql (PGXC 1.1devel, based on PG 9.2beta2) > Type "help" for help. > > postgres=# select pg_try_advisory_lock(1234,5678); > pg_try_advisory_lock > ---------------------- > t > (1 row) > > Note that the second request succeeds where as the lock is already held by > the first session. > > The problem is that pgxc_advisory_lock neglects the return of LockAcquire > function in case of single coordinator. > The attached patch corrects the problem. > > Comments are welcome. > > > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > > ------------------------------------------------------------------------------ > Minimize network downtime and maximize team effectiveness. > Reduce network management and security costs.Learn how to hire > the most talented Cisco Certified professionals. Visit the > Employer Resources Portal > http://www.cisco.com/web/learning/employer_resources/index.html > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
From: Abbas B. <abb...@en...> - 2013-04-05 19:24:52
On Fri, Apr 5, 2013 at 11:05 AM, Amit Khandekar < ami...@en...> wrote: > > > > On 1 April 2013 14:23, Abbas Butt <abb...@en...> wrote: > >> >> >> On Mon, Apr 1, 2013 at 11:02 AM, Amit Khandekar < >> ami...@en...> wrote: >> >>> >>> >>> >>> On 31 March 2013 14:07, Abbas Butt <abb...@en...> wrote: >>> >>>> Hi, >>>> Attached please find the revised patch for restore mode. This patch has >>>> to be applied on top of the patches I sent earlier for >>>> 3608377, >>>> 3608376 & >>>> 3608375. >>>> >>>> I have also attached some scripts and a C file useful for testing the >>>> whole procedure. It is a database that has many objects in it. >>>> >>>> Here are the revised instructions for adding new nodes to the cluster. >>>> >>>> ====================================== >>>> >>>> Here are the steps to add a new coordinator >>>> >>>> 1) Initdb new coordinator >>>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 --nodename >>>> coord_3 >>>> >>>> 2) Make necessary changes in its postgresql.conf, in particular >>>> specify new coordinator name and pooler port >>>> >>>> 3) Connect to any of the existing coordinators & lock the cluster for >>>> backup, do not close this session >>>> ./psql postgres -p 5432 >>>> select pgxc_lock_for_backup(); >>>> >>>> 4) Connect to any of the existing coordinators and take backup of the >>>> database >>>> ./pg_dumpall -p 5432 -s --include-nodes --dump-nodes >>>> --file=/home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1100_all_objects_coord.sql >>>> >>>> 5) Start the new coordinator specify --restoremode while starting the >>>> coordinator >>>> ./postgres --restoremode -D ../data_cord3 -p 5455 >>>> >>>> 6) Create the new database on the new coordinator - optional >>>> ./createdb test -p 5455 >>>> >>>> 7) Restore the backup that was taken from an existing coordinator by >>>> connecting to the new coordinator directly >>>> ./psql -d test -f >>>> /home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1100_all_objects_coord.sql >>>> -p 5455 >>>> >>>> 8) Quit the new coordinator >>>> >>>> 9) Start the new coordinator as a by specifying --coordinator >>>> ./postgres --coordinator -D ../data_cord3 -p 5455 >>>> >>>> 10) Create the new coordinator on rest of the coordinators and reload >>>> configuration >>>> CREATE NODE COORD_3 WITH (HOST = 'localhost', type = >>>> 'coordinator', PORT = 5455); >>>> SELECT pgxc_pool_reload(); >>>> >>>> 11) Quit the session of step 3, this will unlock the cluster >>>> >>>> 12) The new coordinator is now ready >>>> ./psql test -p 5455 >>>> create table test_new_coord(a int, b int); >>>> \q >>>> ./psql test -p 5432 >>>> select * from test_new_coord; >>>> >>>> *======================================* >>>> *======================================* >>>> >>>> Here are the steps to add a new datanode >>>> >>>> >>>> 1) Initdb new datanode >>>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data3 --nodename >>>> data_node_3 >>>> >>>> 2) Make necessary changes in its postgresql.conf, in particular >>>> specify new datanode name >>>> >>>> 3) Connect to any of the existing coordinators & lock the cluster for >>>> backup, do not close this session >>>> ./psql postgres -p 5432 >>>> select pgxc_lock_for_backup(); >>>> >>>> 4) Connect to any of the existing datanodes and take backup of the >>>> database >>>> ./pg_dumpall -p 15432 -s --include-nodes >>>> --file=/home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1122_all_objects_dn1.sql >>>> >>> >>> >>> Why do we need --include-nodes on datanode ? 
>>> >> >> Agreed, this option should not be used. >> >> >>> >>> ---- >>> >>> >>> + * The dump taken from a datanode does NOT contain any >>> DISTRIBUTE BY >>> + * clause. This fact is used here to make sure that when the >>> + * DISTRIBUTE BY clause is missing in the statemnet the system >>> + * should not try to find out the node list itself. >>> + */ >>> + if ((IS_PGXC_COORDINATOR || (isRestoreMode && stmt->distributeby != >>> NULL)) >>> + && relkind == RELKIND_RELATION) >>> >>> How do we enforce not having DISTRIBUTE BY clause in the pg_dump output >>> if it's a datanode ? >>> >> >> We do not have to enforce it, since the pgxc_class catalog table has no >> information in it on datanodes, hence dump will not contain any DISTRIBUTE >> BY clause. >> >> >>> Also, can we just error out in restore mode if the DISTRIBUTE BY clause >>> is present ? >>> >> >> No we cannot error out, because while adding a coordinator DISTRIBUTE BY >> clause will be present, and since we have started the server by using >> --restoremode in place of --datanode or --coordinator we do not know >> whether the user is adding a new datanode or a new coordinator. >> > > Understood. > > >> >> >>> >>> ----- >>> >>> >>>> 5) Start the new datanode specify --restoremode while starting the it >>>> ./postgres --restoremode -D ../data3 -p 35432 >>>> >>> >>> >>> It seems you have disabled use of GTM in restore mode. >>> >> >> I did not. >> >> >>> For e.g. in GetNewTransactionId(), we get a global tansaction id only >>> if it's a coordinator or if IsPGXCNodeXactDatanodeDirect() is true. But >>> IsPGXCNodeXactDatanodeDirect() will now return false in restore mode. >>> >> >> No, I have not changed the function IsPGXCNodeXactDatanodeDirect, it >> would behave exactly as it used to. I changed the >> function IsPGXCNodeXactReadOnly. >> > > > Oh ok. I did not correctly see the details. Agreed now. > >> >> >> >>> Is there any specific reason for disabling use of GTM in restore mode ? >>> >> >> No reason. GTM should be used. >> >> >>> I don't see any harm in using GTM. In fact, it is better to start >>> using global xids as soon as possible. >>> >> >> Exactly. I just verified that the statement >> xid = (TransactionId) BeginTranGTM(timestamp) >> in function GetNewTransactionId is called in restore mode. >> > > Got it now. > > > I have no more comments. Please keep all the steps in some central > location so that everybody can access it. > Attached please find patch for documentation. It adds a new chapter (# 30) in Server Administration section, called Adding a New Node. This chapter has two sub sections 30.1 Adding a new coordinator and 30.2 Adding a new datanode. Each subsection lists all the steps to add the new node. 
> > >> >>> >>> >>>> >>>> 6) Restore the backup that was taken from an existing datanode by >>>> connecting to the new datanode directly >>>> ./psql -d postgres -f >>>> /home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1122_all_objects_dn1.sql >>>> -p 35432 >>>> >>>> 7) Quit the new datanode >>>> >>>> 8) Start the new datanode as a datanode by specifying --datanode >>>> ./postgres --datanode -D ../data3 -p 35432 >>>> >>>> 9) Create the new datanode on all the coordinators and reload >>>> configuration >>>> CREATE NODE DATA_NODE_3 WITH (HOST = 'localhost', type = >>>> 'datanode', PORT = 35432); >>>> SELECT pgxc_pool_reload(); >>>> >>>> 10) Quit the session of step 3, this will unlock the cluster >>>> >>>> 11) Redistribute data by using ALTER TABLE REDISTRIBUTE >>>> >>>> 12) The new daatnode is now ready >>>> ./psql test >>>> create table test_new_dn(a int, b int) distribute by replication; >>>> insert into test_new_dn values(1,2); >>>> EXECUTE DIRECT ON (data_node_1) 'SELECT * from test_new_dn'; >>>> EXECUTE DIRECT ON (data_node_2) 'SELECT * from test_new_dn'; >>>> EXECUTE DIRECT ON (data_node_3) 'SELECT * from test_new_dn'; >>>> >>>> ====================================== >>>> >>>> On Wed, Mar 27, 2013 at 5:02 PM, Abbas Butt < >>>> abb...@en...> wrote: >>>> >>>>> Feature ID 3608379 >>>>> >>>>> On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar < >>>>> ami...@en...> wrote: >>>>> >>>>>> On 1 March 2013 01:30, Abbas Butt <abb...@en...> >>>>>> wrote: >>>>>> > >>>>>> > >>>>>> > On Thu, Feb 28, 2013 at 12:44 PM, Amit Khandekar >>>>>> > <ami...@en...> wrote: >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >> On 28 February 2013 10:23, Abbas Butt <abb...@en...> >>>>>> wrote: >>>>>> >>> >>>>>> >>> Hi All, >>>>>> >>> >>>>>> >>> Attached please find a patch that provides a new command line >>>>>> argument >>>>>> >>> for postgres called --restoremode. >>>>>> >>> >>>>>> >>> While adding a new node to the cluster we need to restore the >>>>>> schema of >>>>>> >>> existing database to the new node. >>>>>> >>> If the new node is a datanode and we connect directly to it, it >>>>>> does not >>>>>> >>> allow DDL, because it is in read only mode & >>>>>> >>> If the new node is a coordinator, it will send DDLs to all the >>>>>> other >>>>>> >>> coordinators which we do not want it to do. >>>>>> >> >>>>>> >> >>>>>> >> What if we allow writes in standalone mode, so that we would >>>>>> initialize >>>>>> >> the new node using standalone mode instead of --restoremode ? >>>>>> > >>>>>> > >>>>>> > Please take a look at the patch, I am using --restoremode in place >>>>>> of >>>>>> > --coordinator & --datanode. I am not sure how would stand alone >>>>>> mode fit in >>>>>> > here. >>>>>> >>>>>> I was trying to see if we can avoid adding a new mode, instead, use >>>>>> standalone mode for all the purposes for which restoremode is used. >>>>>> Actually I checked the documentation, it says this mode is used only >>>>>> for debugging or recovery purposes, so now I myself am a bit hesitent >>>>>> about this mode for the purpose of restoring. >>>>>> >>>>>> > >>>>>> >> >>>>>> >> >>>>>> >>> >>>>>> >>> To provide ability to restore on the new node a new command line >>>>>> argument >>>>>> >>> is provided. >>>>>> >>> It is to be provided in place of --coordinator OR --datanode. >>>>>> >>> In restore mode both coordinator and datanode are internally >>>>>> treated as a >>>>>> >>> datanode. >>>>>> >>> For more details see patch comments. >>>>>> >>> >>>>>> >>> After this patch one can add a new node to the cluster. 
>>>>>> >>> >>>>>> >>> Here are the steps to add a new coordinator >>>>>> >>> >>>>>> >>> >>>>>> >>> 1) Initdb new coordinator >>>>>> >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 >>>>>> >>> --nodename coord_3 >>>>>> >>> >>>>>> >>> 2) Make necessary changes in its postgresql.conf, in particular >>>>>> specify >>>>>> >>> new coordinator name and pooler port >>>>>> >>> >>>>>> >>> 3) Connect to any of the existing coordinators & lock the >>>>>> cluster for >>>>>> >>> backup >>>>>> >>> ./psql postgres -p 5432 >>>>>> >>> SET xc_lock_for_backup=yes; >>>>>> >>> \q >>>>>> >> >>>>>> >> >>>>>> >> I haven't given a thought on the earlier patch you sent for >>>>>> cluster lock >>>>>> >> implementation; may be we can discuss this on that thread, but >>>>>> just a quick >>>>>> >> question: >>>>>> >> >>>>>> >> Does the cluster-lock command wait for the ongoing DDL commands to >>>>>> finish >>>>>> >> ? If not, we have problems. The subsequent pg_dump would not >>>>>> contain objects >>>>>> >> created by these particular DDLs. >>>>>> > >>>>>> > >>>>>> > Suppose you have a two coordinator cluster. Assume one client >>>>>> connected to >>>>>> > each. Suppose one client issues a lock cluster command and the >>>>>> other issues >>>>>> > a DDL. Is this what you mean by an ongoing DDL? If true then answer >>>>>> to your >>>>>> > question is Yes. >>>>>> > >>>>>> > Suppose you have a prepared transaction that has a DDL in it, again >>>>>> if this >>>>>> > can be considered an on going DDL, then again answer to your >>>>>> question is >>>>>> > Yes. >>>>>> > >>>>>> > Suppose you have a two coordinator cluster. Assume one client >>>>>> connected to >>>>>> > each. One client starts a transaction and issues a DDL, the second >>>>>> client >>>>>> > issues a lock cluster command, the first commits the transaction. >>>>>> If this is >>>>>> > an ongoing DDL, then the answer to your question is No. But its a >>>>>> matter of >>>>>> > deciding which camp are we going to put COMMIT in, the allow camp, >>>>>> or the >>>>>> > deny camp. I decided to put it in allow camp, because I have not >>>>>> yet written >>>>>> > any code to detect whether a transaction being committed has a DDL >>>>>> in it or >>>>>> > not, and stopping all transactions from committing looks too >>>>>> restrictive to >>>>>> > me. >>>>>> > >>>>>> > Do you have some other meaning of an ongoing DDL? >>>>>> > >>>>>> > I agree that we should have discussed this on the right thread. Lets >>>>>> > continue this discussion on that thread. >>>>>> >>>>>> Continued on the other thread. 
>>>>>> >>>>>> > >>>>>> >> >>>>>> >> >>>>>> >>> >>>>>> >>> >>>>>> >>> 4) Connect to any of the existing coordinators and take backup >>>>>> of the >>>>>> >>> database >>>>>> >>> ./pg_dump -p 5432 -C -s >>>>>> >>> >>>>>> --file=/home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql test >>>>>> >>> >>>>>> >>> 5) Start the new coordinator specify --restoremode while >>>>>> starting the >>>>>> >>> coordinator >>>>>> >>> ./postgres --restoremode -D ../data_cord3 -p 5455 >>>>>> >>> >>>>>> >>> 6) connect to the new coordinator directly >>>>>> >>> ./psql postgres -p 5455 >>>>>> >>> >>>>>> >>> 7) create all the datanodes and the rest of the coordinators on >>>>>> the new >>>>>> >>> coordiantor & reload configuration >>>>>> >>> CREATE NODE DATA_NODE_1 WITH (HOST = 'localhost', type = >>>>>> >>> 'datanode', PORT = 15432, PRIMARY); >>>>>> >>> CREATE NODE DATA_NODE_2 WITH (HOST = 'localhost', type = >>>>>> >>> 'datanode', PORT = 25432); >>>>>> >>> >>>>>> >>> CREATE NODE COORD_1 WITH (HOST = 'localhost', type = >>>>>> >>> 'coordinator', PORT = 5432); >>>>>> >>> CREATE NODE COORD_2 WITH (HOST = 'localhost', type = >>>>>> >>> 'coordinator', PORT = 5433); >>>>>> >>> >>>>>> >>> SELECT pgxc_pool_reload(); >>>>>> >>> >>>>>> >>> 8) quit psql >>>>>> >>> >>>>>> >>> 9) Create the new database on the new coordinator >>>>>> >>> ./createdb test -p 5455 >>>>>> >>> >>>>>> >>> 10) create the roles and table spaces manually, the dump does not >>>>>> contain >>>>>> >>> roles or table spaces >>>>>> >>> ./psql test -p 5455 >>>>>> >>> CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE; >>>>>> >>> CREATE TABLESPACE my_space LOCATION >>>>>> >>> '/usr/local/pgsql/my_space_location'; >>>>>> >>> \q >>>>>> >>> >>>>>> >> >>>>>> >> Will pg_dumpall help ? It dumps roles also. >>>>>> > >>>>>> > >>>>>> > Yah , but I am giving example of pg_dump so this step has to be >>>>>> there. >>>>>> > >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >>> >>>>>> >>> 11) Restore the backup that was taken from an existing >>>>>> coordinator by >>>>>> >>> connecting to the new coordinator directly >>>>>> >>> ./psql -d test -f >>>>>> >>> /home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql -p >>>>>> 5455 >>>>>> >>> >>>>>> >>> 11) Quit the new coordinator >>>>>> >>> >>>>>> >>> 12) Connect to any of the existing coordinators & unlock the >>>>>> cluster >>>>>> >>> ./psql postgres -p 5432 >>>>>> >>> SET xc_lock_for_backup=no; >>>>>> >>> \q >>>>>> >>> >>>>>> >> >>>>>> >> Unlocking the cluster has to be done *after* the node is added >>>>>> into the >>>>>> >> cluster. >>>>>> > >>>>>> > >>>>>> > Very true. I stand corrected. This means CREATE NODE has to be >>>>>> allowed when >>>>>> > xc_lock_for_backup is set. 
>>>>>> > >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >>> >>>>>> >>> 13) Start the new coordinator as a by specifying --coordinator >>>>>> >>> ./postgres --coordinator -D ../data_cord3 -p 5455 >>>>>> >>> >>>>>> >>> 14) Create the new coordinator on rest of the coordinators and >>>>>> reload >>>>>> >>> configuration >>>>>> >>> CREATE NODE COORD_3 WITH (HOST = 'localhost', type = >>>>>> >>> 'coordinator', PORT = 5455); >>>>>> >>> SELECT pgxc_pool_reload(); >>>>>> >>> >>>>>> >>> 15) The new coordinator is now ready >>>>>> >>> ./psql test -p 5455 >>>>>> >>> create table test_new_coord(a int, b int); >>>>>> >>> \q >>>>>> >>> ./psql test -p 5432 >>>>>> >>> select * from test_new_coord; >>>>>> >>> >>>>>> >>> >>>>>> >>> Here are the steps to add a new datanode >>>>>> >>> >>>>>> >>> >>>>>> >>> 1) Initdb new datanode >>>>>> >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data3 >>>>>> --nodename >>>>>> >>> data_node_3 >>>>>> >>> >>>>>> >>> 2) Make necessary changes in its postgresql.conf, in particular >>>>>> specify >>>>>> >>> new datanode name >>>>>> >>> >>>>>> >>> 3) Connect to any of the existing coordinators & lock the >>>>>> cluster for >>>>>> >>> backup >>>>>> >>> ./psql postgres -p 5432 >>>>>> >>> SET xc_lock_for_backup=yes; >>>>>> >>> \q >>>>>> >>> >>>>>> >>> 4) Connect to any of the existing datanodes and take backup of >>>>>> the >>>>>> >>> database >>>>>> >>> ./pg_dump -p 15432 -C -s >>>>>> >>> >>>>>> --file=/home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql test >>>>>> >>> >>>>>> >>> 5) Start the new datanode specify --restoremode while starting >>>>>> the it >>>>>> >>> ./postgres --restoremode -D ../data3 -p 35432 >>>>>> >>> >>>>>> >>> 6) Create the new database on the new datanode >>>>>> >>> ./createdb test -p 35432 >>>>>> >>> >>>>>> >>> 7) create the roles and table spaces manually, the dump does not >>>>>> contain >>>>>> >>> roles or table spaces >>>>>> >>> ./psql test -p 35432 >>>>>> >>> CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE; >>>>>> >>> CREATE TABLESPACE my_space LOCATION >>>>>> >>> '/usr/local/pgsql/my_space_location'; >>>>>> >>> \q >>>>>> >>> >>>>>> >>> 8) Restore the backup that was taken from an existing datanode by >>>>>> >>> connecting to the new datanode directly >>>>>> >>> ./psql -d test -f >>>>>> >>> /home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql -p >>>>>> 35432 >>>>>> >>> >>>>>> >>> 9) Quit the new datanode >>>>>> >>> >>>>>> >>> 10) Connect to any of the existing coordinators & unlock the >>>>>> cluster >>>>>> >>> ./psql postgres -p 5432 >>>>>> >>> SET xc_lock_for_backup=no; >>>>>> >>> \q >>>>>> >>> >>>>>> >>> 11) Start the new datanode as a datanode by specifying --datanode >>>>>> >>> ./postgres --datanode -D ../data3 -p 35432 >>>>>> >>> >>>>>> >>> 12) Create the new datanode on all the coordinators and reload >>>>>> >>> configuration >>>>>> >>> CREATE NODE DATA_NODE_3 WITH (HOST = 'localhost', type = >>>>>> >>> 'datanode', PORT = 35432); >>>>>> >>> SELECT pgxc_pool_reload(); >>>>>> >>> >>>>>> >>> 13) Redistribute data by using ALTER TABLE REDISTRIBUTE >>>>>> >>> >>>>>> >>> 14) The new daatnode is now ready >>>>>> >>> ./psql test >>>>>> >>> create table test_new_dn(a int, b int) distribute by >>>>>> replication; >>>>>> >>> insert into test_new_dn values(1,2); >>>>>> >>> EXECUTE DIRECT ON (data_node_1) 'SELECT * from >>>>>> test_new_dn'; >>>>>> >>> EXECUTE DIRECT ON (data_node_2) 'SELECT * from >>>>>> test_new_dn'; >>>>>> >>> EXECUTE DIRECT ON (data_node_3) 'SELECT * from >>>>>> test_new_dn'; >>>>>> >>> >>>>>> >>> Please note that the steps 
assume that the patch sent earlier >>>>>> >>> 1_lock_cluster.patch in mail subject [Patch to lock cluster] is >>>>>> applied. >>>>>> >>> >>>>>> >>> I have also attached test database scripts, that would help in >>>>>> patch >>>>>> >>> review. >>>>>> >>> >>>>>> >>> Comments are welcome. >>>>>> >>> >>>>>> >>> -- >>>>>> >>> Abbas >>>>>> >>> Architect >>>>>> >>> EnterpriseDB Corporation >>>>>> >>> The Enterprise PostgreSQL Company >>>>>> >>> >>>>>> >>> Phone: 92-334-5100153 >>>>>> >>> >>>>>> >>> Website: www.enterprisedb.com >>>>>> >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>>> >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>>> >>> >>>>>> >>> This e-mail message (and any attachment) is intended for the use >>>>>> of >>>>>> >>> the individual or entity to whom it is addressed. This message >>>>>> >>> contains information from EnterpriseDB Corporation that may be >>>>>> >>> privileged, confidential, or exempt from disclosure under >>>>>> applicable >>>>>> >>> law. If you are not the intended recipient or authorized to >>>>>> receive >>>>>> >>> this for the intended recipient, any use, dissemination, >>>>>> distribution, >>>>>> >>> retention, archiving, or copying of this communication is strictly >>>>>> >>> prohibited. If you have received this e-mail in error, please >>>>>> notify >>>>>> >>> the sender immediately by reply e-mail and delete this message. >>>>>> >>> >>>>>> >>> >>>>>> ------------------------------------------------------------------------------ >>>>>> >>> Everyone hates slow websites. So do we. >>>>>> >>> Make your web apps faster with AppDynamics >>>>>> >>> Download AppDynamics Lite for free today: >>>>>> >>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>> >>> _______________________________________________ >>>>>> >>> Postgres-xc-developers mailing list >>>>>> >>> Pos...@li... >>>>>> >>> >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> >>> >>>>>> >> >>>>>> > >>>>>> > >>>>>> > >>>>>> > -- >>>>>> > -- >>>>>> > Abbas >>>>>> > Architect >>>>>> > EnterpriseDB Corporation >>>>>> > The Enterprise PostgreSQL Company >>>>>> > >>>>>> > Phone: 92-334-5100153 >>>>>> > >>>>>> > Website: www.enterprisedb.com >>>>>> > EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>>> > Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>>> > >>>>>> > This e-mail message (and any attachment) is intended for the use of >>>>>> > the individual or entity to whom it is addressed. This message >>>>>> > contains information from EnterpriseDB Corporation that may be >>>>>> > privileged, confidential, or exempt from disclosure under applicable >>>>>> > law. If you are not the intended recipient or authorized to receive >>>>>> > this for the intended recipient, any use, dissemination, >>>>>> distribution, >>>>>> > retention, archiving, or copying of this communication is strictly >>>>>> > prohibited. If you have received this e-mail in error, please notify >>>>>> > the sender immediately by reply e-mail and delete this message. >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Abbas >>>>> Architect >>>>> EnterpriseDB Corporation >>>>> The Enterprise PostgreSQL Company >>>>> >>>>> Phone: 92-334-5100153 >>>>> >>>>> Website: www.enterprisedb.com >>>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>> >>>>> This e-mail message (and any attachment) is intended for the use of >>>>> the individual or entity to whom it is addressed. 
This message >>>>> contains information from EnterpriseDB Corporation that may be >>>>> privileged, confidential, or exempt from disclosure under applicable >>>>> law. If you are not the intended recipient or authorized to receive >>>>> this for the intended recipient, any use, dissemination, distribution, >>>>> retention, archiving, or copying of this communication is strictly >>>>> prohibited. If you have received this e-mail in error, please notify >>>>> the sender immediately by reply e-mail and delete this message. >>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Abbas >>>> Architect >>>> EnterpriseDB Corporation >>>> The Enterprise PostgreSQL Company >>>> >>>> Phone: 92-334-5100153 >>>> >>>> Website: www.enterprisedb.com >>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>> >>>> This e-mail message (and any attachment) is intended for the use of >>>> the individual or entity to whom it is addressed. This message >>>> contains information from EnterpriseDB Corporation that may be >>>> privileged, confidential, or exempt from disclosure under applicable >>>> law. If you are not the intended recipient or authorized to receive >>>> this for the intended recipient, any use, dissemination, distribution, >>>> retention, archiving, or copying of this communication is strictly >>>> prohibited. If you have received this e-mail in error, please notify >>>> the sender immediately by reply e-mail and delete this message. >>>> >>> >>> >> >> >> -- >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> > > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
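One point in the steps above is easy to miss: pgxc_lock_for_backup() holds the lock only for the lifetime of the session that called it ("do not close this session" in step 3; quitting it in the final step unlocks the cluster). A condensed sketch of the session choreography, using the ports from the examples above (the dump file name is a placeholder):

    # terminal 1: lock the cluster and keep this session open
    ./psql postgres -p 5432
    postgres=# select pgxc_lock_for_backup();

    # terminal 2: take the schema-only dump while the lock is held
    ./pg_dumpall -p 5432 -s --include-nodes --dump-nodes --file=coord_schema.sql
    # ... initdb, start with --restoremode, restore the dump, restart with
    # --coordinator, CREATE NODE on all coordinators, SELECT pgxc_pool_reload() ...

    # terminal 1: quit only after the new node is created everywhere;
    # closing this session releases the cluster lock
    postgres=# \q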
From: Abbas B. <abb...@en...> - 2013-04-05 19:24:26
Hi,

Consider this test case when run on a single coordinator cluster.

From one session acquire a lock:

edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres
psql (PGXC 1.1devel, based on PG 9.2beta2)
Type "help" for help.

postgres=# select pg_try_advisory_lock(1234,5678);
 pg_try_advisory_lock
----------------------
 t
(1 row)

and from another terminal try to acquire the same lock:

edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres
psql (PGXC 1.1devel, based on PG 9.2beta2)
Type "help" for help.

postgres=# select pg_try_advisory_lock(1234,5678);
 pg_try_advisory_lock
----------------------
 t
(1 row)

Note that the second request succeeds, whereas the lock is already held by the first session. The problem is that pgxc_advisory_lock neglects the return value of the LockAcquire function in the case of a single coordinator. The attached patch corrects the problem.

Comments are welcome.

-- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message.
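For reference, once the return value of LockAcquire is honored, the second session's request should fail rather than succeed; with the patch applied, the same test case would be expected to produce:

    edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres
    psql (PGXC 1.1devel, based on PG 9.2beta2)
    Type "help" for help.

    postgres=# select pg_try_advisory_lock(1234,5678);
     pg_try_advisory_lock
    ----------------------
     f
    (1 row)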
From: Abbas B. <abb...@en...> - 2013-04-05 18:52:12
[Attachment: rendered documentation page "30.1. Adding a New Coordinator" (Postgres-XC 1.1devel Documentation, Chapter 30, Adding a New Node)]

30.1. Adding a New Coordinator

The following steps should be performed to add a new coordinator to a running cluster:

1. Initialize the new coordinator. The following example initializes a coordinator named coord_3.

       /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 --nodename coord_3

2. Make the necessary changes in postgresql.conf of the new coordinator, in particular specify the new coordinator name and pooler port.

3. Connect to any of the existing coordinators and lock the cluster for backup; do not close this session. The following example assumes a coordinator is running on port 5432. Make sure the function call returns true. The detailed description of the function pgxc_lock_for_backup can be found in Table 9-64.

       ./psql postgres -p 5432
       select pgxc_lock_for_backup();

4. Connect to any of the existing coordinators and take a backup of the database. Please note that only the schema (i.e. no data) is to be dumped. Also note the use of --include-nodes, so that CREATE TABLE contains the TO NODE clause. Similarly, --dump-nodes ensures that the dump contains the existing nodes and node groups.

       ./pg_dumpall -p 5432 -s --include-nodes --dump-nodes --file=/some/valid/path/some_file_name.sql

5. Start the new coordinator, specifying --restoremode. The following example starts the new coordinator on port 5455.

       ./postgres --restoremode -D ../data_cord3 -p 5455

6. Restore the backup (taken in step 4) by connecting to the new coordinator directly.

       ./psql -d postgres -f /some/valid/path/some_file_name.sql -p 5455

7. Quit the new coordinator.

8. Start the new coordinator, specifying --coordinator. The following example starts the new coordinator on port 5455.

       ./postgres --coordinator -D ../data_cord3 -p 5455

9. Create the new coordinator on the rest of the coordinators and reload the configuration. The following example creates coord_3, with host localhost and port 5455.

       CREATE NODE COORD_3 WITH (HOST = 'localhost', type = 'coordinator', PORT = 5455);
       SELECT pgxc_pool_reload();

10. Quit the session of step 3; this will unlock the cluster. The new coordinator is now ready.
From: Ashutosh B. <ash...@en...> - 2013-04-05 12:23:08
And BTW, we need corresponding document changes for this one. On Fri, Apr 5, 2013 at 5:48 PM, Ashutosh Bapat < ash...@en...> wrote: > Sorry, here's the updated patch. > > > On Fri, Apr 5, 2013 at 5:46 PM, Ashutosh Bapat < > ash...@en...> wrote: > >> Hi Abbas >> I reviewed your changes, they look good. I have made some minor changes. >> Please find them in attached patch. >> >> I want to test this in a scenario on adding new node. Can you please >> point me as to what steps I should follow? I don't want to lock the >> cluster, but do this >> 1. Take the dump of existing coordinator using this patch >> 2. initdb a new coordinator >> 3. boot and use this dump to update the coordinator >> >> Now with some magic this coordinator should be useful. Can you please >> provide me with the steps to do this? I will run those thus testing your >> patch. >> >> >> On Sun, Mar 31, 2013 at 12:14 AM, Abbas Butt <abb...@en... >> > wrote: >> >>> I just realized I had attached the wrong patch file with the previous >>> mail, please ignore the previous attachment and use the one attached with >>> this email for review. >>> >>> >>> On Fri, Mar 29, 2013 at 5:33 PM, Abbas Butt <abb...@en... >>> > wrote: >>> >>>> Hi, >>>> Attached please find a revised patch that provides support in >>>> pg_dumpall to dump nodes and node groups if the command line option >>>> --dump-nodes is provided. >>>> >>>> I tested and found that pg_dumpall works as expected. >>>> >>>> >>>> On Wed, Mar 27, 2013 at 5:04 PM, Abbas Butt < >>>> abb...@en...> wrote: >>>> >>>>> Feature ID 3608376 >>>>> >>>>> On Sun, Mar 10, 2013 at 7:59 PM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> Hi, >>>>>> Attached please find a patch that adds support in pg_dump to dump >>>>>> nodes and node groups. This is required while adding a new node to the >>>>>> cluster. >>>>>> >>>>>> -- >>>>>> Abbas >>>>>> Architect >>>>>> EnterpriseDB Corporation >>>>>> The Enterprise PostgreSQL Company >>>>>> >>>>>> Phone: 92-334-5100153 >>>>>> >>>>>> Website: www.enterprisedb.com >>>>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>>> >>>>>> This e-mail message (and any attachment) is intended for the use of >>>>>> the individual or entity to whom it is addressed. This message >>>>>> contains information from EnterpriseDB Corporation that may be >>>>>> privileged, confidential, or exempt from disclosure under applicable >>>>>> law. If you are not the intended recipient or authorized to receive >>>>>> this for the intended recipient, any use, dissemination, distribution, >>>>>> retention, archiving, or copying of this communication is strictly >>>>>> prohibited. If you have received this e-mail in error, please notify >>>>>> the sender immediately by reply e-mail and delete this message. >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Abbas >>>>> Architect >>>>> EnterpriseDB Corporation >>>>> The Enterprise PostgreSQL Company >>>>> >>>>> Phone: 92-334-5100153 >>>>> >>>>> Website: www.enterprisedb.com >>>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>> >>>>> This e-mail message (and any attachment) is intended for the use of >>>>> the individual or entity to whom it is addressed. This message >>>>> contains information from EnterpriseDB Corporation that may be >>>>> privileged, confidential, or exempt from disclosure under applicable >>>>> law. 
If you are not the intended recipient or authorized to receive >>>>> this for the intended recipient, any use, dissemination, distribution, >>>>> retention, archiving, or copying of this communication is strictly >>>>> prohibited. If you have received this e-mail in error, please notify >>>>> the sender immediately by reply e-mail and delete this message. >>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Abbas >>>> Architect >>>> EnterpriseDB Corporation >>>> The Enterprise PostgreSQL Company >>>> >>>> Phone: 92-334-5100153 >>>> >>>> Website: www.enterprisedb.com >>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>> >>>> This e-mail message (and any attachment) is intended for the use of >>>> the individual or entity to whom it is addressed. This message >>>> contains information from EnterpriseDB Corporation that may be >>>> privileged, confidential, or exempt from disclosure under applicable >>>> law. If you are not the intended recipient or authorized to receive >>>> this for the intended recipient, any use, dissemination, distribution, >>>> retention, archiving, or copying of this communication is strictly >>>> prohibited. If you have received this e-mail in error, please notify >>>> the sender immediately by reply e-mail and delete this message. >>>> >>> >>> >>> >>> -- >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. >>> >>> >>> ------------------------------------------------------------------------------ >>> Own the Future-Intel(R) Level Up Game Demo Contest 2013 >>> Rise to greatness in Intel's independent game demo contest. Compete >>> for recognition, cash, and the chance to get your game on Steam. >>> $5K grand prize plus 10 genre and skill prizes. Submit your demo >>> by 6/6/13. http://altfarm.mediaplex.com/ad/ck/12124-176961-30367-2 >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Ashutosh B. <ash...@en...> - 2013-04-05 12:16:49
Hi Abbas I reviewed your changes, they look good. I have made some minor changes. Please find them in attached patch. I want to test this in a scenario on adding new node. Can you please point me as to what steps I should follow? I don't want to lock the cluster, but do this 1. Take the dump of existing coordinator using this patch 2. initdb a new coordinator 3. boot and use this dump to update the coordinator Now with some magic this coordinator should be useful. Can you please provide me with the steps to do this? I will run those thus testing your patch. On Sun, Mar 31, 2013 at 12:14 AM, Abbas Butt <abb...@en...>wrote: > I just realized I had attached the wrong patch file with the previous > mail, please ignore the previous attachment and use the one attached with > this email for review. > > > On Fri, Mar 29, 2013 at 5:33 PM, Abbas Butt <abb...@en...>wrote: > >> Hi, >> Attached please find a revised patch that provides support in pg_dumpall >> to dump nodes and node groups if the command line option --dump-nodes is >> provided. >> >> I tested and found that pg_dumpall works as expected. >> >> >> On Wed, Mar 27, 2013 at 5:04 PM, Abbas Butt <abb...@en...>wrote: >> >>> Feature ID 3608376 >>> >>> On Sun, Mar 10, 2013 at 7:59 PM, Abbas Butt <abb...@en... >>> > wrote: >>> >>>> Hi, >>>> Attached please find a patch that adds support in pg_dump to dump nodes >>>> and node groups. This is required while adding a new node to the cluster. >>>> >>>> -- >>>> Abbas >>>> Architect >>>> EnterpriseDB Corporation >>>> The Enterprise PostgreSQL Company >>>> >>>> Phone: 92-334-5100153 >>>> >>>> Website: www.enterprisedb.com >>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>> >>>> This e-mail message (and any attachment) is intended for the use of >>>> the individual or entity to whom it is addressed. This message >>>> contains information from EnterpriseDB Corporation that may be >>>> privileged, confidential, or exempt from disclosure under applicable >>>> law. If you are not the intended recipient or authorized to receive >>>> this for the intended recipient, any use, dissemination, distribution, >>>> retention, archiving, or copying of this communication is strictly >>>> prohibited. If you have received this e-mail in error, please notify >>>> the sender immediately by reply e-mail and delete this message. >>> >>> >>> >>> >>> -- >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. 
>> >> >> >> >> -- >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> > > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > > > ------------------------------------------------------------------------------ > Own the Future-Intel(R) Level Up Game Demo Contest 2013 > Rise to greatness in Intel's independent game demo contest. Compete > for recognition, cash, and the chance to get your game on Steam. > $5K grand prize plus 10 genre and skill prizes. Submit your demo > by 6/6/13. http://altfarm.mediaplex.com/ad/ck/12124-176961-30367-2 > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
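A condensed sketch of the test Ashutosh outlines, assuming the patched binaries are invoked from the install directory as in the other examples (the dump file name, data directory, node name, and port 5456 are placeholders; starting in --restoremode follows the companion "restore mode" thread, since a plain coordinator would propagate the restored DDL):

    # 1. take the dump of the existing coordinator, including node definitions
    ./pg_dumpall -p 5432 -s --dump-nodes --file=nodes_and_schema.sql

    # 2. initdb a new coordinator
    /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord_new --nodename coord_new

    # 3. boot it and use the dump to update it
    ./postgres --restoremode -D /usr/local/pgsql/data_cord_new -p 5456
    ./psql -d postgres -f nodes_and_schema.sql -p 5456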
From: Ashutosh B. <ash...@en...> - 2013-04-05 11:02:48
|
Hi Amit, This isn't your change, but the prologue of the function SetDataRowForIntParams uses a non-standard format, adding "----" at the start and end of the prologue. Please remove those. sourceSlot and newSlot seem to be overly generic names in the context of SetDataRowForIntParams(), whose prologue says "Form a bind row for internal parameters". Since the function is going to change a bit, and will take some data from one slot and some from the other to create the data row, I think we should rename the variables/functions to convey the changed semantics. Please add prologues for the functions append_val() and append_junkval(). These functions need better names like append_paramval or append_param_junkval etc.; better still, add the pgxc_ prefix that we are using for all XC-specific functions. Instead of having the macro SET_PARAM_TYPES (which makes debugging difficult), can you please use a function? In fact, can we set the parameters just after setting paramtypes_set? I see we have added jf_xc_node_id and jf_whole_row as storage for attribute numbers of the corresponding fields from the source tuple. Please add some comments specifying their usage (maybe why we need these extra fields apart from jf_junkAttNo). -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
From: Amit K. <ami...@en...> - 2013-04-05 09:28:43
|
On 5 April 2013 15:03, Pavan Deolasee <pav...@gm...> wrote: > *"AFTER ROW triggers are queued for each of the rows processed, and are > then executed at the end of statement. So we need to store the OLD and NEW > rows in memory until the end of statement or until the end of transaction > in case of deferred constraint triggers. The idea is to use tuplestore for > saving them."* > > > Why is that? Aren't after row triggers processed immediately after > processing a row in PostgreSQL? Why do we make a significant deviation in > XC? > They are queued immediately, but executed at the end of the statement. The execution is per row, whereas for statement-level AFTER triggers it is per statement. This is PG behaviour. By the way, I had not considered that a ROW can be shared by multiple triggers, so I am going to change the design slightly. > Thanks, > Pavan > > On Fri, Apr 5, 2013 at 2:38 PM, Amit Khandekar < > ami...@en...> wrote: > >> FYI .. I will use the following document to keep updating the >> implementation details for "Saving AR trigger rows in tuplestore" : >> >> >> https://docs.google.com/document/d/158IPS9npmfNsOWPN6ZYgPy91aowTUNP7L7Fl9zBBGqs/edit?usp=sharing >> >> > > > -- > Pavan Deolasee > http://www.linkedin.com/in/pavandeolasee >
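Since this is the stock PG behaviour that XC has to match, a quick psql illustration (table and function names here are made up) of "queued per row, fired at end of statement": both NOTICEs appear only once the INSERT completes, one per processed row.

    CREATE TABLE t (a int);
    CREATE FUNCTION t_ar_fn() RETURNS trigger AS $$
    BEGIN
        RAISE NOTICE 'AFTER ROW fired for a = %', NEW.a;  -- one queued event per row
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;
    CREATE TRIGGER t_ar AFTER INSERT ON t
        FOR EACH ROW EXECUTE PROCEDURE t_ar_fn();
    INSERT INTO t VALUES (1), (2);  -- both trigger firings happen at end of this statement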
From: Pavan D. <pav...@gm...> - 2013-04-05 09:18:48
|
*"AFTER ROW triggers are queued for each of the rows processed, and are then executed at then end of statement. So we need to store the OLD and NEW rows in memory until the end of statement or until the end of transaction in case of deferred constraint triggers. The idea is to use tuplestore for saving them."* Why is that ? Aren't after row triggers processed immediately after processing a row in PostgreSQL ? Why do we make a significant deviation in XC ? Thanks, Pavan On Fri, Apr 5, 2013 at 2:38 PM, Amit Khandekar < ami...@en...> wrote: > FYI .. I will use the following document to keep updating the > implementation details for "Saving AR trigger rows in tuplestore" : > > > https://docs.google.com/document/d/158IPS9npmfNsOWPN6ZYgPy91aowTUNP7L7Fl9zBBGqs/edit?usp=sharing > > > > > > ------------------------------------------------------------------------------ > Minimize network downtime and maximize team effectiveness. > Reduce network management and security costs.Learn how to hire > the most talented Cisco Certified professionals. Visit the > Employer Resources Portal > http://www.cisco.com/web/learning/employer_resources/index.html > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee |
From: Amit K. <ami...@en...> - 2013-04-05 09:08:53
|
FYI .. I will use the following document to keep updating the implementation details for "Saving AR trigger rows in tuplestore" : https://docs.google.com/document/d/158IPS9npmfNsOWPN6ZYgPy91aowTUNP7L7Fl9zBBGqs/edit?usp=sharing |
From: Ashutosh B. <ash...@en...> - 2013-04-05 07:09:04
|
Hi Amit, This is a general comment about trigger execution on the coordinator. Catalog tables are local to the coordinator and any modifications to those happen at the coordinator in the same way as they would in PostgreSQL. I am not sure if one can create a trigger on a catalog table (I think not). If we could create triggers on catalog tables, we would need to be careful to make sure that they execute the same way as in PostgreSQL. On Thu, Apr 4, 2013 at 7:48 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Amit, > Thanks for creating the branch and the commits. > > I will give my comments on each of the commits in separate mail. I am > starting with 1dc081ebe097e63009bab219231e55679db6fae0. Is that the correct > one? > > > On Thu, Apr 4, 2013 at 12:26 PM, Amit Khandekar < > ami...@en...> wrote: > >> >> >> >> On 3 April 2013 15:10, Ashutosh Bapat <ash...@en...>wrote: >> >>> Hi Amit, >>> Given the magnitude of the change, I think we should break this work >>> into smaller self-sufficient patches and commit them (either on master or >>> in a separate branch for trigger work). This will allow us to review and >>> commit small amount of work and set it aside, rather than going over >>> everything in every round. >>> >> >> I have created a new branch "rowtriggers" in >> postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc, where I >> have dumped incremental changes. >> >> >>> >>> On Wed, Apr 3, 2013 at 10:46 AM, Amit Khandekar < >>> ami...@en...> wrote: >>> >>>> >>>> >>>> >>>> On 26 March 2013 15:53, Ashutosh Bapat <ash...@en... >>>> > wrote: >>>> >>>>> >>>>> >>>>> On Tue, Mar 26, 2013 at 8:56 AM, Amit Khandekar < >>>>> ami...@en...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On 4 March 2013 11:11, Amit Khandekar < >>>>>> ami...@en...> wrote: >>>>>> >>>>>>> On 1 March 2013 13:53, Nikhil Sontakke <ni...@st...> wrote: >>>>>>> >> >>>>>>> >> Issue: Whether we should fetch the whole from the datanode (OLD >>>>>>> row) and not >>>>>>> >> just ctid and node_id and required columns and store it at the >>>>>>> coordinator >>>>>>> >> for the processing OR whether we should fetch each row (OLD and >>>>>>> NEW >>>>>>> >> variants) while processing each row. >>>>>>> >> >>>>>>> >> Both of them have performance impacts - the first one has disk >>>>>>> impact for >>>>>>> >> large number of rows whereas the second has network impact for >>>>>>> querying >>>>>>> >> rows. Is it possible to do some analytical assessment as to which >>>>>>> of them >>>>>>> >> would be better? If you can come up with something concrete (may >>>>>>> be numbers >>>>>>> >> or formulae) we will be able to judge better as to which one to >>>>>>> pick up. >>>>>>> >>>>>>> Will check if we can come up with some sensible analysis or figures. >>>>>>> >>>>>>> >>>>>> I have done some analysis on both of these approaches here: >>>>>> >>>>>> https://docs.google.com/document/d/10QPPq_go_wHqKqhmOFXjJAokfdLR8OaUyZVNDu47GWk/edit?usp=sharing >>>>>> >>>>>> In practical terms, we anyways would need to implement (B). The >>>>>> reason is because when the trigger has conditional execution (WHEN clause) >>>>>> we *have* to fetch the rows beforehand, so there is no point in fetching >>>>>> all of them again at the end of the statement when we already have them >>>>>> locally. So may be it would be too ambitious to have both >>>>>> implementations, at least for this release. >>>>>> >>>>>> >>>>> I agree here.
We can certainly optimize for various cases later, but >>>>> we should have something which would give all the functionality (albeit at >>>>> a lower performance for now). >>>>> >>>>> >>>>>> So I am focussing on (B) right now. We have two options: >>>>>> >>>>>> 1. Store all rows in palloced memory, and save the HeapTuple pointers >>>>>> in the trigger queue, and directly access the OLD and NEW rows using these >>>>>> pointers when needed. Here we will have no control over how much memory we >>>>>> should use for the old and new records, and this might even hamper system >>>>>> performance, let alone XC performance. >>>>>> 2. Other option is to use tuplestore. Here, we need to store the >>>>>> positions of the records in the tuplestore. So for a particular tigger >>>>>> event, fetch by the position. From what I understand, tuplestore can be >>>>>> advanced only sequentially in either direction. So when the read pointer is >>>>>> at position 6 and we need to fetch a record at position 10, we need to call >>>>>> tuplestore_advance() 4 times, and this call involves palloc/pfree overhead >>>>>> because it calls tuplestore_gettuple(). But the trigger records are not >>>>>> distributed so randomly. In fact a set of trigger events for a particular >>>>>> event id are accessed in the same order as the order in which they are >>>>>> queued. So for a particular event id, only the first access call will >>>>>> require random access. tuplestore supports multiple read pointers, so may >>>>>> be we can make use of that to access the first record using the closest >>>>>> read pointer. >>>>>> >>>>>> >>>>> Using palloc will be a problem if the size of data fetched is more >>>>> that what could fit in memory. Also pallocing frequently is going to be >>>>> performance problem. Let's see how does the tuple store approach go. >>>>> >>>> >>>> While I am working on the AFTER ROW optimization, here's a patch that >>>> has only BEFORE ROW trigger support, so that it can get you started with >>>> first round of review. The regression is not analyzed fully yet. Besides >>>> the AR trigger related changes, I have also stripped the logic of whether >>>> to run the trigger on datanode or coordinator; this logic depends on both >>>> before and after triggers. >>>> >>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>>> >> >>>>>>> > >>>>>>> > Or we can consider a hybrid approach of getting the rows in >>>>>>> batches of >>>>>>> > 1000 or so if possible as well. That ways they get into coordinator >>>>>>> > memory in one shot and can be processed in batches. Obviously this >>>>>>> > should be considered if it's not going to be a complicated >>>>>>> > implementation. >>>>>>> >>>>>>> It just occurred to me that it would not be that hard to optimize the >>>>>>> row-fetching-by-ctid as shown below: >>>>>>> 1. When it is time to fire the queued triggers at the >>>>>>> statement/transaction end, initialize cursors - one cursor per >>>>>>> datanode - which would do: SELECT remote_heap_fetch(table_name, >>>>>>> '<ctidlist>'); We can form this ctidlist out of the trigger even >>>>>>> list. >>>>>>> 2. For each trigger event entry in the trigger queue, FETCH NEXT >>>>>>> using >>>>>>> the appropriate cursor name according to the datanode id to which the >>>>>>> trigger entry belongs. >>>>>>> >>>>>>> > >>>>>>> >>> Currently we fetch all attributes in the SELECT subplans. I have >>>>>>> >>> created another patch to fetch only the required attribtues, but >>>>>>> have >>>>>>> >>> not merged that into this patch. 
>>>>>>> > >>>>>>> > Do we have other places where we unnecessary fetch all attributes? >>>>>>> > ISTM, this should be fixed as a performance improvement first >>>>>>> ahead of >>>>>>> > everything else. >>>>>>> >>>>>>> I believe DML subplan is the only remaining place where we fetch all >>>>>>> attributes. And yes, this is a must-have for triggers, otherwise, the >>>>>>> other optimizations would be of no use. >>>>>>> >>>>>>> > >>>>>>> >>> 2. One important TODO for BEFORE trigger is this: Just before >>>>>>> >>> invoking the trigger functions, in PG, the tuple is row-locked >>>>>>> >>> (exclusive) by GetTupleTrigger() and the locked version is >>>>>>> fetched >>>>>>> >>> from the table. So it is made sure that while all the triggers >>>>>>> for >>>>>>> >>> that table are executed, no one can update that particular row. >>>>>>> >>> In the patch, we haven't locked the row. We need to lock it >>>>>>> either by >>>>>>> >>> executing : >>>>>>> >>> 1. SELECT * from tab1 where ctid = <ctid_val> FOR UPDATE, and >>>>>>> then >>>>>>> >>> use the returned ROW as the OLD row. >>>>>>> >>> OR >>>>>>> >>> 2. The UPDATE subplan itself should have SELECT for UPDATE so >>>>>>> that >>>>>>> >>> the row is already locked, and we don't have to lock it again. >>>>>>> >>> #2 is simple though it might cause some amount of longer waits >>>>>>> in general. >>>>>>> >>> Using #1, though the locks would be acquired only when the >>>>>>> particular >>>>>>> >>> row is updated, the locks would be released only after >>>>>>> transaction >>>>>>> >>> end, so #1 might not be worth implementing. >>>>>>> >>> Also #1 requires another explicit remote fetch for the >>>>>>> >>> lock-and-get-latest-version operation. >>>>>>> >>> I am more inclined towards #2. >>>>>>> >>> >>>>>>> >> The option #2 however, has problem of locking too many rows if >>>>>>> there are >>>>>>> >> coordinator quals in the subplans IOW the number of rows finally >>>>>>> updated are >>>>>>> >> lesser than the number of rows fetched from the datanode. It can >>>>>>> cause >>>>>>> >> unwanted deadlocks. Unless there is a way to release these extra >>>>>>> locks, I am >>>>>>> >> afraid this option will be a problem. >>>>>>> >>>>>>> True. Regardless of anything else - whether it is deadlocks or longer >>>>>>> waits, we should not lock rows that are not to be updated. >>>>>>> >>>>>>> There is a more general row-locking issue that we need to solve first >>>>>>> : 3606317. I anticipate that solving this will solve the trigger >>>>>>> specific lock issue. So for triggers, this is a must-have, and I am >>>>>>> going to solve this issue as part of this bug 3606317. >>>>>>> >>>>>>> >> >>>>>>> > Deadlocks? ISTM, we can get more lock waits because of this but I >>>>>>> do >>>>>>> > not see deadlock scenarios.. >>>>>>> > >>>>>>> > With the FQS shipping work being done by Ashutosh, will we also >>>>>>> ship >>>>>>> > major chunks of subplans to the datanodes? If yes, then row locking >>>>>>> > will only involve required tuples (hopefully) from the >>>>>>> coordinator's >>>>>>> > point of view. >>>>>>> > >>>>>>> > Also, something radical is can be invent a new type of FOR [NODE] >>>>>>> > UPDATE type lock to minimize the impact of such locking of rows on >>>>>>> > datanodes? >>>>>>> > >>>>>>> > Regards, >>>>>>> > Nikhils >>>>>>> > >>>>>>> >>> >>>>>>> >>> 3. The BEFORE trigger function can change the distribution >>>>>>> column >>>>>>> >>> itself. We need to add a check at the end of the trigger >>>>>>> executions. 
>>>>>>> >>> >>>>>>> >> >>>>>>> >> Good, you thought about that. Yes we should check it. >>>>>>> >> >>>>>>> >>> >>>>>>> >>> 4. Fetching OLD row for WHEN clause handling. >>>>>>> >>> >>>>>>> >>> 5. Testing with mix of Shippable and non-shippable ROW triggers >>>>>>> >>> >>>>>>> >>> 6. Other types of triggers. INSTEAD triggers are anticipated to >>>>>>> work >>>>>>> >>> without significant changes, but they are yet to be tested. >>>>>>> >>> INSERT/DELETE triggers: Most of the infrastructure has been done >>>>>>> while >>>>>>> >>> implementing UPDATE triggers. But some changes specific to >>>>>>> INSERT and >>>>>>> >>> DELETE are yet to be done. >>>>>>> >>> Deferred triggers to be tested. >>>>>>> >>> >>>>>>> >>> 7. Regression analysis. There are some new failures. Will post >>>>>>> another >>>>>>> >>> fair version of the patch after regression analysis and fixing >>>>>>> various >>>>>>> >>> TODOs. >>>>>>> >>> >>>>>>> >>> Comments welcome. >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> >>> Everyone hates slow websites. So do we. >>>>>>> >>> Make your web apps faster with AppDynamics >>>>>>> >>> Download AppDynamics Lite for free today: >>>>>>> >>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>> >>> _______________________________________________ >>>>>>> >>> Postgres-xc-developers mailing list >>>>>>> >>> Pos...@li... >>>>>>> >>> >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>> >>> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> -- >>>>>>> >> Best Wishes, >>>>>>> >> Ashutosh Bapat >>>>>>> >> EntepriseDB Corporation >>>>>>> >> The Enterprise Postgres Company >>>>>>> >> >>>>>>> >> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> >> Everyone hates slow websites. So do we. >>>>>>> >> Make your web apps faster with AppDynamics >>>>>>> >> Download AppDynamics Lite for free today: >>>>>>> >> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>> >> _______________________________________________ >>>>>>> >> Postgres-xc-developers mailing list >>>>>>> >> Pos...@li... >>>>>>> >> >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>> >> >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > -- >>>>>>> > StormDB - http://www.stormdb.com >>>>>>> > The Database Cloud >>>>>>> > Postgres-XC Support and Service >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Enterprise Postgres Company >>>>> >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
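As a rough psql sketch of the two ideas batted around in this thread, locking one row by ctid (option #1) and fetching the queued OLD rows through a per-datanode cursor with FETCH NEXT: the table name and ctid values are made up, and since the proposed remote_heap_fetch() does not exist yet, a plain ctid lookup stands in for it. (In XC the ctid is only meaningful per datanode, which is why the thread pairs it with node_id.)

    BEGIN;
    -- option #1: lock and re-fetch the latest version of a single row
    SELECT * FROM tab1 WHERE ctid = '(0,1)'::tid FOR UPDATE;
    -- cursor-per-datanode idea: one cursor over the event ctid list,
    -- then one FETCH NEXT per queued trigger event, in queue order
    DECLARE dn1_old CURSOR FOR
        SELECT * FROM tab1 WHERE ctid = ANY (ARRAY['(0,1)','(0,2)']::tid[]);
    FETCH NEXT FROM dn1_old;  -- OLD row for the first queued trigger event
    FETCH NEXT FROM dn1_old;  -- OLD row for the next event
    COMMIT;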
From: Amit K. <ami...@en...> - 2013-04-05 06:06:36
|
On 1 April 2013 14:23, Abbas Butt <abb...@en...> wrote: > > > On Mon, Apr 1, 2013 at 11:02 AM, Amit Khandekar < > ami...@en...> wrote: > >> >> >> >> On 31 March 2013 14:07, Abbas Butt <abb...@en...> wrote: >> >>> Hi, >>> Attached please find the revised patch for restore mode. This patch has >>> to be applied on top of the patches I sent earlier for >>> 3608377, >>> 3608376 & >>> 3608375. >>> >>> I have also attached some scripts and a C file useful for testing the >>> whole procedure. It is a database that has many objects in it. >>> >>> Here are the revised instructions for adding new nodes to the cluster. >>> >>> ====================================== >>> >>> Here are the steps to add a new coordinator >>> >>> 1) Initdb new coordinator >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 --nodename >>> coord_3 >>> >>> 2) Make necessary changes in its postgresql.conf, in particular specify >>> new coordinator name and pooler port >>> >>> 3) Connect to any of the existing coordinators & lock the cluster for >>> backup, do not close this session >>> ./psql postgres -p 5432 >>> select pgxc_lock_for_backup(); >>> >>> 4) Connect to any of the existing coordinators and take backup of the >>> database >>> ./pg_dumpall -p 5432 -s --include-nodes --dump-nodes >>> --file=/home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1100_all_objects_coord.sql >>> >>> 5) Start the new coordinator specify --restoremode while starting the >>> coordinator >>> ./postgres --restoremode -D ../data_cord3 -p 5455 >>> >>> 6) Create the new database on the new coordinator - optional >>> ./createdb test -p 5455 >>> >>> 7) Restore the backup that was taken from an existing coordinator by >>> connecting to the new coordinator directly >>> ./psql -d test -f >>> /home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1100_all_objects_coord.sql >>> -p 5455 >>> >>> 8) Quit the new coordinator >>> >>> 9) Start the new coordinator as a by specifying --coordinator >>> ./postgres --coordinator -D ../data_cord3 -p 5455 >>> >>> 10) Create the new coordinator on rest of the coordinators and reload >>> configuration >>> CREATE NODE COORD_3 WITH (HOST = 'localhost', type = >>> 'coordinator', PORT = 5455); >>> SELECT pgxc_pool_reload(); >>> >>> 11) Quit the session of step 3, this will unlock the cluster >>> >>> 12) The new coordinator is now ready >>> ./psql test -p 5455 >>> create table test_new_coord(a int, b int); >>> \q >>> ./psql test -p 5432 >>> select * from test_new_coord; >>> >>> *======================================* >>> *======================================* >>> >>> Here are the steps to add a new datanode >>> >>> >>> 1) Initdb new datanode >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data3 --nodename >>> data_node_3 >>> >>> 2) Make necessary changes in its postgresql.conf, in particular specify >>> new datanode name >>> >>> 3) Connect to any of the existing coordinators & lock the cluster for >>> backup, do not close this session >>> ./psql postgres -p 5432 >>> select pgxc_lock_for_backup(); >>> >>> 4) Connect to any of the existing datanodes and take backup of the >>> database >>> ./pg_dumpall -p 15432 -s --include-nodes >>> --file=/home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1122_all_objects_dn1.sql >>> >> >> >> Why do we need --include-nodes on datanode ? >> > > Agreed, this option should not be used. > > >> >> ---- >> >> >> + * The dump taken from a datanode does NOT contain any DISTRIBUTE >> BY >> + * clause. 
This fact is used here to make sure that when the >> + * DISTRIBUTE BY clause is missing in the statemnet the system >> + * should not try to find out the node list itself. >> + */ >> + if ((IS_PGXC_COORDINATOR || (isRestoreMode && stmt->distributeby != >> NULL)) >> + && relkind == RELKIND_RELATION) >> >> How do we enforce not having DISTRIBUTE BY clause in the pg_dump output >> if it's a datanode ? >> > > We do not have to enforce it, since the pgxc_class catalog table has no > information in it on datanodes, hence dump will not contain any DISTRIBUTE > BY clause. > > >> Also, can we just error out in restore mode if the DISTRIBUTE BY clause >> is present ? >> > > No we cannot error out, because while adding a coordinator DISTRIBUTE BY > clause will be present, and since we have started the server by using > --restoremode in place of --datanode or --coordinator we do not know > whether the user is adding a new datanode or a new coordinator. > Understood. > > >> >> ----- >> >> >>> 5) Start the new datanode specify --restoremode while starting the it >>> ./postgres --restoremode -D ../data3 -p 35432 >>> >> >> >> It seems you have disabled use of GTM in restore mode. >> > > I did not. > > >> For e.g. in GetNewTransactionId(), we get a global tansaction id only if >> it's a coordinator or if IsPGXCNodeXactDatanodeDirect() is true. But >> IsPGXCNodeXactDatanodeDirect() will now return false in restore mode. >> > > No, I have not changed the function IsPGXCNodeXactDatanodeDirect, it would > behave exactly as it used to. I changed the function IsPGXCNodeXactReadOnly. > Oh ok. I did not correctly see the details. Agreed now. > > > >> Is there any specific reason for disabling use of GTM in restore mode ? >> > > No reason. GTM should be used. > > >> I don't see any harm in using GTM. In fact, it is better to start using >> global xids as soon as possible. >> > > Exactly. I just verified that the statement > xid = (TransactionId) BeginTranGTM(timestamp) > in function GetNewTransactionId is called in restore mode. > Got it now. I have no more comments. Please keep all the steps in some central location so that everybody can access it. > >> >> >>> >>> 6) Restore the backup that was taken from an existing datanode by >>> connecting to the new datanode directly >>> ./psql -d postgres -f >>> /home/edb/Desktop/NodeAddition/revised_patches/misc_dumps/1122_all_objects_dn1.sql >>> -p 35432 >>> >>> 7) Quit the new datanode >>> >>> 8) Start the new datanode as a datanode by specifying --datanode >>> ./postgres --datanode -D ../data3 -p 35432 >>> >>> 9) Create the new datanode on all the coordinators and reload >>> configuration >>> CREATE NODE DATA_NODE_3 WITH (HOST = 'localhost', type = 'datanode', >>> PORT = 35432); >>> SELECT pgxc_pool_reload(); >>> >>> 10) Quit the session of step 3, this will unlock the cluster >>> >>> 11) Redistribute data by using ALTER TABLE REDISTRIBUTE >>> >>> 12) The new daatnode is now ready >>> ./psql test >>> create table test_new_dn(a int, b int) distribute by replication; >>> insert into test_new_dn values(1,2); >>> EXECUTE DIRECT ON (data_node_1) 'SELECT * from test_new_dn'; >>> EXECUTE DIRECT ON (data_node_2) 'SELECT * from test_new_dn'; >>> EXECUTE DIRECT ON (data_node_3) 'SELECT * from test_new_dn'; >>> >>> ====================================== >>> >>> On Wed, Mar 27, 2013 at 5:02 PM, Abbas Butt <abb...@en... 
>>> > wrote: >>> >>>> Feature ID 3608379 >>>> >>>> On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar < >>>> ami...@en...> wrote: >>>> >>>>> On 1 March 2013 01:30, Abbas Butt <abb...@en...> wrote: >>>>> > >>>>> > >>>>> > On Thu, Feb 28, 2013 at 12:44 PM, Amit Khandekar >>>>> > <ami...@en...> wrote: >>>>> >> >>>>> >> >>>>> >> >>>>> >> On 28 February 2013 10:23, Abbas Butt <abb...@en...> >>>>> wrote: >>>>> >>> >>>>> >>> Hi All, >>>>> >>> >>>>> >>> Attached please find a patch that provides a new command line >>>>> argument >>>>> >>> for postgres called --restoremode. >>>>> >>> >>>>> >>> While adding a new node to the cluster we need to restore the >>>>> schema of >>>>> >>> existing database to the new node. >>>>> >>> If the new node is a datanode and we connect directly to it, it >>>>> does not >>>>> >>> allow DDL, because it is in read only mode & >>>>> >>> If the new node is a coordinator, it will send DDLs to all the >>>>> other >>>>> >>> coordinators which we do not want it to do. >>>>> >> >>>>> >> >>>>> >> What if we allow writes in standalone mode, so that we would >>>>> initialize >>>>> >> the new node using standalone mode instead of --restoremode ? >>>>> > >>>>> > >>>>> > Please take a look at the patch, I am using --restoremode in place of >>>>> > --coordinator & --datanode. I am not sure how would stand alone mode >>>>> fit in >>>>> > here. >>>>> >>>>> I was trying to see if we can avoid adding a new mode, instead, use >>>>> standalone mode for all the purposes for which restoremode is used. >>>>> Actually I checked the documentation, it says this mode is used only >>>>> for debugging or recovery purposes, so now I myself am a bit hesitent >>>>> about this mode for the purpose of restoring. >>>>> >>>>> > >>>>> >> >>>>> >> >>>>> >>> >>>>> >>> To provide ability to restore on the new node a new command line >>>>> argument >>>>> >>> is provided. >>>>> >>> It is to be provided in place of --coordinator OR --datanode. >>>>> >>> In restore mode both coordinator and datanode are internally >>>>> treated as a >>>>> >>> datanode. >>>>> >>> For more details see patch comments. >>>>> >>> >>>>> >>> After this patch one can add a new node to the cluster. >>>>> >>> >>>>> >>> Here are the steps to add a new coordinator >>>>> >>> >>>>> >>> >>>>> >>> 1) Initdb new coordinator >>>>> >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 >>>>> >>> --nodename coord_3 >>>>> >>> >>>>> >>> 2) Make necessary changes in its postgresql.conf, in particular >>>>> specify >>>>> >>> new coordinator name and pooler port >>>>> >>> >>>>> >>> 3) Connect to any of the existing coordinators & lock the cluster >>>>> for >>>>> >>> backup >>>>> >>> ./psql postgres -p 5432 >>>>> >>> SET xc_lock_for_backup=yes; >>>>> >>> \q >>>>> >> >>>>> >> >>>>> >> I haven't given a thought on the earlier patch you sent for cluster >>>>> lock >>>>> >> implementation; may be we can discuss this on that thread, but just >>>>> a quick >>>>> >> question: >>>>> >> >>>>> >> Does the cluster-lock command wait for the ongoing DDL commands to >>>>> finish >>>>> >> ? If not, we have problems. The subsequent pg_dump would not >>>>> contain objects >>>>> >> created by these particular DDLs. >>>>> > >>>>> > >>>>> > Suppose you have a two coordinator cluster. Assume one client >>>>> connected to >>>>> > each. Suppose one client issues a lock cluster command and the other >>>>> issues >>>>> > a DDL. Is this what you mean by an ongoing DDL? If true then answer >>>>> to your >>>>> > question is Yes. 
>>>>> > >>>>> > Suppose you have a prepared transaction that has a DDL in it, again >>>>> if this >>>>> > can be considered an on going DDL, then again answer to your >>>>> question is >>>>> > Yes. >>>>> > >>>>> > Suppose you have a two coordinator cluster. Assume one client >>>>> connected to >>>>> > each. One client starts a transaction and issues a DDL, the second >>>>> client >>>>> > issues a lock cluster command, the first commits the transaction. If >>>>> this is >>>>> > an ongoing DDL, then the answer to your question is No. But its a >>>>> matter of >>>>> > deciding which camp are we going to put COMMIT in, the allow camp, >>>>> or the >>>>> > deny camp. I decided to put it in allow camp, because I have not yet >>>>> written >>>>> > any code to detect whether a transaction being committed has a DDL >>>>> in it or >>>>> > not, and stopping all transactions from committing looks too >>>>> restrictive to >>>>> > me. >>>>> > >>>>> > Do you have some other meaning of an ongoing DDL? >>>>> > >>>>> > I agree that we should have discussed this on the right thread. Lets >>>>> > continue this discussion on that thread. >>>>> >>>>> Continued on the other thread. >>>>> >>>>> > >>>>> >> >>>>> >> >>>>> >>> >>>>> >>> >>>>> >>> 4) Connect to any of the existing coordinators and take backup of >>>>> the >>>>> >>> database >>>>> >>> ./pg_dump -p 5432 -C -s >>>>> >>> >>>>> --file=/home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql test >>>>> >>> >>>>> >>> 5) Start the new coordinator specify --restoremode while starting >>>>> the >>>>> >>> coordinator >>>>> >>> ./postgres --restoremode -D ../data_cord3 -p 5455 >>>>> >>> >>>>> >>> 6) connect to the new coordinator directly >>>>> >>> ./psql postgres -p 5455 >>>>> >>> >>>>> >>> 7) create all the datanodes and the rest of the coordinators on >>>>> the new >>>>> >>> coordiantor & reload configuration >>>>> >>> CREATE NODE DATA_NODE_1 WITH (HOST = 'localhost', type = >>>>> >>> 'datanode', PORT = 15432, PRIMARY); >>>>> >>> CREATE NODE DATA_NODE_2 WITH (HOST = 'localhost', type = >>>>> >>> 'datanode', PORT = 25432); >>>>> >>> >>>>> >>> CREATE NODE COORD_1 WITH (HOST = 'localhost', type = >>>>> >>> 'coordinator', PORT = 5432); >>>>> >>> CREATE NODE COORD_2 WITH (HOST = 'localhost', type = >>>>> >>> 'coordinator', PORT = 5433); >>>>> >>> >>>>> >>> SELECT pgxc_pool_reload(); >>>>> >>> >>>>> >>> 8) quit psql >>>>> >>> >>>>> >>> 9) Create the new database on the new coordinator >>>>> >>> ./createdb test -p 5455 >>>>> >>> >>>>> >>> 10) create the roles and table spaces manually, the dump does not >>>>> contain >>>>> >>> roles or table spaces >>>>> >>> ./psql test -p 5455 >>>>> >>> CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE; >>>>> >>> CREATE TABLESPACE my_space LOCATION >>>>> >>> '/usr/local/pgsql/my_space_location'; >>>>> >>> \q >>>>> >>> >>>>> >> >>>>> >> Will pg_dumpall help ? It dumps roles also. >>>>> > >>>>> > >>>>> > Yah , but I am giving example of pg_dump so this step has to be >>>>> there. 
>>>>> > >>>>> >> >>>>> >> >>>>> >> >>>>> >>> >>>>> >>> 11) Restore the backup that was taken from an existing coordinator >>>>> by >>>>> >>> connecting to the new coordinator directly >>>>> >>> ./psql -d test -f >>>>> >>> /home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql -p >>>>> 5455 >>>>> >>> >>>>> >>> 11) Quit the new coordinator >>>>> >>> >>>>> >>> 12) Connect to any of the existing coordinators & unlock the >>>>> cluster >>>>> >>> ./psql postgres -p 5432 >>>>> >>> SET xc_lock_for_backup=no; >>>>> >>> \q >>>>> >>> >>>>> >> >>>>> >> Unlocking the cluster has to be done *after* the node is added into >>>>> the >>>>> >> cluster. >>>>> > >>>>> > >>>>> > Very true. I stand corrected. This means CREATE NODE has to be >>>>> allowed when >>>>> > xc_lock_for_backup is set. >>>>> > >>>>> >> >>>>> >> >>>>> >> >>>>> >>> >>>>> >>> 13) Start the new coordinator as a by specifying --coordinator >>>>> >>> ./postgres --coordinator -D ../data_cord3 -p 5455 >>>>> >>> >>>>> >>> 14) Create the new coordinator on rest of the coordinators and >>>>> reload >>>>> >>> configuration >>>>> >>> CREATE NODE COORD_3 WITH (HOST = 'localhost', type = >>>>> >>> 'coordinator', PORT = 5455); >>>>> >>> SELECT pgxc_pool_reload(); >>>>> >>> >>>>> >>> 15) The new coordinator is now ready >>>>> >>> ./psql test -p 5455 >>>>> >>> create table test_new_coord(a int, b int); >>>>> >>> \q >>>>> >>> ./psql test -p 5432 >>>>> >>> select * from test_new_coord; >>>>> >>> >>>>> >>> >>>>> >>> Here are the steps to add a new datanode >>>>> >>> >>>>> >>> >>>>> >>> 1) Initdb new datanode >>>>> >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data3 >>>>> --nodename >>>>> >>> data_node_3 >>>>> >>> >>>>> >>> 2) Make necessary changes in its postgresql.conf, in particular >>>>> specify >>>>> >>> new datanode name >>>>> >>> >>>>> >>> 3) Connect to any of the existing coordinators & lock the cluster >>>>> for >>>>> >>> backup >>>>> >>> ./psql postgres -p 5432 >>>>> >>> SET xc_lock_for_backup=yes; >>>>> >>> \q >>>>> >>> >>>>> >>> 4) Connect to any of the existing datanodes and take backup of the >>>>> >>> database >>>>> >>> ./pg_dump -p 15432 -C -s >>>>> >>> >>>>> --file=/home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql test >>>>> >>> >>>>> >>> 5) Start the new datanode specify --restoremode while starting >>>>> the it >>>>> >>> ./postgres --restoremode -D ../data3 -p 35432 >>>>> >>> >>>>> >>> 6) Create the new database on the new datanode >>>>> >>> ./createdb test -p 35432 >>>>> >>> >>>>> >>> 7) create the roles and table spaces manually, the dump does not >>>>> contain >>>>> >>> roles or table spaces >>>>> >>> ./psql test -p 35432 >>>>> >>> CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE; >>>>> >>> CREATE TABLESPACE my_space LOCATION >>>>> >>> '/usr/local/pgsql/my_space_location'; >>>>> >>> \q >>>>> >>> >>>>> >>> 8) Restore the backup that was taken from an existing datanode by >>>>> >>> connecting to the new datanode directly >>>>> >>> ./psql -d test -f >>>>> >>> /home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql -p >>>>> 35432 >>>>> >>> >>>>> >>> 9) Quit the new datanode >>>>> >>> >>>>> >>> 10) Connect to any of the existing coordinators & unlock the >>>>> cluster >>>>> >>> ./psql postgres -p 5432 >>>>> >>> SET xc_lock_for_backup=no; >>>>> >>> \q >>>>> >>> >>>>> >>> 11) Start the new datanode as a datanode by specifying --datanode >>>>> >>> ./postgres --datanode -D ../data3 -p 35432 >>>>> >>> >>>>> >>> 12) Create the new datanode on all the coordinators and reload >>>>> >>> configuration >>>>> >>> CREATE 
NODE DATA_NODE_3 WITH (HOST = 'localhost', type = >>>>> >>> 'datanode', PORT = 35432); >>>>> >>> SELECT pgxc_pool_reload(); >>>>> >>> >>>>> >>> 13) Redistribute data by using ALTER TABLE REDISTRIBUTE >>>>> >>> >>>>> >>> 14) The new daatnode is now ready >>>>> >>> ./psql test >>>>> >>> create table test_new_dn(a int, b int) distribute by >>>>> replication; >>>>> >>> insert into test_new_dn values(1,2); >>>>> >>> EXECUTE DIRECT ON (data_node_1) 'SELECT * from >>>>> test_new_dn'; >>>>> >>> EXECUTE DIRECT ON (data_node_2) 'SELECT * from >>>>> test_new_dn'; >>>>> >>> EXECUTE DIRECT ON (data_node_3) 'SELECT * from >>>>> test_new_dn'; >>>>> >>> >>>>> >>> Please note that the steps assume that the patch sent earlier >>>>> >>> 1_lock_cluster.patch in mail subject [Patch to lock cluster] is >>>>> applied. >>>>> >>> >>>>> >>> I have also attached test database scripts, that would help in >>>>> patch >>>>> >>> review. >>>>> >>> >>>>> >>> Comments are welcome. >>>>> >>> >>>>> >>> -- >>>>> >>> Abbas >>>>> >>> Architect >>>>> >>> EnterpriseDB Corporation >>>>> >>> The Enterprise PostgreSQL Company >>>>> >>> >>>>> >>> Phone: 92-334-5100153 >>>>> >>> >>>>> >>> Website: www.enterprisedb.com >>>>> >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>> >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>> >>> >>>>> >>> This e-mail message (and any attachment) is intended for the use of >>>>> >>> the individual or entity to whom it is addressed. This message >>>>> >>> contains information from EnterpriseDB Corporation that may be >>>>> >>> privileged, confidential, or exempt from disclosure under >>>>> applicable >>>>> >>> law. If you are not the intended recipient or authorized to receive >>>>> >>> this for the intended recipient, any use, dissemination, >>>>> distribution, >>>>> >>> retention, archiving, or copying of this communication is strictly >>>>> >>> prohibited. If you have received this e-mail in error, please >>>>> notify >>>>> >>> the sender immediately by reply e-mail and delete this message. >>>>> >>> >>>>> >>> >>>>> ------------------------------------------------------------------------------ >>>>> >>> Everyone hates slow websites. So do we. >>>>> >>> Make your web apps faster with AppDynamics >>>>> >>> Download AppDynamics Lite for free today: >>>>> >>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>> >>> _______________________________________________ >>>>> >>> Postgres-xc-developers mailing list >>>>> >>> Pos...@li... >>>>> >>> >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>> >>> >>>>> >> >>>>> > >>>>> > >>>>> > >>>>> > -- >>>>> > -- >>>>> > Abbas >>>>> > Architect >>>>> > EnterpriseDB Corporation >>>>> > The Enterprise PostgreSQL Company >>>>> > >>>>> > Phone: 92-334-5100153 >>>>> > >>>>> > Website: www.enterprisedb.com >>>>> > EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>> > Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>> > >>>>> > This e-mail message (and any attachment) is intended for the use of >>>>> > the individual or entity to whom it is addressed. This message >>>>> > contains information from EnterpriseDB Corporation that may be >>>>> > privileged, confidential, or exempt from disclosure under applicable >>>>> > law. If you are not the intended recipient or authorized to receive >>>>> > this for the intended recipient, any use, dissemination, >>>>> distribution, >>>>> > retention, archiving, or copying of this communication is strictly >>>>> > prohibited. 
If you have received this e-mail in error, please notify >>>>> > the sender immediately by reply e-mail and delete this message. >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Abbas >>>> Architect >>>> EnterpriseDB Corporation >>>> The Enterprise PostgreSQL Company >>>> >>>> Phone: 92-334-5100153 >>>> >>>> Website: www.enterprisedb.com >>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>> >>>> This e-mail message (and any attachment) is intended for the use of >>>> the individual or entity to whom it is addressed. This message >>>> contains information from EnterpriseDB Corporation that may be >>>> privileged, confidential, or exempt from disclosure under applicable >>>> law. If you are not the intended recipient or authorized to receive >>>> this for the intended recipient, any use, dissemination, distribution, >>>> retention, archiving, or copying of this communication is strictly >>>> prohibited. If you have received this e-mail in error, please notify >>>> the sender immediately by reply e-mail and delete this message. >>> >>> >>> >>> >>> -- >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. >>> >> >> > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > |
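The datanode-addition steps above, condensed into a sketch. Ports and paths are the examples used in this thread; the coordinator session holding pgxc_lock_for_backup() must stay open until the new node is ready, and per the correction above, --include-nodes is not needed when dumping a datanode.

    initdb -D /usr/local/pgsql/data3 --nodename data_node_3
    psql postgres -p 5432       # in this session run: SELECT pgxc_lock_for_backup();
                                # and keep the session open until the end
    pg_dumpall -p 15432 -s --file=dn1.sql            # from an existing datanode
    postgres --restoremode -D /usr/local/pgsql/data3 -p 35432 &
    psql -d postgres -p 35432 -f dn1.sql
    # stop the node, restart it as a real datanode, then on every coordinator:
    postgres --datanode -D /usr/local/pgsql/data3 -p 35432 &
    psql postgres -p 5432 -c "CREATE NODE data_node_3 WITH (HOST = 'localhost', type = 'datanode', PORT = 35432);"
    psql postgres -p 5432 -c "SELECT pgxc_pool_reload();"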
From: Michael P. <mic...@gm...> - 2013-04-05 06:05:38
|
On Fri, Apr 5, 2013 at 2:26 PM, Pavan Deolasee <pav...@gm...>wrote: > > > > On Thu, Apr 4, 2013 at 7:13 PM, Michael Paquier <mic...@gm... > > wrote: > >> OK guys you just put the XC master out-of-sync with PG master: >> >> http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=52a8aea4290851e5d40c3bb4e3237ad8aeceaf68 >> >> On Thu, Apr 4, 2013 at 7:01 PM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> >>> >>> >>> On Thu, Apr 4, 2013 at 3:20 PM, Ahsan Hadi <ahs...@en...>wrote: >>> >>>> Hi Pavan, >>>> >>>> Thanks for raising this. Just to make sure i understand the problem, >>>> the next release of postgres-xc will be 1.1. The 1.1 release will be based >>>> on PG 9.2, >>>> >>> >>> and that we should merge from master branch of PostgreSQL upto the point >>> from where REL_9_2 is cut. >>> >> Correcting you here, you will have to merge master branch up to a commit >> which is the intersection of master and REL9_3_STABLE, the intersection >> commit determined by: >> git merge-base master REL9_3_STABLE. >> > > I am sure you mean REL9_2_STABLE because thats the branch we are > interested in. > Oh OK I missed the point. What is aimed here is the stable branch for 1.1. In this case yes, it is REL9_2_STABLE. I thought about merging XC-master with future PG-9.3 stable. > > >> . >> >> Resolving it is possible of course, simply delete the existing master >> branch and recreate it down to the commit before the merge. >> > > That's not a clean way and I am not sure how it would impact the users who > are already tracking the current master branch. Somebody need to study and > experiment carefully before doing more damage. One way I have seen by > reading docs is to use "git revert -m 1 <merge commit id>". This indeed > would revert the merge commit, but unfortunately will keep the history > around. Also, this would cause problems when next time we try to merge the > REL9_2_STABLE branch to the corresponding XC stable branch. > I still vote for cleaning up history and rebasing the master branch. I recall that you did it once in the past when master was synced with PG-8.4 stable. -- Michael |
From: Pavan D. <pav...@gm...> - 2013-04-05 05:27:25
|
On Thu, Apr 4, 2013 at 7:13 PM, Michael Paquier <mic...@gm...>wrote: > OK guys you just put the XC master out-of-sync with PG master: > > http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=52a8aea4290851e5d40c3bb4e3237ad8aeceaf68 > > On Thu, Apr 4, 2013 at 7:01 PM, Ashutosh Bapat < > ash...@en...> wrote: > >> >> >> >> On Thu, Apr 4, 2013 at 3:20 PM, Ahsan Hadi <ahs...@en...>wrote: >> >>> Hi Pavan, >>> >>> Thanks for raising this. Just to make sure i understand the problem, the >>> next release of postgres-xc will be 1.1. The 1.1 release will be based on >>> PG 9.2, >>> >> >> and that we should merge from master branch of PostgreSQL upto the point >> from where REL_9_2 is cut. >> > Correcting you here, you will have to merge master branch up to a commit > which is the intersection of master and REL9_3_STABLE, the intersection > commit determined by: > git merge-base master REL9_3_STABLE. > I am sure you mean REL9_2_STABLE because that's the branch we are interested in. > . > > Resolving it is possible of course, simply delete the existing master > branch and recreate it down to the commit before the merge. > That's not a clean way, and I am not sure how it would impact the users who are already tracking the current master branch. Somebody needs to study and experiment carefully before doing more damage. One way I have seen by reading the docs is to use "git revert -m 1 <merge commit id>". This would indeed revert the merge commit, but unfortunately will keep the history around. Also, this would cause problems the next time we try to merge the REL9_2_STABLE branch into the corresponding XC stable branch. > Can you guys do it without breaking the repository more??? Or not? > > Calm down :-) We all make mistakes. But I agree. We have to be extremely careful with what we do to the repository given that many people are now following us. Thanks, Pavan
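Spelled out, the revert option Pavan mentions would look like this, where the hash is the bad merge commit linked above and -m 1 keeps the first parent, i.e. the pre-merge master line:

    git revert -m 1 52a8aea4290851e5d40c3bb4e3237ad8aeceaf68

As he notes, this undoes the merged changes but leaves the merge itself in history, which is exactly what later complicates merging REL9_2_STABLE into the XC stable branch.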
From: Ashutosh B. <ash...@en...> - 2013-04-04 15:47:03
|
Hi Amit, Here are comments on commit 1dc081ebe097e63009bab219231e55679db6fae0 Author: Amit Khandekar <ami...@en...> Date: Thu Apr 4 11:39:46 2013 +0545 A few helper functions needed for subsequent trigger-support related commits. In the prologue of pgxc_check_triggers_shippability_ex(), can you please use the word "caller" instead of "user"? "user" is confusing. Generally we keep all shippability-related code in pgxcship.c, but I see trigger shippability code in trigger.c. Should we move it to pgxcship.c? We may want to give a different name to the function pgxc_check_triggers_shippability_ex(), since it does more than look for shippability. The function looks at all triggers for a given command type to check whether each of them is shippable. It also returns has_quals and the trigger descriptor if requested. Maybe we want to call it pgxc_relation_has_shippable_triggers() or something like that. BTW, mixing three functionalities (1. checking shippability of triggers on a relation, 2. returning a trigger descriptor for a relation, 3. checking whether a trigger has quals) may not be a good idea, especially putting 2 along with 1 and 3. I can understand that it might improve performance a bit, because we save on a lookup, but it will hamper readability, since we will be calling this function to get the trigger descriptor even when we don't want shippability information. Can we please separate these functionalities into separate functions? A good example of the misuse is the function pgxc_triggers_getdesc(), which is a misnomer: it gets the trigger descriptor only if the triggers are unshippable, and also returns this shippability status. Please add prologues to the functions pgxc_form_trigger_tuple() and pgxc_get_trigger_tuple() to explain what these functions do and where they can be used. -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
From: Ashutosh B. <ash...@en...> - 2013-04-04 14:18:09
|
Hi Amit, Thanks for creating the branch and the commits. I will give my comments on each of the commits in separate mail. I am starting with 1dc081ebe097e63009bab219231e55679db6fae0. Is that the correct one? On Thu, Apr 4, 2013 at 12:26 PM, Amit Khandekar < ami...@en...> wrote: > > > > On 3 April 2013 15:10, Ashutosh Bapat <ash...@en...>wrote: > >> Hi Amit, >> Given the magnitude of the change, I think we should break this work into >> smaller self-sufficient patches and commit them (either on master or in a >> separate branch for trigger work). This will allow us to review and commit >> small amount of work and set it aside, rather than going over everything in >> every round. >> > > I have created a new branch "rowtriggers" in > postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc, where I > have dumped incremental changes. > > >> >> On Wed, Apr 3, 2013 at 10:46 AM, Amit Khandekar < >> ami...@en...> wrote: >> >>> >>> >>> >>> On 26 March 2013 15:53, Ashutosh Bapat <ash...@en...>wrote: >>> >>>> >>>> >>>> On Tue, Mar 26, 2013 at 8:56 AM, Amit Khandekar < >>>> ami...@en...> wrote: >>>> >>>>> >>>>> >>>>> On 4 March 2013 11:11, Amit Khandekar <ami...@en... >>>>> > wrote: >>>>> >>>>>> On 1 March 2013 13:53, Nikhil Sontakke <ni...@st...> wrote: >>>>>> >> >>>>>> >> Issue: Whether we should fetch the whole from the datanode (OLD >>>>>> row) and not >>>>>> >> just ctid and node_id and required columns and store it at the >>>>>> coordinator >>>>>> >> for the processing OR whether we should fetch each row (OLD and >>>>>> NEW >>>>>> >> variants) while processing each row. >>>>>> >> >>>>>> >> Both of them have performance impacts - the first one has disk >>>>>> impact for >>>>>> >> large number of rows whereas the second has network impact for >>>>>> querying >>>>>> >> rows. Is it possible to do some analytical assessment as to which >>>>>> of them >>>>>> >> would be better? If you can come up with something concrete (may >>>>>> be numbers >>>>>> >> or formulae) we will be able to judge better as to which one to >>>>>> pick up. >>>>>> >>>>>> Will check if we can come up with some sensible analysis or figures. >>>>>> >>>>>> >>>>> I have done some analysis on both of these approaches here: >>>>> >>>>> https://docs.google.com/document/d/10QPPq_go_wHqKqhmOFXjJAokfdLR8OaUyZVNDu47GWk/edit?usp=sharing >>>>> >>>>> In practical terms, we anyways would need to implement (B). The reason >>>>> is because when the trigger has conditional execution(WHEN clause) we >>>>> *have* to fetch the rows beforehand, so there is no point in fetching all >>>>> of them again at the end of the statement when we already have them >>>>> locally. So may be it would be too ambitious to have have both >>>>> implementations, at least for this release. >>>>> >>>>> >>>> I agree here. We can certainly optimize for various cases later, but we >>>> should have something which would give all the functionality (albeit at a >>>> lower performance for now). >>>> >>>> >>>>> So I am focussing on (B) right now. We have two options: >>>>> >>>>> 1. Store all rows in palloced memory, and save the HeapTuple pointers >>>>> in the trigger queue, and directly access the OLD and NEW rows using these >>>>> pointers when needed. Here we will have no control over how much memory we >>>>> should use for the old and new records, and this might even hamper system >>>>> performance, let alone XC performance. >>>>> 2. Other option is to use tuplestore. Here, we need to store the >>>>> positions of the records in the tuplestore. 
So for a particular tigger >>>>> event, fetch by the position. From what I understand, tuplestore can be >>>>> advanced only sequentially in either direction. So when the read pointer is >>>>> at position 6 and we need to fetch a record at position 10, we need to call >>>>> tuplestore_advance() 4 times, and this call involves palloc/pfree overhead >>>>> because it calls tuplestore_gettuple(). But the trigger records are not >>>>> distributed so randomly. In fact a set of trigger events for a particular >>>>> event id are accessed in the same order as the order in which they are >>>>> queued. So for a particular event id, only the first access call will >>>>> require random access. tuplestore supports multiple read pointers, so may >>>>> be we can make use of that to access the first record using the closest >>>>> read pointer. >>>>> >>>>> >>>> Using palloc will be a problem if the size of data fetched is more that >>>> what could fit in memory. Also pallocing frequently is going to be >>>> performance problem. Let's see how does the tuple store approach go. >>>> >>> >>> While I am working on the AFTER ROW optimization, here's a patch that >>> has only BEFORE ROW trigger support, so that it can get you started with >>> first round of review. The regression is not analyzed fully yet. Besides >>> the AR trigger related changes, I have also stripped the logic of whether >>> to run the trigger on datanode or coordinator; this logic depends on both >>> before and after triggers. >>> >>> >>>> >>>> >>>>> >>>>> >>>>>> >> >>>>>> > >>>>>> > Or we can consider a hybrid approach of getting the rows in batches >>>>>> of >>>>>> > 1000 or so if possible as well. That ways they get into coordinator >>>>>> > memory in one shot and can be processed in batches. Obviously this >>>>>> > should be considered if it's not going to be a complicated >>>>>> > implementation. >>>>>> >>>>>> It just occurred to me that it would not be that hard to optimize the >>>>>> row-fetching-by-ctid as shown below: >>>>>> 1. When it is time to fire the queued triggers at the >>>>>> statement/transaction end, initialize cursors - one cursor per >>>>>> datanode - which would do: SELECT remote_heap_fetch(table_name, >>>>>> '<ctidlist>'); We can form this ctidlist out of the trigger even list. >>>>>> 2. For each trigger event entry in the trigger queue, FETCH NEXT using >>>>>> the appropriate cursor name according to the datanode id to which the >>>>>> trigger entry belongs. >>>>>> >>>>>> > >>>>>> >>> Currently we fetch all attributes in the SELECT subplans. I have >>>>>> >>> created another patch to fetch only the required attribtues, but >>>>>> have >>>>>> >>> not merged that into this patch. >>>>>> > >>>>>> > Do we have other places where we unnecessary fetch all attributes? >>>>>> > ISTM, this should be fixed as a performance improvement first ahead >>>>>> of >>>>>> > everything else. >>>>>> >>>>>> I believe DML subplan is the only remaining place where we fetch all >>>>>> attributes. And yes, this is a must-have for triggers, otherwise, the >>>>>> other optimizations would be of no use. >>>>>> >>>>>> > >>>>>> >>> 2. One important TODO for BEFORE trigger is this: Just before >>>>>> >>> invoking the trigger functions, in PG, the tuple is row-locked >>>>>> >>> (exclusive) by GetTupleTrigger() and the locked version is fetched >>>>>> >>> from the table. So it is made sure that while all the triggers for >>>>>> >>> that table are executed, no one can update that particular row. >>>>>> >>> In the patch, we haven't locked the row. 
We need to lock it >>>>>> either by >>>>>> >>> executing : >>>>>> >>> 1. SELECT * from tab1 where ctid = <ctid_val> FOR UPDATE, and >>>>>> then >>>>>> >>> use the returned ROW as the OLD row. >>>>>> >>> OR >>>>>> >>> 2. The UPDATE subplan itself should have SELECT for UPDATE so >>>>>> that >>>>>> >>> the row is already locked, and we don't have to lock it again. >>>>>> >>> #2 is simple though it might cause some amount of longer waits in >>>>>> general. >>>>>> >>> Using #1, though the locks would be acquired only when the >>>>>> particular >>>>>> >>> row is updated, the locks would be released only after transaction >>>>>> >>> end, so #1 might not be worth implementing. >>>>>> >>> Also #1 requires another explicit remote fetch for the >>>>>> >>> lock-and-get-latest-version operation. >>>>>> >>> I am more inclined towards #2. >>>>>> >>> >>>>>> >> The option #2 however, has problem of locking too many rows if >>>>>> there are >>>>>> >> coordinator quals in the subplans IOW the number of rows finally >>>>>> updated are >>>>>> >> lesser than the number of rows fetched from the datanode. It can >>>>>> cause >>>>>> >> unwanted deadlocks. Unless there is a way to release these extra >>>>>> locks, I am >>>>>> >> afraid this option will be a problem. >>>>>> >>>>>> True. Regardless of anything else - whether it is deadlocks or longer >>>>>> waits, we should not lock rows that are not to be updated. >>>>>> >>>>>> There is a more general row-locking issue that we need to solve first >>>>>> : 3606317. I anticipate that solving this will solve the trigger >>>>>> specific lock issue. So for triggers, this is a must-have, and I am >>>>>> going to solve this issue as part of this bug 3606317. >>>>>> >>>>>> >> >>>>>> > Deadlocks? ISTM, we can get more lock waits because of this but I do >>>>>> > not see deadlock scenarios.. >>>>>> > >>>>>> > With the FQS shipping work being done by Ashutosh, will we also ship >>>>>> > major chunks of subplans to the datanodes? If yes, then row locking >>>>>> > will only involve required tuples (hopefully) from the coordinator's >>>>>> > point of view. >>>>>> > >>>>>> > Also, something radical is can be invent a new type of FOR [NODE] >>>>>> > UPDATE type lock to minimize the impact of such locking of rows on >>>>>> > datanodes? >>>>>> > >>>>>> > Regards, >>>>>> > Nikhils >>>>>> > >>>>>> >>> >>>>>> >>> 3. The BEFORE trigger function can change the distribution column >>>>>> >>> itself. We need to add a check at the end of the trigger >>>>>> executions. >>>>>> >>> >>>>>> >> >>>>>> >> Good, you thought about that. Yes we should check it. >>>>>> >> >>>>>> >>> >>>>>> >>> 4. Fetching OLD row for WHEN clause handling. >>>>>> >>> >>>>>> >>> 5. Testing with mix of Shippable and non-shippable ROW triggers >>>>>> >>> >>>>>> >>> 6. Other types of triggers. INSTEAD triggers are anticipated to >>>>>> work >>>>>> >>> without significant changes, but they are yet to be tested. >>>>>> >>> INSERT/DELETE triggers: Most of the infrastructure has been done >>>>>> while >>>>>> >>> implementing UPDATE triggers. But some changes specific to INSERT >>>>>> and >>>>>> >>> DELETE are yet to be done. >>>>>> >>> Deferred triggers to be tested. >>>>>> >>> >>>>>> >>> 7. Regression analysis. There are some new failures. Will post >>>>>> another >>>>>> >>> fair version of the patch after regression analysis and fixing >>>>>> various >>>>>> >>> TODOs. >>>>>> >>> >>>>>> >>> Comments welcome. 
-- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
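The multiple-read-pointer idea above can be illustrated with a small sketch against the stock tuplestore API. This is only an illustration, not code from any posted patch: the XcEventGroup struct and xc_fetch_event_tuple() are hypothetical names, one read pointer is assumed to have been allocated per event id, and backward movement is omitted for brevity.

    #include "postgres.h"
    #include "executor/executor.h"      /* EXEC_FLAG_REWIND */
    #include "utils/tuplestore.h"

    /* Hypothetical per-event-id state: each group owns a read pointer. */
    typedef struct XcEventGroup
    {
        int         read_ptr;       /* from tuplestore_alloc_read_pointer() */
        int         next_pos;       /* absolute position of the next read */
    } XcEventGroup;

    /*
     * Fetch the queued trigger tuple at absolute position 'pos' through the
     * group's own read pointer.  Because events of one event id are consumed
     * in queue order, the advance loop normally runs zero times, so the
     * palloc/pfree overhead inside tuplestore_advance() is paid only on the
     * first, genuinely random access for that event id.
     */
    static bool
    xc_fetch_event_tuple(Tuplestorestate *ts, XcEventGroup *grp,
                         int pos, TupleTableSlot *slot)
    {
        tuplestore_select_read_pointer(ts, grp->read_ptr);

        while (grp->next_pos < pos)
        {
            if (!tuplestore_advance(ts, true))
                return false;        /* ran off the end of the store */
            grp->next_pos++;
        }

        if (!tuplestore_gettupleslot(ts, true, false, slot))
            return false;
        grp->next_pos++;             /* the fetch consumed one entry */
        return true;
    }

Each group's pointer would be set up once, e.g. with grp->read_ptr = tuplestore_alloc_read_pointer(ts, EXEC_FLAG_REWIND), before the queued events are replayed.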
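The batched ctid fetch proposed in the same mail could take roughly the following shape on the coordinator. remote_heap_fetch() is the hypothetical function from the proposal, not an existing API, and the helper below only shows how the per-datanode DECLARE/FETCH pair would be assembled; identifier quoting and ctid-list escaping are deliberately omitted.

    #include "postgres.h"
    #include "lib/stringinfo.h"

    /*
     * Build the cursor declaration for one datanode.  The cursor wraps the
     * proposed remote_heap_fetch() call over all ctids queued for that node;
     * each trigger event is then satisfied with "FETCH NEXT FROM <cursor>"
     * issued on the connection of the datanode the event came from.
     */
    static void
    xc_build_trigger_fetch_query(StringInfo buf, const char *cursor_name,
                                 const char *table_name, const char *ctid_list)
    {
        resetStringInfo(buf);
        appendStringInfo(buf,
                         "DECLARE %s CURSOR FOR "
                         "SELECT remote_heap_fetch('%s', '%s');",
                         cursor_name, table_name, ctid_list);
    }

Since the events of one datanode are queued and replayed in the same order, each FETCH NEXT lines up with the next ctid in that node's list, which is what makes a single batched cursor per datanode sufficient.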
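Point 3 of the TODO list (a BEFORE ROW trigger rewriting the distribution column) could be guarded by a check of roughly this shape, run once after the last BEFORE ROW trigger has returned. The function name and error message are placeholders; slot_getattr() and datumIsEqual() are stock PostgreSQL primitives, and typbyval/typlen are assumed to come from the distribution column's pg_attribute entry.

    #include "postgres.h"
    #include "access/attnum.h"          /* AttrNumber */
    #include "executor/tuptable.h"      /* TupleTableSlot, slot_getattr() */
    #include "utils/datum.h"            /* datumIsEqual() */

    /*
     * Hypothetical guard: raise an error if the trigger-modified tuple no
     * longer carries the same distribution-column value as the original
     * tuple, since that would silently move the row to another datanode.
     */
    static void
    xc_check_distribution_column(TupleTableSlot *oldslot,
                                 TupleTableSlot *newslot,
                                 AttrNumber distcol,
                                 bool typbyval, int16 typlen)
    {
        bool        oldnull;
        bool        newnull;
        Datum       oldval = slot_getattr(oldslot, distcol, &oldnull);
        Datum       newval = slot_getattr(newslot, distcol, &newnull);

        if (oldnull != newnull ||
            (!oldnull && !datumIsEqual(oldval, newval, typbyval, typlen)))
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("BEFORE ROW trigger cannot change the distribution column")));
    }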
From: Michael P. <mic...@gm...> - 2013-04-04 13:45:11
|
On Thu, Apr 4, 2013 at 7:23 PM, Pavan Deolasee <pav...@gm...> wrote:
> Right. And I think we have already made a mistake by merging all of 9.2.3. We should revert that back, but I don't know if there is an easy way to do so :-(
There is: delete the master branch and recreate it cleanly down to the point just before the merge. -- Michael |
From: Michael P. <mic...@gm...> - 2013-04-04 13:44:02
|
OK guys, you just put the XC master out-of-sync with PG master: http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=52a8aea4290851e5d40c3bb4e3237ad8aeceaf68
On Thu, Apr 4, 2013 at 7:01 PM, Ashutosh Bapat <ash...@en...> wrote:
> On Thu, Apr 4, 2013 at 3:20 PM, Ahsan Hadi <ahs...@en...> wrote:
>> Hi Pavan, Thanks for raising this. Just to make sure I understand the problem: the next release of postgres-xc will be 1.1. The 1.1 release will be based on PG 9.2,
> and that we should merge from the master branch of PostgreSQL up to the point where REL9_2 is cut.
Correcting you here: you will have to merge the master branch up to the commit which is the intersection of master and REL9_3_STABLE, that intersection commit being determined by: git merge-base master REL9_3_STABLE.
> If I understand correctly, what we have done right now is that we have pulled the code from a stable branch (thus pulling the changes of 9.2.3, which are not in the master branch and may not be part of the 9.3 release)
Merging the code of PG's 9.2 stable branch into the XC master branch would be a huge mistake: it would make XC master out-of-sync with PG master.
>> when we create the branch for 1.1 we will continue to do further development for the next release and merges in the master branch.
> from the PostgreSQL master branch, and not any REL_ or stable branches.
I suppose that XC 1.1 will be based on PG 9.2, no? In this case, *FIRST* create the stable branch 1.1 when you stop development on the XC master branch (normally at beta2): git branch REL1_1_STABLE master. *Then* merge the commits of the PG 9.2 stable branch into REL1_1_STABLE. Doing this operation in reverse, as I think has been done here, is simply crazy, and it blocks any opportunity to update the code with future PG releases. Resolving it is possible, of course: simply delete the existing master branch and recreate it down to the commit before the merge. Can you guys do it without breaking the repository more??? Or not? -- Michael |
From: Pavan D. <pav...@gm...> - 2013-04-04 10:23:28
|
On Thu, Apr 4, 2013 at 3:31 PM, Ashutosh Bapat <ash...@en...> wrote:
> On Thu, Apr 4, 2013 at 3:20 PM, Ahsan Hadi <ahs...@en...> wrote:
>> Hi Pavan, Thanks for raising this. Just to make sure I understand the problem: the next release of postgres-xc will be 1.1. The 1.1 release will be based on PG 9.2,
> and that we should merge from the master branch of PostgreSQL up to the point where REL9_2 is cut. If I understand correctly, what we have done right now is that we have pulled the code from a stable branch (thus pulling the changes of 9.2.3, which are not in the master branch and may not be part of the 9.3 release)
>> when we create the branch for 1.1 we will continue to do further development for the next release and merges in the master branch.
> from the PostgreSQL master branch, and not any REL_ or stable branches.
>> Any future PG 9.2 point releases will be committed in the 1.1 branch going forward. The next major release after 1.1 will be based on PG 9.3.
> Is my annotation right, Pavan?
Right. And I think we have already made a mistake by merging all of 9.2.3. We should revert that back, but I don't know if there is an easy way to do so :-( Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee |
From: Ashutosh B. <ash...@en...> - 2013-04-04 10:01:55
|
On Thu, Apr 4, 2013 at 3:20 PM, Ahsan Hadi <ahs...@en...> wrote:
> Hi Pavan, Thanks for raising this. Just to make sure I understand the problem: the next release of postgres-xc will be 1.1. The 1.1 release will be based on PG 9.2,
and that we should merge from the master branch of PostgreSQL up to the point where REL9_2 is cut. If I understand correctly, what we have done right now is that we have pulled the code from a stable branch (thus pulling the changes of 9.2.3, which are not in the master branch and may not be part of the 9.3 release)
> when we create the branch for 1.1 we will continue to do further development for the next release and merges in the master branch.
from the PostgreSQL master branch, and not any REL_ or stable branches.
> Any future PG 9.2 point releases will be committed in the 1.1 branch going forward. The next major release after 1.1 will be based on PG 9.3.
Is my annotation right, Pavan?
> Thanks, Ahsan
> On Thu, Apr 4, 2013 at 2:25 PM, Pavan Deolasee <pav...@gm...> wrote:
>> Hello, While I am sure you must be doing the right thing, since it struck me, I thought I should raise it here. I saw a commit message whose body is "Merge branch 'REL_9_2_3' into master". It's kind of a red flag to me. I hope we are *not* merging any point releases of PostgreSQL into the master branch of Postgres-XC. In the past, I have spent considerable time fixing similar mistakes, and we should not be repeating that.
>> To explain this point further: we should always be merging only the master branch of PostgreSQL. Later, if we make a Postgres-XC release based on a stable release of PostgreSQL, say 9.2, we should branch off the Postgres-XC repository at the same commit point as PostgreSQL did and make a release. Any bug fixes on that stable release will then go only into that branch, while the main development continues on the master branch. If we mistakenly merge a PostgreSQL point release such as 9.2.3, then we will have commits in the Postgres-XC master branch which will later conflict terribly with the master branch commits of PostgreSQL, since the same bug may have been fixed in PostgreSQL's master branch too.
>> Sorry if this is all noise and you are doing the right thing. But then the commit message should look different. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee
> -- Ahsan Hadi Snr Director Product Development EnterpriseDB Corporation The Enterprise Postgres Company Phone: +92-51-8358874 Mobile: +92-333-5162114 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb
-- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
From: Ahsan H. <ahs...@en...> - 2013-04-04 09:50:16
|
Hi Pavan, Thanks for raising this. Just to make sure I understand the problem: the next release of postgres-xc will be 1.1. The 1.1 release will be based on PG 9.2, when we create the branch for 1.1 we will continue to do further development for the next release and merges in the master branch. Any future PG 9.2 point releases will be committed in the 1.1 branch going forward. The next major release after 1.1 will be based on PG 9.3. Thanks, Ahsan
On Thu, Apr 4, 2013 at 2:25 PM, Pavan Deolasee <pav...@gm...> wrote:
> Hello, While I am sure you must be doing the right thing, since it struck me, I thought I should raise it here. I saw a commit message whose body is "Merge branch 'REL_9_2_3' into master". It's kind of a red flag to me. I hope we are *not* merging any point releases of PostgreSQL into the master branch of Postgres-XC. In the past, I have spent considerable time fixing similar mistakes, and we should not be repeating that.
> To explain this point further: we should always be merging only the master branch of PostgreSQL. Later, if we make a Postgres-XC release based on a stable release of PostgreSQL, say 9.2, we should branch off the Postgres-XC repository at the same commit point as PostgreSQL did and make a release. Any bug fixes on that stable release will then go only into that branch, while the main development continues on the master branch. If we mistakenly merge a PostgreSQL point release such as 9.2.3, then we will have commits in the Postgres-XC master branch which will later conflict terribly with the master branch commits of PostgreSQL, since the same bug may have been fixed in PostgreSQL's master branch too.
> Sorry if this is all noise and you are doing the right thing. But then the commit message should look different. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee
-- Ahsan Hadi Snr Director Product Development EnterpriseDB Corporation The Enterprise Postgres Company Phone: +92-51-8358874 Mobile: +92-333-5162114 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb |