From: Abbas B. <abb...@en...> - 2013-03-05 08:45:24
|
The attached patch changes the name of the option to --include-nodes.

On Mon, Mar 4, 2013 at 2:41 PM, Abbas Butt <abb...@en...> wrote:
>
> On Mon, Mar 4, 2013 at 2:09 PM, Ashutosh Bapat <ash...@en...> wrote:
>>
>> On Mon, Mar 4, 2013 at 1:51 PM, Abbas Butt <abb...@en...> wrote:
>>>
>>> What I had in mind was to have pg_dump, when run with include-node,
>>> emit CREATE NODE / CREATE NODE GROUP commands only and nothing else.
>>> Those commands will be used to create the existing nodes/groups on the
>>> new coordinator to be added. So it does make sense to use this option
>>> independently; in fact it is supposed to be used independently.
>>
>> Ok, got it. But then include-node is really a misnomer. We should use
>> --dump-nodes or something like that.
>
> In that case we can use include-nodes here.
>
>>>> On Mon, Mar 4, 2013 at 11:21 AM, Ashutosh Bapat <ash...@en...> wrote:
>>>>
>>>> Dumping the TO NODE clause only makes sense if we dump CREATE NODE /
>>>> CREATE NODE GROUP. Dumping CREATE NODE / CREATE NODE GROUP may make
>>>> sense independently, but might be useless without dumping the TO NODE
>>>> clause.
>>>>
>>>> BTW, OTOH, dumping the CREATE NODE / CREATE NODE GROUP clause wouldn't
>>>> create the nodes on all the coordinators,
>>>
>>> All the coordinators already have the nodes information.
>>>
>>>> but only on the coordinator where the dump will be restored. That's
>>>> another thing you will need to consider, OR are you going to fix that
>>>> as well?
>>>
>>> As a first step I am only listing the manual steps required to add a
>>> new node; that might say run this command on all the existing
>>> coordinators by connecting to them one by one manually. We can decide
>>> to automate these steps later.
>>
>> ok

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of
the individual or entity to whom it is addressed. This message
contains information from EnterpriseDB Corporation that may be
privileged, confidential, or exempt from disclosure under applicable
law. If you are not the intended recipient or authorized to receive
this for the intended recipient, any use, dissemination, distribution,
retention, archiving, or copying of this communication is strictly
prohibited. If you have received this e-mail in error, please notify
the sender immediately by reply e-mail and delete this message. |
From: Koichi S. <koi...@gm...> - 2013-03-04 16:43:21
|
Thanks Nikhils. It looks nice. Let me take a bit more detailed look before
committing it.

Regards;
----------
Koichi Suzuki

2013/3/4 Nikhil Sontakke <ni...@st...>:
> Hi,
>
> PFA patch which fixes a locking issue in the GTM_BeginTransactionMulti
> function. If txn_count is more than 1, then gt_TransArrayLock will be
> attempted to be locked multiple times inside the loop. The behavior is
> really undefined for pthreads and we can see weird hangs here and
> there because of it. I guess we got away with this till date because
> it was mostly being called with the value 1 for txn_count.
>
> Regards,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service
>
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Koichi S. <koi...@gm...> - 2013-03-04 16:40:26
|
Thanks Nikhil. Recent global xmin needs a bit of improvement. The lock
should be fixed also. I can look into it after I have a bit more progress
on the pgxc_ctl C version code. You're welcome to write the fix.

Regards;
----------
Koichi Suzuki

2013/3/4 Nikhil Sontakke <ni...@st...>:
> Hi,
>
> If I look at the GTM_GetTransactionSnapshot() function, it's taking a
> READ lock on GTMTransactions.gt_TransArrayLock. After taking that lock
> it's modifying fields like
>
> GTMTransactions.gt_recent_global_xmin
> GTMTransactions.gt_transactions_array
>
> Hardly READ type of behavior. This behavior seems particularly dangerous
> if multiple threads modify the gt_recent_global_xmin value above while
> holding read locks.
>
> Regards,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service |
From: Nikhil S. <ni...@st...> - 2013-03-04 13:20:30
|
Hi,

If I look at the GTM_GetTransactionSnapshot() function, it's taking a READ
lock on GTMTransactions.gt_TransArrayLock. After taking that lock it's
modifying fields like

GTMTransactions.gt_recent_global_xmin
GTMTransactions.gt_transactions_array

Hardly READ type of behavior. This behavior seems particularly dangerous
if multiple threads modify the gt_recent_global_xmin value above while
holding read locks.

Regards,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service |
From: Amit K. <ami...@en...> - 2013-03-04 09:59:23
|
On 4 March 2013 14:44, Abbas Butt <abb...@en...> wrote:
>
> On Mon, Mar 4, 2013 at 2:00 PM, Amit Khandekar <ami...@en...> wrote:
>>
>> On 1 March 2013 18:45, Abbas Butt <abb...@en...> wrote:
>> >
>> > On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar <ami...@en...> wrote:
>> >>
>> >> On 19 February 2013 12:37, Abbas Butt <abb...@en...> wrote:
>> >> >
>> >> > Hi,
>> >> > Attached please find a patch that locks the cluster so that a dump
>> >> > can be taken to be restored on the new node to be added.
>> >> >
>> >> > To lock the cluster the patch adds a new GUC parameter called
>> >> > xc_lock_for_backup; however, its status is maintained by the
>> >> > pooler. The reason is that the default behavior of XC is to
>> >> > release connections as soon as a command is done, and it uses the
>> >> > PersistentConnections GUC to control that behavior. We in this
>> >> > case however need a status that is independent of the setting of
>> >> > PersistentConnections.
>> >> >
>> >> > Assume we have a two coordinator cluster; the patch provides this
>> >> > behavior:
>> >> >
>> >> > Case 1: set and show
>> >> > ====================
>> >> > psql test -p 5432
>> >> > set xc_lock_for_backup=yes;
>> >> > show xc_lock_for_backup;
>> >> >  xc_lock_for_backup
>> >> > --------------------
>> >> >  yes
>> >> > (1 row)
>> >> >
>> >> > Case 2: set from one client, show from another
>> >> > ==============================================
>> >> > psql test -p 5432
>> >> > set xc_lock_for_backup=yes;
>> >> > (From another tab)
>> >> > psql test -p 5432
>> >> > show xc_lock_for_backup;
>> >> >  xc_lock_for_backup
>> >> > --------------------
>> >> >  yes
>> >> > (1 row)
>> >> >
>> >> > Case 3: set from one, quit it, run again and show
>> >> > =================================================
>> >> > psql test -p 5432
>> >> > set xc_lock_for_backup=yes;
>> >> > \q
>> >> > psql test -p 5432
>> >> > show xc_lock_for_backup;
>> >> >  xc_lock_for_backup
>> >> > --------------------
>> >> >  yes
>> >> > (1 row)
>> >> >
>> >> > Case 4: set on one coordinator, show from the other
>> >> > ===================================================
>> >> > psql test -p 5432
>> >> > set xc_lock_for_backup=yes;
>> >> > (From another tab)
>> >> > psql test -p 5433
>> >> > show xc_lock_for_backup;
>> >> >  xc_lock_for_backup
>> >> > --------------------
>> >> >  yes
>> >> > (1 row)
>> >> >
>> >> > pg_dump and pg_dumpall seem to work fine after locking the cluster
>> >> > for backup, but I would test these utilities in detail next.
>> >> >
>> >> > Also I have yet to look in detail at whether
>> >> > standard_ProcessUtility is the only place that updates the portion
>> >> > of the catalog that is dumped. There may be some other places too
>> >> > that need to be blocked for catalog updates.
>> >> >
>> >> > The patch adds no extra warnings and regression shows no extra
>> >> > failures.
>> >> >
>> >> > Comments are welcome.
>> >>
>> >> Abbas wrote on another thread:
>> >>
>> >> > Amit wrote on another thread:
>> >> >> I haven't given a thought to the earlier patch you sent for the
>> >> >> cluster lock implementation; maybe we can discuss this on that
>> >> >> thread, but just a quick question:
>> >> >>
>> >> >> Does the cluster-lock command wait for the ongoing DDL commands
>> >> >> to finish? If not, we have problems. The subsequent pg_dump would
>> >> >> not contain objects created by these particular DDLs.
>> >> >
>> >> > Suppose you have a two coordinator cluster. Assume one client
>> >> > connected to each. Suppose one client issues a lock cluster
>> >> > command and the other issues a DDL. Is this what you mean by an
>> >> > ongoing DDL? If true then the answer to your question is Yes.
>> >> >
>> >> > Suppose you have a prepared transaction that has a DDL in it;
>> >> > again, if this can be considered an ongoing DDL, then again the
>> >> > answer to your question is Yes.
>> >> >
>> >> > Suppose you have a two coordinator cluster. Assume one client
>> >> > connected to each. One client starts a transaction and issues a
>> >> > DDL, the second client issues a lock cluster command, the first
>> >> > commits the transaction. If this is an ongoing DDL, then the
>> >> > answer to your question is No.
>> >>
>> >> Yes, this last scenario is what I meant: a DDL has been executed on
>> >> nodes, but not committed, when the cluster lock command is run and
>> >> then pg_dump immediately starts its transaction before the DDL is
>> >> committed. Here pg_dump does not see the new objects that would be
>> >> created.
>>
>> Come to think of it, there would always be a small interval where the
>> concurrency issue would remain.
>
> Can you please give an example to clarify.
>
>> If we were to totally get rid of this concurrency issue, we need to
>> have some kind of lock. For e.g. the object access hook function will
>> have a shared access lock on this object (maybe on pg_depend, because
>> it is always used for object creation/drop ??) and the lock-cluster
>> command will try to get an exclusive lock on the same. This of course
>> should be done after we are sure the object access hook is called on
>> all types of objects.

For e.g. suppose we come up with a solution where just before transaction
commit (i.e. in a transaction callback) we check if the cluster is locked
and there are objects created/dropped in the current transaction, and then
commit if the cluster is not locked. But between the instant where we do
the lock check and the instant where we actually commit, during this time
gap there can be a cluster lock issued, followed immediately by pg_dump.
For pg_dump the new objects created in that transaction will not be
visible. So by doing the cluster-lock check at the transaction callback we
have reduced the time gap significantly, although it is not completely
gone. But if the lock-cluster command and the object creation functions
(whether it is the object access hook or process_standardUtility) have a
lock on a common object, this concurrency issue might be solved. As of
now, I see pg_depend as one common object which is *always* accessed for
object creation/drop.

>> >> I myself am not sure how we would prevent this from happening. There
>> >> are two callback hooks that might be worth considering though:
>> >> 1. Transaction End callback (CallXactCallbacks)
>> >> 2. Object creation/drop hook (InvokeObjectAccessHook)
>> >>
>> >> Suppose we create an object creation/drop hook function that would:
>> >> 1. store the current transaction id in a global objects_created
>> >> list if the cluster is not locked,
>> >> 2. or else, if the cluster is locked, this hook would ereport()
>> >> saying "cannot create catalog objects in this mode".
>> >>
>> >> And then during transaction commit, a new transaction callback hook
>> >> will:
>> >> 1. Check the above objects_created list to see if the current
>> >> transaction has any objects created/dropped.
>> >> 2. If found, and if the cluster-lock is on, it will again ereport()
>> >> saying "cannot create catalog objects in this mode".
>> >>
>> >> Thinking more on the object creation hook, we can even consider this
>> >> as a substitute for checking the cluster-lock status in
>> >> standardProcessUtility(). But I am not sure whether this hook does
>> >> get called on each of the catalog objects. At least the code
>> >> comments say it does.
>> >
>> > These are very good ideas. Thanks, I will work on those lines and
>> > will report back.
>> >
>> >> > But it's a matter of deciding which camp we are going to put
>> >> > COMMIT in, the allow camp or the deny camp. I decided to put it in
>> >> > the allow camp, because I have not yet written any code to detect
>> >> > whether a transaction being committed has a DDL in it or not, and
>> >> > stopping all transactions from committing looks too restrictive to
>> >> > me.
>> >> >
>> >> > Do you have some other meaning of an ongoing DDL? |
From: Abbas B. <abb...@en...> - 2013-03-04 09:41:13
|
On Mon, Mar 4, 2013 at 2:09 PM, Ashutosh Bapat < ash...@en...> wrote: > > > On Mon, Mar 4, 2013 at 1:51 PM, Abbas Butt <abb...@en...>wrote: > >> What I had in mind was to have pg_dump, when run with include-node, emit >> CREATE NODE/ CREATE NODE GROUP commands only and nothing else. Those >> commands will be used to create existing nodes/groups on the new >> coordinator to be added. So it does make sense to use this option >> independently, in fact it is supposed to be used independently. >> >> > Ok, got it. But then include-node is really a misnomer. We should use > --dump-nodes or something like that. > In that case we can use include-nodes here. > > >> >> On Mon, Mar 4, 2013 at 11:21 AM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> Dumping TO NODE clause only makes sense if we dump CREATE NODE/ CREATE >>> NODE GROUP. Dumping CREATE NODE/CREATE NODE GROUP may make sense >>> independently, but might be useless without dumping TO NODE clause. >>> >>> BTW, OTOH, dumping CREATE NODE/CREATE NODE GROUP clause wouldn't create >>> the nodes on all the coordinators, >> >> >> All the coordinators already have the nodes information. >> >> >>> but only the coordinator where dump will be restored. That's another >>> thing you will need to consider OR are you going to fix that as well? >> >> >> As a first step I am only listing the manual steps required to add a new >> node, that might say run this command on all the existing coordinators by >> connecting to them one by one manually. We can decide to automate these >> steps later. >> >> > ok > > > >> >> >>> >>> >>> On Mon, Mar 4, 2013 at 11:41 AM, Abbas Butt <abb...@en... >>> > wrote: >>> >>>> I was thinking of using include-nodes to dump CREATE NODE / CREATE NODE >>>> GROUP, that is required as one of the missing links in adding a new node. >>>> How do you think about that? 
>>>> >>>> >>>> On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> Hi Abbas, >>>>> Please take a look at >>>>> http://www.postgresql.org/docs/9.2/static/app-pgdump.html, which >>>>> gives all the command line options for pg_dump. instead of >>>>> include-to-node-clause, just include-nodes would suffice, I guess. >>>>> >>>>> >>>>> On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> PFA a updated patch that provides a command line argument called >>>>>> --include-to-node-clause to let pg_dump know that the created dump is >>>>>> supposed to emit TO NODE clause in the CREATE TABLE command. >>>>>> If the argument is provided while taking the dump from a datanode, it >>>>>> does not show TO NODE clause in the dump since the catalog table is empty >>>>>> in this case. >>>>>> The documentation of pg_dump is updated accordingly. >>>>>> The rest of the functionality stays the same as before. >>>>>> >>>>>> >>>>>> On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>>> I think we should always dump DISTRIBUTE BY. >>>>>>> >>>>>>> PG does not stop dumping (or provide an option to do so) newer >>>>>>> syntax so that the dump will work on older versions. On similar lines, an >>>>>>> XC dump can not be used against PG without modification (removing >>>>>>> DISTRIBUTE BY). There can be more serious problems like exceeding table >>>>>>> size limits if an XC dump is tried to be restored in PG. >>>>>>> >>>>>>> As to TO NODE clause, I agree, that one can restore the dump on a >>>>>>> cluster with different configuration, so giving an option to dump TO NODE >>>>>>> clause will help. 
>>>>>>> >>>>>>> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >>>>>>> mic...@gm...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt < >>>>>>>> abb...@en...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>>>>>>> mic...@gm...> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>>>>>>> abb...@en...> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>>>>>>> mic...@gm...> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> PFA a patch to fix pg_dump to generate TO NODE clause in the >>>>>>>>>>>>> dump. >>>>>>>>>>>>> This is required because otherwise all tables get created on >>>>>>>>>>>>> all nodes after a dump-restore cycle. >>>>>>>>>>>>> >>>>>>>>>>>> Not sure this is good if you take a dump of an XC cluster to >>>>>>>>>>>> restore that to a vanilla Postgres cluster. >>>>>>>>>>>> Why not adding a new option that would control the generation >>>>>>>>>>>> of this clause instead of forcing it? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I think you can use the pg_dump that comes with vanilla PG to do >>>>>>>>>>> that, can't you? But I am open to adding a control option if every body >>>>>>>>>>> thinks so. >>>>>>>>>>> >>>>>>>>>> Sure you can, this is just to simplify the life of users a >>>>>>>>>> maximum by not having multiple pg_dump binaries in their serves. >>>>>>>>>> Saying that, I think that there is no option to choose if >>>>>>>>>> DISTRIBUTE BY is printed in the dump or not... >>>>>>>>>> >>>>>>>>> >>>>>>>>> Yah if we choose to have an option we will put both DISTRIBUTE BY >>>>>>>>> and TO NODE under it. >>>>>>>>> >>>>>>>> Why not an option for DISTRIBUTE BY, and another for TO NODE? 
>>>>>>>> This would bring more flexibility to the way dumps are generated. >>>>>>>> -- >>>>>>>> Michael >>>>>>>> >>>>>>>> >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> Everyone hates slow websites. So do we. >>>>>>>> Make your web apps faster with AppDynamics >>>>>>>> Download AppDynamics Lite for free today: >>>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>>> _______________________________________________ >>>>>>>> Postgres-xc-developers mailing list >>>>>>>> Pos...@li... >>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Best Wishes, >>>>>>> Ashutosh Bapat >>>>>>> EntepriseDB Corporation >>>>>>> The Enterprise Postgres Company >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> -- >>>>>> Abbas >>>>>> Architect >>>>>> EnterpriseDB Corporation >>>>>> The Enterprise PostgreSQL Company >>>>>> >>>>>> Phone: 92-334-5100153 >>>>>> >>>>>> Website: www.enterprisedb.com >>>>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>>> >>>>>> This e-mail message (and any attachment) is intended for the use of >>>>>> the individual or entity to whom it is addressed. This message >>>>>> contains information from EnterpriseDB Corporation that may be >>>>>> privileged, confidential, or exempt from disclosure under applicable >>>>>> law. If you are not the intended recipient or authorized to receive >>>>>> this for the intended recipient, any use, dissemination, distribution, >>>>>> retention, archiving, or copying of this communication is strictly >>>>>> prohibited. If you have received this e-mail in error, please notify >>>>>> the sender immediately by reply e-mail and delete this message. 
-- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company |
From: Nikhil S. <ni...@st...> - 2013-03-04 09:37:24
|
Hi, PFA a patch which fixes a locking issue in the GTM_BeginTransactionMulti function. If txn_count is more than 1, the code attempts to lock gt_TransArrayLock multiple times inside the loop. That behavior is undefined for pthreads, and it can cause intermittent hangs. I guess we have gotten away with this so far because the function was mostly being called with a txn_count of 1. Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Abbas B. <abb...@en...> - 2013-03-04 09:14:20
|
On Mon, Mar 4, 2013 at 2:00 PM, Amit Khandekar < ami...@en...> wrote: > On 1 March 2013 18:45, Abbas Butt <abb...@en...> wrote: > > > > > > On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar > > <ami...@en...> wrote: > >> > >> On 19 February 2013 12:37, Abbas Butt <abb...@en...> > wrote: > >> > > >> > Hi, > >> > Attached please find a patch that locks the cluster so that dump can > be > >> > taken to be restored on the new node to be added. > >> > > >> > To lock the cluster the patch adds a new GUC parameter called > >> > xc_lock_for_backup, however its status is maintained by the pooler. > The > >> > reason is that the default behavior of XC is to release connections as > >> > soon > >> > as a command is done and it uses PersistentConnections GUC to control > >> > the > >> > behavior. We in this case however need a status that is independent of > >> > the > >> > setting of PersistentConnections. > >> > > >> > Assume we have two coordinator cluster, the patch provides this > >> > behavior: > >> > > >> > Case 1: set and show > >> > ==================== > >> > psql test -p 5432 > >> > set xc_lock_for_backup=yes; > >> > show xc_lock_for_backup; > >> > xc_lock_for_backup > >> > -------------------- > >> > yes > >> > (1 row) > >> > > >> > Case 2: set from one client show from other > >> > ================================== > >> > psql test -p 5432 > >> > set xc_lock_for_backup=yes; > >> > (From another tab) > >> > psql test -p 5432 > >> > show xc_lock_for_backup; > >> > xc_lock_for_backup > >> > -------------------- > >> > yes > >> > (1 row) > >> > > >> > Case 3: set from one, quit it, run again and show > >> > ====================================== > >> > psql test -p 5432 > >> > set xc_lock_for_backup=yes; > >> > \q > >> > psql test -p 5432 > >> > show xc_lock_for_backup; > >> > xc_lock_for_backup > >> > -------------------- > >> > yes > >> > (1 row) > >> > > >> > Case 4: set on one coordinator, show from other > >> > ===================================== > >> > psql test 
-p 5432 > >> > set xc_lock_for_backup=yes; > >> > (From another tab) > >> > psql test -p 5433 > >> > show xc_lock_for_backup; > >> > xc_lock_for_backup > >> > -------------------- > >> > yes > >> > (1 row) > >> > > >> > pg_dump and pg_dumpall seem to work fine after locking the cluster for > >> > backup but I would test these utilities in detail next. > >> > > >> > Also I have yet to look in detail that standard_ProcessUtility is the > >> > only > >> > place that updates the portion of catalog that is dumped. There may be > >> > some > >> > other places too that need to be blocked for catalog updates. > >> > > >> > The patch adds no extra warnings and regression shows no extra > failure. > >> > > >> > Comments are welcome. > >> > >> Abbas wrote on another thread: > >> > >> > Amit wrote on another thread: > >> >> I haven't given a thought on the earlier patch you sent for cluster > >> >> lock > >> >> implementation; may be we can discuss this on that thread, but just a > >> >> quick > >> >> question: > >> >> > >> >> Does the cluster-lock command wait for the ongoing DDL commands to > >> >> finish > >> >> ? If not, we have problems. The subsequent pg_dump would not contain > >> >> objects > >> >> created by these particular DDLs. > >> > > >> > > >> > Suppose you have a two coordinator cluster. Assume one client > connected > >> > to > >> > each. Suppose one client issues a lock cluster command and the other > >> > issues > >> > a DDL. Is this what you mean by an ongoing DDL? If true then answer to > >> > your > >> > question is Yes. > >> > > >> > Suppose you have a prepared transaction that has a DDL in it, again if > >> > this > >> > can be considered an on going DDL, then again answer to your question > is > >> > Yes. > >> > > >> > Suppose you have a two coordinator cluster. Assume one client > connected > >> > to > >> > each. 
One client starts a transaction and issues a DDL, the second > >> > client > >> > issues a lock cluster command, the first commits the transaction. If > >> > this is > >> > an ongoing DDL, then the answer to your question is No. > >> > >> Yes this last scenario is what I meant: A DDL has been executed on > nodes, > >> but > >> not committed, when the cluster lock command is run and then pg_dump > >> immediately > >> starts its transaction before the DDL is committed. Here pg_dump does > >> not see the new objects that would be created. > > Come to think of it, there would always be a small interval where the > concurrency issue would remain. Can you please give an example to clarify. > If we were to totally get rid of this > concurrency issue, we need to have some kind of lock. For e.g. the > object access hook function will have shared acces lock on this object > (may be on pg_depend because it is always used for objcet > creation/drop ??) and the lock-cluster command will try to get > exclusive lock on the same. This of course should be done after we are > sure object access hook is called on all types of objects. > > > >> > >> I myself am not sure how would we prevent this from happening. There > >> are two callback hooks that might be worth considering though: > >> 1. Transaction End callback (CallXactCallbacks) > >> 2. Object creation/drop hook (InvokeObjectAccessHook) > >> > >> Suppose we create an object creation/drop hook function that would : > >> 1. store the current transaction id in a global objects_created list > >> if the cluster is not locked, > >> 2. or else if the cluster is locked, this hook would ereport() saying > >> "cannot create catalog objects in this mode". > >> > >> And then during transaction commit , a new transaction callback hook > will: > >> 1. Check the above objects_created list to see if the current > >> transaction has any objects created/dropped. > >> 2. 
If found and if the cluster-lock is on, it will again ereport() > >> saying "cannot create catalog objects in this mode" > >> > >> Thinking more on the object creation hook, we can even consider this > >> as a substitute for checking the cluster-lock status in > >> standardProcessUtility(). But I am not sure whether this hook does get > >> called on each of the catalog objects. At least the code comments say > >> it does. > > > > > > These are very good ideas, Thanks, I will work on those lines and will > > report back. > > > >> > >> > >> > >> > >> > But its a matter of > >> > deciding which camp are we going to put COMMIT in, the allow camp, or > >> > the > >> > deny camp. I decided to put it in allow camp, because I have not yet > >> > written > >> > any code to detect whether a transaction being committed has a DDL in > it > >> > or > >> > not, and stopping all transactions from committing looks too > restrictive > >> > to > >> > me. > >> > >> > >> > > >> > Do you have some other meaning of an ongoing DDL? > >> > >> > >> > >> > > >> > -- > >> > Abbas > >> > Architect > >> > EnterpriseDB Corporation > >> > The Enterprise PostgreSQL Company > >> > > >> > Phone: 92-334-5100153 > >> > > >> > Website: www.enterprisedb.com > >> > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > >> > Follow us on Twitter: http://www.twitter.com/enterprisedb > >> > > >> > This e-mail message (and any attachment) is intended for the use of > >> > the individual or entity to whom it is addressed. This message > >> > contains information from EnterpriseDB Corporation that may be > >> > privileged, confidential, or exempt from disclosure under applicable > >> > law. If you are not the intended recipient or authorized to receive > >> > this for the intended recipient, any use, dissemination, distribution, > >> > retention, archiving, or copying of this communication is strictly > >> > prohibited. 
-- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company |
From: Ashutosh B. <ash...@en...> - 2013-03-04 09:10:05
|
On Mon, Mar 4, 2013 at 1:51 PM, Abbas Butt <abb...@en...>wrote: > What I had in mind was to have pg_dump, when run with include-node, emit > CREATE NODE/ CREATE NODE GROUP commands only and nothing else. Those > commands will be used to create existing nodes/groups on the new > coordinator to be added. So it does make sense to use this option > independently, in fact it is supposed to be used independently. > > Ok, got it. But then include-node is really a misnomer. We should use --dump-nodes or something like that. > > On Mon, Mar 4, 2013 at 11:21 AM, Ashutosh Bapat < > ash...@en...> wrote: > >> Dumping TO NODE clause only makes sense if we dump CREATE NODE/ CREATE >> NODE GROUP. Dumping CREATE NODE/CREATE NODE GROUP may make sense >> independently, but might be useless without dumping TO NODE clause. >> >> BTW, OTOH, dumping CREATE NODE/CREATE NODE GROUP clause wouldn't create >> the nodes on all the coordinators, > > > All the coordinators already have the nodes information. > > >> but only the coordinator where dump will be restored. That's another >> thing you will need to consider OR are you going to fix that as well? > > > As a first step I am only listing the manual steps required to add a new > node, that might say run this command on all the existing coordinators by > connecting to them one by one manually. We can decide to automate these > steps later. > > ok > > >> >> >> On Mon, Mar 4, 2013 at 11:41 AM, Abbas Butt <abb...@en...>wrote: >> >>> I was thinking of using include-nodes to dump CREATE NODE / CREATE NODE >>> GROUP, that is required as one of the missing links in adding a new node. >>> How do you think about that? >>> >>> >>> On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat < >>> ash...@en...> wrote: >>> >>>> Hi Abbas, >>>> Please take a look at >>>> http://www.postgresql.org/docs/9.2/static/app-pgdump.html, which gives >>>> all the command line options for pg_dump. instead of >>>> include-to-node-clause, just include-nodes would suffice, I guess. 
>>>> >>>> >>>> On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt <abb...@en... >>>> > wrote: >>>> >>>>> PFA a updated patch that provides a command line argument called >>>>> --include-to-node-clause to let pg_dump know that the created dump is >>>>> supposed to emit TO NODE clause in the CREATE TABLE command. >>>>> If the argument is provided while taking the dump from a datanode, it >>>>> does not show TO NODE clause in the dump since the catalog table is empty >>>>> in this case. >>>>> The documentation of pg_dump is updated accordingly. >>>>> The rest of the functionality stays the same as before. >>>>> >>>>> >>>>> On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < >>>>> ash...@en...> wrote: >>>>> >>>>>> I think we should always dump DISTRIBUTE BY. >>>>>> >>>>>> PG does not stop dumping (or provide an option to do so) newer syntax >>>>>> so that the dump will work on older versions. On similar lines, an XC dump >>>>>> can not be used against PG without modification (removing DISTRIBUTE BY). >>>>>> There can be more serious problems like exceeding table size limits if an >>>>>> XC dump is tried to be restored in PG. >>>>>> >>>>>> As to TO NODE clause, I agree, that one can restore the dump on a >>>>>> cluster with different configuration, so giving an option to dump TO NODE >>>>>> clause will help. 
>>>>>> >>>>>> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >>>>>> mic...@gm...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt < >>>>>>> abb...@en...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>>>>>> mic...@gm...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>>>>>> abb...@en...> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>>>>>> mic...@gm...> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> PFA a patch to fix pg_dump to generate TO NODE clause in the >>>>>>>>>>>> dump. >>>>>>>>>>>> This is required because otherwise all tables get created on >>>>>>>>>>>> all nodes after a dump-restore cycle. >>>>>>>>>>>> >>>>>>>>>>> Not sure this is good if you take a dump of an XC cluster to >>>>>>>>>>> restore that to a vanilla Postgres cluster. >>>>>>>>>>> Why not adding a new option that would control the generation of >>>>>>>>>>> this clause instead of forcing it? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I think you can use the pg_dump that comes with vanilla PG to do >>>>>>>>>> that, can't you? But I am open to adding a control option if every body >>>>>>>>>> thinks so. >>>>>>>>>> >>>>>>>>> Sure you can, this is just to simplify the life of users a maximum >>>>>>>>> by not having multiple pg_dump binaries in their serves. >>>>>>>>> Saying that, I think that there is no option to choose if >>>>>>>>> DISTRIBUTE BY is printed in the dump or not... >>>>>>>>> >>>>>>>> >>>>>>>> Yah if we choose to have an option we will put both DISTRIBUTE BY >>>>>>>> and TO NODE under it. >>>>>>>> >>>>>>> Why not an option for DISTRIBUTE BY, and another for TO NODE? >>>>>>> This would bring more flexibility to the way dumps are generated. 
>>>>>>> -- >>>>>>> Michael >>>>>>> >>>>>>> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> Everyone hates slow websites. So do we. >>>>>>> Make your web apps faster with AppDynamics >>>>>>> Download AppDynamics Lite for free today: >>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>> _______________________________________________ >>>>>>> Postgres-xc-developers mailing list >>>>>>> Pos...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best Wishes, >>>>>> Ashutosh Bapat >>>>>> EntepriseDB Corporation >>>>>> The Enterprise Postgres Company >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> -- >>>>> Abbas >>>>> Architect >>>>> EnterpriseDB Corporation >>>>> The Enterprise PostgreSQL Company >>>>> >>>>> Phone: 92-334-5100153 >>>>> >>>>> Website: www.enterprisedb.com >>>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>>> >>>>> This e-mail message (and any attachment) is intended for the use of >>>>> the individual or entity to whom it is addressed. This message >>>>> contains information from EnterpriseDB Corporation that may be >>>>> privileged, confidential, or exempt from disclosure under applicable >>>>> law. If you are not the intended recipient or authorized to receive >>>>> this for the intended recipient, any use, dissemination, distribution, >>>>> retention, archiving, or copying of this communication is strictly >>>>> prohibited. If you have received this e-mail in error, please notify >>>>> the sender immediately by reply e-mail and delete this message. 
-- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
From: Amit K. <ami...@en...> - 2013-03-04 09:00:41
|
On 1 March 2013 18:45, Abbas Butt <abb...@en...> wrote: > > > On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar > <ami...@en...> wrote: >> >> On 19 February 2013 12:37, Abbas Butt <abb...@en...> wrote: >> > >> > Hi, >> > Attached please find a patch that locks the cluster so that dump can be >> > taken to be restored on the new node to be added. >> > >> > To lock the cluster the patch adds a new GUC parameter called >> > xc_lock_for_backup, however its status is maintained by the pooler. The >> > reason is that the default behavior of XC is to release connections as >> > soon >> > as a command is done and it uses PersistentConnections GUC to control >> > the >> > behavior. We in this case however need a status that is independent of >> > the >> > setting of PersistentConnections. >> > >> > Assume we have two coordinator cluster, the patch provides this >> > behavior: >> > >> > Case 1: set and show >> > ==================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > Case 2: set from one client show from other >> > ================================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > (From another tab) >> > psql test -p 5432 >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > Case 3: set from one, quit it, run again and show >> > ====================================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > \q >> > psql test -p 5432 >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > Case 4: set on one coordinator, show from other >> > ===================================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > (From another tab) >> > psql test -p 5433 >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > 
pg_dump and pg_dumpall seem to work fine after locking the cluster for >> > backup but I would test these utilities in detail next. >> > >> > Also I have yet to look in detail that standard_ProcessUtility is the >> > only >> > place that updates the portion of catalog that is dumped. There may be >> > some >> > other places too that need to be blocked for catalog updates. >> > >> > The patch adds no extra warnings and regression shows no extra failure. >> > >> > Comments are welcome. >> >> Abbas wrote on another thread: >> >> > Amit wrote on another thread: >> >> I haven't given a thought on the earlier patch you sent for cluster >> >> lock >> >> implementation; may be we can discuss this on that thread, but just a >> >> quick >> >> question: >> >> >> >> Does the cluster-lock command wait for the ongoing DDL commands to >> >> finish >> >> ? If not, we have problems. The subsequent pg_dump would not contain >> >> objects >> >> created by these particular DDLs. >> > >> > >> > Suppose you have a two coordinator cluster. Assume one client connected >> > to >> > each. Suppose one client issues a lock cluster command and the other >> > issues >> > a DDL. Is this what you mean by an ongoing DDL? If true then answer to >> > your >> > question is Yes. >> > >> > Suppose you have a prepared transaction that has a DDL in it, again if >> > this >> > can be considered an on going DDL, then again answer to your question is >> > Yes. >> > >> > Suppose you have a two coordinator cluster. Assume one client connected >> > to >> > each. One client starts a transaction and issues a DDL, the second >> > client >> > issues a lock cluster command, the first commits the transaction. If >> > this is >> > an ongoing DDL, then the answer to your question is No. >> >> Yes this last scenario is what I meant: A DDL has been executed on nodes, >> but >> not committed, when the cluster lock command is run and then pg_dump >> immediately >> starts its transaction before the DDL is committed. 
Here pg_dump does >> not see the new objects that would be created. Come to think of it, there would always be a small interval where the concurrency issue would remain. If we were to totally get rid of this concurrency issue, we need to have some kind of lock. For e.g. the object access hook function will have shared acces lock on this object (may be on pg_depend because it is always used for objcet creation/drop ??) and the lock-cluster command will try to get exclusive lock on the same. This of course should be done after we are sure object access hook is called on all types of objects. >> >> I myself am not sure how would we prevent this from happening. There >> are two callback hooks that might be worth considering though: >> 1. Transaction End callback (CallXactCallbacks) >> 2. Object creation/drop hook (InvokeObjectAccessHook) >> >> Suppose we create an object creation/drop hook function that would : >> 1. store the current transaction id in a global objects_created list >> if the cluster is not locked, >> 2. or else if the cluster is locked, this hook would ereport() saying >> "cannot create catalog objects in this mode". >> >> And then during transaction commit , a new transaction callback hook will: >> 1. Check the above objects_created list to see if the current >> transaction has any objects created/dropped. >> 2. If found and if the cluster-lock is on, it will again ereport() >> saying "cannot create catalog objects in this mode" >> >> Thinking more on the object creation hook, we can even consider this >> as a substitute for checking the cluster-lock status in >> standardProcessUtility(). But I am not sure whether this hook does get >> called on each of the catalog objects. At least the code comments say >> it does. > > > These are very good ideas, Thanks, I will work on those lines and will > report back. > >> >> >> >> >> > But its a matter of >> > deciding which camp are we going to put COMMIT in, the allow camp, or >> > the >> > deny camp. 
I decided to put it in allow camp, because I have not yet >> > written >> > any code to detect whether a transaction being committed has a DDL in it >> > or >> > not, and stopping all transactions from committing looks too restrictive >> > to >> > me. >> >> >> > >> > Do you have some other meaning of an ongoing DDL? >> >> >> >> > >> > -- >> > Abbas >> > Architect >> > EnterpriseDB Corporation >> > The Enterprise PostgreSQL Company >> > >> > Phone: 92-334-5100153 >> > >> > Website: www.enterprisedb.com >> > EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> > Follow us on Twitter: http://www.twitter.com/enterprisedb >> > >> > This e-mail message (and any attachment) is intended for the use of >> > the individual or entity to whom it is addressed. This message >> > contains information from EnterpriseDB Corporation that may be >> > privileged, confidential, or exempt from disclosure under applicable >> > law. If you are not the intended recipient or authorized to receive >> > this for the intended recipient, any use, dissemination, distribution, >> > retention, archiving, or copying of this communication is strictly >> > prohibited. If you have received this e-mail in error, please notify >> > the sender immediately by reply e-mail and delete this message. >> > >> > >> > ------------------------------------------------------------------------------ >> > Everyone hates slow websites. So do we. >> > Make your web apps faster with AppDynamics >> > Download AppDynamics Lite for free today: >> > http://p.sf.net/sfu/appdyn_d2d_feb >> > _______________________________________________ >> > Postgres-xc-developers mailing list >> > Pos...@li... 
>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
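The hook-and-callback scheme discussed in the message above can be sketched as a small, self-contained simulation. Plain Python stands in for the PostgreSQL hooks here: only the `objects_created` list, the commit-time callback idea (CallXactCallbacks), and the error text come from the mail; everything else is a hypothetical illustration, not actual Postgres-XC code.

```python
# Simulation of the proposed scheme: an object creation/drop hook records
# which transactions ran DDL, and a commit callback rejects a transaction
# that created objects if the cluster has been locked in the meantime.

cluster_locked = False
objects_created = set()  # xids of transactions that created/dropped objects

def object_access_hook(xid):
    """Stand-in for the object creation/drop hook (InvokeObjectAccessHook)."""
    if cluster_locked:
        # In the proposal this would be an ereport() call.
        raise RuntimeError("cannot create catalog objects in this mode")
    objects_created.add(xid)

def xact_commit_callback(xid):
    """Stand-in for a transaction-end callback (CallXactCallbacks)."""
    if xid in objects_created and cluster_locked:
        raise RuntimeError("cannot create catalog objects in this mode")
    objects_created.discard(xid)

# A DDL transaction that commits before the cluster is locked: allowed.
object_access_hook(xid=101)
xact_commit_callback(xid=101)

# A DDL transaction caught by the lock at commit time: denied.
object_access_hook(xid=102)          # DDL runs before the lock...
cluster_locked = True                # ...then the cluster gets locked...
try:
    xact_commit_callback(xid=102)    # ...so the commit is rejected
except RuntimeError as e:
    print(e)
```

This mirrors the "deny camp" behaviour for transactions containing DDL while leaving plain transactions unaffected, which is exactly the distinction the thread is debating for COMMIT.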
From: Koichi S. <ko...@in...> - 2013-03-04 08:31:05
|
Thank you Benny; Ashutosh, could you commit it when we're sure that the patch is okay to developers. Regards; --- Koichi On Mon, 4 Mar 2013 15:50:36 +0800 Xiong Wang <wan...@gm...> wrote: > Hi Ashutosh , > > Sorry, I made a mistake that I revised the wrong file which is relative > with \d+ reference. The attached patch made it right. > > Thanks & Regards, > > Benny Wang > > 2013/3/1 Ashutosh Bapat <ash...@en...> > > > Hi Benny, > > Sorry for coming back again on this one. I have two requests > > > > 1. We are using two different queries to get distribution information, can > > we do this in a single query? I am extremely sorry for not spotting this > > earlier. Also, can we compress following code > > 73 + if (tuples > 0) > > 74 + { > > 75 + const char *dist_by = _("Distribute By"); > > 76 + const char *loc_nodes = _("Location Nodes"); > > 77 + > > 78 + /* Only one tuple should be returned */ > > 79 + if (tuples > 1) > > 80 + goto error_return; > > > > to read > > if (tuples == 1) > > { > > } > > else > > goto error_return; > > > > 2. Can you please provide the documentation changes as well, in the same > > patch? You will need to change the sgml file under doc-xc folder > > corresponding to app-psql.html. You will find somewhere on Postgres-XC > > wiki, how to compile the documentation. Suzuki-san, can you please help > > here? > > > > > > On Thu, Feb 28, 2013 at 2:11 PM, Ashutosh Bapat < > > ash...@en...> wrote: > > > >> I have reviewed the code and it looks fine. The regression is not showing > >> any extra diffs. > >> > >> I will commit this patch tomorrow morning (IST time), if I do not see any > >> objections. > >> > >> > >> On Thu, Feb 28, 2013 at 12:32 PM, Xiong Wang <wan...@gm...>wrote: > >> > >>> Hi, > >>> > >>> 2013/2/28 Ashutosh Bapat <ash...@en...> > >>> > >>>> Hi Benny, > >>>> > >>>> It seems you commented out some tests in serial schedule in this patch. > >>>> Can you please uncomment those? > >>>> > >>>> Yes. I forgot to clean the comments. Thanks. 
> >>> > >>> > >> > >> > >> > >>> Regards > >>> > >>> BennyWang > >>> > >>>> > >>>> On Thu, Feb 28, 2013 at 11:56 AM, Xiong Wang <wan...@gm...>wrote: > >>>> > >>>>> Hi Ashutosh, > >>>>> > >>>>> I revised the patch according to your advice. I deleted one duplicated > >>>>> colon when print "Location Nodes" by your revised patch. > >>>>> > >>>>> Thanks & Regards, > >>>>> Benny Wang > >>>>> > >>>>> > >>>>> 2013/2/28 Ashutosh Bapat <ash...@en...> > >>>>> > >>>>>> > >>>>>> > >>>>>> On Thu, Feb 28, 2013 at 7:56 AM, Xiong Wang <wan...@gm...>wrote: > >>>>>> > >>>>>>> Hi Ashutosh, > >>>>>>> > >>>>>>> Thanks for your review at first. > >>>>>>> > >>>>>>> I compared inherit.out and inherit_1.out under directory > >>>>>>> regress/expected. There's a lot of differences between these two files. > >>>>>>> Expected inherit.out keeps the original PG > >>>>>>> results. Do you think it's necessary to revise the inherit.out > >>>>>>> impacted by this patch? > >>>>>>> > >>>>>>> > >>>>>> Yes. As a default we change all the .out files when there is > >>>>>> corresponding change in functionality for XC. E.g. now onwards, \d+ output > >>>>>> on XC will always have distribution information, so there is no point in > >>>>>> keeping the PG output. It's only when the outputs differ because of lack of > >>>>>> functionality (restricted features) in XC or because of bugs in XC, we keep > >>>>>> the PG output for references and create an alternate expected output file. > >>>>>> > >>>>>> > >>>>>>> Thanks & Regards, > >>>>>>> > >>>>>>> Benny Wang > >>>>>>> > >>>>>>> > >>>>>>> 2013/2/27 Ashutosh Bapat <ash...@en...> > >>>>>>> > >>>>>>>> Hi Benny, > >>>>>>>> I took a good look at this patch now. Attached please find a > >>>>>>>> revised patch, with some minor modifications done. Rest of the comments are > >>>>>>>> below > >>>>>>>> > >>>>>>>> 1. 
As a general guideline for adding #ifdef SOMETHING, it's good to > >>>>>>>> end it with not just #endif but #endif /* SOMETHING */, so that it's easier > >>>>>>>> to find the mutually corresponding pairs of #ifdef and #endif. Right now I > >>>>>>>> have added it myself in the attached patch. > >>>>>>>> > >>>>>>>> 2. It's better to print the distribution information at the end > >>>>>>>> of everything else, so that it's easy to spot in case someone needs to > >>>>>>>> differentiate between PG and PGXC output of \d+. This has been taken care > >>>>>>>> of in the revised patch. > >>>>>>>> > >>>>>>>> 3. Distribution type and nodes better be on the same line as their > >>>>>>>> heading e.g. Distributed by: REPLICATION. The patch contains the change. > >>>>>>>> > >>>>>>>> 4. As suggested by Nikhil in one of the mails, the output of > >>>>>>>> location nodes needs to be changed from format {node1,node2,..} to node1, > >>>>>>>> node2, node3, ... (notice the space after "," and removal of braces.). Done > >>>>>>>> in my patch. > >>>>>>>> > >>>>>>>> Please provide me a patch, with the regression outputs adjusted > >>>>>>>> accordingly. You will need to change inherit.out along with inherit_1.out. > >>>>>>>> > >>>>>>>> In attached files, print_distribution_info.patch.2 is the patch with > >>>>>>>> the changes described above applied on your patch. > >>>>>>>> print_distribution_info.patch.diff is the patch containing only the changes > >>>>>>>> described above. Please review these changes and provide me an updated > >>>>>>>> patch. > >>>>>>>> > >>>>>>>> > >>>>>>>> On Fri, Feb 22, 2013 at 9:21 AM, Xiong Wang <wan...@gm... > >>>>>>>> > wrote: > >>>>>>>>> > >>>>>>>>> Hi all, > >>>>>>>>> > >>>>>>>>> I finished the patch. If you have any comments, give me a reply. > >>>>>>>>> > >>>>>>>>> Thanks & Regards, > >>>>>>>>> > >>>>>>>>> Benny Wang > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> 2013/2/21 Koichi Suzuki <koi...@gm...> > >>>>>>>>> > >>>>>>>>>> Okay. I hope this satisfies everybody. 
> >>>>>>>>>> ---------- > >>>>>>>>>> Koichi Suzuki > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> 2013/2/20 Ashutosh Bapat <ash...@en...>: > >>>>>>>>>> > > >>>>>>>>>> > > >>>>>>>>>> > On Wed, Feb 20, 2013 at 10:49 AM, Xiong Wang < > >>>>>>>>>> wan...@gm...> wrote: > >>>>>>>>>> >> > >>>>>>>>>> >> Hi Ashutosh, > >>>>>>>>>> >> > >>>>>>>>>> >> 2013/2/6 Ashutosh Bapat <ash...@en...> > >>>>>>>>>> >>> > >>>>>>>>>> >>> Hi Xiong, > >>>>>>>>>> >>> > >>>>>>>>>> >>> On Tue, Feb 5, 2013 at 8:06 PM, Xiong Wang < > >>>>>>>>>> wan...@gm...> > >>>>>>>>>> >>> wrote: > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> Hi Ashutosh, > >>>>>>>>>> >>>> 2013/2/5 Ashutosh Bapat <ash...@en...> > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> Hi Xiong, > >>>>>>>>>> >>>>> Thanks for the patch. It's very much awaited feature. > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> Here are some comments on your patch. > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> The patch applies well, but has some unwanted white spaces > >>>>>>>>>> >>>>> [ashutosh@ubuntu coderoot]git apply > >>>>>>>>>> >>>>> /mnt/hgfs/tmp/print_distribution_info.patch > >>>>>>>>>> >>>>> /mnt/hgfs/tmp/print_distribution_info.patch:28: space > >>>>>>>>>> before tab in > >>>>>>>>>> >>>>> indent. > >>>>>>>>>> >>>>> "SELECT CASE pclocatortype \n" > >>>>>>>>>> >>>>> /mnt/hgfs/tmp/print_distribution_info.patch:35: trailing > >>>>>>>>>> whitespace. > >>>>>>>>>> >>>>> "WHEN '%c' THEN 'MODULO' END || ' ('|| > >>>>>>>>>> a.attname > >>>>>>>>>> >>>>> ||')' as distype\n" > >>>>>>>>>> >>>>> /mnt/hgfs/tmp/print_distribution_info.patch:59: trailing > >>>>>>>>>> whitespace. > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> warning: 3 lines add whitespace errors. > >>>>>>>>>> >>>>> Please take care of those. > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> Thanks for your patient review. I will fix these problems. 
> >>>>>>>>>> >>>> > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> I see that there are comments like /* NOTICE: The number of > >>>>>>>>>> beginning > >>>>>>>>>> >>>>> whitespace is the same as index print */ followed by > >>>>>>>>>> printing of a message > >>>>>>>>>> >>>>> with some spaces hard-coded in it. I do not see this style > >>>>>>>>>> being used > >>>>>>>>>> >>>>> anywhere in the file and it looks problematic. If it > >>>>>>>>>> happens that this new > >>>>>>>>>> >>>>> information is indented, the hard-coded spaces will not > >>>>>>>>>> align properly. Can > >>>>>>>>>> >>>>> you please check what's the proper way of aligning the > >>>>>>>>>> lines and use that > >>>>>>>>>> >>>>> method? > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> I add this notice deliberately because the length of white > >>>>>>>>>> spaces before > >>>>>>>>>> >>>> printing index information is 4. There is no warn similar > >>>>>>>>>> with my comment in > >>>>>>>>>> >>>> describe.c. So, I will delete this comment within later > >>>>>>>>>> patch. Thanks again. > >>>>>>>>>> >>>> > >>>>>>>>>> >>> > >>>>>>>>>> >>> > >>>>>>>>>> >>> Don't just delete the comment, we need to get rid of > >>>>>>>>>> hardcoded white > >>>>>>>>>> >>> spaces. Do you see any other instance in the file which uses > >>>>>>>>>> white spaces? > >>>>>>>>>> >> > >>>>>>>>>> >> > >>>>>>>>>> >> Yes. There are several other places use hardcoded white > >>>>>>>>>> spaces such as > >>>>>>>>>> >> printing constraints including check, fk and printing trigger > >>>>>>>>>> informations. > >>>>>>>>>> >> In order to follow postgresql style, I will just delete my > >>>>>>>>>> comments. 
> >>>>>>>>>> >>> > >>>>>>>>>> >>> > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> Instead of following query, > >>>>>>>>>> >>>>> 1742 "SELECT node_name FROM > >>>>>>>>>> >>>>> pg_catalog.pgxc_node \n" > >>>>>>>>>> >>>>> 1743 "WHERE oid::text in \n" > >>>>>>>>>> >>>>> 1744 "(SELECT > >>>>>>>>>> >>>>> pg_catalog.regexp_split_to_table(nodeoids::text, E'\\\\s+') > >>>>>>>>>> FROM > >>>>>>>>>> >>>>> pg_catalog.pgxc_class WHERE pcrelid = '%s');" > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> I would use (with proper indentation) > >>>>>>>>>> >>>>> SELECT ARRAY(SELECT node_name FROM pg_catalog.pgxc_node > >>>>>>>>>> WHERE oid IN > >>>>>>>>>> >>>>> (SELECT unnest(nodeoids) FROM pgxc_class WHERE pcrelid = > >>>>>>>>>> %s)); > >>>>>>>>>> >>>>> This query will give you only one row containing all the > >>>>>>>>>> nodes. Using > >>>>>>>>>> >>>>> unnest to convert an array to table and then using IN > >>>>>>>>>> operator is better > >>>>>>>>>> >>>>> than converting array to string and using split on string, > >>>>>>>>>> and then > >>>>>>>>>> >>>>> combining the result back. That way, we don't rely on the > >>>>>>>>>> syntax of array to > >>>>>>>>>> >>>>> string conversion or any particular regular expression. > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> Great. I didn't find the unnest function. I will change my > >>>>>>>>>> query later. > >>>>>>>>>> >>>> > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> Please provide the fix for the failing regressions as well. > >>>>>>>>>> You will > >>>>>>>>>> >>>>> need to change the expected output. > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> As for regression failure, I wanted to submit the fixing > >>>>>>>>>> patch but my > >>>>>>>>>> >>>> test environment is different from yours. I doubt that my > >>>>>>>>>> patch for fixing > >>>>>>>>>> >>>> the failure may be not useful. > >>>>>>>>>> >>>> > >>>>>>>>>> >>> > >>>>>>>>>> >>> > >>>>>>>>>> >>> Send the expected output changes anyway, we will have to find > >>>>>>>>>> out a way > >>>>>>>>>> >>> to fix the regression. 
> >>>>>>>>>> >> > >>>>>>>>>> >> Ok. > >>>>>>>>>> >> > >>>>>>>>>> > > >>>>>>>>>> > Now you have a way to fix the regression as well. Use ALL > >>>>>>>>>> DATANODES if the > >>>>>>>>>> > list of nodes contains all the datanodes. We have just seen one > >>>>>>>>>> objection. > >>>>>>>>>> > Printing ALL DATANODES looks to have uses other than silencing > >>>>>>>>>> regressions. > >>>>>>>>>> > So, it's worth putting it. > >>>>>>>>>> > > >>>>>>>>>> >> > >>>>>>>>>> >> Thanks & Regards > >>>>>>>>>> >> Benny Wang > >>>>>>>>>> >>>> > >>>>>>>>>> >>>> > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> Rest of the patch looks good. > >>>>>>>>>> >>>>> > >>>>>>>>>> >>>>> On Tue, Feb 5, 2013 at 11:41 AM, Xiong Wang < > >>>>>>>>>> wan...@gm...> > >>>>>>>>>> >>>>> wrote: > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> Hi all, > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> The enclosure is the patch for showing distribution > >>>>>>>>>> information. > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> Two sql files, inherit.sql and create_table_like.sql in > >>>>>>>>>> the regression > >>>>>>>>>> >>>>>> test will fail. > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> Thanks & Regards, > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> Benny Wang > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> > >>>>>>>>>> >>>>>> 2013/2/1 Koichi Suzuki <koi...@gm...> > >>>>>>>>>> >>>>>>> > >>>>>>>>>> >>>>>>> Yes, it's nice to have. > >>>>>>>>>> >>>>>>> > >>>>>>>>>> >>>>>>> I understand there were many discuttions to have it, > >>>>>>>>>> separate command > >>>>>>>>>> >>>>>>> or \d and \d+. \d, \d+ extension will not be affected > >>>>>>>>>> by command > >>>>>>>>>> >>>>>>> name conflict. I hope we can handle further change both > >>>>>>>>>> in XC and > >>>>>>>>>> >>>>>>> PG. I don't see very big difference in comparison of > >>>>>>>>>> >>>>>>> separate/existing command. Their pros and cons seems to > >>>>>>>>>> be > >>>>>>>>>> >>>>>>> comparable. 
So I think we can decide what is more > >>>>>>>>>> convenient to > >>>>>>>>>> >>>>>>> use. > >>>>>>>>>> >>>>>>> So far, I understand more people prefer \d. It's > >>>>>>>>>> quite okay with > >>>>>>>>>> >>>>>>> me. > >>>>>>>>>> >>>>>>> > >>>>>>>>>> >>>>>>> In addition, we may want to see each node information > >>>>>>>>>> (resource, > >>>>>>>>>> >>>>>>> primary, preferred) and configuration of each nodegroup. > >>>>>>>>>> Because > >>>>>>>>>> >>>>>>> this > >>>>>>>>>> >>>>>>> is quite new to XC, I think it's better to have > >>>>>>>>>> xc-specific command > >>>>>>>>>> >>>>>>> such as \xc something. > >>>>>>>>>> >>>>>>> > >>>>>>>>>> >>>>>>> Regards; > >>>>>>>>>> >>>>>>> ---------- > >>>>>>>>>> >>>>>>> Koichi Suzuki > >>>>>>>>>> >>>>>>> > >>>>>>>>>> >>>>>>> > >>>>>>>>>> >>>>>>> 2013/2/1 Xiong Wang <wan...@gm...>: > >>>>>>>>>> >>>>>>> > Hi Suzuki, > >>>>>>>>>> >>>>>>> > According to Ashutosh and ANikhil, It seems that they > >>>>>>>>>> want to print > >>>>>>>>>> >>>>>>> > distributed method as well as the location node list > >>>>>>>>>> using \d+ . > >>>>>>>>>> >>>>>>> > Are you in favor? > >>>>>>>>>> >>>>>>> > > >>>>>>>>>> >>>>>>> > Regards, > >>>>>>>>>> >>>>>>> > Benny > >>>>>>>>>> >>>>>>> > > >>>>>>>>>> >>>>>>> > > >>>>>>>>>> >>>>>>> > 2013/2/1 Koichi Suzuki <koi...@gm...> > >>>>>>>>>> >>>>>>> >> > >>>>>>>>>> >>>>>>> >> One more issue, > >>>>>>>>>> >>>>>>> >> > >>>>>>>>>> >>>>>>> >> Does anybody need a command to print node list from > >>>>>>>>>> pgxc_node and > >>>>>>>>>> >>>>>>> >> pgxc_group? > >>>>>>>>>> >>>>>>> >> ---------- > >>>>>>>>>> >>>>>>> >> Koichi Suzuki > >>>>>>>>>> >>>>>>> >> > >>>>>>>>>> >>>>>>> >> > >>>>>>>>>> >>>>>>> >> 2013/2/1 Koichi Suzuki <koi...@gm...>: > >>>>>>>>>> >>>>>>> >> > Great! > >>>>>>>>>> >>>>>>> >> > > >>>>>>>>>> >>>>>>> >> > Benny, please post your patch when ready. 
> >>>>>>>>>> >>>>>>> >> > ---------- > >>>>>>>>>> >>>>>>> >> > Koichi Suzuki > >>>>>>>>>> >>>>>>> >> > > >>>>>>>>>> >>>>>>> >> > > >>>>>>>>>> >>>>>>> >> > 2013/2/1 Mason Sharp <ma...@st...>: > >>>>>>>>>> >>>>>>> >> >> > >>>>>>>>>> >>>>>>> >> >> > >>>>>>>>>> >>>>>>> >> >> On Thu, Jan 31, 2013 at 5:38 AM, Ashutosh Bapat > >>>>>>>>>> >>>>>>> >> >> <ash...@en...> wrote: > >>>>>>>>>> >>>>>>> >> >>> > >>>>>>>>>> >>>>>>> >> >>> +1. We should add this functionality as \d+. > >>>>>>>>>> >>>>>>> >> >> > >>>>>>>>>> >>>>>>> >> >> > >>>>>>>>>> >>>>>>> >> >> +1 > >>>>>>>>>> >>>>>>> >> >> > >>>>>>>>>> >>>>>>> >> >>> > >>>>>>>>>> >>>>>>> >> >>> > >>>>>>>>>> >>>>>>> >> >>> Xion, > >>>>>>>>>> >>>>>>> >> >>> You will need to output the nodes where the table > >>>>>>>>>> is > >>>>>>>>>> >>>>>>> >> >>> distributed or > >>>>>>>>>> >>>>>>> >> >>> replicated. > >>>>>>>>>> >>>>>>> >> >>> > >>>>>>>>>> >>>>>>> >> >>> > >>>>>>>>>> >>>>>>> >> >>> On Thu, Jan 31, 2013 at 3:11 PM, Nikhil Sontakke > >>>>>>>>>> >>>>>>> >> >>> <ni...@st...> > >>>>>>>>>> >>>>>>> >> >>> wrote: > >>>>>>>>>> >>>>>>> >> >>>> > >>>>>>>>>> >>>>>>> >> >>>> Btw, I vote for showing PGXC output with \d+ and > >>>>>>>>>> other > >>>>>>>>>> >>>>>>> >> >>>> extended > >>>>>>>>>> >>>>>>> >> >>>> commands > >>>>>>>>>> >>>>>>> >> >>>> only. > >>>>>>>>>> >>>>>>> >> >>>> > >>>>>>>>>> >>>>>>> >> >>>> Nikhils > >>>>>>>>>> >>>>>>> >> >>>> > >>>>>>>>>> >>>>>>> >> >>>> On Thu, Jan 31, 2013 at 3:08 PM, Nikhil Sontakke > >>>>>>>>>> >>>>>>> >> >>>> <ni...@st...> > >>>>>>>>>> >>>>>>> >> >>>> wrote: > >>>>>>>>>> >>>>>>> >> >>>> > I still do not understand how showing > >>>>>>>>>> additional stuff in > >>>>>>>>>> >>>>>>> >> >>>> > the PGXC > >>>>>>>>>> >>>>>>> >> >>>> > version makes it incompatible with vanilla > >>>>>>>>>> Postgres? 
> >>>>>>>>>> >>>>>>> >> >>>> > > >>>>>>>>>> >>>>>>> >> >>>> > As you can see, the OP made changes to the > >>>>>>>>>> *existing* \d > >>>>>>>>>> >>>>>>> >> >>>> > logic > >>>>>>>>>> >>>>>>> >> >>>> > which > >>>>>>>>>> >>>>>>> >> >>>> > is a logical way of doing things. As long as we > >>>>>>>>>> use #ifdef > >>>>>>>>>> >>>>>>> >> >>>> > PGXC, I > >>>>>>>>>> >>>>>>> >> >>>> > do > >>>>>>>>>> >>>>>>> >> >>>> > not see how printing additional info breaks > >>>>>>>>>> anything. > >>>>>>>>>> >>>>>>> >> >>>> > Infact it > >>>>>>>>>> >>>>>>> >> >>>> > avoids > >>>>>>>>>> >>>>>>> >> >>>> > users having to learn more stuff. > >>>>>>>>>> >>>>>>> >> >>>> > > >>>>>>>>>> >>>>>>> >> >>>> > Regards, > >>>>>>>>>> >>>>>>> >> >>>> > Nikhils > >>>>>>>>>> >>>>>>> >> >>>> > > >>>>>>>>>> >>>>>>> >> >>>> > On Thu, Jan 31, 2013 at 2:59 PM, Michael Paquier > >>>>>>>>>> >>>>>>> >> >>>> > <mic...@gm...> wrote: > >>>>>>>>>> >>>>>>> >> >>>> >> On Thu, Jan 31, 2013 at 6:04 PM, Xiong Wang > >>>>>>>>>> >>>>>>> >> >>>> >> <wan...@gm...> > >>>>>>>>>> >>>>>>> >> >>>> >> wrote: > >>>>>>>>>> >>>>>>> >> >>>> >>> > >>>>>>>>>> >>>>>>> >> >>>> >>> I wrote a simple patch which will show > >>>>>>>>>> distribution > >>>>>>>>>> >>>>>>> >> >>>> >>> information > >>>>>>>>>> >>>>>>> >> >>>> >>> when > >>>>>>>>>> >>>>>>> >> >>>> >>> you > >>>>>>>>>> >>>>>>> >> >>>> >>> use \d tablename command. > >>>>>>>>>> >>>>>>> >> >>>> >> > >>>>>>>>>> >>>>>>> >> >>>> >> I vote no for that with \d. I agree it is > >>>>>>>>>> useful, but it > >>>>>>>>>> >>>>>>> >> >>>> >> makes the > >>>>>>>>>> >>>>>>> >> >>>> >> output > >>>>>>>>>> >>>>>>> >> >>>> >> inconsistent with vanilla Postgres. And I am > >>>>>>>>>> sure it > >>>>>>>>>> >>>>>>> >> >>>> >> creates many > >>>>>>>>>> >>>>>>> >> >>>> >> failures > >>>>>>>>>> >>>>>>> >> >>>> >> in regression tests. 
It has been discussed > >>>>>>>>>> before to use > >>>>>>>>>> >>>>>>> >> >>>> >> either a > >>>>>>>>>> >>>>>>> >> >>>> >> new > >>>>>>>>>> >>>>>>> >> >>>> >> command with a word or a letter we'll be sure > >>>>>>>>>> won't be in > >>>>>>>>>> >>>>>>> >> >>>> >> conflict > >>>>>>>>>> >>>>>>> >> >>>> >> with > >>>>>>>>>> >>>>>>> >> >>>> >> vanilla now and at some point in the future. > >>>>>>>>>> Something > >>>>>>>>>> >>>>>>> >> >>>> >> like > >>>>>>>>>> >>>>>>> >> >>>> >> "\distrib" > >>>>>>>>>> >>>>>>> >> >>>> >> perhaps? > >>>>>>>>>> >>>>>>> >> >>>> >> -- > >>>>>>>>>> >>>>>>> >> >>>> >> Michael Paquier > >>>>>>>>>> >>>>>>> >> >>>> >> http://michael.otacoo.com > >>>>>>>>>> >>>>>>> >> >>>> >> > >>>>>>>>>> >>>>>>> >> >>>> >> > >>>>>>>>>> >>>>>>> >> >>>> >> > >>>>>>>>>> >>>>>>> >> >>>> >> > >>>>>>>>>> ------------------------------------------------------------------------------ > >>>>>>>>>> >>>>>>> >> >>>> >> Everyone hates slow websites. So do we. > >>>>>>>>>> >>>>>>> >> >>>> >> Make your web apps faster with AppDynamics > >>>>>>>>>> >>>>>>> >> >>>> >> Download AppDynamics Lite for free today: > >>>>>>>>>> >>>>>>> >> >>>> >> http://p.sf.net/sfu/appdyn_d2d_jan > >>>>>>>>>> >>>>>>> >> >>>> >> _______________________________________________ > >>>>>>>>>> >>>>>>> >> >>>> >> Postgres-xc-developers mailing list > >>>>>>>>>> >>>>>>> >> >>>> >> Pos...@li... 
> >>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >>>>>>>>> > >>>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> -- > >>>>>>>> Best Wishes, > >>>>>>>> Ashutosh Bapat > >>>>>>>> EntepriseDB Corporation > >>>>>>>> The Enterprise Postgres Company > >>>>>>>> > >>>>>>> > >>>>>>> > >>>>>> > >>>>>> > >>>>>> -- > >>>>>> Best Wishes, > >>>>>> Ashutosh Bapat > >>>>>> EntepriseDB Corporation > >>>>>> The Enterprise Postgres Company > >>>>>> > >>>>> > >>>>> > >>>> > >>>> > >>>> -- > >>>> Best Wishes, > >>>> Ashutosh Bapat > >>>> EntepriseDB Corporation > >>>> The Enterprise Postgres Company > >>>> > >>> > >>> > >> > >> > >> -- > >> Best Wishes, > >> Ashutosh Bapat > >> EntepriseDB Corporation > >> The Enterprise Postgres Company > >> > > > > > > > > -- > > Best Wishes, > > Ashutosh Bapat > > EntepriseDB Corporation > > The Enterprise Postgres Company > > |
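Two concrete points from the review thread above — reformatting the node-array output from `{node1,node2,..}` to `node1, node2, ...`, and printing `ALL DATANODES` when the list covers every datanode — can be sketched as follows. The helper name `format_location_nodes` is hypothetical; the real change lives in psql's describe.c, which is written in C.

```python
def format_location_nodes(raw, all_datanodes):
    """Turn a Postgres array literal such as '{node1,node2}' into the
    'node1, node2' display form suggested in the review, and collapse
    the output to 'ALL DATANODES' when every datanode is listed
    (illustrative helper only, not the actual describe.c logic)."""
    nodes = raw.strip("{}").split(",")
    if set(nodes) == set(all_datanodes):
        return "ALL DATANODES"
    return ", ".join(nodes)

print(format_location_nodes("{node1,node2}", ["node1", "node2", "node3"]))
# node1, node2
print(format_location_nodes("{node1,node2,node3}", ["node1", "node2", "node3"]))
# ALL DATANODES
```

The `ALL DATANODES` collapse is what makes the expected regression output stable across clusters with different node counts, which is why the thread treats it as more than a cosmetic change.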
From: Abbas B. <abb...@en...> - 2013-03-04 08:22:06
|
What I had in mind was to have pg_dump, when run with include-node, emit CREATE NODE/ CREATE NODE GROUP commands only and nothing else. Those commands will be used to create existing nodes/groups on the new coordinator to be added. So it does make sense to use this option independently, in fact it is supposed to be used independently. On Mon, Mar 4, 2013 at 11:21 AM, Ashutosh Bapat < ash...@en...> wrote: > Dumping TO NODE clause only makes sense if we dump CREATE NODE/ CREATE > NODE GROUP. Dumping CREATE NODE/CREATE NODE GROUP may make sense > independently, but might be useless without dumping TO NODE clause. > > BTW, OTOH, dumping CREATE NODE/CREATE NODE GROUP clause wouldn't create > the nodes on all the coordinators, All the coordinators already have the nodes information. > but only the coordinator where dump will be restored. That's another thing > you will need to consider OR are you going to fix that as well? As a first step I am only listing the manual steps required to add a new node, that might say run this command on all the existing coordinators by connecting to them one by one manually. We can decide to automate these steps later. > > > On Mon, Mar 4, 2013 at 11:41 AM, Abbas Butt <abb...@en...>wrote: > >> I was thinking of using include-nodes to dump CREATE NODE / CREATE NODE >> GROUP, that is required as one of the missing links in adding a new node. >> How do you think about that? >> >> >> On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> Hi Abbas, >>> Please take a look at >>> http://www.postgresql.org/docs/9.2/static/app-pgdump.html, which gives >>> all the command line options for pg_dump. instead of >>> include-to-node-clause, just include-nodes would suffice, I guess. 
>>> >>> On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt <abb...@en...>wrote: >>> >>>> PFA an updated patch that provides a command line argument called >>>> --include-to-node-clause to let pg_dump know that the created dump is >>>> supposed to emit TO NODE clause in the CREATE TABLE command. >>>> If the argument is provided while taking the dump from a datanode, it >>>> does not show TO NODE clause in the dump since the catalog table is empty >>>> in this case. >>>> The documentation of pg_dump is updated accordingly. >>>> The rest of the functionality stays the same as before. >>>> >>>> >>>> On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> I think we should always dump DISTRIBUTE BY. >>>>> >>>>> PG does not stop dumping (or provide an option to do so) newer syntax >>>>> so that the dump will work on older versions. On similar lines, an XC dump >>>>> cannot be used against PG without modification (removing DISTRIBUTE BY). >>>>> There can be more serious problems like exceeding table size limits if an >>>>> XC dump is tried to be restored in PG. >>>>> >>>>> As to TO NODE clause, I agree, that one can restore the dump on a >>>>> cluster with different configuration, so giving an option to dump TO NODE >>>>> clause will help. 
>>>>> >>>>> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >>>>> mic...@gm...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt < >>>>>> abb...@en...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>>>>> mic...@gm...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>>>>> abb...@en...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>>>>> mic...@gm...> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>>>>> abb...@en...> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> PFA a patch to fix pg_dump to generate TO NODE clause in the >>>>>>>>>>> dump. >>>>>>>>>>> This is required because otherwise all tables get created on all >>>>>>>>>>> nodes after a dump-restore cycle. >>>>>>>>>>> >>>>>>>>>> Not sure this is good if you take a dump of an XC cluster to >>>>>>>>>> restore that to a vanilla Postgres cluster. >>>>>>>>>> Why not adding a new option that would control the generation of >>>>>>>>>> this clause instead of forcing it? >>>>>>>>>> >>>>>>>>> >>>>>>>>> I think you can use the pg_dump that comes with vanilla PG to do >>>>>>>>> that, can't you? But I am open to adding a control option if every body >>>>>>>>> thinks so. >>>>>>>>> >>>>>>>> Sure you can, this is just to simplify the life of users a maximum >>>>>>>> by not having multiple pg_dump binaries in their serves. >>>>>>>> Saying that, I think that there is no option to choose if >>>>>>>> DISTRIBUTE BY is printed in the dump or not... >>>>>>>> >>>>>>> >>>>>>> Yah if we choose to have an option we will put both DISTRIBUTE BY >>>>>>> and TO NODE under it. >>>>>>> >>>>>> Why not an option for DISTRIBUTE BY, and another for TO NODE? >>>>>> This would bring more flexibility to the way dumps are generated. 
>>>>>> -- >>>>>> Michael >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> Everyone hates slow websites. So do we. >>>>>> Make your web apps faster with AppDynamics >>>>>> Download AppDynamics Lite for free today: >>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>> _______________________________________________ >>>>>> Postgres-xc-developers mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Enterprise Postgres Company >>>>> >>>> >>>> >>>> >>>> -- >>>> -- >>>> Abbas >>>> Architect >>>> EnterpriseDB Corporation >>>> The Enterprise PostgreSQL Company >>>> >>>> Phone: 92-334-5100153 >>>> >>>> Website: www.enterprisedb.com >>>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>>> >>>> This e-mail message (and any attachment) is intended for the use of >>>> the individual or entity to whom it is addressed. This message >>>> contains information from EnterpriseDB Corporation that may be >>>> privileged, confidential, or exempt from disclosure under applicable >>>> law. If you are not the intended recipient or authorized to receive >>>> this for the intended recipient, any use, dissemination, distribution, >>>> retention, archiving, or copying of this communication is strictly >>>> prohibited. If you have received this e-mail in error, please notify >>>> the sender immediately by reply e-mail and delete this message. 
>>>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >> >> >> >> -- >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
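[Editor's note] The thread above converges on a pg_dump option (eventually named `--include-nodes`) that emits only CREATE NODE / CREATE NODE GROUP commands, so the node metadata of an existing cluster can be replayed on a newly added coordinator. The sketch below is not the actual pg_dump patch; it is a minimal illustration, in Python, of what such a nodes-only dump might contain. The catalog rows, host names, and ports are invented for the example, and the CREATE NODE syntax follows the general Postgres-XC form.

```python
# Hypothetical sketch of a --include-nodes style dump: node definitions
# only, suitable for replaying on a new coordinator. Not the real patch.

def dump_nodes(nodes, groups):
    """Render CREATE NODE / CREATE NODE GROUP statements from catalog rows."""
    stmts = []
    for n in nodes:
        stmts.append(
            "CREATE NODE %s WITH (TYPE = '%s', HOST = '%s', PORT = %d);"
            % (n["name"], n["type"], n["host"], n["port"])
        )
    for g in groups:
        # One line per node group, listing its member nodes.
        stmts.append(
            "CREATE NODE GROUP %s WITH (%s);" % (g["name"], ", ".join(g["members"]))
        )
    return "\n".join(stmts)

# Made-up catalog contents for illustration.
nodes = [
    {"name": "dn1", "type": "datanode", "host": "10.0.0.1", "port": 15432},
    {"name": "dn2", "type": "datanode", "host": "10.0.0.2", "port": 15432},
]
groups = [{"name": "g1", "members": ["dn1", "dn2"]}]
print(dump_nodes(nodes, groups))
```

As the thread notes, such a dump would still have to be run against each existing coordinator in turn, since restoring it on one coordinator does not propagate the node definitions to the others.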
From: Ashutosh B. <ash...@en...> - 2013-03-04 06:26:51
|
Dumping TO NODE clause only makes sense if we dump CREATE NODE/ CREATE NODE GROUP. Dumping CREATE NODE/CREATE NODE GROUP may make sense independently, but might be useless without dumping TO NODE clause. BTW, OTOH, dumping CREATE NODE/CREATE NODE GROUP clause wouldn't create the nodes on all the coordinators, but only the coordinator where dump will be restored. That's another thing you will need to consider OR are you going to fix that as well? On Mon, Mar 4, 2013 at 11:41 AM, Abbas Butt <abb...@en...>wrote: > I was thinking of using include-nodes to dump CREATE NODE / CREATE NODE > GROUP, that is required as one of the missing links in adding a new node. > How do you think about that? > > > On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat < > ash...@en...> wrote: > >> Hi Abbas, >> Please take a look at >> http://www.postgresql.org/docs/9.2/static/app-pgdump.html, which gives >> all the command line options for pg_dump. instead of >> include-to-node-clause, just include-nodes would suffice, I guess. >> >> >> On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt <abb...@en...>wrote: >> >>> PFA a updated patch that provides a command line argument called >>> --include-to-node-clause to let pg_dump know that the created dump is >>> supposed to emit TO NODE clause in the CREATE TABLE command. >>> If the argument is provided while taking the dump from a datanode, it >>> does not show TO NODE clause in the dump since the catalog table is empty >>> in this case. >>> The documentation of pg_dump is updated accordingly. >>> The rest of the functionality stays the same as before. >>> >>> >>> On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < >>> ash...@en...> wrote: >>> >>>> I think we should always dump DISTRIBUTE BY. >>>> >>>> PG does not stop dumping (or provide an option to do so) newer syntax >>>> so that the dump will work on older versions. On similar lines, an XC dump >>>> can not be used against PG without modification (removing DISTRIBUTE BY). 
>>>> There can be more serious problems like exceeding table size limits if an >>>> XC dump is tried to be restored in PG. >>>> >>>> As to TO NODE clause, I agree, that one can restore the dump on a >>>> cluster with different configuration, so giving an option to dump TO NODE >>>> clause will help. >>>> >>>> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >>>> mic...@gm...> wrote: >>>> >>>>> >>>>> >>>>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>>>> mic...@gm...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>>>> abb...@en...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>>>> mic...@gm...> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>>>> abb...@en...> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> PFA a patch to fix pg_dump to generate TO NODE clause in the >>>>>>>>>> dump. >>>>>>>>>> This is required because otherwise all tables get created on all >>>>>>>>>> nodes after a dump-restore cycle. >>>>>>>>>> >>>>>>>>> Not sure this is good if you take a dump of an XC cluster to >>>>>>>>> restore that to a vanilla Postgres cluster. >>>>>>>>> Why not adding a new option that would control the generation of >>>>>>>>> this clause instead of forcing it? >>>>>>>>> >>>>>>>> >>>>>>>> I think you can use the pg_dump that comes with vanilla PG to do >>>>>>>> that, can't you? But I am open to adding a control option if every body >>>>>>>> thinks so. >>>>>>>> >>>>>>> Sure you can, this is just to simplify the life of users a maximum >>>>>>> by not having multiple pg_dump binaries in their serves. >>>>>>> Saying that, I think that there is no option to choose if DISTRIBUTE >>>>>>> BY is printed in the dump or not... >>>>>>> >>>>>> >>>>>> Yah if we choose to have an option we will put both DISTRIBUTE BY and >>>>>> TO NODE under it. 
>>>>>> >>>>> Why not an option for DISTRIBUTE BY, and another for TO NODE? >>>>> This would bring more flexibility to the way dumps are generated. >>>>> -- >>>>> Michael >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> Everyone hates slow websites. So do we. >>>>> Make your web apps faster with AppDynamics >>>>> Download AppDynamics Lite for free today: >>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>> _______________________________________________ >>>>> Postgres-xc-developers mailing list >>>>> Pos...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>> >>>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Enterprise Postgres Company >>>> >>> >>> >>> >>> -- >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. 
>>> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Abbas B. <abb...@en...> - 2013-03-04 06:16:18
|
I was thinking of using include-nodes to dump CREATE NODE / CREATE NODE GROUP, that is required as one of the missing links in adding a new node. How do you think about that? On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat < ash...@en...> wrote: > Hi Abbas, > Please take a look at > http://www.postgresql.org/docs/9.2/static/app-pgdump.html, which gives > all the command line options for pg_dump. instead of > include-to-node-clause, just include-nodes would suffice, I guess. > > > On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt <abb...@en...>wrote: > >> PFA a updated patch that provides a command line argument called >> --include-to-node-clause to let pg_dump know that the created dump is >> supposed to emit TO NODE clause in the CREATE TABLE command. >> If the argument is provided while taking the dump from a datanode, it >> does not show TO NODE clause in the dump since the catalog table is empty >> in this case. >> The documentation of pg_dump is updated accordingly. >> The rest of the functionality stays the same as before. >> >> >> On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> I think we should always dump DISTRIBUTE BY. >>> >>> PG does not stop dumping (or provide an option to do so) newer syntax so >>> that the dump will work on older versions. On similar lines, an XC dump can >>> not be used against PG without modification (removing DISTRIBUTE BY). There >>> can be more serious problems like exceeding table size limits if an XC dump >>> is tried to be restored in PG. >>> >>> As to TO NODE clause, I agree, that one can restore the dump on a >>> cluster with different configuration, so giving an option to dump TO NODE >>> clause will help. 
>>> >>> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >>> mic...@gm...> wrote: >>> >>>> >>>> >>>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt < >>>> abb...@en...> wrote: >>>> >>>>> >>>>> >>>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>>> mic...@gm...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>>> abb...@en...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>>> mic...@gm...> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>>> abb...@en...> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> PFA a patch to fix pg_dump to generate TO NODE clause in the dump. >>>>>>>>> This is required because otherwise all tables get created on all >>>>>>>>> nodes after a dump-restore cycle. >>>>>>>>> >>>>>>>> Not sure this is good if you take a dump of an XC cluster to >>>>>>>> restore that to a vanilla Postgres cluster. >>>>>>>> Why not adding a new option that would control the generation of >>>>>>>> this clause instead of forcing it? >>>>>>>> >>>>>>> >>>>>>> I think you can use the pg_dump that comes with vanilla PG to do >>>>>>> that, can't you? But I am open to adding a control option if every body >>>>>>> thinks so. >>>>>>> >>>>>> Sure you can, this is just to simplify the life of users a maximum by >>>>>> not having multiple pg_dump binaries in their serves. >>>>>> Saying that, I think that there is no option to choose if DISTRIBUTE >>>>>> BY is printed in the dump or not... >>>>>> >>>>> >>>>> Yah if we choose to have an option we will put both DISTRIBUTE BY and >>>>> TO NODE under it. >>>>> >>>> Why not an option for DISTRIBUTE BY, and another for TO NODE? >>>> This would bring more flexibility to the way dumps are generated. >>>> -- >>>> Michael >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Everyone hates slow websites. So do we. 
>>>> Make your web apps faster with AppDynamics >>>> Download AppDynamics Lite for free today: >>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >> >> >> >> -- >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. 
If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Amit K. <ami...@en...> - 2013-03-04 05:47:22
|
On 1 March 2013 13:53, Nikhil Sontakke <ni...@st...> wrote: >> >> Issue: Whether we should fetch the whole from the datanode (OLD row) and not >> just ctid and node_id and required columns and store it at the coordinator >> for the processing OR whether we should fetch each row (OLD and NEW >> variants) while processing each row. >> >> Both of them have performance impacts - the first one has disk impact for >> large number of rows whereas the second has network impact for querying >> rows. Is it possible to do some analytical assessment as to which of them >> would be better? If you can come up with something concrete (may be numbers >> or formulae) we will be able to judge better as to which one to pick up. Will check if we can come up with some sensible analysis or figures. >> > > Or we can consider a hybrid approach of getting the rows in batches of > 1000 or so if possible as well. That ways they get into coordinator > memory in one shot and can be processed in batches. Obviously this > should be considered if it's not going to be a complicated > implementation. It just occurred to me that it would not be that hard to optimize the row-fetching-by-ctid as shown below: 1. When it is time to fire the queued triggers at the statement/transaction end, initialize cursors - one cursor per datanode - which would do: SELECT remote_heap_fetch(table_name, '<ctidlist>'); We can form this ctidlist out of the trigger even list. 2. For each trigger event entry in the trigger queue, FETCH NEXT using the appropriate cursor name according to the datanode id to which the trigger entry belongs. > >>> Currently we fetch all attributes in the SELECT subplans. I have >>> created another patch to fetch only the required attribtues, but have >>> not merged that into this patch. > > Do we have other places where we unnecessary fetch all attributes? > ISTM, this should be fixed as a performance improvement first ahead of > everything else. 
I believe DML subplan is the only remaining place where we fetch all attributes. And yes, this is a must-have for triggers, otherwise, the other optimizations would be of no use. > >>> 2. One important TODO for BEFORE trigger is this: Just before >>> invoking the trigger functions, in PG, the tuple is row-locked >>> (exclusive) by GetTupleTrigger() and the locked version is fetched >>> from the table. So it is made sure that while all the triggers for >>> that table are executed, no one can update that particular row. >>> In the patch, we haven't locked the row. We need to lock it either by >>> executing : >>> 1. SELECT * from tab1 where ctid = <ctid_val> FOR UPDATE, and then >>> use the returned ROW as the OLD row. >>> OR >>> 2. The UPDATE subplan itself should have SELECT for UPDATE so that >>> the row is already locked, and we don't have to lock it again. >>> #2 is simple though it might cause some amount of longer waits in general. >>> Using #1, though the locks would be acquired only when the particular >>> row is updated, the locks would be released only after transaction >>> end, so #1 might not be worth implementing. >>> Also #1 requires another explicit remote fetch for the >>> lock-and-get-latest-version operation. >>> I am more inclined towards #2. >>> >> The option #2 however, has problem of locking too many rows if there are >> coordinator quals in the subplans IOW the number of rows finally updated are >> lesser than the number of rows fetched from the datanode. It can cause >> unwanted deadlocks. Unless there is a way to release these extra locks, I am >> afraid this option will be a problem. True. Regardless of anything else - whether it is deadlocks or longer waits, we should not lock rows that are not to be updated. There is a more general row-locking issue that we need to solve first : 3606317. I anticipate that solving this will solve the trigger specific lock issue. 
So for triggers, this is a must-have, and I am going to solve this issue as part of this bug 3606317. >> > Deadlocks? ISTM, we can get more lock waits because of this but I do > not see deadlock scenarios.. > > With the FQS shipping work being done by Ashutosh, will we also ship > major chunks of subplans to the datanodes? If yes, then row locking > will only involve required tuples (hopefully) from the coordinator's > point of view. > > Also, something radical is can be invent a new type of FOR [NODE] > UPDATE type lock to minimize the impact of such locking of rows on > datanodes? > > Regards, > Nikhils > >>> >>> 3. The BEFORE trigger function can change the distribution column >>> itself. We need to add a check at the end of the trigger executions. >>> >> >> Good, you thought about that. Yes we should check it. >> >>> >>> 4. Fetching OLD row for WHEN clause handling. >>> >>> 5. Testing with mix of Shippable and non-shippable ROW triggers >>> >>> 6. Other types of triggers. INSTEAD triggers are anticipated to work >>> without significant changes, but they are yet to be tested. >>> INSERT/DELETE triggers: Most of the infrastructure has been done while >>> implementing UPDATE triggers. But some changes specific to INSERT and >>> DELETE are yet to be done. >>> Deferred triggers to be tested. >>> >>> 7. Regression analysis. There are some new failures. Will post another >>> fair version of the patch after regression analysis and fixing various >>> TODOs. >>> >>> Comments welcome. >>> >>> >>> ------------------------------------------------------------------------------ >>> Everyone hates slow websites. So do we. >>> Make your web apps faster with AppDynamics >>> Download AppDynamics Lite for free today: >>> http://p.sf.net/sfu/appdyn_d2d_feb >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service |
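[Editor's note] The two-step batching idea Amit describes above — form a ctid list per datanode from the trigger event queue, open one `SELECT remote_heap_fetch(...)` cursor per datanode, then FETCH NEXT per queued event — can be sketched as follows. This is a data-structure illustration only: `remote_heap_fetch`, the event-queue fields, and the query shape are taken from the discussion, not from real Postgres-XC code.

```python
# Sketch of the proposed batching: group queued trigger events by the
# datanode they came from, so each datanode needs only one batch cursor.

from collections import OrderedDict

def build_fetch_batches(trigger_events):
    """Map each datanode id to the ordered list of ctids queued against it."""
    batches = OrderedDict()
    for ev in trigger_events:
        batches.setdefault(ev["node_id"], []).append(ev["ctid"])
    return batches

def cursor_queries(table, batches):
    """Step 1 of the proposal: one remote_heap_fetch cursor per datanode."""
    return {
        node: "SELECT remote_heap_fetch('%s', '%s');" % (table, ",".join(ctids))
        for node, ctids in batches.items()
    }

# Invented event queue: UPDATEs touching rows on datanodes 1 and 2.
events = [
    {"node_id": 1, "ctid": "(0,1)"},
    {"node_id": 2, "ctid": "(0,7)"},
    {"node_id": 1, "ctid": "(0,3)"},
]
batches = build_fetch_batches(events)
print(cursor_queries("tab1", batches))
```

Step 2 of the proposal would then walk the trigger queue in order, issuing FETCH NEXT on the cursor belonging to each event's `node_id`, so OLD rows arrive from each datanode in one round trip rather than one fetch per row.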
From: Ashutosh B. <ash...@en...> - 2013-03-04 04:09:46
|
Hi Abbas, Please take a look at http://www.postgresql.org/docs/9.2/static/app-pgdump.html, which gives all the command line options for pg_dump. instead of include-to-node-clause, just include-nodes would suffice, I guess. On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt <abb...@en...>wrote: > PFA a updated patch that provides a command line argument called > --include-to-node-clause to let pg_dump know that the created dump is > supposed to emit TO NODE clause in the CREATE TABLE command. > If the argument is provided while taking the dump from a datanode, it does > not show TO NODE clause in the dump since the catalog table is empty in > this case. > The documentation of pg_dump is updated accordingly. > The rest of the functionality stays the same as before. > > > On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < > ash...@en...> wrote: > >> I think we should always dump DISTRIBUTE BY. >> >> PG does not stop dumping (or provide an option to do so) newer syntax so >> that the dump will work on older versions. On similar lines, an XC dump can >> not be used against PG without modification (removing DISTRIBUTE BY). There >> can be more serious problems like exceeding table size limits if an XC dump >> is tried to be restored in PG. >> >> As to TO NODE clause, I agree, that one can restore the dump on a cluster >> with different configuration, so giving an option to dump TO NODE clause >> will help. >> >> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >> mic...@gm...> wrote: >> >>> >>> >>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt <abb...@en... 
>>> > wrote: >>> >>>> >>>> >>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>> mic...@gm...> wrote: >>>> >>>>> >>>>> >>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>> mic...@gm...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>> abb...@en...> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> PFA a patch to fix pg_dump to generate TO NODE clause in the dump. >>>>>>>> This is required because otherwise all tables get created on all >>>>>>>> nodes after a dump-restore cycle. >>>>>>>> >>>>>>> Not sure this is good if you take a dump of an XC cluster to restore >>>>>>> that to a vanilla Postgres cluster. >>>>>>> Why not adding a new option that would control the generation of >>>>>>> this clause instead of forcing it? >>>>>>> >>>>>> >>>>>> I think you can use the pg_dump that comes with vanilla PG to do >>>>>> that, can't you? But I am open to adding a control option if every body >>>>>> thinks so. >>>>>> >>>>> Sure you can, this is just to simplify the life of users a maximum by >>>>> not having multiple pg_dump binaries in their serves. >>>>> Saying that, I think that there is no option to choose if DISTRIBUTE >>>>> BY is printed in the dump or not... >>>>> >>>> >>>> Yah if we choose to have an option we will put both DISTRIBUTE BY and >>>> TO NODE under it. >>>> >>> Why not an option for DISTRIBUTE BY, and another for TO NODE? >>> This would bring more flexibility to the way dumps are generated. >>> -- >>> Michael >>> >>> >>> ------------------------------------------------------------------------------ >>> Everyone hates slow websites. So do we. >>> Make your web apps faster with AppDynamics >>> Download AppDynamics Lite for free today: >>> http://p.sf.net/sfu/appdyn_d2d_feb >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Nikhil S. <ni...@st...> - 2013-03-02 05:39:32
|
> > We are running metrics on the system now, we loaded some data recently, and > continuing to do so now. If Hint bits are the case, what steps can I take to > relieve some of this IO? > This is internal IO which will settle down if you are not loading into the same table again immediately. A lumpsum count(1) call (after a recent data load) will touch all pages immediately, so try to avoid that and amortize the IO over multiple specific select calls. HTH, Nikhils > > > From: Mason Sharp [mailto:ma...@st...] > Sent: Friday, March 01, 2013 2:29 PM > To: Arni Sumarlidason > Cc: pos...@li...; Postgres-XC Developers > (pos...@li...) > Subject: Re: [Postgres-xc-general] IO PGXC > > > > > Sent from my IPhone > > > On Mar 1, 2013, at 4:13 PM, Arni Sumarlidason <Arn...@md...> > wrote: > > Users, > > > > I have 20 nodes sitting on a disk arrays, with multiple LUNs. when I issue > queries – `select count(1) from table` for example, I am experiencing heavy > writes and heavy reads. I expected the reads but not the writes and it has > really thrown a wrench in the caching. My first assumption would be the log > files, do you have any other ideas what could be causing all these writes > with a select? > > > > It could be vacuum. Also, did you just load data? Hint bits get updated, > dirtying pages and causing them to be written. > > > > > > Arni Sumarlidason | Software Engineer, Information Technology > > MDA | 820 West Diamond Ave | Gaithersburg, MD | USA > > O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 > > arn...@md...| http://www.mdaus.com > > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Michael P. <mic...@gm...> - 2013-03-02 04:15:24
|
Thanks, pushed with some other changes I noticed at the same time. On Fri, Mar 1, 2013 at 5:41 PM, Nikhil Sontakke <ni...@st...> wrote: > PFA, > > Patch which adds proper formatting at a couple of places. This was > causing the doc build (and hence me ;)) some grief. > > Regards, > Nikhils > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Michael |
From: Nikhil S. <ni...@st...> - 2013-03-02 01:51:14
|
Hmmm, this is going to be a pretty common scenario then. So no option other than #1 I guess. Regards, Nikhils >> Deadlocks? ISTM, we can get more lock waits because of this but I do >> not see deadlock scenarios.. > > > :) Simple rule is: if DBMS is locking transaction-long resources which > application doesn't expect, there are bound to be deadlocks. > > The application induced deadlocks. An application would think that statement > A would not lock row rA because it's not being updated, but actually it gets > locked for UPDATE because of option #2 it locks it. The same application > would assume that statement B would not lock row rB the same way. WIth this > context consider following sequence of events > > Statement A updates row rB > > Statement B updates row rA > > Statement A tries to update row rM, but XC tries to lock rA (because of > option #2) and waits > > Statement B tries to update row rN, but XC tries to lock rB (because of > option #2) and waits > > None of A and B can proceed and thus deadlock, even if the application > doesn't expect those to deadlock. > >> >> >> With the FQS shipping work being done by Ashutosh, will we also ship >> major chunks of subplans to the datanodes? If yes, then row locking >> will only involve required tuples (hopefully) from the coordinator's >> point of view. >> > > The push-down will work only when there shippable subplans, but if they are > not ... > >> >> Also, something radical is can be invent a new type of FOR [NODE] >> UPDATE type lock to minimize the impact of such locking of rows on >> datanodes? >> >> Regards, >> Nikhils >> >> >> >> >> 3. The BEFORE trigger function can change the distribution column >> >> itself. We need to add a check at the end of the trigger executions. >> >> >> > >> > Good, you thought about that. Yes we should check it. >> > >> >> >> >> 4. Fetching OLD row for WHEN clause handling. >> >> >> >> 5. Testing with mix of Shippable and non-shippable ROW triggers >> >> >> >> 6. 
Other types of triggers. INSTEAD triggers are anticipated to work >> >> without significant changes, but they are yet to be tested. >> >> INSERT/DELETE triggers: Most of the infrastructure has been done while >> >> implementing UPDATE triggers. But some changes specific to INSERT and >> >> DELETE are yet to be done. >> >> Deferred triggers to be tested. >> >> >> >> 7. Regression analysis. There are some new failures. Will post another >> >> fair version of the patch after regression analysis and fixing various >> >> TODOs. >> >> >> >> Comments welcome. >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> Everyone hates slow websites. So do we. >> >> Make your web apps faster with AppDynamics >> >> Download AppDynamics Lite for free today: >> >> http://p.sf.net/sfu/appdyn_d2d_feb >> >> _______________________________________________ >> >> Postgres-xc-developers mailing list >> >> Pos...@li... >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> > >> > >> > >> > -- >> > Best Wishes, >> > Ashutosh Bapat >> > EntepriseDB Corporation >> > The Enterprise Postgres Company >> > >> > >> > ------------------------------------------------------------------------------ >> > Everyone hates slow websites. So do we. >> > Make your web apps faster with AppDynamics >> > Download AppDynamics Lite for free today: >> > http://p.sf.net/sfu/appdyn_d2d_feb >> > _______________________________________________ >> > Postgres-xc-developers mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Service > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
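Stripped of the XC-specific implicit locks, the application-induced deadlock Ashutosh describes is the classic lock-order inversion, which can be reproduced with two concurrent sessions. A hypothetical two-session sketch (table `t`, its rows, and the pg_sleep calls are all invented just to force the overlap; under option #2 the second lock in each session would be taken implicitly by XC, so the application never sees it coming):

```shell
# Assumes a table like: CREATE TABLE t(k text PRIMARY KEY, v int);
# Session A locks rB first, then wants rA:
psql test <<'EOF' &
BEGIN;
UPDATE t SET v = v + 1 WHERE k = 'rB';
SELECT pg_sleep(2);               -- let session B grab rA
UPDATE t SET v = v + 1 WHERE k = 'rA';
COMMIT;
EOF
# Session B locks rA first, then wants rB:
psql test <<'EOF' &
BEGIN;
UPDATE t SET v = v + 1 WHERE k = 'rA';
SELECT pg_sleep(2);
UPDATE t SET v = v + 1 WHERE k = 'rB';
COMMIT;
EOF
wait   # the deadlock detector aborts one session; the other commits
```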
From: Abbas B. <abb...@en...> - 2013-03-02 00:11:10
|
On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar < ami...@en...> wrote: > On 19 February 2013 12:37, Abbas Butt <abb...@en...> wrote: > > > > Hi, > > Attached please find a patch that locks the cluster so that dump can be > > taken to be restored on the new node to be added. > > > > To lock the cluster the patch adds a new GUC parameter called > > xc_lock_for_backup, however its status is maintained by the pooler. The > > reason is that the default behavior of XC is to release connections as > soon > > as a command is done and it uses PersistentConnections GUC to control the > > behavior. We in this case however need a status that is independent of > the > > setting of PersistentConnections. > > > > Assume we have two coordinator cluster, the patch provides this behavior: > > > > Case 1: set and show > > ==================== > > psql test -p 5432 > > set xc_lock_for_backup=yes; > > show xc_lock_for_backup; > > xc_lock_for_backup > > -------------------- > > yes > > (1 row) > > > > Case 2: set from one client show from other > > ================================== > > psql test -p 5432 > > set xc_lock_for_backup=yes; > > (From another tab) > > psql test -p 5432 > > show xc_lock_for_backup; > > xc_lock_for_backup > > -------------------- > > yes > > (1 row) > > > > Case 3: set from one, quit it, run again and show > > ====================================== > > psql test -p 5432 > > set xc_lock_for_backup=yes; > > \q > > psql test -p 5432 > > show xc_lock_for_backup; > > xc_lock_for_backup > > -------------------- > > yes > > (1 row) > > > > Case 4: set on one coordinator, show from other > > ===================================== > > psql test -p 5432 > > set xc_lock_for_backup=yes; > > (From another tab) > > psql test -p 5433 > > show xc_lock_for_backup; > > xc_lock_for_backup > > -------------------- > > yes > > (1 row) > > > > pg_dump and pg_dumpall seem to work fine after locking the cluster for > > backup but I would test these utilities in detail next. 
> > > > Also I have yet to look in detail that standard_ProcessUtility is the > only > > place that updates the portion of catalog that is dumped. There may be > some > > other places too that need to be blocked for catalog updates. > > > > The patch adds no extra warnings and regression shows no extra failure. > > > > Comments are welcome. > > Abbas wrote on another thread: > > > Amit wrote on another thread: > >> I haven't given a thought on the earlier patch you sent for cluster lock > >> implementation; may be we can discuss this on that thread, but just a > quick > >> question: > >> > >> Does the cluster-lock command wait for the ongoing DDL commands to > finish > >> ? If not, we have problems. The subsequent pg_dump would not contain > objects > >> created by these particular DDLs. > > > > > > Suppose you have a two coordinator cluster. Assume one client connected > to > > each. Suppose one client issues a lock cluster command and the other > issues > > a DDL. Is this what you mean by an ongoing DDL? If true then answer to > your > > question is Yes. > > > > Suppose you have a prepared transaction that has a DDL in it, again if > this > > can be considered an on going DDL, then again answer to your question is > > Yes. > > > > Suppose you have a two coordinator cluster. Assume one client connected > to > > each. One client starts a transaction and issues a DDL, the second client > > issues a lock cluster command, the first commits the transaction. If > this is > > an ongoing DDL, then the answer to your question is No. > > Yes this last scenario is what I meant: A DDL has been executed on nodes, > but > not committed, when the cluster lock command is run and then pg_dump > immediately > starts its transaction before the DDL is committed. Here pg_dump does > not see the new objects that would be created. > > I myself am not sure how would we prevent this from happening. There > are two callback hooks that might be worth considering though: > 1. 
Transaction End callback (CallXactCallbacks) > 2. Object creation/drop hook (InvokeObjectAccessHook) > > Suppose we create an object creation/drop hook function that would : > 1. store the current transaction id in a global objects_created list > if the cluster is not locked, > 2. or else if the cluster is locked, this hook would ereport() saying > "cannot create catalog objects in this mode". > > And then during transaction commit , a new transaction callback hook will: > 1. Check the above objects_created list to see if the current > transaction has any objects created/dropped. > 2. If found and if the cluster-lock is on, it will again ereport() > saying "cannot create catalog objects in this mode" > > Thinking more on the object creation hook, we can even consider this > as a substitute for checking the cluster-lock status in > standardProcessUtility(). But I am not sure whether this hook does get > called on each of the catalog objects. At least the code comments say > it does. > These are very good ideas, Thanks, I will work on those lines and will report back. > > > > > But its a matter of > > deciding which camp are we going to put COMMIT in, the allow camp, or the > > deny camp. I decided to put it in allow camp, because I have not yet > written > > any code to detect whether a transaction being committed has a DDL in it > or > > not, and stopping all transactions from committing looks too restrictive > to > > me. > > > > > > Do you have some other meaning of an ongoing DDL? > > > > > > > -- > > Abbas > > Architect > > EnterpriseDB Corporation > > The Enterprise PostgreSQL Company > > > > Phone: 92-334-5100153 > > > > Website: www.enterprisedb.com > > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > > Follow us on Twitter: http://www.twitter.com/enterprisedb > > > > This e-mail message (and any attachment) is intended for the use of > > the individual or entity to whom it is addressed. 
This message > > contains information from EnterpriseDB Corporation that may be > > privileged, confidential, or exempt from disclosure under applicable > > law. If you are not the intended recipient or authorized to receive > > this for the intended recipient, any use, dissemination, distribution, > > retention, archiving, or copying of this communication is strictly > > prohibited. If you have received this e-mail in error, please notify > > the sender immediately by reply e-mail and delete this message. > > > > > ------------------------------------------------------------------------------ > > Everyone hates slow websites. So do we. > > Make your web apps faster with AppDynamics > > Download AppDynamics Lite for free today: > > http://p.sf.net/sfu/appdyn_d2d_feb > > _______________________________________________ > > Postgres-xc-developers mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Amit K. <ami...@en...> - 2013-03-01 21:54:56
|
On 19 February 2013 12:37, Abbas Butt <abb...@en...> wrote: > > Hi, > Attached please find a patch that locks the cluster so that dump can be > taken to be restored on the new node to be added. > > To lock the cluster the patch adds a new GUC parameter called > xc_lock_for_backup, however its status is maintained by the pooler. The > reason is that the default behavior of XC is to release connections as soon > as a command is done and it uses PersistentConnections GUC to control the > behavior. We in this case however need a status that is independent of the > setting of PersistentConnections. > > Assume we have two coordinator cluster, the patch provides this behavior: > > Case 1: set and show > ==================== > psql test -p 5432 > set xc_lock_for_backup=yes; > show xc_lock_for_backup; > xc_lock_for_backup > -------------------- > yes > (1 row) > > Case 2: set from one client show from other > ================================== > psql test -p 5432 > set xc_lock_for_backup=yes; > (From another tab) > psql test -p 5432 > show xc_lock_for_backup; > xc_lock_for_backup > -------------------- > yes > (1 row) > > Case 3: set from one, quit it, run again and show > ====================================== > psql test -p 5432 > set xc_lock_for_backup=yes; > \q > psql test -p 5432 > show xc_lock_for_backup; > xc_lock_for_backup > -------------------- > yes > (1 row) > > Case 4: set on one coordinator, show from other > ===================================== > psql test -p 5432 > set xc_lock_for_backup=yes; > (From another tab) > psql test -p 5433 > show xc_lock_for_backup; > xc_lock_for_backup > -------------------- > yes > (1 row) > > pg_dump and pg_dumpall seem to work fine after locking the cluster for > backup but I would test these utilities in detail next. > > Also I have yet to look in detail that standard_ProcessUtility is the only > place that updates the portion of catalog that is dumped. 
There may be some > other places too that need to be blocked for catalog updates. > > The patch adds no extra warnings and regression shows no extra failure. > > Comments are welcome. Abbas wrote on another thread: > Amit wrote on another thread: >> I haven't given a thought on the earlier patch you sent for cluster lock >> implementation; may be we can discuss this on that thread, but just a quick >> question: >> >> Does the cluster-lock command wait for the ongoing DDL commands to finish >> ? If not, we have problems. The subsequent pg_dump would not contain objects >> created by these particular DDLs. > > > Suppose you have a two coordinator cluster. Assume one client connected to > each. Suppose one client issues a lock cluster command and the other issues > a DDL. Is this what you mean by an ongoing DDL? If true then answer to your > question is Yes. > > Suppose you have a prepared transaction that has a DDL in it, again if this > can be considered an on going DDL, then again answer to your question is > Yes. > > Suppose you have a two coordinator cluster. Assume one client connected to > each. One client starts a transaction and issues a DDL, the second client > issues a lock cluster command, the first commits the transaction. If this is > an ongoing DDL, then the answer to your question is No. Yes this last scenario is what I meant: A DDL has been executed on nodes, but not committed, when the cluster lock command is run and then pg_dump immediately starts its transaction before the DDL is committed. Here pg_dump does not see the new objects that would be created. I myself am not sure how would we prevent this from happening. There are two callback hooks that might be worth considering though: 1. Transaction End callback (CallXactCallbacks) 2. Object creation/drop hook (InvokeObjectAccessHook) Suppose we create an object creation/drop hook function that would : 1. 
store the current transaction id in a global objects_created list if the cluster is not locked, 2. or else if the cluster is locked, this hook would ereport() saying "cannot create catalog objects in this mode". And then during transaction commit , a new transaction callback hook will: 1. Check the above objects_created list to see if the current transaction has any objects created/dropped. 2. If found and if the cluster-lock is on, it will again ereport() saying "cannot create catalog objects in this mode" Thinking more on the object creation hook, we can even consider this as a substitute for checking the cluster-lock status in standardProcessUtility(). But I am not sure whether this hook does get called on each of the catalog objects. At least the code comments say it does. > But its a matter of > deciding which camp are we going to put COMMIT in, the allow camp, or the > deny camp. I decided to put it in allow camp, because I have not yet written > any code to detect whether a transaction being committed has a DDL in it or > not, and stopping all transactions from committing looks too restrictive to > me. > > Do you have some other meaning of an ongoing DDL? > > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. 
If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
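The two hooks Amit mentions are real PostgreSQL extension points (`object_access_hook`, invoked via InvokeObjectAccessHook, and `RegisterXactCallback`, invoked via CallXactCallbacks). A hypothetical, untested C sketch of the proposed checks; all names such as `cluster_is_locked` are invented for illustration, the availability of `XACT_EVENT_PRE_COMMIT` depends on the server version, and memory-context handling of the list is elided:

```c
#include "postgres.h"
#include "fmgr.h"
#include "access/xact.h"
#include "catalog/objectaccess.h"
#include "nodes/pg_list.h"

PG_MODULE_MAGIC;

static bool  cluster_is_locked = false;   /* would mirror xc_lock_for_backup */
static List *objects_created = NIL;       /* xids of DDL transactions */

static object_access_hook_type prev_object_access_hook = NULL;

/* Object creation/drop hook: reject DDL outright once the cluster is
 * locked, otherwise remember which transaction performed it. */
static void
backup_lock_object_access(ObjectAccessType access, Oid classId,
                          Oid objectId, int subId, void *arg)
{
    if (cluster_is_locked)
        ereport(ERROR,
                (errmsg("cannot create catalog objects in this mode")));
    objects_created = lappend_oid(objects_created,
                                  (Oid) GetCurrentTransactionId());
    if (prev_object_access_hook)
        prev_object_access_hook(access, classId, objectId, subId, arg);
}

/* Transaction callback: catch the race where the DDL ran before the lock
 * was taken but commits after it. */
static void
backup_lock_xact_callback(XactEvent event, void *arg)
{
    if (event == XACT_EVENT_PRE_COMMIT &&
        cluster_is_locked &&
        list_member_oid(objects_created,
                        (Oid) GetCurrentTransactionIdIfAny()))
        ereport(ERROR,
                (errmsg("cannot create catalog objects in this mode")));
    if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_ABORT)
        objects_created = NIL;    /* sketch: forget finished transactions */
}

void
_PG_init(void)
{
    prev_object_access_hook = object_access_hook;
    object_access_hook = backup_lock_object_access;
    RegisterXactCallback(backup_lock_xact_callback, NULL);
}
```

Since a backend runs one transaction at a time, a per-backend boolean would actually suffice in place of the list; the list form just follows the wording of the proposal above.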
From: Mason S. <ma...@st...> - 2013-03-01 20:26:33
|
Sent from my IPhone On Mar 1, 2013, at 4:13 PM, Arni Sumarlidason <Arn...@md...> wrote: > Users, > > I have 20 nodes sitting on disk arrays, with multiple LUNs. When I issue queries – `select count(1) from table` for example, I am experiencing heavy writes and heavy reads. I expected the reads but not the writes and it has really thrown a wrench in the caching. My first assumption would be the log files, do you have any other ideas what could be causing all these writes with a select? It could be vacuum. Also, did you just load data? Hint bits get updated, dirtying pages and causing them to be written. > > Arni Sumarlidason | Software Engineer, Information Technology > MDA | 820 West Diamond Ave | Gaithersburg, MD | USA > O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 > arn...@md...| http://www.mdaus.com > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |

From: Arni S. <Arn...@md...> - 2013-03-01 20:19:33
|
Thank you for your fast responses, We are running metrics on the system now, we loaded some data recently, and continuing to do so now. If hint bits are the cause, what steps can I take to relieve some of this IO? From: Mason Sharp [mailto:ma...@st...] Sent: Friday, March 01, 2013 2:29 PM To: Arni Sumarlidason Cc: pos...@li...; Postgres-XC Developers (pos...@li...) Subject: Re: [Postgres-xc-general] IO PGXC Sent from my IPhone On Mar 1, 2013, at 4:13 PM, Arni Sumarlidason <Arn...@md...> wrote: Users, I have 20 nodes sitting on disk arrays, with multiple LUNs. When I issue queries – `select count(1) from table` for example, I am experiencing heavy writes and heavy reads. I expected the reads but not the writes and it has really thrown a wrench in the caching. My first assumption would be the log files, do you have any other ideas what could be causing all these writes with a select? It could be vacuum. Also, did you just load data? Hint bits get updated, dirtying pages and causing them to be written. Arni Sumarlidason | Software Engineer, Information Technology MDA | 820 West Diamond Ave | Gaithersburg, MD | USA O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 arn...@md...| http://www.mdaus.com ------------------------------------------------------------------------------ Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_d2d_feb _______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Nikhil S. <ni...@st...> - 2013-03-01 19:36:33
|
PFA, Patch which adds proper formatting at a couple of places. This was causing the doc build (and hence me ;)) some grief. Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Amit K. <ami...@en...> - 2013-03-01 19:35:19
|
On 1 March 2013 01:30, Abbas Butt <abb...@en...> wrote: > > > On Thu, Feb 28, 2013 at 12:44 PM, Amit Khandekar > <ami...@en...> wrote: >> >> >> >> On 28 February 2013 10:23, Abbas Butt <abb...@en...> wrote: >>> >>> Hi All, >>> >>> Attached please find a patch that provides a new command line argument >>> for postgres called --restoremode. >>> >>> While adding a new node to the cluster we need to restore the schema of >>> existing database to the new node. >>> If the new node is a datanode and we connect directly to it, it does not >>> allow DDL, because it is in read only mode & >>> If the new node is a coordinator, it will send DDLs to all the other >>> coordinators which we do not want it to do. >> >> >> What if we allow writes in standalone mode, so that we would initialize >> the new node using standalone mode instead of --restoremode ? > > > Please take a look at the patch, I am using --restoremode in place of > --coordinator & --datanode. I am not sure how would stand alone mode fit in > here. I was trying to see if we can avoid adding a new mode, instead, use standalone mode for all the purposes for which restoremode is used. Actually I checked the documentation, it says this mode is used only for debugging or recovery purposes, so now I myself am a bit hesitent about this mode for the purpose of restoring. > >> >> >>> >>> To provide ability to restore on the new node a new command line argument >>> is provided. >>> It is to be provided in place of --coordinator OR --datanode. >>> In restore mode both coordinator and datanode are internally treated as a >>> datanode. >>> For more details see patch comments. >>> >>> After this patch one can add a new node to the cluster. 
>>> >>> Here are the steps to add a new coordinator >>> >>> >>> 1) Initdb new coordinator >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 >>> --nodename coord_3 >>> >>> 2) Make necessary changes in its postgresql.conf, in particular specify >>> new coordinator name and pooler port >>> >>> 3) Connect to any of the existing coordinators & lock the cluster for >>> backup >>> ./psql postgres -p 5432 >>> SET xc_lock_for_backup=yes; >>> \q >> >> >> I haven't given a thought on the earlier patch you sent for cluster lock >> implementation; may be we can discuss this on that thread, but just a quick >> question: >> >> Does the cluster-lock command wait for the ongoing DDL commands to finish >> ? If not, we have problems. The subsequent pg_dump would not contain objects >> created by these particular DDLs. > > > Suppose you have a two coordinator cluster. Assume one client connected to > each. Suppose one client issues a lock cluster command and the other issues > a DDL. Is this what you mean by an ongoing DDL? If true then answer to your > question is Yes. > > Suppose you have a prepared transaction that has a DDL in it, again if this > can be considered an on going DDL, then again answer to your question is > Yes. > > Suppose you have a two coordinator cluster. Assume one client connected to > each. One client starts a transaction and issues a DDL, the second client > issues a lock cluster command, the first commits the transaction. If this is > an ongoing DDL, then the answer to your question is No. But its a matter of > deciding which camp are we going to put COMMIT in, the allow camp, or the > deny camp. I decided to put it in allow camp, because I have not yet written > any code to detect whether a transaction being committed has a DDL in it or > not, and stopping all transactions from committing looks too restrictive to > me. > > Do you have some other meaning of an ongoing DDL? > > I agree that we should have discussed this on the right thread. 
Lets > continue this discussion on that thread. Continued on the other thread. > >> >> >>> >>> >>> 4) Connect to any of the existing coordinators and take backup of the >>> database >>> ./pg_dump -p 5432 -C -s >>> --file=/home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql test >>> >>> 5) Start the new coordinator specify --restoremode while starting the >>> coordinator >>> ./postgres --restoremode -D ../data_cord3 -p 5455 >>> >>> 6) connect to the new coordinator directly >>> ./psql postgres -p 5455 >>> >>> 7) create all the datanodes and the rest of the coordinators on the new >>> coordiantor & reload configuration >>> CREATE NODE DATA_NODE_1 WITH (HOST = 'localhost', type = >>> 'datanode', PORT = 15432, PRIMARY); >>> CREATE NODE DATA_NODE_2 WITH (HOST = 'localhost', type = >>> 'datanode', PORT = 25432); >>> >>> CREATE NODE COORD_1 WITH (HOST = 'localhost', type = >>> 'coordinator', PORT = 5432); >>> CREATE NODE COORD_2 WITH (HOST = 'localhost', type = >>> 'coordinator', PORT = 5433); >>> >>> SELECT pgxc_pool_reload(); >>> >>> 8) quit psql >>> >>> 9) Create the new database on the new coordinator >>> ./createdb test -p 5455 >>> >>> 10) create the roles and table spaces manually, the dump does not contain >>> roles or table spaces >>> ./psql test -p 5455 >>> CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE; >>> CREATE TABLESPACE my_space LOCATION >>> '/usr/local/pgsql/my_space_location'; >>> \q >>> >> >> Will pg_dumpall help ? It dumps roles also. > > > Yah , but I am giving example of pg_dump so this step has to be there. 
> >> >> >> >>> >>> 11) Restore the backup that was taken from an existing coordinator by >>> connecting to the new coordinator directly >>> ./psql -d test -f >>> /home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql -p 5455 >>> >>> 11) Quit the new coordinator >>> >>> 12) Connect to any of the existing coordinators & unlock the cluster >>> ./psql postgres -p 5432 >>> SET xc_lock_for_backup=no; >>> \q >>> >> >> Unlocking the cluster has to be done *after* the node is added into the >> cluster. > > > Very true. I stand corrected. This means CREATE NODE has to be allowed when > xc_lock_for_backup is set. > >> >> >> >>> >>> 13) Start the new coordinator as a by specifying --coordinator >>> ./postgres --coordinator -D ../data_cord3 -p 5455 >>> >>> 14) Create the new coordinator on rest of the coordinators and reload >>> configuration >>> CREATE NODE COORD_3 WITH (HOST = 'localhost', type = >>> 'coordinator', PORT = 5455); >>> SELECT pgxc_pool_reload(); >>> >>> 15) The new coordinator is now ready >>> ./psql test -p 5455 >>> create table test_new_coord(a int, b int); >>> \q >>> ./psql test -p 5432 >>> select * from test_new_coord; >>> >>> >>> Here are the steps to add a new datanode >>> >>> >>> 1) Initdb new datanode >>> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data3 --nodename >>> data_node_3 >>> >>> 2) Make necessary changes in its postgresql.conf, in particular specify >>> new datanode name >>> >>> 3) Connect to any of the existing coordinators & lock the cluster for >>> backup >>> ./psql postgres -p 5432 >>> SET xc_lock_for_backup=yes; >>> \q >>> >>> 4) Connect to any of the existing datanodes and take backup of the >>> database >>> ./pg_dump -p 15432 -C -s >>> --file=/home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql test >>> >>> 5) Start the new datanode specify --restoremode while starting the it >>> ./postgres --restoremode -D ../data3 -p 35432 >>> >>> 6) Create the new database on the new datanode >>> ./createdb test -p 35432 >>> >>> 7) 
create the roles and table spaces manually, the dump does not contain >>> roles or table spaces >>> ./psql test -p 35432 >>> CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE; >>> CREATE TABLESPACE my_space LOCATION >>> '/usr/local/pgsql/my_space_location'; >>> \q >>> >>> 8) Restore the backup that was taken from an existing datanode by >>> connecting to the new datanode directly >>> ./psql -d test -f >>> /home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql -p 35432 >>> >>> 9) Quit the new datanode >>> >>> 10) Connect to any of the existing coordinators & unlock the cluster >>> ./psql postgres -p 5432 >>> SET xc_lock_for_backup=no; >>> \q >>> >>> 11) Start the new datanode as a datanode by specifying --datanode >>> ./postgres --datanode -D ../data3 -p 35432 >>> >>> 12) Create the new datanode on all the coordinators and reload >>> configuration >>> CREATE NODE DATA_NODE_3 WITH (HOST = 'localhost', type = >>> 'datanode', PORT = 35432); >>> SELECT pgxc_pool_reload(); >>> >>> 13) Redistribute data by using ALTER TABLE REDISTRIBUTE >>> >>> 14) The new daatnode is now ready >>> ./psql test >>> create table test_new_dn(a int, b int) distribute by replication; >>> insert into test_new_dn values(1,2); >>> EXECUTE DIRECT ON (data_node_1) 'SELECT * from test_new_dn'; >>> EXECUTE DIRECT ON (data_node_2) 'SELECT * from test_new_dn'; >>> EXECUTE DIRECT ON (data_node_3) 'SELECT * from test_new_dn'; >>> >>> Please note that the steps assume that the patch sent earlier >>> 1_lock_cluster.patch in mail subject [Patch to lock cluster] is applied. >>> >>> I have also attached test database scripts, that would help in patch >>> review. >>> >>> Comments are welcome. 
>>> >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. >>> >>> ------------------------------------------------------------------------------ >>> Everyone hates slow websites. So do we. >>> Make your web apps faster with AppDynamics >>> Download AppDynamics Lite for free today: >>> http://p.sf.net/sfu/appdyn_d2d_feb >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> > > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. 
If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. |
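For quick reference, the datanode walkthrough quoted above condenses to the following sketch. Paths, ports, and node names are the example values from the thread, the dump file name is invented, and note Abbas's correction on the coordinator side that unlocking should really happen only after the new node is registered (which requires CREATE NODE to be allowed while xc_lock_for_backup is set):

```shell
# Condensed sketch of the datanode-addition steps (example values only).
initdb -D /usr/local/pgsql/data3 --nodename data_node_3
psql postgres -p 5432 -c "SET xc_lock_for_backup=yes"      # lock cluster (state kept by the pooler)
pg_dump -p 15432 -C -s --file=/tmp/dn1_schema.sql test      # schema from an existing datanode
postgres --restoremode -D /usr/local/pgsql/data3 -p 35432 & # start new node in restore mode
createdb test -p 35432
psql test -p 35432 -c "CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE"  # roles/tablespaces by hand
psql -d test -p 35432 -f /tmp/dn1_schema.sql                # restore the schema dump
# stop the node, unlock the cluster, restart as a real datanode:
psql postgres -p 5432 -c "SET xc_lock_for_backup=no"
postgres --datanode -D /usr/local/pgsql/data3 -p 35432 &
# register the node on every coordinator, then reload the pools:
psql test -p 5432 -c "CREATE NODE data_node_3 WITH (HOST = 'localhost', type = 'datanode', PORT = 35432); SELECT pgxc_pool_reload();"
```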