From: Koichi S. <koi...@gm...> - 2013-02-19 10:11:24
Regression test will create two coordinators and datanodes with fixed names. In this case, we don't have to worry about the names. Anyway, we have to rewrite affected test cases, which could take time. I'm worrying about this. Xion, sorry for taking this long. Maybe it's much better to have separate command or to use pre-defined views. Any more thoughts? ---------- Koichi Suzuki 2013/2/19 Nikhil Sontakke <ni...@st...>: > Hi, > >> According to Ashutosh Bapat, could we just print ALL NODES if the count of >> nodelist is the same as the count of datanodes, otherwise print the node >> list. By this way, the regression test will be resolved. >> > > Umm, I don't particularly like this idea of printing "ALL DATANODES" > just to make regressions work... > > Can we not document that the standard testing environment should have > a so and so config with recommended names for coordinators and > datanodes? > > Regards, > Nikhils > > >> Thanks & Regards, >> >> Benny Wang >> >> 2013/2/15 Ashutosh Bapat <ash...@en...> >>> >>> >>> >>> On Fri, Feb 15, 2013 at 8:28 AM, Koichi Suzuki <koi...@gm...> >>> wrote: >>>> >>>> I found \d+ is used 41 times in the regression. If we extend \d+ >>>> with table distribution, we need to (continue to) modify all of them. >>>> Is it too much? >>>> >>> >>> I don't see regression expected output changes to be of a lot of concern. >>> We can do them ones and for all. There might be some merge conflicts later, >>> but that's something, which would part of our ongoing merge. That's >>> inevitable for a project like XC, which pulls from PG. >>> >>> My concern is different. \d+ or \d whatever, would start printing the list >>> of nodes the table is distributed on, which will be dependent upon the >>> individual's enviroment, and names and number of nodes chosen. So, tests >>> passing for one would not do so for someone else. So, we will need some way >>> to canonicalize the list of nodes so that it comes out to be same >>> independent of the environment. >>> >>> I suggest this, instead of \d+ printing the individual nodes, it first >>> looks at the node groups for a matching node group and prints the name of >>> group if it finds one matching. Next, if all the datanodes are in the list >>> of nodes, it would print ALL DATANODES or something like that. Since the >>> regression tests use default node list, which has all datanodes, for almost >>> (I am sure all) all \d+ outputs we will get ALL DATANODES, which will be >>> independent of the environment used. >>> >>>> >>>> Regards; >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> 2013/2/14 Michael Paquier <mic...@gm...>: >>>> > >>>> > >>>> > On Thu, Feb 14, 2013 at 7:44 PM, Nikhil Sontakke <ni...@st...> >>>> > wrote: >>>> >> >>>> >> Michael seems to be doing a see-saw here :) >>>> > >>>> > After doing manually regression merges for 9.1 and 9.2 on more than 50 >>>> > regression tests for each merge, you learn quite a bit... Making it >>>> > more >>>> > painful will just consume more time than necessary. >>>> > >>>> >> On some other thread he seemed ok to extend \d+ >>>> > >>>> > Call that experience, I was OK with with ¥d+ before actually having a >>>> > look >>>> > at the number of tests using it. >>>> > -- >>>> > Michael >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Free Next-Gen Firewall Hardware Offer >>>> Buy your Sophos next-gen firewall before the end March 2013 >>>> and get the hardware for free! Learn more. 
>>>> http://p.sf.net/sfu/sophos-d2d-feb >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Nikhil S. <ni...@st...> - 2013-02-19 08:02:49
Hi, > According to Ashutosh Bapat, could we just print ALL NODES if the count of > nodelist is the same as the count of datanodes, otherwise print the node > list. By this way, the regression test will be resolved. > Umm, I don't particularly like this idea of printing "ALL DATANODES" just to make regressions work... Can we not document that the standard testing environment should have a so and so config with recommended names for coordinators and datanodes? Regards, Nikhils > Thanks & Regards, > > Benny Wang > > 2013/2/15 Ashutosh Bapat <ash...@en...> >> >> >> >> On Fri, Feb 15, 2013 at 8:28 AM, Koichi Suzuki <koi...@gm...> >> wrote: >>> >>> I found \d+ is used 41 times in the regression. If we extend \d+ >>> with table distribution, we need to (continue to) modify all of them. >>> Is it too much? >>> >> >> I don't see regression expected output changes to be of a lot of concern. >> We can do them ones and for all. There might be some merge conflicts later, >> but that's something, which would part of our ongoing merge. That's >> inevitable for a project like XC, which pulls from PG. >> >> My concern is different. \d+ or \d whatever, would start printing the list >> of nodes the table is distributed on, which will be dependent upon the >> individual's enviroment, and names and number of nodes chosen. So, tests >> passing for one would not do so for someone else. So, we will need some way >> to canonicalize the list of nodes so that it comes out to be same >> independent of the environment. >> >> I suggest this, instead of \d+ printing the individual nodes, it first >> looks at the node groups for a matching node group and prints the name of >> group if it finds one matching. Next, if all the datanodes are in the list >> of nodes, it would print ALL DATANODES or something like that. Since the >> regression tests use default node list, which has all datanodes, for almost >> (I am sure all) all \d+ outputs we will get ALL DATANODES, which will be >> independent of the environment used. >> >>> >>> Regards; >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/2/14 Michael Paquier <mic...@gm...>: >>> > >>> > >>> > On Thu, Feb 14, 2013 at 7:44 PM, Nikhil Sontakke <ni...@st...> >>> > wrote: >>> >> >>> >> Michael seems to be doing a see-saw here :) >>> > >>> > After doing manually regression merges for 9.1 and 9.2 on more than 50 >>> > regression tests for each merge, you learn quite a bit... Making it >>> > more >>> > painful will just consume more time than necessary. >>> > >>> >> On some other thread he seemed ok to extend \d+ >>> > >>> > Call that experience, I was OK with with ¥d+ before actually having a >>> > look >>> > at the number of tests using it. >>> > -- >>> > Michael >>> >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> >> >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer >> Buy your Sophos next-gen firewall before the end March 2013 >> and get the hardware for free! Learn more. 
>> http://p.sf.net/sfu/sophos-d2d-feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Abbas B. <abb...@en...> - 2013-02-19 07:07:10
Hi,

Attached please find a patch that locks the cluster so that a dump can be taken and restored on the new node to be added. To lock the cluster, the patch adds a new GUC parameter called xc_lock_for_backup; however, its status is maintained by the pooler. The reason is that the default behavior of XC is to release connections as soon as a command is done, using the PersistentConnections GUC to control that behavior. In this case, however, we need a status that is independent of the PersistentConnections setting.

Assuming a cluster with two coordinators, the patch provides this behavior:

Case 1: set and show
====================
psql test -p 5432
set xc_lock_for_backup=yes;
show xc_lock_for_backup;
 xc_lock_for_backup
--------------------
 yes
(1 row)

Case 2: set from one client, show from another
==============================================
psql test -p 5432
set xc_lock_for_backup=yes;
(From another tab)
psql test -p 5432
show xc_lock_for_backup;
 xc_lock_for_backup
--------------------
 yes
(1 row)

Case 3: set from one session, quit it, run again and show
==========================================================
psql test -p 5432
set xc_lock_for_backup=yes;
\q
psql test -p 5432
show xc_lock_for_backup;
 xc_lock_for_backup
--------------------
 yes
(1 row)

Case 4: set on one coordinator, show from the other
====================================================
psql test -p 5432
set xc_lock_for_backup=yes;
(From another tab)
psql test -p 5433
show xc_lock_for_backup;
 xc_lock_for_backup
--------------------
 yes
(1 row)

pg_dump and pg_dumpall seem to work fine after locking the cluster for backup, but I will test these utilities in detail next. I also have yet to confirm that standard_ProcessUtility is the only place that updates the portion of the catalog that is dumped; there may be other places that need to be blocked for catalog updates as well. The patch adds no extra warnings and regression shows no extra failures.

Comments are welcome.

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message.
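[Editor's note] The intended workflow, as read from the description above, is roughly the following sketch. The port comes from the examples in the mail; everything else (file names, exact dump command) is an assumption, not part of the patch:

-- Assumed usage sketch: lock the cluster from an existing coordinator,
-- then take the dump that will be restored on the node being added.
SET xc_lock_for_backup = yes;
-- then, from a shell against the same coordinator:
--   pg_dumpall -p 5432 > cluster_dump.sql
-- restore that dump on the new node before registering it with the cluster.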
From: Koichi S. <koi...@gm...> - 2013-02-19 06:20:37
I'm okay with this idea. Any other inputs? ---------- Koichi Suzuki 2013/2/19 Xiong Wang <wan...@gm...>: > Hi All, > > According to Ashutosh Bapat, could we just print ALL NODES if the count of > nodelist is the same as the count of datanodes, otherwise print the node > list. By this way, the regression test will be resolved. > > Thanks & Regards, > > Benny Wang > > 2013/2/15 Ashutosh Bapat <ash...@en...> >> >> >> >> On Fri, Feb 15, 2013 at 8:28 AM, Koichi Suzuki <koi...@gm...> >> wrote: >>> >>> I found \d+ is used 41 times in the regression. If we extend \d+ >>> with table distribution, we need to (continue to) modify all of them. >>> Is it too much? >>> >> >> I don't see regression expected output changes to be of a lot of concern. >> We can do them ones and for all. There might be some merge conflicts later, >> but that's something, which would part of our ongoing merge. That's >> inevitable for a project like XC, which pulls from PG. >> >> My concern is different. \d+ or \d whatever, would start printing the list >> of nodes the table is distributed on, which will be dependent upon the >> individual's enviroment, and names and number of nodes chosen. So, tests >> passing for one would not do so for someone else. So, we will need some way >> to canonicalize the list of nodes so that it comes out to be same >> independent of the environment. >> >> I suggest this, instead of \d+ printing the individual nodes, it first >> looks at the node groups for a matching node group and prints the name of >> group if it finds one matching. Next, if all the datanodes are in the list >> of nodes, it would print ALL DATANODES or something like that. Since the >> regression tests use default node list, which has all datanodes, for almost >> (I am sure all) all \d+ outputs we will get ALL DATANODES, which will be >> independent of the environment used. >> >>> >>> Regards; >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/2/14 Michael Paquier <mic...@gm...>: >>> > >>> > >>> > On Thu, Feb 14, 2013 at 7:44 PM, Nikhil Sontakke <ni...@st...> >>> > wrote: >>> >> >>> >> Michael seems to be doing a see-saw here :) >>> > >>> > After doing manually regression merges for 9.1 and 9.2 on more than 50 >>> > regression tests for each merge, you learn quite a bit... Making it >>> > more >>> > painful will just consume more time than necessary. >>> > >>> >> On some other thread he seemed ok to extend \d+ >>> > >>> > Call that experience, I was OK with with ¥d+ before actually having a >>> > look >>> > at the number of tests using it. >>> > -- >>> > Michael >>> >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> >> >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer >> Buy your Sophos next-gen firewall before the end March 2013 >> and get the hardware for free! Learn more. >> http://p.sf.net/sfu/sophos-d2d-feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... 
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Nikhil S. <ni...@st...> - 2013-02-19 05:24:10
>> >> I think the barrier makes sense when doing a barrier with the rest of the >> cluster components. Then each element is recoverable to the same point in >> time. Oh yes. I did not mean that BARRIER activity should not track GTM stuff. I only meant that GTM behavior should be done independent of BARRIER calls as well. > > Backing up gtm.control triggerd by BARRIER is not associated with WAL > at all. My idea is > 1) add new message to GTM to report BARRIER is going to be recorded, > with barrier ID. > 2) When gtm (and gtm slave) receives this message, gtm/gtm slave > creates gtm.control backup. > > Important thing is to synchronize PITR recovery point with gtm.control. > Yeah, +1 for this approach to integrate GTM with BARRIER activity. >> Most important is XID. With a valid XID, we can get the current sequence >> value from the individual nodes. > > Well, this will save some value consumed by aborted TXs. This is > wasted in vanilla PostgreSQL too and it does not violate sequence > consistency. > Yeah, this loose requirement also makes our life easier. We can just append an appropriate delta to get the next "sane" value for both GXID and sequences. Unless again if I am mistaken about the semantics or something. Regards, Nikhils >> >> We could consider optionally that when a data node connects to GTM that GTM >> acknowledges and then requests back all of the sequence values from the >> node. Then GTM can take the maximum from all the nodes to be able to >> determine what the current value is cluster-wide. No need for an external >> tool. For the nodes, we can implement it as a new stored function. > > I don't think we need this now. > > As Mason wrote, yes we have to consider GXID wraparound. > > Regards; > --- > Koichi > >> >> Regards, >> >> Mason >> >> >>> >>> >>> Regards, >>> Nikhils >>> >>> >>> > >>> >> >>> >> >>> >> > >>> >> > Thoughts? Any other better suggestions? >>> >> >>> >> Regards; >>> >> --- >>> >> Koichi Suzuki >>> >> >>> >> > >>> >> > Regards, >>> >> > Nikhils >>> >> > >>> >> > >>> >> >> 1) If we kill GTM process, GTM will begin to assing GXID values with >>> >> >> the one of the proveious GTM run. It is bad and harmful behaviour. >>> >> >> To >>> >> >> improve it, GTM should write the next gxid value to gtm.control. >>> >> >> When gxid >>> >> >> value reaches this backup, then the backup should be refreshed with >>> >> >> the new >>> >> >> one. That way, we can safely restart GTM even if it was killed. >>> >> >> When GTM >>> >> >> stops normally, it should write the next gxid value to gtm.control. >>> >> >> >>> >> >> 2) We have BARRIER to synchronize PITR recovery line. However, GTM >>> >> >> does not have this capability. In this case, GTM's sequence >>> >> >> definition >>> >> >> could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued >>> >> >> after >>> >> >> the recovery point. Have WAL in GTM could be a general choice but >>> >> >> it seems >>> >> >> to be too much for current GTM feature. Instead, we can have >>> >> >> GTM.control >>> >> >> backups when CREATE BARRIER is issued. Backup files will be >>> >> >> qualified by >>> >> >> BARRIER id so that DBA can choose which backup to use at the end of >>> >> >> PITR. >>> >> >> >>> >> >> Regards; >>> >> >> --- >>> >> >> Koichi >>> >> >> >>> >> >> On Sun, 17 Feb 2013 10:27:37 +0530 >>> >> >> Nikhil Sontakke <ni...@st...> wrote: >>> >> >> >>> >> >>> Hi Suzuki-san, >>> >> >>> >>> >> >>> > It's so helpful to have this patch. Will review and test it >>> >> >>> > before >>> >> >>> > committing. 
>>> >> >>> > >>> >> >>> >>> >> >>> Any comments on this patch? I do not see it in the commit logs. >>> >> >>> >>> >> >>> Regards, >>> >> >>> Nikhils >>> >> >>> >>> >> >>> > Best; >>> >> >>> > --- >>> >> >>> > Koichi Suzuki >>> >> >>> > >>> >> >>> > On Wed, 6 Feb 2013 14:25:48 +0530 >>> >> >>> > Nikhil Sontakke <ni...@st...> wrote: >>> >> >>> > >>> >> >>> >> > Also, gtm_ctl -w option >>> >> >>> >> > seems not work properly. >>> >> >>> >> > >>> >> >>> >> >>> >> >>> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also >>> >> >>> >> tested >>> >> >>> >> "-w -t nsecs" behavior and it seems to work as well. This patch >>> >> >>> >> can >>> >> >>> >> be >>> >> >>> >> easily be backported to 1.0 if desired. I hope all these >>> >> >>> >> obnoxious >>> >> >>> >> gtm >>> >> >>> >> startup issues are resolved now. Scripting was a bit painful >>> >> >>> >> because >>> >> >>> >> of these issues. >>> >> >>> >> >>> >> >>> >> Regards, >>> >> >>> >> Nikhils >>> >> >>> >> >>> >> >>> >> > I will take a look at these issue and fix at least in the next >>> >> >>> >> > major >>> >> >>> >> > release. Daemonization should be back-ported to 1.0.x. >>> >> >>> >> > >>> >> >>> >> > Kind Regards; >>> >> >>> >> > ---------- >>> >> >>> >> > Koichi Suzuki >>> >> >>> >> > >>> >> >>> >> > >>> >> >>> >> > 2012/12/27 Michael Meskes <me...@po...>: >>> >> >>> >> >> I still haven't found the time to dig into this, but could >>> >> >>> >> >> anyone please >>> >> >>> >> >> explain to me how gtm is supposed to start up? A simple grep >>> >> >>> >> >> command doesn't >>> >> >>> >> >> seem to find any call to setsid() in gtm subdirs. Could you >>> >> >>> >> >> please point me to >>> >> >>> >> >> the code where gtm daemonizes? Because if it doesn't >>> >> >>> >> >> correctly, >>> >> >>> >> >> there is no >>> >> >>> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is >>> >> >>> >> >> a >>> >> >>> >> >> logical point >>> >> >>> >> >> to start looking. >>> >> >>> >> >> >>> >> >>> >> >> Michael >>> >> >>> >> >> -- >>> >> >>> >> >> Michael Meskes >>> >> >>> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot >>> >> >>> >> >> (De|Com|Net|Org) >>> >> >>> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) >>> >> >>> >> >> dot >>> >> >>> >> >> Org >>> >> >>> >> >> Jabber: michael.meskes at gmail dot com >>> >> >>> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, >>> >> >>> >> >> PostgreSQL >>> >> >>> >> >> >>> >> >>> >> >> >>> >> >>> >> >> >>> >> >>> >> >> ------------------------------------------------------------------------------ >>> >> >>> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, >>> >> >>> >> >> HTML5, >>> >> >>> >> >> CSS, >>> >> >>> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your >>> >> >>> >> >> skills >>> >> >>> >> >> current >>> >> >>> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by >>> >> >>> >> >> Microsoft >>> >> >>> >> >> MVPs and experts. ON SALE this month only -- learn more at: >>> >> >>> >> >> http://p.sf.net/sfu/learnmore_122712 >>> >> >>> >> >> _______________________________________________ >>> >> >>> >> >> Postgres-xc-developers mailing list >>> >> >>> >> >> Pos...@li... 
>>> >> >>> >> >> >>> >> >>> >> >> >>> >> >>> >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> -- >>> >> >>> >> StormDB - http://www.stormdb.com >>> >> >>> >> The Database Cloud >>> >> >>> >> Postgres-XC Support and Service >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> -- >>> >> >>> StormDB - http://www.stormdb.com >>> >> >>> The Database Cloud >>> >> >>> Postgres-XC Support and Service >>> >> >>> >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > StormDB - http://www.stormdb.com >>> >> > The Database Cloud >>> >> > Postgres-XC Support and Service >>> >> >>> >> >>> >> >>> >> ------------------------------------------------------------------------------ >>> >> The Go Parallel Website, sponsored by Intel - in partnership with >>> >> Geeknet, >>> >> is your hub for all things parallel software development, from weekly >>> >> thought >>> >> leadership blogs to news, videos, case studies, tutorials, tech docs, >>> >> whitepapers, evaluation guides, and opinion stories. Check out the most >>> >> recent posts - join the conversation now. >>> >> http://goparallel.sourceforge.net/ >>> >> >>> >> _______________________________________________ >>> >> Postgres-xc-developers mailing list >>> >> Pos...@li... >>> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> > >>> > >>> > >>> > >>> > -- >>> > Mason Sharp >>> > >>> > >>> > StormDB - http://www.stormdb.com >>> > The Database Cloud >>> > Postgres-XC Support and Services >>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> Postgres-XC Support and Service >> >> >> >> >> -- >> Mason Sharp >> >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Services -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Amit K. <ami...@en...> - 2013-02-19 04:38:56
Below are details that cover the various points that need to be considered for the implementation of BEFORE UPDATE and AFTER UPDATE row triggers. Because UPDATE covers issues that can occur for both DELETE and INSERT, I have targeted UPDATE first. Attached is the WIP patch if you need to peek into specifics.

AFTER triggers
--------------
AFTER triggers are executed only at the end of the statement/transaction. The trigger data is saved in a trigger event queue along with the OLD and NEW ctid. Just before the query/transaction end, the OLD and NEW tuples corresponding to those ctids are fetched using heap_fetch(SnapshotAny), and each of the saved trigger events is executed.

For XC (in the patch), we fetch the tuple remotely from the datanode, and in addition to the ctid we store the xc_node_id in the trigger event data. I created a function to do the remote fetch; this function is called instead of heap_fetch(). To get the NEW row ctid, we add the junk attributes ctid and node_id to the RETURNING target list of the UPDATE. To get the OLD row ctid, we already have the ctid in the SELECT subplan. Currently we fetch all attributes in the SELECT subplans; I have created another patch to fetch only the required attributes, but have not merged it into this patch. Ultimately, when triggers are present, we want the SELECT to look like this:

SELECT ctid, xc_node_id FROM tab1

Another way to do AFTER ROW triggers would be to store all the new and old rows in coordinator memory instead of fetching them when needed. This has the potential of disk writes for a large number of updated rows.

When there is a WHEN condition on the OLD and NEW rows of an AFTER trigger, the WHEN condition is evaluated immediately rather than deferred until the query/transaction end, and triggers whose condition evaluates to false are not added to the event queue. So if WHEN is present, we need to make the OLD row available by fetching it through the SELECT subplan. For this we add another junk attribute, wholerow, to the SELECT target list, so in that case the SELECT will look like this:

SELECT ctid, xc_node_id, tab1.*::tab1 FROM tab1

BEFORE triggers
---------------
These are executed just before the row is inserted/updated. There can be multiple triggers on the same table. The row returned by a trigger function becomes the NEW row of the next trigger, while the OLD row remains the same across all of the triggers. So the NEW row can keep changing, because each trigger function can modify it before returning, and the last NEW row is the one that goes into the table.

We need to add the wholerow junk attribute to the SELECT for fetching the OLD row, and the NEW row is created by overwriting the OLD row values with the modified column values. Once we have the OLD and NEW tuples, the rest of the work is taken care of by the existing PG trigger implementation.

We have been using the subplan slot to generate the data for BINDing the parameter values of the UPDATE. For BEFORE ROW triggers, the data of the NEW row is what has to be bound, and the NEW tuple slot does not carry the ctid and node_id junk attribute values. So we need to generate the BIND data using the data from the NEW row plus the junk attributes from the source data. Similarly, there are changes in the way we generate the parameter markers for the remote UPDATE statement.

TODOs:
======
1. The current implementation patch is a very basic working model. There are various TODOs marked in the patch.

2. One important TODO for BEFORE triggers: just before invoking the trigger functions, PG row-locks the tuple (exclusive) in GetTupleForTrigger() and fetches the locked version from the table, which ensures that nobody can update that particular row while all the triggers for the table are executed. In the patch we haven't locked the row. We need to lock it either by:
   #1. executing SELECT * FROM tab1 WHERE ctid = <ctid_val> FOR UPDATE and using the returned row as the OLD row, or
   #2. having the UPDATE subplan itself do SELECT FOR UPDATE so that the row is already locked and we don't have to lock it again.
   #2 is simple, though it might cause somewhat longer waits in general. With #1 the locks would be acquired only when the particular row is actually updated, but they would still be released only at transaction end, so #1 might not be worth implementing; it also requires another explicit remote fetch for the lock-and-get-latest-version operation. I am more inclined towards #2.

3. The BEFORE trigger function can change the distribution column itself. We need to add a check at the end of the trigger executions.

4. Fetching the OLD row for WHEN clause handling.

5. Testing with a mix of shippable and non-shippable ROW triggers.

6. Other types of triggers. INSTEAD triggers are anticipated to work without significant changes, but they are yet to be tested. INSERT/DELETE triggers: most of the infrastructure was done while implementing UPDATE triggers, but some changes specific to INSERT and DELETE remain. Deferred triggers are yet to be tested.

7. Regression analysis. There are some new failures.

Will post another, more complete version of the patch after regression analysis and fixing the various TODOs. Comments welcome.
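[Editor's note] For concreteness, the kind of row trigger this work has to support looks roughly like the sketch below; table, function, and trigger names are purely illustrative and not taken from the patch. The value returned by the function becomes the NEW row seen by the next BEFORE trigger, as described above.

-- Hypothetical BEFORE UPDATE row trigger on a distributed table.
CREATE TABLE tab1 (id int, val text) DISTRIBUTE BY HASH (id);

CREATE FUNCTION tab1_touch() RETURNS trigger AS $$
BEGIN
    -- Modify the NEW row; the returned row is what the next trigger
    -- (and ultimately the table) sees.
    NEW.val := NEW.val || ' [touched]';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tab1_before_upd
    BEFORE UPDATE ON tab1
    FOR EACH ROW EXECUTE PROCEDURE tab1_touch();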
From: Xiong W. <wan...@gm...> - 2013-02-19 04:09:20
Hi All, According to Ashutosh Bapat, could we just print ALL NODES if the count of nodelist is the same as the count of datanodes, otherwise print the node list. By this way, the regression test will be resolved. Thanks & Regards, Benny Wang 2013/2/15 Ashutosh Bapat <ash...@en...> > > > On Fri, Feb 15, 2013 at 8:28 AM, Koichi Suzuki <koi...@gm...>wrote: > >> I found \d+ is used 41 times in the regression. If we extend \d+ >> with table distribution, we need to (continue to) modify all of them. >> Is it too much? >> >> > I don't see regression expected output changes to be of a lot of concern. > We can do them ones and for all. There might be some merge conflicts later, > but that's something, which would part of our ongoing merge. That's > inevitable for a project like XC, which pulls from PG. > > My concern is different. \d+ or \d whatever, would start printing the list > of nodes the table is distributed on, which will be dependent upon the > individual's enviroment, and names and number of nodes chosen. So, tests > passing for one would not do so for someone else. So, we will need some way > to canonicalize the list of nodes so that it comes out to be same > independent of the environment. > > I suggest this, instead of \d+ printing the individual nodes, it first > looks at the node groups for a matching node group and prints the name of > group if it finds one matching. Next, if all the datanodes are in the list > of nodes, it would print ALL DATANODES or something like that. Since the > regression tests use default node list, which has all datanodes, for almost > (I am sure all) all \d+ outputs we will get ALL DATANODES, which will be > independent of the environment used. > > >> Regards; >> ---------- >> Koichi Suzuki >> >> >> 2013/2/14 Michael Paquier <mic...@gm...>: >> > >> > >> > On Thu, Feb 14, 2013 at 7:44 PM, Nikhil Sontakke <ni...@st...> >> > wrote: >> >> >> >> Michael seems to be doing a see-saw here :) >> > >> > After doing manually regression merges for 9.1 and 9.2 on more than 50 >> > regression tests for each merge, you learn quite a bit... Making it more >> > painful will just consume more time than necessary. >> > >> >> On some other thread he seemed ok to extend \d+ >> > >> > Call that experience, I was OK with with ¥d+ before actually having a >> look >> > at the number of tests using it. >> > -- >> > Michael >> >> >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer >> Buy your Sophos next-gen firewall before the end March 2013 >> and get the hardware for free! Learn more. >> http://p.sf.net/sfu/sophos-d2d-feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > > > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer > Buy your Sophos next-gen firewall before the end March 2013 > and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
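[Editor's note] To make the proposal concrete: the intent is that the \d+ footer stays environment-independent whenever the table is distributed on every datanode, roughly like the sketch below. The exact labels are illustrative only, not a committed output format.

\d+ tab1
    ...
    Distribute By: HASH(id)
    Location Nodes: ALL DATANODES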
From: Koichi S. <koi...@gm...> - 2013-02-19 02:26:47
2013/2/19 Mason Sharp <ma...@st...>: > > > On Mon, Feb 18, 2013 at 11:53 AM, Nikhil Sontakke <ni...@st...> > wrote: >> >> > >> > >> > Yes, sequences are trickier. A single transaction can increment multiple >> > sequences. Barrier seems like one good way to do this periodically, but >> > we >> > may be missing info and may have to query the individual data nodes in >> > order >> > to get the actual latest sequence values. >> > >> >> I am not in favor of relying on barrier for GTM stuff. Just >> implementing currval+delta for both XID and sequences seems OK enough >> to me and it keeps things within the GTM scheme of things. > > > I think the barrier makes sense when doing a barrier with the rest of the > cluster components. Then each element is recoverable to the same point in > time. Backing up gtm.control triggerd by BARRIER is not associated with WAL at all. My idea is 1) add new message to GTM to report BARRIER is going to be recorded, with barrier ID. 2) When gtm (and gtm slave) receives this message, gtm/gtm slave creates gtm.control backup. Important thing is to synchronize PITR recovery point with gtm.control. > >> >> >> > We may instead also consider a WAL-type of mechanism where we write the >> > latest sequence values that a transaction updated at transaction commit. >> > With GTM Proxy grouping requests together, too, and by grouping from the >> > other threads, maybe a single GTM WAL fsync can write out quite a lot of >> > information. >> > >> >> IMHO, WAL types implementation seems overkill to me. Configurable >> delta based logic to write to control file can whet everyone's >> appetite. Same to me. > > > Most important is XID. With a valid XID, we can get the current sequence > value from the individual nodes. Well, this will save some value consumed by aborted TXs. This is wasted in vanilla PostgreSQL too and it does not violate sequence consistency. > > We could consider optionally that when a data node connects to GTM that GTM > acknowledges and then requests back all of the sequence values from the > node. Then GTM can take the maximum from all the nodes to be able to > determine what the current value is cluster-wide. No need for an external > tool. For the nodes, we can implement it as a new stored function. I don't think we need this now. As Mason wrote, yes we have to consider GXID wraparound. Regards; --- Koichi > > Regards, > > Mason > > >> >> >> Regards, >> Nikhils >> >> >> > >> >> >> >> >> >> > >> >> > Thoughts? Any other better suggestions? >> >> >> >> Regards; >> >> --- >> >> Koichi Suzuki >> >> >> >> > >> >> > Regards, >> >> > Nikhils >> >> > >> >> > >> >> >> 1) If we kill GTM process, GTM will begin to assing GXID values with >> >> >> the one of the proveious GTM run. It is bad and harmful behaviour. >> >> >> To >> >> >> improve it, GTM should write the next gxid value to gtm.control. >> >> >> When gxid >> >> >> value reaches this backup, then the backup should be refreshed with >> >> >> the new >> >> >> one. That way, we can safely restart GTM even if it was killed. >> >> >> When GTM >> >> >> stops normally, it should write the next gxid value to gtm.control. >> >> >> >> >> >> 2) We have BARRIER to synchronize PITR recovery line. However, GTM >> >> >> does not have this capability. In this case, GTM's sequence >> >> >> definition >> >> >> could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued >> >> >> after >> >> >> the recovery point. 
Have WAL in GTM could be a general choice but >> >> >> it seems >> >> >> to be too much for current GTM feature. Instead, we can have >> >> >> GTM.control >> >> >> backups when CREATE BARRIER is issued. Backup files will be >> >> >> qualified by >> >> >> BARRIER id so that DBA can choose which backup to use at the end of >> >> >> PITR. >> >> >> >> >> >> Regards; >> >> >> --- >> >> >> Koichi >> >> >> >> >> >> On Sun, 17 Feb 2013 10:27:37 +0530 >> >> >> Nikhil Sontakke <ni...@st...> wrote: >> >> >> >> >> >>> Hi Suzuki-san, >> >> >>> >> >> >>> > It's so helpful to have this patch. Will review and test it >> >> >>> > before >> >> >>> > committing. >> >> >>> > >> >> >>> >> >> >>> Any comments on this patch? I do not see it in the commit logs. >> >> >>> >> >> >>> Regards, >> >> >>> Nikhils >> >> >>> >> >> >>> > Best; >> >> >>> > --- >> >> >>> > Koichi Suzuki >> >> >>> > >> >> >>> > On Wed, 6 Feb 2013 14:25:48 +0530 >> >> >>> > Nikhil Sontakke <ni...@st...> wrote: >> >> >>> > >> >> >>> >> > Also, gtm_ctl -w option >> >> >>> >> > seems not work properly. >> >> >>> >> > >> >> >>> >> >> >> >>> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also >> >> >>> >> tested >> >> >>> >> "-w -t nsecs" behavior and it seems to work as well. This patch >> >> >>> >> can >> >> >>> >> be >> >> >>> >> easily be backported to 1.0 if desired. I hope all these >> >> >>> >> obnoxious >> >> >>> >> gtm >> >> >>> >> startup issues are resolved now. Scripting was a bit painful >> >> >>> >> because >> >> >>> >> of these issues. >> >> >>> >> >> >> >>> >> Regards, >> >> >>> >> Nikhils >> >> >>> >> >> >> >>> >> > I will take a look at these issue and fix at least in the next >> >> >>> >> > major >> >> >>> >> > release. Daemonization should be back-ported to 1.0.x. >> >> >>> >> > >> >> >>> >> > Kind Regards; >> >> >>> >> > ---------- >> >> >>> >> > Koichi Suzuki >> >> >>> >> > >> >> >>> >> > >> >> >>> >> > 2012/12/27 Michael Meskes <me...@po...>: >> >> >>> >> >> I still haven't found the time to dig into this, but could >> >> >>> >> >> anyone please >> >> >>> >> >> explain to me how gtm is supposed to start up? A simple grep >> >> >>> >> >> command doesn't >> >> >>> >> >> seem to find any call to setsid() in gtm subdirs. Could you >> >> >>> >> >> please point me to >> >> >>> >> >> the code where gtm daemonizes? Because if it doesn't >> >> >>> >> >> correctly, >> >> >>> >> >> there is no >> >> >>> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is >> >> >>> >> >> a >> >> >>> >> >> logical point >> >> >>> >> >> to start looking. >> >> >>> >> >> >> >> >>> >> >> Michael >> >> >>> >> >> -- >> >> >>> >> >> Michael Meskes >> >> >>> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot >> >> >>> >> >> (De|Com|Net|Org) >> >> >>> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) >> >> >>> >> >> dot >> >> >>> >> >> Org >> >> >>> >> >> Jabber: michael.meskes at gmail dot com >> >> >>> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, >> >> >>> >> >> PostgreSQL >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> >> ------------------------------------------------------------------------------ >> >> >>> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, >> >> >>> >> >> HTML5, >> >> >>> >> >> CSS, >> >> >>> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your >> >> >>> >> >> skills >> >> >>> >> >> current >> >> >>> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by >> >> >>> >> >> Microsoft >> >> >>> >> >> MVPs and experts. 
ON SALE this month only -- learn more at: >> >> >>> >> >> http://p.sf.net/sfu/learnmore_122712 >> >> >>> >> >> _______________________________________________ >> >> >>> >> >> Postgres-xc-developers mailing list >> >> >>> >> >> Pos...@li... >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >>> >> >> >> >>> >> >> >> >>> >> >> >> >>> >> -- >> >> >>> >> StormDB - http://www.stormdb.com >> >> >>> >> The Database Cloud >> >> >>> >> Postgres-XC Support and Service >> >> >>> >> >> >>> >> >> >>> >> >> >>> -- >> >> >>> StormDB - http://www.stormdb.com >> >> >>> The Database Cloud >> >> >>> Postgres-XC Support and Service >> >> >>> >> >> > >> >> > >> >> > >> >> > -- >> >> > StormDB - http://www.stormdb.com >> >> > The Database Cloud >> >> > Postgres-XC Support and Service >> >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> The Go Parallel Website, sponsored by Intel - in partnership with >> >> Geeknet, >> >> is your hub for all things parallel software development, from weekly >> >> thought >> >> leadership blogs to news, videos, case studies, tutorials, tech docs, >> >> whitepapers, evaluation guides, and opinion stories. Check out the most >> >> recent posts - join the conversation now. >> >> http://goparallel.sourceforge.net/ >> >> >> >> _______________________________________________ >> >> Postgres-xc-developers mailing list >> >> Pos...@li... >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > >> > >> > >> > >> > -- >> > Mason Sharp >> > >> > >> > StormDB - http://www.stormdb.com >> > The Database Cloud >> > Postgres-XC Support and Services >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Service > > > > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services |
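[Editor's note] For reference, a recovery barrier is issued from any coordinator roughly as below, assuming the existing Postgres-XC CREATE BARRIER syntax. Under the idea above, GTM and the GTM slave would write a gtm.control backup tagged with the same barrier ID, so the PITR recovery point and the GTM state stay synchronized.

-- Sketch, assuming the existing XC syntax; the barrier ID is arbitrary
-- and would also qualify the gtm.control backup file.
CREATE BARRIER 'br_20130219';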
From: Koichi S. <koi...@gm...> - 2013-02-18 23:57:21
It's an implicit 2PC which coordinator generates when a transaction updates more than one node. Usually, implicit 2PC is committed/aborted with commit/abort statement. We have a slight chance to have it live when a node crashes. You can use pgxc_clean utility to clean such TXs. You can clean them locally by using pgxc_maintenance_mode. Regards; ---------- Koichi Suzuki 2013/2/19 Nikhil Sontakke <ni...@st...>: > Hi, > > We use prepared transactions a lot. If we prepare and then do not > COMMIT/ABORT then the locks on involved objects will be around for a > long time. These can then cause unrelated statements to hang much > later on as well. > > My question is if I found a prepared transaction in pg_prepared_xacts like so: > > 29697997 | _$XC$29697997 | 2013-02-06 02:01:02.328885+00 | user@db | pgxc > > How can I abort this prepared transaction? > > What's the name to use? > > TIA, > Nikhils > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, > is your hub for all things parallel software development, from weekly thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most > recent posts - join the conversation now. http://goparallel.sourceforge.net/ > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
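[Editor's note] As a concrete sketch of cleaning one such transaction by hand: the gid below is the one reported in the original question; the use of pgxc_maintenance_mode is an assumption, intended to keep the statement local to the node being cleaned rather than propagated.

-- Connect to the node that still holds the prepared transaction, then:
SET pgxc_maintenance_mode = on;   -- assumed: keep the rollback local
ROLLBACK PREPARED '_$XC$29697997';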
From: Mason S. <ma...@st...> - 2013-02-18 18:43:38
On Mon, Feb 18, 2013 at 11:53 AM, Nikhil Sontakke <ni...@st...>wrote: > > > > > > Yes, sequences are trickier. A single transaction can increment multiple > > sequences. Barrier seems like one good way to do this periodically, but > we > > may be missing info and may have to query the individual data nodes in > order > > to get the actual latest sequence values. > > > > I am not in favor of relying on barrier for GTM stuff. Just > implementing currval+delta for both XID and sequences seems OK enough > to me and it keeps things within the GTM scheme of things. > I think the barrier makes sense when doing a barrier with the rest of the cluster components. Then each element is recoverable to the same point in time. > > > We may instead also consider a WAL-type of mechanism where we write the > > latest sequence values that a transaction updated at transaction commit. > > With GTM Proxy grouping requests together, too, and by grouping from the > > other threads, maybe a single GTM WAL fsync can write out quite a lot of > > information. > > > > IMHO, WAL types implementation seems overkill to me. Configurable > delta based logic to write to control file can whet everyone's > appetite. > Most important is XID. With a valid XID, we can get the current sequence value from the individual nodes. We could consider optionally that when a data node connects to GTM that GTM acknowledges and then requests back all of the sequence values from the node. Then GTM can take the maximum from all the nodes to be able to determine what the current value is cluster-wide. No need for an external tool. For the nodes, we can implement it as a new stored function. Regards, Mason > > Regards, > Nikhils > > > > > >> > >> > >> > > >> > Thoughts? Any other better suggestions? > >> > >> Regards; > >> --- > >> Koichi Suzuki > >> > >> > > >> > Regards, > >> > Nikhils > >> > > >> > > >> >> 1) If we kill GTM process, GTM will begin to assing GXID values with > >> >> the one of the proveious GTM run. It is bad and harmful behaviour. > To > >> >> improve it, GTM should write the next gxid value to gtm.control. > When gxid > >> >> value reaches this backup, then the backup should be refreshed with > the new > >> >> one. That way, we can safely restart GTM even if it was killed. > When GTM > >> >> stops normally, it should write the next gxid value to gtm.control. > >> >> > >> >> 2) We have BARRIER to synchronize PITR recovery line. However, GTM > >> >> does not have this capability. In this case, GTM's sequence > definition > >> >> could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued > after > >> >> the recovery point. Have WAL in GTM could be a general choice but > it seems > >> >> to be too much for current GTM feature. Instead, we can have > GTM.control > >> >> backups when CREATE BARRIER is issued. Backup files will be > qualified by > >> >> BARRIER id so that DBA can choose which backup to use at the end of > PITR. > >> >> > >> >> Regards; > >> >> --- > >> >> Koichi > >> >> > >> >> On Sun, 17 Feb 2013 10:27:37 +0530 > >> >> Nikhil Sontakke <ni...@st...> wrote: > >> >> > >> >>> Hi Suzuki-san, > >> >>> > >> >>> > It's so helpful to have this patch. Will review and test it > before > >> >>> > committing. > >> >>> > > >> >>> > >> >>> Any comments on this patch? I do not see it in the commit logs. 
> >> >>> > >> >>> Regards, > >> >>> Nikhils > >> >>> > >> >>> > Best; > >> >>> > --- > >> >>> > Koichi Suzuki > >> >>> > > >> >>> > On Wed, 6 Feb 2013 14:25:48 +0530 > >> >>> > Nikhil Sontakke <ni...@st...> wrote: > >> >>> > > >> >>> >> > Also, gtm_ctl -w option > >> >>> >> > seems not work properly. > >> >>> >> > > >> >>> >> > >> >>> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also > >> >>> >> tested > >> >>> >> "-w -t nsecs" behavior and it seems to work as well. This patch > can > >> >>> >> be > >> >>> >> easily be backported to 1.0 if desired. I hope all these > obnoxious > >> >>> >> gtm > >> >>> >> startup issues are resolved now. Scripting was a bit painful > >> >>> >> because > >> >>> >> of these issues. > >> >>> >> > >> >>> >> Regards, > >> >>> >> Nikhils > >> >>> >> > >> >>> >> > I will take a look at these issue and fix at least in the next > >> >>> >> > major > >> >>> >> > release. Daemonization should be back-ported to 1.0.x. > >> >>> >> > > >> >>> >> > Kind Regards; > >> >>> >> > ---------- > >> >>> >> > Koichi Suzuki > >> >>> >> > > >> >>> >> > > >> >>> >> > 2012/12/27 Michael Meskes <me...@po...>: > >> >>> >> >> I still haven't found the time to dig into this, but could > >> >>> >> >> anyone please > >> >>> >> >> explain to me how gtm is supposed to start up? A simple grep > >> >>> >> >> command doesn't > >> >>> >> >> seem to find any call to setsid() in gtm subdirs. Could you > >> >>> >> >> please point me to > >> >>> >> >> the code where gtm daemonizes? Because if it doesn't > correctly, > >> >>> >> >> there is no > >> >>> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is > a > >> >>> >> >> logical point > >> >>> >> >> to start looking. > >> >>> >> >> > >> >>> >> >> Michael > >> >>> >> >> -- > >> >>> >> >> Michael Meskes > >> >>> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot > >> >>> >> >> (De|Com|Net|Org) > >> >>> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) > dot > >> >>> >> >> Org > >> >>> >> >> Jabber: michael.meskes at gmail dot com > >> >>> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, > >> >>> >> >> PostgreSQL > >> >>> >> >> > >> >>> >> >> > >> >>> >> >> > ------------------------------------------------------------------------------ > >> >>> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, > HTML5, > >> >>> >> >> CSS, > >> >>> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your > skills > >> >>> >> >> current > >> >>> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by > >> >>> >> >> Microsoft > >> >>> >> >> MVPs and experts. ON SALE this month only -- learn more at: > >> >>> >> >> http://p.sf.net/sfu/learnmore_122712 > >> >>> >> >> _______________________________________________ > >> >>> >> >> Postgres-xc-developers mailing list > >> >>> >> >> Pos...@li... 
> >> >>> >> >> > >> >>> >> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> >>> >> > >> >>> >> > >> >>> >> > >> >>> >> -- > >> >>> >> StormDB - http://www.stormdb.com > >> >>> >> The Database Cloud > >> >>> >> Postgres-XC Support and Service > >> >>> > >> >>> > >> >>> > >> >>> -- > >> >>> StormDB - http://www.stormdb.com > >> >>> The Database Cloud > >> >>> Postgres-XC Support and Service > >> >>> > >> > > >> > > >> > > >> > -- > >> > StormDB - http://www.stormdb.com > >> > The Database Cloud > >> > Postgres-XC Support and Service > >> > >> > >> > ------------------------------------------------------------------------------ > >> The Go Parallel Website, sponsored by Intel - in partnership with > Geeknet, > >> is your hub for all things parallel software development, from weekly > >> thought > >> leadership blogs to news, videos, case studies, tutorials, tech docs, > >> whitepapers, evaluation guides, and opinion stories. Check out the most > >> recent posts - join the conversation now. > >> http://goparallel.sourceforge.net/ > >> > >> _______________________________________________ > >> Postgres-xc-developers mailing list > >> Pos...@li... > >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > > > > > > -- > > Mason Sharp > > > > > > StormDB - http://www.stormdb.com > > The Database Cloud > > Postgres-XC Support and Services > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services |
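[Editor's note] A rough sketch of the kind of stored function mentioned above (entirely hypothetical, not an existing XC function): run locally on each node, it reports the last_value of every local sequence so that GTM, or an admin tool, could take the cluster-wide maximum.

-- Hypothetical helper: list each local sequence and its current
-- last_value on this node.
CREATE OR REPLACE FUNCTION local_sequence_values(
    OUT seqname text, OUT seq_last_value bigint)
RETURNS SETOF record AS $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT n.nspname, c.relname
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'S'
    LOOP
        seqname := quote_ident(r.nspname) || '.' || quote_ident(r.relname);
        EXECUTE 'SELECT last_value FROM ' || seqname INTO seq_last_value;
        RETURN NEXT;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT * FROM local_sequence_values();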
From: Nikhil S. <ni...@st...> - 2013-02-18 18:27:00
Hi,

We use prepared transactions a lot. If we PREPARE and then do not COMMIT/ABORT, the locks on the involved objects stay around for a long time, and they can then cause unrelated statements to hang much later on as well.

My question: if I find a prepared transaction in pg_prepared_xacts like so:

29697997 | _$XC$29697997 | 2013-02-06 02:01:02.328885+00 | user@db | pgxc

how can I abort this prepared transaction? What name should I use?

TIA,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-02-18 17:01:54
> > > Yes, sequences are trickier. A single transaction can increment multiple > sequences. Barrier seems like one good way to do this periodically, but we > may be missing info and may have to query the individual data nodes in order > to get the actual latest sequence values. > I am not in favor of relying on barrier for GTM stuff. Just implementing currval+delta for both XID and sequences seems OK enough to me and it keeps things within the GTM scheme of things. > We may instead also consider a WAL-type of mechanism where we write the > latest sequence values that a transaction updated at transaction commit. > With GTM Proxy grouping requests together, too, and by grouping from the > other threads, maybe a single GTM WAL fsync can write out quite a lot of > information. > IMHO, WAL types implementation seems overkill to me. Configurable delta based logic to write to control file can whet everyone's appetite. Regards, Nikhils > >> >> >> > >> > Thoughts? Any other better suggestions? >> >> Regards; >> --- >> Koichi Suzuki >> >> > >> > Regards, >> > Nikhils >> > >> > >> >> 1) If we kill GTM process, GTM will begin to assing GXID values with >> >> the one of the proveious GTM run. It is bad and harmful behaviour. To >> >> improve it, GTM should write the next gxid value to gtm.control. When gxid >> >> value reaches this backup, then the backup should be refreshed with the new >> >> one. That way, we can safely restart GTM even if it was killed. When GTM >> >> stops normally, it should write the next gxid value to gtm.control. >> >> >> >> 2) We have BARRIER to synchronize PITR recovery line. However, GTM >> >> does not have this capability. In this case, GTM's sequence definition >> >> could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued after >> >> the recovery point. Have WAL in GTM could be a general choice but it seems >> >> to be too much for current GTM feature. Instead, we can have GTM.control >> >> backups when CREATE BARRIER is issued. Backup files will be qualified by >> >> BARRIER id so that DBA can choose which backup to use at the end of PITR. >> >> >> >> Regards; >> >> --- >> >> Koichi >> >> >> >> On Sun, 17 Feb 2013 10:27:37 +0530 >> >> Nikhil Sontakke <ni...@st...> wrote: >> >> >> >>> Hi Suzuki-san, >> >>> >> >>> > It's so helpful to have this patch. Will review and test it before >> >>> > committing. >> >>> > >> >>> >> >>> Any comments on this patch? I do not see it in the commit logs. >> >>> >> >>> Regards, >> >>> Nikhils >> >>> >> >>> > Best; >> >>> > --- >> >>> > Koichi Suzuki >> >>> > >> >>> > On Wed, 6 Feb 2013 14:25:48 +0530 >> >>> > Nikhil Sontakke <ni...@st...> wrote: >> >>> > >> >>> >> > Also, gtm_ctl -w option >> >>> >> > seems not work properly. >> >>> >> > >> >>> >> >> >>> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also >> >>> >> tested >> >>> >> "-w -t nsecs" behavior and it seems to work as well. This patch can >> >>> >> be >> >>> >> easily be backported to 1.0 if desired. I hope all these obnoxious >> >>> >> gtm >> >>> >> startup issues are resolved now. Scripting was a bit painful >> >>> >> because >> >>> >> of these issues. >> >>> >> >> >>> >> Regards, >> >>> >> Nikhils >> >>> >> >> >>> >> > I will take a look at these issue and fix at least in the next >> >>> >> > major >> >>> >> > release. Daemonization should be back-ported to 1.0.x. 
>> >>> >> > >> >>> >> > Kind Regards; >> >>> >> > ---------- >> >>> >> > Koichi Suzuki >> >>> >> > >> >>> >> > >> >>> >> > 2012/12/27 Michael Meskes <me...@po...>: >> >>> >> >> I still haven't found the time to dig into this, but could >> >>> >> >> anyone please >> >>> >> >> explain to me how gtm is supposed to start up? A simple grep >> >>> >> >> command doesn't >> >>> >> >> seem to find any call to setsid() in gtm subdirs. Could you >> >>> >> >> please point me to >> >>> >> >> the code where gtm daemonizes? Because if it doesn't correctly, >> >>> >> >> there is no >> >>> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is a >> >>> >> >> logical point >> >>> >> >> to start looking. >> >>> >> >> >> >>> >> >> Michael >> >>> >> >> -- >> >>> >> >> Michael Meskes >> >>> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot >> >>> >> >> (De|Com|Net|Org) >> >>> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot >> >>> >> >> Org >> >>> >> >> Jabber: michael.meskes at gmail dot com >> >>> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, >> >>> >> >> PostgreSQL >> >>> >> >> >> >>> >> >> >> >>> >> >> ------------------------------------------------------------------------------ >> >>> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, >> >>> >> >> CSS, >> >>> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills >> >>> >> >> current >> >>> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by >> >>> >> >> Microsoft >> >>> >> >> MVPs and experts. ON SALE this month only -- learn more at: >> >>> >> >> http://p.sf.net/sfu/learnmore_122712 >> >>> >> >> _______________________________________________ >> >>> >> >> Postgres-xc-developers mailing list >> >>> >> >> Pos...@li... >> >>> >> >> >> >>> >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >>> >> >> >>> >> >> >>> >> >> >>> >> -- >> >>> >> StormDB - http://www.stormdb.com >> >>> >> The Database Cloud >> >>> >> Postgres-XC Support and Service >> >>> >> >>> >> >>> >> >>> -- >> >>> StormDB - http://www.stormdb.com >> >>> The Database Cloud >> >>> Postgres-XC Support and Service >> >>> >> > >> > >> > >> > -- >> > StormDB - http://www.stormdb.com >> > The Database Cloud >> > Postgres-XC Support and Service >> >> >> ------------------------------------------------------------------------------ >> The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, >> is your hub for all things parallel software development, from weekly >> thought >> leadership blogs to news, videos, case studies, tutorials, tech docs, >> whitepapers, evaluation guides, and opinion stories. Check out the most >> recent posts - join the conversation now. >> http://goparallel.sourceforge.net/ >> >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > -- > Mason Sharp > > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Mason S. <ma...@st...> - 2013-02-18 15:42:26
|
On Mon, Feb 18, 2013 at 10:00 AM, Koichi Suzuki <koi...@gm...>wrote: > Hi, > > 2013/2/18 Nikhil Sontakke <ni...@st...>: > >> Thanks. I tested the patch both on master and REL1_0_STABLE and found > they look to work fine. I will commit it to both branches. > >> > > > > Thanks Suzuki-san. > > > >> I found there are two more issues on GTM. > >> > > > > Yeah, these are very important from the correct functioning of the > > cluster. And both of them need to be tackled. > > > > And we need to think properly for a decent mechanism to do this. > > Specifically, the mechanism should not become a performance > > bottleneck. And yeah GTM WAL does seem to be an overkill. > > > > Maybe, we can have a GTM thread which writes the current GXID every > > 1000 increments. In case GTM dies and restarts in between, it can read > > the control file and start with the current value + 1000 as the next > > valid value. > > Yes, I was thinking the same idea. Number of increments can be a > configuration > parameter. Having dedicated thread could be a good idea which does not > bring > too much overhead to GTM opeation. > > > > > For sequences, again a similar mechanism can be though of. I admit I > > haven't looked at the current sequence handling. But we can introduce > > logic to track current saved value and current delta. Again if GTM > > dies and restarts, we can start with saved value + delta to be clear > > of existing sequence values. > We also need to be mindful of XID wraparound. > > Yes, exactly. We should take sequence in to account too. We need > simple mechanism to synchronize coordinator/datanode PITR and GTM > backup. GTM has sufficient infrastructure to backup its own status. > I'm thinking to extend Barrier mechanism to trigger this gtm.control > backup. > Yes, sequences are trickier. A single transaction can increment multiple sequences. Barrier seems like one good way to do this periodically, but we may be missing info and may have to query the individual data nodes in order to get the actual latest sequence values. We may instead also consider a WAL-type of mechanism where we write the latest sequence values that a transaction updated at transaction commit. With GTM Proxy grouping requests together, too, and by grouping from the other threads, maybe a single GTM WAL fsync can write out quite a lot of information. > > > > > Thoughts? Any other better suggestions? > > Regards; > --- > Koichi Suzuki > > > > > Regards, > > Nikhils > > > > > >> 1) If we kill GTM process, GTM will begin to assing GXID values with > the one of the proveious GTM run. It is bad and harmful behaviour. To > improve it, GTM should write the next gxid value to gtm.control. When > gxid value reaches this backup, then the backup should be refreshed with > the new one. That way, we can safely restart GTM even if it was killed. > When GTM stops normally, it should write the next gxid value to gtm.control. > >> > >> 2) We have BARRIER to synchronize PITR recovery line. However, GTM > does not have this capability. In this case, GTM's sequence definition > could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued after > the recovery point. Have WAL in GTM could be a general choice but it > seems to be too much for current GTM feature. Instead, we can have > GTM.control backups when CREATE BARRIER is issued. Backup files will be > qualified by BARRIER id so that DBA can choose which backup to use at the > end of PITR. 
> >> > >> Regards; > >> --- > >> Koichi > >> > >> On Sun, 17 Feb 2013 10:27:37 +0530 > >> Nikhil Sontakke <ni...@st...> wrote: > >> > >>> Hi Suzuki-san, > >>> > >>> > It's so helpful to have this patch. Will review and test it before > committing. > >>> > > >>> > >>> Any comments on this patch? I do not see it in the commit logs. > >>> > >>> Regards, > >>> Nikhils > >>> > >>> > Best; > >>> > --- > >>> > Koichi Suzuki > >>> > > >>> > On Wed, 6 Feb 2013 14:25:48 +0530 > >>> > Nikhil Sontakke <ni...@st...> wrote: > >>> > > >>> >> > Also, gtm_ctl -w option > >>> >> > seems not work properly. > >>> >> > > >>> >> > >>> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also tested > >>> >> "-w -t nsecs" behavior and it seems to work as well. This patch can > be > >>> >> easily be backported to 1.0 if desired. I hope all these obnoxious > gtm > >>> >> startup issues are resolved now. Scripting was a bit painful because > >>> >> of these issues. > >>> >> > >>> >> Regards, > >>> >> Nikhils > >>> >> > >>> >> > I will take a look at these issue and fix at least in the next > major > >>> >> > release. Daemonization should be back-ported to 1.0.x. > >>> >> > > >>> >> > Kind Regards; > >>> >> > ---------- > >>> >> > Koichi Suzuki > >>> >> > > >>> >> > > >>> >> > 2012/12/27 Michael Meskes <me...@po...>: > >>> >> >> I still haven't found the time to dig into this, but could > anyone please > >>> >> >> explain to me how gtm is supposed to start up? A simple grep > command doesn't > >>> >> >> seem to find any call to setsid() in gtm subdirs. Could you > please point me to > >>> >> >> the code where gtm daemonizes? Because if it doesn't correctly, > there is no > >>> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is a > logical point > >>> >> >> to start looking. > >>> >> >> > >>> >> >> Michael > >>> >> >> -- > >>> >> >> Michael Meskes > >>> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot > (De|Com|Net|Org) > >>> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot > Org > >>> >> >> Jabber: michael.meskes at gmail dot com > >>> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, > PostgreSQL > >>> >> >> > >>> >> >> > ------------------------------------------------------------------------------ > >>> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, > CSS, > >>> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills > current > >>> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by > Microsoft > >>> >> >> MVPs and experts. ON SALE this month only -- learn more at: > >>> >> >> http://p.sf.net/sfu/learnmore_122712 > >>> >> >> _______________________________________________ > >>> >> >> Postgres-xc-developers mailing list > >>> >> >> Pos...@li... 
> >>> >> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >>> >> > >>> >> > >>> >> > >>> >> -- > >>> >> StormDB - http://www.stormdb.com > >>> >> The Database Cloud > >>> >> Postgres-XC Support and Service > >>> > >>> > >>> > >>> -- > >>> StormDB - http://www.stormdb.com > >>> The Database Cloud > >>> Postgres-XC Support and Service > >>> > > > > > > > > -- > > StormDB - http://www.stormdb.com > > The Database Cloud > > Postgres-XC Support and Service > > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, > is your hub for all things parallel software development, from weekly > thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most > recent posts - join the conversation now. > http://goparallel.sourceforge.net/ > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services |
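To make the point above concrete, that a single transaction can increment multiple sequences, here is a small illustration (table and sequence names are made up; default serial/sequence settings are assumed). One COMMIT advances two independent sequences, so any periodic checkpoint of sequence state on GTM has to capture all of them, not just one:

    CREATE TABLE orders   (id serial, note text);
    CREATE TABLE invoices (id serial, note text);

    BEGIN;
    INSERT INTO orders   (note) VALUES ('one transaction ...');
    INSERT INTO invoices (note) VALUES ('... advances two sequences');
    COMMIT;

    -- both sequences moved within the same transaction
    SELECT currval('orders_id_seq'), currval('invoices_id_seq');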
From: Koichi S. <koi...@gm...> - 2013-02-18 15:00:12
|
Hi, 2013/2/18 Nikhil Sontakke <ni...@st...>: >> Thanks. I tested the patch both on master and REL1_0_STABLE and found they look to work fine. I will commit it to both branches. >> > > Thanks Suzuki-san. > >> I found there are two more issues on GTM. >> > > Yeah, these are very important from the correct functioning of the > cluster. And both of them need to be tackled. > > And we need to think properly for a decent mechanism to do this. > Specifically, the mechanism should not become a performance > bottleneck. And yeah GTM WAL does seem to be an overkill. > > Maybe, we can have a GTM thread which writes the current GXID every > 1000 increments. In case GTM dies and restarts in between, it can read > the control file and start with the current value + 1000 as the next > valid value. Yes, I was thinking of the same idea. The number of increments can be a configuration parameter. Having a dedicated thread could be a good idea, as it would not bring too much overhead to GTM operation. > > For sequences, again a similar mechanism can be though of. I admit I > haven't looked at the current sequence handling. But we can introduce > logic to track current saved value and current delta. Again if GTM > dies and restarts, we can start with saved value + delta to be clear > of existing sequence values. Yes, exactly. We should take sequences into account too. We need a simple mechanism to synchronize coordinator/datanode PITR and the GTM backup. GTM has sufficient infrastructure to back up its own status. I'm thinking of extending the Barrier mechanism to trigger this gtm.control backup. > > Thoughts? Any other better suggestions? Regards; --- Koichi Suzuki > > Regards, > Nikhils > > >> 1) If we kill GTM process, GTM will begin to assing GXID values with the one of the proveious GTM run. It is bad and harmful behaviour. To improve it, GTM should write the next gxid value to gtm.control. When gxid value reaches this backup, then the backup should be refreshed with the new one. That way, we can safely restart GTM even if it was killed. When GTM stops normally, it should write the next gxid value to gtm.control. >> >> 2) We have BARRIER to synchronize PITR recovery line. However, GTM does not have this capability. In this case, GTM's sequence definition could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued after the recovery point. Have WAL in GTM could be a general choice but it seems to be too much for current GTM feature. Instead, we can have GTM.control backups when CREATE BARRIER is issued. Backup files will be qualified by BARRIER id so that DBA can choose which backup to use at the end of PITR. >> >> Regards; >> --- >> Koichi >> >> On Sun, 17 Feb 2013 10:27:37 +0530 >> Nikhil Sontakke <ni...@st...> wrote: >> >>> Hi Suzuki-san, >>> >>> > It's so helpful to have this patch. Will review and test it before committing. >>> > >>> >>> Any comments on this patch? I do not see it in the commit logs. >>> >>> Regards, >>> Nikhils >>> >>> > Best; >>> > --- >>> > Koichi Suzuki >>> > >>> > On Wed, 6 Feb 2013 14:25:48 +0530 >>> > Nikhil Sontakke <ni...@st...> wrote: >>> > >>> >> > Also, gtm_ctl -w option >>> >> > seems not work properly. >>> >> > >>> >> >>> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also tested >>> >> "-w -t nsecs" behavior and it seems to work as well. This patch can be >>> >> easily be backported to 1.0 if desired. I hope all these obnoxious gtm >>> >> startup issues are resolved now. Scripting was a bit painful because >>> >> of these issues.
>>> >> >>> >> Regards, >>> >> Nikhils >>> >> >>> >> > I will take a look at these issue and fix at least in the next major >>> >> > release. Daemonization should be back-ported to 1.0.x. >>> >> > >>> >> > Kind Regards; >>> >> > ---------- >>> >> > Koichi Suzuki >>> >> > >>> >> > >>> >> > 2012/12/27 Michael Meskes <me...@po...>: >>> >> >> I still haven't found the time to dig into this, but could anyone please >>> >> >> explain to me how gtm is supposed to start up? A simple grep command doesn't >>> >> >> seem to find any call to setsid() in gtm subdirs. Could you please point me to >>> >> >> the code where gtm daemonizes? Because if it doesn't correctly, there is no >>> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is a logical point >>> >> >> to start looking. >>> >> >> >>> >> >> Michael >>> >> >> -- >>> >> >> Michael Meskes >>> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) >>> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org >>> >> >> Jabber: michael.meskes at gmail dot com >>> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL >>> >> >> >>> >> >> ------------------------------------------------------------------------------ >>> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS, >>> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current >>> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft >>> >> >> MVPs and experts. ON SALE this month only -- learn more at: >>> >> >> http://p.sf.net/sfu/learnmore_122712 >>> >> >> _______________________________________________ >>> >> >> Postgres-xc-developers mailing list >>> >> >> Pos...@li... >>> >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> >>> >> >>> >> >>> >> -- >>> >> StormDB - http://www.stormdb.com >>> >> The Database Cloud >>> >> Postgres-XC Support and Service >>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> Postgres-XC Support and Service >>> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service |
From: Nikhil S. <ni...@st...> - 2013-02-18 14:52:24
|
> Thanks. I tested the patch both on master and REL1_0_STABLE and found they look to work fine. I will commit it to both branches. > Thanks Suzuki-san. > I found there are two more issues on GTM. > Yeah, these are very important from the correct functioning of the cluster. And both of them need to be tackled. And we need to think properly for a decent mechanism to do this. Specifically, the mechanism should not become a performance bottleneck. And yeah GTM WAL does seem to be an overkill. Maybe, we can have a GTM thread which writes the current GXID every 1000 increments. In case GTM dies and restarts in between, it can read the control file and start with the current value + 1000 as the next valid value. For sequences, again a similar mechanism can be though of. I admit I haven't looked at the current sequence handling. But we can introduce logic to track current saved value and current delta. Again if GTM dies and restarts, we can start with saved value + delta to be clear of existing sequence values. Thoughts? Any other better suggestions? Regards, Nikhils > 1) If we kill GTM process, GTM will begin to assing GXID values with the one of the proveious GTM run. It is bad and harmful behaviour. To improve it, GTM should write the next gxid value to gtm.control. When gxid value reaches this backup, then the backup should be refreshed with the new one. That way, we can safely restart GTM even if it was killed. When GTM stops normally, it should write the next gxid value to gtm.control. > > 2) We have BARRIER to synchronize PITR recovery line. However, GTM does not have this capability. In this case, GTM's sequence definition could be inconsisitent if ALTER SEQUENCE or DROP SEQUENCE was issued after the recovery point. Have WAL in GTM could be a general choice but it seems to be too much for current GTM feature. Instead, we can have GTM.control backups when CREATE BARRIER is issued. Backup files will be qualified by BARRIER id so that DBA can choose which backup to use at the end of PITR. > > Regards; > --- > Koichi > > On Sun, 17 Feb 2013 10:27:37 +0530 > Nikhil Sontakke <ni...@st...> wrote: > >> Hi Suzuki-san, >> >> > It's so helpful to have this patch. Will review and test it before committing. >> > >> >> Any comments on this patch? I do not see it in the commit logs. >> >> Regards, >> Nikhils >> >> > Best; >> > --- >> > Koichi Suzuki >> > >> > On Wed, 6 Feb 2013 14:25:48 +0530 >> > Nikhil Sontakke <ni...@st...> wrote: >> > >> >> > Also, gtm_ctl -w option >> >> > seems not work properly. >> >> > >> >> >> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also tested >> >> "-w -t nsecs" behavior and it seems to work as well. This patch can be >> >> easily be backported to 1.0 if desired. I hope all these obnoxious gtm >> >> startup issues are resolved now. Scripting was a bit painful because >> >> of these issues. >> >> >> >> Regards, >> >> Nikhils >> >> >> >> > I will take a look at these issue and fix at least in the next major >> >> > release. Daemonization should be back-ported to 1.0.x. >> >> > >> >> > Kind Regards; >> >> > ---------- >> >> > Koichi Suzuki >> >> > >> >> > >> >> > 2012/12/27 Michael Meskes <me...@po...>: >> >> >> I still haven't found the time to dig into this, but could anyone please >> >> >> explain to me how gtm is supposed to start up? A simple grep command doesn't >> >> >> seem to find any call to setsid() in gtm subdirs. Could you please point me to >> >> >> the code where gtm daemonizes? 
Because if it doesn't correctly, there is no >> >> >> suprise gtm_ctl doesn't come back. Or in other words, this is a logical point >> >> >> to start looking. >> >> >> >> >> >> Michael >> >> >> -- >> >> >> Michael Meskes >> >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) >> >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org >> >> >> Jabber: michael.meskes at gmail dot com >> >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS, >> >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current >> >> >> with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft >> >> >> MVPs and experts. ON SALE this month only -- learn more at: >> >> >> http://p.sf.net/sfu/learnmore_122712 >> >> >> _______________________________________________ >> >> >> Postgres-xc-developers mailing list >> >> >> Pos...@li... >> >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> >> >> >> -- >> >> StormDB - http://www.stormdb.com >> >> The Database Cloud >> >> Postgres-XC Support and Service >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Service >> -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Koichi S. <koi...@gm...> - 2013-02-18 05:51:47
|
I tried the stress test. Autovacuum seems to work well. I also ran vacuum analyze verbose from psql directly and it worked without problems during this stress test. I ran vacuumdb as well. All worked without problems. I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been tuned to work with XC. I know some pgAdmin features work well but others don't. Could you try to run vacuum from psql or vacuumdb? If they don't work, please let me know. Best Regards; ---------- Koichi Suzuki 2013/2/18 Koichi Suzuki <ko...@in...>: > Nice to hear that pgxc_ctl helps. > > As to the warning, I will try to reproduce the problem and fix it. I need to find a time for it, so please forgive me a bit of time. The test will run many small transactions, which will cause autovacuum to be launched, as attached. This test was built as a Datanode slave stress test. I think this may work as an autovacuum launch test. I will test it with four coordinators and four datanodes, and four gtm_proxies as well. The whole test will take about a couple of hours with five six-core Xeon servers (one for GTM). > > Do you think this makes sense to reproduce your problem? > > I will run it both on master and REL1_0_STABLE. > > Regards; > --- > Koichi > > On Sat, 16 Feb 2013 19:32:11 +0000 > Arni Sumarlidason <Arn...@md...> wrote: > >> Koichi, and others, >> >> I spun some fresh VMs and ran your script with the identical outcome, GTM Snapshot warnings from the autovacuum launcher. >> Please advise. >> >> >> Thank you for your script, it does make life easier!! >> >> Best, >> >> -----Original Message----- >> From: Koichi Suzuki [mailto:ko...@in...] >> Sent: Friday, February 15, 2013 4:11 AM >> To: Arni Sumarlidason >> Cc: Michael Paquier; koi...@gm...; pos...@li... >> Subject: Re: [Postgres-xc-general] pgxc: snapshot >> >> If you're not sure about the configuration, please try pgxc_ctl available at >> >> git://github.com/koichi-szk/PGXC-Tools.git >> >> This is a bash script (I'm rewriting it in C now), so it will help to understand how to configure XC. >> >> Regards; >> --- >> Koichi Suzuki >> >> On Fri, 15 Feb 2013 04:22:49 +0000 >> Arni Sumarlidason <Arn...@md...> wrote: >> >> > Thank you both for the fast response!! >> > >> > RE: Koichi Suzuki >> > I downloaded the git this afternoon. >> > >> > RE: Michael Paquier >> > >> > - Confirm it is from the datanode's log.
>> > >> > - Both coord & datanode connect via the same gtm_proxy on localhost >> > >> > These are my simplified configs, the only change I make on each node >> > is the nodename, PG_HBA >> > local all all trust >> > host all all 127.0.0.1/32 trust >> > host all all ::1/128 trust >> > host all all 10.100.170.0/24 trust >> > >> > COORD >> > pgxc_node_name = 'coord01' >> > listen_addresses = '*' >> > port = 5432 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > pooler_port = 6670 >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > DATA >> > pgxc_node_name = 'data01' >> > listen_addresses = '*' >> > port = 5433 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > PROXY >> > Nodename = 'proxy01' >> > listen_addresses = '*' >> > port = 6666 >> > gtm_host = '10.100.170.10' >> > gtm_port = 6666 >> > >> > >> > best, >> > >> > Arni >> > >> > From: Michael Paquier [mailto:mic...@gm...] >> > Sent: Thursday, February 14, 2013 11:06 PM >> > To: Arni Sumarlidason >> > Cc: pos...@li... >> > Subject: Re: [Postgres-xc-general] pgxc: snapshot >> > >> > >> > On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: >> > Hi Everyone! >> > >> > I am getting these errors, "Warning: do not have a gtm snapshot available"[1]. After researching I found posts about the auto vacuum causing these errors, is this fix or work in progress? Also, I am seeing them without the CONTEXT: automatic vacuum message too. Is this something to worry about? Cluster seems to be functioning normally. >> > >> > Vacuum and analyze from pgadmin looks like this, >> > INFO: vacuuming "public.table" >> > INFO: "table": found 0 removable, 0 nonremovable row versions in 0 >> > pages >> > DETAIL: 0 dead row versions cannot be removed yet. >> > CPU 0.00s/0.00u sec elapsed 0.00 sec. >> > INFO: analyzing "public.table" >> > INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 >> > dead rows; 0 rows in sample, 0 estimated total rows Total query runtime: 15273 ms. >> > >> > Should we use execute direct to perform maintenance? >> > No. Isn't this happening on a Datanode? >> > Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched of course on Coordinator for normal transaction run but also on all the nodes for autovacuum. >> > -- >> > Michael >> |
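Following the suggestion above to bypass pgAdmin, the same maintenance can be run from psql connected to a coordinator, or with vacuumdb (for example, vacuumdb --all --analyze --verbose). A minimal sketch; the table name public.tab1 is a placeholder:

    -- per-table, matching the "vacuum analyze verbose" run mentioned above
    VACUUM VERBOSE ANALYZE public.tab1;

    -- or database-wide
    VACUUM VERBOSE ANALYZE;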
From: Koichi S. <ko...@in...> - 2013-02-18 02:06:23
|
Thanks. I tested the patch both on master and REL1_0_STABLE and found it works fine on both. I will commit it to both branches. I found there are two more issues on GTM. 1) If we kill the GTM process, GTM will begin to assign GXID values starting with those of the previous GTM run. This is bad and harmful behaviour. To improve it, GTM should write the next gxid value to gtm.control. When the gxid value reaches this backup, the backup should be refreshed with a new one. That way, we can safely restart GTM even if it was killed. When GTM stops normally, it should write the next gxid value to gtm.control. 2) We have BARRIER to synchronize the PITR recovery line. However, GTM does not have this capability. In this case, GTM's sequence definition could be inconsistent if ALTER SEQUENCE or DROP SEQUENCE was issued after the recovery point. Having WAL in GTM could be a general choice, but it seems to be too much for the current GTM feature. Instead, we can have gtm.control backups when CREATE BARRIER is issued. Backup files will be qualified by the BARRIER id so that the DBA can choose which backup to use at the end of PITR. Regards; --- Koichi On Sun, 17 Feb 2013 10:27:37 +0530 Nikhil Sontakke <ni...@st...> wrote: > Hi Suzuki-san, > > > It's so helpful to have this patch. Will review and test it before committing. > > > > Any comments on this patch? I do not see it in the commit logs. > > Regards, > Nikhils > > > Best; > > --- > > Koichi Suzuki > > > > On Wed, 6 Feb 2013 14:25:48 +0530 > > Nikhil Sontakke <ni...@st...> wrote: > > > >> > Also, gtm_ctl -w option > >> > seems not work properly. > >> > > >> > >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also tested > >> "-w -t nsecs" behavior and it seems to work as well. This patch can be > >> easily be backported to 1.0 if desired. I hope all these obnoxious gtm > >> startup issues are resolved now. Scripting was a bit painful because > >> of these issues. > >> > >> Regards, > >> Nikhils > >> > >> > I will take a look at these issue and fix at least in the next major > >> > release. Daemonization should be back-ported to 1.0.x. > >> > > >> > Kind Regards; > >> > ---------- > >> > Koichi Suzuki > >> > > >> > > >> > 2012/12/27 Michael Meskes <me...@po...>: > >> >> I still haven't found the time to dig into this, but could anyone please > >> >> explain to me how gtm is supposed to start up? A simple grep command doesn't > >> >> seem to find any call to setsid() in gtm subdirs. Could you please point me to > >> >> the code where gtm daemonizes? Because if it doesn't correctly, there is no > >> >> suprise gtm_ctl doesn't come back. Or in other words, this is a logical point > >> >> to start looking. > >> >> > >> >> Michael > >> >> -- > >> >> Michael Meskes > >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) > >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org > >> >> Jabber: michael.meskes at gmail dot com > >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL > >> >> > >> >> ------------------------------------------------------------------------------ > >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS, > >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current > >> >> with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft > >> >> MVPs and experts.
ON SALE this month only -- learn more at: > >> >> http://p.sf.net/sfu/learnmore_122712 > >> >> _______________________________________________ > >> >> Postgres-xc-developers mailing list > >> >> Pos...@li... > >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >> > >> > >> > >> -- > >> StormDB - http://www.stormdb.com > >> The Database Cloud > >> Postgres-XC Support and Service > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > |
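For reference, a barrier can already be created from SQL on a coordinator; under proposal 2) above, issuing it would additionally leave a gtm.control backup qualified by the barrier id. That file naming is the proposal being discussed, not current behaviour, and the barrier id below is arbitrary (assuming the CREATE BARRIER 'id' syntax):

    CREATE BARRIER 'before_schema_change_20130218';

    -- at the end of PITR the DBA would then pair the recovery target
    --   recovery_target_barrier = 'before_schema_change_20130218'
    -- with the matching gtm.control backup, per the proposal above.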
From: Nikhil S. <ni...@st...> - 2013-02-17 05:25:24
|
Hi Suzuki-san, > It's so helpful to have this patch. Will review and test it before committing. > Any comments on this patch? I do not see it in the commit logs. Regards, Nikhils > Best; > --- > Koichi Suzuki > > On Wed, 6 Feb 2013 14:25:48 +0530 > Nikhil Sontakke <ni...@st...> wrote: > >> > Also, gtm_ctl -w option >> > seems not work properly. >> > >> >> PFA, a patch to fix "gtm_ctl -w" behavior in GIT HEAD. I also tested >> "-w -t nsecs" behavior and it seems to work as well. This patch can be >> easily be backported to 1.0 if desired. I hope all these obnoxious gtm >> startup issues are resolved now. Scripting was a bit painful because >> of these issues. >> >> Regards, >> Nikhils >> >> > I will take a look at these issue and fix at least in the next major >> > release. Daemonization should be back-ported to 1.0.x. >> > >> > Kind Regards; >> > ---------- >> > Koichi Suzuki >> > >> > >> > 2012/12/27 Michael Meskes <me...@po...>: >> >> I still haven't found the time to dig into this, but could anyone please >> >> explain to me how gtm is supposed to start up? A simple grep command doesn't >> >> seem to find any call to setsid() in gtm subdirs. Could you please point me to >> >> the code where gtm daemonizes? Because if it doesn't correctly, there is no >> >> suprise gtm_ctl doesn't come back. Or in other words, this is a logical point >> >> to start looking. >> >> >> >> Michael >> >> -- >> >> Michael Meskes >> >> Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) >> >> Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org >> >> Jabber: michael.meskes at gmail dot com >> >> VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL >> >> >> >> ------------------------------------------------------------------------------ >> >> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS, >> >> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current >> >> with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft >> >> MVPs and experts. ON SALE this month only -- learn more at: >> >> http://p.sf.net/sfu/learnmore_122712 >> >> _______________________________________________ >> >> Postgres-xc-developers mailing list >> >> Pos...@li... >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Service -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: David T. <dav...@ya...> - 2013-02-15 09:28:40
|
Thanks for clarifying. David. ________________________________ De : Amit Khandekar <ami...@en...> À : Koichi Suzuki <koi...@gm...> Cc : David TECHER <dav...@ya...>; pgxc-hackers mailing list <pos...@li...> Envoyé le : Vendredi 15 février 2013 3h59 Objet : Re: [Postgres-xc-developers] Implementing triggers features Yes. Trigger implementation is being worked on currently. We are aiming to checkin the trigger implementation this quarter, so as to get it in version 1.1. On 15 February 2013 07:08, Koichi Suzuki <koi...@gm...> wrote: > Trigger is now under the implementation as noticed. Amit, could you > provide info on this? > > Regards; > ---------- > Koichi Suzuki > > > 2013/2/15 David TECHER <dav...@ya...>: >> Hi, >> >> I know that triggers are not implemented in Postgres-XC for the moment. >> >> Having a look in http://postgres-xc.sourceforge.net/roadmap.html >> >> I see that triggers should be implemented perhaps for v1.1. >> >> Do you plan to implement triggers as noticed in the roadmap? >> >> We are using a lot of triggers at work. So we are interested to know. >> >> Thanks for letting me know. >> >> David. >> >> >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer >> Buy your Sophos next-gen firewall before the end March 2013 >> and get the hardware for free! Learn more. >> http://p.sf.net/sfu/sophos-d2d-feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > ------------------------------------------------------------------------------ > Free Next-Gen Firewall Hardware Offer > Buy your Sophos next-gen firewall before the end March 2013 > and get the hardware for free! Learn more. > http://p.sf.net/sfu/sophos-d2d-feb > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Ashutosh B. <ash...@en...> - 2013-02-15 06:01:03
|
On Fri, Feb 15, 2013 at 11:19 AM, Nikhil Sontakke <ni...@st...> wrote: > >> > There is no way for the second session to know that there are prepared > >> > statements out there, so there is no stopping it from changing the > >> > preferred > >> > nodes. At the same time, I do not see any need to do so. Preferred node > >> > should be a dynamic non-blocking setting, very handy to tackle the > >> > network > >> > dynamics if needed. > >> > > >> > >> In this case the plan cache should be invalidated and re-built afresh. > >> The case should be similar to when the involved relations in a plan > >> undergo a change AFAICS. > >> > > > > When we can take care of this stuff (and without much change in the > code), > > without plan cache invalidation, why to go the route of cache > invalidation? > > Plan cache invalidation is a heavy operation in itself and its > consequences > > cause performance degradation. Because of plan invalidation all those > plans > > which are really not affected by this change, would be hit too. > > > > You make this node ddl stuff sound like a routine activity. It's going > to be once in a while, right? > > You seem to have forgotten the other argument I made, about inserts not being well-balanced for round-robin tables, which is another argument I have put forth for this change. > Regards, > Nikhils > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Ashutosh B. <ash...@en...> - 2013-02-15 05:59:41
|
On Fri, Feb 15, 2013 at 11:19 AM, Nikhil Sontakke <ni...@st...>wrote: > >> > There is no way for the second session to know that there are prepared > >> > statements out there, so there is no stopping it from changing the > >> > preferred > >> > nodes. At the same time, I do not see any need to do so. Preferred > node > >> > should be a dynamic non-blocking setting, very handy to tackle the > >> > network > >> > dynamics if needed. > >> > > >> > >> In this case the plan cache should be invalidated and re-built afresh. > >> The case should be similar to when the involved relations in a plan > >> undergo a change AFAICS. > >> > > > > When we can take care of this stuff (and without much change in the > code), > > without plan cache invalidation, why to go the route of cache > invalidation? > > Plan cache invalidation is a heavy operation in itself and its > consequences > > cause performance degradation. Because of plan invalidation all those > plans > > which are really not affected by this change, would be hit too. > > > > You make this node ddl stuff sound like a routine activity. It's going > to be once in a while, right? > > Changing preferred node can be. > Regards, > Nikhils > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Nikhil S. <ni...@st...> - 2013-02-15 05:50:15
|
>> > There is no way for the second session to know that there are prepared >> > statements out there, so there is no stopping it from changing the >> > preferred >> > nodes. At the same time, I do not see any need to do so. Preferred node >> > should be a dynamic non-blocking setting, very handy to tackle the >> > network >> > dynamics if needed. >> > >> >> In this case the plan cache should be invalidated and re-built afresh. >> The case should be similar to when the involved relations in a plan >> undergo a change AFAICS. >> > > When we can take care of this stuff (and without much change in the code), > without plan cache invalidation, why to go the route of cache invalidation? > Plan cache invalidation is a heavy operation in itself and its consequences > cause performance degradation. Because of plan invalidation all those plans > which are really not affected by this change, would be hit too. > You make this node ddl stuff sound like a routine activity. It's going to be once in a while, right? Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Ashutosh B. <ash...@en...> - 2013-02-15 05:40:53
|
On Fri, Feb 15, 2013 at 11:02 AM, Nikhil Sontakke <ni...@st...> wrote: > >> ISTM, that all this node management should be done in a stricter > >> environment. Or the user should ensure that no other DML activity is > >> going on when running node ddl. This also makes sense when we think > >> about dynamic node management. > >> > > > > The case I am talking about is, one prepares a statement, meantime someone > > else changes the preferred nodes from other session, the first one executes > > the prepared statement. > > > > There is no way for the second session to know that there are prepared > > statements out there, so there is no stopping it from changing the preferred > > nodes. At the same time, I do not see any need to do so. Preferred node > > should be a dynamic non-blocking setting, very handy to tackle the network > > dynamics if needed. > > > > In this case the plan cache should be invalidated and re-built afresh. > The case should be similar to when the involved relations in a plan > undergo a change AFAICS. > > If we can take care of this (and without much change in the code) without plan cache invalidation, why go the route of cache invalidation? Plan cache invalidation is a heavy operation in itself, and its consequences cause performance degradation. Because of plan invalidation, all those plans that are not really affected by this change would be hit too. Regards, > Nikhils > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
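The scenario being debated above, as a rough sketch. The node names, the PREFERRED option syntax for ALTER NODE, and the pgxc_pool_reload() call are assumptions to check against the actual node DDL of the release in use; the table is a placeholder:

    -- session 1: the plan is cached while datanode_1 is the preferred node
    PREPARE get_cust (int) AS SELECT * FROM customer WHERE cust_id = $1;
    EXECUTE get_cust(42);

    -- session 2: the preferred node is switched, with no locking against session 1
    ALTER NODE datanode_2 WITH (PREFERRED);
    SELECT pgxc_pool_reload();

    -- session 1 again: with a cached plan, replicated-table reads may keep going
    -- to datanode_1 unless the read node is picked at execution time or the
    -- cached plan is invalidated
    EXECUTE get_cust(42);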
From: Amit K. <ami...@en...> - 2013-02-15 05:37:17
|
Adding developers list. On 15 February 2013 11:04, Amit Khandekar <ami...@en...> wrote: > On 11 February 2013 19:05, Koichi Suzuki <koi...@gm...> wrote: >> If the order of update is different from datanode to datanode, yes >> ctid can be different from datanode to datanode. We should not rely on >> such ctid equalities. > > The ctids would also start to differ once autovacuum starts assigning > the same ctids to other records. > >> As we discussed in the past F2F MTG, we >> need something different for ctids, for example, a ctid qualified with >> nodename, and a list of local ctids for a given replicated table row. >> >> I think this impacts RETURNING and WCO but not simple queries. So I >> believe it should not affect DBT1 and DBT2 benchmarks. > > I am not sure why it would not impact simple UPDATE queries. For all > UPDATEs that are not shippable, this issue comes into the picture. Also I > am wondering why this issue did not surface on DBT. Maybe > autovacuum does not happen to reassign those ctids, but this also > seems less likely. > >> Also, we may >> want to think about this impact on the TRIGGER case as well. Maybe it's better >> to extend ctid in the XC context and provide abstract means to handle it. > > Even if we have a cluster-scoped ctid, when it comes to updating the > rows in a replicated table, we should fire the update only on one > particular datanode, not both. Right now we fetch rows only from one > particular datanode, so we don't have ctids from the remaining > datanodes. This needs to be changed. We should fetch from all the > datanodes for replicated tables. Only then can we decide during update > which datanode each UPDATE should be targeted at. >> >> Regards; >> ---------- >> Koichi Suzuki >> >> >> 2013/2/11 Amit Khandekar <ami...@en...>: >>> The remote UPDATE queries for an update statement for replicated >>> tables are something like : >>> UPDATE ONLY public.tab1 SET tab1name1 = $2 WHERE ctid = $4 >>> And we run the same update statement on all the datanodes, while the >>> particular ctid fetched from SELECT belongs to only one datanode. That >>> means, this looks like a general bug in update on replicated tables, >>> this is not a corner case. Likely, the ctids happen to be the same for >>> the datanodes, so it appears that everything is fine. I hope that I am >>> not missing something. I am going to raise a bug for this ... >>> >>> Thanks >>> -Amit >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-core mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-core |
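A rough way to see the hazard described above on a test cluster. The EXECUTE DIRECT syntax and the node names datanode_1/datanode_2 are assumptions; the pushed-down UPDATE quoted in the comment is the one reported in the mail, not something this sketch generates by itself:

    -- tab1 is replicated, so every datanode holds its own physical copy
    -- and therefore its own ctids
    CREATE TABLE tab1 (id int, tab1name1 text) DISTRIBUTE BY REPLICATION;
    INSERT INTO tab1 VALUES (1, 'a'), (2, 'b');

    -- after independent node-local activity (vacuum, different update ordering)
    -- the same logical row can sit at different ctids on different nodes
    EXECUTE DIRECT ON (datanode_1) 'SELECT ctid, id FROM tab1';
    EXECUTE DIRECT ON (datanode_2) 'SELECT ctid, id FROM tab1';

    -- a non-shippable update is currently pushed down to every node as
    --   UPDATE ONLY public.tab1 SET tab1name1 = $2 WHERE ctid = $4
    -- with a ctid fetched from only one node, which is the bug discussed above
    UPDATE tab1 SET tab1name1 = upper(tab1name1) WHERE id = 1;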
From: Nikhil S. <ni...@st...> - 2013-02-15 05:33:17
|
>> ISTM, that all this node management should be done in a stricter >> environment. Or the user should ensure that no other DML activity is >> going on when running node ddl. This also makes sense when we think >> about dynamic node management. >> > > The case I am talking about is, one prepares a statement, meantime someone > else changes the preferred nodes from other session, the first one executes > the prepared statement. > > There is no way for the second session to know that there are prepared > statements out there, so there is no stopping it from changing the preferred > nodes. At the same time, I do not see any need to do so. Preferred node > should be a dynamic non-blocking setting, very handy to tackle the network > dynamics if needed. > In this case the plan cache should be invalidated and re-built afresh. The case should be similar to when the involved relations in a plan undergo a change AFAICS. Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |