From: Aaron J. <aja...@re...> - 2014-04-23 19:20:12
I've read through several pieces of documentation but couldn't find an answer, short of experimentation. I'm really unclear about how coordinators are "managed" when they are in a cluster. From what I can tell, it looks like each node in a cluster must be individually set up. For example, I have three nodes, A, B & C:

- A is my GTM instance.
- B & C are newly created data nodes and coordinators, with nothing in them.

Now let's say I want to make B & C aware of each other at the coordinator level. So on server B, I connect to the coordinator and issue the following:

    CREATE NODE coord_2 WITH (TYPE = 'coordinator', PORT = 5432, HOST = 'B');
    SELECT pgxc_pool_reload();
    SELECT * FROM pg_catalog.pgxc_node;

As expected, I now have two entries in pgxc_node. I hop over to machine C:

    SELECT * FROM pg_catalog.pgxc_node;

This returns only a row for itself. Okay, so I reload the pool and still see only one row. This implies that adding a coordinator node only affects the coordinator it was run on, so I must go to every machine in my cluster and tell it that there is a new coordinator.

I just want to make sure I'm not missing something obvious. If this is the way it's designed to work, then that's fine.

Aaron
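Since each coordinator keeps its own copy of the cluster topology, the registration has to be repeated on every coordinator. A small helper like the one below (purely illustrative; the node names, hosts, and ports are made-up examples, not part of any Postgres-XC tool) can generate the statements to run on each coordinator:

```python
# Sketch: generate the CREATE NODE statements each coordinator needs so
# that every coordinator knows about every *other* node in the cluster.
# Topology values are illustrative; adjust to your own cluster.

def registration_sql(coordinators, datanodes):
    """Return {coordinator_name: [SQL statements to run on it]}.

    Both arguments map node name -> (host, port).
    """
    plans = {}
    all_nodes = {**coordinators, **datanodes}
    for coord in coordinators:
        stmts = []
        for name, (host, port) in all_nodes.items():
            if name == coord:
                continue  # a coordinator already has its own row in pgxc_node
            ntype = "coordinator" if name in coordinators else "datanode"
            stmts.append(
                f"CREATE NODE {name} WITH (TYPE = '{ntype}', "
                f"HOST = '{host}', PORT = {port});"
            )
        # Pick up the new definitions in the connection pooler.
        stmts.append("SELECT pgxc_pool_reload();")
        plans[coord] = stmts
    return plans

if __name__ == "__main__":
    coords = {"coord_1": ("B", 5432), "coord_2": ("C", 5432)}
    dns = {"dn_1": ("B", 15432), "dn_2": ("C", 15432)}
    for coord, stmts in registration_sql(coords, dns).items():
        print(f"-- run on {coord}:")
        for s in stmts:
            print(s)
```

The generated statements would then be fed to each coordinator with psql, one coordinator at a time.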
From: Ashutosh B. <ash...@en...> - 2014-04-24 04:13:04
|
On Thu, Apr 24, 2014 at 12:49 AM, Aaron Jackson <aja...@re...> wrote:
> I just want to make sure I'm not missing something obvious. If this is
> the way it's designed to work, then that's fine.

You are right. We have to "introduce" each node (not just coordinators but datanodes as well) to all other coordinators, where applicable.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From: Pavan D. <pav...@gm...> - 2014-04-24 04:31:59
|
> I just want to make sure I'm not missing something obvious. If this is
> the way it's designed to work, then that's fine.

Yes, this is per design. Having said that, I wonder if we should make it easier for users. One reason we don't do it automatically today is that the nodes being added may not be online at the time they are added.

We discussed a scheme to handle this in the past. One option is to add an ONLINE option to the CREATE NODE command; if specified, the coordinator can try connecting to the new node and make the appropriate catalog changes there too. The command would fail if the node is unreachable. We could also find a way to fire pgxc_pool_reload automatically when the cluster definition changes.

Thanks,
Pavan
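The proposed semantics could be sketched roughly as follows. This is a hypothetical model of the idea being discussed, not Postgres-XC source code: the ONLINE option does not exist in the shipped CREATE NODE command, and `connect`, `PeerUnreachable`, and `create_node_online` are invented names standing in for a real connection factory and the server-side logic.

```python
# Sketch of the proposed "ONLINE" behaviour for CREATE NODE: replay the
# node definition on every peer coordinator and reload its pool, failing
# the command if any peer cannot be contacted. All names are illustrative.

class PeerUnreachable(Exception):
    """Raised when a peer coordinator cannot be contacted."""

def create_node_online(ddl, peers, connect):
    """Run `ddl` plus a pool reload on every peer coordinator.

    `connect(peer)` is a stand-in for a real connection factory and is
    expected to raise OSError when the peer is down.
    """
    executed = []
    for peer in peers:
        try:
            conn = connect(peer)
        except OSError as e:
            # Per the proposal, the command fails if a node is unreachable.
            raise PeerUnreachable(f"cannot reach {peer}: {e}")
        conn.execute(ddl)
        conn.execute("SELECT pgxc_pool_reload();")
        executed.append(peer)
    return executed
```

A real implementation would also need to decide what happens to peers already updated when a later peer turns out to be unreachable; this sketch leaves that open, just as the thread does.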
From: 鈴木 幸市 <ko...@in...> - 2014-04-24 05:46:39
|
At present, pgxc_ctl does this.
---
Koichi Suzuki

On 2014/04/24 13:31, Pavan Deolasee <pav...@gm...> wrote:
> Yes, this is per design. [...]