Thread: RE: [SSI] Network access to a SSI cluster
From: Aneesh K. K.V <ane...@di...> - 2001-11-27 14:20:34
Hi,

On Tue, 2001-11-27 at 19:24, Dickson, Steve wrote:
> Hello Aneesh,
>
> > I guess since all the nodes share the same disk (shared storage), all
> > will end up with the same network configuration file.
>
> They only need to share the same root (or boot disk), right? Each node can
> have its own local disk, right?

Not really. When SSI starts, it uses an initial ramdisk; one could make it
fetch that from the local hard disk. The initial ramdisk contains the
/linuxrc file which, when executed, starts the SSI init and mounts the
shared GFS as the root (that's the shared storage). Now take the case where
two nodes are connected with a Y cable to that shared storage: the second
node will also make the shared storage its root, which means both nodes hit
the same /etc/sysconfig directory. As a result, both nodes' eth0/eth1 (or
whatever interface you use for the external connection) end up with the
same IP address.

One should be able to solve this using context dependent symbolic links
(CDSLs); I guess GFS supports them. For example, on a Tru64 machine the
files local to a node are kept on shared storage under
/cluster/members/{memb}/etc, where 'namei' resolves {memb} to member0 on
node 0, member1 on node 1, and so on. I don't know whether SSI is making
use of that. In fact, it would require the /etc/rc.d/init.d/network script
to be modified accordingly.

> > But I guess you can do an 'ifconfig' on each node to bring the node onto
> > the external network. It is always advisable to use another network card
> > for the external network. If you want to keep the same configuration,
> > use the cluster.conf with the IP you want eth0 to be configured with.
>
> OK... as long as I can bring up a second interface (eth1) on a second NIC
> things should work out... I'll probably have to bring up the second
> interface by hand, meaning each node will have to bring it up separately
> after it has joined the cluster, true?

Yes, but once we make use of CDSLs it will not be needed any more.

> -----Original Message-----
> From: Aneesh Kumar K.V [mailto:ane...@di...]
> Sent: Tuesday, November 27, 2001 8:09 AM
> To: Dickson, Steve
> Subject: Re: [SSI] Network access to a SSI cluster
>
> Hi,
>
> I guess since all the nodes share the same disk (shared storage), all
> will end up with the same network configuration file. But I guess you can
> do an 'ifconfig' on each node to bring the node onto the external network.
> It is always advisable to use another network card for the external
> network. If you want to keep the same configuration, use the cluster.conf
> with the IP you want eth0 to be configured with.
>
> Correct me if I am wrong.
>
> -aneesh
>
> On Tue, 2001-11-27 at 18:18, Dickson, Steve wrote:
> > Hello,
> >
> > I was reading in the INSTALL file of ssi-linux-2.4.10-ac4-v0.5.2 that I
> > needed to turn off networking (step 3 of section III: Configuring the
> > cluster). Does this mean that I will have no network access to the
> > cluster?
> >
> > SteveD.
> > _________________________
> > Steve Dickson
> > Paceline Systems Corporation
> > 50 Nagog Park
> > Acton, MA 01720
> > Tel: 978-929-8839
> > Fax: 978-263-8381
> >
> > _______________________________________________
> > ssic-linux-devel mailing list
> > ssi...@li...
> > https://lists.sourceforge.net/lists/listinfo/ssic-linux-devel
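As a rough sketch of the CDSL idea described above (the member-directory
layout and the clusternode_num helper are hypothetical illustrations, not
the actual SSI or Tru64 implementation), a modified network script might
pick up a per-node ifcfg file like this:

    #!/bin/sh
    # Sketch only: per-node configuration under a CDSL-style member
    # directory on the shared root. clusternode_num is a hypothetical
    # command that prints this node's number.
    NODE=$(clusternode_num 2>/dev/null || echo 1)
    MEMBER_ETC="/cluster/members/member${NODE}/etc"

    # Each node sources its own ifcfg-eth1, so two nodes sharing the same
    # root no longer end up with the same external IP address.
    if [ -f "${MEMBER_ETC}/sysconfig/network-scripts/ifcfg-eth1" ]; then
        . "${MEMBER_ETC}/sysconfig/network-scripts/ifcfg-eth1"
        ifconfig eth1 "${IPADDR}" netmask "${NETMASK}" up
    fi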
From: Brian J. W. <Bri...@co...> - 2001-11-27 21:01:04
> Hello,
>
> I was reading in the INSTALL file of ssi-linux-2.4.10-ac4-v0.5.2 that I
> needed to turn off networking (step 3 of section III: Configuring the
> cluster). Does this mean that I will have no network access to the
> cluster?

No, you'll have the cluster interconnect interface on each node, which is
set up during the initrd phase of booting. As Aneesh points out, each node
has its own initrd with its own cluster.conf with a unique IP address. For
testing and development purposes, the interconnect interface can also be
used for external networking. Note that this is a Bad Thing from a security
and performance standpoint, and will not be recommended for production.

I think an appropriate solution for external networking configuration
depends on John Byrne's distributed device work. Following the SSI
philosophy, every NIC in the cluster has a unique name, as if they were all
part of a giant SMP:

    Node    Interconnect    External
    ----    ------------    --------
     1         eth0           eth1
     2         eth2           eth3
     3         eth4           eth5

A simple database on the shared root would keep track of the name for each
device, so that the naming is persistent regardless of the order in which
the nodes are booted. Records would be added automatically to the database
the first time a node joins the cluster. User intervention would just be
needed to delete records.

Once this infrastructure is in place, the existing single-site network
script can be used without modification to configure external networking.
The SSI version of init re-runs the rc scripts for any late joiners, so
their external NICs can be configured, too. Assuming that reconfiguring an
existing interface with the same information doesn't do anything nasty,
like break connections, this should all just work.

BTW, I've been ignoring step III.3 lately, because it's annoying to
manually configure the gateway address every time a node boots. The
downside is that the CLMS master node can't shut down cleanly, although you
can fix this by removing the appropriate network kill script
(/etc/rc.d/rc0.d/K90network on Red Hat 7.2).

--
Brian Watson                 | "Now I don't know, but I been told it's
Linux Kernel Developer       |  hard to run with the weight of gold,
Open SSI Clustering Project  |  Other hand I heard it said, it's
Compaq Computer Corp         |  just as hard with the weight of lead."
Los Angeles, CA              |      -Robert Hunter, 1970

mailto:Bri...@co...
http://opensource.compaq.com/
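One way the proposed device-name database could be sketched (the file
location, record format, and lookup below are assumptions for illustration;
no such facility exists yet):

    #!/bin/sh
    # Sketch only: flat file on the shared root mapping node number to
    # interconnect/external NIC names, one record per line:
    #   <node> <interconnect> <external>
    DB=/etc/cluster-netdev.db                      # hypothetical path
    NODE=$(clusternode_num 2>/dev/null || echo 1)  # hypothetical helper

    # Add a record the first time this node joins the cluster, following
    # the eth0/eth1, eth2/eth3, ... naming pattern from the table above.
    grep -q "^${NODE} " "$DB" 2>/dev/null ||
        echo "${NODE} eth$(( (NODE - 1) * 2 )) eth$(( (NODE - 1) * 2 + 1 ))" >> "$DB"

    # Look up this node's external NIC so a network script can configure it.
    EXT_NIC=$(awk -v n="$NODE" '$1 == n { print $3 }' "$DB")
    echo "External NIC on node ${NODE}: ${EXT_NIC}"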
From: Dickson, S. <St...@pa...> - 2001-11-28 13:14:06
Hello Brian,

> No, you'll have the cluster interconnect interface on each node, which
> is set up during the initrd phase of booting. As Aneesh points out, each
> node has its own initrd with its own cluster.conf with a unique IP
> address. For testing and development purposes, the interconnect
> interface can also be used for external networking. Note that this is a
> Bad Thing from a security and performance standpoint, and will not be
> recommended for production.

OK... I'm planning on using two NICs, so there shouldn't be a performance
issue...

> I think an appropriate solution for external networking configuration
> depends on John Byrne's distributed device work.

True... Is this part of the CFS work?

> A simple database on the shared root would keep track of the name for
> each device, so that the naming is persistent regardless of the order in
> which the nodes are booted. Records would be added automatically to the
> database the first time a node joins the cluster. User intervention
> would just be needed to delete records.

Interesting thought... Maybe the database should be distributed to ease
the failover and recovery effort...

SteveD.
_________________________
Steve Dickson
Paceline Systems Corporation
50 Nagog Park, Acton, MA 01720
Tel: 978-929-8839
Fax: 978-263-8381
From: Brian J. W. <Bri...@co...> - 2001-11-28 23:15:46
"Dickson, Steve" wrote: > > OK... I'm planning on using two nics so there shouldn't be a performance > issue... > Don't expect to use both of them right away, unless you're willing to hack around in your configuration scripts. If you are willing, then take a look at /etc/rc.d/rc.nodeup. It gets run with a node number as an argument every time a node joins the cluster. Similarly, there's a /etc/rc.d/rc.nodedown script. Adding some code to them would allow you to use different configuration info for different nodes in a shared root environment. This is just a short-term hack, however, and the real solution is clusterwide unique network device names as I described. > > I think an appropriate solution for external networking configuration > > depends on John Byrne's distributed device work. > True... Is this part of the CFS work? No, CFS is an alternative (and potential complement) to GFS for shared real filesystems (such as the root). Dave Zafman is doing the CFS work. John's working on a clusterized version of devfs, which is a virtual filesystem (like /proc). It does not use CFS. The clusterized version would assign unique, persistent names to all devices in the cluster, and allow access to any device from any node. The second feature is already supported, and the first has been implemented for ptys (the persistence part was trivial ;). John pointed out to me that network devices cannot be found in /dev, so they are not directly part of his work. Nevertheless, I hope his stuff can be leveraged to provide unique, persistent names for network devices. > Interesting thought.... Maybe the database should be distributed > to easy in the failover and recovery effort..... > Being on a shared filesystem, the database file is already available to all nodes. Any user-mode daemon that maintains it can be made to restart on a new node by adding it to the watch list of an application monitoring and restart daemon, such as keepalive. Any kernel services involved are written to gracefully handle arbitrary node failures, and centralized kernel services are designed to rebuild themselves on a new node (these are requirements of Open SSI kernel components). So, yes -- the database should and will be able to handle arbitrary node failures. -- Brian Watson | "Now I don't know, but I been told it's Linux Kernel Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Compaq Computer Corp | just as hard with the weight of lead." Los Angeles, CA | -Robert Hunter, 1970 mailto:Bri...@co... http://opensource.compaq.com/ |
From: Christoph H. <hc...@ca...> - 2001-11-27 14:28:02
On Tue, Nov 27, 2001 at 07:50:26PM +0530, Aneesh Kumar K.V wrote:
> One should be able to solve it using context dependent symbolic links
> (CDSLs); I guess GFS supports them. For example, on a Tru64 machine the
> files local to a node are kept on shared storage in the location
> /cluster/members/{memb}/etc, where 'namei' resolves {memb} to member0 on
> node 0, member1 on node 1, and so on.

Yes, GFS currently supports it. But we will remove it in OpenGFS 0.2, as
there is a much better and filesystem-independent way of doing this in
Linux. Just put a

    mount --bind /etc-${nodename} /etc

in one of your initscripts. ${nodename} is a shell variable, and there are
different ways to fill it (e.g. from the kernel command line, or by looking
it up by MAC address).

Christoph
--
Of course it doesn't work. We've performed a software upgrade.
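A small sketch of how ${nodename} might be filled and the bind mount
applied in an initscript (the MAC lookup table /etc/nodenames and the
kernel command-line option name are assumptions for illustration):

    #!/bin/sh
    # Sketch only: derive ${nodename} and bind-mount a per-node /etc.
    # Option 1: look the name up by eth0 MAC address in a table of
    #   <mac-address> <nodename> lines (hypothetical file /etc/nodenames).
    MAC=$(ifconfig eth0 | awk '/HWaddr/ { print $NF }')
    nodename=$(awk -v m="$MAC" '$1 == m { print $2 }' /etc/nodenames 2>/dev/null)

    # Option 2: take it from the kernel command line, e.g. "nodename=node1".
    [ -n "$nodename" ] ||
        nodename=$(sed -n 's/.*nodename=\([^ ]*\).*/\1/p' /proc/cmdline)

    # Overlay the shared /etc with this node's private copy.
    if [ -n "$nodename" ] && [ -d "/etc-${nodename}" ]; then
        mount --bind "/etc-${nodename}" /etc
    fi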