Thread: RE: [SSI-users] various questions
From: Walker, B. J <bru...@hp...> - 2004-09-06 15:11:16
> -----Original Message-----
> From: ssi...@li... [mailto:ssi...@li...] On Behalf Of smee
> Sent: Sunday, September 05, 2004 5:48 PM
> To: ssi...@li...
> Subject: [SSI-users] various questions
>
> Hi all,
>
> After having a chance to play a bit more with OpenSSI, I have a few
> questions remaining to get a fuller understanding.
>
> 1. At the moment, I am booting up the init node and then the other
> nodes are joining the cluster by PXE or Etherboot. As such, all the
> non-init nodes are getting their boot image from the init node.
>
> 1.1 Is there a way to have node-specific boot-up configuration and
> yet still join the cluster? I.e., if I require a hardware-specific
> kernel option on a certain node.

It is a goal of OpenSSI to have all the nodes run the same kernel (part
of the ease-of-management goals). With loadable modules, it is quite
rare to need different kernels. Please let the group know of a situation
where it would be necessary. That said, I believe you can program
dhcpd.conf to give different boot parameters to different nodes. You can
also have a boot device on each node, each holding a different kernel.
If you do either of these, however, be careful: some of the OpenSSI
tools will try to put things back in sync (disable ssi-ksync, for
example).

> 1.2 I noticed there is the option of a local boot device (this may
> relate to question 1.1) when using the ssi-create utility rather than
> the openssi-config-node utility. Although the local boot option was
> mentioned primarily as a way to have root failover, what other purpose
> was intended for local boot?

Mostly root failover, but it can also be used in environments where PXE
or Etherboot would be impractical.

> 2. How is swap space used if no local swap has been configured? I.e.,
> if node2 requires swap at some stage but has not been configured with
> its own local swap (via /etc/fstab), does it eventually end up using
> the init node's swap space?
> Would I be correct in assuming that a local swap is advisable and
> would be faster than the init node's shared swap?

Currently you should manually configure swap on each node, using the
local disk. If you don't, the node will have no swap, and if it runs out
of memory, processes will be killed. There is no swap sharing. Hopefully
we will add swap-to-a-remote-file capability at some point.

> 3. LVS questions
>
> 3.1 The LVS website documentation mentions three forms of operation:
> NAT, Director and Tunnelling. What form does the OpenSSI-packaged LVS
> take? Reading through the OpenSSI LVS-related documentation, there is
> a lot of mention of director-realserver, so would I be correct in
> assuming that the OpenSSI-packaged LVS is using director routing?

I believe the documentation said NAT, "Direct" and tunnelling (not
director). In all forms, LVS uses a director to do the routing. Direct
routing (which is what is used in OpenSSI) requires all the server nodes
to be on the same subnet as the director node (since they are all part
of the same OpenSSI cluster, which also requires them to be on the same
subnet, this is not an added restriction). It also requires that the
cluster interconnect network not be a "private address" network, which
is somewhat of a restriction. As a result, there is support for LVS-NAT
in the development branch of CVS (note there is no "release" from the
development branch yet).

> 3.2 The LVS website documentation mentions a few methods of load
> balancing of incoming connection requests, i.e. the IPVS scheduler.
> The methods are all loadable modules, configured when building the
> kernel.
>   <M> round-robin scheduling
>   <M> weighted round-robin scheduling
>   <M> least-connection scheduling
>   <M> weighted least-connection scheduling
>   <M> locality-based least-connection scheduling
>   <M> locality-based least-connection with replication scheduling
>   <M> destination hashing scheduling
>   <M> source hashing scheduling
>
> How do we, as admins, selectively configure which scheduler
> we want to use?

Not sure you can at this point. Weighted round-robin is what is used in
OpenSSI at the moment. I am interested in supporting choices,
particularly one that leverages the load-level information that already
exists for process load balancing.

> 3.3 I was successful in following the OpenSSI documentation for
> configuring LVS with failover, using the following example:
>
> <?xml version="1.0"?>
> <cvips>
>   <cvip>
>     <ip_addr>a.b.c.d</ip_addr>
>     <director_node>
>       <node_num>1</node_num>
>       <garp_interface>eth1</garp_interface>
>       <sync_interface>eth0</sync_interface>
>     </director_node>
>     <director_node>
>       <node_num>2</node_num>
>       <garp_interface>eth1</garp_interface>
>       <sync_interface>eth0</sync_interface>
>     </director_node>
>     <real_server_node>
>       <node_num>2</node_num>
>     </real_server_node>
>     <real_server_node>
>       <node_num>3</node_num>
>     </real_server_node>
>   </cvip>
> </cvips>
>
> In the above XML configuration, the directors are node1 as main
> director with node2 as failover. The real servers are nodes 2 and 3.
> How would I go about configuring it so that after failover, node2 is
> no longer part of the real servers?

Not easy to do. The cvip.conf example in README.CVIP uses nodes 1 and 2
as real servers. I believe most OpenSSI installations specify the
director node as one of the real servers (being a director is not a
really heavy burden).

> 3.4 The above XML configuration implies a sort of active-passive
> director setup. Is there a way to configure an active-active setup?
> The examples weren't very clear at hinting at this.
As I mentioned above, we tend to run active-active by just specifying
the director(s) as real servers as well.

> 3.5 The LVS website mentions mon+heartbeat+fake+coda as a way to have
> high availability of the virtual IP. Can we safely ignore this
> suggestion and just use the OpenSSI HA-LVS solution with a failover
> configuration to achieve high availability? I ask because where
> OpenSSI lacks in its documentation, I have had to jump to the LVS
> website to get a clearer picture, but their documentation is sometimes
> conflicting or doesn't apply to OpenSSI.

You should be able to depend entirely on the OpenSSI documentation. You
mention a lack of info on NAT/DR/tunnelling and the scheduling policy.
Are there any other specific deficiencies in the OpenSSI LVS-related
documents (README.CVIP and README.ipvs)?

> Cheers for any replies.
> Cuong.

Thanks for your interest and excellent questions,
bruce walker
OpenSSI project lead
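As a sketch of the dhcpd.conf approach Bruce mentions for giving
different boot parameters to different nodes, something along these
lines should work with ISC dhcpd; the subnet, MAC address, and image
filenames below are placeholders, not values from any real OpenSSI
setup:

```conf
# Hypothetical per-node PXE boot configuration for ISC dhcpd.
subnet 192.168.1.0 netmask 255.255.255.0 {
  # Default boot image served to most cluster nodes.
  filename "pxelinux.0";

  # One node that needs a different boot image / kernel options.
  host node3 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.13;
    filename "pxelinux-node3.0";
  }
}
```

Remember Bruce's caveat: if nodes boot different images, tools such as
ssi-ksync may try to bring them back in sync.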
From: smee <sn...@gm...> - 2004-09-06 23:48:42
Thanks for the replies Bruce.

> Mostly root failover, but it can also be used in environments where
> PXE or Etherboot would be impractical.

I will need to administer these machines completely remotely
(internationally) when in production. That means I need a foolproof way
to boot: no floppies, no CD-ROMs, just in case the media dies on us.
Colo hands-on support is expensive just to get someone to replace a
floppy or CD, so PXE or Etherboot will be impractical.

Is there some information I can read up on about how to do a local boot
and be able to join the cluster? Do I simply set up each machine as I
would the init node? If so, how would I specify that the other nodes
not be init nodes? (I don't recall whether I was prompted for this
information when running ./install.)

> I believe the documentation said NAT, "Direct" and tunnelling (not
> director).

Oops, my mistake.

> It also requires that the cluster interconnect network not be a
> "private address" network, which is somewhat of a restriction.

Is that an OpenSSI-imposed constraint? That's not the impression I got
while reading the LVS website documentation. I thought I read that for
Direct routing, you could have private and public addresses as you
choose.

> As a result, there is support for LVS-NAT in the development branch of
> CVS (note there is no "release" from the development branch yet).

Hmm... LVS-NAT under load was reported to be around 20% slower than
Direct or Tunnelling. If so, NAT is not an option.

> Not sure you can at this point. Weighted round-robin is what is used
> in OpenSSI at the moment. I am interested in supporting choices,
> particularly one that leverages the load-level information that
> already exists for process load balancing.

I could live with weighted round-robin for now and hope that automatic
process migration will take care of load leveling.

> Not easy to do. The cvip.conf example in README.CVIP uses nodes 1 and
> 2 as real servers.
> I believe most OpenSSI installations specify the director node as one
> of the real servers (being a director is not a really heavy burden).

Yes, not a real burden, but experience has shown it is best that a
machine providing a crucial function not be loaded up with other tasks,
e.g. a DNS machine. I'm sure we could find a dedicated machine for this
somewhere in our budget.

Cheers.
Cuong
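For reference, the active-active arrangement Bruce describes (directors
also serving as real servers) would look roughly like this in the
thread's own cvip.conf format; this is a sketch derived from the
example earlier in the thread, not a tested configuration:

```xml
<?xml version="1.0"?>
<cvips>
  <cvip>
    <ip_addr>a.b.c.d</ip_addr>
    <director_node>
      <node_num>1</node_num>
      <garp_interface>eth1</garp_interface>
      <sync_interface>eth0</sync_interface>
    </director_node>
    <director_node>
      <node_num>2</node_num>
      <garp_interface>eth1</garp_interface>
      <sync_interface>eth0</sync_interface>
    </director_node>
    <!-- Both directors are also listed as real servers, so whichever
         one is active still shares the serving load. -->
    <real_server_node>
      <node_num>1</node_num>
    </real_server_node>
    <real_server_node>
      <node_num>2</node_num>
    </real_server_node>
    <real_server_node>
      <node_num>3</node_num>
    </real_server_node>
  </cvip>
</cvips>
```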
From: Brian J. W. <Bri...@hp...> - 2004-09-24 01:44:38
smee wrote:
> Thanks for the replies Bruce.
>
>> Mostly root failover, but it can also be used in environments where
>> PXE or Etherboot would be impractical.
>
> I will need to administer these machines completely remotely
> (internationally) when in production. That means I need a foolproof
> way to boot: no floppies, no CD-ROMs, just in case the media dies on
> us. Colo hands-on support is expensive just to get someone to replace
> a floppy or CD, so PXE or Etherboot will be impractical.
> Is there some information I can read up on about how to do a local
> boot and be able to join the cluster?

See step 3.8 in the installation instructions.

Brian
From: Aneesh K. K.V <ane...@hp...> - 2004-09-07 13:51:00
Walker, Bruce J wrote:
>> 3.2 The LVS website documentation mentions a few methods of load
>> balancing of incoming connection requests, i.e. the IPVS scheduler.
>> The methods are all loadable modules, configured when building the
>> kernel.
>>
>>   <M> round-robin scheduling
>>   <M> weighted round-robin scheduling
>>   <M> least-connection scheduling
>>   <M> weighted least-connection scheduling
>>   <M> locality-based least-connection scheduling
>>   <M> locality-based least-connection with replication scheduling
>>   <M> destination hashing scheduling
>>   <M> source hashing scheduling
>>
>> How do we, as admins, selectively configure which scheduler
>> we want to use?
>
> Not sure you can at this point. Weighted round-robin is what is used
> in OpenSSI at the moment. I am interested in supporting choices,
> particularly one that leverages the load-level information that
> already exists for process load balancing.

I guess we actually use weighted least-connection scheduling ("wlc" in
include/cluster/net.h). You can build a new kernel after changing that
definition; I guess it will work.

-aneesh
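Aneesh's suggestion amounts to editing the scheduler name compiled into
the kernel and rebuilding. A minimal sketch of that edit, demonstrated
on a scratch copy of the header (the macro name IPVS_SCHED is a
placeholder; check the real definition in include/cluster/net.h in your
source tree):

```shell
# Set up a scratch copy standing in for the OpenSSI kernel source tree.
# The macro name below is hypothetical; the scheduler strings are real
# IPVS scheduler names ("wlc" = weighted least-connection,
# "wrr" = weighted round-robin).
mkdir -p /tmp/ssi-sketch/include/cluster
printf '#define IPVS_SCHED "wlc"\n' > /tmp/ssi-sketch/include/cluster/net.h

# Switch the compiled-in scheduler from "wlc" to "wrr".
sed -i 's/"wlc"/"wrr"/' /tmp/ssi-sketch/include/cluster/net.h
cat /tmp/ssi-sketch/include/cluster/net.h

# In a real tree you would now rebuild and install the kernel on all
# nodes (keeping them in sync, per the earlier discussion).
```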