RE: [SSI] qns regarding SSI features
From: Aneesh K. K.V <ane...@di...> - 2002-10-28 11:49:28
On Mon, 2002-10-28 at 16:26, Sharad Tiwari wrote:

> hi again,
>
> Thanks aneesh for the link. I found out most of the things the hard way
> by experimenting on the setup. I currently have three nodes in the
> cluster ...
>
> I needed to clear a few more doubts, please find my questions below:
>
> 1) Where is the information about shared hardware and software resources
> stored? (Also, if I have to add a new resource at any node for sharing in
> the whole cluster, how do I do that, and where will this information be
> stored?)

What do you mean by shared hardware and software? SSI works with a shared
root ( / ); that means all the nodes see the same files. If you are asking
about node-specific files: for Debian I have put them under
/cluster/nodenum1 for node 1 and /cluster/nodenum2 for node 2, and then
created a CDSL, i.e. /etc/nodename -> /cluster/{nodenum}/etc/nodename.

> 2) In the absence of CFS, if my primary node goes down, will a secondary
> node take over, or does the whole cluster go down?

In the absence of CFS you will be using GFS, right? Earlier, before we had
CFS fully supported, we allowed process migration to reopen files by name.
Now I guess it looks for inode matching as well; I am not sure about this,
Brian should be able to confirm. But I guess you cannot now build an SSI
cluster with nodes having only a local hard disk.

> 3) How do I add and remove nodes, both in the presence and absence of
> CFS?

Same here, either CFS or GFS. Adding a node == modifying /etc/clustertab
and rebuilding the tftp boot image, in the case of SSI.

> 4) If I don't want to use CFS, what are the features I cannot use?

For SSI you need to have a shared root. If you are looking at building any
other service on top of CI, then you can use the membership service and
node communication services provided by CI (messaging/RPC). Take a look at
opendlm; it depends only on CI and brings distributed locking
functionality to the cluster.
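The CDSL arrangement described above can be sketched in shell. This is a
minimal illustration, assuming the Debian-style layout mentioned earlier
(/cluster/nodenum1, /cluster/nodenum2); the actual CDSL expansion of
{nodenum} is done per-node by the SSI kernel, and your SSI tools may
create the link differently. The sketch uses a scratch directory so it is
safe to run anywhere:

```shell
#!/bin/sh
# Sketch: set up a context-dependent symlink (CDSL) for /etc/nodename,
# with per-node copies kept under /cluster/nodenum<N>/ as described above.
# ROOT is a scratch directory standing in for the shared root.
ROOT=$(mktemp -d)

# Per-node copies of the node-specific file.
mkdir -p "$ROOT/cluster/nodenum1/etc" "$ROOT/cluster/nodenum2/etc"
echo node1 > "$ROOT/cluster/nodenum1/etc/nodename"
echo node2 > "$ROOT/cluster/nodenum2/etc/nodename"

# The shared /etc/nodename becomes a symlink through the {nodenum}
# context; each node resolves it to its own copy at runtime.
mkdir -p "$ROOT/etc"
ln -s '/cluster/{nodenum}/etc/nodename' "$ROOT/etc/nodename"

# Show the link target (literal here; the SSI kernel expands {nodenum}).
readlink "$ROOT/etc/nodename"
```

On a real SSI shared root, every node sees the same symlink but follows it
to its own file, which is what makes per-node configuration possible under
a single "/".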
> 5) Is it possible to have more than one primary node?

Yes, they are called potential masters. But for the time being this is
only meaningful when used along with GFS; CFS doesn't support failover as
of now.

> 6) Is it possible to boot from a node other than the primary node?

You can put up a different DHCP server and put the network boot image
there, but the "/" will still be provided by the primary. Bruce was
talking last time about a state in which I can leave a file system in an
unavailable state and get it served by some other node. I.e., initially
when the system boots, /home is served by node 1; after the secondary
master (potential master) node 2 joins, you leave /home in an unavailable
state and later ask node 2 to serve /home. This way IO can be balanced
across multiple nodes in the cluster. That will require a sort of layering
of CFS over GFS, I guess.

> Thanking you all in advance ...

-aneesh
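The alternate-DHCP-server idea above can be sketched as an ISC dhcpd host
declaration. This is a hypothetical fragment: the MAC address, IP
addresses, and boot image filename are placeholders, not values from any
real SSI setup, and the `next-server` line is what points the new node at
the alternate TFTP server holding the network boot image:

```shell
#!/bin/sh
# Sketch: a dhcpd.conf host entry letting a node network-boot from a
# server other than the primary. All concrete values are placeholders.
cat > dhcpd-ssi.conf <<'EOF'
host node3 {
  hardware ethernet 00:11:22:33:44:55;  # new node's NIC (placeholder)
  fixed-address 10.0.0.13;              # address assigned to node 3
  next-server 10.0.0.2;                 # alternate TFTP server with the boot image
  filename "ssi-boot.img";              # network boot image built for the cluster
}
EOF
cat dhcpd-ssi.conf
```

Note that, as described above, even with this in place the root ("/") is
still served by the primary node; only the boot image is fetched from the
alternate server.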