Re: [SSI] Run Cluster with IDE Drives
From: Brian J. W. <Bri...@co...> - 2002-02-02 01:42:39
"--==c.g.==--" wrote:
>
> Hi !
>
> Is it possible to run cluster whithout an external shared drive, e.g. with
> normal IDE drives. I know that it's not so comfortable, when adding ne nodes
> to do the whole installation on them, but for my case it would be ok.
>
> So:
> Is it possible?
> Has anybody done it?

It is possible to run SSI clusters without a shared root, although it is not
recommended. Having a different root on each node can be particularly
problematic when migrating processes. When a process is migrating, it has to
reopen all of the files it had open on the old node. It expects that each
file should have the same pathname, the same inode number, the same
filesystem device number, etc. on the new node. This is really only possible
with a shared root.

There are some shared root solutions that would work with your hardware. One
is the GFS Network Block Device (GNBD). In this configuration, you need one
node outside the cluster with a fairly large disk. The disk must hold not
only its own root, but also a GFS root for the cluster. A good minimum disk
size is 4 GB. Sometime next week, I'll release instructions on how to use
GNBD with SSI 0.6.0. It should be a bit easier than the instructions were
for 0.5.2.

Another solution that only requires a single machine is SSI on UML. With
this solution, you would have a cluster of virtual machines. Kitrick Sheets
contributed code to 0.6.0 that makes this possible, albeit without a shared
root. It may not be too difficult to make it work with a shared root. He and
I will try to get some instructions out in the next couple of weeks on how
to do this.

A longer term solution is the Cluster File System (CFS). It's basically NFS
with tight coherency, so that the internal IDE drive of one of the nodes can
be used as the root of the entire cluster. Combined with some volume
management software, the shared root could be mirrored across the internal
drives of two or more nodes, thus providing better availability.
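As an aside, the file-identity problem described above (the reason a shared
root matters for process migration) is easy to illustrate. The sketch below
is not SSI code; it's a hypothetical user-space analogue of the check: a
reopen after migration only makes sense if the file on the new node presents
the same pathname, inode number, and filesystem device number.

```python
import os

def file_identity(path):
    """Return the identity a migrating process implicitly relies on:
    (pathname, inode number, filesystem device number)."""
    st = os.stat(path)
    return (path, st.st_ino, st.st_dev)

def can_reopen(old_identity, path):
    """Reopening on the new node is only safe if the file there
    presents the same identity as it did on the old node."""
    try:
        return file_identity(path) == old_identity
    except FileNotFoundError:
        return False
```

With a shared root, both nodes stat the very same inode, so the identities
match. With separate IDE roots, even an identically named file generally has
a different inode and device number, and the check fails.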
Dave Zafman has included the current snapshot of his CFS work in 0.6.0. It's
derived from Non-Stop Clusters (NSC) for UnixWare, the Linux NFS
implementation, and OpenGFS. It doesn't yet work, but it's provided for
reference.

--
Brian Watson                | "Now I don't know, but I been told it's
Linux Kernel Developer      | hard to run with the weight of gold,
Open SSI Clustering Project | Other hand I heard it said, it's
Compaq Computer Corp        | just as hard with the weight of lead."
Los Angeles, CA             |              -Robert Hunter, 1970

mailto:Bri...@co...
http://opensource.compaq.com/