From: Gordan B. <go...@bo...> - 2009-01-29 16:30:58
On Thu, 29 Jan 2009 16:39:30 +0100, Marc Grimme <gr...@at...> wrote:

[...]

> Note: The first triggering of udev is just to cause the modules to be
> loaded once to save all MAC addresses. Then they are unloaded again.
>
> But in order to detect the NICs I can also change the process as
> follows:
> 1. Load drivers as specified in modprobe.conf
> 2. Save the drivers
> 3. Trigger udev
> 4. Save MACs
> 5. Unload all newly loaded drivers
> But I think this wouldn't be too universal, as it does not really work
> stably for corrupt modprobe.confs.

Is "it must work with a broken/corrupt modprobe.conf" really a reasonable
requirement?

> The other way would do the same and additionally would also work with
> corrupt modprobe.confs and @driver in the cluster.conf.

What happens when @driver and modprobe.conf contradict each other? Which
takes precedence?

> I still think this is a general way which is quite stable and most
> universal.

I agree that it should be stable, but I'm still in two minds about
supporting the case of a corrupt modprobe.conf. I can see that different
cluster nodes could have different hardware and thus different
modprobe.confs, which means that there are only two options:

1) Specify drivers in cluster.conf and ignore modprobe.conf completely.
Note that this also applies to storage drivers - if the nodes are
different they could also have different disk controllers (very relevant
for shared-SCSI-bus, DRBD and GlusterFS solutions), which would cause
similar problems.

2) Load each node's modprobe.conf into the initrd (hopefully /cdsl.local
is mounted off the shared file system and not a local partition on each
disk - not a requirement, or at least not enforced, at the moment!) and
analyse it at runtime based on which node we are running on. The SCSI
controller drivers would have to be loaded after the NICs are set up,
since until we have the MACs we don't know which node we are running on.

I can see advantages to both approaches. 1) is more elegant from the
implementation point of view, since we only have to configure storage and
NICs for all nodes in one file. 2) is more elegant because there are no
redundant configuration entries between cluster.conf and modprobe.conf.
Having said that, we need at least some of the NIC setup in cluster.conf,
so that already makes some configuration redundancy necessary anyway.

OK, I think I'm convinced - maybe it would be better to ignore
modprobe.conf altogether. Does that mean something similar is required
for disk controller drivers in cluster.conf? :)

Gordan
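
P.S. To make sure we are talking about the same thing, here is roughly how
I picture your trigger-save-unload pass inside the initrd. This is only a
sketch - udevtrigger/udevsettle are the RHEL5-era binary names and the
/tmp file names are made up for illustration:

  # remember which modules were loaded before udev runs
  lsmod | awk 'NR>1 {print $1}' | sort > /tmp/mods.before

  # let udev load the NIC drivers and wait for it to finish
  udevtrigger
  udevsettle --timeout=30

  # record one "interface MAC" pair per NIC
  for nic in /sys/class/net/eth*; do
      echo "$(basename $nic) $(cat $nic/address)"
  done > /tmp/macs

  # unload only the modules udev pulled in
  # (dependency ordering between modules is ignored here)
  lsmod | awk 'NR>1 {print $1}' | sort > /tmp/mods.after
  comm -13 /tmp/mods.before /tmp/mods.after | xargs -r -n1 rmmod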
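
P.P.S. If we do go the cluster.conf-only route, I would imagine the
per-node section looking something like the fragment below. The driver
attribute and the scsi element are pure guesswork on my part, not the
existing schema - it is just to show where the NIC and disk controller
drivers could live side by side:

  <clusternode name="node1" nodeid="1">
    <com_info>
      <eth name="eth0" mac="00:16:3E:12:34:56" driver="e1000"/>
      <scsi driver="mptscsih"/>
    </com_info>
  </clusternode>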