From: Gordan B. <go...@bo...> - 2009-01-29 17:42:13
On Thu, 29 Jan 2009 18:03:17 +0100, Marc Grimme <gr...@at...> wrote:

>> > Note: The first triggering of udev is just to cause the modules to be
>> > loaded once to save all MAC addresses. Then they are unloaded again.
>> >
>> > But in order to detect the nics I can also change the process as
>> > follows:
>> > 1. Load drivers as specified in modprobe.conf
>> > 2. Save the drivers
>> > 3. Trigger udev
>> > 4. Save MACs
>> > 5. Unload all newly loaded drivers
>> > But I think this wouldn't be too universal, as it does not really work
>> > reliably with corrupt modprobe.confs.
>>
>> Is "it must work with broken/corrupt modprobe.conf" really a reasonable
>> requirement?
>
> I've seen it not only once. Case 1: mixed hardware. Case 2: cloned
> clusters where someone forgot to change the modprobe.conf.

Of course - I have done it more than once myself. :^)

Anyway, as I already said, I'm now fully convinced that your original idea
(@driver) is the best solution. Sorry I doubted it. :)

> I don't know if this is reasonable. I would say it's a positive side
> effect of better supporting mixed hardware, and that's the real reason.
>
> There are customers who are using our initrd to be able to boot a guest
> on real physical hardware and vice versa if need be. Then the @driver
> concept is quite a nice thing to have.

Agreed.

>> > The other way would do the same and additionally would also work with
>> > corrupt modprobe.confs and @driver in the cluster.conf.
>>
>> What happens when @driver and modprobe.conf contradict each other? Which
>> takes precedence?
>
> @driver. If you leave it out it won't be used. So everything works as
> before, but you can add such a thing.

OK. What about making @driver mandatory? It would mean there is less scope
for a mistake.
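To make the @driver idea concrete, a NIC entry in cluster.conf might then
look roughly like this (a sketch only - the <eth> element, the com_info
wrapper and the other attribute names here are assumptions based on this
thread, not the actual schema):

```xml
<clusternode name="node1" nodeid="1">
  <com_info>
    <!-- @driver, when present, overrides whatever modprobe.conf says
         for this NIC; omitting it keeps the old behaviour -->
    <eth name="eth0" mac="00:0C:29:3B:5E:01" driver="e1000"/>
  </com_info>
</clusternode>
```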
Perhaps for now backward compatibility is important for those who blindly
"yum update" and expect everything to still work, but I think mkinitrd
should at least throw a warning if @driver isn't present, saying that
non-use of it is deprecated?

>> > I still think this is a general way which is quite stable and most
>> > universal.
>>
>> I agree that it should be stable, but I'm still in two minds about
>> supporting the case of corrupt modprobe.conf. I can see that different
>> cluster nodes could have different hardware and thus different
>> modprobe.conf-s, which means that there are only two options:
>>
>> 1) Specify drivers in cluster.conf and ignore modprobe.conf completely.
>> Note that this also applies to storage drivers - if the nodes are
>> different they could also have different disk controllers (very relevant
>> for shared-SCSI-bus, DRBD and GlusterFS solutions), which would also
>> cause similar problems.
>>
>> 2) Load each node's modprobe.conf (hopefully /cdsl.local is mounted off
>> the shared file system and not a local partition on each disk - not a
>> requirement (not enforced, at least) at the moment!) into the initrd and
>> analyze it at runtime based on which node we are running on. The SCSI
>> controller drivers would have to be loaded after the NICs are set up,
>> since until we get MACs we don't know which node we're running on.
>>
>> I can see advantages to both approaches. 1) is more elegant from the
>> implementation point of view, since we only have to configure storage
>> and NICs for all nodes in one file. 2) is more elegant because there are
>> no redundant configuration entries between cluster.conf and
>> modprobe.conf. Having said that, we need at least some of the NIC setup
>> in cluster.conf, so that already makes the configuration redundancy
>> necessary anyway.
>>
>> OK, I think I'm convinced - maybe it would be better to ignore
>> modprobe.conf altogether.
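The "save MACs" step that both variants rely on (so the initrd can tell
which node it is running on before modprobe.conf is consulted) could be
sketched like this - this is an illustration only, not the actual
open-sharedroot mkinitrd code:

```shell
#!/bin/sh
# After udev has been triggered and drivers are loaded, walk the detected
# NICs via sysfs and record name, MAC address and owning driver, so the
# freshly loaded modules can be identified (and unloaded again) later.
record_nic_drivers() {
    for nic in /sys/class/net/*; do
        name=$(basename "$nic")
        [ "$name" = "lo" ] && continue               # skip loopback
        mac=$(cat "$nic/address" 2>/dev/null) || continue
        if [ -e "$nic/device/driver" ]; then         # virtual NICs have no driver link
            drv=$(basename "$(readlink "$nic/device/driver")")
            printf '%s %s %s\n' "$name" "$mac" "$drv"
        fi
    done
}

record_nic_drivers
```

The recorded MAC list can then be matched against the @mac entries in
cluster.conf to identify the local node.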
>> Does that mean something similar is required for disk controller drivers
>> in cluster.conf? :)
>
> I knew you would come up with something like this, and the answer is yes,
> but not now.

Sorry. :-( I didn't mean to be difficult, just trying to cover an extra
base that seemed like a logical extension.

> I want to implement and see the @driver scenario; then we can easily add
> the same thing for storage. But there you normally don't have that
> ordering problem.
> But still I think it's a good idea to later have it there too.

Great, thanks for clearing that up. Please post when the updated package is
in preview and I'll test it on the cluster that I found to be affected by
the issue. :)

Gordan