From: <go...@bo...> - 2008-04-17 09:37:11
On Thu, 17 Apr 2008, Marc Grimme wrote:
>> In the mirror client/server case, it would be similar to a DRBD+GFS
>> setup, only scalable to more than 2-3 nodes (IIRC, DRBD only supports
>> up to 2-3 nodes at the moment). Each node would mount its local mirror
>> as OSR (as it does with DRBD).
>>
>> The upshot is that, as far as I can make out, unlike GFS, split-brains
>> are less of an issue in FS terms - GlusterFS would sort that out, so in
>> theory we could have an n-node cluster with a quorum of 1.
>>
>> The only potential issue with that would be migration of IPs - if it
>> split-brains, it would, in theory, cause an IP resource clash. But I
>> think the scope for FS corruption would be removed. There might still
>> be file clobbering on resync, but the FS certainly wouldn't get totally
>> destroyed like with split-brain GFS on shared SAN storage (DRBD+GFS
>> also has the same benefit that you get to keep at least one version of
>> the FS after a split-brain).
>
> And wouldn't the IP part, where appropriate, be handled via a cluster
> manager (rgmanager)?

Indeed it would, but that would still be susceptible to split-brain IP
clashes. But fencing should, hopefully, stop that from ever happening.

>>>> On a separate note, am I correct in presuming that the diet version
>>>> of the initrd, with the kernel drivers pruned and additional package
>>>> filtering added as per the patch I sent a while back, was not deemed
>>>> a good idea?
>>>
>>> Thanks for reminding me. I forgot to answer, sorry.
>>>
>>> The idea itself is good. But originally, and by concept, the initrd is
>>> designed to be a single initrd used across different hardware
>>> configurations.
>>
>> Same initrd for multiple configurations? Why is this useful? Different
>> configurations could also run different kernels, which would invalidate
>> the shared initrd concept...
>
> Not necessarily. It was a design idea, it still is a kind of USP and,
> most importantly, something other customers use.
>
> Just a small example of why. Suppose you have HP servers from the same
> product line (like the HP DL38x) but of different generations. The
> onboard NICs on the older ones use the tg3/bcm5700 driver, whereas
> newer generations use the bnx2 driver for their onboard NICs. And when
> bringing in IBM/Sun/Dell or whatever other servers, it becomes even
> more complicated. All of this should be handled by one single shared
> boot.
>
> Does this explain the problem?

How do you work around the fact that each node needs a different
modprobe.conf for the different NIC/driver bindings?

Gordan
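
As an illustration of the question above, here is a minimal sketch (not
necessarily how the OSR initrd actually does it; the driver names are
only examples) of how a single shared initrd could bind the right NIC
driver at boot without a node-specific modprobe.conf, by resolving each
PCI device's modalias string:

    # Sketch: probe every PCI device and let modprobe resolve its
    # modalias (e.g. pci:v000014E4d...) to the matching module
    # (tg3, bnx2, ...), so one initrd covers hardware generations
    # with different onboard NICs.
    for dev in /sys/bus/pci/devices/*; do
        [ -r "$dev/modalias" ] || continue
        modprobe "$(cat "$dev/modalias")" 2>/dev/null || true
    done

With that approach, a per-node modprobe.conf would only be needed for
interface naming/ordering (e.g. pinning eth0/eth1 to MAC addresses),
not for driver selection itself.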