From: Marc G. <gr...@at...> - 2008-04-17 09:59:51
On Thursday 17 April 2008 11:15:42 go...@bo... wrote:
> On Thu, 17 Apr 2008, Marc Grimme wrote:
> >> In the mirror client/server, it would be similar to a DRBD+GFS setup,
> >> only scalable to more than 2-3 nodes (IIRC, DRBD only supports up to
> >> 2-3 nodes at the moment). Each node would mount its local mirror as OSR
> >> (as it does with DRBD).
> >>
> >> The upshot is that as far as I can make out, unlike GFS, split-brains
> >> are less of an issue in FS terms - GlusterFS would sort that out, so in
> >> theory, we could have an n-node cluster with quorum of 1.
> >>
> >> The only potential issue with that would be migration of IPs - if it
> >> split-brains, it would, in theory, cause an IP resource clash. But I
> >> think the scope for FS corruption would be removed. There might still be
> >> file clobbering on resync, but the FS certainly wouldn't get totally
> >> destroyed like with split-brain GFS and shared SAN storage (DRBD+GFS
> >> also has the same benefit that you get to keep at least one version of
> >> the FS after split-brain).
> >
> > And wouldn't the IP thing, if appropriate, be handled via a
> > cluster manager (rgmanager)?
>
> Indeed it would, but that would still be susceptible to split-brain IP
> clashes. But fencing should, hopefully, stop that from ever happening.

Yes. The rgmanager, or even heartbeat or any other HA cluster software,
has its own way of detecting and resolving split-brain scenarios. The
rgmanager uses the same functionality as GFS does.

> >>>> On a separate note, am I correct in presuming that the diet version
> >>>> of the initrd with the kernel drivers pruned and additional package
> >>>> filtering added as per the patch I sent a while back was not deemed a
> >>>> good idea?
> >>>
> >>> Thanks for reminding me. I forgot to answer, sorry.
> >>>
> >>> The idea itself is good. But originally and by concept the initrd is
> >>> designed to be an initrd used for different hardware configurations.
> >>
> >> Same initrd for multiple configurations? Why is this useful? Different
> >> configurations could also run different kernels, which would invalidate
> >> the shared initrd concept...
> >
> > Not necessarily. It was a design idea and still is a kind of USP and,
> > most important, something other customers use.
> >
> > Just a small example why. Let's suppose you have HP servers from the
> > same product branch (like HP DL38x) but of different generations. Then
> > the onboard NICs on the older ones would use the tg3/bcm5700 driver,
> > whereas newer generations use the bnx2 driver for their onboard NICs.
> > And when bringing in IBM/Sun/Dell or whatever other servers, it becomes
> > more complicated. And all this should be handled by one single shared
> > boot image.
> >
> > Did this explain the problem?
>
> How do you work around the fact that each node needs a different
> modprobe.conf for the different NIC/driver bindings?

The hardware detection takes place in the initrd, and the "generated"
initrd is copied onto the root disk during the boot process.

> Gordan
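
For reference, the client-side replication Gordan describes above would
look roughly like this in a GlusterFS volfile -- written from memory of
the 1.3-era syntax, so take the option names with a grain of salt; hosts
and volume names are made up:

  volume node1
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.1
    option remote-subvolume brick
  end-volume

  volume node2
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.0.2
    option remote-subvolume brick
  end-volume

  # each node mounts this replicated volume as its (shared) root
  volume mirror
    type cluster/afr
    subvolumes node1 node2
  end-volume

A third or fourth replica would just be another protocol/client volume
added to the subvolumes line, which is where the "scales past 2-3 nodes"
point comes from.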
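
On the IP side, the floating address would typically live in cluster.conf
as an rgmanager <ip> resource inside a service, roughly like this (address
and service name are made up):

  <rm>
    <resources>
      <ip address="192.168.0.100" monitor_link="1"/>
    </resources>
    <service autostart="1" name="web">
      <ip ref="192.168.0.100"/>
    </service>
  </rm>

If two halves of a split cluster both start the service, both bring up
192.168.0.100 -- which is exactly the clash mentioned above, and why
fencing has to complete before rgmanager starts anything.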
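
And to make the hardware-detection answer a bit more concrete, the idea is
roughly the following (a simplified sketch only, not the actual OSR initrd
code; the mount point and driver handling are illustrative):

  #!/bin/sh
  # Sketch: the shared initrd carries drivers for all supported NICs,
  # probes the PCI bus, loads whatever matches, and then writes the
  # resulting bindings into a node-local modprobe.conf on the root disk.

  NEWROOT=/mnt/newroot        # illustrative mount point of the root

  # modprobe resolves the modalias strings against modules.alias itself,
  # so devices without a matching driver are simply skipped.
  for dev in /sys/bus/pci/devices/*; do
      modprobe "$(cat "$dev/modalias")" 2>/dev/null
  done

  # Record which driver ended up bound to each network interface.
  : > "$NEWROOT/etc/modprobe.conf"
  i=0
  for nic in /sys/class/net/eth*; do
      [ -e "$nic/device/driver" ] || continue
      drv=$(basename "$(readlink "$nic/device/driver")")
      echo "alias eth$i $drv" >> "$NEWROOT/etc/modprobe.conf"
      i=$((i + 1))
  done

That way every node boots the same initrd but ends up with a modprobe.conf
that matches its own hardware.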
Marc.

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/