From: Gordan B. <go...@bo...> - 2008-11-20 21:55:51
Marc Grimme wrote:
> Hi Gordan,
>
> On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
>> Hi,
>>
>> I'm working on adding GlusterFS support for OSR. I know I mentioned this
>> months ago, but I've waited until glusterfs is actually stable enough to
>> use in production (well, stable enough for me, anyway).
>
> ;-)
>
>> The client-only case will be simple enough, but the
>> client-with-local-storage case is a bit more complicated. GlusterFS
>> uses a local file system as backing storage. That means it needs to
>> mount a local ext3 file system before mounting the glusterfs volume via
>> fuse on top, and chroot into it for the 2nd-stage boot.
>>
>> So, I can use the <rootvolume> entry for the glusterfs part, but before
>> mounting that as the rootfs, I first have to mount the underlying ext3
>> file system as a stage-1.5 rootfs.
>>
>> I'm currently thinking of implementing it by having <chrootenv> contain
>> the glusterfs backing store. This would be fine as long as the
>> <chrootenv> volume can be considered clobber-safe, i.e. extracting the
>> initrd to it won't ever wipe a path (e.g. /gluster/) not contained
>> within the initrd file structure.
>>
>> 1) Does this sound like a reasonable way to do it?
>
> Hmm. I don't like the idea of using the chroot environment in a way it
> is not expected to be used.

I see it more the other way around: not as misusing the chroot
environment for data storage, but as misusing the data store for the
chroot environment. ;)

>> 2) Is it safe to assume that chrootenv won't get silently clobbered? I
>> know it doesn't seem to happen, but I'm still a little nervous about it...
>
> I agree, although I don't think it will be clobbered.

The main reason I suggested it is that it means one fewer partition is
required (fewer partitions = more flexibility, less forward planning,
less wasted space). I even went as far as thinking about using the same
partition for kernel images.
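The mount order described above (local ext3 backing store first, then the glusterfs volume via fuse on top) could be sketched roughly as below. All device names, mount points, and the volume-file path are placeholder assumptions, not OSR defaults; with DRYRUN=1 the commands are only printed, so the sequence can be checked without root.

```shell
#!/bin/sh
# Sketch of the stage-1.5 mount order: ext3 backing store first,
# glusterfs on top. /dev/sda3, /gluster, /mnt/newroot and the volfile
# path are placeholders.

run() {
    # With DRYRUN=1, print the command instead of executing it.
    if [ "${DRYRUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

stage15_mount() {
    backing_dev="$1"   # local ext3 partition backing the glusterfs export
    backing_mnt="$2"   # backing store mount point (e.g. /gluster)
    volfile="$3"       # glusterfs client volume spec (site-specific)
    root_mnt="$4"      # stage-2 rootfs, mounted via fuse

    # Order matters: the backing store must be up before the fuse mount.
    run mount -t ext3 "$backing_dev" "$backing_mnt" || return 1
    run mount -t glusterfs "$volfile" "$root_mnt" || return 1
}
```

For example, `DRYRUN=1; stage15_mount /dev/sda3 /gluster /etc/glusterfs/client.vol /mnt/newroot` prints the two mount commands in the required order.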
It would mean there is only one data volume (plus a swap partition).

> But why not go this way:
> There is a method/function "clusterfs_services_start" which itself calls
> ${clutype}_services_start which should be implemented by
> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore there
> should be a library glusterfs-lib.sh. The function clusterfs_services_start
> should do all the preparations to mount the cluster filesystem and could
> also mount a local filesystem as a prerequisite. So I would put it there.

Fair enough, that sounds reasonable. I just have to make sure the backing
volume is listed in the fstab of the init-root.

> A rootfs can be specified as follows:
> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

Indeed, I figured this out already. :)

I bumped into another problem, though:

# com-mkcdslinfrastructure -r /gluster/root -i
Traceback (most recent call last):
  File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
    from comoonics.cdsl.ComCdslRepository import *
  File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
    defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
NameError: name 'cdslDefaults_element' is not defined

What am I missing?

Gordan