From: Gordan B. <go...@bo...> - 2008-11-20 20:10:49
|
Hi,

I'm working on adding GlusterFS support for OSR. I know I mentioned this months ago, but I've waited until glusterfs is actually stable enough to use in production (well, stable enough for me, anyway).

The client-only case will be simple enough, but the client-with-local-storage case is a bit more complicated. GlusterFS uses a local file system as backing storage. That means it needs to mount a local ext3 file system before mounting the glusterfs volume via fuse on top and chroots into it for the 2nd stage boot.

So, I can use the <rootvolume> entry for the glusterfs part, but before mounting that as the rootfs, I first have to mount the underlying ext3 file system as a stage 1.5 rootfs.

I'm currently thinking of implementing it by having <chrootenv> contain the glusterfs backing store. This would be fine as long as the <chrootenv> volume can be considered clobber-safe (i.e. extracting the initrd to it won't ever wipe a path (e.g. /gluster/) that is not contained within the initrd file structure).

1) Does this sound like a reasonable way to do it?

2) Is it safe to assume that chrootenv won't get silently clobbered? I know it doesn't seem to happen, but I'm still a little nervous about it...

Thanks.

Gordan
|
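For illustration, the two-stage mount described above might look roughly like this; the device name, volfile path and mount points are made-up examples, not values taken from OSR:

    #!/bin/sh
    # Sketch of the "stage 1.5" idea: mount the local ext3 backing store first,
    # then mount the GlusterFS volume via FUSE on top before switching root.
    # /dev/sda3, /gluster and /etc/glusterfs/root.vol are illustrative values.

    # Stage 1.5: local ext3 file system holding the GlusterFS backing store
    mount -t ext3 /dev/sda3 /gluster

    # Stage 2: mount the GlusterFS volume (FUSE client) that uses /gluster as
    # its backing store; this becomes the root for the second boot stage
    glusterfs -f /etc/glusterfs/root.vol /mnt/newroot

    # Continue the boot inside the GlusterFS mount (the real initrd uses its
    # own switch-root logic; a plain chroot here is only a sketch)
    chroot /mnt/newroot /sbin/init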
From: Marc G. <gr...@at...> - 2008-11-20 20:41:19
|
Hi Gordan,

On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
> Hi,
>
> I'm working on adding GlusterFS support for OSR. I know I mentioned this
> months ago, but I've waited until glusterfs is actually stable enough to
> use in production (well, stable enough for me, anyway).

;-)

> The client-only case will be simple enough, but for a
> client-with-local-storage case, it's a bit more complicated. GlusterFS
> uses a local file system as backing storage. That means that it needs to
> mount a local ext3 file system before mounting the glusterfs volume via
> fuse on top and chroots into it for the 2nd stage boot.
>
> So, I can use the <rootvolume> entry for the glusterfs part, but before
> mounting that as the rootfs, I first have to mount the underlying ext3
> file system as stage 1.5 rootfs.
>
> I'm currently thinking of implementing it by having <chrootenv> contain
> the glusterfs backing store. This would be fine as long as the
> <chrootenv> volume can be considered clobber-safe (i.e. extracting the
> initrd to it won't ever wipe the path (e.g. /gluster/ ) not contained
> within the initrd file structure.
>
> 1) Does this sound like a reasonable way to do it?

Hmm. I don't like the idea of using the chroot environment in a way it is not expected to be used.

> 2) Is it safe to assume that chrootenv won't get silently clobbered? I
> know it doesn't seem to happen, but I'm still a little nervous about it...

I agree, although I don't think it will be clobbered. But why not go this way:

There is a method/function "clusterfs_services_start" which itself calls ${clutype}_services_start, which should be implemented by etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore there should be a library glusterfs-lib.sh. The function clusterfs_services_start should do all the preparations to mount the cluster filesystem and could also mount a local filesystem as a prerequisite. So I would put it there.

A rootfs can be specified as follows:
<rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

What do you think?

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
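As a rough illustration of the dispatch Marc describes (later in the thread he clarifies that the selection is driven by the rootfs, i.e. the fstype attribute of <rootvolume/>): the generic layer sources a per-filesystem library and calls into it. The helper code below is an assumption for illustration, not the actual clusterfs-lib.sh:

    # Illustrative sketch only: how a generic "clusterfs" layer could hand off
    # to a filesystem-specific library such as etc/glusterfs-lib.sh.

    # $rootfs would come from the fstype attribute of <rootvolume .../>,
    # e.g. fstype="glusterfs" => rootfs=glusterfs
    rootfs="glusterfs"

    # Source the filesystem-specific library (glusterfs-lib.sh in this case)
    . "/etc/${rootfs}-lib.sh"

    clusterfs_services_start() {
        # Delegate to e.g. glusterfs_services_start, which can mount the local
        # ext3 backing store first and then prepare the GlusterFS client.
        "${rootfs}_services_start" "$@"
    }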
From: Gordan B. <go...@bo...> - 2008-11-20 21:55:51
|
Marc Grimme wrote:
> Hi Gordan,
>
> On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
>> Hi,
>>
>> I'm working on adding GlusterFS support for OSR. I know I mentioned this
>> months ago, but I've waited until glusterfs is actually stable enough to
>> use in production (well, stable enough for me, anyway).
> ;-)
>> The client-only case will be simple enough, but for a
>> client-with-local-storage case, it's a bit more complicated. GlusterFS
>> uses a local file system as backing storage. That means that it needs to
>> mount a local ext3 file system before mounting the glusterfs volume via
>> fuse on top and chroots into it for the 2nd stage boot.
>>
>> So, I can use the <rootvolume> entry for the glusterfs part, but before
>> mounting that as the rootfs, I first have to mount the underlying ext3
>> file system as stage 1.5 rootfs.
>>
>> I'm currently thinking of implementing it by having <chrootenv> contain
>> the glusterfs backing store. This would be fine as long as the
>> <chrootenv> volume can be considered clobber-safe (i.e. extracting the
>> initrd to it won't ever wipe the path (e.g. /gluster/ ) not contained
>> within the initrd file structure.
>>
>> 1) Does this sound like a reasonable way to do it?
>
> Hmm. I don't like the idea of using the chroot environment in way it is not
> expected to be used.

I see it more the other way around. Not as mis-using the chroot environment for data storage, but as misusing the data store for the chroot environment. ;)

>> 2) Is it safe to assume that chrootenv won't get silently clobbered? I
>> know it doesn't seem to happen, but I'm still a little nervous about it...
>
> I agree although I don't think it will be clobbered.

The main reason why I suggested it is because it means one fewer partition is required (fewer partitions = more flexible, less forward planning required, less wasted space). I even went as far as thinking about using the same partition for kernel images. It would mean there is only one data volume (plus a swap partition).

> But why not go this way:
> There is a method/function "clusterfs_services_start" which itself calls
> ${clutype}_services_start which should be implemented by
> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore there
> should be a library glusterfs-lib.sh. The function clusterfs_services_start
> should do all the preparations to mount the clusterfilesystem and could also
> mount a local filesystem as a prerequisite. So I would put it there.

Fair enough, that sounds reasonable. Just have to make sure the backing volume is listed in the fstab of the init-root.

> A rootfs can be specified as follows:
> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

Indeed, I figured this out already. :)

I bumped into another problem, though:

# com-mkcdslinfrastructure -r /gluster/root -i
Traceback (most recent call last):
  File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
    from comoonics.cdsl.ComCdslRepository import *
  File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
    defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
NameError: name 'cdslDefaults_element' is not defined

What am I missing?

Gordan
|
From: Gordan B. <go...@bo...> - 2008-11-20 23:16:10
|
Gordan Bobic wrote:
> I bumped into another problem, though:
>
> # com-mkcdslinfrastructure -r /gluster/root -i
> Traceback (most recent call last):
>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
>     from comoonics.cdsl.ComCdslRepository import *
>   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py",
> line 28, in ?
>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> NameError: name 'cdslDefaults_element' is not defined

Digging a little deeper, it turns out that although the package versions (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the new setup (x86-64), the contents of /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally different. The new version seems to have a bunch of definitions that, it would seem, break things.

I replaced it with the old version, and it now seems to complete without errors.

Am I correct in guessing this is a packaging bug?

Gordan
|
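A generic way to confirm this kind of suspicion (that an installed file no longer matches what the RPM shipped) is to let rpm verify the package; these are standard RPM commands, not anything specific to the comoonics packages:

    # Verify installed files against the RPM database; a '5' in the output
    # means the file's checksum differs from what the package shipped
    rpm -V comoonics-cdsl-py

    # Show which package owns the suspicious module
    rpm -qf /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py

    # Compare the full version/release/arch across the two machines
    rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' comoonics-cdsl-py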
From: Marc G. <gr...@at...> - 2008-11-21 07:23:31
|
On Thursday 20 November 2008 22:55:42 Gordan Bobic wrote:
> Marc Grimme wrote:
> > Hi Gordan,
> >
> > On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
> >> Hi,
> >>
> >> I'm working on adding GlusterFS support for OSR. I know I mentioned this
> >> months ago, but I've waited until glusterfs is actually stable enough to
> >> use in production (well, stable enough for me, anyway).
> >
> > ;-)
> >
> >> The client-only case will be simple enough, but for a
> >> client-with-local-storage case, it's a bit more complicated. GlusterFS
> >> uses a local file system as backing storage. That means that it needs to
> >> mount a local ext3 file system before mounting the glusterfs volume via
> >> fuse on top and chroots into it for the 2nd stage boot.
> >>
> >> So, I can use the <rootvolume> entry for the glusterfs part, but before
> >> mounting that as the rootfs, I first have to mount the underlying ext3
> >> file system as stage 1.5 rootfs.
> >>
> >> I'm currently thinking of implementing it by having <chrootenv> contain
> >> the glusterfs backing store. This would be fine as long as the
> >> <chrootenv> volume can be considered clobber-safe (i.e. extracting the
> >> initrd to it won't ever wipe the path (e.g. /gluster/ ) not contained
> >> within the initrd file structure.
> >>
> >> 1) Does this sound like a reasonable way to do it?
> >
> > Hmm. I don't like the idea of using the chroot environment in way it is
> > not expected to be used.
>
> I see it more the other way around. Not as mis-using the chroot
> environment for data storage, but as misusing the data store for the
> chroot environment. ;)

You got that point. But there is no alternative, is there? Still, the idea of the chroot was to build a chroot (either on a local fs or in memory) for services that need to run as a requirement for keeping a clusterfs running without residing on the cluster filesystem, AND to have some kind of last-resort way to access a frozen cluster (fenceacksv, I hate that name).

> >> 2) Is it safe to assume that chrootenv won't get silently clobbered? I
> >> know it doesn't seem to happen, but I'm still a little nervous about
> >> it...
> >
> > I agree although I don't think it will be clobbered.
>
> The main reason why I suggested it is because it means one fewer
> partition is required (fewer partitions = more flexible, less forward
> planning required, less wasted space). I even went as far as thinking
> about using the same partition for kernel images. It would mean there is
> only one data volume (plus a swap partition).

As you like it.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
From: Gordan B. <go...@bo...> - 2008-11-22 15:51:43
|
Marc Grimme wrote:
> But why not go this way:
> There is a method/function "clusterfs_services_start" which itself calls
> ${clutype}_services_start which should be implemented by
> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore there
> should be a library glusterfs-lib.sh. The function clusterfs_services_start
> should do all the preparations to mount the clusterfilesystem and could also
> mount a local filesystem as a prerequisite. So I would put it there.
> A rootfs can be specified as follows:
> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

OK, this solution wins on the grounds that remapping the chrootenv seems to confuse glusterfs.

But ${clutype}_services_start is dependent on ${clutype}, which is returned from getCluType(), which always returns "gfs". Will that not break things?

From what you said, if I have the following entry:
<rootvolume name="/etc/glusterfs/root.vol" fstype="glusterfs"/>
$rootfs comes from $fstype. Is that right? Or from <rootsource name="glusterfs">? Or somewhere else entirely?

Thanks.

Gordan
|
From: Marc G. <gr...@at...> - 2008-11-21 07:19:11
|
On Friday 21 November 2008 00:15:56 Gordan Bobic wrote:
> Gordan Bobic wrote:
> > I bumped into another problem, though:
> >
> > # com-mkcdslinfrastructure -r /gluster/root -i
> > Traceback (most recent call last):
> >   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
> >     from comoonics.cdsl.ComCdslRepository import *
> >   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py",
> > line 28, in ?
> >     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> > NameError: name 'cdslDefaults_element' is not defined
>
> Digging a little deeper, it turns out that although the package versions
> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the
> new setup (x86-64), the contents of the
> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally
> different. The new version seems to have a bunch of definitions that, it
> would seem, break things.
>
> I replaced with the old version, and it now seems to complete without
> errors.
>
> Am I correct in guessing this is a packaging bug?

Yes, you are. I don't know how this could happen, but yes, I bumped into that bug some time ago as well.

You might want to try more recent versions. This should be fixed.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
From: Gordan B. <go...@bo...> - 2008-11-21 09:44:58
|
Marc Grimme wrote:
> On Friday 21 November 2008 00:15:56 Gordan Bobic wrote:
>> Gordan Bobic wrote:
>>> I bumped into another problem, though:
>>>
>>> # com-mkcdslinfrastructure -r /gluster/root -i
>>> Traceback (most recent call last):
>>>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
>>>     from comoonics.cdsl.ComCdslRepository import *
>>>   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py",
>>> line 28, in ?
>>>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
>>> NameError: name 'cdslDefaults_element' is not defined
>> Digging a little deeper, it turns out that although the package versions
>> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the
>> new setup (x86-64), the contents of the
>> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally
>> different. The new version seems to have a bunch of definitions that, it
>> would seem, break things.
>>
>> I replaced with the old version, and it now seems to complete without
>> errors.
>>
>> Am I correct in guessing this is a packaging bug?
> Yes you are. Don't know how this could happen. But yes I bumped into that bug
> also some time before.
>
> You might want to try more recent versions.
>
> This should be fixed.

Do you mean a more recent version as of today? This was broken in the version I yum-installed yesterday.

Gordan
|
From: Marc G. <gr...@at...> - 2008-11-21 09:56:56
|
On Friday 21 November 2008 10:44:29 Gordan Bobic wrote:
> Marc Grimme wrote:
> > On Friday 21 November 2008 00:15:56 Gordan Bobic wrote:
> >> Gordan Bobic wrote:
> >>> I bumped into another problem, though:
> >>>
> >>> # com-mkcdslinfrastructure -r /gluster/root -i
> >>> Traceback (most recent call last):
> >>>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
> >>>     from comoonics.cdsl.ComCdslRepository import *
> >>>   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py",
> >>> line 28, in ?
> >>>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> >>> NameError: name 'cdslDefaults_element' is not defined
> >>
> >> Digging a little deeper, it turns out that although the package versions
> >> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the
> >> new setup (x86-64), the contents of the
> >> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally
> >> different. The new version seems to have a bunch of definitions that, it
> >> would seem, break things.
> >>
> >> I replaced with the old version, and it now seems to complete without
> >> errors.
> >>
> >> Am I correct in guessing this is a packaging bug?
> >
> > Yes you are. Don't know how this could happen. But yes I bumped into that
> > bug also some time before.
> >
> > You might want to try more recent versions.
> >
> > This should be fixed.
>
> Do you mean more recent version as of today? This was broken in the
> version I yum-installed yesterday.

Hm. Are you using preview or productive?

> Gordan

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
From: Gordan B. <go...@bo...> - 2008-11-21 12:25:22
|
Marc Grimme wrote:
> On Friday 21 November 2008 10:44:29 Gordan Bobic wrote:
>> Marc Grimme wrote:
>>> On Friday 21 November 2008 00:15:56 Gordan Bobic wrote:
>>>> Gordan Bobic wrote:
>>>>> I bumped into another problem, though:
>>>>>
>>>>> # com-mkcdslinfrastructure -r /gluster/root -i
>>>>> Traceback (most recent call last):
>>>>>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
>>>>>     from comoonics.cdsl.ComCdslRepository import *
>>>>>   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py",
>>>>> line 28, in ?
>>>>>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
>>>>> NameError: name 'cdslDefaults_element' is not defined
>>>> Digging a little deeper, it turns out that although the package versions
>>>> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the
>>>> new setup (x86-64), the contents of the
>>>> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally
>>>> different. The new version seems to have a bunch of definitions that, it
>>>> would seem, break things.
>>>>
>>>> I replaced with the old version, and it now seems to complete without
>>>> errors.
>>>>
>>>> Am I correct in guessing this is a packaging bug?
>>> Yes you are. Don't know how this could happen. But yes I bumped into that
>>> bug also some time before.
>>>
>>> You might want to try more recent versions.
>>>
>>> This should be fixed.
>> Do you mean more recent version as of today? This was broken in the
>> version I yum-installed yesterday.
>
> Hm. Are you using preview or productive?

I think I have both repositories enabled in yum, so it would have come from whichever has the higher package version number.

Gordan
|
From: Marc G. <gr...@at...> - 2008-11-21 12:33:59
|
On Friday 21 November 2008 13:25:12 Gordan Bobic wrote:
> Marc Grimme wrote:
> > On Friday 21 November 2008 10:44:29 Gordan Bobic wrote:
> >> Marc Grimme wrote:
> >>> On Friday 21 November 2008 00:15:56 Gordan Bobic wrote:
> >>>> Gordan Bobic wrote:
> >>>>> I bumped into another problem, though:
> >>>>>
> >>>>> # com-mkcdslinfrastructure -r /gluster/root -i
> >>>>> Traceback (most recent call last):
> >>>>>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
> >>>>>     from comoonics.cdsl.ComCdslRepository import *
> >>>>>   File
> >>>>> "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line
> >>>>> 28, in ?
> >>>>>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> >>>>> NameError: name 'cdslDefaults_element' is not defined
> >>>>
> >>>> Digging a little deeper, it turns out that although the package
> >>>> versions (comoonics-cdsl-py-0.2-11) are the same on my old setup
> >>>> (i386) and the new setup (x86-64), the contents of the
> >>>> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are
> >>>> totally different. The new version seems to have a bunch of
> >>>> definitions that, it would seem, break things.
> >>>>
> >>>> I replaced with the old version, and it now seems to complete without
> >>>> errors.
> >>>>
> >>>> Am I correct in guessing this is a packaging bug?
> >>>
> >>> Yes you are. Don't know how this could happen. But yes I bumped into
> >>> that bug also some time before.
> >>>
> >>> You might want to try more recent versions.
> >>>
> >>> This should be fixed.
> >>
> >> Do you mean more recent version as of today? This was broken in the
> >> version I yum-installed yesterday.
> >
> > Hm. Are you using preview or productive?
>
> I think I have both repositories enabled in yum, so it would have come
> from whichever has the higher package version number.

So yes, in the preview we had this bug, so an update should fix it. You could also just update comoonics-cdsl-py and deps. That should do it.

> Gordan

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
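For reference, the update Marc suggests would be along these lines; the repository names are whatever is already configured locally, so this is only the generic yum invocation:

    # Update the package (yum pulls in dependent comoonics packages as needed);
    # add --enablerepo=... if the preview repository is normally disabled
    yum update comoonics-cdsl-py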
From: Marc G. <gr...@at...> - 2008-11-22 18:00:18
|
On Saturday 22 November 2008 16:51:31 Gordan Bobic wrote:
> Marc Grimme wrote:
> > But why not go this way:
> > There is a method/function "clusterfs_services_start" which itself calls
> > ${clutype}_services_start which should be implemented by
> > etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore
> > there should be a library glusterfs-lib.sh. The function
> > clusterfs_services_start should do all the preparations to mount the
> > clusterfilesystem and could also mount a local filesystem as a
> > prerequisite. So I would put it there. A rootfs can be specified as
> > follows:
> > <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>
>
> OK, this solution wins on the grounds that remapping the chrootenv seems
> to confuse glusterfs.
>
> But ${clutype}_services_start is dependent on ${clutype}, which is
> returned from getCluType(), which always returns "gfs". Will that not
> break things?

Not any more: take the latest version of clusterfs-lib.sh; it is dependent on rootfs. I changed it to how it should be recently.

> From what you said, if I have the following entry:
> <rootvolume name="/etc/glusterfs/root.vol" fstype="glusterfs"/>
> $rootfs comes from $fstype. Is that right? Or from
> <rootsource name="glusterfs">? Or somewhere else entirely?

It comes from the fstype attribute, not from the name.

> Thanks.
>
> Gordan

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
From: Gordan B. <go...@bo...> - 2008-11-22 21:49:15
|
Marc Grimme wrote:
> On Saturday 22 November 2008 16:51:31 Gordan Bobic wrote:
>> Marc Grimme wrote:
>>> But why not go this way:
>>> There is a method/function "clusterfs_services_start" which itself calls
>>> ${clutype}_services_start which should be implemented by
>>> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore
>>> there should be a library glusterfs-lib.sh. The function
>>> clusterfs_services_start should do all the preparations to mount the
>>> clusterfilesystem and could also mount a local filesystem as a
>>> prerequisite. So I would put it there. A rootfs can be specified as
>>> follows:
>>> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>
>> OK, this solution wins on the grounds that remapping the chrootenv seems
>> to confuse glusterfs.
>>
>> But ${clutype}_services_start is dependent on ${clutype}, which is
>> returned from getCluType(), which always returns "gfs". Will that not
>> break things?
>
> Not any more: take the latest version of clusterfs-lib.sh; it is dependent
> on rootfs. I changed it to how it should be recently.

Do you mean you changed getCluType(), or made it irrelevant? I freshly rebuilt the test box this morning, and that's where getCluType() was always returning "gfs". Anyway, I'll just ignore it if it's not important.

Gordan
|
From: Marc G. <gr...@at...> - 2008-11-23 10:32:03
|
On Saturday 22 November 2008 22:37:00 Gordan Bobic wrote:
> Marc Grimme wrote:
> > On Saturday 22 November 2008 16:51:31 Gordan Bobic wrote:
> >> Marc Grimme wrote:
> >>> But why not go this way:
> >>> There is a method/function "clusterfs_services_start" which itself
> >>> calls ${clutype}_services_start which should be implemented by
> >>> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore
> >>> there should be a library glusterfs-lib.sh. The function
> >>> clusterfs_services_start should do all the preparations to mount the
> >>> clusterfilesystem and could also mount a local filesystem as a
> >>> prerequisite. So I would put it there. A rootfs can be specified as
> >>> follows:
> >>> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>
> >>
> >> OK, this solution wins on the grounds that remapping the chrootenv seems
> >> to confuse glusterfs.
> >>
> >> But ${clutype}_services_start is dependent on ${clutype}, which is
> >> returned from getCluType(), which always returns "gfs". Will that not
> >> break things?
> >
> > Not any more: take the latest version of clusterfs-lib.sh; it is
> > dependent on rootfs. I changed it to how it should be recently.
>
> Do you mean you changed getCluType(), or made it irrelevant? I freshly
> rebuilt the test box this morning, and that's where getCluType() was
> always returning "gfs". Anyway, I'll just ignore it if it's not important.

I wouldn't say that it's not important (although there are no other options implemented yet), but anyway it is "thought" as follows:

clutype should return the "clustertype" we are talking about. When using a cluster with /etc/cluster/cluster.conf it is a "gfs" cluster. I don't like the "gfs", but I cannot change it so easily, so we have to stay with it for some time. In future versions we are planning to change it to how it should be, to something like "redhat-cluster". But for now, "gfs" has to stay as the clustertype for any /etc/cluster/cluster.conf based cluster.

"rootfs" is a little bit ahead, as we are supporting more than one rootfs (gfs, ocfs2 and ext3 ... glusterfs ;-) ). As we implemented ocfs2 and ext3 recently, we had to review the use of "clutype" and "rootfs" as it should be.

So in your case clutype will stay "gfs" (yes, I don't like it either, but we'll change it some versions ahead; at the latest when we push sles10 into production. Until now you have to use /etc/cluster/cluster.conf in sles10, although there is no gfs or anything of the sort.) and "rootfs" should be "glusterfs".

Therefore you have to build a glusterfs-lib.sh with all functions called by clusterfs-lib.sh implemented. We will also need a listfile for it. But you've done this before, so I don't expect it to be any problem. If you like, I'll send you a list of functions to implement tomorrow. Ok?

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
|
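For readers following along, a skeleton of what such a library might look like. The exact set of functions is the list Marc offers to send, so the names below are assumptions modelled on the pattern described in this thread (a ${rootfs}_services_start hook that mounts the ext3 backing store before bringing up the GlusterFS client), not the actual OSR interface:

    # /etc/glusterfs-lib.sh -- hypothetical sketch, not the real OSR library.
    # Device, volfile and mount-point paths are placeholders.

    # Called by clusterfs-lib.sh (via the ${rootfs}_services_start dispatch)
    # to start whatever is needed before the root filesystem can be mounted.
    glusterfs_services_start() {
        # Stage 1.5: mount the local ext3 backing store used by GlusterFS
        mount -t ext3 /dev/sda3 /gluster || return 1

        # Make sure the fuse module is available for the GlusterFS client
        modprobe fuse 2>/dev/null
    }

    # Hypothetical mount hook: bring up the GlusterFS client on the mount
    # point that will become the new root.
    glusterfs_mount() {
        volfile="$1"     # e.g. /etc/glusterfs/root.vol (from <rootvolume name=.../>)
        mountpoint="$2"  # e.g. /mnt/newroot

        glusterfs -f "$volfile" "$mountpoint"
    }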