From: Marc G. <gr...@at...> - 2009-01-21 12:29:28
|
Hi Gordan,

On Wednesday 21 January 2009 02:52:57 Gordan Bobic wrote:
> Hi,
>
> It would appear that
> /opt/atix/comoonics-bootimage/boot-scripts/etc/rhel5/hardware-lib.sh has
> gone through a few changes in the past few months, which, unfortunately,
> break it for me.
>
> The problem is in the ordering of the detected NICs. On one of my
> systems I have a dual e1000 built into the mobo, and an e100 as an
> add-in card. /etc/modprobe.conf lists eth0 and eth1 as the e1000s, and
> eth2 as e100. This works fine with hardware-lib.sh v1.5, but with v1.7
> the ordering seems to be both unstable (about 1/10 of the time it'll
> actually get the NIC ordering as expected and specified in cluster.conf
> and the rest of the time it'll do something different) and inconsistent
> with what is in cluster.conf and modprobe.conf.

That's strange. I have the same problem on one cluster, just as you describe it: one time everything works and the next time it doesn't. But all the other clusters work. The reason why I changed the hardware detection for rhel5 is that it didn't work for VMs (especially kvm), and I didn't find any problems on any of the other clusters (except for that one of mine and this one of yours). I think I have to look deeper into the matter.

So what you are saying is that if you just change hardware-lib.sh from 1.7 back to 1.5, everything works fine? I thought it was due to the order (that's what I changed) in which udevd and kudzu/modprobe eth* are called. Older versions first called kudzu, then probed for the NICs, and then started udevd. Now I first start udevd, then - if appropriate - kudzu, and then probe for the NICs. I always thought it was because of that order. But if the new order works with hardware-lib.sh v1.5 and not with v1.7, it isn't because of the order, as the order is defined by linuxrc.generic.sh. Can you confirm that it's only the version of hardware-lib.sh that makes the difference?

> The last version that works for me is v1.5, and the latest released
> version (I'm talking about CVS version numbers here) appears to be v1.7
> for this file (in the comoonics-bootimage-1.3-40.noarch.rpm release).
>
> Needless to say, trying to boot off an iSCSI shared root with the NIC
> not starting because eth designation doesn't match the MAC doesn't get
> very far. :-/

Very needless. It's the same for non-iSCSI clusters ;-), so this needs to be fixed.

> On a separate note, would it perhaps be a good idea to also have an
> updinitrd script? After a few versions of the clustering tools and OSR
> tools, it's impossible to tell what bugs could be introduced that break
> things. Granted, indiscriminately doing "yum update" is a bad idea, but
> it happens to the best of us that we miss something that we really ought
> to exclude. But what could be done instead is to have an updinitrd
> script that opens the current initrd and just modifies the handful of
> files that need changing (e.g. adding a service to cluster.conf) before
> re-cpio-ing it. Any thoughts on this idea? I know that in the ideal
> world it shouldn't be needed, but this is exactly what I ended up having
> to do yesterday because new initrds just wouldn't boot (there was an
> additional problem _just_ before mounting the GFS off DRBD where it
> fails and drops to the prompt - I haven't gotten to the bottom of that
> one yet). Interestingly, the latest tools work just fine for GlusterFS.

That's it. It's working for most clusters, but some are causing problems, so I need to dig into this. The updateinitrd question was already answered by Reiner.

Thanks, and sorry about that ugly bug.
-- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
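To answer Marc's question empirically, one could swap just hardware-lib.sh back to the older revision and rebuild the initrd; a rough sketch follows. The backup name, the location of the v1.5 copy and the target initrd path are assumptions for illustration; the mkinitrd invocation is the one used elsewhere in this thread.

    cd /opt/atix/comoonics-bootimage/boot-scripts/etc/rhel5
    cp hardware-lib.sh hardware-lib.sh.v1.7            # keep the currently shipped version
    cp /path/to/hardware-lib.sh.v1.5 hardware-lib.sh   # drop in the old revision to test

    # Rebuild the shared-root initrd and reboot the node to see whether
    # the NIC ordering becomes stable again.
    /opt/atix/comoonics-bootimage/mkinitrd -f /boot/initrd_sr-$(uname -r).img $(uname -r)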
From: Reiner R. <rot...@at...> - 2009-01-21 07:34:19
|
Hi Gordan, First of all I have great respect for your work and would like to thank you for your effort! Regarding the update of the initrd you may have a look at the comoonics enterprise copy tool (com-ec) which is able to inject a new cluster.conf in the existing initrd by re-cpio'ing it. The usage is quite simple. You need to install the following RPMs: comoonics-ec-py-0.1-36 comoonics-cs-xsl-ec-0.5-19 Then in /etc/comoonics/enterprisecopy/ there is a XML file called updateinitrd.xml. There you need to specify where your boot-device resides and what to mount. For your requirements you may need to adapt the XML file. e.g. to skip the mounting of boot and simply stating the location of your initrd. Best regards, Reiner -- Gruss / Regards, Dipl.-Ing. (FH) Reiner Rottmann ATIX Informationstechnologie und Consulting AG Einsteinstr. 10 85716 Unterschleissheim Deutschland/Germany Phone: +49-89 452 3538-12 Fax: +49-89 990 1766-0 Email: rot...@at... PGP Key-ID: 0xCA67C5A6 www.atix.de | www.open-sharedroot.org Vorstaende: Marc Grimme, Mark Hlawatschek, Thomas Merz (Vors.) Vorsitzender des Aufsichtsrats: Dr. Martin Buss Registergericht: Amtsgericht Muenchen Registernummer: HRB 168930 USt.-Id.: DE209485962 ---------------------------------------------------------------------- *** Besuchen Sie uns auf dem ATIX IT Solution Day: Linux Cluster-Technolgien, am 05.02.2009 in Neuss b. Koeln/Duesseldorf! www.atix.de/event-archiv/atix-it-solution-day-linux-neuss *** ---------------------------------------------------------------------- On Wednesday 21 January 2009 02:52:57 am Gordan Bobic wrote: > Hi, > > It would appear that > /opt/atix/comoonics-bootimage/boot-scripts/etc/rhel5/hardware-lib.sh has > gone through a few changes in the past few months, which, unfortunately, > break it for me. > > The problem is in the ordering of the detected NICs. On one of my > systems I have a dual e1000 built into the mobo, and an e100 as an > add-in card. /etc/modprobe.conf lists eth0 and eth1 as the e1000s, and > eth2 as e100. This works fine with hardware-lib.sh v1.5, but with v1.7 > the ordering seems to be both unstable (about 1/10 of the time it'll > actually get the NIC ordering as expected and specified in cluster.conf > and the rest of the time it'll do something different) and inconsistent > with what is in cluster.conf and modprobe.conf. > > The last version that works for me is v1.5, and the latest released > version (I'm talking about CVS version numbers here) appears to be v1.7 > for this file (in the comoonics-bootimage-1.3-40.noarch.rpm release). > > Needless to say, trying to boot off an iSCSI shared root with the NIC > not starting because eth designation doesn't match the MAC doesn't get > very far. :-/ > > On a separate note, would it perhaps be a good idea to also have an > updinitrd script? After a few versions of the clustering tools and OSR > tools, it's impossible to tell what bugs could be introduced that break > things. Granted, indiscriminately doing "yum update" is a bad idea, but > it happens to the best of us that we miss something that we really ought > to exclude. But what could be done instead is to have an updinitrd > script that opens the current initrd and just modifies the handful of > files that need changing (e.g. adding a service to cluster.conf) before > re-cpio-ing it. Any thoughts on this idea? 
I know that in the ideal > world it shouldn't be needed, but this is exactly what I ended up having > to do yesterday because new initrds just wouldn't boot (there was an > additional problem _just_ before mounting the GFS off DRBD where it > fails and drops to the prompt - I haven't gotten to the bottom of that > one yet). Interestingly, the latest tools work just fine for GlusterFS. > > Gordan > > --------------------------------------------------------------------------- >--- This SF.net email is sponsored by: > SourcForge Community > SourceForge wants to tell your story. > http://p.sf.net/sfu/sf-spreadtheword > _______________________________________________ > Open-sharedroot-devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-devel |
From: Gordan B. <go...@bo...> - 2009-01-21 01:53:40
|
Hi,

It would appear that /opt/atix/comoonics-bootimage/boot-scripts/etc/rhel5/hardware-lib.sh has gone through a few changes in the past few months, which, unfortunately, break it for me.

The problem is in the ordering of the detected NICs. On one of my systems I have a dual e1000 built into the mobo, and an e100 as an add-in card. /etc/modprobe.conf lists eth0 and eth1 as the e1000s, and eth2 as e100. This works fine with hardware-lib.sh v1.5, but with v1.7 the ordering seems to be both unstable (about 1/10 of the time it'll actually get the NIC ordering as expected and specified in cluster.conf, and the rest of the time it'll do something different) and inconsistent with what is in cluster.conf and modprobe.conf.

The last version that works for me is v1.5, and the latest released version (I'm talking about CVS version numbers here) appears to be v1.7 for this file (in the comoonics-bootimage-1.3-40.noarch.rpm release).

Needless to say, trying to boot off an iSCSI shared root with the NIC not starting because the eth designation doesn't match the MAC doesn't get very far. :-/

On a separate note, would it perhaps be a good idea to also have an updinitrd script? After a few versions of the clustering tools and OSR tools, it's impossible to tell what bugs could be introduced that break things. Granted, indiscriminately doing "yum update" is a bad idea, but it happens to the best of us that we miss something that we really ought to exclude. But what could be done instead is to have an updinitrd script that opens the current initrd and just modifies the handful of files that need changing (e.g. adding a service to cluster.conf) before re-cpio-ing it. Any thoughts on this idea? I know that in the ideal world it shouldn't be needed, but this is exactly what I ended up having to do yesterday because new initrds just wouldn't boot (there was an additional problem _just_ before mounting the GFS off DRBD where it fails and drops to the prompt - I haven't gotten to the bottom of that one yet). Interestingly, the latest tools work just fine for GlusterFS.

Gordan |
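The kind of quick edit Gordan describes (open the current initrd, swap a file, re-cpio it) can be sketched roughly as follows. The initrd path and the gzip'd newc-cpio format are assumptions based on the mkinitrd invocation used elsewhere in this thread, not an actual updinitrd implementation.

    # Unpack the existing shared-root initrd (assumed to be a gzip'd cpio archive)
    mkdir /tmp/initrd-work && cd /tmp/initrd-work
    zcat /boot/initrd_sr-$(uname -r).img | cpio -idmv

    # Modify the handful of files that need changing, e.g. the cluster.conf
    cp /etc/cluster/cluster.conf etc/cluster/cluster.conf

    # Repack everything into a new initrd image
    find . | cpio -o -H newc | gzip -9 > /boot/initrd_sr-$(uname -r).img.new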
From: Marc G. <gr...@at...> - 2009-01-19 20:44:45
|
Hi Gordan, I flew over your patches and everything looks ok. I will diet the glusterfs-lib.sh as there are functions that are to be in clusterfs-lib.sh and gfs-lib.sh. But all that core functions will go upstream right away. Great work thanks. Nice try with the vim-common and vim-minimal ;-) (https://bugzilla.atix.de/show_bug.cgi?id=228). There is already a bug open. There will be a comoonic-bootimage-extras-vim at some point in time. Do you mind creating a Howto on open-sharedroot.org like the one there and just change the things you said? Let's say you would create a Howto in www.open-sharedroot.org/Members/gordan/my-howto and we'll review and move it to the documentation folder? Marc. On Friday 16 January 2009 00:39:56 Gordan Bobic wrote: > Here's a preliminary version based on the OSR RHEL5 GFS mini-howto from > here: > http://www.open-sharedroot.org/documentation/rhel5-gfs-shared-root-mini-how >to > > Modified/added files are attached. > > Prerequisites > ------------- > Freshly installed RHEL5. > > Install RHCS components: > # yum install cman > > Install Com.oonics Packages > --------------------------- > > # yum install comoonics-bootimage \ > comoonics-cdsl-py \ > comoonics-bootimage-extras-glusterfs > > [Note: I'm assuming the package for this will be called extras-glusterfs > since that was what the DRBD one I submitted ended up being called.] > > Install GlusterFS patched fuse packages: > # wget > http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.7.3-2.src.rpm > > # rpmbuild --rebuild fuse-2.7.3-2.src.rpm > # rpm -Uvh /usr/src/redhat/RPMS/x86_64/fuse-2.7.3-2.x86_64.rpm \ > /usr/src/redhat/RPMS/x86_64/fuse-libs-2.7.3-2.x86_64.rpm \ > /usr/src/redhat/RPMS/x86_64/fuse-kernel-module-2.6.18-92.1.22.el5-2.7.3-2.x >86_64.rpm > > # wget > http://ftp.gluster.com/pub/gluster/glusterfs/2.0/2.0.0/glusterfs-2.0.0rc1.t >ar.gz > > # rpmbuild -tb glusterfs-2.0.0rc1.tar.gz > # rpm -Uvh /usr/src/redhat/RPMS/x86_64/glusterfs-2.0.0rc1-1.x86_64.rpm > > [Note: The current versions may change, the ones listed are correct at > the time of writing this document. The paths above also assume x86-64 > architecture.] > > Create a cluster configuration file /etc/cluster/cluster.conf with the > com_info tags. This time fencing isn't mandatory if you aren't using > resource failover, as unline GFS, GlusterFS won't block if a peer goes > away. For GlusterFS splitbrain caveats and handling see > http://www.gluster.org/ > > [cluster.conf attached] > > A quick note about the disk layout of the underlying disk. This howto > assumes the following: > /dev/sda1 /boot > /dev/sda2 / > /dev/sda3 swap > /dev/sda4 comoonics-chroot (so we can deallocate the initrd) > > On /, we are assumed to have a directory /gluster/root which contains > the gluster rootfs. > > Create the GlusterFS root volume specification in > /etc/glusterfs/root.vol. Here is an example of a simple volume spec file > for a 2-server AFR (mirroring) configuration where each server has a > local copy of the data. Note that you could also do this diskless, with > the rootfs being on remote server(s) much like NFS, and even distributed > or striped across multiple servers. See GlusterFS documentation for > details. > > [root.vol attached] > > Mount the glusterfs file system: > # mkdir /mnt/newroot > # mount -t glusterfs /etc/glusterfs/root.vom /mnt/newroot > > Copy all data from the local installed RHEL5 root filesystem to the > shared root filesystem: > > [ ... the rest of the section is identical to the GFS howto ... 
] > > Make sure you apply the patches from > /opt/atix/comoonics-bootimage/patches to the init scripts in the new > root, especially the network and halt init scripts! > > Note that the RPM library in it's default configuration WILL NOT work > under GlusterFS. GlusterFS is fuse based and thus doesn't support > writable mmap(), which BerkeleyDB (default RPM database format) requires > to function. To work around this problem, we can convert the RPM > database to use SQLite. The functionality is already built into RHEL5 > RPM packages, we just need to do the following: > > > # rpm -v --rebuilddbapi 4 --dbpath /var/lib/rpm --rebuilddb > > and then change the following lines in /usr/lib/rpm/macros: > %_dbapi 3 > %_dbapi_rebuild 3 > > to > > %_dbapi 4 > %_dbapi_rebuild 4 > > [Note: This should _probably_ be another patch in > /opt/atix/comoonics-bootimage/patches, trivial as it may be.] > > [Note: Updated network.patch attached, the current one in the repo > didn't seem to apply cleanly, and I added the exclusion of network > disconnection when GlusterFS is used.] > > [Note: You cannot use a GlusterFS based shared boot per se, but you > COULD use GlusterFS to keep /boot in sync and boot off it's backing > storage device. No new devices need be created, only an additional > volume spec using the /boot volume as the backing store for GlusterFS. > All operations on top of GlusterFS would cause the /boot device to get > mirrored across the machines. This is only meaningful with > AFR/mirroring. Also note that grub is virtually guaranteed to get > horribly confused when asked to make a GlusterFS based file system > bootable. In conclusion - don't do this unless you understand what I'm > talking about here and know what you're doing.] > > Create the shared root initrd as per usual: > > /opt/atix/comoonics-bootimage/mkinitrd -f > /mnt/newroot/boot/initrd_sr-$(uname -r).img $(uname -r) > > > Final note: You can side-step the copying of the root FS by operating > directly on the master copy. This means you won't have to then manually > go and delete the initial installation (except for the /gluster > directory), but it also means that any mistakes along the way will > render the system unusable and you'll have to re-install from scratch > and try again. You would then, of course, need to change the path in > root.vol from /mnt/tmproot/glusterfs/root to /mnt/tmproot. > > Awaiting peer review. :) > > Gordan -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Gordan B. <go...@bo...> - 2009-01-16 10:30:03
|
Ah, I see! Perhaps it might be an idea to include the creation of the /tmp directory, too, then? It was only a matter of time before something needed it for initialization (it's required by fuse, hence why I included it in the files.initrd.d/glusterfs.list). When can I "yum update" to the fixed version? ;) Thanks. Gordan On Fri 16/01/09 10:17 , Marc Grimme <gr...@at...> wrote: > On Friday 16 January 2009 10:33:06 Gordan Bobic wrote: > > Err... So how has this been working until now? :-/ > > > > Marc Grimme wrote: > > > On Friday 16 January 2009 00:59:17 Gordan Bobic wrote: > > >> Quick question, embarrasing as it may be - where does /sys > mount point > > >> get created in the initrd? I have just rebuilt my initrd and > there is no > > >> /sys mount point in it, so mounting /sys fails, which is > probably why > > >> udev startup doesn't create the device nodes, and from there on > it all > > >> goes wrong. grepping for sys and mkdir comes up with nothing... > What > > >> have I broken? > > > > > > I think nothing. Yesterday I realised that /sys is not created. > Just > > > change boot-scripts/etc/boot-lib.sh function initEnv so that it > is > > > automatically created: > > > echo_local_debug "*****************************" > > > echo_local -n "Mounting Proc-FS" > > > is_mounted /proc > > > if [ $? -ne 0 ]; then > > > [ -d /proc ] || mkdir /proc > > > exec_local /bin/mount -t proc proc /proc > > > return_code > > > else > > > passed > > > fi > > > > > > echo_local -n "Mounting Sys-FS" > > > is_mounted /sys > > > if [ $? -ne 0 ]; then > > > [ -d /sys ] || mkdir /sys > > > exec_local /bin/mount -t sysfs none /sys > > > return_code > > > else > > > passed > > > fi > I think there was a dir sys in > /opt/atix/comoonics-bootimage/boot-scripts/sys. > That is dir copied into the initrd and mapped to /. > -- > Gruss / Regards, > Marc Grimme > http://www.atix.de/ http://www.open-sharedroot.org/ > > ---- Msg sent via @Mail - http://atmail.com/ |
From: Marc G. <gr...@at...> - 2009-01-16 10:17:39
|
On Friday 16 January 2009 10:33:06 Gordan Bobic wrote: > Err... So how has this been working until now? :-/ > > Marc Grimme wrote: > > On Friday 16 January 2009 00:59:17 Gordan Bobic wrote: > >> Quick question, embarrasing as it may be - where does /sys mount point > >> get created in the initrd? I have just rebuilt my initrd and there is no > >> /sys mount point in it, so mounting /sys fails, which is probably why > >> udev startup doesn't create the device nodes, and from there on it all > >> goes wrong. grepping for sys and mkdir comes up with nothing... What > >> have I broken? > > > > I think nothing. Yesterday I realised that /sys is not created. Just > > change boot-scripts/etc/boot-lib.sh function initEnv so that it is > > automatically created: > > echo_local_debug "*****************************" > > echo_local -n "Mounting Proc-FS" > > is_mounted /proc > > if [ $? -ne 0 ]; then > > [ -d /proc ] || mkdir /proc > > exec_local /bin/mount -t proc proc /proc > > return_code > > else > > passed > > fi > > > > echo_local -n "Mounting Sys-FS" > > is_mounted /sys > > if [ $? -ne 0 ]; then > > [ -d /sys ] || mkdir /sys > > exec_local /bin/mount -t sysfs none /sys > > return_code > > else > > passed > > fi > > > > But don't ask me how that missing /sys was missed. > > > > Sorry about that cheers > > Marc. > > --------------------------------------------------------------------------- >--- This SF.net email is sponsored by: > SourcForge Community > SourceForge wants to tell your story. > http://p.sf.net/sfu/sf-spreadtheword > _______________________________________________ > Open-sharedroot-devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-devel I think there was a dir sys in /opt/atix/comoonics-bootimage/boot-scripts/sys. That is dir copied into the initrd and mapped to /. -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Gordan B. <go...@bo...> - 2009-01-16 09:33:20
|
Err... So how has this been working until now? :-/ Marc Grimme wrote: > On Friday 16 January 2009 00:59:17 Gordan Bobic wrote: >> Quick question, embarrasing as it may be - where does /sys mount point >> get created in the initrd? I have just rebuilt my initrd and there is no >> /sys mount point in it, so mounting /sys fails, which is probably why >> udev startup doesn't create the device nodes, and from there on it all >> goes wrong. grepping for sys and mkdir comes up with nothing... What >> have I broken? > I think nothing. Yesterday I realised that /sys is not created. Just change > boot-scripts/etc/boot-lib.sh function initEnv so that it is automatically > created: > echo_local_debug "*****************************" > echo_local -n "Mounting Proc-FS" > is_mounted /proc > if [ $? -ne 0 ]; then > [ -d /proc ] || mkdir /proc > exec_local /bin/mount -t proc proc /proc > return_code > else > passed > fi > > echo_local -n "Mounting Sys-FS" > is_mounted /sys > if [ $? -ne 0 ]; then > [ -d /sys ] || mkdir /sys > exec_local /bin/mount -t sysfs none /sys > return_code > else > passed > fi > > But don't ask me how that missing /sys was missed. > > Sorry about that cheers > Marc. > > |
From: Marc G. <gr...@at...> - 2009-01-16 08:06:46
|
On Friday 16 January 2009 00:59:17 Gordan Bobic wrote:
> Quick question, embarrassing as it may be - where does the /sys mount point
> get created in the initrd? I have just rebuilt my initrd and there is no
> /sys mount point in it, so mounting /sys fails, which is probably why
> udev startup doesn't create the device nodes, and from there on it all
> goes wrong. grepping for sys and mkdir comes up with nothing... What
> have I broken?

I think nothing. Yesterday I realised that /sys is not created. Just change the function initEnv in boot-scripts/etc/boot-lib.sh so that it is automatically created:

    echo_local_debug "*****************************"
    echo_local -n "Mounting Proc-FS"
    is_mounted /proc
    if [ $? -ne 0 ]; then
      [ -d /proc ] || mkdir /proc
      exec_local /bin/mount -t proc proc /proc
      return_code
    else
      passed
    fi

    echo_local -n "Mounting Sys-FS"
    is_mounted /sys
    if [ $? -ne 0 ]; then
      [ -d /sys ] || mkdir /sys
      exec_local /bin/mount -t sysfs none /sys
      return_code
    else
      passed
    fi

But don't ask me how that missing /sys was missed.

Sorry about that, cheers
Marc.

-- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Gordan B. <go...@bo...> - 2009-01-15 23:59:24
|
Quick question, embarrassing as it may be - where does the /sys mount point get created in the initrd? I have just rebuilt my initrd and there is no /sys mount point in it, so mounting /sys fails, which is probably why udev startup doesn't create the device nodes, and from there on it all goes wrong. grepping for sys and mkdir comes up with nothing... What have I broken?

Thanks.

Gordan |
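One quick way to check whether a given initrd image actually contains such mount points is to list its contents; a sketch, assuming the shared-root initrd is a gzip'd cpio archive under /boot:

    # List the initrd contents and look for the proc, sys and tmp mount points
    zcat /boot/initrd_sr-$(uname -r).img | cpio -t | egrep '^(\./)?(proc|sys|tmp)$'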
From: Gordan B. <go...@bo...> - 2009-01-15 23:40:03
|
Here's a preliminary version based on the OSR RHEL5 GFS mini-howto from here:
http://www.open-sharedroot.org/documentation/rhel5-gfs-shared-root-mini-howto

Modified/added files are attached.

Prerequisites
-------------
Freshly installed RHEL5.

Install RHCS components:

# yum install cman

Install Com.oonics Packages
---------------------------

# yum install comoonics-bootimage \
    comoonics-cdsl-py \
    comoonics-bootimage-extras-glusterfs

[Note: I'm assuming the package for this will be called extras-glusterfs since that was what the DRBD one I submitted ended up being called.]

Install GlusterFS patched fuse packages:

# wget http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.7.3-2.src.rpm
# rpmbuild --rebuild fuse-2.7.3-2.src.rpm
# rpm -Uvh /usr/src/redhat/RPMS/x86_64/fuse-2.7.3-2.x86_64.rpm \
    /usr/src/redhat/RPMS/x86_64/fuse-libs-2.7.3-2.x86_64.rpm \
    /usr/src/redhat/RPMS/x86_64/fuse-kernel-module-2.6.18-92.1.22.el5-2.7.3-2.x86_64.rpm

# wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/2.0.0/glusterfs-2.0.0rc1.tar.gz
# rpmbuild -tb glusterfs-2.0.0rc1.tar.gz
# rpm -Uvh /usr/src/redhat/RPMS/x86_64/glusterfs-2.0.0rc1-1.x86_64.rpm

[Note: The current versions may change; the ones listed are correct at the time of writing this document. The paths above also assume x86-64 architecture.]

Create a cluster configuration file /etc/cluster/cluster.conf with the com_info tags. This time fencing isn't mandatory if you aren't using resource failover, as unlike GFS, GlusterFS won't block if a peer goes away. For GlusterFS split-brain caveats and handling see http://www.gluster.org/

[cluster.conf attached]

A quick note about the layout of the underlying disk. This howto assumes the following:

/dev/sda1 /boot
/dev/sda2 /
/dev/sda3 swap
/dev/sda4 comoonics-chroot (so we can deallocate the initrd)

On /, we are assumed to have a directory /gluster/root which contains the gluster rootfs.

Create the GlusterFS root volume specification in /etc/glusterfs/root.vol. Here is an example of a simple volume spec file for a 2-server AFR (mirroring) configuration where each server has a local copy of the data. Note that you could also do this diskless, with the rootfs being on remote server(s) much like NFS, and even distributed or striped across multiple servers. See the GlusterFS documentation for details.

[root.vol attached]

Mount the glusterfs file system:

# mkdir /mnt/newroot
# mount -t glusterfs /etc/glusterfs/root.vol /mnt/newroot

Copy all data from the locally installed RHEL5 root filesystem to the shared root filesystem:

[ ... the rest of the section is identical to the GFS howto ... ]

Make sure you apply the patches from /opt/atix/comoonics-bootimage/patches to the init scripts in the new root, especially the network and halt init scripts!

Note that the RPM library in its default configuration WILL NOT work under GlusterFS. GlusterFS is fuse based and thus doesn't support writable mmap(), which BerkeleyDB (the default RPM database format) requires to function. To work around this problem, we can convert the RPM database to use SQLite. The functionality is already built into the RHEL5 RPM packages; we just need to do the following:

# rpm -v --rebuilddbapi 4 --dbpath /var/lib/rpm --rebuilddb

and then change the following lines in /usr/lib/rpm/macros:

%_dbapi 3
%_dbapi_rebuild 3

to

%_dbapi 4
%_dbapi_rebuild 4

[Note: This should _probably_ be another patch in /opt/atix/comoonics-bootimage/patches, trivial as it may be.]

[Note: Updated network.patch attached; the current one in the repo didn't seem to apply cleanly, and I added the exclusion of network disconnection when GlusterFS is used.]

[Note: You cannot use a GlusterFS based shared boot per se, but you COULD use GlusterFS to keep /boot in sync and boot off its backing storage device. No new devices need be created, only an additional volume spec using the /boot volume as the backing store for GlusterFS. All operations on top of GlusterFS would cause the /boot device to get mirrored across the machines. This is only meaningful with AFR/mirroring. Also note that grub is virtually guaranteed to get horribly confused when asked to make a GlusterFS based file system bootable. In conclusion - don't do this unless you understand what I'm talking about here and know what you're doing.]

Create the shared root initrd as per usual:

/opt/atix/comoonics-bootimage/mkinitrd -f /mnt/newroot/boot/initrd_sr-$(uname -r).img $(uname -r)

Final note: You can side-step the copying of the root FS by operating directly on the master copy. This means you won't have to then manually go and delete the initial installation (except for the /gluster directory), but it also means that any mistakes along the way will render the system unusable and you'll have to re-install from scratch and try again. You would then, of course, need to change the path in root.vol from /mnt/tmproot/glusterfs/root to /mnt/tmproot.

Awaiting peer review. :)

Gordan |
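Since the actual root.vol is only available as an attachment to the original mail, here is an illustrative sketch of what a minimal 2-server AFR volume spec of the kind described above might look like. The volume names, the peer hostname and the exact GlusterFS 2.0 translator options are assumptions for illustration, not the attached file; adjust them to the real setup.

    # Illustrative only - not the attached root.vol.
    cat > /etc/glusterfs/root.vol <<'EOF'
    volume posix
      type storage/posix
      option directory /gluster/root
    end-volume

    volume locks
      type features/locks
      subvolumes posix
    end-volume

    volume brick
      type performance/io-threads
      subvolumes locks
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.brick.allow *
      subvolumes brick
    end-volume

    volume remote
      type protocol/client
      option transport-type tcp
      option remote-host node2.example.com
      option remote-subvolume brick
    end-volume

    volume root
      type cluster/replicate
      subvolumes brick remote
    end-volume
    EOF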
From: Gordan B. <go...@bo...> - 2009-01-14 22:34:37
|
Marc Grimme wrote: >>>> I have GlusterFS OSR working and I'll be sending a patch when I >>> have done a >>> >>>> bit more testing including rebuilding a fresh system from scratch >>> to test >>> >>>> my notes/documentation which I'll also be submitting. >>> Sounds great. I'll be happy to bring this upstream. >> Thanks. Sorry it took this long (I first mentioned the idea some months >> ago). There were some bugs in GlusterFS that made it too unstable for >> production (for my use-cases at least), so I left it on hold until I was >> happy with it in the standard environment. >> >> Some of the notable things that were required to get OSR working with it >> were: - fuse needs /tmp to exist before it'll work. The error it returns is >> entirely misleading, and this took a while to get to the bottom of. Until >> now, OSR didn't include a /tmp directory. - RPM refused to work because it >> is based on BerkeleyDB, which requires writable mmap support, which isn't >> available in fuse. The solution for that was to convert the RPM DB to >> SQLite. > > That's interesting cause I always wandered if it is possible to change the db > underneath. We also have so issues with gfs and rpmdb<berkeleydb>. Can you > point out some online resources on how to do this? I'll definitely include it in the docs when I've got it all ready. Hopefully by the end of the week. I just want to make sure that my results are repeatable first. :) >>>> Either way, I copied the halt script across and added glusterfs to >>> the list >>> >>>> of things to not kill (-x parameter), but this didn't appear to >>> make any >>> >>>> difference. The man page for killall/killall5 on RHEL5 doesn't >>> appear to >>> >>>> list the -x option (although I did find a reference to this >>> parameter being >>> >>>> added in Debian in 2006). Is this known to work on RHEL/CentOS 5, >>> or is >>> >>>> there a modified killall required for this distro? >>> Normally this binary that does this killing is the rpm >>> SysVinit-comoonics >>> found at >>> >>> http://download.atix.de/yum/comoonics/redhat-el5/productive/x86_64/RPMS/. >>> >>> This provides a new killall found at /usr/comoonics/sbin. This is >>> called by >>> halt and accepts the -x as option. >>> Let me know if this is enough information. >> I'll double-check, but I'm almost certain that this package was installed. >> Does the same package also include the modified halt init script? > > No those are added by comoonics-bootimage-initscripts. These provide the > patches needed. Thanks, I'll have another look at it and see what I've missed. Gordan |
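For reference, the RPM database conversion Gordan alludes to here is spelled out in his GlusterFS mini-howto earlier in this archive; in sketch form (assuming the stock RHEL5 rpm, which already ships the SQLite backend):

    # Rebuild the RPM database with db API 4 (SQLite) instead of 3 (BerkeleyDB)
    rpm -v --rebuilddbapi 4 --dbpath /var/lib/rpm --rebuilddb

    # Then make future rebuilds default to SQLite as well, in /usr/lib/rpm/macros:
    #   %_dbapi 4
    #   %_dbapi_rebuild 4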
From: Marc G. <gr...@at...> - 2009-01-14 19:22:38
|
On Wednesday 14 January 2009 17:09:54 Gordan Bobic wrote: > On Wed 14/01/09 15:37 , Marc Grimme <gr...@at...> wrote: > > On Wednesday 14 January 2009 12:07:11 Gordan Bobic wrote: > > > Hi, > > > > > > I have GlusterFS OSR working and I'll be sending a patch when I > > > > have done a > > > > > bit more testing including rebuilding a fresh system from scratch > > > > to test > > > > > my notes/documentation which I'll also be submitting. > > > > Sounds great. I'll be happy to bring this upstream. > > Thanks. Sorry it took this long (I first mentioned the idea some months > ago). There were some bugs in GlusterFS that made it too unstable for > production (for my use-cases at least), so I left it on hold until I was > happy with it in the standard environment. > > Some of the notable things that were required to get OSR working with it > were: - fuse needs /tmp to exist before it'll work. The error it returns is > entirely misleading, and this took a while to get to the bottom of. Until > now, OSR didn't include a /tmp directory. - RPM refused to work because it > is based on BerkeleyDB, which requires writable mmap support, which isn't > available in fuse. The solution for that was to convert the RPM DB to > SQLite. That's interesting cause I always wandered if it is possible to change the db underneath. We also have so issues with gfs and rpmdb<berkeleydb>. Can you point out some online resources on how to do this? > > > > Either way, I copied the halt script across and added glusterfs to > > > > the list > > > > > of things to not kill (-x parameter), but this didn't appear to > > > > make any > > > > > difference. The man page for killall/killall5 on RHEL5 doesn't > > > > appear to > > > > > list the -x option (although I did find a reference to this > > > > parameter being > > > > > added in Debian in 2006). Is this known to work on RHEL/CentOS 5, > > > > or is > > > > > there a modified killall required for this distro? > > > > Normally this binary that does this killing is the rpm > > SysVinit-comoonics > > found at > > > > http://download.atix.de/yum/comoonics/redhat-el5/productive/x86_64/RPMS/. > > > > This provides a new killall found at /usr/comoonics/sbin. This is > > called by > > halt and accepts the -x as option. > > Let me know if this is enough information. > > I'll double-check, but I'm almost certain that this package was installed. > Does the same package also include the modified halt init script? No those are added by comoonics-bootimage-initscripts. These provide the patches needed. -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Christopher B. <chr...@ql...> - 2009-01-14 16:20:03
|
> -----Original Message----- > From: Gordan Bobic [mailto:go...@bo...] > Sent: Wednesday, January 14, 2009 11:10 AM > To: ope...@li... > Subject: Re: [OSR-devel] killall and init scripts > > On Wed 14/01/09 15:37 , Marc Grimme <gr...@at...> wrote: > > On Wednesday 14 January 2009 12:07:11 Gordan Bobic wrote: > > > Hi, > > > > > > I have GlusterFS OSR working and I'll be sending a patch when I > > have done a ==========snip=8<----------- > > Some of the notable things that were required to get OSR > working with it were: > - fuse needs /tmp to exist before it'll work. The error it > returns is entirely misleading, and this took a while to get > to the bottom of. Until now, OSR didn't include a /tmp directory. is the $TMPDIR variable not honored/effective? > - RPM refused to work because it is based on BerkeleyDB, > which requires writable mmap support, which isn't available > in fuse. The solution for that was to convert the RPM DB to SQLite. > Amazing stuff you all are doing here. Great work. -C |
From: Gordan B. <go...@bo...> - 2009-01-14 16:10:05
|
On Wed 14/01/09 15:37 , Marc Grimme <gr...@at...> wrote: > On Wednesday 14 January 2009 12:07:11 Gordan Bobic wrote: > > Hi, > > > > I have GlusterFS OSR working and I'll be sending a patch when I > have done a > > bit more testing including rebuilding a fresh system from scratch > to test > > my notes/documentation which I'll also be submitting. > > Sounds great. I'll be happy to bring this upstream. Thanks. Sorry it took this long (I first mentioned the idea some months ago). There were some bugs in GlusterFS that made it too unstable for production (for my use-cases at least), so I left it on hold until I was happy with it in the standard environment. Some of the notable things that were required to get OSR working with it were: - fuse needs /tmp to exist before it'll work. The error it returns is entirely misleading, and this took a while to get to the bottom of. Until now, OSR didn't include a /tmp directory. - RPM refused to work because it is based on BerkeleyDB, which requires writable mmap support, which isn't available in fuse. The solution for that was to convert the RPM DB to SQLite. > > Either way, I copied the halt script across and added glusterfs to > the list > > of things to not kill (-x parameter), but this didn't appear to > make any > > difference. The man page for killall/killall5 on RHEL5 doesn't > appear to > > list the -x option (although I did find a reference to this > parameter being > > added in Debian in 2006). Is this known to work on RHEL/CentOS 5, > or is > > there a modified killall required for this distro? > Normally this binary that does this killing is the rpm > SysVinit-comoonics > found at > > http://download.atix.de/yum/comoonics/redhat-el5/productive/x86_64/RPMS/. > > This provides a new killall found at /usr/comoonics/sbin. This is > called by > halt and accepts the -x as option. > Let me know if this is enough information. I'll double-check, but I'm almost certain that this package was installed. Does the same package also include the modified halt init script? Thanks. Gordan ---- Msg sent via @Mail - http://atmail.com/ |
From: Marc G. <gr...@at...> - 2009-01-14 15:50:57
|
On Wednesday 14 January 2009 12:07:11 Gordan Bobic wrote: > Hi, > > I have GlusterFS OSR working and I'll be sending a patch when I have done a > bit more testing including rebuilding a fresh system from scratch to test > my notes/documentation which I'll also be submitting. Sounds great. I'll be happy to bring this upstream. > > There is a minor problem, however. On my previous cluster, I appear to have > a modified halt init script (RHEL5). This has been changed to exclude > certain processes from getting killed to prevent the root FS from > disappearing half way through the shutdown sequence. What package provides > this? The rpm database only listed the RHEL supplied package, but the file > says it's an OSR patched file. So where does it come from? The only > difference between the old and new systems is that one is IA32 and the > other x86-64. Could there be a packaging error on the 64-bit version that > didn't include the modified halt script or something like that? > > Either way, I copied the halt script across and added glusterfs to the list > of things to not kill (-x parameter), but this didn't appear to make any > difference. The man page for killall/killall5 on RHEL5 doesn't appear to > list the -x option (although I did find a reference to this parameter being > added in Debian in 2006). Is this known to work on RHEL/CentOS 5, or is > there a modified killall required for this distro? Normally this binary that does this killing is the rpm SysVinit-comoonics found at http://download.atix.de/yum/comoonics/redhat-el5/productive/x86_64/RPMS/. This provides a new killall found at /usr/comoonics/sbin. This is called by halt and accepts the -x as option. Let me know if this is enough information. Thanks Marc. -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
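A quick way to verify that the pieces Marc mentions are actually in place on a node is sketched below; the package names are taken from this thread, and the grep is only a heuristic check that the installed halt script is the OSR-patched one.

    # The patched killall that understands -x comes from SysVinit-comoonics,
    # the patched halt/network init scripts from comoonics-bootimage-initscripts.
    rpm -q SysVinit-comoonics comoonics-bootimage-initscripts
    ls -l /usr/comoonics/sbin/killall

    # Does the installed halt script reference the comoonics killall at all?
    grep -n comoonics /etc/init.d/halt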
From: Gordan B. <go...@bo...> - 2009-01-14 11:24:38
|
Hi,

I have GlusterFS OSR working and I'll be sending a patch when I have done a bit more testing, including rebuilding a fresh system from scratch to test my notes/documentation, which I'll also be submitting.

There is a minor problem, however. On my previous cluster, I appear to have a modified halt init script (RHEL5). This has been changed to exclude certain processes from getting killed, to prevent the root FS from disappearing half way through the shutdown sequence. What package provides this? The rpm database only listed the RHEL supplied package, but the file says it's an OSR patched file. So where does it come from? The only difference between the old and new systems is that one is IA32 and the other x86-64. Could there be a packaging error on the 64-bit version that didn't include the modified halt script, or something like that?

Either way, I copied the halt script across and added glusterfs to the list of things to not kill (-x parameter), but this didn't appear to make any difference. The man page for killall/killall5 on RHEL5 doesn't appear to list the -x option (although I did find a reference to this parameter being added in Debian in 2006). Is this known to work on RHEL/CentOS 5, or is there a modified killall required for this distro?

Thanks.

Gordan

---- Msg sent via @Mail - http://atmail.com/ |
From: Marc G. <gr...@at...> - 2008-11-23 10:32:03
|
On Saturday 22 November 2008 22:37:00 Gordan Bobic wrote:
> Marc Grimme wrote:
> > On Saturday 22 November 2008 16:51:31 Gordan Bobic wrote:
> >> Marc Grimme wrote:
> >>> But why not go this way:
> >>> There is a method/function "clusterfs_services_start" which itself
> >>> calls ${clutype}_services_start which should be implemented by
> >>> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore
> >>> there should be a library glusterfs-lib.sh. The function
> >>> clusterfs_services_start should do all the preparations to mount the
> >>> clusterfilesystem and could also mount a local filesystem as a
> >>> prerequisite. So I would put it there. A rootfs can be specified as
> >>> follows:
> >>> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>
> >>
> >> OK, this solution wins on the grounds that remapping the chrootenv seems
> >> to confuse glusterfs.
> >>
> >> But ${clutype}_services_start is dependent on ${clutype}, which is
> >> returned from getCluType(), which always returns "gfs". Will that not
> >> break things?
> >
> > Not any more take the latest version of clusterfs-lib.sh and it is
> > dependent on rootfs. I changed it how it should recently.
>
> Do you mean you changed getCluType(), or made it irrelevant? I freshly
> rebuilt the test box this morning, that that's where getCluType() was
> always returning "gfs". Anyway, I'll just ignore it if it's not important.

I wouldn't say that it's not important (although there are no other options implemented yet), but anyway it is meant as follows:

"clutype" should return the cluster type we are talking about. When using a cluster with /etc/cluster/cluster.conf it is a "gfs" cluster. I don't like the "gfs" either, but I cannot change it easily, so we have to stay with it for some time. In future versions we are planning to change it to something like "redhat-cluster", but for now "gfs" has to stay as the cluster type for any /etc/cluster/cluster.conf based cluster.

"rootfs" is a little further ahead, as we are supporting more than one root filesystem (gfs, ocfs2 and ext3 ... glusterfs ;-) ). As we implemented ocfs2 and ext3 recently, we had to review how "clutype" and "rootfs" should be used.

So in your case clutype will stay "gfs" (yes, I don't like it either, but we'll change it a few versions ahead - at the latest when we push sles10 into productive state; until then you have to use /etc/cluster/cluster.conf on sles10 although there is no gfs or anything of the like) and "rootfs" should be "glusterfs". Therefore you have to build a glusterfs-lib.sh with all functions called by clusterfs-lib.sh implemented. We will also need a listfile for it. But you've done this before, so I don't expect it to be a problem. If you like, I'll send you a list of functions to implement tomorrow. Ok?

-- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
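A rough illustration of the dispatch Marc describes (this is a sketch, not the actual clusterfs-lib.sh code; the function bodies, file locations and the fuse modprobe are assumptions modelled on the naming scheme discussed in this thread): the generic layer resolves the rootfs name from the cluster.conf fstype attribute and delegates to the matching ${rootfs}-lib.sh, so a GlusterFS port mainly has to provide those hook functions.

    # Illustrative sketch only.
    clusterfs_services_start() {
        local rootfs="$1"      # e.g. "gfs", "ocfs2", "ext3" or "glusterfs"
        shift
        # pull in the filesystem-specific library, e.g. etc/glusterfs-lib.sh ...
        . "$(dirname "$0")/etc/${rootfs}-lib.sh"
        # ... and delegate to its implementation of the hook
        "${rootfs}_services_start" "$@"
    }

    # A glusterfs-lib.sh would then implement glusterfs_services_start (and the
    # other hooks) to prepare whatever is needed before mounting the root, e.g.:
    glusterfs_services_start() {
        modprobe fuse
    }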
From: Gordan B. <go...@bo...> - 2008-11-22 21:49:15
|
Marc Grimme wrote: > On Saturday 22 November 2008 16:51:31 Gordan Bobic wrote: >> Marc Grimme wrote: >>> But why not go this way: >>> There is a method/function "clusterfs_services_start" which itself calls >>> ${clutype}_services_start which should be implemented by >>> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore >>> there should be a library glusterfs-lib.sh. The function >>> clusterfs_services_start should do all the preparations to mount the >>> clusterfilesystem and could also mount a local filesystem as a >>> prerequisite. So I would put it there. A rootfs can be specified as >>> follows: >>> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/> >> OK, this solution wins on the grounds that remapping the chrootenv seems >> to confuse glusterfs. >> >> But ${clutype}_services_start is dependent on ${clutype}, which is >> returned from getCluType(), which always returns "gfs". Will that not >> break things? > > Not any more take the latest version of clusterfs-lib.sh and it is dependent > on rootfs. I changed it how it should recently. Do you mean you changed getCluType(), or made it irrelevant? I freshly rebuilt the test box this morning, that that's where getCluType() was always returning "gfs". Anyway, I'll just ignore it if it's not important. Gordan |
From: Marc G. <gr...@at...> - 2008-11-22 18:00:18
|
On Saturday 22 November 2008 16:51:31 Gordan Bobic wrote: > Marc Grimme wrote: > > But why not go this way: > > There is a method/function "clusterfs_services_start" which itself calls > > ${clutype}_services_start which should be implemented by > > etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore > > there should be a library glusterfs-lib.sh. The function > > clusterfs_services_start should do all the preparations to mount the > > clusterfilesystem and could also mount a local filesystem as a > > prerequisite. So I would put it there. A rootfs can be specified as > > follows: > > <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/> > > OK, this solution wins on the grounds that remapping the chrootenv seems > to confuse glusterfs. > > But ${clutype}_services_start is dependent on ${clutype}, which is > returned from getCluType(), which always returns "gfs". Will that not > break things? Not any more take the latest version of clusterfs-lib.sh and it is dependent on rootfs. I changed it how it should recently. > > From what you said, if I have the following entry: > <rootvolume name="/etc/glusterfs/root.vol" fstype="flusterfs"/> > $rootfs come from $fstype. Is that right? Or from > <rootsource name="glusterfs">? Or somewhere else entirely? It comes form the fstype attribute not from the name. > > Thanks. > > Gordan > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's > challenge Build the coolest Linux based applications with Moblin SDK & win > great prizes Grand prize is a trip for two to an Open Source event anywhere > in the world http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > Open-sharedroot-devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-devel -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Gordan B. <go...@bo...> - 2008-11-22 15:51:43
|
Marc Grimme wrote:
> But why not go this way:
> There is a method/function "clusterfs_services_start" which itself calls
> ${clutype}_services_start which should be implemented by
> etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs and therefore there
> should be a library glusterfs-lib.sh. The function clusterfs_services_start
> should do all the preparations to mount the clusterfilesystem and could also
> mount a local filesystem as a prerequisite. So I would put it there.
> A rootfs can be specified as follows:
> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

OK, this solution wins on the grounds that remapping the chrootenv seems to confuse glusterfs.

But ${clutype}_services_start is dependent on ${clutype}, which is returned from getCluType(), which always returns "gfs". Will that not break things?

From what you said, if I have the following entry:

<rootvolume name="/etc/glusterfs/root.vol" fstype="glusterfs"/>

does $rootfs come from $fstype? Is that right? Or from <rootsource name="glusterfs">? Or somewhere else entirely?

Thanks.

Gordan |
From: Marc G. <gr...@at...> - 2008-11-21 12:33:59
|
On Friday 21 November 2008 13:25:12 Gordan Bobic wrote: > Marc Grimme wrote: > > On Friday 21 November 2008 10:44:29 Gordan Bobic wrote: > >> Marc Grimme wrote: > >>> On Friday 21 November 2008 00:15:56 Gordan Bobic wrote: > >>>> Gordan Bobic wrote: > >>>>> I bumped into another problem, though: > >>>>> > >>>>> # com-mkcdslinfrastructure -r /gluster/root -i > >>>>> Traceback (most recent call last): > >>>>> File "/usr/bin/com-mkcdslinfrastructure", line 17, in ? > >>>>> from comoonics.cdsl.ComCdslRepository import * > >>>>> File > >>>>> "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line > >>>>> 28, in ? > >>>>> defaults_path = os.path.join(cdsls_path,cdslDefaults_element) > >>>>> NameError: name 'cdslDefaults_element' is not defined > >>>> > >>>> Digging a little deeper, it turns out that although the package > >>>> versions (comoonics-cdsl-py-0.2-11) are the same on my old setup > >>>> (i386) and the new setup (x86-64), the contents of the > >>>> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are > >>>> totally different. The new version seems to have a bunch of > >>>> definitions that, it would seem, break things. > >>>> > >>>> I replaced with the old version, and it now seems to complete without > >>>> errors. > >>>> > >>>> Am I correct in guessing this is a packaging bug? > >>> > >>> Yes you are. Don't know how this could happen. But yes I bumped into > >>> that bug also some time before. > >>> > >>> You might want to try more recent versions. > >>> > >>> This should be fixed. > >> > >> Do you mean more recent version as of today? This was broken in the > >> version I yum-installed yesterday. > > > > Hm. Are you using preview or productive? > > I think I have both repositories enabled in yum, so it would have come > from whichever has the higher package version number. So yes in the preview we had this bug. So an update should fix it. You could also just update comoonics-cdsl-py and deps. That should do it. > > Gordan > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's > challenge Build the coolest Linux based applications with Moblin SDK & win > great prizes Grand prize is a trip for two to an Open Source event anywhere > in the world http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > Open-sharedroot-devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-devel -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Gordan B. <go...@bo...> - 2008-11-21 12:25:22
|
Marc Grimme wrote: > On Friday 21 November 2008 10:44:29 Gordan Bobic wrote: >> Marc Grimme wrote: >>> On Friday 21 November 2008 00:15:56 Gordan Bobic wrote: >>>> Gordan Bobic wrote: >>>>> I bumped into another problem, though: >>>>> >>>>> # com-mkcdslinfrastructure -r /gluster/root -i >>>>> Traceback (most recent call last): >>>>> File "/usr/bin/com-mkcdslinfrastructure", line 17, in ? >>>>> from comoonics.cdsl.ComCdslRepository import * >>>>> File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", >>>>> line 28, in ? >>>>> defaults_path = os.path.join(cdsls_path,cdslDefaults_element) >>>>> NameError: name 'cdslDefaults_element' is not defined >>>> Digging a little deeper, it turns out that although the package versions >>>> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the >>>> new setup (x86-64), the contents of the >>>> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally >>>> different. The new version seems to have a bunch of definitions that, it >>>> would seem, break things. >>>> >>>> I replaced with the old version, and it now seems to complete without >>>> errors. >>>> >>>> Am I correct in guessing this is a packaging bug? >>> Yes you are. Don't know how this could happen. But yes I bumped into that >>> bug also some time before. >>> >>> You might want to try more recent versions. >>> >>> This should be fixed. >> Do you mean more recent version as of today? This was broken in the >> version I yum-installed yesterday. > > Hm. Are you using preview or productive? I think I have both repositories enabled in yum, so it would have come from whichever has the higher package version number. Gordan |
From: Marc G. <gr...@at...> - 2008-11-21 09:56:56
|
On Friday 21 November 2008 10:44:29 Gordan Bobic wrote: > Marc Grimme wrote: > > On Friday 21 November 2008 00:15:56 Gordan Bobic wrote: > >> Gordan Bobic wrote: > >>> I bumped into another problem, though: > >>> > >>> # com-mkcdslinfrastructure -r /gluster/root -i > >>> Traceback (most recent call last): > >>> File "/usr/bin/com-mkcdslinfrastructure", line 17, in ? > >>> from comoonics.cdsl.ComCdslRepository import * > >>> File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", > >>> line 28, in ? > >>> defaults_path = os.path.join(cdsls_path,cdslDefaults_element) > >>> NameError: name 'cdslDefaults_element' is not defined > >> > >> Digging a little deeper, it turns out that although the package versions > >> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the > >> new setup (x86-64), the contents of the > >> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally > >> different. The new version seems to have a bunch of definitions that, it > >> would seem, break things. > >> > >> I replaced with the old version, and it now seems to complete without > >> errors. > >> > >> Am I correct in guessing this is a packaging bug? > > > > Yes you are. Don't know how this could happen. But yes I bumped into that > > bug also some time before. > > > > You might want to try more recent versions. > > > > This should be fixed. > > Do you mean more recent version as of today? This was broken in the > version I yum-installed yesterday. Hm. Are you using preview or productive? > > Gordan > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's > challenge Build the coolest Linux based applications with Moblin SDK & win > great prizes Grand prize is a trip for two to an Open Source event anywhere > in the world http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > Open-sharedroot-devel mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-devel -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Gordan B. <go...@bo...> - 2008-11-21 09:44:58
|
Marc Grimme wrote: > On Friday 21 November 2008 00:15:56 Gordan Bobic wrote: >> Gordan Bobic wrote: >>> I bumped into another problem, though: >>> >>> # com-mkcdslinfrastructure -r /gluster/root -i >>> Traceback (most recent call last): >>> File "/usr/bin/com-mkcdslinfrastructure", line 17, in ? >>> from comoonics.cdsl.ComCdslRepository import * >>> File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", >>> line 28, in ? >>> defaults_path = os.path.join(cdsls_path,cdslDefaults_element) >>> NameError: name 'cdslDefaults_element' is not defined >> Digging a little deeper, it turns out that although the package versions >> (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the >> new setup (x86-64), the contents of the >> /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally >> different. The new version seems to have a bunch of definitions that, it >> would seem, break things. >> >> I replaced with the old version, and it now seems to complete without >> errors. >> >> Am I correct in guessing this is a packaging bug? > Yes you are. Don't know how this could happen. But yes I bumped into that bug > also some time before. > > You might want to try more recent versions. > > This should be fixed. Do you mean more recent version as of today? This was broken in the version I yum-installed yesterday. Gordan |
From: Marc G. <gr...@at...> - 2008-11-21 08:02:28
|
Thomas,

I moved the bug you've opened to the right place: https://bugzilla.open-sharedroot.org/show_bug.cgi?id=298

Please add yourself as CC to it. But anyway, I think if you really want to get any further you should follow the alternative way, which is described here:

* http://www.open-sharedroot.org/documentation/rhel5-gfs-shared-root-mini-howto
* http://www.open-sharedroot.org/documentation/the-opensharedroot-mini-howto

Thanks and Regards
Marc.

-- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |