From: Mark H. <hla...@at...> - 2007-12-20 16:45:15
On Thursday 20 December 2007 17:28:04 go...@bo... wrote:
> On Thu, 20 Dec 2007, Mark Hlawatschek wrote:
> >> However, mkinitrd seems to mangle and rewrite my cluster.conf and remove
> >> all the fencing devices. Is this normal? Is there anything special I
> >> have to do to get fencing to work? I am using fence_drac, if that makes
> >> any difference.
> >
> > mkinitrd shouldn't do anything to your cluster.conf. mkinitrd will always
> > use the cluster.conf in /etc/cluster/cluster.conf. Please make sure that
> > the version number of the latest cluster.conf is increased and that it is
> > deployed to all cluster nodes using "ccs_tool update
> > /etc/cluster/cluster.conf".
>
> I rebuilt the initrd, and the cluster.conf that ends up in
> /var/comoonics/chroot/etc/cluster/cluster.conf
> is NOT the same as the one in /etc/cluster/cluster.conf

This can happen if the cluster.conf in the initrd has a lower version number
than the cluster version number. In that case, the active cluster.conf with
the higher version number will be used.

> It seems to me that cluster.conf ends up getting rebuilt and mangled by
> mkinitrd before it is folded into the initrd.

mkinitrd really does nothing to the cluster.conf.

> > If you want to use fence_drac, you need to put all required perl stuff
> > into the chroot environment. This can be done either by
> > 1) adding all perl stuff into the initrd or
> > 2) adding the perl stuff only into the chroot environment during the
> > boot process.
> >
> > To do this, create a file called perl.list with the following content:
> >
> > -->snip
> > perl
> > perl-libwww-perl
> > perl-XML-Encoding
> > perl-URI
> > perl-HTML-Parser
> > perl-XML-Parser
> > perl-libxml-perl
> > perl-Net-Telnet
> > perl-HTML-Tagset
> > perl-Crypt-SSLeay
> > #####
> > -->snap
> >
> > and copy it into
> >
> > 1) /etc/comoonics/bootimage/rpms.initrd.d/
> > and make a new rpm
>
> Not sure I follow what you mean. What rpm?

Oops... I meant the initrd.

> > _or_
> > 2) /etc/comoonics/bootimage-chroot/rpms.initrd.d/
> > and run "service bootsr start"
>
> Run this on the booted system? What does the bootsr service do?

Yes, you can run this on the booted systems. bootsr just applies some updates
to the chroot environment, if they haven't been made already.

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/

**
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
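[For readers following along: option 2 above, end to end, amounts to the
following sketch. It uses exactly the paths and package list from the mail;
nothing here is invented, only assembled.]

-->snip
# create the package list for the chroot environment
cat > /etc/comoonics/bootimage-chroot/rpms.initrd.d/perl.list <<'EOF'
perl
perl-libwww-perl
perl-XML-Encoding
perl-URI
perl-HTML-Parser
perl-XML-Parser
perl-libxml-perl
perl-Net-Telnet
perl-HTML-Tagset
perl-Crypt-SSLeay
#####
EOF

# let bootsr pull the packages into the chroot on the running system
service bootsr start
-->snap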
From: <go...@bo...> - 2007-12-20 16:28:10
On Thu, 20 Dec 2007, Mark Hlawatschek wrote:

>> However, mkinitrd seems to mangle and rewrite my cluster.conf and remove
>> all the fencing devices. Is this normal? Is there anything special I have
>> to do to get fencing to work? I am using fence_drac, if that makes any
>> difference.
>
> mkinitrd shouldn't do anything to your cluster.conf. mkinitrd will always
> use the cluster.conf in /etc/cluster/cluster.conf. Please make sure that
> the version number of the latest cluster.conf is increased and it is
> deployed to all cluster nodes using "ccs_tool update
> /etc/cluster/cluster.conf".

I rebuilt the initrd, and the cluster.conf that ends up in
/var/comoonics/chroot/etc/cluster/cluster.conf
is NOT the same as the one in /etc/cluster/cluster.conf

It seems to me that cluster.conf ends up getting rebuilt and mangled by
mkinitrd before it is folded into the initrd.

> If you want to use fence_drac, you need to put all required perl stuff
> into the chroot environment. This can be done either by
> 1) adding all perl stuff into the initrd or
> 2) adding the perl stuff only into the chroot environment during the boot
> process.
>
> To do this, create a file called perl.list with the following content:
>
> -->snip
> perl
> perl-libwww-perl
> perl-XML-Encoding
> perl-URI
> perl-HTML-Parser
> perl-XML-Parser
> perl-libxml-perl
> perl-Net-Telnet
> perl-HTML-Tagset
> perl-Crypt-SSLeay
> #####
> -->snap
>
> and copy it into
>
> 1) /etc/comoonics/bootimage/rpms.initrd.d/
> and make a new rpm

Not sure I follow what you mean. What rpm?

> _or_
> 2) /etc/comoonics/bootimage-chroot/rpms.initrd.d/
> and run "service bootsr start"

Run this on the booted system? What does the bootsr service do?

Gordan
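[A quick way to confirm the suspected version mismatch is to compare the
config_version attribute in both copies, using the two paths named above:]

-->snip
grep config_version /etc/cluster/cluster.conf
grep config_version /var/comoonics/chroot/etc/cluster/cluster.conf
-->snap

[If the chroot copy shows a lower number, the behaviour described in Mark's
reply above applies: the copy with the higher version number wins.]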
From: Mark H. <hla...@at...> - 2007-12-20 16:19:02
Hi Gordan,

> I'm trying to get my OSR cluster to not lock up completely when one of the
> nodes goes down, so I have been trying to get fencing to work.

Good idea :-)

> However, mkinitrd seems to mangle and rewrite my cluster.conf and remove
> all the fencing devices. Is this normal? Is there anything special I have
> to do to get fencing to work? I am using fence_drac, if that makes any
> difference.

mkinitrd shouldn't do anything to your cluster.conf. mkinitrd will always use
the cluster.conf in /etc/cluster/cluster.conf. Please make sure that the
version number of the latest cluster.conf is increased and that it is
deployed to all cluster nodes using "ccs_tool update
/etc/cluster/cluster.conf".

If you want to use fence_drac, you need to put all required perl stuff into
the chroot environment. This can be done either by
1) adding all perl stuff into the initrd or
2) adding the perl stuff only into the chroot environment during the boot
process.

To do this, create a file called perl.list with the following content:

-->snip
perl
perl-libwww-perl
perl-XML-Encoding
perl-URI
perl-HTML-Parser
perl-XML-Parser
perl-libxml-perl
perl-Net-Telnet
perl-HTML-Tagset
perl-Crypt-SSLeay
#####
-->snap

and copy it into

1) /etc/comoonics/bootimage/rpms.initrd.d/
and make a new rpm

_or_

2) /etc/comoonics/bootimage-chroot/rpms.initrd.d/
and run "service bootsr start"

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/

**
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
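[For context, "increasing the version number" means bumping the
config_version attribute at the top of cluster.conf before deploying it.
The cluster name and numbers below are hypothetical:]

-->snip
<?xml version="1.0"?>
<!-- increment config_version on every change, e.g. 41 -> 42 -->
<cluster name="mycluster" config_version="42">
        ...
</cluster>
-->snap

followed by

-->snip
ccs_tool update /etc/cluster/cluster.conf
-->snap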
From: <go...@bo...> - 2007-12-20 12:37:36
Hi,

I'm trying to get my OSR cluster to not lock up completely when one of the
nodes goes down, so I have been trying to get fencing to work.

However, mkinitrd seems to mangle and rewrite my cluster.conf and remove all
the fencing devices. Is this normal? Is there anything special I have to do
to get fencing to work? I am using fence_drac, if that makes any difference.

Thanks.

Gordan
From: <go...@bo...> - 2007-11-08 12:19:57
On Thu, 8 Nov 2007, Mark Hlawatschek wrote:

>> Once my shared root cluster has booted, if I try cman_tool status, it
>> tells me that cman isn't running. Taking a closer look, dlm_controld,
>> aisexec and fenced are running, but not cman. Is this normal?
>
> There should be the following two symbolic links. The links should be
> automatically created during the boot process by /etc/init.d/bootsr
>
> # ls -l /var/run/cman_*
> lrwxrwxrwx 1 root root 40 Nov 8 14:30 /var/run/cman_admin
>   -> /var/comoonics/chroot/var/run/cman_admin
> lrwxrwxrwx 1 root root 41 Nov 8 14:30 /var/run/cman_client
>   -> /var/comoonics/chroot/var/run/cman_client

There are no /var/run/cman* links. But manually creating the symlinks fixed
the problem, thanks. :-)

Gordan
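[The manual fix, spelled out. The link targets are exactly those shown in
the ls output above; they redirect cman_tool's admin and client sockets into
the comoonics chroot, where the cluster software runs in an OSR setup:]

-->snip
ln -s /var/comoonics/chroot/var/run/cman_admin /var/run/cman_admin
ln -s /var/comoonics/chroot/var/run/cman_client /var/run/cman_client
-->snap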
From: Mark H. <hla...@at...> - 2007-11-08 11:01:10
> Once my shared root cluster has booted, if I try cman_tool status, it
> tells me that cman isn't running. Taking a closer look, dlm_controld,
> aisexec and fenced are running, but not cman. Is this normal?

There should be the following two symbolic links. The links should be
automatically created during the boot process by /etc/init.d/bootsr

# ls -l /var/run/cman_*
lrwxrwxrwx 1 root root 40 Nov 8 14:30 /var/run/cman_admin
  -> /var/comoonics/chroot/var/run/cman_admin
lrwxrwxrwx 1 root root 41 Nov 8 14:30 /var/run/cman_client
  -> /var/comoonics/chroot/var/run/cman_client

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/

**
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
From: <go...@bo...> - 2007-11-08 09:58:45
Hi,

Once my shared root cluster has booted, if I try cman_tool status, it tells
me that cman isn't running. Taking a closer look, dlm_controld, aisexec and
fenced are running, but not cman. Is this normal?

Gordan
From: <go...@bo...> - 2007-11-08 08:49:17
On Wed, 7 Nov 2007, Marc Grimme wrote:

>>>> I'm having a problem with the way the ifcfg-eth* files are being
>>>> handled for the initrd. My ifcfg-eth1 file doesn't get transferred
>>>> across verbatim. This is a problem because I have to explicitly specify
>>>> the HWADDR in my ifcfg files (otherwise they come up on wrong subnets).
>>>
>>> The initrd creates new ifcfg-ethX files for the defined ethernet
>>> interfaces for its internal use, i.e. for all <eth /> entries for the
>>> node in the cluster.conf. These files are only valid inside the initrd.
>>
>> Sure, but the iSCSI interface has to be the same in root and initrd,
>> otherwise it won't work.
>>
>>>> It looks as if mkinitrd removes some lines from the file, and this
>>>> causes eth1 to be physical eth0, which puts it on the wrong subnet, and
>>>> that means it can't see the iSCSI shares. :-(
>>>>
>>>> I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd,
>>>> and it is indeed lacking the HWADDR line (and a few others).
>>>>
>>>> Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?
>>>
>>> mkinitrd does not modify the ifcfg-ethX files.
>>> If you want to define the HWADDR parameter in the ifcfg-ethX files, you
>>> have to make them host-dependent.
>>
>> I did, but network-lib.sh checks if it's in the initrd, and if so saves
>> a backup and creates a new one from the parameters that come from
>> elsewhere (I'm assuming from cluster.conf).
>>
>> The new one doesn't have the HWADDR parameter in it, because it isn't one
>> of the things passed to network-lib.sh. So, any interfaces that are bound
>> by MAC address rather than the default ordering won't work properly in
>> the initrd. I have interface name, ip and mac specified in cluster.conf,
>> but it never gets included in the built ifcfg file. :-(
>
> That's a known "Bug" I dare say ;-) . But it should not be a big thing to
> change, should it?!
> Try the rpm attached.
> What do you think?

I'll try it later, when my cluster stabilises a bit. I spent all of
yesterday getting a 3-node cluster to boot, and it kept failing when
starting the cman service. Eventually, it booted up fine, and I still don't
know what I changed in the initrd that would make any difference at all - so
I'm a bit reluctant to try changing things again. For now I think I'll have
to live with interface inconsistency on a few machines. Maybe in a few
days. :-)

On a separate note - is there an option to include a delay after the
interface comes up? I'm using managed switches and they are quite crap -
they can take up to 30-60 seconds to figure out that something has been
plugged in, so when the interface comes up, the SAN isn't yet accessible by
the time iSCSI loads. For now I've bodged it by adding a sleep 30 in
iscsi-lib.sh, but it'd be nicer if it just used the standard ifcfg options
for this, as it's more generic.

Gordan
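[For reference, the standard ifcfg option Gordan is after is LINKDELAY,
which tells the stock Red Hat network scripts to wait up to the given number
of seconds for link detection before continuing. Whether the OSR initrd
honours it is exactly the open question in this thread. A sketch of the
host-dependent file, with hypothetical MAC and addresses:]

-->snip
# /etc/sysconfig/network-scripts/ifcfg-eth1 (host-dependent copy)
DEVICE=eth1
HWADDR=00:16:3E:AA:BB:CC   # hypothetical - binds the name eth1 to this NIC
BOOTPROTO=static
IPADDR=10.10.10.2          # hypothetical SAN subnet address
NETMASK=255.255.255.0
ONBOOT=yes
LINKDELAY=60               # wait up to 60s for slow managed switches
-->snap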
From: Marc G. <gr...@at...> - 2007-11-07 17:19:35
On Tuesday 06 November 2007 13:48:56 go...@bo... wrote:
> On Tue, 6 Nov 2007, Mark Hlawatschek wrote:
> >> I'm having a problem with the way the ifcfg-eth* files are being
> >> handled for the initrd. My ifcfg-eth1 file doesn't get transferred
> >> across verbatim. This is a problem because I have to explicitly specify
> >> the HWADDR in my ifcfg files (otherwise they come up on wrong subnets).
> >
> > The initrd creates new ifcfg-ethX files for the defined ethernet
> > interfaces for its internal use, i.e. for all <eth /> entries for the
> > node in the cluster.conf. These files are only valid inside the initrd.
>
> Sure, but the iSCSI interface has to be the same in root and initrd,
> otherwise it won't work.
>
> >> It looks as if mkinitrd removes some lines from the file, and this
> >> causes eth1 to be physical eth0, which puts it on the wrong subnet, and
> >> that means it can't see the iSCSI shares. :-(
> >>
> >> I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd,
> >> and it is indeed lacking the HWADDR line (and a few others).
> >>
> >> Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?
> >
> > mkinitrd does not modify the ifcfg-ethX files.
> > If you want to define the HWADDR parameter in the ifcfg-ethX files, you
> > have to make them host-dependent.
>
> I did, but network-lib.sh checks if it's in the initrd, and if so saves
> a backup and creates a new one from the parameters that come from
> elsewhere (I'm assuming from cluster.conf).
>
> The new one doesn't have the HWADDR parameter in it, because it isn't one
> of the things passed to network-lib.sh. So, any interfaces that are bound
> by MAC address rather than the default ordering won't work properly in
> the initrd. I have interface name, ip and mac specified in cluster.conf,
> but it never gets included in the built ifcfg file. :-(

That's a known "Bug" I dare say ;-) . But it should not be a big thing to
change, should it?!
Try the rpm attached.
What do you think?

Marc.

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/
http://www.open-sharedroot.org/

**
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
Phone: +49-89 452 3538-0
Fax: +49-89 990 1766-0

Registergericht: Amtsgericht Muenchen
Registernummer: HRB 168930
USt.-Id.: DE209485962
Vorstand: Marc Grimme, Mark Hlawatschek, Thomas Merz (Vors.)
Vorsitzender des Aufsichtsrats: Dr. Martin Buss
From: <go...@bo...> - 2007-11-06 12:49:58
On Tue, 6 Nov 2007, Mark Hlawatschek wrote:

>> I'm having a problem with the way the ifcfg-eth* files are being handled
>> for the initrd. My ifcfg-eth1 file doesn't get transferred across
>> verbatim. This is a problem because I have to explicitly specify the
>> HWADDR in my ifcfg files (otherwise they come up on wrong subnets).
>
> The initrd creates new ifcfg-ethX files for the defined ethernet
> interfaces for its internal use, i.e. for all <eth /> entries for the node
> in the cluster.conf. These files are only valid inside the initrd.

Sure, but the iSCSI interface has to be the same in root and initrd,
otherwise it won't work.

>> It looks as if mkinitrd removes some lines from the file, and this causes
>> eth1 to be physical eth0, which puts it on the wrong subnet, and that
>> means it can't see the iSCSI shares. :-(
>>
>> I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd,
>> and it is indeed lacking the HWADDR line (and a few others).
>>
>> Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?
>
> mkinitrd does not modify the ifcfg-ethX files.
> If you want to define the HWADDR parameter in the ifcfg-ethX files, you
> have to make them host-dependent.

I did, but network-lib.sh checks if it's in the initrd, and if so saves a
backup and creates a new one from the parameters that come from elsewhere
(I'm assuming from cluster.conf).

The new one doesn't have the HWADDR parameter in it, because it isn't one of
the things passed to network-lib.sh. So, any interfaces that are bound by
MAC address rather than the default ordering won't work properly in the
initrd. I have interface name, ip and mac specified in cluster.conf, but it
never gets included in the built ifcfg file. :-(

Gordan
From: Mark H. <hla...@at...> - 2007-11-06 12:42:49
> I'm having a problem with the way the ifcfg-eth* files are being handled
> for the initrd. My ifcfg-eth1 file doesn't get transferred across
> verbatim. This is a problem because I have to explicitly specify the
> HWADDR in my ifcfg files (otherwise they come up on wrong subnets).

The initrd creates new ifcfg-ethX files for the defined ethernet interfaces
for its internal use, i.e. for all <eth /> entries for the node in the
cluster.conf. These files are only valid inside the initrd.

> It looks as if mkinitrd removes some lines from the file, and this causes
> eth1 to be physical eth0, which puts it on the wrong subnet, and that
> means it can't see the iSCSI shares. :-(
>
> I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd, and
> it is indeed lacking the HWADDR line (and a few others).
>
> Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?

mkinitrd does not modify the ifcfg-ethX files.
If you want to define the HWADDR parameter in the ifcfg-ethX files, you have
to make them host-dependent.

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/

**
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
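[For reference, the <eth /> entries Mark refers to live in the node's
com_info section of cluster.conf. A sketch with hypothetical values, using
only the attributes mentioned in this thread (name, ip, mac); the exact
nesting is assumed from the thread's references to com_info:]

-->snip
<clusternode name="node1" nodeid="1">
        <com_info>
                <eth name="eth1" ip="10.10.10.2" mac="00:16:3E:AA:BB:CC"/>
        </com_info>
</clusternode>
-->snap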
From: <go...@bo...> - 2007-11-06 12:20:16
OK, I found the problem. network-lib.sh clobbers the ifcfg-eth* files (and
saves a backup).

Now, I could just cut that out and get it to use the existing ifcfg-eth*
file that I can just add to the initrd manually. But that's kind of naff.
So, I need to somehow get the mac="a:b:c:d:e:f" parameter from cluster.conf.
Is this accessible from network-lib.sh, or do I have to pass it in? If I
have to pass it in, what is the call stack, and what is the deepest point
where the cluster.conf stuff is available, so that I can add the additional
function parameter to the lot?

Thanks.

Gordan

On Tue, 6 Nov 2007, go...@bo... wrote:

> Just tried working around it by adding the ifcfg-eth1 file to the initrd
> build and it didn't help. It's as if the ifcfg file actually gets built at
> run-time. What program does this?
>
> Gordan
>
> On Tue, 6 Nov 2007, go...@bo... wrote:
>
>> Thanks for that. :-)
>>
>> I'm having a problem with the way the ifcfg-eth* files are being handled
>> for the initrd. My ifcfg-eth1 file doesn't get transferred across
>> verbatim. This is a problem because I have to explicitly specify the
>> HWADDR in my ifcfg files (otherwise they come up on wrong subnets).
>>
>> It looks as if mkinitrd removes some lines from the file, and this causes
>> eth1 to be physical eth0, which puts it on the wrong subnet, and that
>> means it can't see the iSCSI shares. :-(
>>
>> I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd,
>> and it is indeed lacking the HWADDR line (and a few others).
>>
>> Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?
>>
>> Thanks.
>>
>> Gordan
>>
>> On Mon, 22 Oct 2007, Mark Hlawatschek wrote:
>>
>>>> I'm in a position where I have to move my GFS volume from one SAN to
>>>> another, on a different IP block.
>>>>
>>>> If my IP for the SAN and cluster operation is assigned by DHCP, do I
>>>> have to specify it in cluster.conf anyway? Or is specifying the MAC
>>>> address sufficient? Or do I specify something like "dhcp" as the IP
>>>> address?
>>>
>>> Yes, it should be possible to use dhcp by adding this:
>>> <eth name="eth0" ip="dhcp" mac="yourmacaddress" />
>>> Please note that defining a mac address is mandatory, because it defines
>>> the node in the cluster.
>>>
>>>> If the cluster doesn't come up normally and I'm stuck in the initrd
>>>> boot image, is there a documented list of procedures on how to get from
>>>> there to mounting the GFS root volume in the right place sufficiently
>>>> (i.e. including the host-specific files, such as /etc/modprobe.conf) to
>>>> re-make the initrd?
>>>
>>> You need to do the following steps:
>>> 1. mount your rootfs to /mnt/newroot (if the cluster software fails to
>>> start, use the lockproto=lock_nolock option to mount the fs. But make
>>> sure it's the only node mounting the filesystem ;-) )
>>> # mount -t gfs -o lockproto=lock_nolock /dev/foo /mnt/newroot
>>> 2. mount --bind /dev /mnt/newroot/dev
>>> 3. mount -t proc proc /mnt/newroot/proc
>>> 4. mount -t sysfs none /mnt/newroot/sys
>>> 5. mount --bind /mnt/newroot/cluster/cdsl/<nodeid> /mnt/newroot/cdsl.local
>>> 6. chroot /mnt/newroot
>>> 7. you can make your changes and build a new initrd in the chroot
From: <go...@bo...> - 2007-11-06 11:32:17
Just tried working around it by adding the ifcfg-eth1 file to the initrd
build and it didn't help. It's as if the ifcfg file actually gets built at
run-time. What program does this?

Gordan

On Tue, 6 Nov 2007, go...@bo... wrote:

> Thanks for that. :-)
>
> I'm having a problem with the way the ifcfg-eth* files are being handled
> for the initrd. My ifcfg-eth1 file doesn't get transferred across
> verbatim. This is a problem because I have to explicitly specify the
> HWADDR in my ifcfg files (otherwise they come up on wrong subnets).
>
> It looks as if mkinitrd removes some lines from the file, and this causes
> eth1 to be physical eth0, which puts it on the wrong subnet, and that
> means it can't see the iSCSI shares. :-(
>
> I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd, and
> it is indeed lacking the HWADDR line (and a few others).
>
> Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?
>
> Thanks.
>
> Gordan
>
> On Mon, 22 Oct 2007, Mark Hlawatschek wrote:
>
>>> I'm in a position where I have to move my GFS volume from one SAN to
>>> another, on a different IP block.
>>>
>>> If my IP for the SAN and cluster operation is assigned by DHCP, do I
>>> have to specify it in cluster.conf anyway? Or is specifying the MAC
>>> address sufficient? Or do I specify something like "dhcp" as the IP
>>> address?
>>
>> Yes, it should be possible to use dhcp by adding this:
>> <eth name="eth0" ip="dhcp" mac="yourmacaddress" />
>> Please note that defining a mac address is mandatory, because it defines
>> the node in the cluster.
>>
>>> If the cluster doesn't come up normally and I'm stuck in the initrd boot
>>> image, is there a documented list of procedures on how to get from there
>>> to mounting the GFS root volume in the right place sufficiently (i.e.
>>> including the host-specific files, such as /etc/modprobe.conf) to
>>> re-make the initrd?
>>
>> You need to do the following steps:
>> 1. mount your rootfs to /mnt/newroot (if the cluster software fails to
>> start, use the lockproto=lock_nolock option to mount the fs. But make
>> sure it's the only node mounting the filesystem ;-) )
>> # mount -t gfs -o lockproto=lock_nolock /dev/foo /mnt/newroot
>> 2. mount --bind /dev /mnt/newroot/dev
>> 3. mount -t proc proc /mnt/newroot/proc
>> 4. mount -t sysfs none /mnt/newroot/sys
>> 5. mount --bind /mnt/newroot/cluster/cdsl/<nodeid> /mnt/newroot/cdsl.local
>> 6. chroot /mnt/newroot
>> 7. you can make your changes and build a new initrd in the chroot
From: <go...@bo...> - 2007-11-06 10:53:59
Thanks for that. :-)

I'm having a problem with the way the ifcfg-eth* files are being handled for
the initrd. My ifcfg-eth1 file doesn't get transferred across verbatim. This
is a problem because I have to explicitly specify the HWADDR in my ifcfg
files (otherwise they come up on wrong subnets).

It looks as if mkinitrd removes some lines from the file, and this causes
eth1 to be physical eth0, which puts it on the wrong subnet, and that means
it can't see the iSCSI shares. :-(

I checked the /etc/sysconfig/network-scripts/ifcfg-eth1 in the initrd, and
it is indeed lacking the HWADDR line (and a few others).

Am I missing something here? Does mkinitrd mangle the ifcfg-eth* files?

Thanks.

Gordan

On Mon, 22 Oct 2007, Mark Hlawatschek wrote:

>> I'm in a position where I have to move my GFS volume from one SAN to
>> another, on a different IP block.
>>
>> If my IP for the SAN and cluster operation is assigned by DHCP, do I have
>> to specify it in cluster.conf anyway? Or is specifying the MAC address
>> sufficient? Or do I specify something like "dhcp" as the IP address?
>
> Yes, it should be possible to use dhcp by adding this:
> <eth name="eth0" ip="dhcp" mac="yourmacaddress" />
> Please note that defining a mac address is mandatory, because it defines
> the node in the cluster.
>
>> If the cluster doesn't come up normally and I'm stuck in the initrd boot
>> image, is there a documented list of procedures on how to get from there
>> to mounting the GFS root volume in the right place sufficiently (i.e.
>> including the host-specific files, such as /etc/modprobe.conf) to re-make
>> the initrd?
>
> You need to do the following steps:
> 1. mount your rootfs to /mnt/newroot (if the cluster software fails to
> start, use the lockproto=lock_nolock option to mount the fs. But make
> sure it's the only node mounting the filesystem ;-) )
> # mount -t gfs -o lockproto=lock_nolock /dev/foo /mnt/newroot
> 2. mount --bind /dev /mnt/newroot/dev
> 3. mount -t proc proc /mnt/newroot/proc
> 4. mount -t sysfs none /mnt/newroot/sys
> 5. mount --bind /mnt/newroot/cluster/cdsl/<nodeid> /mnt/newroot/cdsl.local
> 6. chroot /mnt/newroot
> 7. you can make your changes and build a new initrd in the chroot
From: Mark H. <hla...@at...> - 2007-10-22 14:49:59
On Monday 22 October 2007 09:48:32 go...@bo... wrote:
> I'm in a position where I have to move my GFS volume from one SAN to
> another, on a different IP block.
>
> If my IP for the SAN and cluster operation is assigned by DHCP, do I have
> to specify it in cluster.conf anyway? Or is specifying the MAC address
> sufficient? Or do I specify something like "dhcp" as the IP address?

Yes, it should be possible to use dhcp by adding this:
<eth name="eth0" ip="dhcp" mac="yourmacaddress" />
Please note that defining a mac address is mandatory, because it defines the
node in the cluster.

> If the cluster doesn't come up normally and I'm stuck in the initrd boot
> image, is there a documented list of procedures on how to get from there
> to mounting the GFS root volume in the right place sufficiently (i.e.
> including the host-specific files, such as /etc/modprobe.conf) to re-make
> the initrd?

You need to do the following steps:

1. mount your rootfs to /mnt/newroot (if the cluster software fails to
   start, use the lockproto=lock_nolock option to mount the fs. But make
   sure it's the only node mounting the filesystem ;-) )
   # mount -t gfs -o lockproto=lock_nolock /dev/foo /mnt/newroot
2. mount --bind /dev /mnt/newroot/dev
3. mount -t proc proc /mnt/newroot/proc
4. mount -t sysfs none /mnt/newroot/sys
5. mount --bind /mnt/newroot/cluster/cdsl/<nodeid> /mnt/newroot/cdsl.local
6. chroot /mnt/newroot
7. you can make your changes and build a new initrd in the chroot

Hope that helps,

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/

**
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
From: <go...@bo...> - 2007-10-22 07:49:55
I'm in a position where I have to move my GFS volume from one SAN to
another, on a different IP block.

If my IP for the SAN and cluster operation is assigned by DHCP, do I have to
specify it in cluster.conf anyway? Or is specifying the MAC address
sufficient? Or do I specify something like "dhcp" as the IP address?

If the cluster doesn't come up normally and I'm stuck in the initrd boot
image, is there a documented list of procedures on how to get from there to
mounting the GFS root volume in the right place sufficiently (i.e. including
the host-specific files, such as /etc/modprobe.conf) to re-make the initrd?

Thanks.

Gordan
From: Gordan B. <go...@bo...> - 2007-10-18 09:10:21
On Thu, 18 Oct 2007, Mark Hlawatschek wrote:

>> My cluster.conf says:
>> <rootvolume name="/dev/sdb" mountopts="noatime,nodiratime"/>
>>
>> But /proc/mounts says:
>> /dev/sdb / gfs rw,hostdata=jid=0:id=131076:first=0 0 0
>> /dev/sdb /cdsl.local gfs rw,hostdata=jid=0:id=131076:first=0 0 0
>>
>> It doesn't list the noatime and nodiratime options.
>>
>> Am I doing something wrong here?
>
> No, you hit a bug. It is fixed in the latest comoonics-bootimage-1.3.21
> release.

Ah, OK. I worked around it by using the gfs-mountopt kernel command line
parameter, which did the trick. Thanks for responding and fixing it. :-)

Gordan
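[The workaround mentioned is a boot-time argument, so it would go on the
kernel line in grub.conf. A rough sketch - the kernel and initrd file names
are hypothetical, and the key=value form of gfs-mountopt is assumed from the
parameter name rather than confirmed in this thread:]

-->snip
title Open-Sharedroot
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro gfs-mountopt=noatime,nodiratime
        initrd /initrd_sr-2.6.18-8.el5.img
-->snap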
From: Mark H. <hla...@at...> - 2007-10-18 08:40:09
On Monday 15 October 2007 17:13:14 Gordan Bobic wrote:
> My cluster.conf says:
> <rootvolume name="/dev/sdb" mountopts="noatime,nodiratime"/>
>
> But /proc/mounts says:
> /dev/sdb / gfs rw,hostdata=jid=0:id=131076:first=0 0 0
> /dev/sdb /cdsl.local gfs rw,hostdata=jid=0:id=131076:first=0 0 0
>
> It doesn't list the noatime and nodiratime options.
>
> Am I doing something wrong here?

No, you hit a bug. It is fixed in the latest comoonics-bootimage-1.3.21
release.

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2007-10-15 15:26:52
On Mon, 15 Oct 2007, Gordan Bobic wrote:

>>>>>>> I have attached a tar ball of files that I changed (or added) in
>>>>>>> the standard setup to get iSCSI to work.
>>>>>>>
>>>>>>> Please let me know if I omitted anything.
>>>>>>>
>>>>>>> There is probably some pruning that could be done to this (I went
>>>>>>> for including full RPMs + configs), which was arguably sub-optimal
>>>>>>> - there is likely to be some docs/man pages included (not that
>>>>>>> anyone is likely to notice next to 70MB of drivers :^).
>>>>>>>
>>>>>>> I also hard-removed (C)LVM stuff - something that you probably
>>>>>>> won't want to merge into the main tree.
>>>>>>>
>>>>>>> Thanks for all the help over the last few days, and for accepting
>>>>>>> my patches. I really appreciate it. This is by far the most
>>>>>>> positive OSS experience I have had to date. :-)
>>>>>>
>>>>>> Attached you'll find the latest rpm with iscsi included.
>>>>>> The last thing you should need is either rootsource=iscsi as a boot
>>>>>> parameter or <rootsource name="iscsi"/> in com_info.
>>>>>> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
>>>>>> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>>>>>>
>>>>>> Let me know if it works.
>>>>>
>>>>> It appears to be broken. The new initrd fails to fire up iSCSI. Where
>>>>> my old hard-coded version would modprobe the required modules and
>>>>> start up iscsid and iscsi, the new version doesn't.
>>>>>
>>>>> I added <rootsource name="iscsi"/> to the com_info section on every
>>>>> node.
>>>>>
>>>>> Did I miss something?
>>>>
>>>> I just added the rootsource=iscsi kernel boot option, and that doesn't
>>>> fix it. Carefully looking at the screen log, the iSCSI TCP transport
>>>> module does appear to get loaded, but the iscsid and iscsi services
>>>> don't get started.
>>>
>>> Hmm... It looks like iSCSI is getting started correctly after all. It
>>> does come up and brings up the block device. The problem is somewhere
>>> else. It may be a good idea to not silence the iscsi startup, just for
>>> easier overview of what happens and what doesn't (unless you
>>> particularly object to a few green OKs among the blues).
>>
>> Something very weird is happening. With full debug enabled, the configfs
>> module is loaded, but mounting the configfs fails. :-/
>
> I'm not entirely sure how, but it was the existence of the
> /etc/sysconfig/comoonics-chroot file that was breaking things. I just
> looked through the new RPMs, and that's not where it came from. I can
> only guess that I created it to silence a warning, and then later removed
> it from the /etc/comoonics file list. The configs got replaced by the new
> RPMs, which put it back, and that is what broke it.
>
> Instead of replacing the files in /etc/comoonics, perhaps it would be
> better to create them as file.rpmnew, to avoid clobbering an existing,
> good config.
>
> Anyway, the new RPMs are good. They meet the "it works for me" approval.
> :-)

I spoke a little too soon. The rootsource=iscsi kernel boot option works,
but the xml option in cluster.conf doesn't seem to. With just the
cluster.conf option, the image won't boot.

Gordan
From: Gordan B. <go...@bo...> - 2007-10-15 15:13:26
My cluster.conf says:
<rootvolume name="/dev/sdb" mountopts="noatime,nodiratime"/>

But /proc/mounts says:
/dev/sdb / gfs rw,hostdata=jid=0:id=131076:first=0 0 0
/dev/sdb /cdsl.local gfs rw,hostdata=jid=0:id=131076:first=0 0 0

It doesn't list the noatime and nodiratime options.

Am I doing something wrong here?

Gordan
From: Gordan B. <go...@bo...> - 2007-10-15 15:09:43
On Mon, 15 Oct 2007, Gordan Bobic wrote:

>>>>>> I have attached a tar ball of files that I changed (or added) in the
>>>>>> standard setup to get iSCSI to work.
>>>>>>
>>>>>> Please let me know if I omitted anything.
>>>>>>
>>>>>> There is probably some pruning that could be done to this (I went for
>>>>>> including full RPMs + configs), which was arguably sub-optimal -
>>>>>> there is likely to be some docs/man pages included (not that anyone
>>>>>> is likely to notice next to 70MB of drivers :^).
>>>>>>
>>>>>> I also hard-removed (C)LVM stuff - something that you probably won't
>>>>>> want to merge into the main tree.
>>>>>>
>>>>>> Thanks for all the help over the last few days, and for accepting my
>>>>>> patches. I really appreciate it. This is by far the most positive
>>>>>> OSS experience I have had to date. :-)
>>>>>
>>>>> Attached you'll find the latest rpm with iscsi included.
>>>>> The last thing you should need is either rootsource=iscsi as a boot
>>>>> parameter or <rootsource name="iscsi"/> in com_info.
>>>>> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
>>>>> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>>>>>
>>>>> Let me know if it works.
>>>>
>>>> It appears to be broken. The new initrd fails to fire up iSCSI. Where
>>>> my old hard-coded version would modprobe the required modules and
>>>> start up iscsid and iscsi, the new version doesn't.
>>>>
>>>> I added <rootsource name="iscsi"/> to the com_info section on every
>>>> node.
>>>>
>>>> Did I miss something?
>>>
>>> I just added the rootsource=iscsi kernel boot option, and that doesn't
>>> fix it. Carefully looking at the screen log, the iSCSI TCP transport
>>> module does appear to get loaded, but the iscsid and iscsi services
>>> don't get started.
>>
>> Hmm... It looks like iSCSI is getting started correctly after all. It
>> does come up and brings up the block device. The problem is somewhere
>> else. It may be a good idea to not silence the iscsi startup, just for
>> easier overview of what happens and what doesn't (unless you
>> particularly object to a few green OKs among the blues).
>
> Something very weird is happening. With full debug enabled, the configfs
> module is loaded, but mounting the configfs fails. :-/

I'm not entirely sure how, but it was the existence of the
/etc/sysconfig/comoonics-chroot file that was breaking things. I just looked
through the new RPMs, and that's not where it came from. I can only guess
that I created it to silence a warning, and then later removed it from the
/etc/comoonics file list. The configs got replaced by the new RPMs, which
put it back, and that is what broke it.

Instead of replacing the files in /etc/comoonics, perhaps it would be better
to create them as file.rpmnew, to avoid clobbering an existing, good config.

Anyway, the new RPMs are good. They meet the "it works for me" approval. :-)

Thanks again, guys.

Gordan
From: Gordan B. <go...@bo...> - 2007-10-15 14:13:58
>>>>> I have attached a tar ball of files that I changed (or added) in the
>>>>> standard setup to get iSCSI to work.
>>>>>
>>>>> Please let me know if I omitted anything.
>>>>>
>>>>> There is probably some pruning that could be done to this (I went for
>>>>> including full RPMs + configs), which was arguably sub-optimal - there
>>>>> is likely to be some docs/man pages included (not that anyone is
>>>>> likely to notice next to 70MB of drivers :^).
>>>>>
>>>>> I also hard-removed (C)LVM stuff - something that you probably won't
>>>>> want to merge into the main tree.
>>>>>
>>>>> Thanks for all the help over the last few days, and for accepting my
>>>>> patches. I really appreciate it. This is by far the most positive OSS
>>>>> experience I have had to date. :-)
>>>>
>>>> Attached you'll find the latest rpm with iscsi included.
>>>> The last thing you should need is either rootsource=iscsi as a boot
>>>> parameter or <rootsource name="iscsi"/> in com_info.
>>>> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
>>>> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>>>>
>>>> Let me know if it works.
>>>
>>> It appears to be broken. The new initrd fails to fire up iSCSI. Where my
>>> old hard-coded version would modprobe the required modules and start up
>>> iscsid and iscsi, the new version doesn't.
>>>
>>> I added <rootsource name="iscsi"/> to the com_info section on every
>>> node.
>>>
>>> Did I miss something?
>>
>> I just added the rootsource=iscsi kernel boot option, and that doesn't
>> fix it. Carefully looking at the screen log, the iSCSI TCP transport
>> module does appear to get loaded, but the iscsid and iscsi services
>> don't get started.
>
> Hmm... It looks like iSCSI is getting started correctly after all. It does
> come up and brings up the block device. The problem is somewhere else. It
> may be a good idea to not silence the iscsi startup, just for easier
> overview of what happens and what doesn't (unless you particularly object
> to a few green OKs among the blues).

Something very weird is happening. With full debug enabled, the configfs
module is loaded, but mounting the configfs fails. :-/

Gordan
From: Gordan B. <go...@bo...> - 2007-10-15 14:02:49
On Mon, 15 Oct 2007, Gordan Bobic wrote:

>>>> I have attached a tar ball of files that I changed (or added) in the
>>>> standard setup to get iSCSI to work.
>>>>
>>>> Please let me know if I omitted anything.
>>>>
>>>> There is probably some pruning that could be done to this (I went for
>>>> including full RPMs + configs), which was arguably sub-optimal - there
>>>> is likely to be some docs/man pages included (not that anyone is
>>>> likely to notice next to 70MB of drivers :^).
>>>>
>>>> I also hard-removed (C)LVM stuff - something that you probably won't
>>>> want to merge into the main tree.
>>>>
>>>> Thanks for all the help over the last few days, and for accepting my
>>>> patches. I really appreciate it. This is by far the most positive OSS
>>>> experience I have had to date. :-)
>>>>
>>>> Gordan
>>>
>>> Attached you'll find the latest rpm with iscsi included.
>>> The last thing you should need is either rootsource=iscsi as a boot
>>> parameter or <rootsource name="iscsi"/> in com_info.
>>> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
>>> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>>>
>>> Let me know if it works.
>>
>> It appears to be broken. The new initrd fails to fire up iSCSI. Where my
>> old hard-coded version would modprobe the required modules and start up
>> iscsid and iscsi, the new version doesn't.
>>
>> I added <rootsource name="iscsi"/> to the com_info section on every node.
>>
>> Did I miss something?
>
> I just added the rootsource=iscsi kernel boot option, and that doesn't fix
> it. Carefully looking at the screen log, the iSCSI TCP transport module
> does appear to get loaded, but the iscsid and iscsi services don't get
> started.

Hmm... It looks like iSCSI is getting started correctly after all. It does
come up and brings up the block device. The problem is somewhere else. It
may be a good idea to not silence the iscsi startup, just for easier
overview of what happens and what doesn't (unless you particularly object to
a few green OKs among the blues).

Gordan
From: Gordan B. <go...@bo...> - 2007-10-15 13:50:53
>>> I have attached a tar ball of files that I changed (or added) in the
>>> standard setup to get iSCSI to work.
>>>
>>> Please let me know if I omitted anything.
>>>
>>> There is probably some pruning that could be done to this (I went for
>>> including full RPMs + configs), which was arguably sub-optimal - there
>>> is likely to be some docs/man pages included (not that anyone is likely
>>> to notice next to 70MB of drivers :^).
>>>
>>> I also hard-removed (C)LVM stuff - something that you probably won't
>>> want to merge into the main tree.
>>>
>>> Thanks for all the help over the last few days, and for accepting my
>>> patches. I really appreciate it. This is by far the most positive OSS
>>> experience I have had to date. :-)
>>>
>>> Gordan
>>
>> Attached you'll find the latest rpm with iscsi included.
>> The last thing you should need is either rootsource=iscsi as a boot
>> parameter or <rootsource name="iscsi"/> in com_info.
>> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
>> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>>
>> Let me know if it works.
>
> It appears to be broken. The new initrd fails to fire up iSCSI. Where my
> old hard-coded version would modprobe the required modules and start up
> iscsid and iscsi, the new version doesn't.
>
> I added <rootsource name="iscsi"/> to the com_info section on every node.
>
> Did I miss something?

I just added the rootsource=iscsi kernel boot option, and that doesn't fix
it. Carefully looking at the screen log, the iSCSI TCP transport module does
appear to get loaded, but the iscsid and iscsi services don't get started.

Gordan
From: Gordan B. <go...@bo...> - 2007-10-15 13:38:30
On Fri, 12 Oct 2007, Marc Grimme wrote:

> On Friday 12 October 2007 12:29:27 Gordan Bobic wrote:
>> I have attached a tar ball of files that I changed (or added) in the
>> standard setup to get iSCSI to work.
>>
>> Please let me know if I omitted anything.
>>
>> There is probably some pruning that could be done to this (I went for
>> including full RPMs + configs), which was arguably sub-optimal - there is
>> likely to be some docs/man pages included (not that anyone is likely to
>> notice next to 70MB of drivers :^).
>>
>> I also hard-removed (C)LVM stuff - something that you probably won't
>> want to merge into the main tree.
>>
>> Thanks for all the help over the last few days, and for accepting my
>> patches. I really appreciate it. This is by far the most positive OSS
>> experience I have had to date. :-)
>>
>> Gordan
>
> Attached you'll find the latest rpm with iscsi included.
> The last thing you should need is either rootsource=iscsi as a boot
> parameter or <rootsource name="iscsi"/> in com_info.
> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>
> Let me know if it works.

It appears to be broken. The new initrd fails to fire up iSCSI. Where my old
hard-coded version would modprobe the required modules and start up iscsid
and iscsi, the new version doesn't.

I added <rootsource name="iscsi"/> to the com_info section on every node.

Did I miss something? :-(

Gordan
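[Putting the thread's cluster.conf pieces together: the per-node com_info
block being discussed would look roughly like this. The node name and
nesting are illustrative, assembled from the <rootsource /> and
<rootvolume /> elements quoted in this thread; note that further up the
thread Gordan reports the kernel parameter rootsource=iscsi working where
this XML form did not.]

-->snip
<clusternode name="node1" nodeid="1">
        <com_info>
                <rootsource name="iscsi"/>
                <rootvolume name="/dev/sdb" mountopts="noatime,nodiratime"/>
        </com_info>
</clusternode>
-->snap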