From: <go...@bo...> - 2008-04-16 13:13:17
On Wed, 16 Apr 2008, Marc Grimme wrote:

>> Does anyone think that adding support for this would be a good idea? I'm
>> working with GlusterFS at the moment, so could try to add the relevant
>> init stuff when I've ironed things out a bit. Maybe as a contrib package,
>> like DRBD?
>
> After going roughly over the features and concepts of GlusterFS, I doubt it
> would be an easy task to build an open-sharedroot cluster with it, but why not.

It shouldn't be too different from the OSR NFS setup. There are two options:

1) diskless client
2) mirror client/server

In the diskless client case it would be pretty much the same as NFS. In the mirror client/server case it would be similar to a DRBD+GFS setup, only scalable to more than 2-3 nodes (IIRC, DRBD only supports up to 2-3 nodes at the moment). Each node would mount its local mirror as OSR (as it does with DRBD).

The upshot is that, as far as I can make out, split-brains are less of an issue in FS terms than with GFS - GlusterFS would sort that out, so in theory we could have an n-node cluster with a quorum of 1. The only potential issue with that would be migration of IPs - a split-brain would, in theory, cause an IP resource clash. But I think the scope for FS corruption would be removed. There might still be file clobbering on resync, but the FS certainly wouldn't get totally destroyed as it can with a split-brained GFS on shared SAN storage (DRBD+GFS has the same benefit: you get to keep at least one version of the FS after a split-brain).

> Still it sounds quite promising and if you like you are welcome to contribute.

OK, thanks. I just wanted to float the idea and see if there are any strong objections to it first. :-)

>> On a separate note, am I correct in presuming that the diet version of the
>> initrd with the kernel drivers pruned and additional package filtering
>> added as per the patch I sent a while back was not deemed a good idea?
>
> Thanks for reminding me. I forgot to answer, sorry.
>
> The idea itself is good. But originally, and by concept, the initrd is
> designed to be an initrd used for different hardware configurations.

The same initrd for multiple configurations? Why is this useful? Different configurations could also run different kernels, which would invalidate the shared initrd concept...

> That implies we need different kernel modules and tools on the same cluster.

Sure - but clusters like this, at least in my experience, tend to be homogeneous when there is a choice. The allowance the patch makes for this is that both the currently loaded modules and all the modules listed in /etc/modprobe.conf get included - just in case. So modprobe.conf could be (ab)used to load additional modules. But I accept this is a somewhat cringeworthy hack when used to force extra modules into the initrd.

> Say you would use a combination of virtualized and unvirtualized nodes in a
> cluster. As of now that is possible. Or just different servers. This would
> not be possible with your diet-patch, would it?

No, probably not - but then again, virtualized and non-virtualized nodes would be running different kernels (e.g. 2.6.18-53 physical and 2.6.18-53xen virtual), so the point is somewhat moot. You'd need different initrds anyway, and the different nodes would use different modprobe.conf files if their hardware is different. So the only extra requirement with my patch is that the initrd is built on the same node as the one that will be running the initrd image. In the case of virtual vs. non-virtual hardware (or just different kernel versions), it would still be a case of running mkinitrd for each kernel version with a different modprobe.conf file, as AFAIK this gets included in the initrd.

> I thought of using it as a special option to the mkinitrd (--diet or the like).
> Could you provide a patch for this?

That could be arranged. I think that's a reasonably good idea. But as I mentioned above, I'm not sure the full-fat initrd actually gains much in terms of node/hardware compatibility. I'll send the --diet optioned patch and leave the choice of whether --diet should be the default to you guys. :-)

Gordan
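As a rough illustration of the module selection described above (the currently loaded modules plus everything referenced in /etc/modprobe.conf), a sketch along the following lines would gather the list. It assumes the RHEL5-era layout (/proc/modules, a single /etc/modprobe.conf) and is not the actual comoonics-bootimage code from the patch:

    # Hypothetical sketch, not the real patch: list the module files a "diet"
    # initrd would carry for the currently running kernel.

    # 1) Modules currently loaded on this node.
    loaded=$(awk '{print $1}' /proc/modules)

    # 2) Modules named on alias lines in /etc/modprobe.conf ("just in case").
    aliased=$(awk '$1 == "alias" {print $3}' /etc/modprobe.conf 2>/dev/null)

    # De-duplicate and resolve each module name to its .ko path.
    for mod in $(printf '%s\n' $loaded $aliased | sort -u); do
        modinfo -F filename "$mod" 2>/dev/null
    done | sort -u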
From: Marc G. <gr...@at...> - 2008-04-16 12:47:36
Hi Gordan,

On Wednesday 16 April 2008 11:52:18 go...@bo... wrote:
> Hi,
>
> Does anyone think that adding support for this would be a good idea? I'm
> working with GlusterFS at the moment, so could try to add the relevant
> init stuff when I've ironed things out a bit. Maybe as a contrib package,
> like DRBD?

After going roughly over the features and concepts of GlusterFS, I doubt it would be an easy task to build an open-sharedroot cluster with it, but why not. Still, it sounds quite promising, and if you like you are welcome to contribute. We'll support you as best we can.

> On a separate note, am I correct in presuming that the diet version of the
> initrd with the kernel drivers pruned and additional package filtering
> added as per the patch I sent a while back was not deemed a good idea?

Thanks for reminding me. I forgot to answer, sorry.

The idea itself is good. But originally, and by concept, the initrd is designed to be an initrd used for different hardware configurations. That implies we need different kernel modules and tools on the same cluster. Say you would use a combination of virtualized and unvirtualized nodes in a cluster. As of now that is possible. Or just different servers. This would not be possible with your diet-patch, would it?

I thought of using it as a special option to the mkinitrd (--diet or the like). Could you provide a patch for this?

Thanks and regards,
Marc.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
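Wiring the pruning behind an opt-in switch, as suggested here, could look something like the fragment below. The option name --diet comes from this thread, but the surrounding variable names and messages are assumptions, not the real comoonics mkinitrd code:

    # Hypothetical option handling for an opt-in --diet mode.
    DIET=0                      # default stays the full-fat initrd
    ARGS=""
    for arg in "$@"; do
        case "$arg" in
            --diet) DIET=1 ;;              # prune to this node's modules
            *)      ARGS="$ARGS $arg" ;;
        esac
    done

    if [ "$DIET" -eq 1 ]; then
        echo "building diet initrd (loaded modules + /etc/modprobe.conf only)"
    else
        echo "building full initrd (all kernel modules for the target kernel)"
    fi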
From: <go...@bo...> - 2008-04-16 09:52:22
Hi,

Does anyone think that adding support for this would be a good idea? I'm working with GlusterFS at the moment, so could try to add the relevant init stuff when I've ironed things out a bit. Maybe as a contrib package, like DRBD?

On a separate note, am I correct in presuming that the diet version of the initrd with the kernel drivers pruned and additional package filtering added as per the patch I sent a while back was not deemed a good idea?

Gordan
From: Gordan B. <go...@bo...> - 2008-03-21 01:41:33
Hi, guys.

I have implemented some changes that I think others may benefit from, especially those running the initrd in RAM rather than disk-backed. They reduce the size of the initrd.

1) Only kernel modules that are either
   - in /etc/modprobe.conf, or
   - currently loaded
get copied into the initrd. I think this is probably a reasonable sub-selection of kernel modules to get the system up and running, especially since everything required to bootstrap the OSR will be loaded by the time an OSR volume is prepared. This gets my compressed initrd image from 50MB down to 40MB (uncompressed ~130MB).

2) Added a feature to the rpms.initrd.d parser. Two filter parameters are now available. The new format is:

   package +filter -filter

As such, it is fully backwards compatible with the existing rpm list files. If only -filter is used, +filter should be "." (a single dot). The reason for this is that grep's regex engine doesn't appear to support negating an expression with !. The list is now filtered through:

   grep -e "+filter" | grep -v "-filter"

The -filter defaults to ^$ if it is not set, so it doesn't affect operation when it is not specified. This is useful because it means additional control can be applied to remove things like /usr/share/doc or /usr/share/man from the initrd, which otherwise wouldn't be possible.

The net benefit is about 20-25MB of RAM (or disk space) saved and a proportionately reduced initrd build time.

Attached are patches for:

/opt/atix/comoonics-bootimage/create-gfs-initrd-generic.sh
/opt/atix/comoonics-bootimage/boot-scripts/etc/chroot-lib.sh

The patches are against the comoonics-bootimage-1.3-28 release in the yum repository. These modifications have passed my "it works for me" testing (yes, I did remember to wipe out /var/comoonics/chroot before I tried the new initrd), but peer review would be good. :-)

Is there any point in forwarding my modified /etc/comoonics/rpms.initrd.d/* files? I didn't think so, but let me know if anyone disagrees.

I hope this is useful. Thanks for all your support, and for making this great product. :-)

Gordan
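To make the new filter semantics concrete: a minimal stand-in for the parser change, assuming the file list for each package comes from rpm -ql (the real script may obtain it differently), would be along these lines. Function and variable names are illustrative, not from the actual patch:

    # Illustrative only; apply "package +filter -filter" semantics to a file list.
    filter_package_files() {
        package="$1"
        include="${2:-.}"      # +filter: "." matches everything by default
        exclude="${3:-^\$}"    # -filter: "^$" matches nothing in a file list
        rpm -ql "$package" | grep -e "$include" | grep -v "$exclude"
    }

    # Example: keep everything from a package except its documentation.
    filter_package_files comoonics-bootimage "." "^/usr/share/doc/"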
From: <go...@bo...> - 2008-03-20 15:37:12
Then you clearly haven't looked very hard. Looks very much relevant and on-topic for all the lists in the headers to me.

Gordan

On Thu, 20 Mar 2008, Ken Barber wrote:
> Looks like spam to me.
>
> On Thursday 20 March 2008 02:30:55 am Marc Grimme wrote:
>> Hello,
>> we are very happy to announce the availability of the first official
>> release candidate of the com.oonics open shared root cluster installation
>> DVD (RC3).
> [snip]
From: Ken B. <kb...@bm...> - 2008-03-20 15:23:47
Looks like spam to me.

On Thursday 20 March 2008 02:30:55 am Marc Grimme wrote:
> Hello,
> we are very happy to announce the availability of the first official
> release candidate of the com.oonics open shared root cluster installation
> DVD (RC3).
[snip]

********************************************
This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.
********************************************
From: Marc G. <gr...@at...> - 2008-03-20 07:31:02
Hello,

we are very happy to announce the availability of the first official release candidate of the com.oonics open shared root cluster installation DVD (RC3).

The com.oonics open shared root cluster installation DVD allows the installation of a single-node open shared root cluster using anaconda, the well-known installation software provided by Red Hat. After the installation, the open shared root cluster can easily be scaled up to more than a hundred cluster nodes.

You can now download the open shared root installation DVD from www.open-sharedroot.org. We are very interested in feedback. Please either file a bug or feature request, or post to the mailing list (see www.open-sharedroot.org).

More details can be found here:
http://www.open-sharedroot.org/news-archive/availability-of-first-beta-of-the-com-oonics-version-of-anaconda

Note: the download ISOs are based on CentOS 5.1!

Have fun testing it and let us know the outcome.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: Mark H. <hla...@at...> - 2008-03-11 14:38:29
Hi Gordan,

the productive channel contains the package set that successfully passed our QA process. Preview contains all packages, including pre-releases and unstable versions. The productive package set is a subset of preview, and all packages will also remain in the preview channel.

The release process takes all stable versions from preview into an rc channel. After a successfully completed QA test, all packages from rc are released as a new productive channel.

The current productive release of comoonics for RHEL5 is 4.3; comoonics for RHEL4 is 4.3.1.

Best Regards,

Mark

On Tuesday 11 March 2008 15:19:58 go...@bo... wrote:
> Mark,
>
> Are these the production/stable versions of the preview repositories?
> If so, will the preview repositories remain for unstable/pre-release
> versions?
>
> Gordan
>
> On Tue, 11 Mar 2008, Mark Hlawatschek wrote:
>> Hi !
>>
>> The comoonics productive channel for RHEL5 and CentOS5 has been released!
>> [snip - repository configuration as in the announcement below]

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/
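For comparison with the productive channel announced further down, a preview channel repo file would presumably look like the following. The repository id comoonics-preview and the preview baseurl are taken from the yum error output later in this archive; the name and gpgkey lines are assumptions, so verify the paths before use:

    # /etc/yum.repos.d/comoonics-preview.repo -- hypothetical, verify before use
    [comoonics-preview]
    name=Preview packages for the comoonics shared root cluster
    baseurl=http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/
    enabled=1
    gpgcheck=1
    gpgkey=http://download.atix.de/yum/comoonics/comoonics-RPM-GPG.key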
From: <go...@bo...> - 2008-03-11 14:20:06
Mark,

Are these the production/stable versions of the preview repositories? If so, will the preview repositories remain for unstable/pre-release versions?

Gordan

On Tue, 11 Mar 2008, Mark Hlawatschek wrote:
> Hi !
>
> The comoonics productive channel for RHEL5 and CentOS5 has been released!
> [snip - repository configuration as in the announcement below]
From: Mark H. <hla...@at...> - 2008-03-11 14:07:48
Hi !

The comoonics productive channel for RHEL5 and CentOS5 has been released!

To use the comoonics productive channel for RHEL5, create a file called /etc/yum.repos.d/comoonics.repo and add the following content:

[comoonics]
name=Packages for the comoonics shared root cluster
baseurl=http://download.atix.de/yum/comoonics/redhat-el5/productive/noarch/
enabled=1
gpgcheck=1
gpgkey=http://download.atix.de/yum/comoonics/comoonics-RPM-GPG.key

[comoonics-$arch]
name=Packages for the comoonics shared root cluster
baseurl=http://download.atix.de/yum/comoonics/redhat-el5/productive/$arch
enabled=1
gpgcheck=1
gpgkey=http://download.atix.de/yum/comoonics/comoonics-RPM-GPG.key

Have fun !

Mark

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/
http://www.open-sharedroot.org/
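Once the repo file is in place, picking up the channel is the usual yum workflow; the package name below is only an example (comoonics-bootimage is the one discussed elsewhere in this archive):

    yum clean metadata
    yum install comoonics-bootimage    # or: yum update, to refresh an existing install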
From: <go...@bo...> - 2008-03-04 14:06:34
On Tue, 4 Mar 2008, Marc Grimme wrote:

>>>> Does this have to be added to any lists (rpms or files) in
>>>> /etc/comoonics/bootimage before it'll get bundled/started in the initrd?
>>>>
>>>> I have it in my cluster.conf, but there doesn't appear to be anything
>>>> listening on port 12242. :-(
>>>
>>> What does /etc/init.d/fenceacksv stop and then /etc/init.d/fenceacksv start
>>> tell you in the syslog, and do you see a service like this:
>>>
>>> [marc@realserver9 ~]$ ps ax | grep fenceack
>>> 12266 ?  S  0:36 /usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename axqa01_1 /etc/cluster/cluster.conf
>>> 12267 ?  S  0:08 /usr/bin/logger -t fenceacksv
>>> 13119 pts/2  S+ 0:00 grep fenceack
>>
>> In a word - no.
>>
>> # /etc/init.d/fenceacksv stop
>> Stopping fenceacksv: [ OK ]
>> # /etc/init.d/fenceacksv start
>> Starting fenceacksv: [ OK ]
>> # ps axww | grep fenceack
>> 28772 pts/0  S+ 0:00 grep fenceack
>>
>> Nothing about this gets logged in /var/log/messages.
>
> Strange. Try starting it manually:
> /usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename <nodename> /etc/cluster/cluster.conf
>
> where <nodename> is the cluster node name you are starting it on.
>
> What does it say?

That seems to work:

# /usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename sentinel1c /etc/cluster/cluster.conf
DEBUG:comoonics.bootimage.fenceacksv:Parsing document /etc/cluster/cluster.conf
DEBUG:comoonics.bootimage.fenceacksv:Nodename: sentinel1c, path: /cluster/clusternodes/clusternode[@name="sentinel1c"]/com_info/fenceackserver
DEBUG:comoonics.bootimage.fenceacksv:Starting nonssl server

So at a guess, it may be the init script choking on something in my setup. I'll take a closer look at it.

Gordan
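Since the daemon starts cleanly by hand, one generic way to find where the init script gives up is to trace it and then check syslog (the init script pipes the daemon's output through logger -t fenceacksv, per the ps listing quoted above). This is ordinary shell debugging, not an open-sharedroot-specific procedure:

    # Trace the init script to see which check or command fails silently.
    sh -x /etc/init.d/fenceacksv start 2>&1 | tail -40

    # Then look for anything the daemon (or logger) wrote to syslog.
    grep fenceacksv /var/log/messages | tail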
From: Marc G. <gr...@at...> - 2008-03-04 14:03:10
On Tuesday 04 March 2008 14:49:11 go...@bo... wrote:
> On Tue, 4 Mar 2008, Marc Grimme wrote:
>>> Does this have to be added to any lists (rpms or files) in
>>> /etc/comoonics/bootimage before it'll get bundled/started in the initrd?
>>>
>>> I have it in my cluster.conf, but there doesn't appear to be anything
>>> listening on port 12242. :-(
>>
>> What does /etc/init.d/fenceacksv stop and then /etc/init.d/fenceacksv start
>> tell you in the syslog, and do you see a service like this:
>>
>> [marc@realserver9 ~]$ ps ax | grep fenceack
>> 12266 ?  S  0:36 /usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename axqa01_1 /etc/cluster/cluster.conf
>> 12267 ?  S  0:08 /usr/bin/logger -t fenceacksv
>> 13119 pts/2  S+ 0:00 grep fenceack
>
> In a word - no.
>
> # /etc/init.d/fenceacksv stop
> Stopping fenceacksv: [ OK ]
> # /etc/init.d/fenceacksv start
> Starting fenceacksv: [ OK ]
> # ps axww | grep fenceack
> 28772 pts/0  S+ 0:00 grep fenceack
>
> Nothing about this gets logged in /var/log/messages.

Strange. Try starting it manually:

/usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename <nodename> /etc/cluster/cluster.conf

where <nodename> is the cluster node name you are starting it on.

What does it say?

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: <go...@bo...> - 2008-03-04 13:49:14
On Tue, 4 Mar 2008, Marc Grimme wrote:

>> Does this have to be added to any lists (rpms or files) in
>> /etc/comoonics/bootimage before it'll get bundled/started in the initrd?
>>
>> I have it in my cluster.conf, but there doesn't appear to be anything
>> listening on port 12242. :-(
>
> What does /etc/init.d/fenceacksv stop and then /etc/init.d/fenceacksv start
> tell you in the syslog, and do you see a service like this:
>
> [marc@realserver9 ~]$ ps ax | grep fenceack
> 12266 ?  S  0:36 /usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename axqa01_1 /etc/cluster/cluster.conf
> 12267 ?  S  0:08 /usr/bin/logger -t fenceacksv
> 13119 pts/2  S+ 0:00 grep fenceack

In a word - no.

# /etc/init.d/fenceacksv stop
Stopping fenceacksv: [ OK ]
# /etc/init.d/fenceacksv start
Starting fenceacksv: [ OK ]
# ps axww | grep fenceack
28772 pts/0  S+ 0:00 grep fenceack

Nothing about this gets logged in /var/log/messages.

Gordan
From: Marc G. <gr...@at...> - 2008-03-04 13:33:01
On Tuesday 04 March 2008 14:22:22 go...@bo... wrote:
> Does this have to be added to any lists (rpms or files) in
> /etc/comoonics/bootimage before it'll get bundled/started in the initrd?
>
> I have it in my cluster.conf, but there doesn't appear to be anything
> listening on port 12242. :-(
>
> Thanks.
>
> Gordan

What does /etc/init.d/fenceacksv stop and then /etc/init.d/fenceacksv start tell you in the syslog, and do you see a service like this:

[marc@realserver9 ~]$ ps ax | grep fenceack
12266 ?  S  0:36 /usr/bin/python /opt/atix/comoonics-fenceacksv/fenceacksv --xml --xml-clusterconf --xml-novalidate --debug --nodename axqa01_1 /etc/cluster/cluster.conf
12267 ?  S  0:08 /usr/bin/logger -t fenceacksv
13119 pts/2  S+ 0:00 grep fenceack

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: <go...@bo...> - 2008-03-04 13:22:28
Does this have to be added to any lists (rpms or files) in /etc/comoonics/bootimage before it'll get bundled/started in the initrd?

I have it in my cluster.conf, but there doesn't appear to be anything listening on port 12242. :-(

Thanks.

Gordan
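If the fence-ack server does need to be pulled into the initrd explicitly, the rpm list files (e.g. under /etc/comoonics/rpms.initrd.d/, per the patch mail further up this archive) take one package name per line, optionally followed by +/- filters. The entry below is a guess at the relevant package name, based on its install path under /opt/atix/comoonics-fenceacksv, not a confirmed requirement:

    comoonics-fenceacksv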
From: <go...@bo...> - 2008-03-04 11:42:15
Is there a configuration parameter in cluster.conf to specify the MTU of the cluster interface in the <com_info><eth> section? I did a quick grep -ir mtu on /opt/atix but didn't find anything.

It would be a useful parameter to have, since SAN/NAS throughput benefits quite a lot from larger frames.

Gordan
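The thread gives no sign of an MTU attribute for the <eth> element, so as a generic stop-gap on the booted system (outside the initrd) the MTU can be raised by hand or from an init script; the interface name and value below are examples only, not an open-sharedroot feature:

    ip link set dev eth1 mtu 9000       # or: ifconfig eth1 mtu 9000
    # To make it persistent on RHEL/CentOS, add MTU=9000 to
    # /etc/sysconfig/network-scripts/ifcfg-eth1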
From: <go...@bo...> - 2008-02-12 18:07:53
On Tue, 12 Feb 2008, Marc Grimme wrote:

>>>>> Simplified drbd-lib.sh and fixed cl_checknodes so it identifies quorum
>>>>> correctly in a 2-node cluster.
>>>>
>>>> It's added; find the files attached. The 1.3-28 is also now in the preview
>>>> channel for rhel5.
>>>
>>> # yum clean metadata
>>> # yum update
>>>
>>> results in:
>>>
>>> http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
>>> Trying other mirror.
>>> Error: failure: repodata/primary.xml.gz from comoonics-preview: [Errno 256] No more mirrors to try.
>>
>> Digging a bit further, the timestamp of the generated checksums seems to
>> be 1201718041, which is approximately 30/01/2008 6:34pm GMT. Looks like
>> repomd.xml hasn't been re-generated when a package update was added to
>> the repository. :-(
>
> Try again, that should have been fixed ;-)

Still the same. I flushed the entire yum directory and all the web caches on my network, and it's still coming up with the same checksums with the same timestamps. :-(

Gordan
From: Marc G. <gr...@at...> - 2008-02-12 17:38:06
On Tuesday 12 February 2008 18:25:41 go...@bo... wrote:
> On Tue, 12 Feb 2008, go...@bo... wrote:
>>>> Simplified drbd-lib.sh and fixed cl_checknodes so it identifies quorum
>>>> correctly in a 2-node cluster.
>>>
>>> It's added; find the files attached. The 1.3-28 is also now in the preview
>>> channel for rhel5.
>>
>> # yum clean metadata
>> # yum update
>>
>> results in:
>>
>> http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
>> Trying other mirror.
>> Error: failure: repodata/primary.xml.gz from comoonics-preview: [Errno 256] No more mirrors to try.
>
> Digging a bit further, the timestamp of the generated checksums seems to
> be 1201718041, which is approximately 30/01/2008 6:34pm GMT. Looks like
> repomd.xml hasn't been re-generated when a package update was added to
> the repository. :-(

Try again, that should have been fixed ;-)

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/
http://www.open-sharedroot.org/
From: <go...@bo...> - 2008-02-12 17:25:46
On Tue, 12 Feb 2008, go...@bo... wrote:

>>> Simplified drbd-lib.sh and fixed cl_checknodes so it identifies quorum
>>> correctly in a 2-node cluster.
>>
>> It's added; find the files attached. The 1.3-28 is also now in the preview
>> channel for rhel5.
>
> # yum clean metadata
> # yum update
>
> results in:
>
> http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
> Trying other mirror.
> Error: failure: repodata/primary.xml.gz from comoonics-preview: [Errno 256] No more mirrors to try.

Digging a bit further, the timestamp of the generated checksums seems to be 1201718041, which is approximately 30/01/2008 6:34pm GMT. Looks like repomd.xml hasn't been re-generated when a package update was added to the repository. :-(

Gordan
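A repomd.xml that is older than the packages it describes is the classic symptom of copying RPMs into a repository without regenerating the metadata. Assuming the channel is built with the standard createrepo tool (an assumption; the actual ATIX release tooling isn't described here), the server-side fix would be along these lines:

    # On the repository server, regenerate the metadata in the channel directory.
    cd /path/to/redhat-el5/preview/noarch      # path is illustrative
    createrepo .
    # Clients then refresh with: yum clean metadata && yum update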
From: <go...@bo...> - 2008-02-12 12:03:20
On Tue, 12 Feb 2008, Marc Grimme wrote:

>>> How about your iscsi-config?
>>
>> That was all good and working. I no longer have access to the said cluster
>> because the system was deployed for the client, but there have been no
>> reported problems. The last problem was with the node(s) coming up too
>> quickly because of ARP propagation on managed switches, but that was
>> solved. IIRC you put a feature in to make the post-ifcfg initialization
>> delay carry over to the initroot, to replace my ugly hack of putting
>> "sleep 60" in iscsi-lib.sh.
>>
>> I _think_ I might still have a copy of the cluster.conf file from that
>> setup, if that's of interest.
>
> yes it is ;-) .

Attached. :)

Gordan
From: Marc G. <gr...@at...> - 2008-02-12 11:47:32
On Tuesday 12 February 2008 12:23:30 go...@bo... wrote:
> On Tue, 12 Feb 2008, Marc Grimme wrote:
>> How about your iscsi-config?
>
> That was all good and working. I no longer have access to the said cluster
> because the system was deployed for the client, but there have been no
> reported problems. The last problem was with the node(s) coming up too
> quickly because of ARP propagation on managed switches, but that was
> solved. IIRC you put a feature in to make the post-ifcfg initialization
> delay carry over to the initroot, to replace my ugly hack of putting
> "sleep 60" in iscsi-lib.sh.
>
> I _think_ I might still have a copy of the cluster.conf file from that
> setup, if that's of interest.

Yes it is ;-)

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/
http://www.open-sharedroot.org/
From: <go...@bo...> - 2008-02-12 11:27:17
On Tue, 12 Feb 2008, Marc Grimme wrote:

>>>> I've been trying to track down the issue of a cluster with:
>>>>   two_node="1"
>>>>   expected_votes="1"
>>>> not coming up as quorate with just one node, and as far as I can tell,
>>>> it is because /usr/bin/cl_checknodes is returning the wrong value.
>>>>
>>>> I think this bit around line 72 is causing the problem:
>>>>
>>>>   if len(nodeelements) == 1:
>>>>       quorum=1
>>>>   else:
>>>>       quorum=len(nodeelements)/2+1
>>>>
>>>> Should that not instead be something like:
>>>>
>>>>   if len(nodeelements) <= 2:
>>>>       quorum=1
>>>>   else:
>>>>       quorum=len(nodeelements)/2+1
>>>>
>>>> Please advise.
>>>
>>> Basically that wasn't the idea behind it. We wanted to prevent a cluster
>>> with more than one node from coming up with a split-brain when both nodes
>>> are powered up initially. It's basically a way to wait until both nodes
>>> are up and running. We didn't want to risk double mounts for users who
>>> aren't very careful about booting machines, or first-level supporters
>>> who don't bother much about data consistency.
>>>
>>> So for you I would just add the boot option quorumack if you don't have
>>> anybody else who could reboot/fence your cluster nodes without much
>>> experience.
>>
>> OK, that makes sense. Where should the "quorumack" option be? In
>> cluster.conf? If so, which tag/section? Or is it a kernel boot parameter
>> option?
>
> It's only a bootparm ;-).

So, just something like this in grub.conf is sufficient?

kernel /2.6.18-53.1.6.el5/vmlinuz ro quorumack

Gordan
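For reference, the cluster.conf fragment under discussion is the standard Red Hat Cluster Suite two-node quorum setting (note the attribute is spelled expected_votes). The line below is a generic example rather than Gordan's actual configuration, and the quorumack option is separate from it - per Marc's reply, it goes on the kernel command line only:

    <cman two_node="1" expected_votes="1"/>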
From: <go...@bo...> - 2008-02-12 11:23:37
On Tue, 12 Feb 2008, Marc Grimme wrote:

> How about your iscsi-config?

That was all good and working. I no longer have access to the said cluster because the system was deployed for the client, but there have been no reported problems. The last problem was with the node(s) coming up too quickly because of ARP propagation on managed switches, but that was solved. IIRC you put a feature in to make the post-ifcfg initialization delay carry over to the initroot, to replace my ugly hack of putting "sleep 60" in iscsi-lib.sh.

I _think_ I might still have a copy of the cluster.conf file from that setup, if that's of interest.

Gordan
From: Marc G. <gr...@at...> - 2008-02-12 11:13:57
On Tuesday 12 February 2008 12:07:11 go...@bo... wrote:
> On Tue, 12 Feb 2008, Marc Grimme wrote:
>>> I've been trying to track down the issue of a cluster with:
>>>   two_node="1"
>>>   expected_votes="1"
>>> not coming up as quorate with just one node, and as far as I can tell,
>>> it is because /usr/bin/cl_checknodes is returning the wrong value.
>>>
>>> I think this bit around line 72 is causing the problem:
>>>
>>>   if len(nodeelements) == 1:
>>>       quorum=1
>>>   else:
>>>       quorum=len(nodeelements)/2+1
>>>
>>> Should that not instead be something like:
>>>
>>>   if len(nodeelements) <= 2:
>>>       quorum=1
>>>   else:
>>>       quorum=len(nodeelements)/2+1
>>>
>>> Please advise.
>>
>> Basically that wasn't the idea behind it. We wanted to prevent a cluster
>> with more than one node from coming up with a split-brain when both nodes
>> are powered up initially. It's basically a way to wait until both nodes
>> are up and running. We didn't want to risk double mounts for users who
>> aren't very careful about booting machines, or first-level supporters
>> who don't bother much about data consistency.
>>
>> So for you I would just add the boot option quorumack if you don't have
>> anybody else who could reboot/fence your cluster nodes without much
>> experience.
>
> OK, that makes sense. Where should the "quorumack" option be? In
> cluster.conf? If so, which tag/section? Or is it a kernel boot parameter
> option?

It's only a bootparm ;-).

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/
http://www.open-sharedroot.org/
From: Marc G. <gr...@at...> - 2008-02-12 11:13:34
On Tuesday 12 February 2008 12:08:17 go...@bo... wrote:
> On Tue, 12 Feb 2008, Marc Grimme wrote:
>> On Monday 11 February 2008 22:26:18 Gordan Bobic wrote:
>>> Simplified drbd-lib.sh and fixed cl_checknodes so it identifies quorum
>>> correctly in a 2-node cluster.
>>
>> It's added; find the files attached. The 1.3-28 is also now in the preview
>> channel for rhel5.
>
> Great, thanks. yum update didn't find it, but I'll try again in a minute.
>
> Gordan

Here you'll find your cluster.conf:
http://www.open-sharedroot.org/faq/administrators-handbook/cluster-system-administration/example-configurations/cluster-configuration-for-drbd-based-cluster/

How about your iscsi-config?

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/
http://www.open-sharedroot.org/