From: Gordan B. <go...@bo...> - 2009-07-14 15:34:08
On Tue, 14 Jul 2009 17:19:02 +0200, Marc Grimme <gr...@at...> wrote:
> Strange one. I don't know why that happens (I need to check this).
> Nevertheless I've done a resync. You might try again.

Still seeing the same problem. :(

primary.xml.gz                              | 1.1 kB  00:00
http://download.atix.de/yum/comoonics/redhat-el5/preview/x86_64/repodata/primary.xml.gz: [Errno -3] Error performing checksum
Trying other mirror.
Error: failure: repodata/primary.xml.gz from comoonics-preview-x86_64: [Errno 256] No more mirrors to try.

> I didn't have time to merge your libraries in yet. But hope to have this
> done this week.

No problem. Part of the GlusterFS patch is just an ugly bodge to cover the fact that fuse returns _before_ the file system is actually ready, but I decided to add it anyway because it doesn't look like the upstream fix is going to be arriving any time soon.

The other patch mostly compensates for the bug/feature that makes mkinitrd put the local version of files into the initrd rather than all the /cluster/cdsl versions. This meant that all nodes tried to connect arrays with the same IDs, which didn't work. It also means that each node has to have a different initrd, an issue that is affecting a few other things I'm trying to do as well (specifically, glusterfs-related configs, but it's probably only a matter of time before something else trips over it).

On a separate note - where can I get the sources for the modified killall? I want to look into adding a -d option to it that dumps to stderr the name of each process it is killing, so that I can at least get a reasonable idea of what gets killed that downs the rootfs with Gluster.

Thanks.

Gordan
From: Gordan B. <go...@bo...> - 2009-07-14 15:02:59
comoonics-preview-x86_64                    | 1.7 kB  00:00
primary.xml.gz                              | 1.1 kB  00:00
http://download.atix.de/yum/comoonics/redhat-el5/preview/x86_64/repodata/primary.xml.gz: [Errno -3] Error performing checksum
Trying other mirror.
comoonics-preview                           | 1.7 kB  00:00
primary.xml.gz                              |  36 kB  00:00
http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -3] Error performing checksum
Trying other mirror.
From: Gordan B. <go...@bo...> - 2009-07-10 00:24:15
This function needs to be added; the last version I submitted didn't include it. It still doesn't solve the shutdown problem, though.

#****f* glusterfs-lib.sh/glusterfs_get_userspace_procs
# NAME
#   glusterfs_get_userspace_procs
# SYNOPSIS
#   function glusterfs_get_userspace_procs(cluster_conf, nodename)
# DESCRIPTION
#   gets userspace programs that are to be running dependent on rootfs
# SOURCE
function glusterfs_get_userspace_procs {
    local clutype=$1
    local rootfs=$2

    echo -e "glusterfs \n\
glusterfsd"
}
#******** glusterfs_get_userspace_procs
From: Gordan B. <go...@bo...> - 2009-07-10 00:22:34
When building the initrd, host-specific files don't appear to be put into the initrd as host-specific. Instead, whatever version is used for the host that builds the initrd ends up being in the initrd. This was why I re-worked the mdadm part - /etc/mdadm.conf wasn't being included in a host-specific way, so all hosts other than the initrd-build host failed to mount the RAID volume.

Is there a fix or a workaround available for this? I can build the initrd separately on each node, but that's a bit lame and goes against "how it's supposed to work".

Gordan
From: Gordan B. <go...@bo...> - 2009-07-08 21:27:19
Hi,

Attached are:

My latest clusterfs-lib.sh, which includes a temporary bodge for a problem with glusterfs that makes operations on the FS fail for a few seconds after mounting, because fuse takes a little while to settle. The bodge is to include "sleep 5; ls -la <mount-point>" to make sure that the real root is ready by the time we mount cdsl.local. It's a patched version of the latest package that was in preview (I haven't seen any updates to preview in at least a few weeks).

Also attached is hardware-lib.sh, which takes a different approach to the mdadm stuff (this is, AFAIK, only really useful for gluster at the moment). Instead of:

if [ -f /etc/mdadm.conf ]

it checks for

if [ -x /sbin/mdadm ]

because I've been having problems with mdadm.conf contents - it looks like despite them being cdsl-ed out to cdsl.local, the node that builds the initrd hard-puts its own mdadm.conf into the initrd. Anyway, instead of looking for a pre-existing mdadm.conf, the patch makes a new one:

mdadm --examine --scan > /etc/mdadm.conf

This file should now be removed from the package: /etc/comoonics/bootimage/files.initrd.d/mdadm.list (or at least be empty), since it only contained /etc/mdadm.conf.

Thanks.

Gordan
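The mdadm change described above amounts to something like the following (a sketch only - the function name and surrounding context are illustrative, not the actual hardware-lib.sh code):

# Sketch of the described change; hardware_setup_mdadm is a made-up
# name for illustration.
function hardware_setup_mdadm {
    # Old check - depends on a host-specific config file, which the
    # initrd build flattens to the build host's copy:
    #   if [ -f /etc/mdadm.conf ]; then ... fi
    # New check - key off the mdadm binary instead, and regenerate the
    # config from the arrays this node can actually see:
    if [ -x /sbin/mdadm ]; then
        /sbin/mdadm --examine --scan > /etc/mdadm.conf
    fi
}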
From: Gordan B. <go...@bo...> - 2009-07-08 21:27:11
Ooops, forgot to actually attach the added files. *headdesk* Here they are.

Gordan Bobic wrote:
> Hi,
>
> Attached are:
>
> My latest clusterfs-lib.sh that includes a temporary bodge for a problem
> with glusterfs that makes operations on the FS fail for a few seconds
> after mounting because fuse takes a little while to settle. So the bodge
> is to include "sleep 5; ls -la <mount-point>" to make sure that the real
> root is ready by the time we mount cdsl.local. It's a patched version
> of the latest package that was in preview (I haven't seen any updates to
> preview in at least a few weeks).
>
> Also attached is hardware-lib.sh that has a different approach to mdadm
> stuff (this is, AFAIK, only really useful for gluster at the moment).
> Instead of:
>
> if [ -f /etc/mdadm.conf ]
>
> it checks for
>
> if [ -x /sbin/mdadm ]
>
> because I've been having problems with mdadm.conf contents - it looks
> like despite them being cdsl-ed out to cdsl.local, the node that builds
> the initrd hard-puts its own mdadm.conf into the initrd.
> Anyway, instead of looking for a pre-existing mdadm.conf, the patch
> makes a new one:
>
> mdadm --examine --scan > /etc/mdadm.conf
>
> This file should now be removed from the package:
> /etc/comoonics/bootimage/files.initrd.d/mdadm.list
> (or at least be empty), since it only contained /etc/mdadm.conf
>
> Thanks.
>
> Gordan
From: Marc G. <gr...@at...> - 2009-07-03 15:01:14
On Wednesday 01 July 2009 13:47:28 Gordan Bobic wrote:
> On Wed, 1 Jul 2009 13:07:02 +0200, Marc Grimme <gr...@at...> wrote:
> > Hi Gordan,
> > sorry for taking that long.
>
> No problem. This particular thing is only an issue at shutdown and I
> don't down my servers very often. And even then it's not a problem with
> functioning fencing devices. ;)

But it should work though.

> > > What is the difference between these two files? I noticed that
> > > /etc/xkillallprocs got clobbered after a reboot, and the two lines I
> > > added to it (glusterfs and glusterfsd) got removed. On shutdown, with
> > > the file
> >
> > Yes they got removed. Basically they should be built automatically.
> > The procs are got from a function called {rootfs}_get_userspace_procs.
> > In your case it should be glusterfs_get_userspace_procs.
>
> Aha! That's what I'm missing! Thank you!

Let me know if it works.

> > > edited to add those two, shutdown with glusterfs still locks up
> > > immediately after "sending all processes the TERM signal". Any ideas
> > > on how to debug this further? My gut feeling is that glusterfs ends
> > > up getting killed and the machine locks up because the rootfs went
> > > away, but it's quite hard to investigate a system in such a hung
> > > state.
> >
> > Yes. It is. I always add /bin/bash(s) at every step in the relevant
> > initscripts. But I would say if you get that xkillallprocs right it
> > should work.
>
> I was thinking about something similar, but with double-wrapping init so
> that there is an init for the base root that can run gettys, and have a
> base root shell available to investigate things when they get going. It
> was sufficiently complicated to implement to deter me, at least for now,
> though. The bash-at-every-line idea has more short-term merit. :)

Yes, I don't like it either.

> > You also need the /usr/comoonics/sbin/killall binary which does not
> > kill _ALL_ userprocesses but can exclude the ones in i.e.
> > /etc/xkillallprocs.
>
> Last I checked, that was in the halt patch that gets applied
> automatically. Has that changed recently?

No, it is still in SysVinit-comoonics, found in the comoonics repo.

> > For a little background see:
> > https://bugzilla.redhat.com/show_bug.cgi?id=496843
> > https://bugzilla.redhat.com/show_bug.cgi?id=496854
> > https://bugzilla.redhat.com/show_bug.cgi?id=496857
> > https://bugzilla.redhat.com/show_bug.cgi?id=496861
>
> Indeed, I'm aware of the background. I was just failing to figure out
> where the exclusion list gets set. Having said that, if I manually
> modify /etc/xkillallprocs, should that not be honoured at least in the
> next shutdown? I've found that the shutdown hangs even when I add
> glusterfs processes to it.

As I said, you need /usr/comoonics/sbin/killall5 for it. This allows a killall5 -x <process> + init u. We are trying to get this upstream, but until now only init u got accepted.

Marc.

> Thanks.
>
> Gordan

--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
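To make the exclusion mechanism concrete, shutdown with the patched killall5 boils down to something like this (a sketch based on the description above, not the actual halt script from SysVinit-comoonics; only the -x option is comoonics-specific, the rest is standard SysVinit usage):

# Build one -x argument per process named in /etc/xkillallprocs
# (one process name per line), then signal everything else.
XOPTS=""
while read proc; do
    XOPTS="$XOPTS -x $proc"
done < /etc/xkillallprocs
/usr/comoonics/sbin/killall5 -15 $XOPTS   # TERM all but the excluded procs
sleep 2
/usr/comoonics/sbin/killall5 -9 $XOPTS    # then KILL any stragglers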
From: gordan <go...@bo...> - 2009-07-03 14:39:48
How about adding a debug (-d) switch to killall that makes it report the name of each process it is killing?

Gordan

On Wed, 1 Jul 2009, Gordan Bobic wrote:
> On Wed, 1 Jul 2009 13:07:02 +0200, Marc Grimme <gr...@at...> wrote:
> > Hi Gordan,
> > sorry for taking that long.
>
> No problem. This particular thing is only an issue at shutdown and I
> don't down my servers very often. And even then it's not a problem with
> functioning fencing devices. ;)
>
> > > What is the difference between these two files? I noticed that
> > > /etc/xkillallprocs got clobbered after a reboot, and the two lines I
> > > added to it (glusterfs and glusterfsd) got removed. On shutdown, with
> > > the file
> >
> > Yes they got removed. Basically they should be built automatically.
> > The procs are got from a function called {rootfs}_get_userspace_procs.
> > In your case it should be glusterfs_get_userspace_procs.
>
> Aha! That's what I'm missing! Thank you!
>
> > > edited to add those two, shutdown with glusterfs still locks up
> > > immediately after "sending all processes the TERM signal". Any ideas
> > > on how to debug this further? My gut feeling is that glusterfs ends
> > > up getting killed and the machine locks up because the rootfs went
> > > away, but it's quite hard to investigate a system in such a hung
> > > state.
> >
> > Yes. It is. I always add /bin/bash(s) at every step in the relevant
> > initscripts. But I would say if you get that xkillallprocs right it
> > should work.
>
> I was thinking about something similar, but with double-wrapping init so
> that there is an init for the base root that can run gettys, and have a
> base root shell available to investigate things when they get going. It
> was sufficiently complicated to implement to deter me, at least for now,
> though. The bash-at-every-line idea has more short-term merit. :)
>
> > You also need the /usr/comoonics/sbin/killall binary which does not
> > kill _ALL_ userprocesses but can exclude the ones in i.e.
> > /etc/xkillallprocs.
>
> Last I checked, that was in the halt patch that gets applied
> automatically. Has that changed recently?
>
> > For a little background see:
> > https://bugzilla.redhat.com/show_bug.cgi?id=496843
> > https://bugzilla.redhat.com/show_bug.cgi?id=496854
> > https://bugzilla.redhat.com/show_bug.cgi?id=496857
> > https://bugzilla.redhat.com/show_bug.cgi?id=496861
>
> Indeed, I'm aware of the background. I was just failing to figure out
> where the exclusion list gets set. Having said that, if I manually
> modify /etc/xkillallprocs, should that not be honoured at least in the
> next shutdown? I've found that the shutdown hangs even when I add
> glusterfs processes to it.
>
> Thanks.
>
> Gordan
From: Gordan B. <go...@bo...> - 2009-07-03 12:31:24
On Wed, 1 Jul 2009 13:07:02 +0200, Marc Grimme <gr...@at...> wrote:
> Hi Gordan,
> sorry for taking that long.

No problem. This particular thing is only an issue at shutdown and I don't down my servers very often. And even then it's not a problem with functioning fencing devices. ;)

> > What is the difference between these two files? I noticed that
> > /etc/xkillallprocs got clobbered after a reboot, and the two lines I
> > added to it (glusterfs and glusterfsd) got removed. On shutdown, with
> > the file
>
> Yes they got removed. Basically they should be built automatically.
> The procs are got from a function called {rootfs}_get_userspace_procs.
> In your case it should be glusterfs_get_userspace_procs.

Aha! That's what I'm missing! Thank you!

> > edited to add those two, shutdown with glusterfs still locks up
> > immediately after "sending all processes the TERM signal". Any ideas on
> > how to debug this further? My gut feeling is that glusterfs ends up
> > getting killed and the machine locks up because the rootfs went away,
> > but it's quite hard to investigate a system in such a hung state.
>
> Yes. It is. I always add /bin/bash(s) at every step in the relevant
> initscripts. But I would say if you get that xkillallprocs right it
> should work.

I was thinking about something similar, but with double-wrapping init so that there is an init for the base root that can run gettys, and have a base root shell available to investigate things when they get going. It was sufficiently complicated to implement to deter me, at least for now, though. The bash-at-every-line idea has more short-term merit. :)

> You also need the /usr/comoonics/sbin/killall binary which does not kill
> _ALL_ userprocesses but can exclude the ones in i.e. /etc/xkillallprocs.

Last I checked, that was in the halt patch that gets applied automatically. Has that changed recently?

> For a little background see:
> https://bugzilla.redhat.com/show_bug.cgi?id=496843
> https://bugzilla.redhat.com/show_bug.cgi?id=496854
> https://bugzilla.redhat.com/show_bug.cgi?id=496857
> https://bugzilla.redhat.com/show_bug.cgi?id=496861

Indeed, I'm aware of the background. I was just failing to figure out where the exclusion list gets set. Having said that, if I manually modify /etc/xkillallprocs, should that not be honoured at least in the next shutdown? I've found that the shutdown hangs even when I add glusterfs processes to it.

Thanks.

Gordan
From: Gordan B. <go...@bo...> - 2009-07-01 14:47:03
The reason I ask is because I am currently running with preview packages from a couple of weeks ago. I'm debating whether to update to the production packages and, if I do so, what differences I may have to watch out for, since some of the stuff I use is a bit "experimental" (e.g. glusterfs root). :)

Gordan

On Wed, 1 Jul 2009 16:21:58 +0200, Mark Hlawatschek <hla...@at...> wrote:
> Hi Gordan,
>
> at the moment, the preview and the productive channels are quite
> similar. The productive channel will stay as it is until a new package
> set has been qa'ed. The preview channel is designed to hold the latest
> (not qa'ed) software versions.
>
> -Mark
>
> On Wednesday 01 July 2009 13:49:49 Gordan Bobic wrote:
> > Are there any major changes in this compared to recent preview
> > packages?
> >
> > On Wed, 1 Jul 2009 10:13:43 +0200, Mark Hlawatschek <hla...@at...>
> > wrote:
> > > Hi !!
> > >
> > > The open-sharedroot project is proud to announce the general
> > > availability of the comoonics 4.5 productive version! With this
> > > release, the project reached a major milestone and we want to thank
> > > and congratulate all participants in the project for this great
> > > success!!
> > >
> > > The latest release of the comoonics 4.5 beta has passed our strict
> > > quality assurance process and we are proud to announce the release
> > > of the cluster software comoonics 4.5!!
> > >
> > > comoonics 4.5 includes the following major enhancements:
> > >
> > > * Improved boot-time configuration. Configure the cluster node
> > >   during boot time.
> > > * Improved hardware detection and configuration.
> > > * Enhanced cluster lifecycle. Boot the same OS installation in
> > >   different (virt-)hardware configurations.
> > > * Lite initrds to reduce the size of initrds by 50%
> > > * Update existing initrds without the need to newly build them
> > > * The same initrd can be used to boot multiple different kernels
> > >
> > > The open-sharedroot project now adds support for the following
> > > technologies:
> > > * NFS3/NFS4
> > > * OCFS2
> > > * Ext3 (single node)
> > >
> > > Mark
From: Mark H. <hla...@at...> - 2009-07-01 14:22:06
Hi Gordan,

at the moment, the preview and the productive channels are quite similar. The productive channel will stay as it is until a new package set has been qa'ed. The preview channel is designed to hold the latest (not qa'ed) software versions.

-Mark

On Wednesday 01 July 2009 13:49:49 Gordan Bobic wrote:
> Are there any major changes in this compared to recent preview packages?
>
> On Wed, 1 Jul 2009 10:13:43 +0200, Mark Hlawatschek <hla...@at...>
> wrote:
> > Hi !!
> >
> > The open-sharedroot project is proud to announce the general
> > availability of the comoonics 4.5 productive version! With this
> > release, the project reached a major milestone and we want to thank
> > and congratulate all participants in the project for this great
> > success!!
> >
> > The latest release of the comoonics 4.5 beta has passed our strict
> > quality assurance process and we are proud to announce the release of
> > the cluster software comoonics 4.5!!
> >
> > comoonics 4.5 includes the following major enhancements:
> >
> > * Improved boot-time configuration. Configure the cluster node during
> >   boot time.
> > * Improved hardware detection and configuration.
> > * Enhanced cluster lifecycle. Boot the same OS installation in
> >   different (virt-)hardware configurations.
> > * Lite initrds to reduce the size of initrds by 50%
> > * Update existing initrds without the need to newly build them
> > * The same initrd can be used to boot multiple different kernels
> >
> > The open-sharedroot project now adds support for the following
> > technologies:
> > * NFS3/NFS4
> > * OCFS2
> > * Ext3 (single node)
> >
> > Mark

--
Dipl.-Ing. Mark Hlawatschek
Vorstand
Phone: +49-89 452 3538-15
E-Mail: hla...@at...
ATIX Informationstechnologie und Consulting AG | Einsteinstrasse 10 |
85716 Unterschleissheim | www.atix.de | www.open-sharedroot.org
Registergericht: Amtsgericht Muenchen, Registernummer: HRB 168930,
USt.-Id.: DE209485962 | Vorstand: Marc Grimme, Mark Hlawatschek,
Thomas Merz (Vors.) | Vorsitzender des Aufsichtsrats: Dr. Martin Buss
From: Gordan B. <go...@bo...> - 2009-07-01 11:50:59
Are there any major changes in this compared to recent preview packages?

On Wed, 1 Jul 2009 10:13:43 +0200, Mark Hlawatschek <hla...@at...> wrote:
> Hi !!
>
> The open-sharedroot project is proud to announce the general
> availability of the comoonics 4.5 productive version! With this release,
> the project reached a major milestone and we want to thank and
> congratulate all participants in the project for this great success!!
>
> The latest release of the comoonics 4.5 beta has passed our strict
> quality assurance process and we are proud to announce the release of
> the cluster software comoonics 4.5!!
>
> comoonics 4.5 includes the following major enhancements:
>
> * Improved boot-time configuration. Configure the cluster node during
>   boot time.
> * Improved hardware detection and configuration.
> * Enhanced cluster lifecycle. Boot the same OS installation in different
>   (virt-)hardware configurations.
> * Lite initrds to reduce the size of initrds by 50%
> * Update existing initrds without the need to newly build them
> * The same initrd can be used to boot multiple different kernels
>
> The open-sharedroot project now adds support for the following
> technologies:
> * NFS3/NFS4
> * OCFS2
> * Ext3 (single node)
>
> Mark
From: Marc G. <gr...@at...> - 2009-07-01 11:07:55
Hi Gordan,
sorry for taking that long.

On Tuesday 16 June 2009 16:53:02 Gordan Bobic wrote:
> What is the difference between these two files? I noticed that
> /etc/xkillallprocs got clobbered after a reboot, and the two lines I
> added to it (glusterfs and glusterfsd) got removed. On shutdown, with
> the file

Yes they got removed. Basically they should be built automatically. The procs are got from a function called {rootfs}_get_userspace_procs. In your case it should be glusterfs_get_userspace_procs. For gfs (rhel5) it looks as follows:

function gfs_get_userspace_procs {
    local clutype=$1
    local rootfs=$2

    echo -e "aisexec \n\
ccsd \n\
fenced \n\
gfs_controld \n\
dlm_controld \n\
groupd \n\
qdiskd \n\
clvmd"
}

> edited to add those two, shutdown with glusterfs still locks up
> immediately after "sending all processes the TERM signal". Any ideas on
> how to debug this further? My gut feeling is that glusterfs ends up
> getting killed and the machine locks up because the rootfs went away,
> but it's quite hard to investigate a system in such a hung state.

Yes. It is. I always add /bin/bash(s) at every step in the relevant initscripts. But I would say if you get that xkillallprocs right it should work.

You also need the /usr/comoonics/sbin/killall binary, which does not kill _ALL_ userprocesses but can exclude the ones in i.e. /etc/xkillallprocs.

For a little background see:
https://bugzilla.redhat.com/show_bug.cgi?id=496843
https://bugzilla.redhat.com/show_bug.cgi?id=496854
https://bugzilla.redhat.com/show_bug.cgi?id=496857
https://bugzilla.redhat.com/show_bug.cgi?id=496861

Again, sorry for the late response. But still, hope that helps.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Mark H. <hla...@at...> - 2009-07-01 08:14:43
Hi !!

The open-sharedroot project is proud to announce the general availability of the comoonics 4.5 productive version! With this release, the project reached a major milestone and we want to thank and congratulate all participants in the project for this great success!!

The latest release of the comoonics 4.5 beta has passed our strict quality assurance process and we are proud to announce the release of the cluster software comoonics 4.5!!

comoonics 4.5 includes the following major enhancements:

* Improved boot-time configuration. Configure the cluster node during boot time.
* Improved hardware detection and configuration.
* Enhanced cluster lifecycle. Boot the same OS installation in different (virt-)hardware configurations.
* Lite initrds to reduce the size of initrds by 50%
* Update existing initrds without the need to newly build them
* The same initrd can be used to boot multiple different kernels

The open-sharedroot project now adds support for the following technologies:
* NFS3/NFS4
* OCFS2
* Ext3 (single node)

Mark
From: Gordan B. <go...@bo...> - 2009-06-16 14:53:10
What is the difference between these two files? I noticed that /etc/xkillallprocs got clobbered after a reboot, and the two lines I added to it (glusterfs and glusterfsd) got removed. On shutdown, with the file edited to add those two, shutdown with glusterfs still locks up immediately after "sending all processes the TERM signal". Any ideas on how to debug this further? My gut feeling is that glusterfs ends up getting killed and the machine locks up because the rootfs went away, but it's quite hard to investigate a system in such a hung state.

Gordan
From: Marc G. <gr...@at...> - 2009-06-05 10:50:13
On Friday 05 June 2009 12:21:53 Gordan Bobic wrote:
> Marc Grimme wrote:
> > If so let me know and I'll resync.
>
> Thanks for fixing the repository. :)
>
> But what's this about during the initrd build?
>
> "Extracting rpms...Cannot find rpm "sysvinit-comoonics". Skipping."
>
> Should this not be SysVinit-comoonics?
>
> [root@arthur /etc/comoonics]# grep -ir sysvinit *
> bootimage/rpms.initrd.d/comoonics.list:SysVinit-comoonics
> bootimage/rpms.initrd.d/comoonics.list:sysvinit-comoonics
>
> Is this a workaround, (forward?) compatibility, or just a bug? :^)

Not a bug, but yes, forward compatibility. With sles10 the sysvinit package is sysvinit-comoonics, not SysVinit-comoonics, because the base package is sysvinit/SysVinit. Later on both will be moved into their dep dirs (rhel5, rhel4, fedora and sles10), but up to now I followed this way ;-) . I don't like it, it was only easier.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2009-06-05 10:22:00
Marc Grimme wrote:
> If so let me know and I'll resync.

Thanks for fixing the repository. :)

But what's this about during the initrd build?

"Extracting rpms...Cannot find rpm "sysvinit-comoonics". Skipping."

Should this not be SysVinit-comoonics?

[root@arthur /etc/comoonics]# grep -ir sysvinit *
bootimage/rpms.initrd.d/comoonics.list:SysVinit-comoonics
bootimage/rpms.initrd.d/comoonics.list:sysvinit-comoonics

Is this a workaround, (forward?) compatibility, or just a bug? :^)

Thanks.

Gordan
From: Gordan B. <go...@bo...> - 2009-06-04 20:55:11
Marc Grimme wrote:
> On Thursday 04 June 2009 21:45:17 Gordan Bobic wrote:
> > comoonics-preview                           | 1.5 kB  00:00
> > primary.xml.gz                              |  25 kB  00:00
> > http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
> > Trying other mirror.
> > primary.xml.gz                              |  25 kB  00:00
> > http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
> > Trying other mirror.
> > Error: failure: repodata/primary.xml.gz from comoonics-preview:
> > [Errno 256] No more mirrors to try.
> >
> > Can somebody please fix it? I need to build a new cluster and it
> > requires stuff that's only available in preview. :^)
>
> Did you clear the cache (rm -rf /var/cache/yum/comoonics-*)? I've
> rebuilt everything just yesterday and it should be alright. If so let
> me know and I'll resync.

I'm starting with a 100% fresh, clean CentOS 5.3 box that hasn't been yum updated yet, so /var/cache/yum is completely empty.

Gordan
From: Marc G. <gr...@at...> - 2009-06-04 19:51:49
On Thursday 04 June 2009 21:45:17 Gordan Bobic wrote:
> comoonics-preview                           | 1.5 kB  00:00
> primary.xml.gz                              |  25 kB  00:00
> http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
> Trying other mirror.
> primary.xml.gz                              |  25 kB  00:00
> http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
> Trying other mirror.
> Error: failure: repodata/primary.xml.gz from comoonics-preview:
> [Errno 256] No more mirrors to try.
>
> Can somebody please fix it? I need to build a new cluster and it
> requires stuff that's only available in preview. :^)
>
> Thanks.
>
> Gordan

Did you clear the cache (rm -rf /var/cache/yum/comoonics-*)? I've rebuilt everything just yesterday and it should be alright. If so let me know and I'll resync.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2009-06-04 19:45:25
comoonics-preview                           | 1.5 kB  00:00
primary.xml.gz                              |  25 kB  00:00
http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
primary.xml.gz                              |  25 kB  00:00
http://download.atix.de/yum/comoonics/redhat-el5/preview/noarch/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
Trying other mirror.
Error: failure: repodata/primary.xml.gz from comoonics-preview: [Errno 256] No more mirrors to try.

Can somebody please fix it? I need to build a new cluster and it requires stuff that's only available in preview. :^)

Thanks.

Gordan
From: Gordan B. <go...@bo...> - 2009-05-06 10:00:53
On Wed, 6 May 2009 09:22:25 +0200, Marc Grimme <gr...@at...> wrote:
> On Tuesday 05 May 2009 23:15:46 Gordan Bobic wrote:
> > Gordan Bobic wrote:
> > > Module dependencies don't seem to get worked out correctly. The e100
> > > module requires the mii module to be loaded, but mkinitrd -l doesn't
> > > package the mii module into the initrd. This might be due to the fact
> > > that depmod -a never ran on the new kernel. Either way, this is
> > > something that mkinitrd -l should really deal with on its own without
> > > requiring manual intervention via -M.
>
> I perfectly agree. Btw, this also holds for the tg3 driver, which
> requires the libphy module since kernel 2.6.18-128.

mii is required by e100 in RHEL 5.2 and 5.3, but this has probably been the case for a while.

> But I didn't find a way how to query module dependencies.
>
> Any ideas would be appreciated.

Something along these lines would probably work:

# grep e100.ko /lib/modules/$(uname -r)/modules.dep
/lib/modules/2.6.18-128.1.6.el5/kernel/drivers/net/e100.ko: /lib/modules/2.6.18-128.1.6.el5/kernel/drivers/net/mii.ko

It could similarly be grepped out of /proc/modules or lsmod.

> > Especially when -M doesn't work:
> > # /opt/atix/comoonics-bootimage/mkinitrd -l -M mii /tmp/initrd_sr.img 2.6.18-128.1.6.el5
> > /opt/atix/comoonics-bootimage/mkinitrd: illegal option -- M
> > Error wrong option.
>
> You're right. This is a bug.
> Apply the attached patch or, as a workaround, add this module to
> /etc/comoonics/comoonics-bootimage.cfg in the default_modules section.

Thanks, will try the patch tonight. I already worked around it by adding the uncaught module files to an auxiliary list file.

Gordan
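Building on the modules.dep suggestion above, resolving a module's dependencies by name is a small lookup (a sketch only; moddeps is a made-up helper name for illustration, not part of mkinitrd):

# List the modules a given module depends on, per modules.dep.
# Usage: moddeps e100 [kernel-version]  ->  prints e.g. "mii"
moddeps() {
    local mod=$1 kver=${2:-$(uname -r)}
    # modules.dep lines look like: <module path>: <dep path> <dep path> ...
    grep "/${mod}\.ko:" "/lib/modules/${kver}/modules.dep" \
        | cut -d: -f2 \
        | tr ' ' '\n' \
        | sed -n 's|.*/\(.*\)\.ko$|\1|p'
}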
From: Marc G. <gr...@at...> - 2009-05-06 07:22:40
On Tuesday 05 May 2009 23:15:46 Gordan Bobic wrote:
> Gordan Bobic wrote:
> > Module dependencies don't seem to get worked out correctly. The e100
> > module requires the mii module to be loaded, but mkinitrd -l doesn't
> > package the mii module into the initrd. This might be due to the fact
> > that depmod -a never ran on the new kernel. Either way, this is
> > something that mkinitrd -l should really deal with on its own without
> > requiring manual intervention via -M.

I perfectly agree. Btw, this also holds for the tg3 driver, which requires the libphy module since kernel 2.6.18-128. But I didn't find a way how to query module dependencies.

Any ideas would be appreciated.

> Especially when -M doesn't work:
> # /opt/atix/comoonics-bootimage/mkinitrd -l -M mii /tmp/initrd_sr.img 2.6.18-128.1.6.el5
> /opt/atix/comoonics-bootimage/mkinitrd: illegal option -- M
> Error wrong option.

You're right. This is a bug. Apply the attached patch or, as a workaround, add this module to /etc/comoonics/comoonics-bootimage.cfg in the default_modules section.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2009-05-05 21:16:03
Gordan Bobic wrote:
> Module dependencies don't seem to get worked out correctly. The e100
> module requires the mii module to be loaded, but mkinitrd -l doesn't
> package the mii module into the initrd. This might be due to the fact
> that depmod -a never ran on the new kernel. Either way, this is
> something that mkinitrd -l should really deal with on its own without
> requiring manual intervention via -M.

Especially when -M doesn't work:

# /opt/atix/comoonics-bootimage/mkinitrd -l -M mii /tmp/initrd_sr.img 2.6.18-128.1.6.el5
/opt/atix/comoonics-bootimage/mkinitrd: illegal option -- M
Error wrong option.

Gordan
From: Gordan B. <go...@bo...> - 2009-05-05 21:08:12
Module dependencies don't seem to get worked out correctly. The e100 module requires the mii module to be loaded, but mkinitrd -l doesn't package the mii module into the initrd. This might be due to the fact that depmod -a never ran on the new kernel. Either way, this is something that mkinitrd -l should really deal with on its own without requiring manual intervention via -M.

Gordan
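If the missing depmod run is indeed the culprit, regenerating the dependency map for the target kernel before building the initrd is a one-liner (standard depmod usage; the kernel version is the one from the report above):

# Rebuild /lib/modules/<kver>/modules.dep for the target kernel so
# dependency lookups (e.g. e100 -> mii) can resolve:
depmod -a 2.6.18-128.1.6.el5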
From: Marc G. <gr...@at...> - 2009-04-21 08:00:31
On Monday 20 April 2009 22:46:07 Gordan Bobic wrote:
> What's this about:
> /opt/atix/comoonics-bootimage/boot-scripts/etc/clusterfs-lib.sh: line
> 1180: typeset: `fuse.glusterfs_chroot_needed': not a valid identifier
>
> I haven't seen that one before.

Me neither ;-) .

Background: I introduced some new functions. One is ${rootfs}_chroot_needed: default 0 => yes, meaning we need a chroot. For nfsv3 and localfs, for example, this returns 1 because they don't need a chroot, so we are not going to build it. Do you have a rootfs type of fuse.glusterfs in /etc/cluster/cluster.conf? Because clusterfs-lib.sh just tries to call ${rootfs}_chroot_needed if it exists.

> Other than that and the ambiguous redirect mentioned previously, the
> latest version in preview seems to "just work" (tm). :) I've not had
> that happen in a while with my weird bleeding edge clusters. :)

What did I say!

> Oh, and the fact that shutdown seems to hang half way through on
> glusterfs, but that's always been the case with glusterfs - from what I
> can tell killall seems to kill something important (I excluded glusterfs
> daemons). But I can live with that at the moment because it's quite hard
> to debug. ;)

Another new thing. This one should be much easier to track down, because shutting down and rebooting have been improved a lot. Another new function helping with this is ${rootfs}_get_userspace_procs: if this function is implemented, it returns a list (separated with \n) of processes that need to stay running until the very end. With rhel5 gfs it is:

function gfs_get_userspace_procs {
    local clutype=$1
    local rootfs=$2

    echo -e "aisexec \n\
ccsd \n\
fenced \n\
gfs_controld \n\
dlm_controld \n\
groupd \n\
qdiskd \n\
clvmd"
}

This list goes to /etc/xkillall_procs. /etc/xkillall_procs is read when calling halt and is a list of processes that will not get killed by killall5/killall9. For this we added a new option to killall5/9, the [-x {processname}]*.

I'm going to file some bugs for fedora and rhel5 today and write more documentation for this on open-sharedroot.org, so that these files (I call them xfiles) are taken into account by the initscripts. In addition to xkillall_procs there will be a /etc/xrootfs and /etc/xtab:

/etc/xrootfs: the rootfs type that will not be umounted by either netfs or network.
/etc/xtab: mountpoints that should not get umounted by netfs/network/halt. These are i.e. /cdsl.local and, if need be, /var/comoonics/chroot..

Take the new rpms I've uploaded right away.

And thanks for your input.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
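Taken together, the x-files for a glusterfs root might end up looking like this (contents inferred from the description above for illustration, not copied from a real system):

# /etc/xkillall_procs - processes killall5 must spare (one per line),
# as produced by glusterfs_get_userspace_procs:
glusterfs
glusterfsd

# /etc/xrootfs - rootfs type that netfs/network must not umount:
fuse.glusterfs

# /etc/xtab - mountpoints that must stay mounted through shutdown:
/cdsl.local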