From: Gordan B. <go...@bo...> - 2007-10-15 13:09:54

>> For now, I have a different question - is there a way to specify mount
>> options (e.g. noatime) for the GFS shared root? I know there is "options"
>> for the nfsclient tag, but is there something similar for the rootvolume
>> tag? What is the best way to do this?
>
> You can define mount options with a mountopts="..." attribute in the
> <rootvolume/> tag.
> E.g. <rootvolume name="/dev/vg_axqad101_sr/lv_sharedroot"
> mountopts="noatime,nodiratime"/>

Thanks for that. :-)

Gordan
From: Mark H. <hla...@at...> - 2007-10-15 13:05:24

> For now, I have a different question - is there a way to specify mount
> options (e.g. noatime) for the GFS shared root? I know there is "options"
> for the nfsclient tag, but is there something similar for the rootvolume
> tag? What is the best way to do this?

You can define mount options with a mountopts="..." attribute in the
<rootvolume/> tag.
E.g. <rootvolume name="/dev/vg_axqad101_sr/lv_sharedroot"
mountopts="noatime,nodiratime"/>

Mark
From: Gordan B. <go...@bo...> - 2007-10-15 12:46:00

On Fri, 12 Oct 2007, Marc Grimme wrote:

>> I have attached a tar ball of files that I changed (or added) in the
>> standard setup to get iSCSI to work.
>>
>> Please let me know if I omitted anything.
>>
>> There is probably some pruning that could be done to this (I went for
>> including full RPMs + configs), which was arguably sub-optimal - there is
>> likely to be some docs/man pages included (not that anyone is likely to
>> notice next to 70MB of drivers :^).
>>
>> I also hard-removed (C)LVM stuff - something that you probably won't want
>> to merge into the main tree.
>>
>> Thanks for all the help over the last few days, and for accepting my
>> patches. I really appreciate it. This is by far the most positive OSS
>> experience I have had to date. :-)
>
> Attached you'll find the latest rpm with iscsi included.
> The last thing you should need is either rootsource=iscsi as a bootparam,
> or <rootsource name="iscsi"/> in com_info.

Hmm... Would it perhaps be better to put this a level up in the XML? I'm not
sure it makes sense to set this up on every node entry. It would make more
sense to define it for the whole cluster. I don't really see different nodes
using a different method to mount the same block device.

> Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
> the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260
>
> Let me know if it works.

Thanks, I'll try it in a bit.

For now, I have a different question - is there a way to specify mount
options (e.g. noatime) for the GFS shared root? I know there is "options"
for the nfsclient tag, but is there something similar for the rootvolume
tag? What is the best way to do this?

Thanks.

Gordan
From: Marc G. <gr...@at...> - 2007-10-12 15:34:48

On Friday 12 October 2007 12:29:27 Gordan Bobic wrote:
> I have attached a tar ball of files that I changed (or added) in the
> standard setup to get iSCSI to work.
>
> Please let me know if I omitted anything.
>
> There is probably some pruning that could be done to this (I went for
> including full RPMs + configs), which was arguably sub-optimal - there is
> likely to be some docs/man pages included (not that anyone is likely to
> notice next to 70MB of drivers :^).
>
> I also hard-removed (C)LVM stuff - something that you probably won't want
> to merge into the main tree.
>
> Thanks for all the help over the last few days, and for accepting my
> patches. I really appreciate it. This is by far the most positive OSS
> experience I have had to date. :-)
>
> Gordan

Attached you'll find the latest rpm with iscsi included.
The last thing you should need is either rootsource=iscsi as a bootparam, or
<rootsource name="iscsi"/> in the com_info section.

Also add to /etc/comoonics/bootimage/files.initrd.d/user_edit.list
the /var/lib/iscsi/nodes/iqn.2001-05.com.equallogic\:1-234567-89abcdef0-123456789abcdef0-ssi-root/10.10.10.1\,3260

Let me know if it works.

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/ http://www.open-sharedroot.org/

** Visit us at LinuxWorld Conference & Expo
31.10. - 01.11.2007 in Jaarbeurs Utrecht - The Netherlands
ATIX stand: Hall 9 / B 005 **

ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany

Phone: +49-89 452 3538-0
Fax: +49-89 990 1766-0

Registergericht: Amtsgericht Muenchen
Registernummer: HRB 168930
USt.-Id.: DE209485962

Vorstand: Marc Grimme, Mark Hlawatschek, Thomas Merz (Vors.)
Vorsitzender des Aufsichtsrats: Dr. Martin Buss
From: Gordan B. <go...@bo...> - 2007-10-12 10:30:08

I have attached a tar ball of files that I changed (or added) in the
standard setup to get iSCSI to work.

Please let me know if I omitted anything.

There is probably some pruning that could be done to this (I went for
including full RPMs + configs), which was arguably sub-optimal - there is
likely to be some docs/man pages included (not that anyone is likely to
notice next to 70MB of drivers :^).

I also hard-removed (C)LVM stuff - something that you probably won't want
to merge into the main tree.

Thanks for all the help over the last few days, and for accepting my
patches. I really appreciate it. This is by far the most positive OSS
experience I have had to date. :-)

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 10:07:02

On Fri, 12 Oct 2007, Mark Hlawatschek wrote:

> On Friday 12 October 2007 11:53:08 Gordan Bobic wrote:
>> On Fri, 12 Oct 2007, Mark Hlawatschek wrote:
>>>> Thus, in the initrd_cleanup, I would need to rm -rf
>>>> /var/comoonics/chroot/lib/modules ?
>>>
>>> In my opinion it would be better to modify the create_chroot function.
>>> Here, the files are copied from the initrd. We wouldn't need to copy
>>> the /lib/modules.
>>
>> But wouldn't that only be useful for the disk based init root? I see this
>> as an issue only for the ramdisk init root. Or does the init ramdisk make
>> a new ramdisk root if there is no disk for init root?
>
> Yes, it creates a new ramdisk root when there is no local disk assigned.

Ah, OK. I misunderstood how it works. I thought the initrd is what sticks
around. I didn't realise that there was a secondary init rd that gets
created to carry the SSI start-up and tear-down.

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 10:04:15

>>>>>>>> I'm figuring that just adding rm -rf /lib/modules would help. I have
>>>>>>>> just added this to the clean_initrd function. I'll have a look at
>>>>>>>> what else can be pruned after the GFS root is mounted.
>>>>>>>
>>>>>>> Hmm, this doesn't appear to have worked. I think it happens too late.
>>>>>>> I'd need to delete these before the new chroot is fired up. Any
>>>>>>> suggestion on where would be a good time to put this?
>>>>>>
>>>>>> The chroot environment is built in
>>>>>> etc/rhel{4,5}/boot-lib.sh:create_chroot(). There might be a more
>>>>>> intelligent way to create the chroot environment ;-)
>>>>>
>>>>> I think there may be a path issue here. What is the absolute path of
>>>>> /lib/modules in the initrd? I told it to mv /lib/modules /lib/modules2,
>>>>> and this doesn't appear to have happened in either the initrd nor in
>>>>> the real GFS root. :-/
>>>>
>>>> during the initrd init process, a new chroot environment is built to
>>>> hold the cluster services themselves. This has to be done to solve the
>>>> chicken-and-egg problem of the rootfs depending on userspace cluster
>>>> services.
>>>>
>>>> The new chroot environment is created in the function create_chroot.
>>>> The default path for the chroot environment is /var/comoonics/chroot.
>>>
>>> Does that mean the /lib/modules, even during the initrd boot sequence,
>>> lives under /var/comoonics/chroot/lib/modules?
>>
>> Yes.
>
> Well, adding:
> mv /var/comoonics/chroot/lib/modules /var/comoonics/chroot/lib/modules2
> didn't rename the folder. I give up.

I just had a different idea - if we are making the initrd of the current
system (as we will be in most cases), how about just bundling the modules
currently loaded (from lsmod)? That way we could only include the drivers
that are required from the /lib/modules tree, rather than everything. If
something isn't loaded once the system is fully booted up, then the chances
are that it isn't needed to get the system that far.

Gordan
From: Mark H. <hla...@at...> - 2007-10-12 10:02:43

On Friday 12 October 2007 11:53:08 Gordan Bobic wrote:
> On Fri, 12 Oct 2007, Mark Hlawatschek wrote:
>>> Thus, in the initrd_cleanup, I would need to rm -rf
>>> /var/comoonics/chroot/lib/modules ?
>>
>> In my opinion it would be better to modify the create_chroot function.
>> Here, the files are copied from the initrd. We wouldn't need to copy
>> the /lib/modules.
>
> But wouldn't that only be useful for the disk based init root? I see this
> as an issue only for the ramdisk init root. Or does the init ramdisk make
> a new ramdisk root if there is no disk for init root?

Yes, it creates a new ramdisk root when there is no local disk assigned.

> Basically, if the init root is being moved to a local disk, then the fact
> that the dedicated partition needs to be 70MB bigger isn't such a huge
> deal. An extra 70MB of RAM being used is a considerably more serious
> issue.

I agree with you.

Mark
From: Gordan B. <go...@bo...> - 2007-10-12 10:01:42

On Fri, 12 Oct 2007, Mark Hlawatschek wrote:

>>>>>>> I'm figuring that just adding rm -rf /lib/modules would help. I have
>>>>>>> just added this to the clean_initrd function. I'll have a look at
>>>>>>> what else can be pruned after the GFS root is mounted.
>>>>>>
>>>>>> Hmm, this doesn't appear to have worked. I think it happens too late.
>>>>>> I'd need to delete these before the new chroot is fired up. Any
>>>>>> suggestion on where would be a good time to put this?
>>>>>
>>>>> The chroot environment is built in
>>>>> etc/rhel{4,5}/boot-lib.sh:create_chroot(). There might be a more
>>>>> intelligent way to create the chroot environment ;-)
>>>>
>>>> I think there may be a path issue here. What is the absolute path of
>>>> /lib/modules in the initrd? I told it to mv /lib/modules /lib/modules2,
>>>> and this doesn't appear to have happened in either the initrd nor in
>>>> the real GFS root. :-/
>>>
>>> during the initrd init process, a new chroot environment is built to
>>> hold the cluster services themselves. This has to be done to solve the
>>> chicken-and-egg problem of the rootfs depending on userspace cluster
>>> services.
>>>
>>> The new chroot environment is created in the function create_chroot.
>>> The default path for the chroot environment is /var/comoonics/chroot.
>>
>> Does that mean the /lib/modules, even during the initrd boot sequence,
>> lives under /var/comoonics/chroot/lib/modules?
>
> Yes.

Well, adding:

mv /var/comoonics/chroot/lib/modules /var/comoonics/chroot/lib/modules2

didn't rename the folder. I give up.

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 09:53:14

On Fri, 12 Oct 2007, Mark Hlawatschek wrote:

>> Thus, in the initrd_cleanup, I would need to rm -rf
>> /var/comoonics/chroot/lib/modules ?
>
> In my opinion it would be better to modify the create_chroot function.
> Here, the files are copied from the initrd. We wouldn't need to copy
> the /lib/modules.

But wouldn't that only be useful for the disk based init root? I see this
as an issue only for the ramdisk init root. Or does the init ramdisk make
a new ramdisk root if there is no disk for init root?

Basically, if the init root is being moved to a local disk, then the fact
that the dedicated partition needs to be 70MB bigger isn't such a huge
deal. An extra 70MB of RAM being used is a considerably more serious issue.

Gordan
From: Mark H. <hla...@at...> - 2007-10-12 09:38:06

On Friday 12 October 2007 11:14:01 Gordan Bobic wrote:
> On Fri, 12 Oct 2007, Mark Hlawatschek wrote:
> > On Friday 12 October 2007 10:37:22 Gordan Bobic wrote:
> >>>>> I'm figuring that just adding rm -rf /lib/modules would help. I have
> >>>>> just added this to the clean_initrd function. I'll have a look at
> >>>>> what else can be pruned after the GFS root is mounted.
> >>>>
> >>>> Hmm, this doesn't appear to have worked. I think it happens too late.
> >>>> I'd need to delete these before the new chroot is fired up. Any
> >>>> suggestion on where would be a good time to put this?
> >>>
> >>> The chroot environment is built in
> >>> etc/rhel{4,5}/boot-lib.sh:create_chroot(). There might be a more
> >>> intelligent way to create the chroot environment ;-)
> >>
> >> I think there may be a path issue here. What is the absolute path of
> >> /lib/modules in the initrd? I told it to mv /lib/modules /lib/modules2,
> >> and this doesn't appear to have happened in either the initrd nor in
> >> the real GFS root. :-/
> >
> > during the initrd init process, a new chroot environment is built to
> > hold the cluster services themselves. This has to be done to solve the
> > chicken-and-egg problem of the rootfs depending on userspace cluster
> > services.
> >
> > The new chroot environment is created in the function create_chroot.
> > The default path for the chroot environment is /var/comoonics/chroot.
>
> Does that mean the /lib/modules, even during the initrd boot sequence,
> lives under /var/comoonics/chroot/lib/modules?

Yes.

> Thus, in the initrd_cleanup, I would need to rm -rf
> /var/comoonics/chroot/lib/modules ?

In my opinion it would be better to modify the create_chroot function. Here,
the files are copied from the initrd. We wouldn't need to copy
the /lib/modules.

Mark
From: Gordan B. <go...@bo...> - 2007-10-12 09:36:28

I finally have everything working as I want it. Apart from one thing - I was
expecting to have /boot on iSCSI/GFS as well, which is now not going to
happen. So I want to end up using the whole of /dev/sdb for the root GFS.

My plan is to make another SAN partition, mount it via iSCSI, tar the
contents across, change cluster.conf and build a second initrd to boot it
(for testing - before I blow away the original GFS root).

Is there anything special that I should pay attention to during this
exercise? Will tar (or cp -ax as per the original local->SAN copy in the
docs) pick up everything that is required (e.g. the cdsl stuff)? What will I
have to re-do manually afterwards to make the new image bootable?

Thanks.

Gordan

P.S. Will post the iSCSI enabling mods in a bit.
From: Gordan B. <go...@bo...> - 2007-10-12 09:14:10

On Fri, 12 Oct 2007, Mark Hlawatschek wrote:

> On Friday 12 October 2007 10:37:22 Gordan Bobic wrote:
>>>>> I'm figuring that just adding rm -rf /lib/modules would help. I have
>>>>> just added this to the clean_initrd function. I'll have a look at what
>>>>> else can be pruned after the GFS root is mounted.
>>>>
>>>> Hmm, this doesn't appear to have worked. I think it happens too late.
>>>> I'd need to delete these before the new chroot is fired up. Any
>>>> suggestion on where would be a good time to put this?
>>>
>>> The chroot environment is built in
>>> etc/rhel{4,5}/boot-lib.sh:create_chroot(). There might be a more
>>> intelligent way to create the chroot environment ;-)
>>
>> I think there may be a path issue here. What is the absolute path of
>> /lib/modules in the initrd? I told it to mv /lib/modules /lib/modules2,
>> and this doesn't appear to have happened in either the initrd nor in the
>> real GFS root. :-/
>
> during the initrd init process, a new chroot environment is built to hold
> the cluster services themselves. This has to be done to solve the
> chicken-and-egg problem of the rootfs depending on userspace cluster
> services.
>
> The new chroot environment is created in the function create_chroot.
> The default path for the chroot environment is /var/comoonics/chroot.

Does that mean the /lib/modules, even during the initrd boot sequence,
lives under /var/comoonics/chroot/lib/modules?

Thus, in the initrd_cleanup, I would need to rm -rf
/var/comoonics/chroot/lib/modules ?

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 09:11:15

On Fri, 12 Oct 2007, Gordan Bobic wrote:

> On Fri, 12 Oct 2007, Marc Grimme wrote:
>
>>>> So I'll explain it without.
>>>> It's basically quite easy:
>>>> 1. For every node: spare one partition for the chroot (let's say it
>>>> is /dev/sda4) and let it be at least 500M.
>>>> 2. For every node: mkfs.ext3 /dev/sda4
>>>> 3. Add to the com_info section for every node the following:
>>>> <chrootenv mountpoint="/var/comoonics/chroot" fstype="ext3"
>>>> device="/dev/sda4" chrootdir="/var/comoonics/chroot"/>
>>>> 4. Make a new initrd
>>>> 5. Reboot every node
>>>> That's it, now everything should be running on your local disk instead
>>>> of tmpfs.
>>>
>>> OK - how does this work, then? Does it copy the initrd to the disk at
>>> boot time? Or does the mkinitrd build the init root straight on that
>>> partition? Or does something else happen? What does
>>> /etc/sysconfig/comoonics-chroot do, then? I thought it had some part to
>>> play in this.
>>
>> So first the initrd is loaded into RAM. This we cannot change. Then the
>> localdisk is set up (linuxrc.generic.sh lines 279-288).

Just thinking about this - would it not be better to explicitly re-create
this file system (thus blowing away whatever was there beforehand) every
time the init root is put there? My reasoning is that something could
pollute this fs in the meantime, or it could be not unmounted cleanly or
similar. There is no need for journalling on this, as it is created from
the ramdisk every time. So instead run mkfs.ext2 on it every time before it
gets used.

Any thoughts on this? Granted, this could be a tad dangerous, as
accidentally changing the configuration would mean a wrong file system gets
blown away - which is possibly a bit _too_ dangerous.

Gordan
From: Mark H. <hla...@at...> - 2007-10-12 09:04:39

On Friday 12 October 2007 10:37:22 Gordan Bobic wrote:
> >>> I'm figuring that just adding rm -rf /lib/modules would help. I have
> >>> just added this to the clean_initrd function. I'll have a look at what
> >>> else can be pruned after the GFS root is mounted.
> >>
> >> Hmm, this doesn't appear to have worked. I think it happens too late.
> >> I'd need to delete these before the new chroot is fired up. Any
> >> suggestion on where would be a good time to put this?
> >
> > The chroot environment is built in
> > etc/rhel{4,5}/boot-lib.sh:create_chroot(). There might be a more
> > intelligent way to create the chroot environment ;-)
>
> I think there may be a path issue here. What is the absolute path of
> /lib/modules in the initrd? I told it to mv /lib/modules /lib/modules2,
> and this doesn't appear to have happened in either the initrd nor in the
> real GFS root. :-/

During the initrd init process, a new chroot environment is built to hold
the cluster services themselves. This has to be done to solve the
chicken-and-egg problem of the rootfs depending on userspace cluster
services.

The new chroot environment is created in the function create_chroot.
The default path for the chroot environment is /var/comoonics/chroot.

Mark
From: Marc G. <gr...@at...> - 2007-10-12 08:43:36

On Friday 12 October 2007 10:35:33 Gordan Bobic wrote:
> On Fri, 12 Oct 2007, Marc Grimme wrote:
>
>> So first the initrd is loaded into RAM. This we cannot change. Then the
>> localdisk is set up (linuxrc.generic.sh lines 279-288).
>
> Aha! All becomes clear. So /etc/sysconfig/comoonics-chroot is not
> something I need to worry about.

Only if you are using RHEL4 with comoonics-bootimage-1.2 and a local
filesystem for the chroot, are not willing to change the old config, and
nevertheless want to upgrade to bootimage-1.3. It is only there for
compatibility reasons with 1.2.

Marc.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2007-10-12 08:37:25

>>> I'm figuring that just adding rm -rf /lib/modules would help. I have
>>> just added this to the clean_initrd function. I'll have a look at what
>>> else can be pruned after the GFS root is mounted.
>>
>> Hmm, this doesn't appear to have worked. I think it happens too late. I'd
>> need to delete these before the new chroot is fired up. Any
>> suggestion on where would be a good time to put this?
>
> The chroot environment is built in
> etc/rhel{4,5}/boot-lib.sh:create_chroot().
> There might be a more intelligent way to create the chroot environment ;-)

I think there may be a path issue here. What is the absolute path of
/lib/modules in the initrd? I told it to mv /lib/modules /lib/modules2,
and this doesn't appear to have happened in either the initrd nor in the
real GFS root. :-/

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 08:35:39

On Fri, 12 Oct 2007, Marc Grimme wrote:

>>> So I'll explain it without.
>>> It's basically quite easy:
>>> 1. For every node: spare one partition for the chroot (let's say it
>>> is /dev/sda4) and let it be at least 500M.
>>> 2. For every node: mkfs.ext3 /dev/sda4
>>> 3. Add to the com_info section for every node the following:
>>> <chrootenv mountpoint="/var/comoonics/chroot" fstype="ext3"
>>> device="/dev/sda4" chrootdir="/var/comoonics/chroot"/>
>>> 4. Make a new initrd
>>> 5. Reboot every node
>>> That's it, now everything should be running on your local disk instead
>>> of tmpfs.
>>
>> OK - how does this work, then? Does it copy the initrd to the disk at
>> boot time? Or does the mkinitrd build the init root straight on that
>> partition? Or does something else happen? What does
>> /etc/sysconfig/comoonics-chroot do, then? I thought it had some part to
>> play in this.
>
> So first the initrd is loaded into RAM. This we cannot change. Then the
> localdisk is set up (linuxrc.generic.sh lines 279-288).

Aha! All becomes clear. So /etc/sysconfig/comoonics-chroot is not
something I need to worry about.

Gordan
From: Mark H. <hla...@at...> - 2007-10-12 08:21:30

> > I'm figuring that just adding rm -rf /lib/modules would help. I have
> > just added this to the clean_initrd function. I'll have a look at what
> > else can be pruned after the GFS root is mounted.
>
> Hmm, this doesn't appear to have worked. I think it happens too late. I'd
> need to delete these before the new chroot is fired up. Any
> suggestion on where would be a good time to put this?

The chroot environment is built in etc/rhel{4,5}/boot-lib.sh:create_chroot().
There might be a more intelligent way to create the chroot environment ;-)

Mark
From: Marc G. <gr...@at...> - 2007-10-12 08:19:36

On Friday 12 October 2007 10:03:21 Gordan Bobic wrote:
> On Fri, 12 Oct 2007, Marc Grimme wrote:
>>>>> Now, in theory, I should be able to bring up another node on the same
>>>>> file system. All I would need to do is clone the /boot partition to
>>>>> the other box, and it should just come up.
>>>>
>>>> Why cloning it and not using the same? Isn't that possible? We are
>>>> always doing it this way.
>>>
>>> Because I'm not booting this off DHCP. I'm booting the kernel and the
>>> initrd off the local disk. So I need to clone the boot partition with
>>> the kernel and the initrd to each of the nodes.
>>
>> ok.
>> How about PXE? IMHO you could use one shared bootimage, couldn't you?
>
> Sure I could - but I'd still prefer to reclaim all the initrd memory. No
> point in wasting it when I have an 80GB RAID1 local disk that'll only ever
> get used for swap and /tmp.
>
>>>>> As far as unsharing things under /var, I _think_ only /var/lock
>>>>> actually needs to be unshared. Can I do this with the running image
>>>>> with:
>>>>>
>>>>> com-mkcdsl -r / -a /var/lock
>>>>
>>>> you can skip the -r / - it is the default.
>>>> How about /var/run, /var/log, /var/cache, /var/tmp, /var/spool? All of
>>>> these normally need to be hostdependent.
>>>
>>> I'm not sure why /var/cache and /var/spool would need to be host
>>> dependent. I can see reasons why I'd want them to be shared.
>>
>> I think e.g. /var/spool/mail or just from the name it should be. But it's
>> up to you.
>
> I would _definitely_ prefer to have /var/spool/mail shared. More to the
> point, I'm planning to use this cluster for a big mail system with
> maildirs, so it'd better work! :-p
>
>>> I agree that /var/run and /var/lock should be private.
>>>
>>> It would be _nice_ to have a shared /var/log, but from past experience,
>>> the logs will get messed up when multiple syslogs try to write to them.
>>> Is there a shared logging solution for this? I know I can pick a master
>>> log node and get syslog pointed at this, but this won't work for all the
>>> other non-syslog services (e.g. Apache).
>>
>> Why did I want to say (use a syslog-server)? Right, with apache it does
>> not work. For e.g. apache we've written a log analysis tool to merge the
>> logs. It's in the addons channel and is called mgrep.
>> I think I also read a howto on integrating apache into syslog somewhere.
>
> Or there is a Spread based Apache logging system. I know there are
> workarounds. Shame logging doesn't work as atomic writes - that would
> have made things much easier for this scenario... :-(
>
>>> I plan to link /var/tmp to /tmp, and have /tmp mounted to a big local
>>> partition (local disks are only planned to have /boot, /tmp and swap).
>>>
>>> Which brings me to the next question - how do I use a local disk
>>> partition instead of the initrd? What's the procedure for that? It seems
>>> a more efficient solution than relying on a ramdisk that eats memory
>>> after booting up when there is plenty of local disk space available. How
>>> do I use /etc/sysconfig/comoonics-chroot ?
>>
>> Yes. So I suppose you don't want to configure your local disk with lvm
>> ;-) .
>
> LOL! I'd prefer not. :-)
>
>> So I'll explain it without.
>> It's basically quite easy:
>> 1. For every node: spare one partition for the chroot (let's say it
>> is /dev/sda4) and let it be at least 500M.
>> 2. For every node: mkfs.ext3 /dev/sda4
>> 3. Add to the com_info section for every node the following:
>> <chrootenv mountpoint="/var/comoonics/chroot" fstype="ext3"
>> device="/dev/sda4" chrootdir="/var/comoonics/chroot"/>
>> 4. Make a new initrd
>> 5. Reboot every node
>> That's it, now everything should be running on your local disk instead
>> of tmpfs.
>
> OK - how does this work, then? Does it copy the initrd to the disk at
> boot time? Or does the mkinitrd build the init root straight on that
> partition? Or does something else happen? What does
> /etc/sysconfig/comoonics-chroot do, then? I thought it had some part to
> play in this.
>
> Gordan

So first the initrd is loaded into RAM. This we cannot change. Then the
localdisk is set up (linuxrc.generic.sh lines 279-288).

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2007-10-12 08:07:08

>>> Is the ram disk that remains mounted on /var/comoonics/chroot supposed
>>> to be 176MB? About 75MB of this is kernel drivers. Surely these are no
>>> longer required once the root image is mounted, because they can be
>>> loaded from the GFS file system now. I don't mind a MB or two remaining,
>>> but 176 seems a little excessive...
>>
>> Feel free to remove those files. Yes, I agree. It would be ok to delete
>> the kernel modules.
>> Again, normally this chroot is automatically moved to a local disk (the
>> same that is used for swap and /tmp) because these data are not
>> important. That's basically the reason why we do not clean up the tmpfs.
>
> I'm figuring that just adding rm -rf /lib/modules would help. I have just
> added this to the clean_initrd function. I'll have a look at what else can
> be pruned after the GFS root is mounted.

Hmm, this doesn't appear to have worked. I think it happens too late. I'd
need to delete these before the new chroot is fired up. Any suggestion on
where would be a good time to put this?

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 08:05:27

On Fri, 12 Oct 2007, Marc Grimme wrote:

>> Assuming you would like me to submit patches for this, how would you like
>> them? I think they are split up across multiple RPMs, so I'm guessing a
>> unified diff wouldn't be ideal. I could just attach the files that I've
>> changed. (I'd prefer the latter option, because I haven't kept
>> backups of the old versions.) Please advise.

OK, I'll wait until I have this last bit with the local disk backed initial
root sorted out. :-)

Gordan
From: Gordan B. <go...@bo...> - 2007-10-12 08:03:24

On Fri, 12 Oct 2007, Marc Grimme wrote:

>>>> Now, in theory, I should be able to bring up another node on the same
>>>> file system. All I would need to do is clone the /boot partition to the
>>>> other box, and it should just come up.
>>>
>>> Why cloning it and not using the same? Isn't that possible? We are
>>> always doing it this way.
>>
>> Because I'm not booting this off DHCP. I'm booting the kernel and the
>> initrd off the local disk. So I need to clone the boot partition with the
>> kernel and the initrd to each of the nodes.
>
> ok.
> How about PXE? IMHO you could use one shared bootimage, couldn't you?

Sure I could - but I'd still prefer to reclaim all the initrd memory. No
point in wasting it when I have an 80GB RAID1 local disk that'll only ever
get used for swap and /tmp.

>>>> As far as unsharing things under /var, I _think_ only /var/lock
>>>> actually needs to be unshared. Can I do this with the running image
>>>> with:
>>>>
>>>> com-mkcdsl -r / -a /var/lock
>>>
>>> you can skip the -r / - it is the default.
>>> How about /var/run, /var/log, /var/cache, /var/tmp, /var/spool? All of
>>> these normally need to be hostdependent.
>>
>> I'm not sure why /var/cache and /var/spool would need to be host
>> dependent. I can see reasons why I'd want them to be shared.
>
> I think e.g. /var/spool/mail or just from the name it should be. But it's
> up to you.

I would _definitely_ prefer to have /var/spool/mail shared. More to the
point, I'm planning to use this cluster for a big mail system with
maildirs, so it'd better work! :-p

>> I agree that /var/run and /var/lock should be private.
>>
>> It would be _nice_ to have a shared /var/log, but from past experience,
>> the logs will get messed up when multiple syslogs try to write to them.
>> Is there a shared logging solution for this? I know I can pick a master
>> log node and get syslog pointed at this, but this won't work for all the
>> other non-syslog services (e.g. Apache).
>
> Why did I want to say (use a syslog-server)? Right, with apache it does
> not work. For e.g. apache we've written a log analysis tool to merge the
> logs. It's in the addons channel and is called mgrep.
> I think I also read a howto on integrating apache into syslog somewhere.

Or there is a Spread based Apache logging system. I know there are
workarounds. Shame logging doesn't work as atomic writes - that would have
made things much easier for this scenario... :-(

>> I plan to link /var/tmp to /tmp, and have /tmp mounted to a big local
>> partition (local disks are only planned to have /boot, /tmp and swap).
>>
>> Which brings me to the next question - how do I use a local disk
>> partition instead of the initrd? What's the procedure for that? It seems
>> a more efficient solution than relying on a ramdisk that eats memory
>> after booting up when there is plenty of local disk space available. How
>> do I use /etc/sysconfig/comoonics-chroot ?
>
> Yes. So I suppose you don't want to configure your local disk with lvm
> ;-) .

LOL! I'd prefer not. :-)

> So I'll explain it without.
> It's basically quite easy:
> 1. For every node: spare one partition for the chroot (let's say it
> is /dev/sda4) and let it be at least 500M.
> 2. For every node: mkfs.ext3 /dev/sda4
> 3. Add to the com_info section for every node the following:
> <chrootenv mountpoint="/var/comoonics/chroot" fstype="ext3"
> device="/dev/sda4" chrootdir="/var/comoonics/chroot"/>
> 4. Make a new initrd
> 5. Reboot every node
> That's it, now everything should be running on your local disk instead of
> tmpfs.

OK - how does this work, then? Does it copy the initrd to the disk at boot
time? Or does the mkinitrd build the init root straight on that partition?
Or does something else happen? What does /etc/sysconfig/comoonics-chroot
do, then? I thought it had some part to play in this.

Gordan
From: Marc G. <gr...@at...> - 2007-10-12 08:00:49

On Friday 12 October 2007 09:56:32 Gordan Bobic wrote:
> Assuming you would like me to submit patches for this, how would you like
> them? I think they are split up across multiple RPMs, so I'm guessing a
> unified diff wouldn't be ideal. I could just attach the files that I've
> changed. (I'd prefer the latter option, because I haven't kept
> backups of the old versions.) Please advise.
>
> Gordan

Do what you like best. It's ok with me.

--
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/ http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2007-10-12 07:56:35

Assuming you would like me to submit patches for this, how would you like
them? I think they are split up across multiple RPMs, so I'm guessing a
unified diff wouldn't be ideal. I could just attach the files that I've
changed. (I'd prefer the latter option, because I haven't kept backups of
the old versions.) Please advise.

Gordan