From: Gordan B. <go...@bo...> - 2009-02-08 14:06:52
Marc Grimme wrote:
> On Sunday 08 February 2009 14:29:42 Gordan Bobic wrote:
>> Marc Grimme wrote:
>>> On Thursday 05 February 2009 23:11:24 Gordan Bobic wrote:
>>>> It would appear that OSR doesn't start up MD software RAID arrays.
> ...
>> Well, I just got shared root working on GlusterFS, which is effectively
>> a FUSE-based stackable replication file system. It requires a standard
>> underlying file system with xattr support (e.g. ext3). This in turn has
>> to be backed by a block device, and in my case, that block device is an
>> MD software RAID.
>
> Ok, I thought about something like this, because I think it's not usable
> for cluster filesystems like GFS.

Indeed, it isn't. Nobody seems to know when MD will get any cluster
awareness. :(

>> I'm not talking about detecting the presence of mdadm devices. I'm
>> talking about detecting the presence of /sbin/mdadm in the init-root. In
>> other words, if you want MD support, just add rpm and file mdadm.list
>> files and replace the line I added ("mdadm --assemble --scan") with
>> something more like:
>>
>> if [ -x /sbin/mdadm ]; then
>>     mdadm --assemble --scan
>> fi
>>
>> That's all I was talking about, really. This will check for MD markers
>> on all the available disks and assemble any arrays it can.
>
> I think that should do, especially if we normally don't import it, but
> only if the rpm comoonics-bootimage-extras-md is installed.
> Ok, forget about the rest for now.

OK. I'll send a new patch later. :)

>> I'm not sure this is worthwhile, though. It would be much more
>> straightforward to just add the if block mentioned above and have
>> something like an extras-mdadm package that just adds the
>> /etc/comoonics/bootimage/files.initrd.d/mdadm.list
>> /etc/comoonics/bootimage/rpms.initrd.d/mdadm.list
>> files (with a dependency on the mdadm rpm). That means the total bloat
>> is 3 lines in
>> /opt/atix/comoonics-bootimage/boot-scripts/etc/hardware-lib.sh
>> if MD support isn't actually required for the specific build.
>>
> ...
>>> No, not yet. I didn't forget it but had no time to really think about
>>> it. I'm still not sure how to use it best.
>> OK. :)
>> IIRC, the patch I provided makes the parameter optional and non-default
>> anyway. So without the -l parameter it'll do the exact same thing as it
>> does now (i.e. bundle all the drivers for the current kernel).
>
> Hm. Yes, I should think about it again.
> Can I still use the old patch you've provided?

It applied cleanly last time I tried it, but I'll double-check.

>>> With the python packages, the .pyo/.pyc is a point. I'll think about
>>> adding something like "<package> *.py" to the list files listing python
>>> rpms. That means only files ending in .py will be included. And no, I
>>> don't think it works negatively. ;-)
>> What about not including them in the rpm in the first place? And/or
>> perhaps only generating them in the RPM post-install?
>
> I don't want to risk it taking even more time than it does now, as it
> already takes some time to build an initrd.

There may be ways to speed it up; I'll have a think about it. Most of the
time seems to be spent on context switching between process invocations.
If we can find a way to extract the file lists from all the RPMs in
question in one go (by specifying multiple packages as parameters to a
single rpm call - not sure if this'll work), then I suspect the build time
would be cut massively, since this step seems to be what takes up most of
it. The only drawback is that filters would no longer be applicable per
rpm, only globally. I'm not sure whether that is actually a problem, and
if it is, how big.
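Roughly what I have in mind (the package names below are just placeholders
for illustration, not the real list - that would still come from the
rpms.initrd.d list files):

    # current approach: one rpm invocation (and one fork) per package
    for pkg in mdadm gfs-utils python; do
        rpm -ql "$pkg"
    done

    # combined: a single invocation listing the files of all packages
    rpm -ql mdadm gfs-utils python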
The second longest part is the compression of the initrd. Using a gzip
binary built specifically for the target platform and optimized with ICC
yields a speed-up of around 20% or more.

> BTW, we can spare 20 of 150 MB if we remove the *.pyc/*.pyo files.

That's not too bad. A similar amount, if not more, could be saved by
pruning all the unused kernel modules (a rough sketch of both is below).
:) RH kernels come compiled with everything and the kitchen sink. :(

Gordan
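The kind of pruning I mean would look something like this (the staging
directory and module path are only illustrative and would need checking
against the real initrd build tree):

    # drop compiled python bytecode from the staged initrd tree
    find /tmp/initrd-build -name '*.py[co]' -delete

    # drop module subtrees the cluster nodes will never load, e.g. sound
    # drivers; the exclusion list would have to be maintained by hand
    rm -rf /tmp/initrd-build/lib/modules/$(uname -r)/kernel/sound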