Archive: posts per month

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2006 | 1   |     |     |     |     |     |     |     |     |     |     |     |
| 2007 |     |     |     |     |     |     |     |     |     | 105 | 10  | 7   |
| 2008 |     | 31  | 13  | 7   |     | 2   | 1   |     | 4   |     | 23  |     |
| 2009 | 25  | 24  | 10  | 8   | 4   | 6   | 27  | 1   |     | 2   | 7   | 25  |
| 2010 |     | 7   |     | 1   |     |     |     |     |     | 2   |     |     |
| 2011 |     |     |     |     |     |     |     |     |     |     | 3   | 1   |
From: Marc G. <gr...@at...> - 2008-11-21 07:23:31
On Thursday 20 November 2008 22:55:42 Gordan Bobic wrote:

> Marc Grimme wrote:
> > Hi Gordan,
> >
> > On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
> >> Hi,
> >>
> >> I'm working on adding GlusterFS support for OSR. I know I mentioned this months ago, but I've waited until glusterfs is actually stable enough to use in production (well, stable enough for me, anyway).
> >
> > ;-)
> >
> >> The client-only case will be simple enough, but for a client-with-local-storage case, it's a bit more complicated. GlusterFS uses a local file system as backing storage. That means that it needs to mount a local ext3 file system before mounting the glusterfs volume via fuse on top, and chroots into it for the 2nd stage boot.
> >>
> >> So, I can use the <rootvolume> entry for the glusterfs part, but before mounting that as the rootfs, I first have to mount the underlying ext3 file system as a stage 1.5 rootfs.
> >>
> >> I'm currently thinking of implementing it by having <chrootenv> contain the glusterfs backing store. This would be fine as long as the <chrootenv> volume can be considered clobber-safe (i.e. extracting the initrd to it won't ever wipe a path (e.g. /gluster/) not contained within the initrd file structure).
> >>
> >> 1) Does this sound like a reasonable way to do it?
> >
> > Hmm. I don't like the idea of using the chroot environment in a way it is not expected to be used.
>
> I see it more the other way around: not as misusing the chroot environment for data storage, but as misusing the data store for the chroot environment. ;)

You got that point. But there is no alternative, is there? Still, the idea of the chroot was to build a chroot (either on a local fs or in memory) for services that need to be run as a requirement for keeping a cluster fs running without residing on the cluster filesystem, AND to have some kind of last-resort way to access a frozen cluster (fenceacksv, I hate that name).

> >> 2) Is it safe to assume that chrootenv won't get silently clobbered? I know it doesn't seem to happen, but I'm still a little nervous about it...
> >
> > I agree, although I don't think it will be clobbered.
>
> The main reason why I suggested it is because it means one fewer partition is required (fewer partitions = more flexible, less forward planning required, less wasted space). I even went as far as thinking about using the same partition for kernel images. It would mean there is only one data volume (plus a swap partition).

As you like it.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: Marc G. <gr...@at...> - 2008-11-21 07:19:11
On Friday 21 November 2008 00:15:56 Gordan Bobic wrote:

> Gordan Bobic wrote:
> > I bumped into another problem, though:
> >
> > # com-mkcdslinfrastructure -r /gluster/root -i
> > Traceback (most recent call last):
> >   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
> >     from comoonics.cdsl.ComCdslRepository import *
> >   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
> >     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> > NameError: name 'cdslDefaults_element' is not defined
>
> Digging a little deeper, it turns out that although the package versions (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the new setup (x86-64), the contents of /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally different. The new version seems to have a bunch of definitions that, it would seem, break things.
>
> I replaced it with the old version, and it now seems to complete without errors.
>
> Am I correct in guessing this is a packaging bug?

Yes, you are. I don't know how this could happen, but I bumped into that bug myself some time ago. You might want to try more recent versions; this should be fixed there.

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
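Marc's suggestion to try a more recent build can be checked against the comoonics yum channel from the same machine; a minimal sketch (whether a newer build is actually published in your configured repository is an assumption):

```bash
# compare the installed build with what the channel offers
rpm -q comoonics-cdsl-py
yum list available 'comoonics-cdsl-py*'

# pull in the fixed build if one is available
yum update comoonics-cdsl-py
```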
From: Gordan B. <go...@bo...> - 2008-11-20 23:16:10
Gordan Bobic wrote:

> I bumped into another problem, though:
>
> # com-mkcdslinfrastructure -r /gluster/root -i
> Traceback (most recent call last):
>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
>     from comoonics.cdsl.ComCdslRepository import *
>   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> NameError: name 'cdslDefaults_element' is not defined

Digging a little deeper, it turns out that although the package versions (comoonics-cdsl-py-0.2-11) are the same on my old setup (i386) and the new setup (x86-64), the contents of /usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py are totally different. The new version seems to have a bunch of definitions that, it would seem, break things.

I replaced it with the old version, and it now seems to complete without errors.

Am I correct in guessing this is a packaging bug?

Gordan
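For readers hitting the same traceback: a NameError at import time in an __init__.py means the module references a name before (or without) defining it. The snippet below is only a hypothetical reconstruction of that failure mode; the path value and surrounding code are invented, only the two names from the traceback are real.

```python
import os

# Illustrative value only; the real comoonics/cdsl/__init__.py uses its own.
cdsls_path = "/var/lib/comoonics/cdsl"

# The name from the traceback has to exist before the line below runs.
# If it is defined further down the file (or only in another module),
# the import dies with exactly the NameError shown above.
cdslDefaults_element = "defaults"

# This mirrors the failing line 28 from the traceback:
defaults_path = os.path.join(cdsls_path, cdslDefaults_element)

if __name__ == "__main__":
    print(defaults_path)
```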
From: Gordan B. <go...@bo...> - 2008-11-20 21:55:51
Marc Grimme wrote:

> Hi Gordan,
>
> On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
>> Hi,
>>
>> I'm working on adding GlusterFS support for OSR. I know I mentioned this months ago, but I've waited until glusterfs is actually stable enough to use in production (well, stable enough for me, anyway).
>
> ;-)
>
>> The client-only case will be simple enough, but for a client-with-local-storage case, it's a bit more complicated. GlusterFS uses a local file system as backing storage. That means that it needs to mount a local ext3 file system before mounting the glusterfs volume via fuse on top, and chroots into it for the 2nd stage boot.
>>
>> So, I can use the <rootvolume> entry for the glusterfs part, but before mounting that as the rootfs, I first have to mount the underlying ext3 file system as a stage 1.5 rootfs.
>>
>> I'm currently thinking of implementing it by having <chrootenv> contain the glusterfs backing store. This would be fine as long as the <chrootenv> volume can be considered clobber-safe (i.e. extracting the initrd to it won't ever wipe a path (e.g. /gluster/) not contained within the initrd file structure).
>>
>> 1) Does this sound like a reasonable way to do it?
>
> Hmm. I don't like the idea of using the chroot environment in a way it is not expected to be used.

I see it more the other way around: not as misusing the chroot environment for data storage, but as misusing the data store for the chroot environment. ;)

>> 2) Is it safe to assume that chrootenv won't get silently clobbered? I know it doesn't seem to happen, but I'm still a little nervous about it...
>
> I agree, although I don't think it will be clobbered.

The main reason why I suggested it is because it means one fewer partition is required (fewer partitions = more flexible, less forward planning required, less wasted space). I even went as far as thinking about using the same partition for kernel images. It would mean there is only one data volume (plus a swap partition).

> But why not go this way:
> There is a method/function "clusterfs_services_start" which itself calls ${clutype}_services_start, which should be implemented by etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs, and therefore there should be a library glusterfs-lib.sh. The function clusterfs_services_start should do all the preparations to mount the cluster filesystem and could also mount a local filesystem as a prerequisite. So I would put it there.

Fair enough, that sounds reasonable. Just have to make sure the backing volume is listed in the fstab of the init-root.

> A rootfs can be specified as follows:
> <rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

Indeed, I figured this out already. :)

I bumped into another problem, though:

# com-mkcdslinfrastructure -r /gluster/root -i
Traceback (most recent call last):
  File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
    from comoonics.cdsl.ComCdslRepository import *
  File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
    defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
NameError: name 'cdslDefaults_element' is not defined

What am I missing?

Gordan
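For the backing-volume-in-fstab point above, the init-root fstab would need something along these lines; the device name, mount point and options are illustrative, not taken from an actual OSR setup:

```
# /etc/fstab fragment inside the init-root (illustrative names)
# ext3 backing store that glusterfs exports; must be mounted before glusterfs
/dev/sda3   /gluster   ext3   defaults,noatime   0 0
# the glusterfs root itself is described by the <rootvolume fstype="glusterfs"/>
# entry in the cluster configuration rather than by a line here
```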
From: Marc G. <gr...@at...> - 2008-11-20 20:41:19
Hi Gordan,

On Thursday 20 November 2008 21:10:43 Gordan Bobic wrote:
> Hi,
>
> I'm working on adding GlusterFS support for OSR. I know I mentioned this months ago, but I've waited until glusterfs is actually stable enough to use in production (well, stable enough for me, anyway).

;-)

> The client-only case will be simple enough, but for a client-with-local-storage case, it's a bit more complicated. GlusterFS uses a local file system as backing storage. That means that it needs to mount a local ext3 file system before mounting the glusterfs volume via fuse on top, and chroots into it for the 2nd stage boot.
>
> So, I can use the <rootvolume> entry for the glusterfs part, but before mounting that as the rootfs, I first have to mount the underlying ext3 file system as a stage 1.5 rootfs.
>
> I'm currently thinking of implementing it by having <chrootenv> contain the glusterfs backing store. This would be fine as long as the <chrootenv> volume can be considered clobber-safe (i.e. extracting the initrd to it won't ever wipe a path (e.g. /gluster/) not contained within the initrd file structure).
>
> 1) Does this sound like a reasonable way to do it?

Hmm. I don't like the idea of using the chroot environment in a way it is not expected to be used.

> 2) Is it safe to assume that chrootenv won't get silently clobbered? I know it doesn't seem to happen, but I'm still a little nervous about it...

I agree, although I don't think it will be clobbered. But why not go this way: there is a method/function "clusterfs_services_start" which itself calls ${clutype}_services_start, which should be implemented by etc/${rootfs}-lib.sh. Your $rootfs should be glusterfs, and therefore there should be a library glusterfs-lib.sh. The function clusterfs_services_start should do all the preparations to mount the cluster filesystem and could also mount a local filesystem as a prerequisite. So I would put it there.

A rootfs can be specified as follows:
<rootvolume name="/dev/whicheverdeviceyoulike" fstype="glusterfs"/>

What do you think?

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
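A minimal sketch of what such an etc/glusterfs-lib.sh hook might look like, following the ${rootfs}-lib.sh convention Marc describes. Everything except the *_services_start naming (device, mount point, volfile path, daemon invocation) is an assumption and not the actual OSR or glusterfs code:

```bash
#!/bin/bash
# etc/glusterfs-lib.sh (hypothetical sketch)

glusterfs_services_start() {
    local backing_dev="/dev/sda3"     # local ext3 backing store (assumed)
    local backing_mnt="/gluster"      # where the exported bricks live (assumed)

    # prerequisite: mount the local ext3 file system that backs the bricks
    mkdir -p "$backing_mnt"
    mount -t ext3 "$backing_dev" "$backing_mnt" || return 1

    # glusterfs is mounted via fuse, so make sure the module is available
    modprobe fuse || return 1

    # start the glusterfs server side if this node also exports storage
    # (path and options are placeholders for whatever the volume spec needs)
    if [ -x /usr/sbin/glusterfsd ]; then
        /usr/sbin/glusterfsd -f /etc/glusterfs/server.vol || return 1
    fi
    return 0
}

glusterfs_services_stop() {
    killall glusterfsd 2>/dev/null
    umount /gluster 2>/dev/null
    return 0
}
```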
From: Gordan B. <go...@bo...> - 2008-11-20 20:10:49
Hi,

I'm working on adding GlusterFS support for OSR. I know I mentioned this months ago, but I've waited until glusterfs is actually stable enough to use in production (well, stable enough for me, anyway).

The client-only case will be simple enough, but for a client-with-local-storage case, it's a bit more complicated. GlusterFS uses a local file system as backing storage. That means that it needs to mount a local ext3 file system before mounting the glusterfs volume via fuse on top, and chroots into it for the 2nd stage boot.

So, I can use the <rootvolume> entry for the glusterfs part, but before mounting that as the rootfs, I first have to mount the underlying ext3 file system as a stage 1.5 rootfs.

I'm currently thinking of implementing it by having <chrootenv> contain the glusterfs backing store. This would be fine as long as the <chrootenv> volume can be considered clobber-safe (i.e. extracting the initrd to it won't ever wipe a path (e.g. /gluster/) not contained within the initrd file structure).

1) Does this sound like a reasonable way to do it?

2) Is it safe to assume that chrootenv won't get silently clobbered? I know it doesn't seem to happen, but I'm still a little nervous about it...

Thanks.

Gordan
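For reference, the boot sequence described above boils down to roughly the following steps. Device name, volfile path and mount point are examples, and the exact glusterfs client invocation differs between releases, so treat this as a shape rather than a recipe:

```bash
# 1. mount the local ext3 backing store (the "stage 1.5" rootfs)
mount -t ext3 /dev/sda3 /gluster

# 2. mount the glusterfs volume via fuse on top of it
modprobe fuse
glusterfs -f /etc/glusterfs/root.vol /mnt/newroot

# 3. the init then chroots into /mnt/newroot for the 2nd stage boot
```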
From: Gordan B. <go...@bo...> - 2008-11-20 13:21:06
Marc Grimme wrote:
> this was easy to fix.
> What do you think? Better now?

Yup, it now passes the "it works for me" test. :)

Gordan
From: Marc G. <gr...@at...> - 2008-11-20 12:37:28
Hi,

this was easy to fix. What do you think? Better now?

Marc
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: Gordan B. <go...@bo...> - 2008-11-20 03:49:13
Hi,

Just got back to using OSR, and there appears to be a bug in the FAQ on RHEL5 yum repository setup. It uses the variable $arch. I'm pretty sure this should be $basearch ($arch on x86-64 returns ia32e, whereas $basearch returns the correct value, x86_64).

Gordan
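For anyone fixing their repo file by hand, the corrected entry would look something like the following. The baseurl layout is an assumption based on the download server paths mentioned elsewhere on this list; only the $basearch part is the actual fix being discussed:

```
[comoonics]
name=comoonics production packages for RHEL5
baseurl=http://download.atix.de/yum/comoonics/redhat-el5/productive/$basearch/
enabled=1
gpgcheck=0
```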
From: Marc G. <gr...@at...> - 2008-11-07 19:35:36
Thomas,

OK, this looks like two things, I'd say.

The first is a bug:

11:25:32 CRITICAL: Traceback (most recent call first):
  File "/usr/lib/anaconda/iw/osr_gui.py", line 51, in getScreen
    node.addNetdev(OSRClusterNodeNetdev(node, net.available().keys()[0]))

We expect anaconda to find a valid NIC. It looks like anaconda did not detect your Realtek NICs. It would be nice if you could file a bug; we'll include the fix in the next release.

The second is that anaconda does not find the NICs in your server. That means that even if we fix that bug, you won't have a valid cluster without NICs. A valid solution would be to change the NIC: do you have another NIC (Intel, Broadcom, ...) that you can plug into this server?

Regards,
Marc.

On Thursday 06 November 2008 12:00:27 Thomas Kisler wrote:
> Hello,
>
> thank you for your messages. The syslog option would have been nice, but it seems that not recognizing the network interface is the bug. Therefore I was not able to copy the exception details to another host.
>
> The missing piece was the hint about anaconda.log. Although it does not reside in /var/log, knowing the name helped me find it in /tmp. There is the anaconda.log and the (maybe) more interesting anacdump.txt, which contains the exception details. I append both files and some information on the hardware. Tell me if I can help you with some more information.
>
> [...]
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
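The failing line quoted above indexes net.available().keys()[0], which raises an exception when anaconda detects no NICs and the dict is empty. A self-contained illustration of that failure mode and a defensive variant; the helper below is invented for illustration and is not the osr_gui.py code:

```python
def first_available_nic(available):
    """Return the first detected NIC name, or None if nothing was detected."""
    keys = list(available.keys())
    if not keys:
        return None        # let the caller show a proper error dialog instead
    return keys[0]

print(first_available_nic({"eth0": "some-driver"}))  # -> eth0
print(first_available_nic({}))                       # -> None, instead of a crash
```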
From: Thomas K. <ka...@rz...> - 2008-11-06 11:01:14
Hello,

thank you for your messages. The syslog option would have been nice, but it seems that not recognizing the network interface is the bug. Therefore I was not able to copy the exception details to another host.

The missing piece was the hint about anaconda.log. Although it does not reside in /var/log, knowing the name helped me find it in /tmp. There is the anaconda.log and the (maybe) more interesting anacdump.txt, which contains the exception details. I append both files and some information on the hardware. Tell me if I can help you with some more information.

---------------------------------------------
Motherboard: K9A2CF from MSI
---------------------------------------------

lspci gives the following information:

00:00.0 Host bridge: ATI Technologies Inc RD780 Northbridge only dual slot PCI-e_GFX and HT1 K8 part
00:02.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (external gfx0 port A)
00:06.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port C)
00:12.0 SATA controller: ATI Technologies Inc SB600 Non-Raid-5 SATA
00:13.0 USB Controller: ATI Technologies Inc SB600 USB (OHCI0)
00:13.1 USB Controller: ATI Technologies Inc SB600 USB (OHCI1)
00:13.2 USB Controller: ATI Technologies Inc SB600 USB (OHCI2)
00:13.3 USB Controller: ATI Technologies Inc SB600 USB (OHCI3)
00:13.4 USB Controller: ATI Technologies Inc SB600 USB (OHCI4)
00:13.5 USB Controller: ATI Technologies Inc SB600 USB Controller (EHCI)
00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 14)
00:14.1 IDE interface: ATI Technologies Inc SB600 IDE
00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
00:14.3 ISA bridge: ATI Technologies Inc SB600 PCI to LPC Bridge
00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge
00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Link Control
01:00.0 VGA compatible controller: nVidia Corporation G70 [GeForce 7300 GT] (rev a1)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)

---------------------------------------------
cpuinfo of one core:

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 2
model name      : AMD Phenom(tm) 9500 Quad-Core Processor
stepping        : 2
cpu MHz         : 2200.153
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 3
initial apicid  : 3
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs
bogomips        : 4400.43
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate

---------------------------------------------

Should I also create a new report at the sourceforge bug tracker?

Best regards

Thomas Kisler
------------------------

Reiner Rottmann wrote:
> Hello,
>
> during the Anaconda setup you also may switch to another console:
>
> * Alt-F1 - installation dialog
> * Alt-F2 - shell prompt
> * Alt-F3 - install log (messages from install program)
> * Alt-F4 - system log (messages from kernel, etc.)
> * Alt-F5 - other messages
>
> Maybe there are error messages displayed that will help debugging the issue.
>
> Also there are anaconda kernel boot parameters that will help the debugging process:
>
> * debug - Add a debug button to the UI that allows dropping into a python debugger.
> * nokill - A debugging option that prevents anaconda from terminating all running programs when a fatal error occurs
>
> Usually Anaconda stores a log in /var/log/anaconda.log. From a shell you may access the file.
>
> May I ask what hardware configuration do you use?
>
> Best regards,
> Reiner Rottmann
From: Reiner R. <rot...@at...> - 2008-11-06 10:22:25
Hello,

during the Anaconda setup you also may switch to another console:

* Alt-F1 - installation dialog
* Alt-F2 - shell prompt
* Alt-F3 - install log (messages from install program)
* Alt-F4 - system log (messages from kernel, etc.)
* Alt-F5 - other messages

Maybe there are error messages displayed that will help debugging the issue.

Also there are anaconda kernel boot parameters that will help the debugging process:

* debug - Add a debug button to the UI that allows dropping into a python debugger.
* nokill - A debugging option that prevents anaconda from terminating all running programs when a fatal error occurs

Usually Anaconda stores a log in /var/log/anaconda.log. From a shell you may access the file.

May I ask what hardware configuration do you use?

Best regards,
Reiner Rottmann
--
Gruss / Regards,
Dipl.-Ing. (FH) Reiner Rottmann
ATIX Informationstechnologie und Consulting AG
Einsteinstr. 10
85716 Unterschleissheim
Deutschland/Germany
Phone: +49-89 452 3538-12
Fax: +49-89 990 1766-0
Email: rot...@at...
PGP Key-ID: 0xCA67C5A6
www.atix.de | www.open-sharedroot.org
Vorstaende: Marc Grimme, Mark Hlawatschek, Thomas Merz (Vors.)
Vorsitzender des Aufsichtsrats: Dr. Martin Buss
Registergericht: Amtsgericht Muenchen
Registernummer: HRB 168930
USt.-Id.: DE209485962

On Wednesday 05 November 2008 11:21:31 am Thomas Kisler wrote:
> Hello,
>
> is there a sophisticated way to save a bug report within anaconda? The setup crashes right after the selection of the keyboard layout.
>
> Thanks in advance
>
> Thomas Kisler
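Putting the hints from this thread together, a typical debugging session might look like the following from the Alt-F2 shell. The target host is an example, the availability of scp in the install image is an assumption, and the log paths are the ones discussed in this thread (during installation the logs live under /tmp rather than /var/log):

```bash
# boot the installer with extra debugging, e.g. at the boot prompt:
#   linux debug nokill

# then, from the Alt-F2 shell, copy the logs off the machine for a bug report
scp /tmp/anaconda.log /tmp/anacdump.txt user@192.168.0.10:/tmp/
```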
From: Marc G. <gr...@at...> - 2008-11-05 21:10:08
On Wednesday 05 November 2008 11:21:31 Thomas Kisler wrote:
> Hello,
>
> is there a sophisticated way to save a bug report within anaconda? The setup crashes right after the selection of the keyboard layout.
>
> Thanks in advance
>
> Thomas Kisler

It depends on what you mean by crash (does the system freeze, do you fall back to a shell, or does it reboot?). You should be able to upload a "dump" from within anaconda and scp it to another host. You have to enable the debug mode (just start it with "linux debug"), or if that is not possible you can at least syslog all messages to another syslog host (I think this works by starting with "linux syslog=<ip>"). You should then get a button from which you should be able to scp the data to another host.

Let me know if that works, or if it at least helps.

Regards,
Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: Thomas K. <ka...@rz...> - 2008-11-05 10:36:49
Hello,

is there a sophisticated way to save a bug report within anaconda? The setup crashes right after the selection of the keyboard layout.

Thanks in advance

Thomas Kisler
From: Marc G. <gr...@at...> - 2008-09-24 08:49:02
Sources are available now. Sorry about that.

-marc

On Wednesday 24 September 2008 03:58:50 Sunil Mushran wrote:
> Are the comoonics packages open sourced (GPL or otherwise)? Wondering as I could not find the sources on the site.
>
> Marc Grimme wrote:
> > Hello,
> > I just wanted to inform you that we have successfully ported the Open-Sharedroot Cluster to be used with Novell SLES10 SP2 with OCFS2 1.4.1 (SuSE Version) and above.
> >
> > More information can be found here:
> > http://www.opensharedroot.org/documentation/sles10-ocfs2-shared-root-mini-howto
> >
> > Have fun and always share the root ;-)
> >
> > - Marc
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: Reiner R. <rot...@at...> - 2008-09-24 06:23:51
Hello,

On Wednesday 24 September 2008 03:58:50 am Sunil Mushran wrote:
> Are the comoonics packages open sourced (GPL or otherwise)? Wondering as I could not find the sources on the site.

Yes, of course they are! See the RPM information header for the license details, e.g.:

# rpm -qpi comoonics-bootimage-1.3-38.noarch.rpm | grep License | awk '{print $5}'
GPL

As most of the files are shell or Python based scripts, they are actually distributed as sources already. However, there are also source RPMs available on our download server:

http://download.atix.de/yum/comoonics/suse-linux-es10/preview/SRPMS/
http://download.atix.de/yum/comoonics/redhat-el5/productive/SRPMS/
http://download.atix.de/yum/comoonics/redhat-el4/productive/SRPMS/

They are usually distributed as soon as they pass the internal quality assurance.

-Reiner
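For completeness, fetching and rebuilding one of those source RPMs follows the usual rpmbuild workflow; the exact file name below is an assumption derived from the binary package mentioned above, so check the SRPMS directory listing for the real one:

```bash
# download a source RPM from one of the channels listed above and rebuild it
wget http://download.atix.de/yum/comoonics/redhat-el5/productive/SRPMS/comoonics-bootimage-1.3-38.src.rpm
rpmbuild --rebuild comoonics-bootimage-1.3-38.src.rpm
# the resulting binary RPMs end up under the build tree's RPMS/ directory
```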
From: Sunil M. <sun...@or...> - 2008-09-24 01:57:45
Are the comoonics packages open sourced (GPL or otherwise)? Wondering as I could not find the sources on the site.

Marc Grimme wrote:
> Hello,
> I just wanted to inform you that we have successfully ported the Open-Sharedroot Cluster to be used with Novell SLES10 SP2 with OCFS2 1.4.1 (SuSE Version) and above.
>
> More information can be found here:
> http://www.opensharedroot.org/documentation/sles10-ocfs2-shared-root-mini-howto
>
> Have fun and always share the root ;-)
>
> - Marc
From: Marc G. <gr...@at...> - 2008-09-23 19:02:10
Hello,

I just wanted to inform you that we have successfully ported the Open-Sharedroot Cluster to be used with Novell SLES10 SP2 with OCFS2 1.4.1 (SuSE Version) and above.

More information can be found here:
http://www.opensharedroot.org/documentation/sles10-ocfs2-shared-root-mini-howto

Have fun and always share the root ;-)

- Marc
--
Regards,
Marc Grimme / ATIX AG
http://www.atix.de/
http://www.open-sharedroot.org/
From: Marc G. <gr...@at...> - 2008-07-03 08:01:20
Hello,

we are very happy to announce the availability of the last and final official release candidate of the com.oonics open shared root cluster installation DVD (RC4).

The com.oonics open shared root cluster installation DVD allows the installation of a single-node open shared root cluster with the use of anaconda, the well-known installation software provided by Red Hat. After the installation, the open shared root cluster can be easily scaled up to more than a hundred cluster nodes.

You can now download the open shared root installation DVD from www.open-sharedroot.org. We are very interested in feedback. Please either file a bug or feature request, or post to the mailing list (see www.open-sharedroot.org).

More details can be found here:
http://open-sharedroot.org/news-archive/availability-of-rc4-of-the-com-oonics-version-of-anaconda

Note: the download ISOs are based on CentOS 5.1! RHEL5.1 versions will be provided on request.

Have fun testing it, and let us know what you're thinking.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: Sunil M. <Sun...@or...> - 2008-06-05 22:27:34
Thanks.

Marc Grimme wrote:
> Hello,
> I just wanted to inform you that we have successfully ported the Open-Sharedroot Cluster to be used with RHEL5.2/CentOS5.2 with OCFS2 1.3.9 and above.
>
> More information can be found here:
> http://www.open-sharedroot.org/documentation/rhel5-ocfs2-shared-root-mini-howto
>
> Have fun and let us know about your experience!
From: Marc G. <gr...@at...> - 2008-06-05 15:52:54
Hello,

I just wanted to inform you that we have successfully ported the Open-Sharedroot Cluster to be used with RHEL5.2/CentOS5.2 with OCFS2 1.3.9 and above.

More information can be found here:
http://www.open-sharedroot.org/documentation/rhel5-ocfs2-shared-root-mini-howto

Have fun and let us know about your experience!
--
Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: <go...@bo...> - 2008-04-17 11:18:40
On Thu, 17 Apr 2008, Marc Grimme wrote:

>>>>>> On a separate note, am I correct in presuming that the diet version of the initrd with the kernel drivers pruned and additional package filtering added, as per the patch I sent a while back, was not deemed a good idea?
>>>>>
>>>>> Thanks for reminding me. I forgot to answer, sorry.
>>>>>
>>>>> The idea itself is good. But originally and by concept the initrd is designed to be an initrd used for different hardware configurations.
>>>>
>>>> Same initrd for multiple configurations? Why is this useful? Different configurations could also run different kernels, which would invalidate the shared initrd concept...
>>>
>>> Not necessarily. It was a design idea and still is a kind of USP and, most important, something other customers use.
>>>
>>> Just a small example why. Let's suppose you have servers from HP from the same product branch (like HP DL38x) but of different generations. The onboard NICs on the older ones would use the tg3/bcm5700 driver, whereas newer generations use bnx2 drivers for their onboard NICs. And when bringing in IBM/Sun/Dell or whatever other servers it becomes more complicated. And all this should be handled by one single shared boot.
>>>
>>> Did this explain the problem?
>>
>> How do you work around the fact that each node needs a different modprobe.conf for the different NIC/driver bindings?
>
> The hardware detection takes place in the initrd, and the "generated" initrd is copied onto the root disk during the boot process.

Ah, OK. I didn't realize that this bit of NIC detection logic happens in the initrd. I thought it just went by modprobe.conf.

Anyway - attached is the updated patch for create-gfs-initrd-generic.sh. mkinitrd now takes the -l parameter (l for "light"). I thought about using getopt instead of getopts for long parameters, but since the current implementation uses the getopts bash builtin, I decided to stick with it. When -l is passed, only modules currently loaded and those listed in modprobe.conf are loaded into the initrd. This reduces the initrd image by about 10MB (in my case from 53MB to 43MB).

I have also attached the patch for chroot-lib.sh. This is the same patch I sent previously, which adds optional exclusion filtering in the rpm list files. The old format is still supported - if the 2nd filtering parameter (the one for excluding files) is omitted, it will be ignored and the exclusion not performed.

Gordan
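A rough sketch of the module selection the -l option is described as performing: currently loaded modules plus the ones explicitly bound in /etc/modprobe.conf. The actual patch against create-gfs-initrd-generic.sh will differ in detail; this is only an illustration of the idea.

```bash
#!/bin/bash
# Collect the "light" module list: modules loaded right now, plus the modules
# that /etc/modprobe.conf binds explicitly (e.g. "alias eth0 bnx2").

get_light_module_list() {
    {
        awk '{print $1}' /proc/modules
        [ -f /etc/modprobe.conf ] && awk '$1 == "alias" {print $3}' /etc/modprobe.conf
    } | sort -u
}

get_light_module_list
```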
From: Marc G. <gr...@at...> - 2008-04-17 09:59:51
On Thursday 17 April 2008 11:15:42 go...@bo... wrote:

> On Thu, 17 Apr 2008, Marc Grimme wrote:
>>> In the mirror client/server case, it would be similar to a DRBD+GFS setup, only scaleable to more than 2-3 nodes (IIRC, DRBD only supports up to 2-3 nodes at the moment). Each node would mount its local mirror as OSR (as it does with DRBD).
>>>
>>> The upshot is that, as far as I can make out, unlike GFS, split-brains are less of an issue in FS terms - GlusterFS would sort that out, so in theory we could have an n-node cluster with a quorum of 1.
>>>
>>> The only potential issue with that would be migration of IPs - if it split-brains, it would, in theory, cause an IP resource clash. But I think the scope for FS corruption would be removed. There might still be file clobbering on resync, but the FS certainly wouldn't get totally destroyed like with split-brain GFS and shared SAN storage (DRBD+GFS also has the same benefit that you get to keep at least one version of the FS after split-brain).
>>
>> And wouldn't the IP thing, if appropriate, be handled via a cluster manager (rgmanager)?
>
> Indeed it would, but that would still be susceptible to split-brain IP clashes. But fencing should, hopefully, stop that from ever happening.

Yes. The rgmanager, or even heartbeat or any other HA cluster software, has its own way of detecting and solving split-brain scenarios. The rgmanager uses the same functionality as GFS does.

>>>>> On a separate note, am I correct in presuming that the diet version of the initrd with the kernel drivers pruned and additional package filtering added, as per the patch I sent a while back, was not deemed a good idea?
>>>>
>>>> Thanks for reminding me. I forgot to answer, sorry.
>>>>
>>>> The idea itself is good. But originally and by concept the initrd is designed to be an initrd used for different hardware configurations.
>>>
>>> Same initrd for multiple configurations? Why is this useful? Different configurations could also run different kernels, which would invalidate the shared initrd concept...
>>
>> Not necessarily. It was a design idea and still is a kind of USP and, most important, something other customers use.
>>
>> Just a small example why. Let's suppose you have servers from HP from the same product branch (like HP DL38x) but of different generations. The onboard NICs on the older ones would use the tg3/bcm5700 driver, whereas newer generations use bnx2 drivers for their onboard NICs. And when bringing in IBM/Sun/Dell or whatever other servers it becomes more complicated. And all this should be handled by one single shared boot.
>>
>> Did this explain the problem?
>
> How do you work around the fact that each node needs a different modprobe.conf for the different NIC/driver bindings?

The hardware detection takes place in the initrd, and the "generated" initrd is copied onto the root disk during the boot process.

> Gordan

Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/
From: <go...@bo...> - 2008-04-17 09:37:11
On Thu, 17 Apr 2008, Marc Grimme wrote:

>> In the mirror client/server case, it would be similar to a DRBD+GFS setup, only scaleable to more than 2-3 nodes (IIRC, DRBD only supports up to 2-3 nodes at the moment). Each node would mount its local mirror as OSR (as it does with DRBD).
>>
>> The upshot is that, as far as I can make out, unlike GFS, split-brains are less of an issue in FS terms - GlusterFS would sort that out, so in theory we could have an n-node cluster with a quorum of 1.
>>
>> The only potential issue with that would be migration of IPs - if it split-brains, it would, in theory, cause an IP resource clash. But I think the scope for FS corruption would be removed. There might still be file clobbering on resync, but the FS certainly wouldn't get totally destroyed like with split-brain GFS and shared SAN storage (DRBD+GFS also has the same benefit that you get to keep at least one version of the FS after split-brain).
>
> And wouldn't the IP thing, if appropriate, be handled via a cluster manager (rgmanager)?

Indeed it would, but that would still be susceptible to split-brain IP clashes. But fencing should, hopefully, stop that from ever happening.

>>>> On a separate note, am I correct in presuming that the diet version of the initrd with the kernel drivers pruned and additional package filtering added, as per the patch I sent a while back, was not deemed a good idea?
>>>
>>> Thanks for reminding me. I forgot to answer, sorry.
>>>
>>> The idea itself is good. But originally and by concept the initrd is designed to be an initrd used for different hardware configurations.
>>
>> Same initrd for multiple configurations? Why is this useful? Different configurations could also run different kernels, which would invalidate the shared initrd concept...
>
> Not necessarily. It was a design idea and still is a kind of USP and, most important, something other customers use.
>
> Just a small example why. Let's suppose you have servers from HP from the same product branch (like HP DL38x) but of different generations. The onboard NICs on the older ones would use the tg3/bcm5700 driver, whereas newer generations use bnx2 drivers for their onboard NICs. And when bringing in IBM/Sun/Dell or whatever other servers it becomes more complicated. And all this should be handled by one single shared boot.
>
> Did this explain the problem?

How do you work around the fact that each node needs a different modprobe.conf for the different NIC/driver bindings?

Gordan
From: Marc G. <gr...@at...> - 2008-04-17 07:48:59
On Wednesday 16 April 2008 15:13:15 go...@bo... wrote:

> On Wed, 16 Apr 2008, Marc Grimme wrote:
>>> Does anyone think that adding support for this would be a good idea? I'm working with GlusterFS at the moment, so could try to add the relevant init stuff when I've ironed things out a bit. Maybe as a contrib package, like DRBD?
>>
>> After going roughly over the features and concepts of Glusterfs, I would doubt it being an easy task to build an open-sharedroot cluster with it, but why not.
>
> It shouldn't be too different to the OSR NFS setup. There are two options:
> 1) diskless client
> 2) mirror client/server
>
> In the diskless client case it would be pretty much the same as NFS.

Agreed. I forgot about NFS ;-).

> In the mirror client/server case, it would be similar to a DRBD+GFS setup, only scaleable to more than 2-3 nodes (IIRC, DRBD only supports up to 2-3 nodes at the moment). Each node would mount its local mirror as OSR (as it does with DRBD).
>
> The upshot is that, as far as I can make out, unlike GFS, split-brains are less of an issue in FS terms - GlusterFS would sort that out, so in theory we could have an n-node cluster with a quorum of 1.
>
> The only potential issue with that would be migration of IPs - if it split-brains, it would, in theory, cause an IP resource clash. But I think the scope for FS corruption would be removed. There might still be file clobbering on resync, but the FS certainly wouldn't get totally destroyed like with split-brain GFS and shared SAN storage (DRBD+GFS also has the same benefit that you get to keep at least one version of the FS after split-brain).

And wouldn't the IP thing, if appropriate, be handled via a cluster manager (rgmanager)?

>> Still, it sounds quite promising, and if you like you are welcome to contribute.
>
> OK, thanks. I just wanted to float the idea and see if there are any strong objections to it first. :-)
>
>>> On a separate note, am I correct in presuming that the diet version of the initrd with the kernel drivers pruned and additional package filtering added, as per the patch I sent a while back, was not deemed a good idea?
>>
>> Thanks for reminding me. I forgot to answer, sorry.
>>
>> The idea itself is good. But originally and by concept the initrd is designed to be an initrd used for different hardware configurations.
>
> Same initrd for multiple configurations? Why is this useful? Different configurations could also run different kernels, which would invalidate the shared initrd concept...

Not necessarily. It was a design idea and still is a kind of USP and, most important, something other customers use.

Just a small example why. Let's suppose you have servers from HP from the same product branch (like HP DL38x) but of different generations. The onboard NICs on the older ones would use the tg3/bcm5700 driver, whereas newer generations use bnx2 drivers for their onboard NICs. And when bringing in IBM/Sun/Dell or whatever other servers it becomes more complicated. And all this should be handled by one single shared boot.

Did this explain the problem?

>> That implies we need different kernel modules and tools on the same cluster.
>
> Sure - but clusters like this, at least in my experience, generally tend to be homogeneous when there is choice.

Not in our experience. See above.

> The way I made the patch allow for this is that both the loaded modules and all the ones listed in /etc/modprobe.conf get included - just in case. So modprobe.conf could be (ab)used to load additional modules. But I accept this is potentially a somewhat cringeworthy hack when used to force-load additional modules into the initrd.
>
>> Say you would use a combination of virtualized and unvirtualized nodes in a cluster. As of now that is possible. Or just different servers. This would not be possible with your diet-patch, would it?
>
> No, probably not - but then again virtualized and non-virtualized nodes would be running different kernels (e.g. 2.6.18-53 physical and 2.6.18-53xen virtual), so the point is somewhat moot. You'd need different initrds anyway. And the different nodes would use different modprobe.conf files if their hardware is different. So the only extra requirement with my patch would be that the initrd is built on the same node as the one that will be running the initrd image. In the case of virtual vs. non-virtual hardware (or just different kernel versions), it would still be a case of running mkinitrd with different kernel versions and a different modprobe.conf file, as AFAIK this gets included in the initrd.

You got that point (but keep in mind that only holds for Xen). But see above.

>> I thought of using it as a special option to mkinitrd (--diet or the like). Could you provide a patch for this?
>
> That could be arranged. I think that's a reasonably good idea. But as I mentioned above, I'm not sure the full-fat initrd actually gains much in terms of node/hardware compatibility.
>
> I'll send the --diet optioned patch. I'll leave the choice of whether --diet should be the default to you guys. :-)

;-) Thanks, Gordan.

Regards,
Marc.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/
http://www.open-sharedroot.org/