From: Dan M. <dan...@or...> - 2009-02-07 00:43:36
Finally got the rpms and installed. One other typo on the rhel5/ocfs2 webpage: comoonics-bootimage-1.3-33 (not -32) as comoonics-bootimage-extras-ocfs2-0.1-1 depends on -33. On to the next step! :-) Do I need a redhat cluster.conf file other than to have the ocfs2 cluster.conf file created from it? In other words, can I just start with my own /etc/ocfs2/cluster.conf file? Or will the other installation steps require /etc/cluster/cluster.conf? Thanks, Dan > -----Original Message----- > From: Marc Grimme [mailto:gr...@at...] > Sent: Tuesday, February 03, 2009 1:27 PM > To: ope...@li... > Cc: Dan Magenheimer > Subject: Re: [OSR-users] rpms for ocfs2+rhel5 > > > Hello, > On Tuesday 03 February 2009 18:24:39 Dan Magenheimer wrote: > > (sorry, I accidentally posted this to -devel the first time) > > > > Hello -- > > > > I am a Xen developer and I would like to experiment with > > open-sharedroot on rhel5+ocfs2 on Xen but I am having some > > network problems and have not been able to download the > > 3.6G DVD iso. Is there any way I can download just the > > necessary .rpm files? (i386 only is OK for now) > > > > Also, I have ocfs2 v1.4.1-1, not 1.3.9-0.1. Will the > > specific versions of the rpms listed in the how-to > > work also with ocfs2 1.4.1 or will I need newer versions > > of the coomonics rpms? > Normally it should work. I didn't test it with these > versions. On thing that > comes into my mind is if you are using ocfs2 via openais > (this will not work > out of the box) or via the oracle cluster stack (that is > supposed to be > working). > > I hopefully will release in the next days a new version of > the comoonics rpms > that are also validated against RHEL5 with ocfs. > > > > And is the rpm listed in the document (and below) > > as "el6" a typo? > That's a typo. > > > > Thanks, > > Dan > You're welcome. > Let us know about your success. > > Regards Marc. > > > > List of rpms from the how-to: > > # comoonics-pythonosfix-py-0.1-1 > > # comoonics-bootimage-listfiles-1.3-6.el5 > > # SysVinit-comoonics-2.86-14.atix.1 > > # comoonics-cluster-py-0.1-12 > > # comoonics-cdsl-py-0.2-11 > > # comoonics-bootimage-1.3-32 > > # comoonics-release-0.1-1 > > # comoonics-cs-py-0.1-54 > > # comoonics-bootimage-initscripts-1.3-5.el6 > > # comoonics-bootimage-extras-ocfs2-0.1-1 > > # comoonics-bootimage-extras-xen-0.1-3 (Only needed for xen Guest) > > > > > -------------------------------------------------------------- > ------------- > >--- Create and Deploy Rich Internet Apps outside the browser with > > Adobe(R)AIR(TM) software. With Adobe AIR, Ajax developers > can use existing > > skills and code to build responsive, highly engaging > applications that > > combine the power of local resources and data with the > reach of the web. > > Download the Adobe AIR SDK and Ajax docs to start building > applications > > today-http://p.sf.net/sfu/adobe-com > > _______________________________________________ > > Open-sharedroot-users mailing list > > Ope...@li... > > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-users > > > > -- > Gruss / Regards, > > Marc Grimme > http://www.atix.de/ http://www.open-sharedroot.org/ > > |
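For reference, the file Dan asks about, /etc/ocfs2/cluster.conf, uses the plain-text O2CB stanza format rather than the Red Hat XML format. A minimal single-node sketch follows; none of these values come from this thread — cluster name, node name, address and port are placeholders to adapt:

node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = node1
        cluster = ocfs2

cluster:
        node_count = 1
        name = ocfs2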
From: Marc G. <gr...@at...> - 2009-02-03 20:27:28
Hello,
On Tuesday 03 February 2009 18:24:39 Dan Magenheimer wrote:
> (sorry, I accidentally posted this to -devel the first time)
>
> Hello --
>
> I am a Xen developer and I would like to experiment with
> open-sharedroot on rhel5+ocfs2 on Xen but I am having some
> network problems and have not been able to download the
> 3.6G DVD iso. Is there any way I can download just the
> necessary .rpm files? (i386 only is OK for now)
>
> Also, I have ocfs2 v1.4.1-1, not 1.3.9-0.1. Will the
> specific versions of the rpms listed in the how-to
> also work with ocfs2 1.4.1, or will I need newer versions
> of the comoonics rpms?
Normally it should work. I didn't test it with these versions. One thing that comes to my mind is whether you are using ocfs2 via openais (this will not work out of the box) or via the Oracle cluster stack (that is supposed to be working).

I will hopefully release a new version of the comoonics rpms in the next few days that is also validated against RHEL5 with ocfs2.

> And is the rpm listed in the document (and below)
> as "el6" a typo?
That's a typo.

> Thanks,
> Dan
You're welcome.
Let us know about your success.

Regards Marc.

> List of rpms from the how-to:
> # comoonics-pythonosfix-py-0.1-1
> # comoonics-bootimage-listfiles-1.3-6.el5
> # SysVinit-comoonics-2.86-14.atix.1
> # comoonics-cluster-py-0.1-12
> # comoonics-cdsl-py-0.2-11
> # comoonics-bootimage-1.3-32
> # comoonics-release-0.1-1
> # comoonics-cs-py-0.1-54
> # comoonics-bootimage-initscripts-1.3-5.el6
> # comoonics-bootimage-extras-ocfs2-0.1-1
> # comoonics-bootimage-extras-xen-0.1-3 (Only needed for xen Guest)

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From: Dan M. <dan...@or...> - 2009-02-03 17:25:30
(sorry, I accidentally posted this to -devel the first time)

Hello --

I am a Xen developer and I would like to experiment with open-sharedroot on rhel5+ocfs2 on Xen, but I am having some network problems and have not been able to download the 3.6G DVD iso. Is there any way I can download just the necessary .rpm files? (i386 only is OK for now)

Also, I have ocfs2 v1.4.1-1, not 1.3.9-0.1. Will the specific versions of the rpms listed in the how-to also work with ocfs2 1.4.1, or will I need newer versions of the comoonics rpms?

And is the rpm listed in the document (and below) as "el6" a typo?

Thanks,
Dan

List of rpms from the how-to:
# comoonics-pythonosfix-py-0.1-1
# comoonics-bootimage-listfiles-1.3-6.el5
# SysVinit-comoonics-2.86-14.atix.1
# comoonics-cluster-py-0.1-12
# comoonics-cdsl-py-0.2-11
# comoonics-bootimage-1.3-32
# comoonics-release-0.1-1
# comoonics-cs-py-0.1-54
# comoonics-bootimage-initscripts-1.3-5.el6
# comoonics-bootimage-extras-ocfs2-0.1-1
# comoonics-bootimage-extras-xen-0.1-3 (Only needed for xen Guest)
From: Stefano E. <ste...@so...> - 2009-01-26 17:22:04
Hi Mark thanks for your answer. If I delete the file etc/sysconfig/network-scripts/ifcfg-eth0 when the server reboot, eth0 interface does not go up and the network isn't work but if I run the command: com-mkcdsl -a /etc/sysconfig/network-scripts/ifcfg-eth0 the network works !! Probably more than to cancel, I have to set something else. Il giorno 14/gen/09, alle ore 16:43, Marc Grimme ha scritto: > On Wednesday 14 January 2009 15:55:12 Stefano Elmopi wrote: >> Hi, >> >> I have managed to create a 1 node cluster >> >> [root@clu01 cluster]# clustat >> Cluster Status for cluster01 @ Wed Jan 14 17:16:47 2009 >> Member Status: Quorate >> >> Member Name ID Status >> ------ ---- ---- ------ >> clu01 1 Online, >> Local >> >> but now have the problems with service configuration. >> Before speaking of the Service problem,I ask you for another >> question. >> On another guide I read to do this command: >> >> com-mkcdsl -r /mnt/newroot -a /etc/sysconfig/network-scripts/ifcfg- >> eth0 >> >> whereas in your guide says to delete it, what is the right thing ? > If it is started in the initrd (means referenced in the cluster > configuration > under com_info) you should delete this file. That's the best idea. > Were are > we talking about > com-mkcdsl -r /mnt/newroot -a /etc/sysconfig/network-scripts/ifcfg- > eth0? That > should be removed. >> >> For the problem with Service configuration my cluster.conf is: >> >> <?xml version="1.0"?> >> <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd"> >> <cluster config_version="2" name="cluster01"> >> <cman expected_votes="1" two_node="0"> >> <multicast addr="10.43.100.203"/> > |--------------------------------------^ > What does this mean? >> </cman> >> >> <fence_daemon clean_start="1" post_fail_delay="0" >> post_join_delay="3"/> >> >> <clusternodes> >> <clusternode name="clu01" votes="1" nodeid="1"> >> <com_info> >> <syslog name="clu01"/> >> <rootvolume name="/dev/cciss/c0d0p8" >> fstype="ext3" mountopts="ro"/> >> <eth name="eth0" >> mac="00:15:60:56:75:FD" ip="10.43.100.203" mask="255.255.0.0 >> " gateway=""/> >> <multicast addr="10.43.100.203" >> interface="eth0"/> > and this ? >> </com_info> >> </clusternode> >> </clusternodes> >> >> <rm log_level="7" log_facility="local4"> >> <failoverdomains> >> <failoverdomain name="failover" ordered="0"> >> <failoverdomainnode name="clu01" >> priority="1"/> >> </failoverdomain> >> </failoverdomains> >> <resources> >> <ip address="10.43.100.203" monitor_link="1"/> > and this? >> <script file="/etc/init.d/httpd" name="httpd"/> >> </resources> >> <service autostart="0" domain="failover" name="HTTPD"> >> <ip ref="10.43.100.203"/> >> <script ref="httpd"/> >> </service> >> </rm> >> >> </cluster> >> >> but when I start the rgmanager (/etc/init.d/rgmanager start), after a >> few seconds the server reboot !! > I think it's because of the ip you're setting up at the node and with > rgmanager. The first thing rgmanager does is to stop the ip on all > nodes. > This causes the cluster to "reboot". 
> I would suppose a cluster.conf like as follows: > <?xml version="1.0"?> > <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd"> > <cluster config_version="2" name="cluster01"> > <cman expected_votes="1" two_node="0"/> > > <fence_daemon clean_start="1" post_fail_delay="0" > post_join_delay="3"/> > > <clusternodes> > <clusternode name="clu01" votes="1" nodeid="1"> > <com_info> > <syslog name="clu01"/> > <rootvolume name="/dev/cciss/c0d0p8" > fstype="ext3" mountopts="ro"/> > <eth name="eth0" > mac="00:15:60:56:75:FD" ip="10.43.100.203" mask="255.255.0.0 > " gateway=""/> > </com_info> > </clusternode> > </clusternodes> > > <rm log_level="7" log_facility="local4"> > <failoverdomains> > <failoverdomain name="failover" ordered="0"> > <failoverdomainnode name="clu01" > priority="1"/> > </failoverdomain> > </failoverdomains> > <resources> > <!-- Use a different ip. This is a service ip. That must be > different to the > one used by clusternode clu01 --> > <!-- <ip address="10.43.100.203" monitor_link="1"/>--> > <script file="/etc/init.d/httpd" name="httpd"/> > </resources> > <service autostart="0" domain="failover" name="HTTPD"> > <!-- <ip ref="10.43.100.203"/>--> > <script ref="httpd"/> > </service> > </rm> > > </cluster> > >> Below a log of reboot: >> >> Jan 14 17:21:23 clu01 clurgmgrd[31140]: <notice> Resource Group >> Manager Starting >> Jan 14 17:21:23 clu01 clurgmgrd[31140]: <info> Loading Service Data >> Jan 14 17:21:23 clu01 clurgmgrd[31140]: <debug> Loading Resource >> Rules >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 22 rules loaded >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Building Resource >> Trees >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 3 resources defined >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Loading Failover >> Domains >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 1 domains defined >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 101 events defined >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <info> Initializing Services >> Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Initializing >> service:HTTPD >> Jan 14 17:21:24 clu01 clurgmgrd: [31140]: <info> Executing /etc/ >> init.d/ >> httpd stop >> Jan 14 17:21:24 clu01 clurgmgrd: [31140]: <info> Removing IPv4 >> address >> 10.43.100.203/16 from eth0 >> Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] Could not set traffic >> priority. (Bad file descriptor) >> Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] The network interface is >> down. >> Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] entering GATHER state >> from 15. >> Jan 14 17:21:29 clu01 openais[2474]: [TOTEM] entering GATHER state >> from 0. >> Jan 14 17:21:34 clu01 clurgmgrd[31140]: <info> Services Initialized >> Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] AIS Executive Service >> RELEASE 'subrev 1358 version 0.80.3' >> Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Copyright (C) 2002-2006 >> MontaVista Software, Inc and contributor >> s. >> Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Copyright (C) 2006 Red >> Hat, Inc. >> Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] AIS Executive Service: >> started and ready to provide service. >> Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] openais component >> openais_cpg loaded. >> Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais cluster closed process grou >> p service v1.01' >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_cfg loaded. 
>> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais configuration service' >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_msg loaded. >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais message service B.01.01' >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_lck loaded. >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais distributed locking service >> B.01.01' >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_evt loaded. >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais event service B.01.01' >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_ckpt loaded. >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais checkpoint service B.01.01' >> >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_amf loaded. >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais availability management fra >> mework B.01.01' >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component >> openais_clm loaded. >> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais cluster membership service >> B.01.01' >> Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] openais component >> openais_evs loaded. >> Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais extended virtual synchrony >> service' >> Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] openais component >> openais_cman loaded. >> Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] Registering service >> handler 'openais CMAN membership service 2.0 >> 1' >> Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] Token Timeout (10000 ms) >> retransmit timeout (495 ms) >> Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] token hold (386 ms) >> retransmits before loss (20 retrans) >> Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] join (60 ms) send_join >> (0 >> ms) consensus (4800 ms) merge (200 ms) >> >> Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] downcheck (1000 ms) fail >> to recv const (50 msgs) >> Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] seqno unchanged const >> (30 >> rotations) Maximum network MTU 1500 >> Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] window size per rotation >> (50 messages) maximum messages per rota >> tion (17 messages) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] send threads (0 threads) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP token expired >> timeout >> (495 ms) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP token problem >> counter >> (2000 ms) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP threshold (10 >> problem >> count) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP mode set to none. >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] >> heartbeat_failures_allowed (0) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] max_network_delay (50 >> ms) >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] HeartBeat is Disabled. >> To >> enable set heartbeat_failures_allowed >> >>> 0 >> >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Receive multicast socket >> recv buffer size (262142 bytes). >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Transmit multicast >> socket >> send buffer size (262142 bytes). >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] The network interface >> [10.43.100.203] is now up. 
>> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Created or loaded >> sequence id 164.10.43.100.203 for this ring. >> Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] entering GATHER state >> from 15. >> Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais extended virtual synchrony >> service' >> Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais cluster membership service >> B.01.01' >> Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais availability management fr >> amework B.01.01' >> Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais checkpoint service B.01.01 >> ' >> Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais event service B.01.01' >> Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais distributed locking servic >> e B.01.01' >> Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais message service B.01.01' >> Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais configuration service' >> Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais cluster closed process gro >> up service v1.01' >> Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service >> handler 'openais CMAN membership service 2. >> 01' >> Jan 14 17:23:35 clu01 openais[2470]: [CMAN ] CMAN 2.0.84 (built >> Oct 5 >> 2008 13:08:55) started >> Jan 14 17:23:35 clu01 openais[2470]: [SYNC ] Not using a virtual >> synchrony filter. >> Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Creating commit token >> because I am the rep. >> Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Saving state aru 0 high >> seq received 0 >> Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Storing new sequence id >> for ring a8 >> Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] entering COMMIT state. >> Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] entering RECOVERY state. >> Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] position [0] member >> 10.43.100.203: >> Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] previous ring seq 164 >> rep >> 10.43.100.203 >> Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] aru 0 high delivered 0 >> received flag 1 >> Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] Did not need to >> originate >> any messages in recovery. >> Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] Sending initial ORF >> token >> Jan 14 17:23:36 clu01 openais[2470]: [CLM ] CLM CONFIGURATION CHANGE >> Jan 14 17:23:36 clu01 openais[2470]: [CLM ] New Configuration: >> Jan 14 17:23:36 clu01 openais[2470]: [CLM ] Members Left: >> Jan 14 17:23:36 clu01 openais[2470]: [CLM ] Members Joined: >> Jan 14 17:23:36 clu01 openais[2470]: [CLM ] CLM CONFIGURATION CHANGE >> Jan 14 17:23:36 clu01 openais[2470]: [CLM ] New Configuration: >> Jan 14 17:23:37 clu01 openais[2470]: [CLM ] r(0) >> ip(10.43.100.203) >> Jan 14 17:23:37 clu01 openais[2470]: [CLM ] Members Left: >> Jan 14 17:23:37 clu01 openais[2470]: [CLM ] Members Joined: >> Jan 14 17:23:37 clu01 openais[2470]: [CLM ] r(0) >> ip(10.43.100.203) >> Jan 14 17:23:37 clu01 openais[2470]: [SYNC ] This node is within the >> primary component and will provide servi >> ce. >> Jan 14 17:23:37 clu01 openais[2470]: [TOTEM] entering OPERATIONAL >> state. 
>> Jan 14 17:23:37 clu01 openais[2470]: [CMAN ] quorum regained, >> resuming >> activity >> Jan 14 17:23:37 clu01 openais[2470]: [CLM ] got nodejoin message >> 10.43.100.203 >> >> Thanks !! >> >> >> Ing. Stefano Elmopi >> Gruppo Darco - Area ICT Sistemi >> Via Ostiense 131/L Corpo B, 00154 Roma >> >> cell. 3466147165 >> tel. 0657060500 >> email:ste...@so... >> >> >> --------------------------------------------------------------------------- >> --- This SF.net email is sponsored by: >> SourcForge Community >> SourceForge wants to tell your story. >> http://p.sf.net/sfu/sf-spreadtheword >> _______________________________________________ >> Open-sharedroot-users mailing list >> Ope...@li... >> https://lists.sourceforge.net/lists/listinfo/open-sharedroot-users > > > > -- > Gruss / Regards, > > Marc Grimme > http://www.atix.de/ http://www.open-sharedroot.org/ > > Ing. Stefano Elmopi Gruppo Darco - Area ICT Sistemi Via Ostiense 131/L Corpo B, 00154 Roma cell. 3466147165 tel. 0657060500 email:ste...@so... |
From: Marc G. <gr...@at...> - 2009-01-14 15:52:59
On Wednesday 14 January 2009 15:55:12 Stefano Elmopi wrote: > Hi, > > I have managed to create a 1 node cluster > > [root@clu01 cluster]# clustat > Cluster Status for cluster01 @ Wed Jan 14 17:16:47 2009 > Member Status: Quorate > > Member Name ID Status > ------ ---- ---- ------ > clu01 1 Online, > Local > > but now have the problems with service configuration. > Before speaking of the Service problem,I ask you for another question. > On another guide I read to do this command: > > com-mkcdsl -r /mnt/newroot -a /etc/sysconfig/network-scripts/ifcfg-eth0 > > whereas in your guide says to delete it, what is the right thing ? If it is started in the initrd (means referenced in the cluster configuration under com_info) you should delete this file. That's the best idea. Were are we talking about com-mkcdsl -r /mnt/newroot -a /etc/sysconfig/network-scripts/ifcfg-eth0? That should be removed. > > For the problem with Service configuration my cluster.conf is: > > <?xml version="1.0"?> > <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd"> > <cluster config_version="2" name="cluster01"> > <cman expected_votes="1" two_node="0"> > <multicast addr="10.43.100.203"/> |--------------------------------------^ What does this mean? > </cman> > > <fence_daemon clean_start="1" post_fail_delay="0" > post_join_delay="3"/> > > <clusternodes> > <clusternode name="clu01" votes="1" nodeid="1"> > <com_info> > <syslog name="clu01"/> > <rootvolume name="/dev/cciss/c0d0p8" > fstype="ext3" mountopts="ro"/> > <eth name="eth0" > mac="00:15:60:56:75:FD" ip="10.43.100.203" mask="255.255.0.0 > " gateway=""/> > <multicast addr="10.43.100.203" > interface="eth0"/> and this ? > </com_info> > </clusternode> > </clusternodes> > > <rm log_level="7" log_facility="local4"> > <failoverdomains> > <failoverdomain name="failover" ordered="0"> > <failoverdomainnode name="clu01" priority="1"/> > </failoverdomain> > </failoverdomains> > <resources> > <ip address="10.43.100.203" monitor_link="1"/> and this? > <script file="/etc/init.d/httpd" name="httpd"/> > </resources> > <service autostart="0" domain="failover" name="HTTPD"> > <ip ref="10.43.100.203"/> > <script ref="httpd"/> > </service> > </rm> > > </cluster> > > but when I start the rgmanager (/etc/init.d/rgmanager start), after a > few seconds the server reboot !! I think it's because of the ip you're setting up at the node and with rgmanager. The first thing rgmanager does is to stop the ip on all nodes. This causes the cluster to "reboot". I would suppose a cluster.conf like as follows: <?xml version="1.0"?> <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd"> <cluster config_version="2" name="cluster01"> <cman expected_votes="1" two_node="0"/> <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/> <clusternodes> <clusternode name="clu01" votes="1" nodeid="1"> <com_info> <syslog name="clu01"/> <rootvolume name="/dev/cciss/c0d0p8" fstype="ext3" mountopts="ro"/> <eth name="eth0" mac="00:15:60:56:75:FD" ip="10.43.100.203" mask="255.255.0.0 " gateway=""/> </com_info> </clusternode> </clusternodes> <rm log_level="7" log_facility="local4"> <failoverdomains> <failoverdomain name="failover" ordered="0"> <failoverdomainnode name="clu01" priority="1"/> </failoverdomain> </failoverdomains> <resources> <!-- Use a different ip. This is a service ip. 
That must be different to the one used by clusternode clu01 --> <!-- <ip address="10.43.100.203" monitor_link="1"/>--> <script file="/etc/init.d/httpd" name="httpd"/> </resources> <service autostart="0" domain="failover" name="HTTPD"> <!-- <ip ref="10.43.100.203"/>--> <script ref="httpd"/> </service> </rm> </cluster> > Below a log of reboot: > > Jan 14 17:21:23 clu01 clurgmgrd[31140]: <notice> Resource Group > Manager Starting > Jan 14 17:21:23 clu01 clurgmgrd[31140]: <info> Loading Service Data > Jan 14 17:21:23 clu01 clurgmgrd[31140]: <debug> Loading Resource Rules > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 22 rules loaded > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Building Resource Trees > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 3 resources defined > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Loading Failover Domains > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 1 domains defined > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 101 events defined > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <info> Initializing Services > Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Initializing > service:HTTPD > Jan 14 17:21:24 clu01 clurgmgrd: [31140]: <info> Executing /etc/init.d/ > httpd stop > Jan 14 17:21:24 clu01 clurgmgrd: [31140]: <info> Removing IPv4 address > 10.43.100.203/16 from eth0 > Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] Could not set traffic > priority. (Bad file descriptor) > Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] The network interface is > down. > Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] entering GATHER state > from 15. > Jan 14 17:21:29 clu01 openais[2474]: [TOTEM] entering GATHER state > from 0. > Jan 14 17:21:34 clu01 clurgmgrd[31140]: <info> Services Initialized > Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] AIS Executive Service > RELEASE 'subrev 1358 version 0.80.3' > Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Copyright (C) 2002-2006 > MontaVista Software, Inc and contributor > s. > Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Copyright (C) 2006 Red > Hat, Inc. > Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] AIS Executive Service: > started and ready to provide service. > Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] openais component > openais_cpg loaded. > Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais cluster closed process grou > p service v1.01' > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_cfg loaded. > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais configuration service' > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_msg loaded. > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais message service B.01.01' > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_lck loaded. > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais distributed locking service > B.01.01' > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_evt loaded. > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais event service B.01.01' > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_ckpt loaded. > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais checkpoint service B.01.01' > > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_amf loaded. 
> Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais availability management fra > mework B.01.01' > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component > openais_clm loaded. > Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais cluster membership service > B.01.01' > Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] openais component > openais_evs loaded. > Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais extended virtual synchrony > service' > Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] openais component > openais_cman loaded. > Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] Registering service > handler 'openais CMAN membership service 2.0 > 1' > Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] Token Timeout (10000 ms) > retransmit timeout (495 ms) > Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] token hold (386 ms) > retransmits before loss (20 retrans) > Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] join (60 ms) send_join (0 > ms) consensus (4800 ms) merge (200 ms) > > Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] downcheck (1000 ms) fail > to recv const (50 msgs) > Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] seqno unchanged const (30 > rotations) Maximum network MTU 1500 > Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] window size per rotation > (50 messages) maximum messages per rota > tion (17 messages) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] send threads (0 threads) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP token expired timeout > (495 ms) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP token problem counter > (2000 ms) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP threshold (10 problem > count) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP mode set to none. > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] > heartbeat_failures_allowed (0) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] max_network_delay (50 ms) > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] HeartBeat is Disabled. To > enable set heartbeat_failures_allowed > > > 0 > > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Receive multicast socket > recv buffer size (262142 bytes). > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Transmit multicast socket > send buffer size (262142 bytes). > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] The network interface > [10.43.100.203] is now up. > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Created or loaded > sequence id 164.10.43.100.203 for this ring. > Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] entering GATHER state > from 15. 
> Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais extended virtual synchrony > service' > Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais cluster membership service > B.01.01' > Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais availability management fr > amework B.01.01' > Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais checkpoint service B.01.01 > ' > Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais event service B.01.01' > Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais distributed locking servic > e B.01.01' > Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais message service B.01.01' > Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais configuration service' > Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais cluster closed process gro > up service v1.01' > Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service > handler 'openais CMAN membership service 2. > 01' > Jan 14 17:23:35 clu01 openais[2470]: [CMAN ] CMAN 2.0.84 (built Oct 5 > 2008 13:08:55) started > Jan 14 17:23:35 clu01 openais[2470]: [SYNC ] Not using a virtual > synchrony filter. > Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Creating commit token > because I am the rep. > Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Saving state aru 0 high > seq received 0 > Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Storing new sequence id > for ring a8 > Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] entering COMMIT state. > Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] entering RECOVERY state. > Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] position [0] member > 10.43.100.203: > Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] previous ring seq 164 rep > 10.43.100.203 > Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] aru 0 high delivered 0 > received flag 1 > Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] Did not need to originate > any messages in recovery. > Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] Sending initial ORF token > Jan 14 17:23:36 clu01 openais[2470]: [CLM ] CLM CONFIGURATION CHANGE > Jan 14 17:23:36 clu01 openais[2470]: [CLM ] New Configuration: > Jan 14 17:23:36 clu01 openais[2470]: [CLM ] Members Left: > Jan 14 17:23:36 clu01 openais[2470]: [CLM ] Members Joined: > Jan 14 17:23:36 clu01 openais[2470]: [CLM ] CLM CONFIGURATION CHANGE > Jan 14 17:23:36 clu01 openais[2470]: [CLM ] New Configuration: > Jan 14 17:23:37 clu01 openais[2470]: [CLM ] r(0) ip(10.43.100.203) > Jan 14 17:23:37 clu01 openais[2470]: [CLM ] Members Left: > Jan 14 17:23:37 clu01 openais[2470]: [CLM ] Members Joined: > Jan 14 17:23:37 clu01 openais[2470]: [CLM ] r(0) ip(10.43.100.203) > Jan 14 17:23:37 clu01 openais[2470]: [SYNC ] This node is within the > primary component and will provide servi > ce. > Jan 14 17:23:37 clu01 openais[2470]: [TOTEM] entering OPERATIONAL state. > Jan 14 17:23:37 clu01 openais[2470]: [CMAN ] quorum regained, resuming > activity > Jan 14 17:23:37 clu01 openais[2470]: [CLM ] got nodejoin message > 10.43.100.203 > > Thanks !! > > > Ing. Stefano Elmopi > Gruppo Darco - Area ICT Sistemi > Via Ostiense 131/L Corpo B, 00154 Roma > > cell. 3466147165 > tel. 0657060500 > email:ste...@so... 
> > > --------------------------------------------------------------------------- >--- This SF.net email is sponsored by: > SourcForge Community > SourceForge wants to tell your story. > http://p.sf.net/sfu/sf-spreadtheword > _______________________________________________ > Open-sharedroot-users mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-users -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
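To make the point about the service IP concrete: the address used by the <ip> resource must differ from the address already configured for the clusternode in com_info. A sketch of the relevant cluster.conf fragment with a hypothetical, distinct service address (10.43.100.204 is only a placeholder):

<resources>
        <ip address="10.43.100.204" monitor_link="1"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
</resources>
<service autostart="0" domain="failover" name="HTTPD">
        <ip ref="10.43.100.204"/>
        <script ref="httpd"/>
</service>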
From: Stefano E. <ste...@so...> - 2009-01-14 14:55:19
Hi, I have managed to create a 1 node cluster [root@clu01 cluster]# clustat Cluster Status for cluster01 @ Wed Jan 14 17:16:47 2009 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ clu01 1 Online, Local but now have the problems with service configuration. Before speaking of the Service problem,I ask you for another question. On another guide I read to do this command: com-mkcdsl -r /mnt/newroot -a /etc/sysconfig/network-scripts/ifcfg-eth0 whereas in your guide says to delete it, what is the right thing ? For the problem with Service configuration my cluster.conf is: <?xml version="1.0"?> <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd"> <cluster config_version="2" name="cluster01"> <cman expected_votes="1" two_node="0"> <multicast addr="10.43.100.203"/> </cman> <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/> <clusternodes> <clusternode name="clu01" votes="1" nodeid="1"> <com_info> <syslog name="clu01"/> <rootvolume name="/dev/cciss/c0d0p8" fstype="ext3" mountopts="ro"/> <eth name="eth0" mac="00:15:60:56:75:FD" ip="10.43.100.203" mask="255.255.0.0 " gateway=""/> <multicast addr="10.43.100.203" interface="eth0"/> </com_info> </clusternode> </clusternodes> <rm log_level="7" log_facility="local4"> <failoverdomains> <failoverdomain name="failover" ordered="0"> <failoverdomainnode name="clu01" priority="1"/> </failoverdomain> </failoverdomains> <resources> <ip address="10.43.100.203" monitor_link="1"/> <script file="/etc/init.d/httpd" name="httpd"/> </resources> <service autostart="0" domain="failover" name="HTTPD"> <ip ref="10.43.100.203"/> <script ref="httpd"/> </service> </rm> </cluster> but when I start the rgmanager (/etc/init.d/rgmanager start), after a few seconds the server reboot !! Below a log of reboot: Jan 14 17:21:23 clu01 clurgmgrd[31140]: <notice> Resource Group Manager Starting Jan 14 17:21:23 clu01 clurgmgrd[31140]: <info> Loading Service Data Jan 14 17:21:23 clu01 clurgmgrd[31140]: <debug> Loading Resource Rules Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 22 rules loaded Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Building Resource Trees Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 3 resources defined Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Loading Failover Domains Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 1 domains defined Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> 101 events defined Jan 14 17:21:24 clu01 clurgmgrd[31140]: <info> Initializing Services Jan 14 17:21:24 clu01 clurgmgrd[31140]: <debug> Initializing service:HTTPD Jan 14 17:21:24 clu01 clurgmgrd: [31140]: <info> Executing /etc/init.d/ httpd stop Jan 14 17:21:24 clu01 clurgmgrd: [31140]: <info> Removing IPv4 address 10.43.100.203/16 from eth0 Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] Could not set traffic priority. (Bad file descriptor) Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] The network interface is down. Jan 14 17:21:24 clu01 openais[2474]: [TOTEM] entering GATHER state from 15. Jan 14 17:21:29 clu01 openais[2474]: [TOTEM] entering GATHER state from 0. Jan 14 17:21:34 clu01 clurgmgrd[31140]: <info> Services Initialized Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] AIS Executive Service RELEASE 'subrev 1358 version 0.80.3' Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributor s. Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Copyright (C) 2006 Red Hat, Inc. Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] AIS Executive Service: started and ready to provide service. 
Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] openais component openais_cpg loaded. Jan 14 17:23:31 clu01 openais[2470]: [MAIN ] Registering service handler 'openais cluster closed process grou p service v1.01' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_cfg loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais configuration service' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_msg loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais message service B.01.01' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_lck loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais distributed locking service B.01.01' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_evt loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais event service B.01.01' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_ckpt loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais checkpoint service B.01.01' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_amf loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais availability management fra mework B.01.01' Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] openais component openais_clm loaded. Jan 14 17:23:32 clu01 openais[2470]: [MAIN ] Registering service handler 'openais cluster membership service B.01.01' Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] openais component openais_evs loaded. Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] Registering service handler 'openais extended virtual synchrony service' Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] openais component openais_cman loaded. Jan 14 17:23:33 clu01 openais[2470]: [MAIN ] Registering service handler 'openais CMAN membership service 2.0 1' Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] Token Timeout (10000 ms) retransmit timeout (495 ms) Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] token hold (386 ms) retransmits before loss (20 retrans) Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] join (60 ms) send_join (0 ms) consensus (4800 ms) merge (200 ms) Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] downcheck (1000 ms) fail to recv const (50 msgs) Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1500 Jan 14 17:23:33 clu01 openais[2470]: [TOTEM] window size per rotation (50 messages) maximum messages per rota tion (17 messages) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] send threads (0 threads) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP token expired timeout (495 ms) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP token problem counter (2000 ms) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP threshold (10 problem count) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] RRP mode set to none. Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] heartbeat_failures_allowed (0) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] max_network_delay (50 ms) Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0 Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes). Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes). 
Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] The network interface [10.43.100.203] is now up. Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] Created or loaded sequence id 164.10.43.100.203 for this ring. Jan 14 17:23:34 clu01 openais[2470]: [TOTEM] entering GATHER state from 15. Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service handler 'openais extended virtual synchrony service' Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service handler 'openais cluster membership service B.01.01' Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service handler 'openais availability management fr amework B.01.01' Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service handler 'openais checkpoint service B.01.01 ' Jan 14 17:23:34 clu01 openais[2470]: [SERV ] Initialising service handler 'openais event service B.01.01' Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service handler 'openais distributed locking servic e B.01.01' Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service handler 'openais message service B.01.01' Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service handler 'openais configuration service' Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service handler 'openais cluster closed process gro up service v1.01' Jan 14 17:23:35 clu01 openais[2470]: [SERV ] Initialising service handler 'openais CMAN membership service 2. 01' Jan 14 17:23:35 clu01 openais[2470]: [CMAN ] CMAN 2.0.84 (built Oct 5 2008 13:08:55) started Jan 14 17:23:35 clu01 openais[2470]: [SYNC ] Not using a virtual synchrony filter. Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Creating commit token because I am the rep. Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Saving state aru 0 high seq received 0 Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] Storing new sequence id for ring a8 Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] entering COMMIT state. Jan 14 17:23:35 clu01 openais[2470]: [TOTEM] entering RECOVERY state. Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] position [0] member 10.43.100.203: Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] previous ring seq 164 rep 10.43.100.203 Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] aru 0 high delivered 0 received flag 1 Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] Did not need to originate any messages in recovery. Jan 14 17:23:36 clu01 openais[2470]: [TOTEM] Sending initial ORF token Jan 14 17:23:36 clu01 openais[2470]: [CLM ] CLM CONFIGURATION CHANGE Jan 14 17:23:36 clu01 openais[2470]: [CLM ] New Configuration: Jan 14 17:23:36 clu01 openais[2470]: [CLM ] Members Left: Jan 14 17:23:36 clu01 openais[2470]: [CLM ] Members Joined: Jan 14 17:23:36 clu01 openais[2470]: [CLM ] CLM CONFIGURATION CHANGE Jan 14 17:23:36 clu01 openais[2470]: [CLM ] New Configuration: Jan 14 17:23:37 clu01 openais[2470]: [CLM ] r(0) ip(10.43.100.203) Jan 14 17:23:37 clu01 openais[2470]: [CLM ] Members Left: Jan 14 17:23:37 clu01 openais[2470]: [CLM ] Members Joined: Jan 14 17:23:37 clu01 openais[2470]: [CLM ] r(0) ip(10.43.100.203) Jan 14 17:23:37 clu01 openais[2470]: [SYNC ] This node is within the primary component and will provide servi ce. Jan 14 17:23:37 clu01 openais[2470]: [TOTEM] entering OPERATIONAL state. Jan 14 17:23:37 clu01 openais[2470]: [CMAN ] quorum regained, resuming activity Jan 14 17:23:37 clu01 openais[2470]: [CLM ] got nodejoin message 10.43.100.203 Thanks !! Ing. Stefano Elmopi Gruppo Darco - Area ICT Sistemi Via Ostiense 131/L Corpo B, 00154 Roma cell. 3466147165 tel. 
0657060500 email:ste...@so... |
From: Stefano E. <ste...@so...> - 2008-12-18 13:01:28
Hi,

I followed the GFS HowTo, but I have some problems: the server comes up, yet the cluster services do not start, and I need some information. When the server comes up, the eth0 interface is not configured. Where should it be configured? The manual talks about creating the chroot environment, but it has not been created on my server. Where should that be configured?

Thanks.

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so...
From: Marc G. <gr...@at...> - 2008-12-08 10:29:37
On Friday 05 December 2008 14:45:49 Stefano Elmopi wrote:
> Hi,
>
> I'm trying to set up an internal NFS server because, currently, I do not
> have an external volume available.
> Can I make an internal GFS server?
> If yes, are the packages I have installed enough?
So you are planning to use a GFS-OSR cluster and re-export the GFS root via NFS to the clients?
>
> rpm -qa | grep comoo
> SysVinit-comoonics-2.86-14.atix.1
> comoonics-bootimage-extras-nfs-0.1-4
> comoonics-ec-py-0.1-37
> comoonics-cluster-py-0.1-16
> comoonics-bootimage-initscripts-1.3-8.el5
> comoonics-bootimage-listfiles-all-0.1-1
> comoonics-pythonosfix-py-0.1-2
> comoonics-cs-xml-0.5-2
> comoonics-cdsl-py-0.2-10
> comoonics-cs-py-0.1-56
> comoonics-bootimage-1.3-40
Also install the necessary extra files: comoonics-bootimage-extras-rhel5 is missing.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
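The corresponding install step, assuming the comoonics yum channel described elsewhere in this thread is already configured on the node:

yum install comoonics-bootimage-extras-rhel5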
From: Stefano E. <ste...@so...> - 2008-12-05 13:45:56
Hi,

I'm trying to set up an internal NFS server because, currently, I do not have an external volume available.
Can I make an internal GFS server?
If yes, are the packages I have installed enough?

rpm -qa | grep comoo
SysVinit-comoonics-2.86-14.atix.1
comoonics-bootimage-extras-nfs-0.1-4
comoonics-ec-py-0.1-37
comoonics-cluster-py-0.1-16
comoonics-bootimage-initscripts-1.3-8.el5
comoonics-bootimage-listfiles-all-0.1-1
comoonics-pythonosfix-py-0.1-2
comoonics-cs-xml-0.5-2
comoonics-cdsl-py-0.2-10
comoonics-cs-py-0.1-56
comoonics-bootimage-1.3-40

Thanks

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so...
From: Marc G. <gr...@at...> - 2008-12-03 15:18:04
On Wednesday 03 December 2008 15:24:42 Stefano Elmopi wrote:
> Hi,
>
> The GFS Howto says that you must copy the whole local directory / to
> the shared root filesystem and then create the directories
>
> /mnt/newroot/proc
> /mnt/newroot/sys
>
> and then create the infrastructure.
> The NFS Howto does not say to copy the whole local directory / or to
> create the directories; is that not necessary?
Let me put it this way: it's hard to find ;-). I don't like the NFS Howto because it describes a setup that is too complicated. What you are trying to set up is an external NFS server. And yes, you need to follow these steps as described in the "normal" Howto. We'll have to rewrite this Howto. It's neither clear nor good. Sorry about that.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
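A rough sketch of the copy-and-prepare step both messages refer to, assuming the shared root filesystem is already mounted on /mnt/newroot as in the GFS Howto (the exact copy command in the Howto may differ; -x restricts the copy to the root filesystem, so separately mounted filesystems would need their own copy):

cp -ax / /mnt/newroot
mkdir -p /mnt/newroot/proc /mnt/newroot/sys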
From: Stefano E. <ste...@so...> - 2008-12-03 14:24:49
Hi,

The GFS Howto says that you must copy the whole local directory / to the shared root filesystem and then create the directories

/mnt/newroot/proc
/mnt/newroot/sys

and then create the infrastructure.
The NFS Howto does not say to copy the whole local directory / or to create the directories; is that not necessary?

Thanks

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so...
From: Marc G. <gr...@at...> - 2008-12-02 14:59:50
This is a known bug in the preview channel. If you update the comoonics-cdsl-py package to the latest version, it should go away.

Marc.

On Tuesday 02 December 2008 15:34:56 Stefano Elmopi wrote:
> Hi,
>
> I apologize, but I do not know how to reply in the previously opened
> thread without opening another one.
> I want to create an NFS cluster.
> To create the infrastructure I launch the command:
>
> com-mkcdslinfrastructure -r /mnt/newroot -i
>
> but the result is:
>
> Traceback (most recent call last):
>   File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
>     from comoonics.cdsl.ComCdslRepository import *
>   File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
>     defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
> NameError: name 'cdslDefaults_element' is not defined
>
> Thanks.
>
> Ing. Stefano Elmopi
> Gruppo Darco - Area ICT Sistemi
> Via Ostiense 131/L Corpo B, 00154 Roma
>
> cell. 3466147165
> tel. 0657060500
> email:ste...@so...

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
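A sketch of the fix Marc describes, assuming the node already points at the comoonics preview channel:

yum update comoonics-cdsl-py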
From: Stefano E. <ste...@so...> - 2008-12-02 14:35:04
Hi,

I apologize, but I do not know how to reply in the previously opened thread without opening another one.
I want to create an NFS cluster.
To create the infrastructure I launch the command:

com-mkcdslinfrastructure -r /mnt/newroot -i

but the result is:

Traceback (most recent call last):
  File "/usr/bin/com-mkcdslinfrastructure", line 17, in ?
    from comoonics.cdsl.ComCdslRepository import *
  File "/usr/lib/python2.4/site-packages/comoonics/cdsl/__init__.py", line 28, in ?
    defaults_path = os.path.join(cdsls_path,cdslDefaults_element)
NameError: name 'cdslDefaults_element' is not defined

Thanks.

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so...
From: Marc G. <gr...@at...> - 2008-12-01 14:55:43
Hi,

On Monday 01 December 2008 15:39:12 Stefano Elmopi wrote:
> Hi,
>
> it's OK, I am using CentOS 5.2.
> I removed all comoonics packages and then installed:
>
> yum install comoonics-bootimage.noarch
>
> and then
>
> yum install comoonics-bootimage-extras-nfs
>
> Now, on the server I have:
>
> rpm -qa | grep comoonics
> SysVinit-comoonics-2.86-14.atix.1
> comoonics-bootimage-extras-nfs-0.1-4
> comoonics-cluster-py-0.1-16
> comoonics-bootimage-initscripts-1.3-8.el5
> comoonics-bootimage-listfiles-all-0.1-1
> comoonics-cs-py-0.1-56
> comoonics-bootimage-1.3-40
That looks much better.
>
> There is the script /etc/init.d/bootsr,
> but there isn't the script fenceacksv !!!
>
> In this situation, is everything OK?
Fenceacksv can be installed via yum install comoonics-bootimage-fenceacksv. For nfsroot it is not essential, but just follow the howto and use it as well.
>
> Another question.
> I followed the guide but I cannot get syslog to work; how do I do it?
In the file cluster.conf, under <clusternode><com_info>, just add an entry <syslog name="ip"/>, and be aware that the syslog server needs to be reachable from the ip configured for the clusternode. Then you should get logs during boot.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
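To make the placement concrete, a sketch of the com_info block Marc describes, reusing the node values from the cluster.conf examples earlier in this thread; the address in the syslog entry is a placeholder for your syslog server:

<clusternode name="clu01" votes="1" nodeid="1">
        <com_info>
                <syslog name="10.43.100.1"/>
                <rootvolume name="/dev/cciss/c0d0p8" fstype="ext3" mountopts="ro"/>
                <eth name="eth0" mac="00:15:60:56:75:FD" ip="10.43.100.203" mask="255.255.0.0" gateway=""/>
        </com_info>
</clusternode>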
From: Stefano E. <ste...@so...> - 2008-12-01 14:39:20
Hi,

it's OK, I am using CentOS 5.2.
I removed all comoonics packages and then installed:

yum install comoonics-bootimage.noarch

and then

yum install comoonics-bootimage-extras-nfs

Now, on the server I have:

rpm -qa | grep comoonics
SysVinit-comoonics-2.86-14.atix.1
comoonics-bootimage-extras-nfs-0.1-4
comoonics-cluster-py-0.1-16
comoonics-bootimage-initscripts-1.3-8.el5
comoonics-bootimage-listfiles-all-0.1-1
comoonics-cs-py-0.1-56
comoonics-bootimage-1.3-40

There is the script /etc/init.d/bootsr, but there isn't the script fenceacksv !!!

In this situation, is everything OK?

Another question.
I followed the guide but I cannot get syslog to work; how do I do it?

Thanks

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so...
From: Marc G. <gr...@at...> - 2008-11-28 14:54:21
On Friday 28 November 2008 15:37:24 Marc Grimme wrote: > Hi, > if you are using RHEL5/CentOS5 it looks like you are using the wrong yum > channel. Remove the rpm comoonics-bootimage-initscripts-1.3-6.el4 and setup > the channel url like described here: > http://www.open-sharedroot.org/faq/can-i-use-yum-or-up2date-to-install-the- >software#rhel5 Also install > comoonics-bootimage-listfiles-rhel5. > comoonics-bootimage-extras-nfs > comoonics-bootimage-initscripts-1.3-6.el5 > > if you are using RHEL4/CentOS4 also install > comoonics-bootimage-listfiles-rhel4. > comoonics-bootimage-extras-nfs > > The initscripts you are searching have been removed. Instead there is only > bootsr and fenceacksv. Everything should be handled from there. > > Just keep me up2date. Sorry got it wrong there still is a /etc/init.d/ccsd-chroot that comes from comoonics-bootimage-initscripts-1.3-6.el4. But is not started automatically. Marc. > > Regards Marc. > > On Friday 28 November 2008 14:18:44 Stefano Elmopi wrote: > > Hi, > > > > thanks for your reply. > > I'm having problems with Yum. I removed all packages Comoonics and > > then I installed it again. > > Now there's the script manage_chroot.sh but I don't find the > > initscript bootsr, ccsd-chroot > > and fenced-chroot. > > Now, I have: > > > > rpm -qa | grep comoonics- > > comoonics-cs-py-0.1-56 > > comoonics-bootimage-1.3-38 > > comoonics-pythonosfix-py-0.1-1 > > comoonics-bootimage-initscripts-1.3-6.el4 > > comoonics-cluster-py-0.1-16 > > comoonics-ec-py-0.1-37 > > comoonics-bootimage-extras-network-0.1-1 > > comoonics-bootimage-extras-nfs-0.1-4 > > comoonics-cdsl-py-0.2-11 > > comoonics-bootimage-listfiles-all-0.1-1 > > comoonics-cs-xml-0.5-2 > > > > I don't now !!!! > > > > > > Ing. Stefano Elmopi > > Gruppo Darco - Area ICT Sistemi > > Via Ostiense 131/L Corpo B, 00154 Roma > > > > cell. 3466147165 > > tel. 0657060500 > > email:ste...@so... > > > > > > ------------------------------------------------------------------------- > > This SF.Net email is sponsored by the Moblin Your Move Developer's > > challenge Build the coolest Linux based applications with Moblin SDK & > > win great prizes Grand prize is a trip for two to an Open Source event > > anywhere in the world > > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > > _______________________________________________ > > Open-sharedroot-users mailing list > > Ope...@li... > > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-users > > -- > Gruss / Regards, > > Marc Grimme > http://www.atix.de/ http://www.open-sharedroot.org/ > > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's > challenge Build the coolest Linux based applications with Moblin SDK & win > great prizes Grand prize is a trip for two to an Open Source event anywhere > in the world http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > Open-sharedroot-users mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/open-sharedroot-users -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Marc G. <gr...@at...> - 2008-11-28 14:37:32
|
Hi,

if you are using RHEL5/CentOS5 it looks like you are using the wrong yum channel.
Remove the rpm comoonics-bootimage-initscripts-1.3-6.el4 and set up the channel URL
as described here:
http://www.open-sharedroot.org/faq/can-i-use-yum-or-up2date-to-install-the-software#rhel5

Also install:
comoonics-bootimage-listfiles-rhel5
comoonics-bootimage-extras-nfs
comoonics-bootimage-initscripts-1.3-6.el5

If you are using RHEL4/CentOS4, also install:
comoonics-bootimage-listfiles-rhel4
comoonics-bootimage-extras-nfs

The initscripts you are searching for have been removed. Instead there are only
bootsr and fenceacksv. Everything should be handled from there.

Just keep me up2date.

Regards Marc.

On Friday 28 November 2008 14:18:44 Stefano Elmopi wrote:
> Now there's the script manage_chroot.sh but I don't find the
> initscripts bootsr, ccsd-chroot and fenced-chroot.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/ |
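As a rough illustration of the channel switch described above (a sketch only: the repository file name and baseurl below are placeholders, the real URL is the one given in the linked FAQ), the steps on RHEL5/CentOS5 would look roughly like this:

    # Remove the EL4 initscripts package that came from the wrong channel
    rpm -e comoonics-bootimage-initscripts-1.3-6.el4

    # /etc/yum.repos.d/comoonics.repo (hypothetical file name; take the baseurl from the FAQ above):
    #   [comoonics]
    #   name=comoonics for RHEL5
    #   baseurl=<URL from the FAQ for redhat-el5>
    #   enabled=1
    #   gpgcheck=0

    # Install the EL5 packages listed above
    yum install comoonics-bootimage-listfiles-rhel5 \
                comoonics-bootimage-extras-nfs \
                comoonics-bootimage-initscripts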
From: Stefano E. <ste...@so...> - 2008-11-28 13:18:52
|
Hi,

thanks for your reply.
I'm having problems with yum. I removed all comoonics packages and then installed them again.
Now the script manage_chroot.sh is there, but I can't find the initscripts bootsr, ccsd-chroot and fenced-chroot.
Now I have:

rpm -qa | grep comoonics-
comoonics-cs-py-0.1-56
comoonics-bootimage-1.3-38
comoonics-pythonosfix-py-0.1-1
comoonics-bootimage-initscripts-1.3-6.el4
comoonics-cluster-py-0.1-16
comoonics-ec-py-0.1-37
comoonics-bootimage-extras-network-0.1-1
comoonics-bootimage-extras-nfs-0.1-4
comoonics-cdsl-py-0.2-11
comoonics-bootimage-listfiles-all-0.1-1
comoonics-cs-xml-0.5-2

I don't know what to do now!

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so... |
From: Marc G. <gr...@at...> - 2008-11-26 17:24:43
|
On Wednesday 26 November 2008 16:02:13 Stefano Elmopi wrote:
> Hi,
>
> I am trying to run this command:
>
> /etc/init.d/ccsd-chroot start
>
> but the result is:
>
> /etc/init.d/ccsd-chroot: line 21: /opt/atix/comoonics-bootimage/manage_chroot.sh: No such file or directory
>
> where can I get manage_chroot.sh?
> I have downloaded all the packages required to build a cluster on NFS,
> but the script manage_chroot.sh is not there!

Strange. Normally this is installed by the rpm comoonics-bootimage (the "main" rpm). Would you mind sending a list of all installed rpms beginning with comoonics-, like:

rpm -qa | grep comoonics-

Thanks
Marc.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Stefano E. <ste...@so...> - 2008-11-26 15:21:26
|
Hi,

I am trying to run this command:

/etc/init.d/ccsd-chroot start

but the result is:

/etc/init.d/ccsd-chroot: line 21: /opt/atix/comoonics-bootimage/manage_chroot.sh: No such file or directory

Where can I get manage_chroot.sh?
I have downloaded all the packages required to build a cluster on NFS, but the script manage_chroot.sh is not there!

Thanks

Ing. Stefano Elmopi
Gruppo Darco - Area ICT Sistemi
Via Ostiense 131/L Corpo B, 00154 Roma

cell. 3466147165
tel. 0657060500
email:ste...@so... |
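A quick way to check whether any installed package is supposed to provide the missing script (a generic rpm sketch, using the path from the error message and the package name mentioned elsewhere in this thread):

    # Does any installed package claim to own the missing file?
    rpm -qf /opt/atix/comoonics-bootimage/manage_chroot.sh

    # List the files shipped by the main bootimage package and look for the script
    rpm -ql comoonics-bootimage | grep manage_chroot

    # Verify the package contents on disk (reports deleted or altered files)
    rpm -V comoonics-bootimage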
From: Marc G. <gr...@at...> - 2008-09-24 08:49:02
|
Sources are available now. Sorry about that.

-marc

On Wednesday 24 September 2008 03:58:50 Sunil Mushran wrote:
> Are the comoonics packages open sourced (GPL or otherwise)?
> Wondering as I could not find the sources on the site.

--
Gruss / Regards,

Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Reiner R. <rot...@at...> - 2008-09-24 06:23:51
|
Hello,

On Wednesday 24 September 2008 03:58:50 am Sunil Mushran wrote:
> Are the comoonics packages open sourced (GPL or otherwise)?
> Wondering as I could not find the sources on the site.

Yes, of course they are! See the RPM information header for the license details, e.g.:

# rpm -qpi comoonics-bootimage-1.3-38.noarch.rpm | grep License | awk '{print $5}'
GPL

As most of the files are shell or Python based scripts, they are actually distributed as sources already. However, there are also source RPMs available on our download server:

http://download.atix.de/yum/comoonics/suse-linux-es10/preview/SRPMS/
http://download.atix.de/yum/comoonics/redhat-el5/productive/SRPMS/
http://download.atix.de/yum/comoonics/redhat-el4/productive/SRPMS/

They are usually distributed as soon as they pass the internal quality assurance.

-Reiner |
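For completeness, rebuilding one of those source RPMs works the usual way on RHEL/CentOS (a sketch only; the file name below is illustrative -- use whatever version is currently in the SRPMS directory):

    # Fetch a source RPM from one of the directories above (illustrative file name)
    wget http://download.atix.de/yum/comoonics/redhat-el5/productive/SRPMS/comoonics-bootimage-1.3-38.src.rpm

    # Rebuild binary packages from it; results end up under /usr/src/redhat/RPMS on RHEL4/5
    rpmbuild --rebuild comoonics-bootimage-1.3-38.src.rpm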
From: Sunil M. <sun...@or...> - 2008-09-24 01:57:45
|
Are the comoonics packages open sourced (GPL or otherwise)?
Wondering as I could not find the sources on the site.

Marc Grimme wrote:
> Hello,
> I just wanted to inform you that we have successfully ported the
> Open-Sharedroot cluster to be used with Novell SLES10 SP2 with OCFS2 1.4.1
> (SuSE version) and above.
>
> More information can be found here:
> http://www.opensharedroot.org/documentation/sles10-ocfs2-shared-root-mini-howto
>
> Have fun and always share the root ;-)
>
> - Marc |
From: Marc G. <gr...@at...> - 2008-09-23 19:02:10
|
Hello,

I just wanted to inform you that we have successfully ported the Open-Sharedroot cluster to be used with Novell SLES10 SP2 with OCFS2 1.4.1 (SuSE version) and above.

More information can be found here:
http://www.opensharedroot.org/documentation/sles10-ocfs2-shared-root-mini-howto

Have fun and always share the root ;-)

- Marc

--
Regards,

Marc Grimme / ATIX AG
http://www.atix.de/ http://www.open-sharedroot.org/ |
From: Mark H. <hla...@at...> - 2008-08-26 08:34:24
|
Hi Patrick,

the CDSL support is done with the help of bind mounts and symbolic links. During the boot process (inside the initrd), /cluster/cdsl/{nodeid} is bind-mounted to /cdsl.local. If you want e.g. /etc/sysconfig/network to be host-dependent, it has to be replaced by a symbolic link pointing to /cdsl.local/etc/sysconfig/network.

The administration of the CDSLs should be done using the following tools:

com-mkcdslinfrastructure (1) - builds the infrastructure needed to create CDSLs
com-mkcdsl (1) - make a CDSL

Best Regards,

Mark

> I'm trying to build a CentOS 5.2 based NFS cluster to replace
> our old CentOS 4.x NFS clusters, but am having trouble understanding
> how comoonics could possibly work with NFS. I thought you needed
> GFS (or some other cluster filesystem) to get CDSL support. Does NFS
> actually have support for CDSLs? Are CDSLs handled somewhere else,
> above NFS? Or does comoonics NFS support not use CDSLs for NFS based
> clusters? If so, then how does that work?

--
Gruss / Regards,

Dipl.-Ing. Mark Hlawatschek
http://www.atix.de/ http://www.open-sharedroot.org/ |
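To make the mechanism concrete, here is a minimal by-hand sketch of what the message above describes for a single file (paths follow the message; "1" stands for this node's nodeid, and in practice the com-mkcdsl tools listed above do this work, possibly with a slightly different directory layout):

    # During boot (inside the initrd) the per-node directory is bind-mounted:
    mount --bind /cluster/cdsl/1 /cdsl.local

    # Making /etc/sysconfig/network host-dependent by hand:
    mkdir -p /cluster/cdsl/1/etc/sysconfig
    cp -a /etc/sysconfig/network /cluster/cdsl/1/etc/sysconfig/network
    mv /etc/sysconfig/network /etc/sysconfig/network.orig
    ln -s /cdsl.local/etc/sysconfig/network /etc/sysconfig/network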