On Monday, 20.04.2009 at 18:10 +0530, er krishna wrote:
On Mon, Apr 20, 2009 at 4:16 PM, Marc - A. Dahlhaus [ Administration | Westermann GmbH ] <firstname.lastname@example.org> wrote:
This looks like a fencing-related problem. Did you actually unfence the
node that you killed? You need to do this with
fence_ack_manual -n nodename
on one of the remaining nodes, as the whole cluster stops its activities
as long as fencing doesn't return successfully.
>>>>> Sorry, I didn't do that. I just powered off my PC. I will do it now, as per your suggestion.
Also, you should remove the empty fence methods from your configuration,
or better, remove the fencing parts entirely, as cman will use
fence_manual by default if no fencing-related configuration is defined.
>>>> OK, I will remove the fencing parts.
Also, it is essential for a reliably working cluster to get the fencing
right. You should not, by any means, use fence_manual in a production
environment. You should also have redundant fencing in your cluster.
>>> I will fix my fencing and rebuild my cluster.conf file.
We have fence_ilo (HP iLO fencing) and fence_apc (power fencing via PDU)
in place for that purpose, but we use FC in production and AoE only for
>>> I don't have this hardware to test with.
This was just an example to show how it could be done ;)
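To make that example concrete, here is a hypothetical cluster.conf fencing section along those lines. All device names, addresses, and credentials are made up for illustration; the exact attributes each agent accepts depend on your cluster version, so check the fence_ilo and fence_apc agent documentation before copying anything:

```xml
<!-- Hypothetical fencing sketch: iLO as the primary method, with the
     APC PDU as a power-fencing fallback. Names/IPs are placeholders. -->
<fencedevices>
  <fencedevice agent="fence_ilo" name="ilo-node1"
               hostname="10.0.0.11" login="admin" passwd="secret"/>
  <fencedevice agent="fence_apc" name="apc-pdu"
               ipaddr="10.0.0.50" login="apc" passwd="secret"/>
</fencedevices>

<clusternode name="centos" nodeid="1">
  <fence>
    <!-- method "1" is tried first; if it fails, method "2" is tried -->
    <method name="1">
      <device name="ilo-node1"/>
    </method>
    <method name="2">
      <device name="apc-pdu" port="1"/>
    </method>
  </fence>
</clusternode>
```

With two methods per node, the cluster falls back to power fencing if the management board is unreachable, which is what gives you the redundancy mentioned above.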
Read your logs on the remaining nodes, as cman logs a lot of useful
information there, and also read the entire RHEL Cluster manual and the FAQ at
>>> Just asking: can I refer to the logs in
1) /var/log/messages and 2) dmesg?
Is there any other path where I can see the corresponding logs?
That's a moving target lately, so it depends on the version you use and the configuration of your syslog daemon.
But cman_tool should tell you at any given time what is going on in your cluster. So take a look at man cman_tool and use this tool to debug your cluster state. "cman_tool nodes" and "cman_tool status" should give you what you need in most cases...
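If you check node state often, the "cman_tool nodes" output is simple enough to script against. A minimal sketch, assuming the column layout from the cman_tool man page (the sample output and node names below are illustrative, not taken from your cluster):

```python
# Parse illustrative `cman_tool nodes` output into a mapping of
# node name -> membership status ("M" = member, "X" = dead).
SAMPLE = """\
Node  Sts   Inc   Joined               Name
   1   M    548   2009-04-20 15:42:01  centos
   2   M    548   2009-04-20 15:42:01  centos1
   3   X    552                        centos2
"""

def parse_nodes(output):
    """Return {node_name: status} from `cman_tool nodes` output."""
    status = {}
    for line in output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 2:
            continue
        # second field is the status column, last field is the node name
        # (a dead node simply has empty Joined columns)
        status[fields[-1]] = fields[1]
    return status

nodes = parse_nodes(SAMPLE)
dead = [name for name, sts in nodes.items() if sts == "X"]
print(dead)  # → ['centos2']  (the node still awaiting fence_ack_manual)
```

In practice you would feed the real output in (e.g. via subprocess) instead of the sample string; the dead list then tells you which node the cluster is still waiting to see fenced.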
Further, heartiest thanks for all the valuable information. I will send my cluster.conf file again if I get an error message or my NFS failover doesn't work :).
Just asking for clarification. Suppose I have my nfstest directory exported over the network via NFS from node1 (the centos machine), and it is mounted on node3 (the centos2 machine). A similar directory (nfstest) exists on node2 (the centos1 machine). Is it possible that, if node1 crashes, the nfstest directory from node2 (centos1) will be exported on the network and automatically mounted on node3 (centos2)? That is what I was trying to achieve. Please tell me whether this result can be achieved with this setup.
You should search the cluster wiki; there is a cookbook/howto for NFS clustering there. If some questions remain unanswered, you can direct them to the helpful crowd on the linux-cluster list (information on that list is on the cluster wiki).
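As a rough illustration of the direction the howto takes (every name, path, device, and the IP below are placeholders, and the exact resource attributes depend on the rgmanager version in your CentOS release), a failover NFS service in cluster.conf might be sketched like this:

```xml
<!-- Hypothetical rgmanager sketch: one NFS service that prefers
     node1 (centos) and fails over to node2 (centos1). -->
<rm>
  <failoverdomains>
    <failoverdomain name="nfs-domain" ordered="1" restricted="1">
      <failoverdomainnode name="centos"  priority="1"/>
      <failoverdomainnode name="centos1" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <clusterfs name="nfstest-fs" fstype="gfs"
               device="/dev/vg0/gfslv" mountpoint="/nfstest"/>
    <nfsexport name="nfstest-export"/>
    <nfsclient name="nfstest-clients" target="*" options="rw"/>
    <!-- floating service IP that moves with the export on failover -->
    <ip address="192.168.1.100" monitor_link="1"/>
  </resources>
  <service name="nfs-svc" domain="nfs-domain" autostart="1">
    <clusterfs ref="nfstest-fs">
      <nfsexport ref="nfstest-export">
        <nfsclient ref="nfstest-clients"/>
      </nfsexport>
    </clusterfs>
    <ip ref="192.168.1.100"/>
  </service>
</rm>
```

The key point for your question: the client (centos2) would mount the floating IP, not a physical node, so when the service migrates from node1 to node2 the export follows it and the client mount keeps working without being remounted by hand.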
Thanks & Best regards,
On Monday, 20.04.2009 at 15:42 +0530, er krishna wrote:
> Dear All,
> I am trying to set up NFS failover and recovery via GFS (using the
> CMAN tool). Of course, I have created a GFS file system on my
> logical volume (basically it sits on exported block devices). I have
> three nodes: centos, centos1 and centos2. I am attaching my
> cluster.conf file for further reference. Everything seems fine, but I
> am not able to migrate my NFS services when I power off my first node.
> Does anybody have any idea about it?
> Thanks & Best Regards,