----- "CooLCaT@Vienna" <co...@co...> wrote:
> we thought of using openfiler as an nfs server, because openfiler also
> includes failover to a second openfiler installation.
>
> the goal is that every node can become a failover node, not just two
> nodes mirrored via drbd.
It's a nice goal, but the costs far, far outweigh the benefits. Even if you had the time and experience on hand, a simple redundant FC setup with two controllers and two switches is in the tens of thousands of dollars. You could go the cheap route with ATAoE, iSCSI, or GNBD and two block servers replicating with drbd in a criss-cross fashion to spread the load (or maybe OpenFiler will be sufficient for you), but it will take a while to set up, and if you haven't done it before, it may make you go bald. Even though it's a simple redundant configuration, it will cover most failure scenarios.

Right now, I think you're overengineering it. If you're worried about both of your block servers dying at the same time, make them redundant internally and use separate redundant NICs for the storage network. Most of your failures are going to come from software, so get the servers set up, do all of your benchmarking, tuning, and testing, finalize the configuration, and then leave them alone! Don't network them except on the private SAN and a secure, isolated administrative network. That will give you a little bit of wiggle room to test your security patches before applying them to your live servers.
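To make the criss-cross idea a bit more concrete, here's a rough sketch of the drbd side: two resources, each block server primary for one and secondary for the other, so both boxes carry load until one dies. The hostnames, addresses, and disks below are made up, and you should check the exact syntax against the drbd.conf man page for whatever drbd version you end up on:

    # /etc/drbd.conf (sketch -- names, IPs, and disks are placeholders)
    resource web {                      # blocksrv1 normally primary for this one
      protocol C;                       # synchronous replication
      on blocksrv1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on blocksrv2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }
    resource mail {                     # blocksrv2 normally primary for this one
      protocol C;
      on blocksrv1 {
        device    /dev/drbd1;
        disk      /dev/sdc1;
        address   10.0.0.1:7789;
        meta-disk internal;
      }
      on blocksrv2 {
        device    /dev/drbd1;
        disk      /dev/sdc1;
        address   10.0.0.2:7789;
        meta-disk internal;
      }
    }

Export /dev/drbd0 from blocksrv1 and /dev/drbd1 from blocksrv2 over iSCSI or GNBD, and let your cluster manager (heartbeat or similar) promote the surviving node and take over the exports when one of them goes away.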
Use GFS for the shared root filesystem because it's the most mature, the most configurable, and the closest to a natural POSIX-compliant filesystem, even though it can be a bitch at first. For other shared filesystems in small clusters, OCFS2 will take care of most needs and it's really easy to set up.
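For reference, getting OCFS2 going is basically one cluster.conf plus a mkfs. Something along these lines (node names, addresses, and the shared device are just examples; the device is whatever shared block device every node sees, e.g. the iSCSI export):

    # /etc/ocfs2/cluster.conf -- identical copy on every node
    cluster:
            node_count = 2
            name = webcluster

    node:
            ip_port = 7777
            ip_address = 10.0.0.11
            number = 0
            name = node1
            cluster = webcluster

    node:
            ip_port = 7777
            ip_address = 10.0.0.12
            number = 1
            name = node2
            cluster = webcluster

    # on one node only:
    mkfs.ocfs2 -N 4 -L shared /dev/sdb1    # -N = max simultaneous mounters
    # on every node:
    /etc/init.d/o2cb online webcluster
    mount -t ocfs2 /dev/sdb1 /shared

That's about it; compare that to the cluster.ccs/fence/lock manager dance you go through for GFS and you'll see why I say OCFS2 is the easy one for small stuff.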
--
Christopher G. Stach II