I'm new to bonding, but I've been reading a lot over the last couple of days. The problem is that I've tried different modes and they all work (meaning the network connection comes up), but it's hard to test them and hard to decide which one will work best in my case.
What I'm looking for is high bandwidth.
If I understand correctly, most of the modes separate traffic by using different interfaces for different clients. So when I run a test (iperf, for example) it can't go above a single interface's bandwidth, and I get the same result for each mode - about 950 Mbit/s on a Gbit Ethernet connection.
Here is my setup:
I have 10 identical servers, with 4 Ethernet interfaces each. Each server has a partition on the local hard drive, and with additional software these partitions are grouped in pairs and mirrored. There is a lot of network traffic between the servers in each pair, but not between the pairs.
The clients are 3 web servers that NFS mount the partitions. The good thing is that these are the only three servers that will access the storage servers. We may add another one or two in the future but for now there is no other system that will require high bandwidth to the storage servers.
All systems are running CentOS 5.5 64-bit and are connected to the same switch. The switch supports all the required trunking and 802.3ad link aggregation. The OS device drivers should have all the required features to take advantage of any bonding mode (as far as I know).
So far I selected balance-alb (mode 6) just based on what I've read.
Is that a good choice?
Any recommendations are highly appreciated.
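For reference, here is roughly how I set up the bond on CentOS 5 (the IP address and interface names below are just placeholders, not my real ones):

```shell
# /etc/modprobe.conf
alias bond0 bonding
options bonding mode=balance-alb miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (address is a placeholder)
DEVICE=bond0
IPADDR=192.168.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeated for eth1..eth3)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```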
None of the bonding modes (except for balance-rr) will permit a single connection to receive more than one interface's worth of throughput.
The balance-rr mode will round-robin packets across the slaves, and in some circumstances can obtain better throughput, but its benefit is usually limited by TCP, in particular the fast retransmission scheme. This is discussed in the bonding.txt documentation (downloadable from sourceforge, or just google for it); look for "tcp_reordering." Even after that adjustment, the tests I ran (admittedly very long ago) topped out at about 2.4 interfaces' worth of throughput on a 4-interface bond for TCP. If your application uses UDP, it can get higher rates, but it must be able to tolerate out-of-order delivery (the round-robin action generally results in packets being received out of order by the peer).
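To raise the reordering tolerance, something like the following in /etc/sysctl.conf works (the value 127 is purely illustrative; the stock default is 3):

```shell
# Raise TCP's reordering tolerance so that balance-rr's out-of-order
# delivery is less likely to be mistaken for loss and trigger fast
# retransmit. The value is illustrative; tune for your own workload.
net.ipv4.tcp_reordering = 127
```

Apply it with "sysctl -p" (or set it at runtime with "sysctl -w net.ipv4.tcp_reordering=127").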
Some of the modes (alb and tlb, for example) balance traffic by destination host. Others (balance-xor, 802.3ad) can use a hash of the TCP or UDP port number along with the IP address information.
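To illustrate how the port-aware hash spreads flows, here is a sketch in shell arithmetic of the layer3+4 formula from bonding.txt - ((source port XOR dest port) XOR (source IP XOR dest IP)) AND 0xffff, modulo the slave count. For readability this sketch uses small integers in place of full 32-bit IP addresses, so it only demonstrates the shape of the calculation:

```shell
# Simplified sketch of bonding's xmit_hash_policy=layer3+4 hash.
# Args: src_ip dst_ip src_port dst_port num_slaves (IPs as plain integers
# here; the real kernel code uses the full 32-bit addresses).
slave_for_flow() {
  local sip=$1 dip=$2 sport=$3 dport=$4 nslaves=$5
  echo $(( ( (sport ^ dport) ^ ((sip ^ dip) & 0xffff) ) % nslaves ))
}

# Two NFS connections between the same pair of hosts, differing only in
# source port, can land on different slaves:
slave_for_flow 10 20 40000 2049 4   # → 3
slave_for_flow 10 20 40001 2049 4   # → 2
```

A host-based policy (layer2, or the alb/tlb balancing) would map both of these flows to the same slave, which is exactly the difference being described above.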
In summary, the alb/tlb modes have the benefit that they will attempt to intelligently assign traffic to slaves, putting "new" traffic streams (destinations, as these modes balance by host) on to the least loaded slave. They have the downside that all traffic to a given host will go across the same slave. The alb mode also makes some people nervous because it sends special ARPs to the peers to "assign" particular slaves to particular peers.
On the other hand, the xor/802.3ad modes (particularly if xmit_hash_policy=layer3+4) can put discrete connections to the same host on different slaves, which may provide better overall throughput between the two hosts. On the down side, the balancing is done by a hash, so your overall utilization is dependent somewhat on the luck of the math, particularly if you have a small number of connections in comparison to the number of slaves (e.g., four connections over four slaves will usually leave at least one slave idle).
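For the 802.3ad case on CentOS 5, the module options would look something like this in /etc/modprobe.conf (parameter values here are examples, not requirements):

```shell
# 802.3ad (LACP) with a per-connection hash; requires switch-side support.
alias bond0 bonding
options bonding mode=802.3ad xmit_hash_policy=layer3+4 miimon=100
```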
Your best bet is to come up with representative workloads, then test each mode and see how they do. It's possible that you have the unusual case in which balance-rr makes good sense (usually the penalty from TCP fast retransmit overwhelms the benefit of striping the traffic across multiple interfaces). Be sure to test -rr with tcp_reordering raised from the default.
Also be aware that your balance of traffic into the systems may be limited by the switch's balance algorithm for the etherchannel-compatible (balance-xor, balance-rr) or 802.3ad modes. Some switches only balance by a MAC address hash; if yours has a TCP/UDP and/or IP (layer 4 or layer 3, respectively) hash, use that.
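For example, if your switch happens to be a Cisco Catalyst, the inbound balancing policy is a global setting along these lines (the exact options available vary by platform, so check yours):

```shell
# Cisco IOS global config: balance EtherChannel traffic by TCP/UDP port
# pair rather than by MAC address.
port-channel load-balance src-dst-port
```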
Thank you very much, vosburgh, for the detailed message and the time you took to answer my question.
I read the bonding.txt file but your message was indeed very helpful and appreciated.