I was just hoping for a sanity check. I am hoping to increase the throughput possible on a single TCP connection past the wire speed of a single Ethernet link. I boldly marched forward with the assumption that this is possible with the bonding mechanism. However, after combing the documentation, I believe I may have been mistaken. It appears that no matter how many NICs I throw at the problem, I will still be limited in performance due to the specific hashing mechanisms used to identify which NICs a given packet will be sent to/from.
So, as a point-blank question: is there any bonding mode which can improve the performance of a single TCP connection across multiple NICs?
Yes, it's possible in balance-rr mode, but it usually won't show linear scaling as you add interfaces. The round-robin striping causes out-of-order delivery when the receiver is itself a bond / etherchannel, which in turn triggers TCP/IP's congestion control algorithms (this can be mitigated to some degree; see below).
I did some tests on this some time ago; if memory serves, a four-interface bond could obtain roughly 2.25 interfaces' worth of throughput on one TCP/IP connection. I don't recall the scaling for a two-interface bond; I'd expect something around 1.5. There is some discussion of this in the bonding-devel archives.
On the other hand, if you run, e.g., several lower-speed links in a balance-rr bond connected to an etherchannel-capable switch whose outbound interface is a single higher-speed link, you can get good scaling transmitting out to the single higher-speed link, provided that your switch is peppy enough not to reorder the packets between their arrival at the etherchannel and delivery to the outbound port. Realistically, I'd expect some impact here, but not as much as in the "receiving at the bond" case.
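For reference, here is a minimal sketch of creating a balance-rr bond with iproute2; the interface names eth0/eth1 and the address are placeholders for your actual setup, and all of this requires root:

```shell
# Create a bond device in balance-rr (round-robin) mode.
ip link add bond0 type bond mode balance-rr

# Slave interfaces must be down before they can be enslaved.
# eth0/eth1 are placeholder NIC names -- substitute your own.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring the bond up and assign an address (placeholder).
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
```

The switch ports the slaves plug into need a matching etherchannel / static link-aggregation configuration for traffic in the other direction.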
The relevant bit of the bonding documentation follows:
balance-rr: This mode is the only mode that will permit a single
TCP/IP connection to stripe traffic across multiple
interfaces. It is therefore the only mode that will allow a
single TCP/IP stream to utilize more than one interface's
worth of throughput. This comes at a cost, however: the
striping often results in peer systems receiving packets out
of order, causing TCP/IP's congestion control system to kick
in, often by retransmitting segments.

It is possible to adjust TCP/IP's congestion limits by
altering the net.ipv4.tcp_reordering sysctl parameter. The
usual default value is 3, and the maximum useful value is 127.
For a four interface balance-rr bond, expect that a single
TCP/IP stream will utilize no more than approximately 2.3
interface's worth of throughput, even after adjusting
tcp_reordering.

Note that this out of order delivery occurs when both the
sending and receiving systems are utilizing a multiple
interface bond. Consider a configuration in which a
balance-rr bond feeds into a single higher capacity network
channel (e.g., multiple 100Mb/sec ethernets feeding a single
gigabit ethernet via an etherchannel capable switch). In this
configuration, traffic sent from the multiple 100Mb devices to
a destination connected to the gigabit device will not see
packets out of order. However, traffic sent from the gigabit
device to the multiple 100Mb devices may or may not see
traffic out of order, depending upon the balance policy of the
switch. Many switches do not support any modes that stripe
traffic (instead choosing a port based upon IP or MAC level
addresses); for those devices, traffic flowing from the
gigabit device to the many 100Mb devices will only utilize one
interface.

If you are utilizing protocols other than TCP/IP, UDP for
example, and your application can tolerate out of order
delivery, then this mode can allow for single stream datagram
performance that scales near linearly as interfaces are added
to the bond.
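The tcp_reordering knob mentioned above can be adjusted at runtime like so (the change lasts until reboot unless you also add it to /etc/sysctl.conf; requires root):

```shell
# Raise TCP's reordering tolerance from the usual default of 3
# toward the documented maximum useful value of 127.
sysctl -w net.ipv4.tcp_reordering=127

# Read back the current value to confirm.
sysctl -n net.ipv4.tcp_reordering
```

Note this only raises the threshold before TCP treats reordering as loss; it doesn't eliminate the reordering itself, which is why the scaling still tops out well below linear.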