

802.3ad and "Number of Ports"

Greg
2008-03-06
2013-06-06
  • Greg
    2008-03-06

    I'm trying to get 802.3ad set up with 3 GigE cards for a high-performance computing application.  The switch I have is 802.3ad-compliant.  I'm currently not seeing any additional bandwidth or speedup.  The content of the /proc/net/bonding/bond0 file is given at the end of this post.

    The "Number of ports" is given as "1", which I suspect is related to the root of my problems.  Is configuration of this "Number of ports" parameter accomplished in Linux or on the switch? (I have link aggregation set up on the switch...I think...but something, somewhere, is obviously still wrong).

    Also, from the "bonding.txt" file in the kernel docs:

    "...so in a "gatewayed" configuration, all outgoing traffic will generally use the same device.  Incoming traffic may also end up on a single device, but that is dependent upon the balancing policy of the peer's 802.3ad implementation.  In a "local" configuration, traffic will be distributed across the devices in the bond."

    How, specifically, do I set up a "local" configuration?  It sounds like "gateway" may be what I have, and "local" is what I want.  Is this specified in Linux or on the switch?

    Thanks,
    Greg

    ***

    [fischega@master BOUNCE]$ cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v2.6.1 (October 29, 2004)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    802.3ad info
    LACP rate: slow
    Active Aggregator Info:
            Aggregator ID: 1
            Number of ports: 1
            Actor Key: 17
            Partner Key: 1
            Partner Mac Address: 00:00:00:00:00:00

    Slave Interface: eth0
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:e0:81:43:75:9c
    Aggregator ID: 1

    Slave Interface: eth2
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:1b:21:13:4c:e8
    Aggregator ID: 2

    Slave Interface: eth3
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:1b:21:13:4c:e9
    Aggregator ID: 3

    ***

    • Greg
      2008-03-09

      Today I was able to get 802.3ad mode working with the switch (it turns out I didn't have the settings quite right; the switch UI is awful).  I am able to get the "Number of Ports" to read "3", and there's a "Partner MAC address" identified.  Still, however, I don't see any speedup.

      Has anyone done LACP/802.3ad with GigE interfaces and/or used channel bonding in high-performance computing (beowulf clustering) situations?  Should I be observing a significant speedup? Thanks.

      Greg

      • Jay Vosburgh
        2008-03-10

        The balance of traffic depends upon the balance mode selected (the xmit_hash_policy parameter), and the nature of your traffic.  By default, 802.3ad balances by MAC address (the layer2 policy), which means that all traffic to a given peer on the local network will use the same interface in the aggregation.  You can also select a layer2+3 or layer3+4 policy, which utilize varying bits of information from the upper layer protocols to balance the traffic, generally giving a better spread of traffic.  These options are described in the bonding.txt documentation included with the kernel source (or downloaded from sourceforge).
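        For reference, a sketch of how the hash policy can be selected at module load time (option names as documented in bonding.txt; whether layer2+3/layer3+4 are available, and whether the policy can also be changed via sysfs at runtime, depends on your driver version):

```
# /etc/modprobe.conf (or a file under /etc/modprobe.d/) -- a sketch;
# adjust the bond name and options to your distribution.
alias bond0 bonding
options bonding mode=802.3ad miimon=100 lacp_rate=slow xmit_hash_policy=layer3+4
```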

        All of the algorithms are based on math, and not traffic analysis, so there is always a chance of misbalance due to the way the math might work out (e.g., all of the MAC addresses are odd numbers and you have two slaves in the aggregation).
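        To make that concrete, here is a minimal sketch (in Python, not the driver's actual code) of the default layer2 hash as described in bonding.txt, (source MAC XOR destination MAC) modulo slave count, which is effectively decided by the MACs' low-order octets:

```python
# Sketch of bonding.txt's layer2 transmit hash:
# slave index = (source MAC XOR destination MAC) modulo slave count.

def layer2_hash(src_mac: str, dst_mac: str, slave_count: int) -> int:
    """Return the index of the slave chosen for this MAC pair."""
    src = int(src_mac.split(":")[-1], 16)  # low-order octet of source MAC
    dst = int(dst_mac.split(":")[-1], 16)  # low-order octet of destination
    return (src ^ dst) % slave_count

# The misbalance case described above: with two slaves, peers whose MAC
# addresses all end in an odd octet hash to the same slave every time
# when the source MAC ends in an even octet (0x9c here).
src = "00:e0:81:43:75:9c"
for dst in ("00:1b:21:13:4c:01", "00:1b:21:13:4c:03", "00:1b:21:13:4c:05"):
    print(dst, "->", layer2_hash(src, dst, 2))   # all three print "-> 1"
```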

        Also, none of the balance algorithms will stripe traffic from a single network connection across multiple slaves (doing so is a violation of the 802.3ad standard), so any given connection ("conversation" in 802.3ad-speak, which includes UDP sessions) won't utilize more than one interface's worth of throughput.

        Lastly, the above applies only to traffic going out from the bonding system; incoming traffic is balanced according to the policy of the 802.3ad peer (typically a switch).