#348 igb and kernel >3.0 slow performance

closed
None
standalone_driver
1
2014-08-22
2012-07-19
No

Hello,
I don't know if this is the right place for my problem with poor network performance. If not, could you perhaps point me to a better place for my question?

I use several Gentoo systems. There aren't any performance problems with kernel 3.0, but if I use a 3.2 kernel, the network speed drops from 80 MB/s (kernel 3.0, igb-3.4.8) down to 7 MB/s (kernel 3.2, igb-3.4.8) and there are many retransmitted segments.
I checked the kernel changelogs but did not find any information about changes that could explain this slow network performance.

I tried a vanilla 3.2 kernel with the same results.

If I set the network speed down to 100 Mbit (ethtool -s eth0 speed 100 duplex full), the network performance goes up from ~7 MB/s to 11.2 MB/s, without a huge amount of retransmitted segments.
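
(For reference, a minimal sketch of forcing the lower speed and restoring autonegotiation afterwards, assuming the interface is eth0:)

# force 100 Mbit/s full duplex for the test
ethtool -s eth0 speed 100 duplex full
# restore autonegotiation (back to 1 Gbit/s) afterwards
ethtool -s eth0 autoneg on
# check the currently negotiated link settings
ethtool eth0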

I checked the switches too, but did not find any error messages. The cables are all fine, because with kernel 3.0 everything works as expected.
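
(The retransmissions can be confirmed on the sending host while the transfer runs; a minimal sketch using the standard net-tools/iproute2 utilities:)

# cumulative TCP retransmission counters
netstat -s | grep -i retrans
# per-connection TCP details, including retransmit counts
ss -ti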

dmesg (igb driver messages):
igb 0000:01:00.0: power state changed by ACPI to D0
igb 0000:01:00.0: power state changed by ACPI to D0
igb 0000:01:00.0: PCI INT A -> GSI 28 (level, low) -> IRQ 28
igb 0000:01:00.0: setting latency timer to 64
igb 0000:01:00.0: irq 73 for MSI/MSI-X
igb 0000:01:00.0: irq 74 for MSI/MSI-X
igb 0000:01:00.0: DCA enabled
igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection
igb 0000:01:00.0: eth0: (PCIe:2.5GT/s:Width x4)
igb 0000:01:00.0: eth0: MAC: 00:25:90:49:53:58
igb 0000:01:00.0: eth0: PBA No: FFFFFF-0FF
igb 0000:01:00.0: LRO is disabled
igb 0000:01:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)

lshw -C network:
*-network:0
description: Ethernet interface
product: 82576 Gigabit Network Connection
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:01:00.0
logical name: eth0
version: 01
serial: 00:25:90:49:53:58
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=igb driverversion=3.4.8 duplex=full firmware=1.4-3 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:28 memory:fbd60000-fbd7ffff memory:fbd40000-fbd5ffff ioport:e880(size=32) memory:fbd1c000-fbd1ffff memory:fbd20000-fbd3ffff

Do you need any other information? (Board, BIOS Version, ...)

Best regards,
thomas.

Discussion

  • Akeem G. Abodunrin

    Hi Thomas,

    I apologize that you are having problems with your device. In order to find the root cause of this issue, please send me an "lspci -vv" log and an ethregs dump.

    As an aside, do you know if igb-3.4.7 exhibits the same behavior in the same test setup? And have you tried igb-3.4.8 on the latest kernel, 3.4.5? This would help to isolate the issue you are encountering.

    Regards,

    Akeem G. Abodunrin
    Linux Development
    LAN Access Division
    Intel Corporation

     
  • Carolyn Wyborny

    Carolyn Wyborny - 2012-07-19
    • assigned_to: Akeem G. Abodunrin
     
  • samoth kinlop

    samoth kinlop - 2012-07-20

    "ethregs -D" output (ethregs-1.16.0)

     
  • samoth kinlop

    samoth kinlop - 2012-07-20

    Hello Akeem,

    I tried different kernels >3.0 (3.2.21, 3.4.4, 3.5-rc7) with different igb versions (3.2 - 3.4.8); there were no improvements, the network performance was ~8 MB/s.

    With a 3.0 kernel (3.0.17, 3.0.36) and different igb versions (3.0 - 3.4.8), the network performance is ~80 MB/s.

    Switch: D-Link DGS-3224TGR
    Server board: Supermicro X8DTT with the latest firmware (2.1b)
    OS: Gentoo

    BTW: I ran a test on another server with Knoppix (KNOPPIX_V7.0.3DVD-2012-06-25-DE.iso, igb-3.2.10-k) with the same result: ~8 MB/s.

    best regards,
    thomas.

     
  • Akeem G. Abodunrin

    Hi Thomas,

    I need more information from you so that I can reproduce and isolate this issue.

    Please execute the following on your test system with kernel 3.2 and send the results/logs to me:

    "tcpdump -c 10000 -i ethX -w dump.cap", so that I can get more information about your test network traffic.

    "sysctl -a > sysctl.txt", so that I can learn more about your kernel runtime configuration/parameters.

    "netstat -s" from both sides, before and after the test.

    "ethtool -S ethX" before and after, to see driver stats (packet drops, tx/rx_errors, or long-latency issues).
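
    (A consolidated sketch of those capture steps; the interface name eth0 and the output file names are only placeholders:)

    # capture 10000 packets of the test traffic
    tcpdump -c 10000 -i eth0 -w dump.cap
    # dump the kernel runtime configuration
    sysctl -a > sysctl.txt
    # protocol statistics, taken on both hosts before and after the test
    netstat -s > netstat_before.txt
    # driver/NIC statistics, likewise before and after the test
    ethtool -S eth0 > ethtool_stats_before.txt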

    In addition, I need to know the exact test commands that you are running on both ends. Also, please tell me more about your link partner (OS and system configuration).

    You could turn off all offloads, "ethtool -K ethX rx off tx off tso off gro off lso off sg off", or test "netperf -t TCP_RR -H remote -Cc -- -r1", to see any difference in performance on this kernel.
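
    (A minimal sketch of that suggestion, assuming the interface is eth0 and that "remote" stands for the link partner's hostname; "gso" is used below because ethtool's -K option has no "lso" keyword, and netperf must be installed on both ends:)

    # disable the common offloads on the interface under test
    ethtool -K eth0 rx off tx off tso off gro off gso off sg off
    # confirm the resulting offload settings
    ethtool -k eth0
    # latency-oriented request/response test against the link partner
    netperf -t TCP_RR -H remote -Cc -- -r1
    # bulk throughput test for comparison with the observed MB/s figures
    netperf -t TCP_STREAM -H remote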

    Thanks,

    ~Akeem

     
  • Todd Fujinaka

    Todd Fujinaka - 2013-07-09
    • status: open --> closed
     
  • Todd Fujinaka

    Todd Fujinaka - 2013-07-09

    Closing due to inactivity.

     
