From: Igor S. <xre...@gm...> - 2009-01-28 18:10:37
Hi Alexander,
your fix indeed solves the crash problem (in vanilla 2.6.28.2) - thank you very much.
But now I have another problem.
When mtu <= 1000 I cannot receive packets bigger than 1000 bytes. With mtu
1050 and bigger, fragmentation works correctly.
hostA ------(e1000)_gateway_(igb)------ hostB
10.48.201.201                      10.48.0.254
-------------------------------------------------------------
Sending one ping of size 1500 from hostA to hostB works when the
mtu on the igb NIC is 1050 or bigger.
Test1:
Ping on client is OK, no packet loss.
Tcpdump on server:
10.48.201.201 > 10.48.0.254: icmp: echo request (frag 63563:1024@0+)
(ttl 63, len 1044)
10.48.201.201 > 10.48.0.254: icmp (frag 63563:456@1024+) (ttl 63, len 476)
10.48.201.201 > 10.48.0.254: icmp (frag 63563:28@1480) (ttl 63, len 48)
10.48.0.254 > 10.48.201.201: icmp: echo reply (frag 59125:1480@0+)
(ttl 64, len 1500)
10.48.0.254 > 10.48.201.201: icmp (frag 59125:28@1480) (ttl 64, len 48)
On igb NIC:
mtu 1050, rx_buffer_len=2048
rx_errors: 0
rx_length_errors: 0
rx_long_length_errors: 0
-------------------------------------------------------------
Test2 - the received reply packet is dropped by the igb NIC.
Sending one ping of size 1500 from hostA to hostB does not work when
the mtu on the igb NIC is 1000 or less.
Client ping statistics:
1 packets transmitted, 0 received, 100% packet loss, time 0ms
Tcpdump on client:
IP (tos 0x0, ttl 64, id 65483, offset 0, flags [+], proto 1, length:
1500) 10.48.201.201 > 10.48.0.254: icmp 1480: echo request seq 0
IP (tos 0x0, ttl 64, id 65483, offset 1480, flags [none], proto 1,
length: 48) 10.48.201.201 > 10.48.0.254: icmp
IP (tos 0x0, ttl 63, id 27391, offset 1480, flags [none], proto 1,
length: 48) 10.48.0.254 > 10.48.201.201: icmp
Tcpdump on server:
10.48.201.201 > 10.48.0.254: icmp: echo request (frag 65483:880@0+)
(ttl 63, len 900)
10.48.201.201 > 10.48.0.254: icmp (frag 65483:600@880+) (ttl 63, len 620)
10.48.201.201 > 10.48.0.254: icmp (frag 65483:28@1480) (ttl 63, len 48)
10.48.0.254 > 10.48.201.201: icmp: echo reply (frag 27391:1480@0+)
(ttl 64, len 1500)
10.48.0.254 > 10.48.201.201: icmp (frag 27391:28@1480) (ttl 64, len 48)
On igb NIC:
mtu 900, rx_buffer_len=1024
rx_errors: 2
rx_length_errors: 2
rx_long_length_errors: 2
# ethregs -s 0000:09:00.0 | grep RCTL
RCTL 04058022
SRRCTL(0) 02000000
SRRCTL(1) 80000400
SRRCTL(2) 80000400
SRRCTL(3) 80000400
Thanks for your help and fast response,
Igor.
On Tue, Jan 27, 2009 at 6:34 PM, Duyck, Alexander H
<ale...@in...> wrote:
> Igor,
>
> I believe I found the root cause of the issue. It looks like with mtu settings smaller than 1K, larger packets were spanning multiple descriptors, which isn't supported when jumbo frames aren't enabled.
>
> The patch below should resolve the issue for the 2.6.28.2 in kernel driver, and the next release of igb will contain this fix.
>
> Thanks,
>
> Alex
> ---
>
> drivers/net/igb/igb_main.c | 6 ++++--
> 1 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/igb/igb_main.c b/drivers/net/igb/igb_main.c
> index b82b0fb..34df90c 100644
> --- a/drivers/net/igb/igb_main.c
> +++ b/drivers/net/igb/igb_main.c
> @@ -1839,7 +1839,8 @@ static void igb_setup_rctl(struct igb_adapter *adapter)
> */
> rctl &= ~(E1000_RCTL_SBP | E1000_RCTL_LPE | E1000_RCTL_SZ_256);
>
> - if (adapter->netdev->mtu <= ETH_DATA_LEN)
> + /* enable LPE when <= 1K to prevent multi-descriptor receives */
> + if (adapter->rx_buffer_len > IGB_RXBUFFER_1024)
> rctl &= ~E1000_RCTL_LPE;
> else
> rctl |= E1000_RCTL_LPE;
> @@ -1865,7 +1866,8 @@ static void igb_setup_rctl(struct igb_adapter *adapter)
> */
> /* allocations using alloc_page take too long for regular MTU
> * so only enable packet split for jumbo frames */
> - if (rctl & E1000_RCTL_LPE) {
> + if (adapter->netdev->mtu > ETH_DATA_LEN) {
> + rctl |= E1000_RCTL_LPE;
> adapter->rx_ps_hdr_size = IGB_RXBUFFER_128;
> srrctl |= adapter->rx_ps_hdr_size <<
> E1000_SRRCTL_BSIZEHDRSIZE_SHIFT;