From: Tantilov, E. S <emi...@in...> - 2011-03-24 23:06:36
> -----Original Message-----
> From: Sanjay Rao [mailto:sr...@ve...]
> Sent: Thursday, March 24, 2011 2:56 PM
> To: Tantilov, Emil S
> Cc: e10...@li...
> Subject: Re: Performance deterioration under load - 82599EB
>
> >>> Under load, doing an ifconfig ethX down and up brings down the
> >>> performance of the NIC to 1/1000th of the initial number.
> >>
> >> Could you provide more detail about the type of traffic you were running?
> >>
> >> After the reset is the interface operational at all - i.e. is the
> >> performance degradation the only problem you are seeing? We have a known
> >> bug which we're working on that under a similar test (resets while
> >> passing traffic) will cause the device to hang on Rx.
> >>
> >> Thanks,
> >> Emil
>
> I am using pktgen to generate 8 million pps (65-byte UDP packets). I have
> two 10G interfaces, eth6 and eth5. It is a simple forwarding test with
> packets coming in on eth6 and sent out on eth5. The source and destination
> configuration of pktgen is as below:
>
> pgset "src_min 100.0.0.1"
> pgset "src_max 100.254.254.254"
> pgset "dst 192.168.3.136"
> pgset "pkt_size 64"
>
> I am running a RHEL6 kernel on a Dell 610 server. Missed packets are
> accounted for by "rx_missed_errors". Performance degradation is the only
> thing I am seeing so far; the interface is responsive otherwise. It seems
> like a problem with only the rx side, as tx works fine in this state as
> well. Let me know if you need further info.
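[Editor's note: for readers unfamiliar with pktgen, the pgset lines quoted above are normally driven by a small helper script that writes commands into pktgen's /proc interface. The sketch below shows one common way to do that; the device binding (eth6), the pktgen thread (kpktgend_0), and the count setting are assumptions, not taken from the report.]

```shell
#!/bin/sh
# Sketch of a pktgen driver script using the pgset lines from the report.
# Assumptions: device eth6 is bound to kernel thread 0; "count 0" means
# run until stopped. Requires root and "modprobe pktgen".
PGDEV=/proc/net/pktgen

pgset() {
    # Write one pktgen command to the currently selected control file
    echo "$1" > "$PGFILE"
}

setup_and_run() {
    # Bind the transmit device to pktgen kernel thread 0
    PGFILE=$PGDEV/kpktgend_0
    pgset "rem_device_all"
    pgset "add_device eth6"

    # Per-device parameters, matching the report above
    PGFILE=$PGDEV/eth6
    pgset "count 0"                  # 0 = run until stopped
    pgset "pkt_size 64"
    pgset "src_min 100.0.0.1"
    pgset "src_max 100.254.254.254"
    pgset "dst 192.168.3.136"

    # Start transmitting (blocks until interrupted)
    PGFILE=$PGDEV/pgctrl
    pgset "start"
}

# Only run when the pktgen module is actually loaded
if [ -d "$PGDEV" ]; then
    setup_and_run
fi
```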
>
> IRQ assignment is as below:
>
> --- eth6 ---
> IRQ 103  100
> IRQ 104  200
> IRQ 105  400
> IRQ 106  800
> IRQ 107  1000
> IRQ 108  2000
> IRQ 109  4000
> IRQ 110  8000
> IRQ 111  10000
> IRQ 112  20000
> IRQ 113  40000
> IRQ 114  80000
> IRQ 115  100000
> IRQ 116  200000
> IRQ 117  400000
> IRQ 118  800000
> IRQ 119  1
> IRQ 120  2
> IRQ 121  4
> IRQ 122  8
> IRQ 123  10
> IRQ 124  20
> IRQ 125  40
> IRQ 126  80
>
> --- eth5 ---
> IRQ 128  100
> IRQ 129  200
> IRQ 130  400
> IRQ 131  800
> IRQ 132  1000
> IRQ 133  2000
> IRQ 134  4000
> IRQ 135  8000
> IRQ 136  10000
> IRQ 137  20000
> IRQ 138  40000
> IRQ 139  80000
> IRQ 140  100000
> IRQ 141  200000
> IRQ 142  400000
> IRQ 143  800000
> IRQ 144  1
> IRQ 145  2
> IRQ 146  4
> IRQ 147  8
> IRQ 148  10
> IRQ 149  20
> IRQ 150  40
> IRQ 151  80
>
> Thanks,
> Sanjay

Sanjay,

Thanks for the additional details. We attempted a quick repro, but were
unable to reproduce the performance drop you described. I think it's best
if you submit a bug at e1000.sf.net and include all the details from this
email. Please also add the output of dmesg, lspci -vvv, ethtool -d and
ethtool -e from the interface under test.

Thanks,
Emil
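[Editor's note: the diagnostics Emil requests at the end of the thread can be gathered in one pass with a short script like the sketch below. The default interface name (eth6) and the output location are assumptions; run as root so the register and EEPROM dumps succeed.]

```shell
#!/bin/sh
# Sketch: collect the diagnostics requested in the reply above for one
# interface. Interface name and output directory are placeholders.
IFACE=${1:-eth6}
OUT=$(mktemp -d)

dmesg                 > "$OUT/dmesg.txt"          2>&1
lspci -vvv            > "$OUT/lspci.txt"          2>&1
ethtool -d "$IFACE"   > "$OUT/ethtool-regs.txt"   2>&1  # register dump
ethtool -e "$IFACE"   > "$OUT/ethtool-eeprom.txt" 2>&1  # EEPROM dump

echo "Diagnostics written to $OUT"
```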