From: Michael <mi...@tr...> - 2010-05-11 12:00:55
Hi all, sorry for the question, but I somehow do not understand the RxIntDelay parameter.

I have a high-end machine with several 82571EB GbE controllers. This machine is a firewall, and a cat of /proc/net/dev shows that it drops a lot of packets. RxDescriptors and TxDescriptors are already set to 4096, and according to the vendor RxIntDelay = 128. During peak time the machine receives approx. 70K packets/second on each interface, with higher peaks, and has lots of idle CPU power.

My support now insists that I change RxIntDelay from 128 to 1024 to reduce the packet drops (they claim this is absolutely necessary). But the documentation from Intel about this feature (http://www.intel.com/support/network/sb/cs-009209.htm) states: "If the system is reporting dropped receives, this value may be set too HIGH, causing the driver to run out of available receive descriptors." (As the machine is about 70% idle, I do not care too much about CPU usage.)

As the firewall is highly productive, I am not allowed many downtimes, so I am rather cautious about this. But as far as I understand the issue, the concern with a higher RxIntDelay would be: 70K packets/second is approx. 70 packets/millisecond, which means that if I add a 1 millisecond delay (RxIntDelay = 1024, in units of 1.024 microseconds), I would need roughly 70 RxDescriptors (plus their buffers) to store the packets until the interrupt kicks in and the CPU takes care of the received packets.

Thanks in advance,
Michael
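The back-of-the-envelope arithmetic above can be sketched as a small calculation. This is only an illustrative helper (the function name is made up), assuming RxIntDelay is counted in units of 1.024 microseconds as the Intel driver documentation describes:

```python
# Rough estimate of how many receive descriptors must buffer traffic
# while the RxIntDelay timer holds off the interrupt.
# Assumption: RxIntDelay is in units of 1.024 microseconds (per the
# e1000 driver documentation); descriptors_needed is a hypothetical name.

def descriptors_needed(pps: int, rx_int_delay_units: int) -> int:
    """Packets that accumulate during one interrupt-delay interval."""
    delay_seconds = rx_int_delay_units * 1.024e-6
    return int(pps * delay_seconds) + 1  # round up to be safe

# Peak rate from the post: ~70K packets/second per interface.
for delay in (128, 1024):
    print(delay, descriptors_needed(70_000, delay))
# RxIntDelay=128  -> ~10 descriptors in flight
# RxIntDelay=1024 -> ~74 descriptors, well under the 4096 ring size
```

By this estimate, even the higher delay stays far below the 4096 configured descriptors, so descriptor exhaustion would have to come from somewhere else (e.g. bursts well above the quoted average, or the CPU not draining the ring fast enough).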