From: Jim Y. <jy...@gs...> - 2015-01-09 07:01:56
Hello Bob and Steve,

Interesting discussion. I recently considered the possibility of trying to add some type of TCP "duty cycle" mode. Using the proposed UDP burst mode one might be able to emulate the TCP "duty cycle" mode, but extending iperf to support a TCP burst mode would likely prove useful in exposing microburst issues.

Last year we investigated some nagging port congestion issues seen on several 1G ports that were "only" nominally 15% utilized; these ports were consistently transmitting data at about 150 Mb/sec, 24/7. I could not replicate the port congestion issue with iperf, even when pushing much higher average throughput. Using the technique documented in the following Cisco technote I eventually confirmed the root cause was likely microbursts:

http://www.cisco.com/c/en/us/support/docs/lan-switching/switched-port-analyzer-span/116260-technote-wireshark-00.pdf

Instead of saying this 1G network port was 15% utilized, I tend to say that 15% of the time this 1G network port was 100% utilized. The question: how much buffering can the switch provide before it must start dropping packets if traffic arrives faster than it can be transmitted?

In our particular case the source of the microbursts (and packet congestion) was the occasional convergence of packets from several TCP streams of video surveillance camera traffic onto a single 1G port. Each individual video surveillance TCP stream has a very specific bursty pattern: the DVR sends 30 video frames a second, with each video frame composed of a TCP packet train of about 35 to 45 full-sized Ethernet frames. So for each TCP stream we see a burst of 35 to 45 packets followed by about a 30th of a second of quiet. If just five of these video-frame packet trains arrive at the access-layer switch port buffer at virtually the same instant, we will see the port congestion counter increment.

In our case the switch has dual 10G uplink ports to the building distribution switch, and the several DVR servers that source the TCP streams are all connected at 10G. That means each DVR TCP stream can arrive at the 1G egress port buffer 10 times faster than the port can transmit; with two 10G uplinks to this switch, traffic can arrive 20 times faster than the 1G port can send. We determined that the switch had an egress port buffer of about 250,000 bytes, which means the port could buffer up to about 165 full-sized Ethernet frames before it would be forced to drop packets.

Augmenting iperf to support a TCP burst mode in addition to the proposed UDP burst mode would allow one to simulate these bursty TCP packet trains, which might help in exposing microburst issues.

Best regards,
Jim Y.

From: "Bob McMahon (Robert)" <rmc...@br...>
Date: Thursday, January 8, 2015 5:35 PM
To: Steve Baillargeon <ste...@er...>
Cc: "ipe...@li..." <ipe...@li...>
Subject: Re: [Iperf-users] iperf UDP burst mode

Yes, that's basically what I suggested. It would look something like

iperf -c 10.10.10.10 -b 100M -l 1470/16,32 -u

where 16 is the min burst and 32 is the max. If one really doesn't want speed-ups and doesn't care about converging on -b, then either:

iperf -c 10.10.10.10 -b 100M -l 1470/16 -u

or

iperf -c 10.10.10.10 -b 100M -l 1470/16,16 -u

Though it's worth repeating: this will only control application-level gaps.
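To make the application-level pacing idea concrete, here is a minimal sketch in C (not iperf code; the option semantics, names, and constants are assumptions taken from the example command above). It sends trains of back-to-back UDP datagrams, choosing each train length between the proposed min and max, then sleeps so the long-run average approaches the -b target.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    const char  *dest_ip    = "10.10.10.10"; /* as in the example command */
    const int    dest_port  = 5001;          /* iperf's default port */
    const int    pkt_len    = 1470;          /* -l payload size, bytes */
    const int    burst_min  = 16;            /* proposed minimum train length */
    const int    burst_max  = 32;            /* proposed maximum train length */
    const double target_bps = 100e6;         /* -b 100M */

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(dest_port);
    inet_pton(AF_INET, dest_ip, &dst.sin_addr);

    char payload[1470];
    memset(payload, 0, sizeof payload);
    srand((unsigned)time(NULL));

    for (;;) {
        /* Pick this train's length between the configured min and max. */
        int train = burst_min + rand() % (burst_max - burst_min + 1);

        /* Send the whole train back-to-back (no application-level gap). */
        for (int i = 0; i < train; i++)
            sendto(fd, payload, pkt_len, 0,
                   (struct sockaddr *)&dst, sizeof dst);

        /* Sleep so that (train bytes) / (period) ~= target bandwidth.
         * Note this ignores the time spent in sendto(), so the achieved
         * average lands slightly below target; a more careful version
         * would subtract the elapsed send time from the sleep.  Either
         * way, only the application-level gap is controlled here; the
         * OS, driver, or an intermediate switch may still re-clump the
         * packets. */
        double period_s = (double)train * pkt_len * 8.0 / target_bps;
        usleep((useconds_t)(period_s * 1e6));
    }
}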
If something below the application (OS/driver) or an intermediate device (router/switch) clumps the packets before they reach the iperf server, there would be no way of knowing, at least not from the iperf client or server. This would be a best-try type of solution (caveat emptor). Note: I believe a tool like an IXIA chassis can provide guarantees.

Bob

From: Steve Baillargeon [mailto:ste...@er...]
Sent: Thursday, January 08, 2015 2:18 PM
To: Bob (Robert) McMahon
Cc: ipe...@li...
Subject: RE: [Iperf-users] iperf UDP burst mode

Hi Bob

I really think a UDP burst mode with some well-understood expectations and restrictions will be useful for testing network bandwidth and buffering capacity. What is the next step to hopefully support it?

Regarding burst vs. bandwidth configuration at the client: what if the user only needs to configure the bandwidth (below the client line rate), the burst size, and the packet size, with some restrictions on the possible values? The client would then estimate the gap needed to satisfy the bandwidth for a given train size (train is probably a better term than burst) and packet size. Is that what you suggested?

Regards
Steve

<snip>
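As a rough sketch of the gap estimation Steve describes (illustrative names only, not an actual iperf API): the quiet period per train falls out of the target bandwidth, train size, and packet size. Plugging in numbers close to Jim's DVR streams reproduces the "burst, then about a 30th of a second of quiet" pattern he observed.

#include <stdio.h>

/* Returns the inter-train gap in seconds.  line_rate_bps is the speed at
 * which the sender can emit the back-to-back train (e.g. 10e9 for a
 * 10G-connected DVR in Jim's example). */
static double train_gap_seconds(double target_bps, int train_pkts,
                                int pkt_bytes, double line_rate_bps)
{
    double train_bits = (double)train_pkts * pkt_bytes * 8.0;
    double period_s   = train_bits / target_bps;     /* one train per period */
    double emit_s     = train_bits / line_rate_bps;  /* time to send the train */
    return period_s - emit_s;
}

int main(void)
{
    /* Roughly Jim's per-stream numbers: 40 packets of 1500 bytes, 30
     * trains per second (~14.4 Mb/s average), emitted from a 10G host. */
    double gap = train_gap_seconds(14.4e6, 40, 1500, 10e9);
    printf("gap between trains: %.1f ms\n", gap * 1e3);  /* prints ~33.3 ms */
    return 0;
}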