I'm new to this mailing list, rather new to using Iperf, and I must warn
you that my background is neither in programming nor networking. I
currently have Iperf installed on Windows XP machines and I'm curious
about the behavior of the tool, but more specifically, the -w switch.
Because of some of the issues I experienced using this Windows port of
Iperf, I plan on using the native version on a Unix-based OS, but I
think my questions will apply then as well. I'm testing over a
high-latency satellite link. I know this is long, so please bear with me.
Any help would be greatly appreciated.
I've read through the archives and online but still can't get a direct
answer to my questions. Here's the most helpful info I've found in the
archives. This is my understanding so far; please correct me if (read:
when) I'm wrong:
(1) The -w switch really isn't the TCP Receive Window as the
documentation says but rather the send/receive socket buffer size, and
is not directly related to TCP Window Size. This goes against what I had
originally assumed from the documentation and pretty much everything
I've read online.
(2) When the -w switch is used on the receiving server (ex: iperf -s -w
<XX>), this should set the receive buffer size to <XX> (SO_RCVBUF). This
is limited by some maximum value in the operating system.
(3) When the -w switch is used on the sending client (ex: iperf -c -w
<XX>), this should set the send buffer size to <XX> (SO_SNDBUF). This is
limited by some maximum value in the operating system.
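If it helps to make (2) and (3) concrete, here's how I picture what -w
does under the hood, as a few lines of Python (this is my sketch of the
socket calls, not iperf's actual code; the exact clamping/doubling
behavior is OS-specific):

```python
# Minimal sketch (NOT iperf source): -w boils down to setsockopt() with
# SO_RCVBUF (server) or SO_SNDBUF (client). What you read back with
# getsockopt() is what the OS actually granted -- Linux, for instance,
# doubles the requested value for its own bookkeeping, and every OS
# clamps the request to a configurable maximum (e.g. net.core.rmem_max
# on Linux).
import socket

requested = 32 * 1024  # what "iperf -s -w 32K" would ask for

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {requested} bytes, OS granted {granted} bytes")
s.close()
```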
Here are my questions/confusion:
(1) Should the values set using the -w switch be shown in a packet
capture such as Wireshark? When I use the -w switch on the server, the
TCP Receive Window seems to remain unchanged and is usually 65535 bytes.
I see the TCP Window change when I use -w values ranging from 32KB to
64KB. Values less than 32KB are not reflected in the packet capture, but
still seem to have an effect on throughput. Not using the -w switch
shows that the buffer size is 8KB, but it too is not reflected in a
packet capture. Using this default value results in low throughput. As
you can see, I am VERY confused!
(2) How does the -w switch behave for multidirectional tests (-r and
-d)? If "iperf -c <IP> -w <XX> -r" is used, does the <XX> become the
Receive Buffer Size (SO_RCVBUF) when it switches to receive mode? Does
the server's value then become the Send Buffer Size (SO_SNDBUF)?
(3) So how does the socket buffer size actually relate to the TCP
Window Size?
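Part of why I'm asking: on a satellite link I gather the
bandwidth-delay product matters, since (as I understand it) a buffer
smaller than bandwidth x RTT caps throughput. The numbers below are
just my guesses for my link, not anything measured:

```python
# Rough bandwidth-delay product (BDP) arithmetic. The figures here
# (128 kbps link, ~600 ms geostationary-satellite RTT) are my own
# assumptions about the link, not iperf output.
link_bps = 128_000  # link bandwidth, bits per second
rtt_s = 0.6         # round-trip time in seconds

bdp_bytes = link_bps * rtt_s / 8
print(f"BDP = {bdp_bytes:.0f} bytes")  # a smaller buffer limits throughput
```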
(4) Should the -w switch be used for UDP tests?
(5) How should I use the -b (--bandwidth) switch for UDP tests? If I
have a link bandwidth that is 128kbps and I know that I'm going to get a
lower "goodput" due to overhead (IP, UDP, IPSec, tunneling), do I set
the bandwidth to -b 128000, a lesser value, or am I completely
misunderstanding the concept?
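To make question (5) concrete, here's the arithmetic I'm imagining. The
1470-byte payload is iperf's default UDP datagram size; the 28 bytes of
header are plain UDP/IPv4 only, so my IPSec/tunnel overhead would have
to be added on top:

```python
# Sketch of scaling -b down so the on-wire rate fits a 128 kbps link.
# Assumes iperf's default 1470-byte UDP payload and bare UDP/IPv4
# headers (8 + 20 = 28 bytes per datagram); IPSec and tunneling
# overhead would shrink the efficiency further.
link_bps = 128_000
payload = 1470
overhead = 8 + 20                # UDP + IPv4 headers, per datagram

efficiency = payload / (payload + overhead)
b_value = link_bps * efficiency  # application-layer rate to request
print(f"-b {b_value:.0f}")      # so the on-wire total stays ~128 kbps
```

Is that the right way to think about it, or is -b meant to be the raw
link rate?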
Many Many Thanks,
Jimmy El Zorkani