I just checked (using Wireshark) what UDP traffic looks like when iperf is run against a server that's not listening. All the packets, including the first one, look exactly the same, except for the sequence number, which starts at 0. In other words, there is no special handshake. The 'connected with server port xxx' line is thus only the client's fiction, mimicking the similar message that is printed for TCP tests (where it is real).

After a while (anything between the next packet and a few seconds) the client did receive ICMP type 3 code 3 messages (destination unreachable, port unreachable) from the server. They arrive sporadically and are evidently ignored by iperf; it seems possible to me that the iperf application never even sees them.
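
For what it's worth, the kernel can make those ICMP errors visible to the application: on Linux, a UDP socket that has been connect()ed to the destination gets a later port-unreachable reported as ECONNREFUSED on a subsequent send() or recv(). Below is a minimal sketch of that behaviour (not iperf's code; the address is a placeholder):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                      /* iperf's default port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    /* connect() on a UDP socket sends no packets at all; it only records the
     * peer address, which is why a "connected" line can be printed even when
     * nothing is listening on the other end. */
    connect(fd, (struct sockaddr *)&dst, sizeof dst);

    char buf[1470];
    memset(buf, 0, sizeof buf);
    send(fd, buf, sizeof buf, 0);      /* first datagram goes out regardless */
    sleep(1);                          /* give any ICMP reply time to arrive */

    /* On Linux, an ICMP port-unreachable for a connected UDP socket surfaces
     * as ECONNREFUSED on a later send()/recv(). */
    if (send(fd, buf, sizeof buf, 0) < 0 && errno == ECONNREFUSED)
        fprintf(stderr, "got ICMP port unreachable (ECONNREFUSED)\n");

    close(fd);
    return 0;
}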


On 22/08/14 19:03, Bob (Robert) McMahon wrote:
Do you have the server output?

If the client can't reach the server then the following should not happen:

[  3] local port 55373 connected with port 5001

UDP does use a handshake at the start of traffic.  That's how the ports are determined.  The only type of traffic where a client sends without initial reachability to the server is multicast.

Iperf 2.0.5 has known performance problems and on many machines tops out at ~800 Mbit/s.  This is addressed in iperf2 version 2.0.6 or greater.


My initial guess is that you aren't connecting to what you think you are.  Two reasons:

o If the server is not reachable there should be no connected message
o The throughput is too high

-----Original Message-----
From: Martin T [mailto:m4rtntns@gmail.com] 
Sent: Friday, August 22, 2014 2:04 AM
To: Metod Kozelj; Bob (Robert) McMahon
Cc: iperf-users@lists.sourceforge.net
Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth results if Iperf server is unreachable


please see the full output below:

root@vserver:~# iperf -c -fm -t 600 -i60 -u -b 500m
Client connecting to, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.16 MByte (default)
[  3] local port 55373 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  422744 MBytes   59104 Mbits/sec
[  3] 60.0-120.0 sec  435030 MBytes   60822 Mbits/sec
[  3] 120.0-180.0 sec  402263 MBytes   56240 Mbits/sec
[  3] 180.0-240.0 sec  398167 MBytes   55668 Mbits/sec
[  3] 240.0-300.0 sec  422746 MBytes   59104 Mbits/sec
[  3] 300.0-360.0 sec  381786 MBytes   53378 Mbits/sec
[  3] 360.0-420.0 sec  402263 MBytes   56240 Mbits/sec
[  3] 420.0-480.0 sec  406365 MBytes   56814 Mbits/sec
[  3] 480.0-540.0 sec  438132 MBytes   61395 Mbits/sec
[  3]  0.0-600.0 sec  4108674 MBytes   57443 Mbits/sec
[  3] Sent 6119890 datagrams
read failed: No route to host
[  3] WARNING: did not receive ack of last datagram after 3 tries.

In UDP mode the Iperf client will send the data even though the Iperf
server is not reachable.

Still, to me this looks like a bug. An Iperf client reporting ~60 Gbps of
egress traffic on a virtual machine with a 1GigE vNIC, while the bandwidth
is specified with the -b flag, is IMHO not expected behavior.


On 8/22/14, Metod Kozelj <metod.kozelj@lugos.si> wrote:

the bandwidth limitation switch (-b) limits the maximum rate at which the
sending party (usually the client) will transmit data, provided there is no
bottleneck that the sending party is able to detect. If the test is done using
TCP, the bottleneck will be apparent to the client (the IP stack will always
block if outstanding data has not been delivered yet). If the test is done
using UDP, the sending party will mostly just transmit data at the maximum
rate, except in some rare cases.
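
For illustration, here is a minimal sketch (not iperf's actual implementation) of what such a sender-side limit boils down to with UDP: the sender paces its own sendto() calls against its own clock, since the network gives it nothing to block on. The target rate, datagram size and address below are placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const double target_bps = 500e6;   /* e.g. -b 500m */
    const int    payload    = 1470;    /* iperf's default UDP datagram size */
    const long   gap_ns     = (long)(payload * 8 / target_bps * 1e9);

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    char buf[1470];
    memset(buf, 0, sizeof buf);

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 100000; i++) {
        sendto(fd, buf, payload, 0, (struct sockaddr *)&dst, sizeof dst);

        /* The only "rate limit" is the sender's own schedule: sleep until the
         * next slot.  Nothing the network does can slow this loop down. */
        next.tv_nsec += gap_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec  += 1;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    close(fd);
    return 0;
}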

To verify this behaviour, you can run iperf in client mode with a command similar to:

iperf -c localhost -i 1 -p 42000 -u -b500M -t 10

... make sure that the port used in the command above (42000) is not used by any
other application. If you vary the bandwidth setting, you can see that there is
a practical maximum speed that even the loopback network device can handle. When
experimenting with the command above, I found a few interesting facts on
my particular machine:

  * when targeting a machine on my 100 Mbps LAN, the transmit rate would not go
    beyond 96 Mbps (which is consistent with the fact that 100 Mbps is the wire
    speed while UDP over ethernet carries some overhead)
  * when targeting the loopback device with a "low" bandwidth requirement (such as
    50 Mbps), the transmit rate would be exactly half of the requested one. I don't
    know whether this is some kind of reporting artefact or it actually does
    transmit at half the rate
  * the UDP transmit rate over the loopback device would not go beyond 402 Mbps.

I was using iperf 2.0.5, and I found that it behaves similarly on another
host (402 Mbps max over loopback, up to 812 Mbps over GigE).

The tests above show that loopback devices (and I would count any virtualised
network device as such) are subject to limits of some kind.



On 21/08/14 16:51, Martin T wrote:

but shouldn't the Iperf client send out traffic at 500 Mbps, as I had
"-b 500m" specified? In my example it prints unrealistic bandwidth
results (~60 Gbps).


On 8/21/14, Metod Kozelj <metod.kozelj@lugos.si> wrote:

On 21/08/14 15:12, Martin T wrote:
if I execute "iperf -c -fm -t 600 -i 60 -u -b 500m" and the server is behind a firewall so that the Iperf client is not able to
reach it, then I will see the following results printed by the Iperf client:

[  ID]   Interval                Transfer                   Bandwidth
[   3]   0.0 - 60.0 sec      422744 MBytes       59104 Mbits/sec
[   3]   60.0 - 120.0 sec  435030 MBytes       60822 Mbits/sec

Why does the Iperf client behave like that? Is this a known bug?

That's not a bug in iperf, it's how UDP works. The main difference
between TCP and UDP is that with TCP, the IP stack itself takes care of all the
details (such as in-order delivery, retransmissions, rate adaption, etc.),
while with UDP that is the responsibility of the application. The only extra
thing the iperf application does when using UDP is to fetch the
server's (receiving side's) report at the end of the transmission. Even this
is not done in a perfect way ... the sending side only waits for the server report
for a short time, and if it has filled the network buffers, this waiting time can
be too short.
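
The "WARNING: did not receive ack of last datagram after 3 tries." line in the client output above comes from exactly this step. Roughly, the pattern looks like the sketch below (not iperf's actual code; the timeout and message sizes are made up, and fd stands for the connected UDP socket used for the test):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>

/* fd is assumed to be the connected UDP socket used for the test */
static void fetch_server_report(int fd)
{
    /* wait only briefly for each reply (value is a guess, not iperf's) */
    struct timeval tv = { .tv_sec = 0, .tv_usec = 250000 };
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char fin[128];
    char report[1470];
    memset(fin, 0, sizeof fin);

    for (int attempt = 0; attempt < 3; attempt++) {
        send(fd, fin, sizeof fin, 0);              /* re-announce end of test */
        if (recv(fd, report, sizeof report, 0) > 0) {
            printf("server report received\n");    /* parse/print it here */
            return;
        }
    }
    fprintf(stderr,
            "WARNING: did not receive ack of last datagram after 3 tries.\n");
}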

The same phenomenon can be seen if there's a bottleneck somewhere between the
nodes and you try to push the data rate too high ... routers on either side of the
bottleneck will discard packets when their TX buffers get filled up. If TCP were
used, this would trigger retransmissions in the IP stack, the rate-adaption
mechanisms would kick in, and the sending application would notice a drop in
throughput. If UDP was used, the IP stack would not react in any way and the
application would dump data at top speed.
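
In other words, with UDP the rate a sender measures is just how many bytes it managed to hand to its local stack per unit of time; sendto() returns as soon as the datagram is queued (or discarded) locally. A loop like the sketch below can therefore report a rate far above anything the NICs in the path could carry (placeholder address; not iperf's code):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    char buf[1470];
    memset(buf, 0, sizeof buf);

    long long bytes = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < 1000000; i++) {
        /* sendto() succeeds as soon as the datagram is queued locally; whether
         * it ever reaches (or can reach) the destination is unknown here. */
        if (sendto(fd, buf, sizeof buf, 0,
                   (struct sockaddr *)&dst, sizeof dst) > 0)
            bytes += sizeof buf;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("application-side rate: %.0f Mbit/s\n", bytes * 8 / secs / 1e6);

    close(fd);
    return 0;
}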




-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

BOFH excuse #79:

Look, buddy:  Windows 3.1 IS A General Protection Fault.