Hi,

the bandwidth limitation switch (-b) limits the maximum rate at which the sending party (usually the client) transmits data, provided there is no bottleneck that the sending party can detect. If the test is done over TCP, a bottleneck becomes apparent to the client (the IP stack always blocks transmission while outstanding data has not been delivered yet). If the test is done over UDP, the sending party will, except in some rare cases, simply transmit data at the configured maximum rate.
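For example, assuming there is already an iperf server listening on the other end (10.10.10.1 and port 42000 are just the values used elsewhere in this thread), the two cases would look roughly like this:

iperf -s -p 42000                                # TCP server
iperf -c 10.10.10.1 -p 42000 -t 10               # TCP client: rate adapts to the path
iperf -s -u -p 42000                             # UDP server
iperf -c 10.10.10.1 -u -b 500M -p 42000 -t 10    # UDP client: tries to push ~500 Mbps
                                                 # regardless of what the path can carry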

To verify this, you can run iperf in client mode with a command similar to this:

iperf -c localhost -i 1 -p 42000 -u -b500M -t 10

... make sure that the port used in the command above (42000) is not used by some other application. If you vary the bandwidth setting, you can see that there's a practical maximum speed that even the loopback network device can handle. When experimenting with the command above, I've found a few interesting facts about my particular machine:

I was using iperf 2.0.5, and I found that it behaves similarly on another host (402 Mbps max over loopback, up to 812 Mbps over GigE).

The tests above show that loopback devices (and I would count any virtualised network device as such) have throughput limits of their own.
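If you want to see the receiving side as well, you can run both ends over loopback, roughly like this (42000 is just an arbitrary free port, same as above):

iperf -s -u -p 42000 -i 1 &
iperf -c localhost -u -b 500M -p 42000 -t 10 -i 1
# the server's report shows how much of the 500 Mbps actually made it through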

Peace!
  Mkx

-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

BOFH excuse #299:

The data on your hard drive is out of balance.


On 21/08/14 16:51, Martin T wrote:
Metod,

but shouldn't the Iperf client send out traffic at 500 Mbps, since I had
"-b 500m" specified? In my example it prints unrealistic
bandwidth (~60 Gbps) results.


regards,
Martin

On 8/21/14, Metod Kozelj <metod.kozelj@lugos.si> wrote:
Hi,

On 21/08/14 15:12, Martin T wrote:
if I execute "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" and
10.10.10.1 is behind a firewall so that the Iperf client is not able to
reach it, then I will see the following results printed by the Iperf client:

[ ID]  Interval             Transfer         Bandwidth
[  3]    0.0 -  60.0 sec    422744 MBytes    59104 Mbits/sec
[  3]   60.0 - 120.0 sec    435030 MBytes    60822 Mbits/sec
etc


Why does the Iperf client behave like that? Is this a known bug?

That's not a bug in iperf; it's how UDP works. The main difference between
TCP and UDP is that with TCP the IP stack itself takes care of all the
details (such as in-order delivery, retransmissions, rate adaptation, ...),
while with UDP those are the responsibility of the application. The only
UDP-specific thing the iperf application does is fetch the server-side
(receiving side) report at the end of the transmission. Even this is not
done perfectly: the sending side only waits a short time for the server
report, and if it has filled the network buffers, that waiting time can be
too short.
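In practice that looks roughly like this (again treating 10.10.10.1 and the
port as placeholders):

iperf -s -u -p 42000                            # receiving side: prints loss and jitter
iperf -c 10.10.10.1 -u -b 500m -p 42000 -t 10   # sending side
# at the end the client waits briefly for the receiver's statistics; if they
# arrive you should get a "Server Report:" section, otherwise you only see
# how much data the client handed to its own IP stack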

The same phenomenon can be seen if there's a bottleneck somewhere between
the nodes and you try to push the data rate too high: routers on either
side of the bottleneck will discard packets when their TX buffers fill up.
If TCP were used, this would trigger retransmissions in the IP stack, TCP
slow start would kick in, and the sending application would notice the drop
in throughput. If UDP were used, the IP stack would not react in any way
and the application would keep dumping data at top speed.
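One way to reproduce this locally (assuming a Linux host where you can run
tc as root; eth0 and the 10 Mbit rate are just placeholders) is to create
an artificial bottleneck with a token bucket filter:

tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms
iperf -c 10.10.10.1 -t 10               # TCP backs off to roughly the shaped rate
iperf -c 10.10.10.1 -u -b 100m -t 10    # UDP client still reports ~100 Mbps sent;
                                        # the server report shows the loss
tc qdisc del dev eth0 root              # remove the artificial bottleneck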
--

Peace!
   Mkx

-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

------------------------------------------------------------------------------

BOFH excuse #252:

Our ISP is having {switching,routing,SMDS,frame relay} problems