From: Martin T <m4r...@gm...> - 2014-08-21 13:12:13
|
Hi,

if I execute "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" and 10.10.10.1 is behind a firewall so that the iperf client is not able to reach it, then I see the following results printed by the iperf client:

[ ID] Interval        Transfer       Bandwidth
[  3]  0.0- 60.0 sec  422744 MBytes  59104 Mbits/sec
[  3] 60.0-120.0 sec  435030 MBytes  60822 Mbits/sec
etc

Why does the iperf client behave like that? Is this a known bug?

thanks,
Martin |
From: Metod K. <met...@lu...> - 2014-08-21 14:23:54
|
Hi,

Martin T wrote on 21/08/14 15:12:
> Why does Iperf client behave like that? Is this a know bug?

That's not a bug in iperf; it's how UDP works. The main difference between TCP and UDP is that with TCP the IP stack itself takes care of all the details (such as in-order delivery, retransmissions, rate adaptation, ...), while with UDP that is the responsibility of the application. The only extra thing the iperf application does when using UDP is fetch the server's (receiving side's) report at the end of the transmission. Even this is not done perfectly: the sending side waits for the server report only for a short time, and if it has filled the network buffers, this waiting time can be too short.

The same phenomenon can be seen if there's a bottleneck somewhere between the nodes and you try to push the data rate too high: the routers on either side of the bottleneck will discard packets when their TX buffers fill up. With TCP, this would trigger retransmissions in the IP stack, TCP slow start would kick in, and the sending application would notice the drop in throughput. With UDP, the IP stack does not react in any way and the application dumps data at top speed.

--

Peace!
  Mkx

-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

------------------------------------------------------------------------------
BOFH excuse #252:
Our ISP is having {switching,routing,SMDS,frame relay} problems |
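[The asymmetry described above (the IP stack detects a missing peer for TCP but not for UDP) can be demonstrated in a few lines of Python. This is a minimal sketch; the free-port probing and the 1470-byte payload are only illustrative choices for the demo.]

```python
import socket

# Ask the kernel for a free loopback port, so we know nothing listens on it.
# (The port-probing trick is only for the demo, not something iperf does.)
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# UDP: there is no connection setup, so the kernel queues the datagram and
# sendto() reports full success even though no receiver exists.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"x" * 1470, ("127.0.0.1", port))  # 1470 = iperf's default datagram size
udp.close()

# TCP: connect() performs a handshake, so the missing listener is detected
# immediately (connection refused on loopback).
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", port))
    tcp_refused = False
except ConnectionRefusedError:
    tcp_refused = True
finally:
    tcp.close()
```

The UDP send "succeeds" (returns 1470) while the TCP connect fails at once, which is exactly why an iperf UDP client keeps transmitting against an unreachable server.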
From: Martin T <m4r...@gm...> - 2014-08-21 14:51:42
|
Metod,

but shouldn't the iperf client send out traffic at 500 Mbps, as I had "-b 500m" specified? In my example it prints unrealistic bandwidth (~60 Gbps) results.

regards,
Martin

On 8/21/14, Metod Kozelj <met...@lu...> wrote:
> That's not a bug in iperf, it's how UDP is working. The main difference
> between TCP and UDP is that with TCP, IP stack itself takes care of all
> the details (such as in-order delivery, retransmissions, rate adaption,
> ...), while with UDP stack that's responsibility of application. [...] |
From: Bob (R. M.) <rmc...@br...> - 2014-08-21 19:25:54
|
Can you show the full transaction for both the client and the server (including the command, connects, etc.)? If there is a firewall such that the client can't reach the server, it should fail on connect (for both UDP and TCP).

Also, there is an iperf2 which has some fixes with respect to reporting (Linux testing only):

https://sourceforge.net/projects/iperf2/?source=directory

Bob

-----Original Message-----
From: Martin T [mailto:m4r...@gm...]
Sent: Thursday, August 21, 2014 7:52 AM
To: Metod Kozelj
Cc: ipe...@li...
Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth results if Iperf server is unreachable

> but shouldn't the Iperf client send out traffic at 500Mbps as I had
> "-b 500m" specified? In my example is prints unrealistic
> bandwidth(~60Gbps) results. [...] |
From: Metod K. <met...@lu...> - 2014-08-22 05:42:03
|
Hi,

the bandwidth limitation switch (-b) limits the maximum rate at which the sending party (usually the client) will transmit data, if there's no bottleneck that the sending party is able to detect. If the test is done using TCP, a bottleneck will be apparent to the client (the IP stack will always block transmission if outstanding data has not been delivered yet). If the test is done using UDP, the sending party will mostly just transmit data at maximum rate, except in some rare cases.

To verify this, you can run iperf in client mode with a command similar to this:

iperf -c localhost -i 1 -p 42000 -u -b500M -t 10

... make sure that the port used in the command above (42000) is not used by some other application. If you vary the bandwidth setting, you can see that there's a practical maximum speed that even the loopback network device can handle. When experimenting with the command above, I found a few interesting facts about my particular machine:

* when targeting a machine on my 100 Mbps LAN, the transmit rate would not go beyond 96 Mbps (which is consistent with the fact that 100 Mbps is wire speed, while UDP over Ethernet faces some overhead)
* when targeting the loopback device with a "low" bandwidth requirement (such as 50 Mbps), the transmit rate would be exactly half of the requested rate. I don't know if this is some kind of reporting artefact or whether it actually does transmit at half the rate
* the UDP transmit rate over the loopback device would not go beyond 402 Mbps

I was using iperf 2.0.5, and I found that it behaves similarly on another host (402 Mbps max over loopback, up to 812 Mbps over GigE).

The tests above show that loopback devices (and I would count any virtualised network device as such) experience some kind of limits.

Peace!
  Mkx

------------------------------------------------------------------------------
BOFH excuse #299:
The data on your hard drive is out of balance. |
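[Metod's loopback experiment can be approximated with a short Python sketch that simply blasts datagrams at a throwaway local receiver and reports the achieved rate. The 0.2 s duration and 1470-byte payload are arbitrary demo values, and the resulting number will of course vary by machine.]

```python
import socket
import time

def loopback_udp_rate(duration=0.2, payload=1470):
    """Blast UDP datagrams at a throwaway loopback receiver and return the
    achieved send rate in Mbit/s. A rough sketch of the experiment above,
    not a replacement for iperf."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # a real receiver, so no ICMP errors
    dest = rx.getsockname()
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = b"\x00" * payload
    sent_bytes = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        sent_bytes += tx.sendto(data, dest)   # always "succeeds" for UDP
    elapsed = time.monotonic() - start
    tx.close()
    rx.close()
    return sent_bytes * 8 / elapsed / 1e6

rate_mbps = loopback_udp_rate()
```

Whatever limit this reports is a property of the host's loopback path (syscall and copy overhead), which is consistent with Metod's observation that even the loopback device tops out well below the requested -b value.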
From: Bob (R. M.) <rmc...@br...> - 2014-08-22 17:03:52
|
Do you have the server output?

If the client can't reach the server then the following should not happen:

[  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001

UDP does use a handshake at the start of traffic; that's how the ports are determined. The only type of traffic where a client sends without initial reachability to the server is multicast.

Iperf 2.0.5 has known performance problems and on many machines tops out at ~800 Mbit/s. This is addressed in iperf2 version 2.0.6 or greater:

http://sourceforge.net/projects/iperf2/?source=directory

My initial guess is that you aren't connecting to what you think you are, for two reasons:

o If the server is not reachable there should be no "connected" message
o The throughput is too high

Bob

-----Original Message-----
From: Martin T [mailto:m4r...@gm...]
Sent: Friday, August 22, 2014 2:04 AM
To: Metod Kozelj; Bob (Robert) McMahon
Cc: ipe...@li...
Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth results if Iperf server is unreachable

> In case of UDP mode the Iperf client will send the data despite the
> fact that the Iperf server is not reachable. [...] |
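[For UDP, iperf's "connected with ..." line does not actually prove reachability: connect() on a datagram socket exchanges no packets at all. It only records a default destination and, on Linux, arranges for later ICMP errors to be reported back on the socket, which is where the client's eventual "read failed: No route to host" comes from. A hedged Python sketch, assuming Linux error-reporting semantics:]

```python
import socket

# Find a loopback port with no listener behind it.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# connect() on a datagram socket sends nothing on the wire: it records the
# default destination and (on Linux) lets later ICMP errors be delivered on
# the socket. So the call succeeds even though nothing is listening.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(1.0)
s.connect(("127.0.0.1", port))

got_refused = False
for _ in range(5):
    try:
        s.send(b"ping")     # triggers an ICMP port-unreachable reply...
        s.recv(1)           # ...which surfaces here as a socket error
    except ConnectionRefusedError:
        got_refused = True  # Linux reports the ICMP error as ECONNREFUSED
        break
    except socket.timeout:
        break               # no ICMP delivered; give up rather than hang
s.close()
```

The connect() succeeds immediately, and the failure only appears on a later send/receive, matching the behaviour Martin saw.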
From: Martin T <m4r...@gm...> - 2014-08-22 09:03:41
|
Hi,

please see the full output below:

root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
------------------------------------------------------------
Client connecting to 10.10.10.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.16 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001
[ ID] Interval         Transfer        Bandwidth
[  3]   0.0- 60.0 sec   422744 MBytes  59104 Mbits/sec
[  3]  60.0-120.0 sec   435030 MBytes  60822 Mbits/sec
[  3] 120.0-180.0 sec   402263 MBytes  56240 Mbits/sec
[  3] 180.0-240.0 sec   398167 MBytes  55668 Mbits/sec
[  3] 240.0-300.0 sec   422746 MBytes  59104 Mbits/sec
[  3] 300.0-360.0 sec   381786 MBytes  53378 Mbits/sec
[  3] 360.0-420.0 sec   402263 MBytes  56240 Mbits/sec
[  3] 420.0-480.0 sec   406365 MBytes  56814 Mbits/sec
[  3] 480.0-540.0 sec   438132 MBytes  61395 Mbits/sec
[  3]   0.0-600.0 sec  4108674 MBytes  57443 Mbits/sec
[  3] Sent 6119890 datagrams
read failed: No route to host
[  3] WARNING: did not receive ack of last datagram after 3 tries.
root@vserver:~#

In UDP mode the iperf client will send the data despite the fact that the iperf server is not reachable.

Still, to me this looks like a bug. An iperf client reporting ~60 Gbps of egress traffic on a virtual machine with a 1 GigE vNIC, while the bandwidth was limited with the -b flag, is IMHO not expected behavior.

regards,
Martin

On 8/22/14, Metod Kozelj <met...@lu...> wrote:
> the bandwidth limitation switch (-b) limits the maximum rate with which
> sending party (that's usually client) will transmit data if there's no
> bottleneck that sending party is able to detect. [...] |
From: Martin T <m4r...@gm...> - 2014-08-22 09:50:02
|
Hi,

> The report shall report that the receiver did not see the generated packets.

The iperf client reports this by saying "WARNING: did not receive ack of last datagram after 3 tries". What I find weird are the unrealistic sent-traffic results printed by the iperf client. If I execute "iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m" and 10.10.10.1 is firewalled/non-reachable, then I expect output like this:

root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
------------------------------------------------------------
Client connecting to 10.10.10.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.22 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 38755 connected with 10.10.10.1 port 5001
[ ID] Interval         Transfer     Bandwidth
[  3]   0.0- 60.0 sec  3613 MBytes  505 Mbits/sec
[  3]  60.0-120.0 sec  3620 MBytes  506 Mbits/sec
[  3] 120.0-180.0 sec  3618 MBytes  506 Mbits/sec
etc

In other words, the iperf client should send traffic despite the fact that 10.10.10.1 is unreachable, because UDP is connectionless, and the amount of bandwidth sent should be ~500 Mbps because that was specified with the "-b" flag.

regards,
Martin

On 8/22/14, Sandro Bureca <sb...@gm...> wrote:
> Hi all,
> since UDP is connectionless by its nature, you may want to flood the
> network with iperf even with no correspondent receiver on the far end.
> The report shall report that the receiver did not see the generated
> packets.
> Sandro |
From: Metod K. <met...@lu...> - 2014-08-25 06:44:51
|
Hi!

I just checked (using wireshark) what UDP looks like when run against a server
that's not listening. All the packets, including the first one, look exactly
the same, except for the sequence number, which starts at 0; there is no
special handshake. The 'connected with server port xxx' line is thus only the
client's fiction, mimicking the similar message printed for TCP testing (where
it is real).

After a while (anything between "the next packet" and a few seconds) the
client did receive an ICMP message type 3 code 3 (destination unreachable,
port unreachable) from the server. It happens sporadically and is obviously
ignored by iperf; it seems possible that the iperf application is not even
aware of these messages.

BR,
Metod


Bob (Robert) McMahon je dne 22/08/14 19:03 napisal-a:
> Do you have the server output?
>
> If the client can't reach the server then the following should not happen:
>
> [  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001
>
> UDP does use a handshake at the start of traffic. That's how the ports are
> determined. The only type of traffic where a client sends without initial
> reachability to the server is multicast.
>
> Iperf 2.0.5 has known performance problems and on many machines tops out at
> ~800 Mbits/s. This is addressed in iperf2's version 2.0.6 or greater.
>
> http://sourceforge.net/projects/iperf2/?source=directory
>
> My initial guess is that you aren't connecting to what you think you are.
> Two reasons:
>
> o If the server is not reachable there should be no connected message
> o The throughput is too high
>
> Bob
> -----Original Message-----
> From: Martin T [mailto:m4r...@gm...]
> Sent: Friday, August 22, 2014 2:04 AM
> To: Metod Kozelj; Bob (Robert) McMahon
> Cc: ipe...@li...
> Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth results if Iperf server is unreachable
>
> Hi,
>
> please see the full output below:
>
> root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
> ------------------------------------------------------------
> Client connecting to 10.10.10.1, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size: 0.16 MByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001
> [ ID] Interval        Transfer       Bandwidth
> [  3]   0.0- 60.0 sec  422744 MBytes  59104 Mbits/sec
> [  3]  60.0-120.0 sec  435030 MBytes  60822 Mbits/sec
> [  3] 120.0-180.0 sec  402263 MBytes  56240 Mbits/sec
> [  3] 180.0-240.0 sec  398167 MBytes  55668 Mbits/sec
> [  3] 240.0-300.0 sec  422746 MBytes  59104 Mbits/sec
> [  3] 300.0-360.0 sec  381786 MBytes  53378 Mbits/sec
> [  3] 360.0-420.0 sec  402263 MBytes  56240 Mbits/sec
> [  3] 420.0-480.0 sec  406365 MBytes  56814 Mbits/sec
> [  3] 480.0-540.0 sec  438132 MBytes  61395 Mbits/sec
> [  3]   0.0-600.0 sec  4108674 MBytes 57443 Mbits/sec
> [  3] Sent 6119890 datagrams
> read failed: No route to host
> [  3] WARNING: did not receive ack of last datagram after 3 tries.
> root@vserver:~#
>
> In UDP mode the Iperf client will send data even though the Iperf server
> is not reachable.
>
> Still, to me this looks like a bug. An Iperf client reporting ~60Gbps of
> egress traffic on a virtual machine with a 1GigE vNIC, while the bandwidth
> is capped with the -b flag, is IMHO not expected behavior.
>
> regards,
> Martin
>
> On 8/22/14, Metod Kozelj <met...@lu...> wrote:
>> Hi,
>>
>> the bandwidth limitation switch (-b) limits the maximum rate at which the
>> sending party (usually the client) will transmit data if there's no
>> bottleneck that the sending party is able to detect.
>> If the test is done using TCP, a bottleneck will be apparent to the client
>> (the IP stack will always block transmission if outstanding data is not
>> delivered yet). If the test is done using UDP, the sending party will
>> mostly just transmit data at the maximum rate, except in some rare cases.
>>
>> To verify this, you can run iperf in client mode with a command similar
>> to this:
>>
>> iperf -c localhost -i 1 -p 42000 -u -b500M -t 10
>>
>> ... make sure that the port used in the command above (42000) is not used
>> by some other application. If you vary the bandwidth setting, you can see
>> that there's a practical maximum speed that even the loopback network
>> device can handle. When experimenting with the command above, I've found
>> a few interesting facts about my particular machine:
>>
>>   * when targeting a machine on my 100Mbps LAN, the transmit rate would
>>     not go beyond 96Mbps (which is consistent with the fact that 100Mbps
>>     is wire speed while UDP over ethernet faces some overhead)
>>   * when targeting the loopback device with a "low" bandwidth requirement
>>     (such as 50Mbps), the transmit rate would be exactly half of that
>>     requested. I don't know if this is some kind of reporting artefact or
>>     it actually does transmit at half the rate
>>   * the UDP transmit rate over the loopback device would not go beyond
>>     402Mbps.
>>
>> I was using iperf 2.0.5. And I found out that it behaves similarly on
>> another host (402 Mbps max over loopback, up to 812 Mbps over GigE).
>>
>> The tests above show that loopback devices (and I would count any
>> virtualised network devices as such) experience some kind of limits.
>>
>> Peace!
>> Mkx
>>
>> -- perl -e 'print
>> $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
>> -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
>>
>> ------------------------------------------------------------------------------
>>
>> BOFH excuse #299:
>>
>> The data on your hard drive is out of balance.
>>
>> Martin T je dne 21/08/14 16:51 napisal-a:
>>> Metod,
>>>
>>> but shouldn't the Iperf client send out traffic at 500Mbps, as I had
>>> "-b 500m" specified? In my example it prints unrealistic
>>> bandwidth (~60Gbps) results.
>>>
>>> regards,
>>> Martin
>>>
>>> On 8/21/14, Metod Kozelj <met...@lu...> wrote:
>>>> Hi,
>>>>
>>>> Martin T je dne 21/08/14 15:12 napisal-a:
>>>>> if I execute "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" and
>>>>> 10.10.10.1 is behind the firewall so that the Iperf client is not able
>>>>> to reach it, then I will see the following results printed by the
>>>>> Iperf client:
>>>>>
>>>>> [ ID] Interval         Transfer       Bandwidth
>>>>> [  3]  0.0 - 60.0 sec  422744 MBytes  59104 Mbits/sec
>>>>> [  3] 60.0 - 120.0 sec 435030 MBytes  60822 Mbits/sec
>>>>> etc
>>>>>
>>>>> Why does the Iperf client behave like that? Is this a known bug?
>>>> That's not a bug in iperf, it's how UDP works. The main difference
>>>> between TCP and UDP is that with TCP, the IP stack itself takes care of
>>>> all the details (such as in-order delivery, retransmissions, rate
>>>> adaptation, ...), while with UDP that's the responsibility of the
>>>> application. The only such functionality that the iperf application
>>>> performs when using UDP is to fetch the server (receiving side) report
>>>> at the end of the transmission. Even this function is not performed in
>>>> a perfect way ... the sending side waits only a short time for the
>>>> server report, and if it has filled the network buffers, this waiting
>>>> time can be too short.
>>>>
>>>> The same phenomenon can be seen if there's a bottleneck somewhere
>>>> between the nodes and you try to push the data rate too high ... routers
>>>> at either side of the bottleneck will discard packets when their TX
>>>> buffers get filled up. If TCP was used, this would trigger
>>>> retransmission in the IP stack, TCP slow-start would kick in, and the
>>>> sending application would notice the drop in throughput.
>>>> If UDP was used, the IP stack would not react in any way and the
>>>> application would dump data at top speed.
>>>> --
>>>>
>>>> Peace!
>>>> Mkx
>>>>
>>>> -- perl -e 'print
>>>> $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
>>>> -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
>>>>
>>>> ------------------------------------------------------------------------------
>>>>
>>>> BOFH excuse #252:
>>>>
>>>> Our ISP is having {switching,routing,SMDS,frame relay} problems

--

Peace!
Mkx

-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

------------------------------------------------------------------------------

BOFH excuse #79:

Look, buddy: Windows 3.1 IS A General Protection Fault.
|
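Metod's wireshark observation above can be reproduced without iperf. On Linux, a UDP socket that has been connect()ed reports an incoming ICMP port-unreachable as an error on a later send/recv, while plain sendto() on an unconnected socket keeps "succeeding", so whether a sender notices the ICMP at all depends on how it uses the socket. A minimal sketch (an illustration of the socket behaviour, not of iperf's code; the dead port is picked locally):

```python
import socket

# Find a local UDP port with no listener: bind to an ephemeral port,
# note its number, then close the socket again.
tmp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tmp.bind(("127.0.0.1", 0))
dead_port = tmp.getsockname()[1]
tmp.close()

# Unconnected socket: sendto() keeps succeeding even though every
# datagram elicits an ICMP port-unreachable; the error is dropped.
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sum(u.sendto(b"x" * 100, ("127.0.0.1", dead_port)) for _ in range(5))
u.close()

# Connected socket: the queued ICMP error surfaces as ECONNREFUSED
# on a subsequent send or recv (timing dependent, so retry a few times).
c = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c.connect(("127.0.0.1", dead_port))
c.settimeout(0.2)
err = None
for _ in range(5):
    try:
        c.send(b"x" * 100)
        c.recv(1500)
    except ConnectionRefusedError as e:
        err = e
        break
    except socket.timeout:
        pass
c.close()

print("unconnected socket: sent", sent, "bytes, no error reported")
print("connected socket:  ", err)
```

An iperf-like sender using an unconnected socket would thus never see the ICMP messages at all, which is consistent with them being "ignored".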
From: Bob (R. M. <rmc...@br...> - 2014-08-26 00:37:22
|
Oops, the only "handshake" is the ARP, and there is no negotiation of the
port number. The "connected" report comes from the server, which prints the
ports taken from the client's packet (assuming it's listening on the client's
destination port).

Can you issue iperf -s -I 0.5 -u and show the results from the server?

Bob
|
From: Martin T <m4r...@gm...> - 2014-08-28 09:47:43
|
Metod,

this "After a while (anything between "next packet" and few seconds) client
did receive ICMP message type 3 code 3 (meaning destination not reachable,
port not reachable) from server. It happens sporadically and is obviously
ignored by iperf. It seems possible to me that iperf application is not aware
of these messages though." is a very nice description. I observed exactly the
same behavior with Iperf client 2.0.5 in UDP mode.

Bob, option -I is not valid for server mode.

Last but not least, I received confirmation from Roberto Lumbreras, the Iperf
package maintainer for Debian, that Iperf client statistics in UDP mode are
sometimes really broken, and as upstream is not very active, there will
likely be no fix. Still, it would be interesting to know what exactly causes
the Iperf client statistics to break.

regards,
Martin
|
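As to Martin's closing question, one plausible mechanism for the broken client statistics (an assumption on my part, not something confirmed from the iperf 2.0.5 source) is a send loop that fails fast on an unreachable destination but still counts each attempted datagram: nothing blocks, so the loop spins at CPU speed and the byte counter grows far beyond the -b pacing. A deliberately simplified illustration of that accounting error:

```python
import time

DATAGRAM = 1470  # iperf's default UDP payload size in bytes

def failing_send(buf):
    """Stand-in for a send() that fails immediately, e.g. with
    'No route to host', instead of blocking until the NIC drains."""
    raise OSError("No route to host")

def measured_rate(seconds, send, count_failed_attempts):
    """Report throughput over `seconds`; optionally (and wrongly)
    count datagrams whose send failed."""
    sent_bytes = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        try:
            send(b"\x00" * DATAGRAM)
            sent_bytes += DATAGRAM
        except OSError:
            if count_failed_attempts:   # the hypothesised accounting bug
                sent_bytes += DATAGRAM
    return sent_bytes * 8 / seconds     # bits per second

buggy   = measured_rate(0.2, failing_send, count_failed_attempts=True)
correct = measured_rate(0.2, failing_send, count_failed_attempts=False)
print(f"buggy: {buggy / 1e6:.0f} Mbit/s, correct: {correct / 1e6:.0f} Mbit/s")
```

With failed sends counted, the "rate" is limited only by how fast the loop iterates, which would explain tens of Gbit/s being reported on a 1GigE vNIC.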
From: Bob (R. M. <rmc...@br...> - 2014-08-28 16:43:48
|
Hi Martin, I want to see server reports (-i), e.g. iperf -s -i 0.5 -p 60002 Bob -----Original Message----- From: Martin T [mailto:m4r...@gm...] Sent: Thursday, August 28, 2014 2:48 AM To: Bob (Robert) McMahon; met...@lu... Cc: ipe...@li... Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth results if Iperf server is unreachable Metod, this "After a while (anything between "next packet" and few seconds) client did receive ICMP message type 3 code 3 (meaning destination not reachable, port not reachable) from server. It happens sporadically and is obviously ignored by iperf. It seems possible to me that iperf application is not aware of these messages though." is a very nice description. I observed exactly the same behavior with Iperf client 2.0.5 in UDP mode. Bob, option -I is not valid for server mode. Last but not least, I received a confirmation from Roberto Lumbreras, who is an Iperf packet maintainer under Debian, that Iperf client statistics in UDP mode are sometimes really broken and as upstream is not very active, there likely be no fix. Still, it would be interesting to know what exactly causes Iperf client statistics to break.. regards, Martin On 8/26/14, Bob (Robert) McMahon <rmc...@br...> wrote: > Oops, the only “handshake” is the ARP and there is no negotiation of the > port number. The connected is from the server which prints out the ports > from the client’s packet (assuming it’s listening on the client’s dest > port.) > > Can you issue a iperf –s –I 0.5 -u and show the results from the server? > > Bob > From: Metod Kozelj [mailto:met...@lu...] > Sent: Sunday, August 24, 2014 11:45 PM > To: Bob (Robert) McMahon; Martin T > Cc: ipe...@li... > Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic bandwidth > results if Iperf server is unreachable > > Hi! > > I just checked (using wireshark) how UDP looks like when run against server > that's not listening. 
All the packets, including the first one, look exactly > the same, except for the sequence number, which starts at 0. Meaning that > there's no special handshake. The 'connected with server port xxx' is thus > only client's fiction mimicking similar message which is there for TCP > testing (and it's real then). > > After a while (anything between "next packet" and few seconds) client did > receive ICMP message type 3 code 3 (meaning destination not reachable, port > not reachable) from server. It happens sporadically and is obviously ignored > by iperf. It seems possible to me that iperf application is not aware of > these messages though. > > BR, > Metod > > > Bob (Robert) McMahon je dne 22/08/14 19:03 napisal-a: > > Do you have the server output? > > > > If the client can't reach the server then the following should not happen: > > > > [ 3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001 > > > > UDP does use a handshake at the start of traffic. That's how the ports are > determined. The only type of traffic where a client sends without initial > reachability to the server is multicast. > > > > Iperf 2.0.5 has known performance problems and on many machines tops out at > ~800Mbs. This is addressed in iperf2's version 2.0.6 or greater. > > > > http://sourceforge.net/projects/iperf2/?source=directory > > > > My initial guess is that you aren't connecting to what you think you are. > Two reasons > > > > o If the server is not reachable there should be no connected message > > o The thruput is too high > > > > Bob > > -----Original Message----- > > From: Martin T [mailto:m4r...@gm...] 
> > Sent: Friday, August 22, 2014 2:04 AM
> > To: Metod Kozelj; Bob (Robert) McMahon
> > Cc: ipe...@li...
> > Subject: Re: [Iperf-users] Iperf client 2.0.5 shows unrealistic
> > bandwidth results if Iperf server is unreachable
> >
> > Hi,
> >
> > please see the full output below:
> >
> > root@vserver:~# iperf -c 10.10.10.1 -fm -t 600 -i60 -u -b 500m
> > ------------------------------------------------------------
> > Client connecting to 10.10.10.1, UDP port 5001
> > Sending 1470 byte datagrams
> > UDP buffer size: 0.16 MByte (default)
> > ------------------------------------------------------------
> > [  3] local 192.168.1.2 port 55373 connected with 10.10.10.1 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]   0.0- 60.0 sec  422744 MBytes  59104 Mbits/sec
> > [  3]  60.0-120.0 sec  435030 MBytes  60822 Mbits/sec
> > [  3] 120.0-180.0 sec  402263 MBytes  56240 Mbits/sec
> > [  3] 180.0-240.0 sec  398167 MBytes  55668 Mbits/sec
> > [  3] 240.0-300.0 sec  422746 MBytes  59104 Mbits/sec
> > [  3] 300.0-360.0 sec  381786 MBytes  53378 Mbits/sec
> > [  3] 360.0-420.0 sec  402263 MBytes  56240 Mbits/sec
> > [  3] 420.0-480.0 sec  406365 MBytes  56814 Mbits/sec
> > [  3] 480.0-540.0 sec  438132 MBytes  61395 Mbits/sec
> > [  3]   0.0-600.0 sec  4108674 MBytes  57443 Mbits/sec
> > [  3] Sent 6119890 datagrams
> > read failed: No route to host
> > [  3] WARNING: did not receive ack of last datagram after 3 tries.
> > root@vserver:~#
> >
> >
> > In UDP mode the Iperf client will send the data despite the fact that
> > the Iperf server is not reachable.
> >
> > Still, to me this looks like a bug. An Iperf client reporting ~60Gbps
> > egress traffic on a virtual machine with a 1GigE vNIC, while a
> > bandwidth limit was specified with the -b flag, is IMHO not expected
> > behavior.
> >
> >
> > regards,
> > Martin
> >
> >
> > On 8/22/14, Metod Kozelj <met...@lu...> wrote:
> > Hi,
> >
> > the bandwidth limitation switch (-b) limits the maximum rate at which
> > the sending party (usually the client) will transmit data if there is
> > no bottleneck that the sending party is able to detect. If the test is
> > done using TCP, a bottleneck will be apparent to the client (the IP
> > stack will always block transmission if outstanding data has not been
> > delivered yet). If the test is done using UDP, the sending party will
> > mostly just transmit data at the maximum rate, except in some rare
> > cases.
> >
> > To verify this, you can run iperf in client mode with a command
> > similar to this:
> >
> > iperf -c localhost -i 1 -p 42000 -u -b500M -t 10
> >
> > ... make sure that the port used in the command above (42000) is not
> > used by some other application. If you vary the bandwidth setting, you
> > can see that there's a practical maximum speed that even the loopback
> > network device can handle. When experimenting with the command above,
> > I've found a few interesting facts about my particular machine:
> >
> > * when targeting a machine on my 100Mbps LAN, the transmit rate would
> >   not go beyond 96Mbps (which is consistent with the fact that 100Mbps
> >   is wire speed while UDP over ethernet faces some overhead)
> > * when targeting the loopback device with a "low" bandwidth
> >   requirement (such as 50Mbps), the transmit rate would be exactly
> >   half of the requested rate. I don't know if this is some kind of
> >   reporting artefact or it actually does transmit at half the rate
> > * the UDP transmit rate over the loopback device would not go beyond
> >   402Mbps.
> >
> > I was using iperf 2.0.5. And I found out that it behaves similarly on
> > another host (402 Mbps max over loopback, up to 812 Mbps over GigE).
> >
> > Tests above show that loopback devices (and I would count any
> > virtualised network devices as such) experience some kind of limits.
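[The loopback observations quoted above are easy to reproduce outside iperf. Below is a rough sketch, not iperf code: a tight sendto() loop pushing 1470-byte datagrams at 127.0.0.1. The port number 42000 mirrors the one used in the quoted command but is otherwise arbitrary, and nothing needs to be listening, since an unconnected UDP socket ignores ICMP port-unreachable replies.]

```python
import socket
import time

def loopback_udp_rate(seconds=2.0, payload_size=1470, port=42000):
    """Blast UDP datagrams at loopback for `seconds` and return the
    achieved send rate in Mbits/sec. No listener is required."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_size
    sent = 0
    start = time.monotonic()
    deadline = start + seconds
    while time.monotonic() < deadline:
        sock.sendto(payload, ("127.0.0.1", port))
        sent += 1
    elapsed = time.monotonic() - start
    sock.close()
    return sent * payload_size * 8 / elapsed / 1e6

if __name__ == "__main__":
    print("loopback UDP send rate: %.0f Mbits/sec" % loopback_udp_rate())
```

[The absolute number is machine-dependent, which is consistent with the per-host ceilings (402 Mbps vs. 812 Mbps) reported in the message above.]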
> >
> > Peace!
> >   Mkx
> >
> > -- perl -e 'print
> > $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
> > -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
> >
> > ------------------------------------------------------------------------------
> >
> > BOFH excuse #299:
> >
> > The data on your hard drive is out of balance.
> >
> >
> >
> > Martin T je dne 21/08/14 16:51 napisal-a:
> > Metod,
> >
> > but shouldn't the Iperf client send out traffic at 500Mbps, as I had
> > "-b 500m" specified? In my example it prints unrealistic bandwidth
> > (~60Gbps) results.
> >
> >
> > regards,
> > Martin
> >
> > On 8/21/14, Metod Kozelj <met...@lu...> wrote:
> > Hi,
> >
> > Martin T je dne 21/08/14 15:12 napisal-a:
> > if I execute "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" and
> > 10.10.10.1 is behind the firewall so that the Iperf client is not able
> > to reach it, then I will see the following results printed by the
> > Iperf client:
> >
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0- 60.0 sec  422744 MBytes  59104 Mbits/sec
> > [  3] 60.0-120.0 sec  435030 MBytes  60822 Mbits/sec
> > etc
> >
> >
> > Why does the Iperf client behave like that? Is this a known bug?
> >
> > That's not a bug in iperf, it's how UDP works. The main difference
> > between TCP and UDP is that with TCP, the IP stack itself takes care
> > of all the details (such as in-order delivery, retransmissions, rate
> > adaptation, ...), while with UDP that's the responsibility of the
> > application. The only UDP-specific thing the iperf application does is
> > fetch the server's (receiving side's) report at the end of the
> > transmission. Even this is not performed in a perfect way ... the
> > sending side only waits for the server report for a short time, and if
> > it has filled the network buffers, this waiting time can be too short.
> >
> > The same phenomenon can be seen if there's a bottleneck somewhere
> > between the nodes and you try to push the data rate too high ...
> > routers on either side of the bottleneck will discard packets when
> > their TX buffers get filled up. If TCP were used, this would trigger
> > retransmissions in the IP stack, TCP slow-start would kick in, and the
> > sending application would notice the drop in throughput. If UDP were
> > used, the IP stack would not react in any way and the application
> > would dump data at top speed.
> > --
> >
> > Peace!
> >   Mkx
> >
> > -- perl -e 'print
> > $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
> > -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
> >
> > ------------------------------------------------------------------------------
> >
> > BOFH excuse #252:
> >
> > Our ISP is having {switching,routing,SMDS,frame relay} problems
> >
> >
> >
>
> --
> Peace!
>   Mkx
>
> -- perl -e 'print
> $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
> -- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
> ________________________________
> BOFH excuse #79:
>
> Look, buddy: Windows 3.1 IS A General Protection Fault.
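[As a footnote to the thread: the ICMP type 3 code 3 messages Metod saw in wireshark are visible to an application when its UDP socket is connect()ed; on Linux the kernel reports the incoming port-unreachable back as ECONNREFUSED on a later send (see udp(7)). This is the signal that iperf 2.0.5 appears to ignore. A minimal sketch follows; the port 42001 and the helper name probe_refused are illustrative choices, not anything from iperf itself.]

```python
import socket
import time

def probe_refused(port=42001, attempts=5):
    """Send UDP datagrams to a local port with no listener. Because the
    socket is connect()ed, the kernel (on Linux) reports the resulting
    ICMP port-unreachable as ECONNREFUSED on a subsequent send.
    Returns True if that error was observed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(("127.0.0.1", port))  # assumed-unused port
    refused = False
    try:
        for _ in range(attempts):
            try:
                sock.send(b"probe")
            except ConnectionRefusedError:  # errno ECONNREFUSED
                refused = True
                break
            time.sleep(0.1)  # give the ICMP reply time to come back
    finally:
        sock.close()
    return refused

if __name__ == "__main__":
    print("ICMP port unreachable observed:", probe_refused())
```

[An unconnected socket, by contrast, never sees this error unless the application opts in via IP_RECVERR/MSG_ERRQUEUE, which matches the sporadic, ignored ICMP behaviour described earlier in the thread.]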