iperf's -r option creates low throughput in reverse direction on 100GbE links
Using iperf 2.0.13 on Ubuntu 18.04 to test 100GbE equipment, I've noticed that iperf's "-r" option yields the expected throughput in the client-to-server direction, but only a fraction of the expected throughput in the server-to-client direction. A work-around is to manually set up client and server roles on each machine and then test data transfer one direction at a time, without using the -r option.
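For reference, the invocations I'm comparing look roughly like this (hostnames are placeholders; the exact commands are shown in the video below):
On hostB (server): iperf -s
On hostA (client, -r trade-off test): iperf -c hostB -r
For the manual work-around the client simply drops the -r flag:
On hostA: iperf -c hostB
...and then the -s/-c roles are swapped between the two hosts to test the other direction.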
Since manually running iperf in each direction works well, I think there's a bug with the -r option.
It's worth mentioning that this is not an issue when testing the 1GbE links on the same machines.
Here is a video showing what I'm seeing:
https://youtu.be/FNySfTvJd0w
Many thanks in advance for help.
I've just cloned the latest source code and installed iperf 2.0.14a from it, and the reverse direction throughput is considerably improved, but still not wire-speed. Here's a video showing that:
https://youtu.be/c98yMgkVgpI
Hi Chris,
Thanks for posting this. Can you try your 2.0.14a using --reverse? Also, a run with more debugging enabled could be helpful. Getting the advanced debugging requires a recompile after ./configure --enable-thread-debug. Finally, the WARNING you're seeing suggests a mutex lock as the bottleneck. I'll need to make a code change to add some options to help fault-isolate that. Give me a few days and I'll have something.
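For reference, the rebuild is the usual autotools flow, roughly:
./configure --enable-thread-debug
make clean
make
(assuming you're building inside a checkout of the iperf2 source tree).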
Bob
Hi Chris,
I added support for a NUM_REPORT_STRUCTS override. I'm curious whether this will help. You'll need the latest (Nov 30) iperf 2.0.14a.
Bob
Using the latest source with debugging enabled, and adding "--NUM_REPORT_STRUCTS 50000", here's what I get:
transcript from the client:
transcript from the server:
Answering chronologically here:
Using the --reverse option on the 21 November 2.0.14a code resulted in limited throughput in the client-to-server direction, and no transfer at all in the server-to-client direction. Here's a transcript showing my results for "-r" and "--reverse":
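(The client invocations were essentially iperf -c <server> -r and iperf -c <server> --reverse, where <server> stands in for the actual hostname; the full command lines are in the transcript.)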
Thanks Chris, let me dig into this a bit. I really appreciate your help here.
Bob
FYI, I added a man page entry as well:
Also, -P on the server won't have any significant effect on the traffic itself; it's a way to have the server die after a traffic test completes. My suggestion is to use -t to kill servers and not use -P on the server at all.
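For example, something like iperf -s -t 60 has the server exit on its own after 60 seconds instead of listening forever.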
Bob
Hi Bob,
Thank you for all of your holiday weekend work on this. I'm really impressed with the level of support you've been providing, and I'm eager to follow your suggestions. Sadly, I won’t have access to the 100GbE test setup until Monday, so I can't provide any updates until then... In lieu of a true technical update for this ticket, I would like to express gratitude for your years of developing iperf, and let you know that I find your work invaluable. Thank you.
I’ll be sure to provide updates when I can.
Chris
I started wondering if there's a way I could provide useful feedback from home, without real 100GbE interfaces to play on, and I thought of an alternative of just running variants of these commands on the same machine to get very high (100 Gb/s+) throughput over loopback:
iperf -s
iperf -c localhost
Would this be a valid/useful way to test your changes until I can get back in the office?
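(Concretely, for the reverse-direction case I'd be checking, I imagine that would be something like iperf -s in one terminal and iperf -c localhost -r in another, all on the one machine.)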
I've compiled 2.0.14a, both with and without "./configure --enable-thread-debug", and for some reason I consistently get a segmentation fault on the server instance the second time my client connects to it (please see the attached transcripts from the client and server). Going back to my distro's iperf v2.0.10 resolved the seg-faults.
Just an update, I've gotten a few 100G systems and am actively working on this. Please be patient.
Bob
Ok, I think I have a working version as of 12/3. I may do some more tuning after more testing.
Bob
Nice!! I'm now getting wire-speed in both directions every time with the -r option.
Thanks for finding and posting this. Please do let me know if you run into any more issues.
Bob
Thank you for the fast fix. I'm content with closing this ticket as resolved.
Thank you for reporting it. I'll close the ticket. Please do file new ones when you find something that seems to be working incorrectly. This feedback helps all who rely on these tools.
Bob