While conducting responsiveness tests using the bounceback option, I noticed that the server consistently sends back the same amount of data that it receives from the client. We have been using iperf for throughput measurements across mobile communication networks, which often have asymmetric uplink and downlink characteristics. This makes it essential to test each direction of a client-server connection individually.
The question I have is whether there is an argument I may have unintentionally overlooked in my testing that could force bounceback to generate data in only one direction (from client to server; I am aware that the server must send back some data to the client to communicate timing information). If such an argument does not currently exist, are there any plans to support unidirectional throughput with the bounceback option?
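For reference, the baseline test that shows this behavior looks roughly like this (a sketch; <server> is a placeholder for the server under test):

    # plain bounceback responsiveness test; as far as I can tell, the reply
    # payload defaults to the same size as the request payload, so the
    # traffic ends up symmetric in both directions
    iperf -c <server> --bounceback -i 1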
Hi,
The way I do that is with --bounceback-request / --bounceback-reply.
If you configure a large request size and a small reply size, you will get an asymmetrical test, with very low bandwidth need on the return path. If you are limited by the number of packets on the return path, you can use large request sizes spanning multiple MSSs/MTUs to create more packets on the request path.
Those are not unidirectional, but fairly close to it.
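For example (a sketch; <server> is a placeholder and the sizes are illustrative):

    # ~64 KB requests span many MSS-sized segments on the forward path,
    # while 100-byte replies keep the return path close to idle
    iperf -c <server> --bounceback --bounceback-request 65536 --bounceback-reply 100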
Regards,
Jean
The latest in master has the following bounceback options. The reply & request sizes need to be large enough to carry the bounceback payload with timestamps.
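For reference, a reconstruction of those options from the iperf 2 man page of that era (the original post showed the built-in help; run iperf --help on your build for the authoritative list):

    --bounceback[=n]          run a TCP bounceback (responsiveness) test, n request/reply pairs per burst
    --bounceback-hold n       request the server to delay n milliseconds between its read and write
    --bounceback-no-quickack  request the server not to set TCP_QUICKACK when replying
    --bounceback-period[=n]   client schedules its bursts every n seconds
    --bounceback-request n    set the bounceback request size in bytes
    --bounceback-reply n      set the bounceback reply size in bytes (defaults to the request size)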
Thank you, Robert, for your swift answer and for pointing me to the relevant implementation. Your insights made me realize that a minimum packet size is necessary to transmit bounceback data effectively.
To elaborate on my intentions: I'm trying to measure latency under high uplink or downlink load. Initially, my idea was to use the bounceback parameter n (--bounceback[=n]) to increase the number of packets sent per interval, or the -l parameter to increase the packet size, aiming to create a significant load in one direction. This is when I noticed that in the backhaul direction the number and size of data blocks are exactly the same as in the forward direction, resulting in equal throughput in both directions.
However, I now understand that the correct approach for my use case is to combine --bounceback with the --working-load[=up|down|bidir][,n] argument, which steers me in the right direction (see attachment).
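Something like this is what I am running now (a sketch; <server> is a placeholder):

    # bounceback latency probe with a concurrent TCP working load
    # on the downlink (server-to-client) direction
    iperf -c <server> --bounceback --working-load=down -i 1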
Despite this, I have observed a discrepancy in the TCP throughput of the working-load stream when it is combined with --bounceback. The output indicates 481 Mbits/sec, whereas the actual capacity of the IP connection should be around 124 Gbit/s (see the attached outputs).
Interestingly, omitting --bounceback in the final test boosts the throughput back up to 124 Gbit/s. Do you have any insights as to why the throughput of the working-load stream drops so significantly when --bounceback is used as well?
By the way, I also experimented with increasing the number of streams in conjunction with the working-load argument. This boosted the throughput to approximately 13 Gbit/s with about 24 streams in use, which is still significantly less than the channel's actual capacity.
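The multi-stream variant looked roughly like this (again a sketch; <server> is a placeholder):

    # 24 concurrent downlink working-load streams alongside the bounceback probe
    iperf -c <server> --bounceback --working-load=down,24 -i 1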
This is when I noticed that in the backhaul direction the number and size of data blocks are exactly the same as in the forward direction, resulting in equal throughput in both directions.
As I said in my previous message, a workaround is to use larger requests, spanning multiple MSSs/MTUs.
Jean
I would probably use a newer version of iperf. There have been a lot of bug fixes in this area since 2.1.9. I plan to release a new version soon. Until then, you'll need to build from the master branch.
Sorry I missed your message until now.
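Until then, building from master looks roughly like this (assuming the project's SourceForge git URL; adjust paths as needed):

    # fetch and build iperf 2 from the master branch
    git clone https://git.code.sf.net/p/iperf2/code iperf2-code
    cd iperf2-code
    ./configure && make
    sudo make install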
No worries, I've been away for a while and didn't have a chance to follow up on this. Now, after updating to iperf 2.2, I see a lot of improvements in the bounceback area; most importantly, --working-load down|up seems to work without any problems. Thanks! Great work!