Can iperf2 output per-packet info (sequence number and timestamp) to a file? Currently it reports id1, id2 and packetID if the "HAVE_PACKET_DEBUG" option is enabled, but no timestamp. Is it possible to add it?
Assuming the timestamp of each packet can be added, is there a way to enable or disable it, say with a command line argument? In some cases per-packet info might not be needed because of the large amount of data it produces.
Writing every packet to a file would likely impact the measured performance. Iperf 2 was designed to measure network i/o very accurately by minimizing file i/o and by performing i/o in a separate thread from the traffic thread.
Also, the Wireshark idea might be another way to achieve your goals.
With that said, it is technically doable.
Thanks. Agree with what you said. If it is doable, I am wondering if there is any plan to add this feature. I guess for the time being using Wireshark is probably the way to go.
There is no plan at the moment. Can you describe the use case?
The general idea with iperf 2 is that there should be some sort of server-side analysis capability for the client's traffic. Writing "partial information" to files doesn't align with these goals, so I need more information about the proposed use cases before adding a new feature.
Note: what is supported is the rx histograms. Per the man page, there is a timestamp of the worst-case packet, so one can find it in a Wireshark trace. All of this requires clock sync.
The following are three use cases, referring to the RFC 2544 spec. They are also the reason I think writing per-packet info to files on either the client or server side for post-processing would be one way to implement this; a rough post-processing sketch follows the list.
1. Run traffic for at least 120 seconds. After 60 seconds, tag one frame and record its timestamp at the transmitter; recognize the tagged frame at the receiver and record the timestamp at which it is received.
2. Run traffic at 110% of the maximum throughput rate for at least 60 seconds. At timestamp A, reduce the rate to 50% of that rate and record the time of the last lost frame, timestamp B.
3. Run traffic with the minimum frame size. Cause a reset on the DUT. Record the times at which the last frame of the initial stream (timestamp A) and the first frame of the new stream (timestamp B) are received.
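To make the post-processing idea concrete, here is a rough sketch (not an existing iperf 2 feature, and the log format is purely hypothetical) of how a per-packet log with sequence number, transmit timestamp, and receive timestamp could be used for use case 1:

```python
import csv

# Hypothetical per-packet log: one row per received packet with columns
# seq, tx_timestamp, rx_timestamp (seconds; tx and rx clocks assumed synced).
def tagged_frame_latency(log_path, tagged_seq):
    """Return the one-way latency of the tagged frame, or None if it never arrived."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["seq"]) == tagged_seq:
                return float(row["rx_timestamp"]) - float(row["tx_timestamp"])
    return None

# e.g. latency = tagged_frame_latency("per_packet_log.csv", tagged_seq=51234)
```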
Going through the code base, it seems that the packet timestamp is available on both client and server and can be extracted for use. I'm just not sure what the best way to handle it is.
OK, let me read through RFC 2544 a bit. I don't think the timestamps are required per se, rather the one-way delay measurement. I also need to see exactly when the timestamps are to be taken. I think a single sample isn't nearly as good as the full distribution. Iperf 2 provides one-way delay for every packet, every TCP write, etc.
The loss measurement isn't possible: there is no such thing as "timestamp B" for a lost packet. What I think is the goal is to figure out when an oversubscribed link goes from a non-zero PER to a zero PER. I need to think about that a bit.
Thanks for providing this information.
Why would Wireshark or any other capture tool not be sufficient? I just checked and can see all packets with everything nicely decoded, including timestamps. You can filter the way you like and save to a file in any format you prefer.
While testing this I found, interestingly enough, that Wireshark reports 447 UDP packets while iperf itself shows a total of only 270. Any ideas, anyone?
Where is wireshark running? Is the client or server reporting 447 packets?
Wireshark or tshark can do it. The question here is: is it doable using iperf2?
I run it on the client with the -R switch.
fyi, I filed a ticket for this.
Thank you. Hope it will become available in the next release or so.
I should be able to get to it within a few weeks.
On number 3, what is the DUT? Is it the transmitter, receiver or both? I don't know of a way for iperf to get a timestamp on a DUT reset. So it may not be possible to correlate the writes to a DUT reset.
I also don't like number 1 as defined. Measuring the latency of a single packet isn't sufficient information. Let me think about a better way to figure this out.
One might want to look into Little's law. Iperf 2.1 supports these calculations.
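For context, Little's law says the average number of packets in flight (the inP number iperf 2 reports) equals the arrival rate times the average one-way delay. A tiny worked example with made-up numbers:

```python
# Little's law: inP = arrival_rate * one_way_delay (illustrative numbers only).
throughput_bps = 100e6            # offered load, 100 Mbit/s
mean_owd_s = 0.004                # average one-way delay, 4 ms
payload_bits = 1470 * 8           # a common iperf UDP payload size

arrival_rate_pps = throughput_bps / payload_bits
inP_packets = arrival_rate_pps * mean_owd_s      # ~34 packets in flight
inP_bytes = (throughput_bps / 8) * mean_owd_s    # ~50 kB in flight
print(f"InP ~= {inP_packets:.1f} packets ({inP_bytes / 1e3:.0f} kB)")
```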
Also, I tend to find that perturbations are better than single events. Basically, have the stable state and measure things, cause a perturbation for some duration and measure things, and then remove the perturbation condition and measure things.
For example, the way this applies to number one is: find the congestion point ahead of time. Apply a load at 50% of congestion for 60 seconds and measure things, adjust the load to above congestion (110%) and measure things, then go back to an under-congested (50%) load.
The cli options would be something like:
perturbation-after (e.g. 120, units in seconds)
perturbation-hold (e.g. 60.0, units in seconds)
perturbation-tx-bandwidth <value> (units in bits/sec)
Then iperf just needs a single bit in the packet payload to indicate if the perturbation is active or not.
Then there would be a report output on the bit change (along with some delay) which includes a full histogram (instead of just one packet).
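As a rough illustration of the single-bit idea (this is a hypothetical payload layout, not iperf 2's actual wire format), the flag could be carried in a flags word alongside the sequence number and transmit timestamp:

```python
import struct

# Hypothetical UDP payload header: 32-bit sequence number, 32-bit flags,
# 64-bit transmit timestamp in nanoseconds. Not iperf 2's real format.
PERTURBATION_ACTIVE = 0x1

def pack_header(seq, flags_word, tx_time_ns):
    return struct.pack("!IIQ", seq, flags_word, tx_time_ns)

def perturbation_is_on(payload):
    _, flags_word, _ = struct.unpack("!IIQ", payload[:16])
    return bool(flags_word & PERTURBATION_ACTIVE)
```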
According to the spec, the latency test (section 26.2) should be performed after the throughput test (section 26.1). Looking at the throughput test, interestingly, the algorithm is quite similar to what you stated. The difference is that it checks packet loss, or the packet loss rate, to determine whether the DUT throughput is in an equilibrium state.
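For reference, the 26.1 throughput procedure amounts to a search on the offered load: raise the rate after a zero-loss trial, lower it after any loss, and converge on the highest zero-loss rate. A rough sketch of that idea, where run_trial is a stand-in for whatever drives the traffic generator (not an iperf interface):

```python
def rfc2544_throughput(run_trial, max_rate_bps, resolution_bps=1e6):
    """Binary-search for the highest offered load with zero frame loss.

    run_trial(rate_bps) should send a fixed-duration burst at rate_bps and
    return the number of frames lost. Sketch of the RFC 2544 26.1 idea only.
    """
    lo, hi = 0.0, float(max_rate_bps)
    best = 0.0
    while hi - lo > resolution_bps:
        rate = (lo + hi) / 2
        if run_trial(rate) == 0:   # equilibrium: no loss at this rate
            best, lo = rate, rate  # push higher
        else:
            hi = rate              # back off
    return best
```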
The DUT can be both the transmitter and the receiver. In a real test environment, the DUT is actually connected to a device called a traffic endpoint through an Ethernet port. It is the traffic endpoint that sends or receives data behind the DUT.
Use case 1 uses a single packet to measure the latency. Although the measurement is repeated at least 20 times and the average value is reported, it still seems a bit unconventional. Hopefully Little's law holds if more measurements can be done.
I'm reading through the RFC. It's dated 1999. These tests may be a bit incomplete.
On item 1, iperf gives the latency of every packet via the --histogram option. There is no need to do the single tagging, as every packet is effectively tagged by its sequence number.
On item 2, system recovery seems to be defined as no lost packets. That's not really a good metric. What's better is that the latency goes back to "normal" and the bottleneck queue drains. This test as defined exacerbates the bufferbloat issue.
On item 3, one can use the lost packet count and the packet throughput to compute the outage time (see the arithmetic sketch below).
So the only new thing is for item 2. What's to be measured is the queue depth returning to normal, so to speak. I think the Little's law inP number can be used to define "system recovered."
I need to prototype 2 a bit to see if it works as I think it would.
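For item 3, the arithmetic would look roughly like this (made-up numbers, just to show the computation):

```python
# Item 3: estimate the outage duration from the lost-packet count and the
# offered packet rate. Numbers below are illustrative only.
lost_packets = 42_500           # total packets lost across the DUT reset
offered_rate_pps = 85_000       # minimum-frame-size load, packets per second

outage_seconds = lost_packets / offered_rate_pps
print(f"estimated outage ~= {outage_seconds:.3f} s")   # ~0.5 s
```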
Also, the cli for number 2 would be something like
congestion-start delay in units seconds
congestion-hold time in units seconds
congestion-bw rate in units bits/second
Then, there would be a "congestion on" (CO) bit.
The receiver would learn the inP value while CO is off, observe the max inP value after the CO bit is set, start a timer when CO goes off again, and stop the timer when inP returns to the initially learned inP. Then output the values and the time.
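A rough sketch of that receiver-side logic (hypothetical names only, not iperf 2 code):

```python
# Track recovery time after congestion, per the CO-bit scheme described above.
class CongestionRecoveryTracker:
    def __init__(self):
        self.baseline_inP = None    # steady-state inP learned while CO is off
        self.max_inP = 0.0          # worst-case inP observed while CO is on
        self.recovery_start = None  # time at which CO turned off again
        self.recovery_time = None   # seconds for inP to return to baseline

    def on_sample(self, co_bit, inP, now):
        if self.recovery_time is not None:
            return                                  # measurement already done
        if co_bit:
            self.max_inP = max(self.max_inP, inP)   # congestion phase: track worst case
        elif self.max_inP == 0.0:
            self.baseline_inP = inP                 # learning phase: CO has never been set
        else:
            if self.recovery_start is None:
                self.recovery_start = now           # CO just cleared; start the timer
            if self.baseline_inP is not None and inP <= self.baseline_inP:
                self.recovery_time = now - self.recovery_start
```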
Note that all of these one way delay (OWD) measurements require clock sync.