Oh, I can check this in the server header output:

    Server listening on UDP port 5001 with pid 43577
    Read buffer size: 1.44 KByte (Dist bin width= 183 Byte)
    Enabled receive histograms bin-width=0.100 ms, bins=10000 (clients should use --trip-times)
    UDP buffer size: 208 KByte (default)
Thanks for your reply. When I run iperf -s -u -e --histograms on the server and iperf -u --trip-times -e on the client, the result shows:

    [ ID] Interval        Transfer    Bandwidth      Jitter    Lost/Total          Latency avg/min/max/stdev       PPS     Rx/inP Read/Timeo/Trunc  NetPwr
    [  1] 0.00-15.00 sec  562 MBytes  315 Mbits/sec  0.030 ms  51/401246 (0.013%)  0.236/0.174/1.132/0.066 ms  -0 pps  401195/6(0) pkts  401196/0/0  166335
    [  1] 0.00-15.00 sec T8(f)-PDF: bin(w=100us):cnt(401194)=2:81670,3:307775,4:5088,5:1025,6:1039,7:1775,8:1690,9:942,10:167,11:20,12:3 (5.00/95.00/99.7%u=0/0)...
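In case it helps to cross-check the percentiles iperf prints, here is a small sketch (the function names are illustrative, not part of iperf) that decodes that bin:count list, assuming bin n with width w covers latencies up to n*w:

```python
# Decode an iperf 2 "-PDF" histogram entry list ("bin:count,bin:count,...").
# Assumption (not verified against iperf source): with bin-width w, bin n
# holds packets whose one-way latency fell at or below n*w microseconds.

def parse_histogram(entries: str, bin_width_us: int = 100):
    """Return a list of (upper_bound_us, count) pairs."""
    out = []
    for item in entries.split(","):
        b, c = item.split(":")
        out.append((int(b) * bin_width_us, int(c)))
    return out

def percentile_us(hist, pct):
    """Smallest bin upper bound (us) covering `pct` percent of packets."""
    total = sum(c for _, c in hist)
    target = total * pct / 100.0
    running = 0
    for upper_bound, count in sorted(hist):
        running += count
        if running >= target:
            return upper_bound
    return hist[-1][0]

hist = parse_histogram("2:81670,3:307775,4:5088,5:1025,6:1039,"
                       "7:1775,8:1690,9:942,10:167,11:20,12:3")
print(percentile_us(hist, 95))  # -> 300, i.e. 95% of packets within 0.3 ms
```

Under that bin-width assumption, the histogram above says roughly 95% of packets arrived within 0.3 ms and 99.7% within 0.8 ms, which is consistent with the avg/min/max column (0.236/0.174/1.132 ms).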
Hello, I'm using iperf version 2.2.0 and want to measure the one-way latency of UDP packets. The clocks of the two hosts are synchronized. When I use the -e option on the server side, I see latency metrics. Are these RTT (round-trip time) values? Also, when I use the --trip-times option on the client side, I get latency metrics as well. How do I get accurate one-way latency measurements?
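For context, the principle behind a one-way measurement like --trip-times can be sketched as follows (this is the general idea, not iperf's actual wire format): the sender embeds its wall-clock send time in each packet, and the receiver subtracts it from its own receive time, which is only meaningful when the two clocks are synchronized (e.g. via NTP or PTP):

```python
import struct
import time

def make_packet(payload: bytes) -> bytes:
    # Prefix the payload with the sender's wall-clock timestamp
    # (8-byte big-endian double, a format chosen here for illustration).
    return struct.pack("!d", time.time()) + payload

def one_way_delay_ms(packet: bytes, rx_time: float) -> float:
    # One-way delay = receive time minus embedded send time.
    # Any clock offset between the hosts shows up directly as error.
    (tx_time,) = struct.unpack("!d", packet[:8])
    return (rx_time - tx_time) * 1000.0

pkt = make_packet(b"probe")
# On a single host both "clocks" are the same, so the delay is near zero.
print(one_way_delay_ms(pkt, time.time()))
```

This is also why -e latency figures taken without synchronized clocks (or without the client stamping send times) cannot be trusted as one-way numbers.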