
Is Little's Law useful for TCP byte wait times?

2024-07-05

    Robert McMahon - 2024-07-05

    Hi All,

    FYI, here is a good review of Little's Law applied to packets.

    A major challenge our industry faces is making responsiveness KPIs more prevalent and actionable for network engineers. Currently, many rely on capacity-only metrics to qualify a link.

    I'm considering adding the time to service a byte to iperf 2, since TCP is a byte-oriented protocol. This time will be calculated on the sender side using Little's Law, which says the average occupancy L equals the arrival rate lambda times the average wait W, so W = L / lambda. The "depth" L is taken from the tcp_info bytes-in-flight calculation, and the arrival rate lambda is derived from the write rate. This can all be done from the send side.

    Examples below; the new column is Wait, in units of ms.

    rjmcmahon@fedora:~/Code/pyflows/iperf2-code$ src/iperf -c 192.168.1.120 -i 1 -e --tcp-cca cubic --tcp-write-prefetch 256K  -w 4M
    ------------------------------------------------------------
    Client connecting to 192.168.1.120, TCP port 5001 with pid 24891 (1/0 flows/load)
    Write buffer size: 131072 Byte
    TCP congestion control set to cubic using cubic
    TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
    TCP window size: 8.00 MByte (WARNING: requested 4.00 MByte)
    Event based writes (pending queue watermark at 262144 bytes)
    ------------------------------------------------------------
    [  1] local 192.168.1.103%enp4s0 port 53530 connected with 192.168.1.120 port 5001 (prefetch=262144) (cubic) (icwnd/mss/irtt=14/1448/80086) (ct=80.15 ms) on 2024-06-27 09:18:05.897 (PDT)
    [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     InF(pkts)/Cwnd(pkts)/RTT(var)    Wait(ms)  NetPwr
    [  1] 0.00-1.00 sec  43.4 MBytes   364 Mbits/sec  348/0        35       89K(63)/130K(92)/2201(229) us  2.004 ms 20664
    [  1] 1.00-2.00 sec  47.0 MBytes   394 Mbits/sec  376/0        10      134K(95)/181K(128)/2508(157) us  2.784 ms 19650
    [  1] 2.00-3.00 sec  48.1 MBytes   404 Mbits/sec  385/0        12      186K(132)/209K(148)/3286(371) us  3.774 ms 15357
    [  1] 3.00-4.00 sec  48.4 MBytes   406 Mbits/sec  387/0        20      159K(113)/159K(113)/1871(270) us  3.210 ms 27111
    [  1] 4.00-5.00 sec  47.0 MBytes   394 Mbits/sec  376/0        13      176K(125)/206K(146)/2608(200) us  3.657 ms 18897
    [  1] 5.00-6.00 sec  46.5 MBytes   390 Mbits/sec  372/0        18      164K(116)/164K(116)/2784(208) us  3.444 ms 17514
    [  1] 6.00-7.00 sec  49.9 MBytes   418 Mbits/sec  399/0        18      159K(113)/159K(113)/1992(209) us  3.113 ms 26254
    [  1] 7.00-8.00 sec  47.1 MBytes   395 Mbits/sec  377/0        21       91K(65)/151K(107)/2468(244) us  1.886 ms 20022
    [  1] 8.00-9.00 sec  48.0 MBytes   403 Mbits/sec  384/0        23      158K(112)/178K(126)/2388(297) us  3.215 ms 21077
    [  1] 9.00-10.00 sec  49.5 MBytes   415 Mbits/sec  396/0        30      182K(129)/182K(129)/2665(350) us  3.591 ms 19476
    [  1] 0.00-10.01 sec   475 MBytes   398 Mbits/sec  3801/0       200        0K(0)/185K(131)/1779(71) us  0.000 ms 27966
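
    As a sanity check on the first interval above: InF is 89 KBytes (63 packets) and the rate is 364 Mbits/sec, i.e. roughly 45.5 MBytes/sec, so Little's Law gives W = L / lambda = (89 * 1024) / 45.5e6 sec, about 2.0 ms, matching the reported 2.004 ms.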
    
    rjmcmahon@fedora:~/Code/pyflows/iperf2-code$ src/iperf -c 192.168.1.77 -i 1 -e --tcp-cca cubic --tcp-write-prefetch 256K  -w 4M
    ------------------------------------------------------------
    Client connecting to 192.168.1.77, TCP port 5001 with pid 24901 (1/0 flows/load)
    Write buffer size: 131072 Byte
    TCP congestion control set to cubic using cubic
    TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
    TCP window size: 8.00 MByte (WARNING: requested 4.00 MByte)
    Event based writes (pending queue watermark at 262144 bytes)
    ------------------------------------------------------------
    [  1] local 192.168.1.103%enp4s0 port 48388 connected with 192.168.1.77 port 5001 (prefetch=262144) (cubic) (icwnd/mss/irtt=14/1448/405) (ct=0.45 ms) on 2024-06-27 09:18:30.384 (PDT)
    [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     InF(pkts)/Cwnd(pkts)/RTT(var)    Wait(ms)  NetPwr
    [  1] 0.00-1.00 sec  1.10 GBytes  9.41 Gbits/sec  8975/0        52     1274K(901)/1308K(925)/1107(25) us  1.109 ms 1062548
    [  1] 1.00-2.00 sec  1.10 GBytes  9.42 Gbits/sec  8981/0         0     1643K(1162)/1843K(1304)/1505(28) us  1.429 ms 782165
    [  1] 2.00-3.00 sec  1.10 GBytes  9.42 Gbits/sec  8980/0         0     1804K(1276)/1916K(1355)/1480(18) us  1.569 ms 795288
    [  1] 3.00-4.00 sec  1.10 GBytes  9.41 Gbits/sec  8978/0         0     1702K(1204)/1955K(1383)/1536(33) us  1.481 ms 766123
    [  1] 4.00-5.00 sec  1.10 GBytes  9.41 Gbits/sec  8978/0         0     1705K(1206)/1974K(1396)/1554(25) us  1.484 ms 757249
    [  1] 5.00-6.00 sec  1.10 GBytes  9.42 Gbits/sec  8979/0         0     1701K(1203)/1978K(1399)/1544(26) us  1.480 ms 762238
    [  1] 6.00-7.00 sec  1.10 GBytes  9.42 Gbits/sec  8979/0         0     1743K(1233)/1982K(1402)/1531(29) us  1.517 ms 768710
    [  1] 7.00-8.00 sec  1.10 GBytes  9.41 Gbits/sec  8978/0         0     1795K(1270)/1985K(1404)/1530(34) us  1.562 ms 769127
    [  1] 8.00-9.00 sec  1.10 GBytes  9.41 Gbits/sec  8978/0         0     1771K(1253)/1986K(1405)/1549(42) us  1.541 ms 759693
    [  1] 9.00-10.00 sec  1.10 GBytes  9.42 Gbits/sec  8979/0         0     1662K(1176)/1991K(1408)/1581(19) us  1.446 ms 744399
    [  1] 0.00-10.01 sec  11.0 GBytes  9.40 Gbits/sec  89786/0        52        0K(0)/1991K(1408)/1473(35) us  0.000 ms 797836
    
    Here's an example with a sub-millisecond RTT where the IP end-to-end path has added delay (via --fq-rate, which uses SO_MAX_PACING_RATE). This is over 10G wired.
    
    rjmcmahon@fedora:~/Code/pyflows/iperf2-code$ src/iperf -c 192.168.1.77 -i 1 -e --tcp-cca cubic --tcp-write-prefetch 256K  -w 4M --fq-rate 100m
    ------------------------------------------------------------
    Client connecting to 192.168.1.77, TCP port 5001 with pid 35066 (1/0 flows/load)
    Write buffer size: 131072 Byte
    fair-queue socket pacing set to  100 Mbit/s
    TCP congestion control set to cubic using cubic
    TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
    TCP window size: 8.00 MByte (WARNING: requested 4.00 MByte)
    Event based writes (pending queue watermark at 262144 bytes)
    ------------------------------------------------------------
    [  1] local 192.168.1.103%enp4s0 port 60198 connected with 192.168.1.77 port 5001 (prefetch=262144) (cubic) (icwnd/mss/irtt=14/1448/319) (ct=0.37 ms) on 2024-06-27 12:07:24.074 (PDT)
    [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     InF(pkts)/Cwnd(pkts)/RTT(var)  fq-rate  Wait(ms)  NetPwr
    [  1] 0.00-1.00 sec  12.3 MBytes   103 Mbits/sec  99/0         0      127K(90)/149K(106)/365(18) us  100 Mbit/sec   10.124 ms 35192
    [  1] 1.00-2.00 sec  11.9 MBytes  99.6 Mbits/sec  95/0         0      127K(90)/149K(106)/375(15) us  100 Mbit/sec   10.444 ms 33205
    [  1] 2.00-3.00 sec  11.9 MBytes  99.6 Mbits/sec  95/0         0      127K(90)/149K(106)/369(18) us  100 Mbit/sec   10.444 ms 33745
    [  1] 3.00-4.00 sec  12.0 MBytes   101 Mbits/sec  96/0         0      127K(90)/149K(106)/377(11) us  100 Mbit/sec   10.335 ms 33376
    [  1] 4.00-5.00 sec  11.9 MBytes  99.6 Mbits/sec  95/0         0      127K(90)/149K(106)/369(15) us  100 Mbit/sec   10.444 ms 33745
    [  1] 5.00-6.00 sec  12.0 MBytes   101 Mbits/sec  96/0         0      127K(90)/149K(106)/373(12) us  100 Mbit/sec   10.335 ms 33734
    [  1] 6.00-7.00 sec  11.9 MBytes  99.6 Mbits/sec  95/0         0      127K(90)/149K(106)/373(22) us  100 Mbit/sec   10.444 ms 33383
    [  1] 7.00-8.00 sec  11.9 MBytes  99.6 Mbits/sec  95/0         0      127K(90)/149K(106)/368(17) us  100 Mbit/sec   10.444 ms 33837
    [  1] 8.00-9.00 sec  11.9 MBytes  99.6 Mbits/sec  95/0         0      127K(90)/149K(106)/379(19) us  100 Mbit/sec   10.444 ms 32854
    [  1] 9.00-10.00 sec  12.0 MBytes   101 Mbits/sec  96/0         0      127K(90)/149K(106)/380(12) us  100 Mbit/sec   10.335 ms 33113
    [  1] 0.00-10.04 sec   120 MBytes   100 Mbits/sec  958/0         0      149K/345(20) us 36225
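
    Note the Wait column here: the RTT sample stays near 370 us, yet 127 KBytes in flight drained at the reported 99.6 Mbits/sec (about 12.45 MBytes/sec) gives (127 * 1024) / 12.45e6 sec, about 10.4 ms, matching the reported 10.444 ms. The wait time surfaces the pacing/queueing delay that the RTT sample alone doesn't show.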
    
    #if HAVE_TCP_INFLIGHT
        /* Packets in flight, mirroring the Linux kernel's tcp_packets_in_flight():
         * packets_out - (sacked_out + lost_out) + retrans_out */
        stats->packets_in_flight = (tcp_info_buf.tcpi_unacked - tcp_info_buf.tcpi_sacked -
                        tcp_info_buf.tcpi_lost + tcp_info_buf.tcpi_retrans);
        /* Scale by the sender MSS; the divide by 1024 stores KBytes,
         * matching the K-suffixed InF column in the reports above */
        stats->bytes_in_flight  = stats->packets_in_flight * tcp_info_buf.tcpi_snd_mss / 1024;
    #endif
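
    And a minimal sketch of the Little's Law step itself, assuming a hypothetical helper that is fed the in-flight byte depth and a write rate measured over the report interval (the names below are illustrative, not the actual iperf 2 source):

    #include <stdint.h>

    /* Little's Law: W = L / lambda. L is the byte depth in flight and
     * lambda is the sender's write rate in bytes/sec for the interval.
     * Returns the wait in milliseconds. */
    static double littles_law_wait_ms (uint64_t inflight_bytes, double write_rate_Bps) {
        if (write_rate_Bps <= 0.0)
            return 0.0; /* no arrivals, e.g. the zeroed final summary rows */
        return 1e3 * (double) inflight_bytes / write_rate_Bps;
    }

    For example, littles_law_wait_ms(1274 * 1024, 9.41e9 / 8) returns about 1.11 ms, which lines up with the first interval of the 10G run above.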
    

    Let me know your thoughts on this.

    Thanks,
    Bob