
Path MTU Discovery (PMTUD)

Spider Man
2024-12-18
2025-01-15
  • Spider Man

    Spider Man - 2024-12-18

    Hello,

    I am a Sysadmin at a medium-sized business, in the unique position of being unable to get further networking support. My experience dealing with this issue spans thousands of hours across countless projects, infrastructure changes, software changes, and deep dives with vendors. This specific problem lives with me every day and manifests in different forms. I appreciate any comments, and thank you for taking the time to read my story.

    Important findings
    Using IPERF (thank you!) and dozens of other proofs, I observe what can best be described as a Path MTU Discovery (PMTUD) issue.

    Using IPERF (-u switch) I see a tremendous amount of UDP packet loss within our hypervisor environment. What is most interesting to me is that it happens at the host level (the two test machines are on the same VLAN, on the same host).

    Using a native Windows file transfer we see cycling, no steady stream: up, down, up, down, pause (0 bytes/second).
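One way to make that cycling quantifiable instead of anecdotal is to parse the per-interval rates out of an iperf report and flag the seconds where throughput collapsed. A minimal sketch (hypothetical helper names; assumes interval lines ending in "Mbits/sec" like the ones later in this thread):

```python
import re

def interval_rates_mbps(report: str):
    """Extract per-interval rates in Mbits/sec from iperf-style text.

    Matches lines like '[ 1] 3.00-4.00 sec 662 KBytes 5.42 Mbits/sec'.
    Note: a final summary line, if present, is included as well.
    """
    return [float(m.group(1))
            for m in re.finditer(r"([\d.]+) Mbits/sec", report)]

def stalled_seconds(rates, threshold_mbps: float = 1.0):
    """Indices of intervals that dropped below the stall threshold."""
    return [i for i, r in enumerate(rates) if r < threshold_mbps]
```

A smooth "straight line" transfer yields an empty stall list; the up/down/pause pattern shows up as one or more flagged intervals.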

    The "bandaid"
    The native MTU for a Windows machine is 1500 (netsh interface ipv4 show subinterface)

    Gasp: adjusting the MTU to 68 (netsh interface ipv4 set subinterface "Local Area Connection" mtu=68), i.e. the lowest possible value, makes ALL of my measurable real-life issues go away.

    Before the change:
    Copying a 5GB file from one server to the other using File Explorer and Windows File Transfer, I see the findings mentioned above: terrible performance, cycling, pauses in the connection.

    After the changes:
    2 Gb/sec, straight line

    2 Gb/sec is not the point worth celebrating here; it's the straight line, that the connection didn't pause, that the file is on the destination server faster than Windows File Transfer can checksum what happened.
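For perspective on why a smooth transfer at MTU 68 is so striking: each packet then carries only 28 bytes of TCP payload, so almost all of the wire is headers. A quick sketch of that arithmetic (assuming 20-byte IP and TCP headers with no options, and 18 bytes of Ethernet header + FCS, ignoring preamble/IFG):

```python
def tcp_mss(mtu: int, ip_hdr: int = 20, tcp_hdr: int = 20) -> int:
    """Maximum TCP payload per packet for a given MTU (no IP/TCP options)."""
    return mtu - ip_hdr - tcp_hdr

def wire_efficiency(mtu: int, l2_overhead: int = 18) -> float:
    """Fraction of on-the-wire bytes that are TCP payload.

    18 = Ethernet header (14) + FCS (4); preamble and inter-frame gap ignored.
    """
    return tcp_mss(mtu) / (mtu + l2_overhead)

print(tcp_mss(1500), wire_efficiency(1500))  # 1460 bytes/packet, ~96% efficient
print(tcp_mss(68), wire_efficiency(68))      # 28 bytes/packet, ~33% efficient
```

So an MTU-68 link gives up roughly two thirds of its raw capacity to headers, which is why the stall removal, not the headline rate, is the interesting signal.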

    The conspiracy
    Cool, fire up IPERF -- WOW, that is a lot of dropped traffic! (-u option)

    I sprint to my networking escalation people to apprise them: the smoking gun! It's an MTU Discovery problem!

    No response; too cool for school. Maybe they want some real-life examples? Done, proved out one of my long-standing issues; I can now simulate the issue by manipulating the MTU on the guest. No response.

    Ok maybe they need half a dozen proofs. Done. No response.

    Ok how about a dozen more? Done. No response.

    This issue bleeds PMTUD all over it and I'm being ignored.

    Goal
    Find a better way to show PMTUD issues with IPERF for a single-host, non-routed example.
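One direct way to demonstrate the effective path MTU is a don't-fragment probe sweep: send DF-flagged datagrams and binary-search the largest size that gets through. A sketch with the probe injected so the search itself is testable; the socket-based probe is an assumption (Linux-only, using the IP_MTU_DISCOVER socket option):

```python
import socket

def df_probe(host: str, port: int = 5001):
    """Build a probe(size) that sends a UDP datagram of the given total IP
    packet size with the DF bit set (Linux-only assumption)."""
    def probe(size: int) -> bool:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                         socket.IP_PMTUDISC_DO)
            # 20-byte IP header + 8-byte UDP header leaves size-28 payload bytes
            s.sendto(b"x" * (size - 28), (host, port))
            return True
        except OSError:  # EMSGSIZE when size exceeds the local notion of path MTU
            return False
        finally:
            s.close()
    return probe

def find_path_mtu(probe, lo: int = 576, hi: int = 9000) -> int:
    """Binary-search the largest DF packet size that probe() accepts."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

On a healthy 1500-byte path, find_path_mtu(df_probe("<server-ip>")) should converge on 1500; a smaller answer, or probes that silently vanish instead of erroring, is the kind of evidence a networking team can't wave away.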

    The host
    1. Isolated, no other VMs outside the scope of this test are on the host (2 VMs)
    2. VMware ESXi, v7.x

    UCS
    1. Cisco fabric interconnects

    Cores
    1. Cisco Nexus
    2. 10Gb uplinks
    3. Active / Passive :(

    To route or not to route
    This is a non-routed example, no traffic should be leaving the host in this test.

    1. VMs are on the same VLAN
    2. VMs have no gateway added in the vNic
    3. VMs are on the same host
    4. DRS is turned off to keep the VMs pinned in place

    The Test VMs
    1. Server 2022
    2. Non-domain joined
    3. Vanilla
    4. Latest IPERF

    Before posting
    1. I can find others on the internet, very few, with the same issue
    2. Most people just think dropped UDP traffic is fine and move along (puppet meme)
    3. Most of the things I find are interesting, but I have trouble following them
    4. Opened a VMware ticket (now closed as a network issue)
    5. Got stuck for a while ruling out client-side buffers / rx buffers
    6. Replicated the same "bandaid" across many different host types, different OSes, different ages of host. Verified the issue is repeatable and that adjusting the MTU downwards solves my problems.

    (this post) I need help!
    If you have any suggestions or any specific tests you would like to see, I would like to run those and attach the results here. As nice as it is to lower the MTU of a client to 68 and call it a day, I can't sleep at night knowing I'm just lowering my fragmentation rate and avoiding the problem, not solving anything. Thank you again for reading. I hope to give back to the community by documenting the ways this problem manifests through various software and protocols, and I hope my story is read by another Sysadmin pushing their networking team to take the problem seriously; even if they can't address it, acknowledge it and dig into it.

     
    • Robert McMahon

      Robert McMahon - 2024-12-18

      Maybe try a Markov chain of various lengths, then use Wireshark to see which
      lengths are getting through and which aren't. You'll need to compile from
      master, as Markov chain support was recently added.

      https://sourceforge.net/p/iperf2/code/ci/master/tree/

      rjmcmahon@fedora:~/Code/easyomit/iperf2-code$ src/iperf -v
      iperf version 2.2.n (16 Dec 2024) pthreads

      rjmcmahon@fedora:~/Code/easyomit/iperf2-code$ src/iperf -c localhost -u -i
      1 -l
      '<128|0.0,1.0,0.0,0.0,0.0<256|0.0,0.0,1.0,0.0,0.0<512|0.0,0.0,0.0,1.0,0.0<1024|0.0,0.0,0.0,0.0,1.0<1470|1.0,0.0,0.0,0.0,0.0'
      -e -b 1000pps


      Client connecting to localhost, UDP port 5001 with pid 154009 (1/0
      flows/load)
      TOS defaults to 0x0 (dscp=0,ecn=0) (Nagle on)
      Sending lengths using markov chain:
      <128|0.0,1.0,0.0,0.0,0.0<256|0.0,0.0,1.0,0.0,0.0<512|0.0,0.0,0.0,1.0,0.0<1024|0.0,0.0,0.0,0.0,1.0<1470|1.0,0.0,0.0,0.0,0.0
      UDP buffer size: 208 KByte (default)


      [ 1] local 127.0.0.1%lo port 55528 connected with 127.0.0.1 port 5001 on
      2024-12-17 18:38:05.591 (PST)
      [ ID] Interval Transfer Bandwidth Write/Err/Timeo PPS
      [ 1] 0.00-1.00 sec 663 KBytes 5.43 Mbits/sec 1000/0/0 1000 pps
      [ 1] 1.00-2.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 2.00-3.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 3.00-4.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 4.00-5.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 5.00-6.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 6.00-7.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 7.00-8.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 8.00-9.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 9.00-10.00 sec 662 KBytes 5.42 Mbits/sec 1000/0/0 1000 pps
      [ 1] 0.00-10.00 sec 6.47 MBytes 5.43 Mbits/sec 10002/0/0 909 pps
      [ 1] Sent 10003 datagrams
      [ 1] Markov chain:
      <128|0.0,1.0,0.0,0.0,0.0<256|0.0,0.0,1.0,0.0,0.0<512|0.0,0.0,0.0,1.0,0.0<1024|0.0,0.0,0.0,0.0,1.0<1470|1.0,0.0,0.0,0.0,0.0
      transitions: 10000
      [ 1] 128=128(0,0)|0.000/0.000(0/0.000) 256(0,1)|1.000/1.000(2000/1.000)
      512(0,2)|0.000/1.000(0/0.000) 1024(0,3)|0.000/1.000(0/0.000)
      1470(0,4)|0.000/1.000(0/0.000) 2000/10000(20.0%)
      [ 1] 256=128(1,0)|0.000/0.000(0/0.000) 256(1,1)|0.000/0.000(0/0.000)
      512(1,2)|1.000/1.000(2000/1.000) 1024(1,3)|0.000/1.000(0/0.000)
      1470(1,4)|0.000/1.000(0/0.000) 2000/10000(20.0%)
      [ 1] 512=128(2,0)|0.000/0.000(0/0.000) 256(2,1)|0.000/0.000(0/0.000)
      512(2,2)|0.000/0.000(0/0.000) 1024(2,3)|1.000/1.000(2000/1.000)
      1470(2,4)|0.000/1.000(0/0.000) 2000/10000(20.0%)
      [ 1] 1024=128(3,0)|0.000/0.000(0/0.000) 256(3,1)|0.000/0.000(0/0.000)
      512(3,2)|0.000/0.000(0/0.000) 1024(3,3)|0.000/0.000(0/0.000)
      1470(3,4)|1.000/1.000(2000/1.000) 2000/10000(20.0%)
      [ 1] 1470=128(4,0)|1.000/1.000(2000/1.000) 256(4,1)|0.000/1.000(0/0.000)
      512(4,2)|0.000/1.000(0/0.000) 1024(4,3)|0.000/1.000(0/0.000)
      1470(4,4)|0.000/1.000(0/0.000) 2000/10000(20.0%)
      [ 1] Server Report:
      [ ID] Interval Transfer Bandwidth Jitter Lost/Total
      Datagrams
      [ 1] 0.00-10.00 sec 6.47 MBytes 5.43 Mbits/sec 0.000 ms 0/10002 (0%)

      Start a server too.
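That -l chain argument is dense to write by hand. A small helper can generate it from a transition matrix; the format here is inferred from the example output above (one `<size|p0,p1,...` segment per state, with transition probabilities listed in state order):

```python
def markov_len_arg(sizes, matrix):
    """Build the iperf2 Markov-chain -l argument from a list of payload sizes
    and a row-stochastic transition matrix (format inferred from this thread)."""
    return "".join(
        "<%d|%s" % (size, ",".join("%.1f" % p for p in row))
        for size, row in zip(sizes, matrix)
    )

# The five-state "rotate to the next size" chain used in the example above:
sizes = [128, 256, 512, 1024, 1470]
rotate = [[1.0 if j == (i + 1) % 5 else 0.0 for j in range(5)]
          for i in range(5)]
print(markov_len_arg(sizes, rotate))
```

Printing the rotate chain reproduces the exact argument string used in the run above, so the helper can be sanity-checked against the thread before trying other matrices.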

       
  • Spider Man

    Spider Man - 2024-12-18

    Will give this a try tomorrow. Thanks!

     
    • Robert McMahon

      Robert McMahon - 2024-12-18

      I'll probably add a receive side table too. Let me see what I can do.

      Bob


       
      • Spider Man

        Spider Man - 2024-12-18

        I'm having issues compiling; I tried Windows (Cygwin) and Ubuntu but get errors. Any suggestions? Thanks,

        sudo apt update
        sudo apt install mingw-w64
        cd iperf
        ./configure --host=x86_64-w64-mingw32
        make

         
        • Robert McMahon

          Robert McMahon - 2024-12-18

          can you provide the config.log for ubuntu, after ./configure?

          Bob


           
          • Robert McMahon

            Robert McMahon - 2024-12-18

            FYI, I just added a very crude way to check the Markov chain on the server
            side. Pass the same chain to both client and server:

            rjmcmahon@fedora:~/Code/easyomit/iperf2-code$ src/iperf -s -u -l
            '<128|0.0,1.0,0.0,0.0,0.0<256|0.0,0.0,1.0,0.0,0.0<512|0.0,0.0,0.0,1.0,0.0<1024|0.0,0.0,0.0,0.0,1.0<1470|1.0,0.0,0.0,0.0,0.0'
            -i 1


            Server listening on UDP port 5001
            UDP buffer size: 208 KByte (default)


            [ 1] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 48639
            [ ID] Interval Transfer Bandwidth Jitter Lost/Total
            Datagrams
            [ 1] 0.00-1.00 sec 663 KBytes 5.43 Mbits/sec 0.004 ms 0/1000 (0%)
            [ 1] 1.00-2.00 sec 662 KBytes 5.42 Mbits/sec 0.001 ms 0/1000 (0%)
            [ 1] 2.00-3.00 sec 662 KBytes 5.42 Mbits/sec 0.000 ms 0/1000 (0%)
            [ 1] 3.00-4.00 sec 662 KBytes 5.42 Mbits/sec 0.001 ms 0/1000 (0%)
            [ 1] 4.00-5.00 sec 662 KBytes 5.42 Mbits/sec 0.004 ms 0/1000 (0%)
            [ 1] 5.00-6.00 sec 662 KBytes 5.42 Mbits/sec 0.001 ms 0/1000 (0%)
            [ 1] 6.00-7.00 sec 662 KBytes 5.42 Mbits/sec 0.001 ms 0/1000 (0%)
            [ 1] 7.00-8.00 sec 662 KBytes 5.42 Mbits/sec 0.001 ms 0/1000 (0%)
            [ 1] 8.00-9.00 sec 662 KBytes 5.42 Mbits/sec 0.001 ms 0/1000 (0%)
            [ 1] 9.00-10.00 sec 662 KBytes 5.42 Mbits/sec 0.002 ms 0/1000 (0%)
            [ 1] 0.00-10.00 sec 6.47 MBytes 5.43 Mbits/sec 0.001 ms 0/10002 (0%)
            [ 1] Markov chain:
            <128|0.0,1.0,0.0,0.0,0.0<256|0.0,0.0,1.0,0.0,0.0<512|0.0,0.0,0.0,1.0,0.0<1024|0.0,0.0,0.0,0.0,1.0<1470|1.0,0.0,0.0,0.0,0.0
            transitions: 10001 unknowns: 0
            [ 1] 128=128(0,0)|0.000/0.000(0/0.000) 256(0,1)|1.000/1.000(1999/0.999)
            512(0,2)|0.000/1.000(0/0.000) 1024(0,3)|0.000/1.000(0/0.000)
            1470(0,4)|0.000/1.000(1/0.001) 2000/10001(20.0%)
            [ 1] 256=128(1,0)|0.000/0.000(0/0.000) 256(1,1)|0.000/0.000(0/0.000)
            512(1,2)|1.000/1.000(2000/1.000) 1024(1,3)|0.000/1.000(0/0.000)
            1470(1,4)|0.000/1.000(0/0.000) 2000/10001(20.0%)
            [ 1] 512=128(2,0)|0.000/0.000(0/0.000) 256(2,1)|0.000/0.000(0/0.000)
            512(2,2)|0.000/0.000(0/0.000) 1024(2,3)|1.000/1.000(2000/1.000)
            1470(2,4)|0.000/1.000(0/0.000) 2000/10001(20.0%)
            [ 1] 1024=128(3,0)|0.000/0.000(0/0.000) 256(3,1)|0.000/0.000(0/0.000)
            512(3,2)|0.000/0.000(0/0.000) 1024(3,3)|0.000/0.000(0/0.000)
            1470(3,4)|1.000/1.000(2000/1.000) 2000/10001(20.0%)
            [ 1] 1470=128(4,0)|1.000/1.000(2000/1.000) 256(4,1)|0.000/1.000(0/0.000)
            512(4,2)|0.000/1.000(0/0.000) 1024(4,3)|0.000/1.000(0/0.000)
            1470(4,4)|0.000/1.000(0/0.000) 2000/10001(20.0%)


             
          • Spider Man

            Spider Man - 2024-12-18

            attached

             
          • Spider Man

            Spider Man - 2024-12-18

            What you actually asked for. Attached.

             
  • Spider Man

    Spider Man - 2024-12-18

    Thank you Robert. FYI, I am actively working to have Broadcom publish baselines for your software. VMware (Broadcom) currently has no standard of measurement for this diagnosis; when a customer calls in with this issue it is overlooked and not part of their troubleshooting process (looking at MTU discovery health and UDP benchmarks).

     
     
  • Spider Man

    Spider Man - 2024-12-24

    --Update--

    When DSCP marking is allowed:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\QoS]
    "Do not use NLA"=dword:00000001"

    +QoS Policy DSCP modified depending on current network conditions

    In combination with the MTU set to 68 (verified with netsh interface ipv4 show subinterface):

    The IPERF results appear good. I will test some other versions / IPERF2. Thanks,


    c:\temp>iperf3.exe -c 192.168.2.200 -u -b 0 -t 30 -i 1
    Connecting to host 192.168.2.200, port 5201
    [ 4] local 192.168.2.100 port 55241 connected to 192.168.2.200 port 5201
    [ ID] Interval Transfer Bandwidth Total Datagrams
    [ 4] 0.00-1.00 sec 69.1 MBytes 580 Mbits/sec 8850
    [ 4] 1.00-2.00 sec 69.8 MBytes 585 Mbits/sec 8940
    [ 4] 2.00-3.00 sec 75.9 MBytes 635 Mbits/sec 9710
    [ 4] 3.00-4.00 sec 75.9 MBytes 638 Mbits/sec 9720
    [ 4] 4.00-5.00 sec 75.8 MBytes 635 Mbits/sec 9700
    [ 4] 5.00-6.00 sec 74.8 MBytes 627 Mbits/sec 9580
    [ 4] 6.00-7.00 sec 74.5 MBytes 624 Mbits/sec 9530
    [ 4] 7.00-8.00 sec 72.5 MBytes 610 Mbits/sec 9280
    [ 4] 8.00-9.00 sec 72.6 MBytes 609 Mbits/sec 9290
    [ 4] 9.00-10.00 sec 75.1 MBytes 630 Mbits/sec 9610
    [ 4] 10.00-11.00 sec 73.0 MBytes 612 Mbits/sec 9350
    [ 4] 11.00-12.00 sec 75.7 MBytes 635 Mbits/sec 9690
    [ 4] 12.00-13.00 sec 75.5 MBytes 633 Mbits/sec 9660
    [ 4] 13.00-14.00 sec 74.4 MBytes 624 Mbits/sec 9520
    [ 4] 14.00-15.00 sec 71.8 MBytes 602 Mbits/sec 9190
    [ 4] 15.00-16.00 sec 77.0 MBytes 646 Mbits/sec 9850
    [ 4] 16.00-17.00 sec 77.0 MBytes 645 Mbits/sec 9850
    [ 4] 17.00-18.00 sec 74.5 MBytes 624 Mbits/sec 9530
    [ 4] 18.00-19.00 sec 72.7 MBytes 610 Mbits/sec 9300
    [ 4] 19.00-20.00 sec 73.0 MBytes 612 Mbits/sec 9340
    [ 4] 20.00-21.00 sec 77.7 MBytes 652 Mbits/sec 9950
    [ 4] 21.00-22.00 sec 77.8 MBytes 653 Mbits/sec 9960
    [ 4] 22.00-23.00 sec 73.2 MBytes 614 Mbits/sec 9370
    [ 4] 23.00-24.00 sec 76.3 MBytes 641 Mbits/sec 9770
    [ 4] 24.00-25.00 sec 77.4 MBytes 649 Mbits/sec 9910
    [ 4] 25.00-26.00 sec 77.3 MBytes 647 Mbits/sec 9890
    [ 4] 26.00-27.00 sec 78.3 MBytes 658 Mbits/sec 10020
    [ 4] 27.00-28.00 sec 77.4 MBytes 649 Mbits/sec 9910
    [ 4] 28.00-29.00 sec 78.1 MBytes 656 Mbits/sec 10000
    [ 4] 29.00-30.00 sec 79.0 MBytes 663 Mbits/sec 10110


    [ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
    [ 4] 0.00-30.00 sec 2.20 GBytes 630 Mbits/sec 0.000 ms 0/0 (0%)
    [ 4] Sent 0 datagrams

    iperf Done.

     

    Last edit: Spider Man 2024-12-24
  • Spider Man

    Spider Man - 2024-12-24

    Set DSCP to 63 in the example above.

    Interestingly, if you don't allow DSCP marking (registry entry above), then modifying the UDP DSCP value, even setting it to 0, results in bad IPERF results. The combination of all three changes is needed to clean up the IPERF results.
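For anyone reproducing this: DSCP occupies the top six bits of the IP TOS byte, so DSCP 63 corresponds to TOS 0xFC and EF (46) to 0xB8. A minimal sketch of the mapping, plus setting it on a socket directly (iperf2's -S/--tos option also takes the full TOS byte):

```python
import socket

def dscp_to_tos(dscp: int) -> int:
    """DSCP is a 6-bit value shifted into the top bits of the TOS byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2

# DSCP 63 (as in the test above) -> TOS 0xFC; EF (46) -> 0xB8.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(46))
sock.close()
```

Computing the TOS byte this way avoids the easy off-by-a-shift mistake when comparing registry/QoS-policy DSCP values with what shows up in a packet capture.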

     
  • Spider Man

    Spider Man - 2025-01-15

    No progress with Broadcom; I believe at this point my inquiries are being blocked. They refuse to share any baselines or results from their labs to compare against.

    I owe you tests with different DSCP settings against different environmental configurations; this will take me some time still.

    Any progress on a compiled build with the Markov chain? Thank you

     
    • Robert McMahon

      Robert McMahon - 2025-01-15

      Markov chain support is in master for both the send and receive side. It hasn't had much testing and the CLI syntax is obtuse, but it seems to work.

      Tickets are here

       
      • Robert

        Robert - 2025-01-15

        Hi,

        Is there a module that eases using iperf2 within Python scripts? I
        found pyperf2 but am not sure if this is it ...

        https://github.com/jinjamator/pyperf2

        Ideally there would be a full iperf2 module rather than calling an external
        binary, but I am not sure if there is such work going on somewhere
        (possibly using a Cython module)?

        Just trying to write some scripts and interactively call iperf2 from it.

        For analogy, I found that iperf3 has some Python wrappers:

        https://iperf3-python.readthedocs.io/en/latest/installation.html#iperf3-python-wrapper

        Many thx,
        Robert

         
        • Robert McMahon

          Robert McMahon - 2025-01-15

          The Python support is in the flows directory. It needs some updates.

           
      • Robert

        Robert - 2025-01-15

        Hi,

        I have a case where on a LAN (say a /24) there are N external gateways.

        Say .1 goes to ISP1, .254 to ISP2, .253 to ISP3 etc ... Each gw does NAT.

        Obviously measurements using each ISP exit provide different
        path performance and characteristics.

        Question:

        Is there any way, from the iperf2 CLI level, to set the gateway address and
        override what the OS would by default use globally for all applications at
        a given time?

        Many thx,
        Robert

         
        • Robert McMahon

          Robert McMahon - 2025-01-15

          Sounds like you're asking for policy routing support. Is this right?

           
          • Robert

            Robert - 2025-01-15

            Yes, but limited to the local exit from Linux via one interface to a
            selected gateway on the LAN.

            I am not asking for PBR in general (beyond the local gw).

            Many thx,
            R.


             
            • Robert McMahon

              Robert McMahon - 2025-01-15

              One can set the interface with

              -c <ip>%devname, e.g. iperf -c 192.168.1.25%eth2

              The gateway comes from a route entry. One can use a policy route combined with a -B <sourceip> to get at that.

              iperf itself doesn't have route information nor a way to directly affect the policy route. A script could set it ahead of the iperf command.

              There is the iproute2 code, but I probably don't want to pull that into iperf 2; rather, have users affect routing themselves with ip route or equivalent.

               

              Last edit: Robert McMahon 2025-01-15
              • Robert

                Robert - 2025-01-15

                Well, I can't globally adjust the default route in a round-robin fashion
                across all gateways on the active production machines. I can do it when I
                am sure it is going to benefit users.

                Having said the above, the policy route with -B <src_ip> is an interesting
                idea! I can overload the LAN interface with a bunch of aliases (one per gw,
                and PBR that src to a specific ISP router). Let me play around with that
                and see if this solves the problem.

                Many thx !!!
                Robert


                 
