MPD drops IP fragments when they arrive out of order

  • valeryprok

    valeryprok - 2014-02-20

    I have two FreeBSD nodes with external addresses A.B.C.D and E.F.G.H and internal (private) addresses, connected via the Internet:

      Side 1                           Side 2
    (internal, A.B.C.D)  <--  (E.F.G.H, internal)

    Side 1 is accepting connection from Side 2.
    Side 1 mpd.conf:

            create bundle template B3
            set iface idle 1800
            set iface enable tcpmssfix
            set ipcp yes vjcomp
            set iface up-script /usr/local/etc/mpd5/
            set iface down-script /usr/local/etc/mpd5/
            set ipcp ranges
            set ipcp dns
            set bundle enable compression
            set ccp yes mppc
            set mppc yes e40
            set mppc yes e128
            set mppc yes stateless
            create link template L3 l2tp
            set link action bundle B3
            set link yes acfcomp protocomp
            set link no pap chap eap
            set link enable chap
            set link keep-alive 10 60
            set link mtu 1460
            set link enable incoming

    Side 2 mpd.conf:

            create bundle static B1
            set bundle enable compression
            set bundle enable encryption
            set iface route
            set iface disable on-demand
            set iface idle 0
            set iface enable tcpmssfix
            set ipcp yes vjcomp
            create link static L1 l2tp
            set link action bundle B1
            set auth authname ub
            set link max-redial 0
            set link mtu 1460
            set link keep-alive 20 75
            set l2tp peer
            set link no pap
            set link yes chap
            set link enable no-orig-auth
            set link keep-alive 10 75
            set link disable incoming

    Everything works fine, i.e. when I run “ping” on Side 2 I get: “64 bytes from icmp_seq=0 ttl=64 time=12.193 ms”.

    But if I run “ping -s 1500” on Side 2 there may be packet loss (usually about 50%). So big packets (not only ICMP) that need to be fragmented are SOMETIMES dropped.

    Here is tcpdump output from Side 2:


    11:33:43.022127 IP (tos 0x0, ttl 64, id 26332, offset 0, flags [none], proto UDP (17), length 1491)
        E.F.G.H.17866 > A.B.C.D.1701:  l2tp:[S](65444/45127)Ns=2663,Nr=2133 {IP (tos 0x0, ttl 64, id 26331, offset 0, flags [+], proto ICMP (1), length 1452) > ICMP echo request, id 59412, seq 20, length 1432}


    11:33:43.823293 IP (tos 0x0, ttl 64, id 26340, offset 0, flags [none], proto UDP (17), length 135)
        E.F.G.H.17866 > A.B.C.D.1701:  l2tp:[S](65444/45127)Ns=2666,Nr=2133 {IP (tos 0x0, ttl 64, id 26338, offset 1432, flags [none], proto ICMP (1), length 96) > icmp}

    As we can see, the big ICMP packet is fragmented and wrapped in two L2TP packets: PACKET1 and PACKET2.
    Everything is fine if Side 1 receives these packets in the correct order (PACKET1, PACKET2).
    But if for any reason (e.g. provider delays) the order is broken, i.e. Side 1 receives them as (PACKET2, PACKET1), there is no response to the ping. It seems that MPD simply drops PACKET1 in this case.
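The strict-ordering drop can be illustrated with a toy receiver that, like an L2TP/PPTP data channel, only accepts packets whose sequence number is at least the next expected value. This is a simplified sketch, not mpd's actual netgraph code; the sequence numbers and payload names are invented for the example:

```python
# Toy model of a tunnel receiver enforcing strict data sequence numbers (Ns).
# Packets arriving with an Ns lower than the next expected value are treated
# as late/duplicate and dropped.

def deliver(packets):
    """Feed (ns, payload) tuples in arrival order; return accepted payloads."""
    expected = 0
    accepted = []
    for ns, payload in packets:
        if ns < expected:
            continue          # late / out-of-order packet: dropped
        expected = ns + 1     # jump forward, skipping over any gap
        accepted.append(payload)
    return accepted

# In-order arrival: both fragments of the ICMP echo are delivered.
print(deliver([(0, "FRAG1"), (1, "FRAG2")]))   # ['FRAG1', 'FRAG2']

# Reordered arrival: FRAG2 (Ns=1) advances "expected" past FRAG1 (Ns=0),
# so FRAG1 is dropped and the ICMP packet can never be reassembled.
print(deliver([(1, "FRAG2"), (0, "FRAG1")]))   # ['FRAG2']
```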
    This behavior can be simulated by adding the following ipfw rules on Side 2 (the dummynet pipe delays only the large UDP packets carrying the first fragment, so the small second fragment overtakes them):

            ipfw pipe 1 config delay 200
            ipfw add XXXX pipe 1 udp from me to any 1701 out iplen 1400-2000

    The same problem also arises with PPTP (GRE) tunnels.
    MPD version 5.7 (also tried on 5.6)
    FreeBSD 9.2 (also tried on 9.1, 8.3)

    So, is there any way to avoid losing fragmented packets when MPD receives the fragments out of order?

    Last edit: valeryprok 2014-02-20
  • Dmitry S. Luhtionov

    mpd itself does not receive or send data packets; all packets flow through netgraph nodes. You should enable netgraph debugging and find out what goes wrong.

  • Alexander Motin

    Alexander Motin - 2014-02-21

    The PPP stack relies on correct packet ordering. That is why PPTP, L2TP and other tunnel transports have sequence numbers and drop packets received out of order. You can disable the L2TP dataseq option in mpd to make it ignore packet order, but that is at your own risk. This is especially important if you are using any form of stateful traffic compression.
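    For reference, a minimal mpd.conf fragment applying this workaround (assuming mpd5's usual `set l2tp disable <option>` syntax for the dataseq option mentioned above; add it to the L2TP link section on the side that receives the reordered packets, bearing in mind the stated risk):

            set l2tp disable dataseq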

    • valeryprok

      valeryprok - 2014-02-21

      Thank you!
      This option made my case work for L2TP.
      But for PPTP there is no such option, is there?

