
#3 NHRP registration with a Cisco router does not work

Milestone: v1.0_(example) · Status: open · Owner: None · Priority: 5 · Updated: 2025-03-20 · Created: 2015-11-03 · Creator: Cipher · Private: No

Hi,

I am working on a "DMVPN-like" configuration (GRE/NHRP/IPsec) with the spokes being OpenWRT routers and the hub a Cisco router. The spokes run OpenWRT with OpenNHRP and strongSwan. They do not seem to send the registration request to the hub when the tunnel comes up. The only way I got it to work was by adding a static mapping on my hub (pointing to the spoke's NBMA address). Are there any known bugs or incompatibilities with strongSwan?

Here is what my opennhrp.conf contains:

interface gre-mygre
map 10.254.0.0/24 <Hub_public_IP> register cisco
multicast nhs
multicast 10.254.0.1
shortcut
redirect
non-caching

Config on the hub side:
interface Tunnel1
ip address 10.254.0.1 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map multicast dynamic
ip nhrp network-id 1
ip nhrp registration no-unique
tunnel source <Hub_public_IP>
tunnel mode gre multipoint
tunnel protection ipsec profile VPNProfile1
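
For reference, the registration state on the Cisco hub side can usually be checked with standard IOS show commands such as the following (exact output varies by IOS version):

show dmvpn
show ip nhrp
show crypto session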

Not sure why, but when I run an "opennhrpctl update nbma" it does not appear to work (it shows 0 entries affected):
~# opennhrpctl show
[...]
Interface: gre-mygre
Type: static
Protocol-Address: 10.254.0.0/24
NBMA-Address: <Hub_public_IP>

~# opennhrpctl update nbma <New_Hub_public_IP> 10.254.0.0
Status: ok
Entries-Affected: 0

Any insight would be appreciated.

The version is opennhrp 0.12.3-2.

Cipher

Discussion

  • G Man

    G Man - 2016-04-28

    I tried following the instructions on a bunch of GCE CentOS 7 VMs on separate networks. I am using your patched builds of strongSwan and Quagga. I was able to trigger a dynamic IKE tunnel request through nhrpd running on a spoke, and the IKE tunnel connects. But the CHILD_SA for the dynamic GRE tunnel doesn't get completed on the hub, which responds with TS_UNACCEPTABLE. On the hub the relevant log is:

    Apr 28 01:28:05 instance-3 charon-custom[5985]: 09[CFG] looking for a child config for 104.196.55.145/32[gre] === 10.30.0.2/32[gre]
    Apr 28 01:28:05 instance-3 charon-custom[5985]: 09[CFG] proposing traffic selectors for us:
    Apr 28 01:28:05 instance-3 charon-custom[5985]: 09[CFG] 10.20.0.2/32[gre]
    Apr 28 01:28:05 instance-3 charon-custom[5985]: 09[CFG] proposing traffic selectors for other:
    Apr 28 01:28:05 instance-3 charon-custom[5985]: 09[CFG] 104.154.65.243/32[gre]
    Apr 28 01:28:05 instance-3 charon-custom[5985]: 09[IKE] traffic selectors 104.196.55.145/32[gre] === 10.30.0.2/32[gre] inacceptable

    The GRE subnet is 172.17.0.0/16. Output of 'ip l show' on the hub:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 42:01:0a:14:00:02 brd ff:ff:ff:ff:ff:ff
    3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0
    4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    9: tun0@NONE: <MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN mode DEFAULT
    link/gre 0.0.0.0 brd 0.0.0.0

    ipsec statusall

    Status of IKE charon daemon (strongSwan 5.4.0, Linux 3.10.0-327.13.1.el7.x86_64, x86_64):
    uptime: 64 minutes, since Apr 28 02:23:54 2016
    malloc: sbrk 2408448, mmap 0, used 353376, free 2055072
    worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 5
    loaded plugins: charon aes attr cmac des dnskey fips-prf hmac md5 pem pgp pkcs12 pkcs7 pkcs8 rc2 resolve sha1 sha2 sshkey stroke xauth-generic xcbc pkcs1 x509 revocation constraints
    pubkey openssl random nonce kernel-netlink socket-default updown vici
    Listening IP addresses:
    10.20.0.2
    172.16.20.1
    Connections:
    dmvpn: %any...%any IKEv2, dpddelay=15s
    dmvpn: local: [hub1] uses pre-shared key authentication
    dmvpn: remote: uses pre-shared key authentication
    dmvpn: child: dynamic[gre] === dynamic[gre] TUNNEL, dpdaction=clear
    Security Associations (1 up, 0 connecting):
    dmvpn[1]: ESTABLISHED 64 minutes ago, 10.20.0.2[hub1]...104.154.65.243[spoke2]
    dmvpn[1]: IKEv2 SPIs: df50cfdbc050c585_i d0e8348fa34a1ba0_r*, rekeying in 2 hours, pre-shared key reauthentication in 10 hours
    dmvpn[1]: IKE proposal: AES_CBC_256/HMAC_SHA2_512_256/PRF_HMAC_SHA2_512/ECP_384

    It could be an issue with GCE networking. I am stuck at this point: the child GRE tunnel doesn't get created, and I can't easily find relevant log messages without digging into the internals. Could you help me with some pointers? Also, how would the configs change if both hub and spoke are NATed? I think that may help resolve this.

    Please let me know if you need any more info. Appreciate it.

    Thanks.

     
  • Timo Teras

    Timo Teras - 2016-04-28

    Looks like something is wrong in the strongSwan connection configuration.

    My config looks like this (swanctl format):

    connections {
            dmvpn {
                    version = 2
                    pull = no
                    mobike = no
                    dpd_delay = 15
                    dpd_timeout = 30
                    fragmentation = yes
                    unique = replace
                    rekey_time = 4h
                    reauth_time = 13h
                    proposals = aes256-sha512-ecp384
                    local {
                            certs = dmvpn-node-cert.pem
                            auth = pubkey
                    }
                    remote {
                            cacerts = dmvpn-ca.pem
                            revocation = relaxed
                            auth = pubkey
                    }
                    children {
                            dmvpn {
                                    esp_proposals = aes256-sha512-ecp384
                                    local_ts = dynamic[gre]
                                    remote_ts = dynamic[gre]
                                    inactivity = 90m
                                    rekey_time = 100m
                                    mode = transport
                                    dpd_action = clear
                                    reqid = 1
                            }
                    }
            }
    }
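
    A quick usage note (my assumption of standard strongSwan tooling, nothing specific to the DMVPN patches): after editing swanctl.conf, the connection definitions are normally (re)loaded with:

    swanctl --load-all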
    
     
  • G Man

    G Man - 2016-04-28

    Mine is pretty similar to the original, but using PSK. Note the change of mode to tunnel, which I thought was required for NATed environments.

    connections {
            dmvpn {
                    version = 2
                    pull = no
                    mobike = no
                    dpd_delay = 15
                    dpd_timeout = 30
                    fragmentation = yes
                    unique = replace
                    rekey_time = 4h
                    reauth_time = 13h
                    proposals = aes256-sha512-ecp384
                    local {
                            auth = psk
                            id = hub1
                    }
                    remote {
                            auth = psk
                    }
                    children {
                            dmvpn {
                                    esp_proposals = aes256-sha512-ecp384
                                    local_ts = dynamic[gre]
                                    remote_ts = dynamic[gre]
                                    inactivity = 90m
                                    rekey_time = 100m
                                    mode = tunnel
                                    dpd_action = clear
                                    reqid = 1
                            }
                    }
            }
    }

     

    Last edit: G Man 2016-04-28
    • Timo Teras

      Timo Teras - 2016-04-28

      That's the reason then. The tunnel/transport setting needs to match on both ends, and in DMVPN it is fixed to transport mode. It also works perfectly well with NAT. Tunnel mode should not be used at all.

       
  • G Man

    G Man - 2016-04-28

    Yes, you are right that transport mode should work with NAT as well. But certain environments block the AH/ESP protocols; for example, Google Cloud doesn't allow them by default. Hence I was using tunnel mode. A few queries for clarity:

    1. Is the mode hardcoded in the patches, or is it that the DMVPN architecture just cannot work in tunnel mode?
    2. I am running "nhrpd -d" on the hub. Do I need to pass a special flag to run it as a server?
    3. Why do we need to list the IP of each spoke neighbor in the hub's BGP config, e.g. neighbor %Spoke1_GRE_IP%? I thought that was auto-discovered in DMVPN Phase 3 with the help of NHRP.

    I will try the setup in transport mode. Thanks for the help.

     
    • Timo Teras

      Timo Teras - 2016-04-28

      When NAT is detected, UDP encapsulation is negotiated automatically by IPsec. You can also force UDP encapsulation in the configuration if needed (see the sketch after the numbered answers below).

      1. DMVPN architecture dictates transport mode.
      2. No special flags needed.
      3. You can achieve that with an NHRP event script - I'm due to push an example in Lua soon. Alternatively, the Cumulus tree has a quagga/bgpd patch pending that implements a listen-subnet stanza, allowing BGP to accept incoming connections from a whole subnet prefix.
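
      For reference, forcing UDP encapsulation in the swanctl-style config is a single extra option in the connection section. This is standard strongSwan syntax rather than anything specific to the DMVPN patches:

      connections {
              dmvpn {
                      encap = yes
                      # ...rest of the connection as shown earlier...
              }
      }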
       
  • G Man

    G Man - 2016-04-29

    Alright, the tunnel was established after switching to transport mode.

    Status of IKE charon daemon (strongSwan 5.4.0, Linux 3.10.0-327.13.1.el7.x86_64, x86_64):
    uptime: 68 minutes, since Apr 29 03:28:07 2016
    malloc: sbrk 2424832, mmap 0, used 363488, free 2061344
    worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 13
    loaded plugins: charon aes attr cmac des dnskey fips-prf hmac md5 pem pgp pkcs12 pkcs7 pkcs8 rc2 resolve sha1 sha2 sshkey stroke xauth-generic xcbc pkcs1 x509 revocation constraints pubkey openssl random nonce kernel-netlink socket-default updown vici
    Listening IP addresses:
    10.30.0.2
    172.16.30.1
    Connections:
    dmvpn: %any...%any IKEv2, dpddelay=15s
    dmvpn: local: [spoke2] uses pre-shared key authentication
    dmvpn: remote: uses pre-shared key authentication
    dmvpn: child: dynamic[gre] === dynamic[gre] TRANSPORT, dpdaction=clear
    Security Associations (1 up, 0 connecting):
    dmvpn[4]: ESTABLISHED 34 minutes ago, 10.30.0.2[spoke2]...104.196.55.145[hub1]
    dmvpn[4]: IKEv2 SPIs: 0c9c425ba17c9ec4_i* 5cb62a43e1b9965e_r, rekeying in 2 hours, pre-shared key reauthentication in 10 hours
    dmvpn[4]: IKE proposal: AES_CBC_256/HMAC_SHA2_512_256/PRF_HMAC_SHA2_512/ECP_384
    dmvpn{3}: INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: c3b2f963_i cb87bd5f_o
    dmvpn{3}: AES_CBC_256/HMAC_SHA2_512_256, 0 bytes_i, 10400 bytes_o (104 pkts, 16s ago), rekeying in 62 minutes
    dmvpn{3}: 10.30.0.2/32[gre] === 104.196.55.145/32[gre]

    Please note the private IP instead of the public IP.

    Now, I am unable to ping the tunnel endpoint.

    The spoke has this GRE interface:

    DEVICE=tun0
    BOOTPROTO=none
    ONBOOT=yes
    TYPE=GRE
    MY_INNER_IPADDR=172.16.30.1
    KEY=4328974
    TTL=64

    The hub has this GRE interface:

    DEVICE=tun0
    BOOTPROTO=none
    ONBOOT=yes
    TYPE=GRE
    MY_INNER_IPADDR=172.16.0.1
    KEY=4328974
    TTL=64
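
    (For clarity, I'm assuming these ifcfg fragments end up creating a multipoint GRE device, roughly the iproute2 equivalent of the following on the spoke, with 172.16.0.1 on the hub instead. The exact commands depend on the ifcfg-tun helper scripts, so treat this only as a sketch.)

    ip tunnel add tun0 mode gre key 4328974 ttl 64
    ip addr add 172.16.30.1/32 dev tun0
    ip link set tun0 up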

    After the tunnel is established, I can't ping 172.16.30.1 from 172.16.0.1. Manual static routes don't help either.

    Consequently, NHRP registration doesn't seem to work. Nothing much shows up on the hub. Relevant log messages on the spoke:

    2016/04/29 04:49:45 NHRP: NHS: Register 172.16.30.1 -> 172.16.30.1 (timeout 64)
    2016/04/29 04:49:45 NHRP: Send Registration-Request(3) 172.16.30.1 -> 172.16.30.1
    2016/04/29 04:49:45 NHRP: PACKET: Send 10.30.0.2 -> 104.196.55.145
    2016/04/29 04:49:45 NHRP: Netlink-log: Received msg_type 1024, msg_flags 0
    2016/04/29 04:49:45 NHRP: Netlink-log: Received msg_type 1024, msg_flags 0
    2016/04/29 04:49:45 NHRP: Netlink-log: Received msg_type 1024, msg_flags 0
    2016/04/29 04:49:45 NHRP: Netlink-log: Received msg_type 1024, msg_flags 0
    2016/04/29 04:49:45 NHRP: Netlink-log: Received msg_type 1024, msg_flags 0
    2016/04/29 04:49:45 NHRP: Netlink-log: Received msg_type 3, msg_flags 0

    Spoke config:

    Current configuration:
    !
    log syslog
    log stdout
    nhrp nflog-group 1
    !
    debug nhrp all
    !
    interface eth0
    link-detect
    !
    interface gre0
    link-detect
    !
    interface gre1
    link-detect
    !
    interface gretap0
    link-detect
    !
    interface lo
    link-detect
    !
    interface tun0
    ip nhrp network-id 1
    ip nhrp nhs dynamic nbma 104.196.55.145
    ip nhrp registration no-unique
    ip nhrp shortcut
    no link-detect
    tunnel protection vici profile dmvpn
    tunnel source eth0
    !
    router bgp 65000
    bgp router-id 172.16.30.1
    network 10.30.0.0/24
    neighbor spokes-ibgp peer-group
    neighbor spokes-ibgp remote-as 65000
    neighbor spokes-ibgp disable-connected-check
    neighbor spokes-ibgp advertisement-interval 1
    neighbor spokes-ibgp next-hop-self
    neighbor spokes-ibgp soft-reconfiguration inbound
    neighbor 172.16.0.1 peer-group spokes-ibgp
    exit
    !
    line vty
    !
    end

    Hub config:

    Current configuration:
    !
    log syslog
    nhrp nflog-group 1
    !
    debug nhrp all
    !
    interface eth0
    link-detect
    !
    interface gre0
    link-detect
    !
    interface gretap0
    link-detect
    !
    interface lo
    link-detect
    !
    interface tun0
    ip nhrp network-id 1
    ip nhrp redirect
    ip nhrp registration no-unique
    ip nhrp shortcut
    no link-detect
    tunnel protection vici profile dmvpn
    tunnel source eth0
    !
    router bgp 65000
    bgp router-id 172.16.0.1
    bgp deterministic-med
    network 172.16.0.0/16
    redistribute nhrp
    neighbor spokes-ibgp peer-group
    neighbor spokes-ibgp remote-as 65000
    neighbor spokes-ibgp disable-connected-check
    neighbor spokes-ibgp advertisement-interval 1
    neighbor spokes-ibgp route-reflector-client
    neighbor spokes-ibgp next-hop-self all
    neighbor spokes-ibgp soft-reconfiguration inbound
    neighbor 172.16.30.1 peer-group spokes-ibgp
    exit
    !
    line vty
    !
    end

    ifconfig on the spoke:

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1460
    inet 10.30.0.2 netmask 255.255.255.255 broadcast 10.30.0.2
    inet6 fe80::4001:aff:fe1e:2 prefixlen 64 scopeid 0x20<link>
    ether 42:01:0a:1e:00:02 txqueuelen 1000 (Ethernet)
    RX packets 1745 bytes 256717 (250.7 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 1664 bytes 221269 (216.0 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 0 (Local Loopback)
    RX packets 22 bytes 2156 (2.1 KiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 22 bytes 2156 (2.1 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    tun0: flags=65<UP,RUNNING> mtu 1432
    inet 172.16.30.1 netmask 255.255.255.255
    unspec 00-00-00-00-00-00-F0-B0-00-00-00-00-00-00-00-00 txqueuelen 0 (UNSPEC)
    RX packets 0 bytes 0 (0.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 31 bytes 3100 (3.0 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    What am I missing here? Appreciate any pointers.

    Thank you for your time.

     
    • Timo Teras

      Timo Teras - 2016-04-29

      It is normal in transport mode for the local node's IP to show as the private address; the other node sees the NATed address. Despite the seeming mismatch of IPs, things work because IKE supports NAT traversal, and it has been activated properly here, as shown by the "ESP in UDP" note for the SA.

      The configs look OK to me. If the hub does not print any log message, it sounds like the packets are being dropped by a host firewall, an intermediate firewall, or some other issue.

      Things to check (a few example commands are sketched after this list):
      • It seems you are on the distribution kernel. The 3.10.x series had multiple issues, as listed in README.kernel; they may or may not be fixed in your distribution.
      • Use tcpdump on the spoke to see whether UDP packets matching the NHRP registration packets are sent out to the internet.
      • Use tcpdump on the hub to see whether matching UDP packets arrive on the internet interface for each NHRP registration attempt. Do you get the decrypted NHRP registration packets on the GRE interface?
      • Use "ip xfrm pol" and "ip xfrm state" to check that strongSwan configured the kernel IPsec state properly.
      • In the Quagga vtysh, use the diagnostic commands listed in README.nhrpd to see what is happening, on both hub and spoke.
      • You probably need to turn off rp_filter (=0), but this is more related to bgp/nhrp than the problem at hand.
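
      A few concrete command examples for the checks above (eth0, tun0 and the hub IP placeholder follow the configs posted earlier; adjust as needed):

      # On the spoke: ESP-in-UDP traffic toward the hub (NAT-T uses UDP port 4500)
      tcpdump -ni eth0 'host <Hub_public_IP> and udp port 4500'
      # On the hub: decrypted NHRP registrations arriving on the GRE interface
      tcpdump -ni tun0
      # Kernel IPsec policies and SAs installed by strongSwan
      ip xfrm policy
      ip xfrm state
      # Reverse-path filtering off (mainly relevant later for bgp/nhrp)
      sysctl -w net.ipv4.conf.all.rp_filter=0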

       
  • G Man

    G Man - 2016-05-01

    Re-did everything. Still can't get it to work. Haven't checked 3.10 kernel issues yet. I think core issue is that source (spoke) is unable to ping target "mgre" tunnel endpont "10.90.90.1". Intermediate firewalls shouldn't affect as tunnel is supposedly up. So, kernel maybe dropping packets somewhere at source OR strongswan may not be properly setting up ipsec on my distribution. No source firewall issues as it is all disabled.

    GRE Endpoints - src 10.90.90.3, target 10.90.90.1

    ipsec statusall
    dmvpn{1}: 10.13.0.6/32[gre] === 130.211.170.201/32[gre]

    ip xfrm policy
    src 130.211.170.201/32 dst 10.13.0.6/32 proto gre
    dir in priority 2818 ptype main
    tmpl src 0.0.0.0 dst 0.0.0.0
    proto comp reqid 1 mode transport
    level use
    tmpl src 0.0.0.0 dst 0.0.0.0
    proto esp reqid 1 mode transport
    src 10.13.0.6/32 dst 130.211.170.201/32 proto gre
    dir out priority 2818 ptype main
    tmpl src 0.0.0.0 dst 0.0.0.0
    proto comp reqid 1 mode transport
    tmpl src 0.0.0.0 dst 0.0.0.0
    proto esp reqid 1 mode transport

    ip xfrm state
    src 10.13.0.6 dst 130.211.170.201
    proto esp spi 0xc3e7751e reqid 1 mode transport
    replay-window 32
    auth-trunc hmac(sha512) 0xc173f4d1aec3e625922c783b7420ff1e2e6d94c58f906a680a9c5ff83fe8918861094686fad961489e80123634a87919767b4d88f518203b96b63e4fc5c6cd22 256
    enc cbc(aes) 0x7a0bb19c8d8d88c2f3ac319608d669c53f002f28983d9e05134424115d9f48be
    encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
    sel src 10.13.0.6/32 dst 130.211.170.201/32
    src 10.13.0.6 dst 130.211.170.201
    proto comp spi 0x0000c864 reqid 1 mode transport
    replay-window 0
    comp deflate
    sel src 10.13.0.6/32 dst 130.211.170.201/32
    src 130.211.170.201 dst 10.13.0.6
    proto esp spi 0xcf7bfb23 reqid 1 mode transport
    replay-window 32
    auth-trunc hmac(sha512) 0x7a55b637f777f44c74fc83dbcaf4deaca311d44b7dae8da22c08e4af88874c77a8a861cb91df1c38853905612204c3f15ddd88d075e44a625f8ea92855b98abd 256
    enc cbc(aes) 0xb4ca906d3f0fa6e9096869d102aba26ef3838d21a56b1711389bbf5d87dccd18
    encap type espinudp sport 4500 dport 4500 addr 0.0.0.0
    sel src 130.211.170.201/32 dst 10.13.0.6/32
    src 130.211.170.201 dst 10.13.0.6
    proto comp spi 0x0000b3e7 reqid 1 mode transport
    replay-window 0
    comp deflate
    sel src 130.211.170.201/32 dst 10.13.0.6/32

     
    • Timo Teras

      Timo Teras - 2016-05-02

      Well, the IPsec side all looks OK. Do note that ping will not work until nhrpd has been able to register with the hub. Something is blocking the NHRP registration packets.

      Do you have a firewall enabled on the DMVPN nodes? On the internet interface you need to allow IKE, ESP and GRE (inside IPsec); on the GRE interface you need to allow NHRP + ICMP + whatever else you want, or just allow everything on GRE (rough sketch below).
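
      A rough sketch only, assuming iptables with eth0 as the internet interface and tun0 as the GRE interface as in your configs; adapt to whatever firewall tooling you actually use:

      # Internet-facing interface: IKE, NAT-T and ESP
      iptables -A INPUT -i eth0 -p udp -m multiport --dports 500,4500 -j ACCEPT
      iptables -A INPUT -i eth0 -p esp -j ACCEPT
      # GRE arrives on the same interface after IPsec decapsulation
      iptables -A INPUT -i eth0 -p gre -j ACCEPT
      # GRE/mGRE interface: NHRP, ICMP and routing traffic (or simply everything)
      iptables -A INPUT -i tun0 -j ACCEPT
      iptables -A FORWARD -i tun0 -j ACCEPT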

      You might want to start by disabling firewalling on the VPN nodes to see whether that fixes things, and then add the firewall rules back. If there is no firewall, it is starting to sound like a broken kernel.

       
  • G Man

    G Man - 2016-05-02

    Yes, the firewall is disabled on the edge VMs. There is one intermediate firewall on Google Cloud; it allows UDP 500/4500 and ICMP to enable the tunnel. No GRE/NHRP is allowed.

    I am presuming NHRP registration requests will flow via the tunnel and not out of band. If so, for NHRP registration packets to flow over IPsec+mGRE, the GRE endpoints must at least be ping-able over the dynamic IPsec tunnel (before any routing).

    Please correct me if my understanding is wrong. Hence, I am focused on getting the mGRE working; after that, NHRP + routes should simply start flowing.

    The issue seems to be that 10.90.90.1 & 10.90.90.3 are not "auto-discovered" as endpoint gateways in the IPsec state. Is that normal?

    Thank you.

     
    • Timo Teras

      Timo Teras - 2016-05-02

      Yes and no. Yes, the NHRP registration goes inside the GRE tunnel and is thus inside IPsec. However, ping will not work before NHRP is up. Consider NHRP a replacement for ARP: nhrpd uses sendto/recvfrom on the GRE device with the public IP addresses to send NHRP control messages before any other traffic is possible. Also, NHRP runs directly over GRE, not inside the inner IP layer, so pingability of the remote IP is a bad test. You really need to get the NHRP registration working first. Once the NHRP registration completes, the NHRP daemon populates the kernel routing and ARP tables so that IP connectivity comes up and ping starts to work.

      Use tcpdump. You can observe the NHRP registration requests directly on the GRE device; they then go out the internet interface first as GRE packets, and finally as IPsec+GRE packets. You should be able to see the packets going out on the spoke and coming in on the hub.

       
  • G Man

    G Man - 2016-05-02

    tcpdump at spoke:

    tcpdump -s 0 -v -n proto gre

    17:35:51.470103 IP (tos 0x0, ttl 64, id 52505, offset 0, flags [DF], proto GRE (47), length 120)
    107.170.70.218 > 104.154.105.181: GREv0, Flags [key present], key=0x73adb, length 100
    gre-proto-0x2001

    tcpdump -s 0 -v -n -i tun0

    17:35:51.470059 Out ethertype Unknown (0x2001), length 108:
    0x0000: 0001 0800 0000 0000 0000 005c 87bc 0034 ..............4
    0x0010: 0103 0400 0404 0002 0000 0001 0a0d 0006 ................
    0x0020: 0a5a 5a03 0a5a 5a03 00ff 0000 0000 1c20 .ZZ..ZZ.........
    0x0030: 0000 0000 8004 0000 8005 0000 8003 0000 ................
    0x0040: 0009 0014 0020 0000 0000 0000 0400 0400 ................
    0x0050: 0a0d 0006 0a5a 5a03 8000 0000 0000 0000 .....ZZ.........
    0x0060: 0000 0000 0000 0000 0000 0000 ............

    Nothing shows up at the hub.

     
  • G Man

    G Man - 2016-05-03

    I think my kernel doesn't like sending an NHRP packet on a tunnel created to transmit GRE. "Out ethertype Unknown (0x2001)" on tun0. That could be the issue.

     
    • Timo Teras

      Timo Teras - 2016-05-03

      The "Unknown (0x2001)" is just tcpdump saying it does not know how to decode nhrp packets. Taking capture to file, and opening it with wireshark would show it just ok.

      tcpdumping 'tun0' shows the registration is sent. The first tcpdump, 'proto gre', shows it is also sent out encapsulated. Can you also check whether it goes out UDP-encapsulated, with tcpdump "host <hub-ip> and udp"? After that, check whether the UDP packet arrives at the hub.

      But it's starting to look like a broken kernel. Check README.kernel: it lists multiple issues for 3.10.y kernels, such as:

      • sendto() was broken causing opennhrp not work at all
        Broken since 3.10-rc1
        commit "GRE: Refactor GRE tunneling code."
        Fixed in 3.10.12, 3.11-rc6
        commit "ip_gre: fix ipgre_header to return correct offset"

      And the tcpdump of "proto gre" you posted seems to show incorrect public IPs, which would indicate that the above bug is present in your kernel. I recommend upgrading to a known-good kernel, e.g. the 3.12 or 3.14 series. IIRC, the 4.1.y and 4.4.y series are also good.

       
  • G Man

    G Man - 2016-05-10

    I was able to send a GRE packet to the hub. For every NHRP registration request from the spoke:

    strace on the hub:
    recvmsg(7, {msg_name(16)={sa_family=AF_PACKET, proto=0x2001, if10, pkttype=PACKET_HOST, addr(4)={778, 08004500}, msg_iov(1)=[{"\0\1\10\0\0\0\0\0\0\0\0\u\213\0004\1\3\4\0\4\4\0\2\0\0\0\1\n)\t\4"..., 1500}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT) = 92

    decrypted tcpdump on the hub:
    00:17:45.079614 IP (tos 0x0, ttl 64, id 50535, offset 0, flags [DF], proto GRE (47), length 120)
    104.154.35.55 > 10.41.8.4: GREv0, Flags [key present], key=0x73adb, length 100
    gre-proto-0x2001

    nhrpd log on the hub:
    nhrpd[914]: 2016/05/10 00:38:05 NHRP: PACKET: Recv 8.0.69.0 -> 10.41.8.4
    nhrpd[914]: 2016/05/10 00:38:05 NHRP: From 8.0.69.0: error: peer not online

    What is "8.0.69.0"? Should it be "104.154.35.55" - spoke's public ip OR "10.90.90.2" - spoke's tun0 ip?

    Looks like this could finally be kernel related. From README.kernel:

    • recvfrom() returned incorrect NBMA address, breaking NAT detection
      Broken since 3.10-rc1
      commit "GRE: Refactor GRE tunneling code."
      Fixed in 3.10.27, 3.12.8, 3.13-rc7
      commit "ip_gre: fix msg_name parsing for recvfrom/recvmsg"
     
  • G Man

    G Man - 2016-05-10

    It's the kernel. It works after upgrading. Is there any workaround for this issue to avoid the upgrade?

    Thanks for all the help. Nice to see pings going across!

     
    • Timo Teras

      Timo Teras - 2016-05-10

      Very good.

      Unfortunately the kernel bugs cannot be worked around. The only solution is to have a working kernel.

       
  • G Man

    G Man - 2016-05-13

    When the public interface has a secondary IP configured (i.e. IPADDR2 & PREFIX2 in ifcfg-eth0), it incorrectly picks up IPADDR2 for the tunnel endpoint and eventually NHRP does not work. IPADDR2 is a private IP, and NHRP times out during send(). This is on kernel 4.5. The tunnel source was simply set to eth0.

    I tried many configurations but it didn't work. I settled on removing IPADDR2, which works for me.

    Possibly a bug, so I just wanted to write a quick note. I can share more details if needed.

     
  • Lucas Holcomb

    Lucas Holcomb - 2025-03-20

    Hey Timo,
    I am looking to set up just Phase 1 of DMVPN, so just NHRP registrations.
    Multicast will need to go from spoke to hub and vice versa, including PIM, RIP, possibly OSPF, and user multicast data. The communication can always go from a spoke to the hub first before being routed from the hub to the other spokes.
    This also needs to be protected via IPsec, using a strongSwan route-based VPN.

    I know OpenNHRP doesn't fully function with strongSwan, but does Phase 1 work with it, and are any settings needed for multicast?

    I can't use the Quagga NHRP daemon as it does not support multicast.
    I understand the FRR version of OpenNHRP to be usable, but FRR is heavy-handed for our system as we already have advanced routing protocols.
    Thanks!

     

