opennhrp-devel Mailing List for OpenNHRP
Brought to you by: fabled80
Messages per month:

Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----
2008 |  -  |  -  |  2  |  6  |  -  |  -  |  3  | 10  |  -  |  1  |  -  |  -
2009 |  -  | 39  |  1  |  2  | 12  |  -  |  -  |  8  |  2  | 19  |  2  |  -
2010 | 21  |  7  |  7  | 32  |  4  |  4  |  3  |  3  | 10  |  6  |  2  | 11
2011 |  8  |  -  | 12  |  -  |  -  |  7  | 10  |  4  |  -  |  4  |  9  | 24
2012 |  2  |  6  |  8  | 20  |  8  | 20  |  6  |  -  | 11  |  3  | 25  |  1
2013 | 17  | 10  |  2  | 23  | 13  | 14  |  3  |  4  |  7  |  6  |  -  | 19
2014 |  1  | 13  | 32  | 16  | 43  | 22  |  7  |  7  |  -  |  3  |  3  |  2
2015 |  9  |  6  |  9  | 13  | 17  |  5  |  -  |  -  |  -  |  -  |  -  |  2
2016 |  -  |  -  |  -  |  -  |  6  |  5  |  3  |  -  |  -  |  -  |  2  |  -
2017 |  -  | 10  |  -  |  -  |  6  |  -  | 15  | 12  |  -  |  4  |  -  |  -
2019 |  3  |  -  |  -  |  -  |  -  |  -  |  -  |  -  |  -  |  -  |  -  |  -
2023 |  -  |  -  |  -  |  -  |  -  |  -  |  -  |  2  |  -  |  -  |  -  |  -
From: Timo T. <tim...@ik...> - 2023-08-08 07:57:37

Hi,

On Mon, 7 Aug 2023 13:29:50 -0600 Pankaj Choudhary <pan...@gm...> wrote:
> I am using opennhrp 0.14.1. Now I need to support for IPv6 as well,
> like IPv6 ip address in NBMA address, NHRP requests must go through
> HUB for IPv6 addresses etc.
>
> Can anyone is having any patch or any special version of opennhrp
> that will help to support IPv6?

Not that I know of. OpenNHRP is, from my point of view, end-of-life: it integrates only with ipsec-tools, which implements only IKEv1, which is itself end-of-life.

A more up-to-date implementation is found in the FreeRangeRouting suite, see:
- https://frrouting.org/
- https://github.com/FRRouting/frr/

However, I am not sure of the current status of nhrpd there. It is based on the quagga-nhrp work I implemented earlier, which fixed most of the design issues of opennhrp. Quagga is also unmaintained now, so FRR is basically the only maintained implementation at this time.

My understanding is that IPv6 should work on the "protocol address" side. Adding IPv6 NBMA support is non-trivial. The way the IPv4 GRE tunnel is used in IPv4 mode is not really supported in IPv6 GRE tunnels. That is, there is one generic GRE tunnel without a destination address, and an af_packet socket with sendmsg/recvmsg is used to send NHRP packets to a specific NBMA address via a sockaddr_ll address. It is also not simple to extend the kernel for this, because sockaddr_ll.sll_addr is too small a field to contain an IPv6 address.

Perhaps the simplest fix without an intrusive kernel change would be to have FRR create point-to-point GRE tunnels and the corresponding strongSwan IPsec pairings. This would make the code work with the Linux kernel and strongSwan without extra patching, but it would be a fairly extensive change to the FRR codebase.

Currently I am fairly busy on other projects, and am not able to look into this without someone sponsoring the work.

Timo
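The sll_addr size limitation mentioned above can be illustrated with a small sketch. This is illustrative code, not opennhrp source; it assumes the standard Linux uapi layout of struct sockaddr_ll from <linux/if_packet.h>:

```python
import ctypes

# ctypes mirror of struct sockaddr_ll as declared in <linux/if_packet.h>.
# opennhrp sends NHRP packets over an AF_PACKET socket and stores the NBMA
# address in sll_addr; that field is 8 bytes, so a 16-byte IPv6 NBMA
# address cannot fit.
class sockaddr_ll(ctypes.Structure):
    _fields_ = [
        ("sll_family",   ctypes.c_ushort),
        ("sll_protocol", ctypes.c_uint16),   # big-endian on the wire
        ("sll_ifindex",  ctypes.c_int),
        ("sll_hatype",   ctypes.c_ushort),
        ("sll_pkttype",  ctypes.c_uint8),
        ("sll_halen",    ctypes.c_uint8),
        ("sll_addr",     ctypes.c_uint8 * 8),  # hardware/NBMA address storage
    ]

addr_capacity = ctypes.sizeof(sockaddr_ll().sll_addr)
IPV4_LEN, IPV6_LEN = 4, 16
print(addr_capacity)              # 8
print(IPV4_LEN <= addr_capacity)  # True: an IPv4 NBMA address fits
print(IPV6_LEN <= addr_capacity)  # False: an IPv6 NBMA address does not
```

This is why IPv6 NBMA support needs either a kernel-side change or, as suggested above, per-peer point-to-point GRE tunnels that avoid AF_PACKET addressing altogether.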
From: Pankaj C. <pan...@gm...> - 2023-08-07 19:30:14

Hi All,

I am using opennhrp 0.14.1. Now I need to support IPv6 as well: IPv6 addresses as the NBMA address, NHRP requests going through the hub for IPv6 addresses, etc.

Does anyone have a patch or a special version of opennhrp that will help to support IPv6?

Thanks in advance.

Regards,
Pankaj Choudhary
From: Eric B. <eri...@go...> - 2019-01-31 17:12:19

Hi Timo,

Okay, thanks for that. Looks like I was looking in the wrong place.

Thanks,
Eric

On Thu, Jan 31, 2019 at 7:39 AM Timo Teras <tim...@ik...> wrote:
> On Wed, 30 Jan 2019 16:46:31 +0000
> Eric Burke via opennhrp-devel <ope...@li...> wrote:
>
> > Is there any documentation anywhere that would explained the meaning
> > of any exit status codes for the peer script? I have seen 'exit
> > status 2' and have looked at the man pages, online documentation etc
> > but I cannot find this anywhere.
> >
> > Jan 27 14:35:05 daemon.err 00E0C8133D7F opennhrp[5002]: [10.92.64.1]
> > Peer up script failed: exitstatus 2
>
> The code printing that is at:
>
> https://sourceforge.net/p/opennhrp/code/ci/master/tree/nhrp/nhrp_peer.c#l326
>
> But basically it means the script exited with the given return code. So
> what the code really means is dependent on what kind of script you are
> using.
>
> I am not sure if this can happen with the default script since most
> error cases have "|| exit 1".
>
> I suspect you have custom script, and one of the script commands is
> failing with error code 2.
>
> Maybe changing "exitstatus" to "return code" or something might be more
> understandable?
>
> Hope this explains the issue.
>
> Timo
From: Timo T. <tim...@ik...> - 2019-01-31 07:39:48

On Wed, 30 Jan 2019 16:46:31 +0000 Eric Burke via opennhrp-devel <ope...@li...> wrote:
> Is there any documentation anywhere that would explained the meaning
> of any exit status codes for the peer script? I have seen 'exit
> status 2' and have looked at the man pages, online documentation etc
> but I cannot find this anywhere.
>
> Jan 27 14:35:05 daemon.err 00E0C8133D7F opennhrp[5002]: [10.92.64.1]
> Peer up script failed: exitstatus 2

The code printing that is at:

https://sourceforge.net/p/opennhrp/code/ci/master/tree/nhrp/nhrp_peer.c#l326

But basically it means the script exited with the given return code. So what the code really means depends on what kind of script you are using.

I am not sure if this can happen with the default script, since most error cases have "|| exit 1".

I suspect you have a custom script, and one of the script commands is failing with error code 2.

Maybe changing "exitstatus" to "return code" or something might be more understandable?

Hope this explains the issue.

Timo
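Since opennhrp only reports whatever status the script returned, the meaning of "exitstatus 2" lives entirely in the script itself. A minimal sketch of the mechanism (the failing command name here is hypothetical, chosen only so the `|| exit 2` branch fires):

```python
import subprocess

# Hypothetical fragment of a custom peer-up script: the "cmd || exit N"
# pattern turns any failure of cmd into exit status N, which opennhrp then
# logs verbatim as "Peer up script failed: exitstatus N".
script = "some_missing_command 2>/dev/null || exit 2"
rc = subprocess.run(["sh", "-c", script]).returncode
print(rc)  # 2
```

So to decode the log line, find which command in your peer script is followed by `|| exit 2` (or which last command itself returns 2).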
From: Eric B. <eri...@go...> - 2019-01-30 16:49:22

Hi,

Is there any documentation anywhere that would explain the meaning of the exit status codes for the peer script? I have seen 'exit status 2' and have looked at the man pages, online documentation etc. but I cannot find this anywhere.

Jan 27 14:35:05 daemon.err 00E0C8133D7F opennhrp[5002]: [10.92.64.1] Peer up script failed: exitstatus 2

Thanks,
Eric
From: Timo T. <tim...@ik...> - 2017-10-30 07:20:07

Hi,

I have now looked at the logs in more detail. Somehow it seems that strongSwan is negotiating, but is not able to successfully establish the IPsec SA between the spokes.

Can you set strongSwan in debug mode, and get the logs from it as well when trying to establish the shortcut tunnel?

This could be something to do with the 'psk' auth mode. I never tested it (I use certs only), though the config looks ok. Please confirm also which strongSwan version and patches are applied.

Timo

On Sun, 29 Oct 2017 16:28:40 -0400 Lee Cardona <lee...@gm...> wrote:
> Hi Timo,
>
> I've applied the changes but it still does not perform spoke-to-spoke
> phase 3 for back-end nets. Basically the same dmvpn behavior.
>
> The prefix filter works, preventing /32s coming in via bgp. And the
> gre subnet is announced from the hub.
>
> Any specific action I can perform to see why the cache entry being
> entered when doing net-to-net loads as "Invalid" and with an "A"
> flag?
>
> Do you think this could be a bug?
>
> Sent from my iPhone
>
> > On Oct 26, 2017, at 12:53 AM, Timo Teras <tim...@ik...> wrote:
> >
> > On Wed, 25 Oct 2017 12:16:21 -0400
> > Lee Cardona <lee...@gm...> wrote:
> >
> >> Timo,
> >>
> >> When you say,
> >>
> >> "The hub should be announcing the GRE subnet via BGP explicitly, so
> >> you'll need also for Hub's BGP config:
> >> network 10.10.10.0/24"
> >>
> >> Do you mean the gre subnet for nmba addresses or the tunnel
> >> addresses? Just looking to confirm because in my setup the
> >> 10.10.10.0/24 subnet is used for the nmba interfaces on eth0's.
> >> While the 192.168.0.0/16 is used on the inner tunnel gre
> >> interfaces.
> >>
> >> Did you mean 192.168.0.0/16?
> >
> > Sorry. Yes, I meant the subnet covering gre1 addresses. Or
> > 192.168.0.0/16. I misread the diagram.
> >
> > Timo
From: Timo T. <tim...@ik...> - 2017-10-25 05:33:29

Hi,

I just realized the other spoke's IP arrives as a BGP route. I think this is not right. The way it should work is that if there are multiple Hubs they exchange the full Spoke route database with BGP, but Hubs should not announce those routes to Spokes - I suspect this is a side effect of that.

The config I privately use has on Hubs:

  neighbor DMVPN prefix-list no-hosts out
  ..
  ip prefix-list no-hosts seq 5 permit 0.0.0.0/0 le 30

Alternatively, as long as you have only a single Hub, you could remove the 'redistribute nhrp' from BGP, which has the same effect of not announcing spoke host routes. But if you have more than one Hub then you need the redistribute config with the route filters.

I think in Cisco FlexVPN this works more transparently because normal spokes do not use BGP at all; instead they get the Hub routes as "IKE routes" announced over IKEv2, and that list never contains the spokes.

The hub should be announcing the GRE subnet via BGP explicitly, so you'll also need in the Hub's BGP config:

  network 10.10.10.0/24

If you ever add other Hubs, they should be in a different peer group that gets the full routing database including spoke host routes.

A rough overview of this is explained in README.nhrpd 'Routing Design', but it omits certain parts of this. I need to also add this to the BGP config example.

Hope these three Hub-specific BGP configs will fix the issue. I will update README.nhrpd accordingly, and try to add an example.

Timo

On Tue, 24 Oct 2017 16:45:36 -0400 Lee Cardona <lee...@gm...> wrote:
> Timo,
>
> After some additional investigation, I was able to discover that the
> version of frr I was using (3.0rc1 from Aug 9th) lacked some recent
> fixes to the route distribution and other nhrp/bgp fixes and updates.
>
> I have since rebuilt my setup with the latest frr build of #1913
> Oct. 10th this past weekend.
>
> Source code from: https://ci1.netdef.org/browse/FRR-FRRPULLREQ-1913/
>
> Everything looks stable and the debug log comes up clean.
>
> Now when pinging from tunnel interfaces, the spoke to spoke (S2S) tunnel
> comes up and switches from phase 1 (via hub) to phase 3 (spoke to
> spoke) after ~10 pings following hub redirect.
>
> However, I can not get phase 3 spoke to spoke to initiate if the
> pings are from the back-end networks.
>
> In summary,
>
> - it appears phase 1 works fine whether its tunnel to tunnel,
>   back-end to back-end, tunnel to back-end or back-end to tunnel over
>   the hub
> - it appears phase 3 S2S works only when its tunnel to tunnel
> - phase 3 S2S does not work if its back-end to back-end, tunnel to
>   back-end or back-end to tunnel
>
> I'm just not sure if it's a configuration issue on my end with nhrp
> or bgp or if this is a bug?
>
> Here is a link to a gist that has a more detailed explanation and the
> configs of the hub and spokes plus debug output.
>
> https://gist.github.com/leecardona/e9230cc42d3a2ab557087d8a63087450
>
> Super thanks again for all the help!
>
> --Lee
>
> On Tue, Oct 17, 2017 at 1:34 AM, Timo Teras <tim...@ik...> wrote:
> > Hi,
> >
> > On Mon, 16 Oct 2017 13:53:47 -0400
> > Lee Cardona <lee...@gm...> wrote:
> >
> > > Hi Timo et al,
> > >
> > > I'm having a problem with shortcuts not installing with
> > > ubuntu/strongswan/Frr.
> > >
> > > Setup is as follows:
> > >
> > > 1 hub 2 spokes (single machine lab setup with lxc)
> > >
> > > strongswan loads dmvpn conns ok
> > > both spokes register fine
> > > Hub has [N] routes back to spokes
> > > Spokes have single [N] route back to hub... spoke to hub pings
> > > fine and vise-a-versa
> > > ibgp works fine - hub bgp sees routes from spokes and spokes see
> > > routes from hub and reflected routes sent by hub from other spoke
> > > in RIB
> >
> > Nice.
> >
> > > But the FIB does not install these routes
> > >
> > > and a
> > >
> > > sh ip nhrp shortcuts
> > >
> > > returns no entries
> > >
> > > nhrp nflog-group 1 is enabled on hub
> > >
> > > and iptables NFLOG rule also installed on hub
> >
> > Could you show the iptables rule on the hub?
> >
> > Please also verify from the 'iptables -v -L' command's statistics
> > output that the iptables rule is matching packets.
> >
> > Any logs? Please enable nhrp debugging and post the logs on hub, and
> > one spoke.
> >
> > > Additonally, I've turned on 'nhrp nflog-group 1' along with
> > > iptables rule on hub and spokes but not sure if this is needed on
> > > spokes.
> >
> > Spoke will not need this. This is basically enabling kernel to
> > notify nhrpd about packets being routed non-optimally.
> >
> > > Also put 'ip nhrp redirect' in addition to 'ip nhrp shortcut' on
> > > spokes but also not sure if 'ip nhrp redirect' is needed on
> > > spokes.
> >
> > Not needed either. "Redirect" concerns hub only. It enables nhrpd to
> > send NHRP Traffic Indication messages based on the above mentioned
> > kernel notifications.
> >
> > > I also tried adding 'ip nhrp shortcut' on hub but again not sure
> > > if this does anyting on the hub.
> >
> > This might be needed if you have multiple hubs, and not all spokes
> > are connecting to all hubs. In this case this would enable hub to
> > initiate shortcuts to spokes that are not directly connected.
> >
> > > After a long weekend I just can't seem to figure this one out.
> > > Help pls!
> > >
> > > Also have full debug log from frr if that will help
> >
> > Yes please :)
> >
> > > router bgp 65000
> > > bgp router-id 10.0.0.1
> > > bgp default show-hostname
> > > no bgp default ipv4-unicast
> > > neighbor DMVPN peer-group
> > > neighbor DMVPN remote-as 65000
> > > neighbor DMVPN disable-connected-check
> > > neighbor DMVPN advertisement-interval 1
> > > neighbor 10.0.0.6 peer-group DMVPN
> > > neighbor 10.0.0.7 peer-group DMVPN
> >
> > Btw. FRR should support the BGP listen-range feature, so you should be
> > good on hub with just:
> >
> >   bgp listen range 10.0.0.0/8 peer-group DMVPN
> >
> > instead of listing all spokes. Unfortunately this is not available
> > on Quagga (at the time of writing this).
> >
> > Timo
From: Timo T. <tim...@ik...> - 2017-10-17 05:34:35

Hi,

On Mon, 16 Oct 2017 13:53:47 -0400 Lee Cardona <lee...@gm...> wrote:
> Hi Timo et al,
>
> I'm having a problem with shortcuts not installing with
> ubuntu/strongswan/Frr.
>
> Setup is as follows:
>
> 1 hub 2 spokes (single machine lab setup with lxc)
>
> strongswan loads dmvpn conns ok
> both spokes register fine
> Hub has [N] routes back to spokes
> Spokes have single [N] route back to hub... spoke to hub pings fine
> and vise-a-versa
> ibgp works fine - hub bgp sees routes from spokes and spokes see
> routes from hub and reflected routes sent by hub from other spoke in
> RIB

Nice.

> But the FIB does not install these routes
>
> and a
>
> sh ip nhrp shortcuts
>
> returns no entries
>
> nhrp nflog-group 1 is enabled on hub
>
> and iptables NFLOG rule also installed on hub

Could you show the iptables rule on the hub?

Please also verify from the 'iptables -v -L' command's statistics output that the iptables rule is matching packets.

Any logs? Please enable nhrp debugging and post the logs on the hub, and one spoke.

> Additonally, I've turned on 'nhrp nflog-group 1' along with iptables
> rule on hub and spokes but not sure if this is needed on spokes.

Spokes will not need this. This basically enables the kernel to notify nhrpd about packets being routed non-optimally.

> Also put 'ip nhrp redirect' in addition to 'ip nhrp shortcut' on
> spokes but also not sure if 'ip nhrp redirect' is needed on spokes.

Not needed either. "Redirect" concerns the hub only. It enables nhrpd to send NHRP Traffic Indication messages based on the above mentioned kernel notifications.

> I also tried adding 'ip nhrp shortcut' on hub but again not sure if
> this does anyting on the hub.

This might be needed if you have multiple hubs, and not all spokes are connecting to all hubs. In this case it would enable the hub to initiate shortcuts to spokes that are not directly connected.

> After a long weekend I just can't seem to figure this one out. Help
> pls!
>
> Also have full debug log from frr if that will help

Yes please :)

> router bgp 65000
> bgp router-id 10.0.0.1
> bgp default show-hostname
> no bgp default ipv4-unicast
> neighbor DMVPN peer-group
> neighbor DMVPN remote-as 65000
> neighbor DMVPN disable-connected-check
> neighbor DMVPN advertisement-interval 1
> neighbor 10.0.0.6 peer-group DMVPN
> neighbor 10.0.0.7 peer-group DMVPN

Btw. FRR should support the BGP listen-range feature, so you should be good on the hub with just:

  bgp listen range 10.0.0.0/8 peer-group DMVPN

instead of listing all spokes. Unfortunately this is not available on Quagga (at the time of writing this).

Timo
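For reference, the shape of NFLOG rule the hub needs is sketched below. This is an assumption based on the general mechanism described above (kernel mirrors non-optimally routed packets to a netlink log group that nhrpd listens on), not Lee's actual rule; the interface name gre1 and group number must match the 'nhrp nflog-group 1' setting, and the hashlimit match is an optional rate limit so nhrpd is not flooded with notifications:

```shell
# Hedged sketch: mirror traffic forwarded gre1 -> gre1 (spoke-to-spoke via
# the hub) to netlink log group 1, where nhrpd picks it up as a redirect
# trigger. Rate-limited per src/dst pair via the hashlimit match.
iptables -A FORWARD -i gre1 -o gre1 \
    -m hashlimit --hashlimit-upto 4/minute --hashlimit-burst 1 \
    --hashlimit-mode srcip,dstip --hashlimit-name nhrp-redirect \
    -j NFLOG --nflog-group 1 --nflog-range 128
```

Whether the rule is matching at all can then be checked with the packet counters in `iptables -v -L FORWARD`, as suggested above.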
From: Lee C. <lee...@gm...> - 2017-10-16 17:53:55

Hi Timo et al,

I'm having a problem with shortcuts not installing with ubuntu/strongswan/Frr.

Setup is as follows:

1 hub 2 spokes (single machine lab setup with lxc)

strongswan loads dmvpn conns ok
both spokes register fine
Hub has [N] routes back to spokes
Spokes have single [N] route back to hub... spoke to hub pings fine and vise-a-versa
ibgp works fine - hub bgp sees routes from spokes and spokes see routes from hub and reflected routes sent by hub from other spoke in RIB

But the FIB does not install these routes, and a

  sh ip nhrp shortcuts

returns no entries.

nhrp nflog-group 1 is enabled on hub, and the iptables NFLOG rule is also installed on hub.

Additonally, I've turned on 'nhrp nflog-group 1' along with the iptables rule on hub and spokes but not sure if this is needed on spokes.

Also put 'ip nhrp redirect' in addition to 'ip nhrp shortcut' on spokes but also not sure if 'ip nhrp redirect' is needed on spokes.

I also tried adding 'ip nhrp shortcut' on hub but again not sure if this does anyting on the hub.

After a long weekend I just can't seem to figure this one out. Help pls!

Also have full debug log from frr if that will help.

===== Configs

::Hub=================
nhrp nflog-group 1
debug nhrp all
interface gre1
 ip nhrp holdtime 3600
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma 192.168.0.5
 ip nhrp redirect
 ip nhrp registration no-unique
 ip nhrp shortcut
 no link-detect
 tunnel protection vici profile dmvpn
 tunnel source eth0
router bgp 65000
 bgp router-id 10.0.0.1
 bgp default show-hostname
 no bgp default ipv4-unicast
 neighbor DMVPN peer-group
 neighbor DMVPN remote-as 65000
 neighbor DMVPN disable-connected-check
 neighbor DMVPN advertisement-interval 1
 neighbor 10.0.0.6 peer-group DMVPN
 neighbor 10.0.0.7 peer-group DMVPN
 !
 address-family ipv4 unicast
  network 10.0.0.0/8
  network 10.0.0.1/32
  redistribute nhrp
  neighbor DMVPN activate
  neighbor DMVPN route-reflector-client
  neighbor DMVPN next-hop-self force
  neighbor DMVPN soft-reconfiguration inbound
 exit-address-family

::Spoke1=================
nhrp nflog-group 1
debug nhrp all
interface gre1
 ip nhrp holdtime 3600
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma 192.168.0.5
 ip nhrp redirect
 ip nhrp registration no-unique
 ip nhrp shortcut
 no link-detect
 tunnel protection vici profile dmvpn
 tunnel source eth0
router bgp 65000
 bgp router-id 10.0.0.6
 bgp default show-hostname
 no bgp default ipv4-unicast
 neighbor DMVPN peer-group
 neighbor DMVPN remote-as 65000
 neighbor DMVPN disable-connected-check
 neighbor DMVPN advertisement-interval 1
 neighbor 10.0.0.1 peer-group DMVPN
 !
 address-family ipv4 unicast
  network 10.50.0.0/16
  network 10.0.0.6/32
  redistribute nhrp
  neighbor DMVPN activate
  neighbor DMVPN soft-reconfiguration inbound
 exit-address-family

::Spoke2=================
nhrp nflog-group 1
debug nhrp all
interface gre1
 ip nhrp holdtime 3600
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma 192.168.0.5
 ip nhrp redirect
 ip nhrp registration no-unique
 ip nhrp shortcut
 no link-detect
 tunnel protection vici profile dmvpn
 tunnel source eth0
router bgp 65000
 bgp router-id 10.0.0.7
 bgp default show-hostname
 no bgp default ipv4-unicast
 neighbor DMVPN peer-group
 neighbor DMVPN remote-as 65000
 neighbor DMVPN disable-connected-check
 neighbor DMVPN advertisement-interval 1
 neighbor 10.0.0.1 peer-group DMVPN
 !
 address-family ipv4 unicast
  network 172.31.0.0/16
  network 10.0.0.7/32
  redistribute nhrp
  neighbor DMVPN activate
  neighbor DMVPN soft-reconfiguration inbound
 exit-address-family
From: M87tech [Jon] <m8...@gm...> - 2017-08-16 23:14:16

Hi,

Just to say that it's all working perfectly now after I pulled the latest version of frr 3.1 from github. Also works perfectly with the hub behind NAT!

Cheers :)
Jon.

On Tue, 1 Aug 2017 at 15:42 M87tech [Jon] <m8...@gm...> wrote:
> Hi,
>
> Here are the logs,
>
> it includes probably two reboots during this period.
>
> Cheers,
> Jon.
>
> https://pastebin.com/hbudf045
>
> On Tue, 1 Aug 2017 at 13:18 Timo Teras <tim...@ik...> wrote:
>> On Tue, 01 Aug 2017 11:57:29 +0000
>> "M87tech [Jon]" <m8...@gm...> wrote:
>>
>> > root@hub2-nhrp:/home/jon# swanctl --list-sas
>> >
>> > root@hub2-nhrp:/home/jon# vtysh
>> >
>> > This is a git build of frr-3.0-rc0-1039-gff9f629d
>> >
>> > hub2-nhrp# show dmvpn
>> > Src       Dst           Flags  SAs  Identity
>> > (unspec)  51.15.49.245  n      0
>> >
>> > hub2-nhrp# show ip nhrp cache
>> > % No entries
>>
>> This looks like a bug. It may be specific to frr codebase - I have not
>> tested it if it works, and they've done quite a bit of changes. Could
>> you try if it works with Quagga git master, or release 1.2.1 ?
>>
>> Alternatively, post full debug log from nhrpd's startup to the point of
>> trying to establish tunnels. You seem to have already all debugging
>> enabled. I would like to see there lines with "if-add" and
>> "if-addr-add" amongst other debug messages. If no such lines exist, it
>> would suggest frr bug.
>>
>> Timo
>
> --
> M87 TECH
> Jon Clayton

--
M87 TECH
Jon Clayton
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 14:42:22

Hi,

Here are the logs,

it includes probably two reboots during this period.

Cheers,
Jon.

https://pastebin.com/hbudf045

On Tue, 1 Aug 2017 at 13:18 Timo Teras <tim...@ik...> wrote:
> On Tue, 01 Aug 2017 11:57:29 +0000
> "M87tech [Jon]" <m8...@gm...> wrote:
>
> > root@hub2-nhrp:/home/jon# swanctl --list-sas
> >
> > root@hub2-nhrp:/home/jon# vtysh
> >
> > This is a git build of frr-3.0-rc0-1039-gff9f629d
> >
> > hub2-nhrp# show dmvpn
> > Src       Dst           Flags  SAs  Identity
> > (unspec)  51.15.49.245  n      0
> >
> > hub2-nhrp# show ip nhrp cache
> > % No entries
>
> This looks like a bug. It may be specific to frr codebase - I have not
> tested it if it works, and they've done quite a bit of changes. Could
> you try if it works with Quagga git master, or release 1.2.1 ?
>
> Alternatively, post full debug log from nhrpd's startup to the point of
> trying to establish tunnels. You seem to have already all debugging
> enabled. I would like to see there lines with "if-add" and
> "if-addr-add" amongst other debug messages. If no such lines exist, it
> would suggest frr bug.
>
> Timo

--
M87 TECH
Jon Clayton
From: Timo T. <tim...@ik...> - 2017-08-01 12:18:11

On Tue, 01 Aug 2017 11:57:29 +0000 "M87tech [Jon]" <m8...@gm...> wrote:
> root@hub2-nhrp:/home/jon# swanctl --list-sas
>
> root@hub2-nhrp:/home/jon# vtysh
>
> This is a git build of frr-3.0-rc0-1039-gff9f629d
>
> hub2-nhrp# show dmvpn
> Src       Dst           Flags  SAs  Identity
> (unspec)  51.15.49.245  n      0
>
> hub2-nhrp# show ip nhrp cache
> % No entries

This looks like a bug. It may be specific to the frr codebase - I have not tested if it works there, and they've made quite a few changes. Could you try if it works with Quagga git master, or release 1.2.1?

Alternatively, post a full debug log from nhrpd's startup to the point of trying to establish tunnels. You seem to already have all debugging enabled. I would like to see lines with "if-add" and "if-addr-add" among the other debug messages. If no such lines exist, it would suggest an frr bug.

Timo
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 11:57:48

root@hub2-nhrp:/home/jon# swanctl --list-conns
dmvpn: IKEv2, reauthentication every 46800s, rekeying every 14400s
  local:  %any
  remote: %any
  local pre-shared key authentication:
    id: hu...@op...oud
  remote pre-shared key authentication:
    id: hu...@op...oud
  dmvpn: TRANSPORT, rekeying every 6000s
    local:  dynamic[gre]
    remote: dynamic[gre]

root@hub2-nhrp:/home/jon# swanctl --list-sas

root@hub2-nhrp:/home/jon# vtysh

This is a git build of frr-3.0-rc0-1039-gff9f629d

hub2-nhrp# show dmvpn
Src       Dst           Flags  SAs  Identity
(unspec)  51.15.49.245  n      0

hub2-nhrp# show ip nhrp cache
% No entries

hub2-nhrp#

On Tue, 1 Aug 2017 at 12:55 M87tech [Jon] <m8...@gm...> wrote:
> Here are the results:
>
> root@hub2-nhrp:/home/jon# swanctl --list-conns
> dmvpn: IKEv2, reauthentication every 46800s, rekeying every 14400s
>   local:  %any
>   remote: %any
>   local pre-shared key authentication:
>     id: hu...@op...oud
>   remote pre-shared key authentication:
>     id: hu...@op...oud
>   dmvpn: TRANSPORT, rekeying every 6000s
>     local:  dynamic[gre]
>     remote: dynamic[gre]
>
> root@hub2-nhrp:/home/jon# swanctl --list-sas
> ^^^ no output here
>
> hub2-nhrp# show dmvpn
> Src       Dst           Flags  SAs  Identity
> (unspec)  51.15.49.245  n      0
>
> hub2-nhrp# show ip nhrp cache
> % No entries
>
> hub2-nhrp#
>
> On Tue, 1 Aug 2017 at 10:02 M87tech [Jon] <m8...@gm...> wrote:
>> NHRPd has never automatically created the SA. The only way I could do
>> this was with the manual swanctl command yesterday.
>>
>> Also there are no error messages.
>>
>> I'll try those commands shortly.
>>
>> Cheers,
>> Jon.
>>
>> On Tue, 1 Aug 2017 at 09:54 Timo Teras <tim...@ik...> wrote:
>>> On Tue, 01 Aug 2017 08:49:07 +0000
>>> "M87tech [Jon]" <m8...@gm...> wrote:
>>>
>>> > I think that is why there is not automatic SA established, because
>>> > there is no GRE traffic to trigger the swanctl policy in the first
>>> > place. Thats why only the manual command establishes the child SA.
>>>
>>> No. Again, nhrpd requests strongSwan to establish SA. Until strongSwan
>>> acks active SA back to nhrpd, it's not going to attempt to send any
>>> nhrp messages. In dmvpn nhrp is driving IKE; IKE is not being driven by
>>> the traffic acquire like in ike tunnel mode.
>>>
>>> So after starting from clean slate. Is strongSwan now establishing
>>> SA's? Are they fully established?
>>>
>>> What does say:
>>> swanctl --list-conns
>>> swanctl --list-sas
>>>
>>> And nhrpd's:
>>> show dmvpn
>>> show ip nhrp cache
>>>
>>> Timo

--
M87 TECH
Jon Clayton
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 11:55:58

Here are the results:

root@hub2-nhrp:/home/jon# swanctl --list-conns
dmvpn: IKEv2, reauthentication every 46800s, rekeying every 14400s
  local:  %any
  remote: %any
  local pre-shared key authentication:
    id: hu...@op...oud
  remote pre-shared key authentication:
    id: hu...@op...oud
  dmvpn: TRANSPORT, rekeying every 6000s
    local:  dynamic[gre]
    remote: dynamic[gre]

root@hub2-nhrp:/home/jon# swanctl --list-sas
^^^ no output here

hub2-nhrp# show dmvpn
Src       Dst           Flags  SAs  Identity
(unspec)  51.15.49.245  n      0

hub2-nhrp# show ip nhrp cache
% No entries

hub2-nhrp#

On Tue, 1 Aug 2017 at 10:02 M87tech [Jon] <m8...@gm...> wrote:
> NHRPd has never automatically created the SA. The only way I could do
> this was with the manual swanctl command yesterday.
>
> Also there are no error messages.
>
> I'll try those commands shortly.
>
> Cheers,
> Jon.
>
> On Tue, 1 Aug 2017 at 09:54 Timo Teras <tim...@ik...> wrote:
>> On Tue, 01 Aug 2017 08:49:07 +0000
>> "M87tech [Jon]" <m8...@gm...> wrote:
>>
>> > I think that is why there is not automatic SA established, because
>> > there is no GRE traffic to trigger the swanctl policy in the first
>> > place. Thats why only the manual command establishes the child SA.
>>
>> No. Again, nhrpd requests strongSwan to establish SA. Until strongSwan
>> acks active SA back to nhrpd, it's not going to attempt to send any
>> nhrp messages. In dmvpn nhrp is driving IKE; IKE is not being driven by
>> the traffic acquire like in ike tunnel mode.
>>
>> So after starting from clean slate. Is strongSwan now establishing
>> SA's? Are they fully established?
>>
>> What does say:
>> swanctl --list-conns
>> swanctl --list-sas
>>
>> And nhrpd's:
>> show dmvpn
>> show ip nhrp cache
>>
>> Timo

--
M87 TECH
Jon Clayton
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 09:03:16

NHRPd has never automatically created the SA. The only way I could do this was with the manual swanctl command yesterday.

Also there are no error messages.

I'll try those commands shortly.

Cheers,
Jon.

On Tue, 1 Aug 2017 at 09:54 Timo Teras <tim...@ik...> wrote:
> On Tue, 01 Aug 2017 08:49:07 +0000
> "M87tech [Jon]" <m8...@gm...> wrote:
>
> > I think that is why there is not automatic SA established, because
> > there is no GRE traffic to trigger the swanctl policy in the first
> > place. Thats why only the manual command establishes the child SA.
>
> No. Again, nhrpd requests strongSwan to establish SA. Until strongSwan
> acks active SA back to nhrpd, it's not going to attempt to send any
> nhrp messages. In dmvpn nhrp is driving IKE; IKE is not being driven by
> the traffic acquire like in ike tunnel mode.
>
> So after starting from clean slate. Is strongSwan now establishing
> SA's? Are they fully established?
>
> What does say:
> swanctl --list-conns
> swanctl --list-sas
>
> And nhrpd's:
> show dmvpn
> show ip nhrp cache
>
> Timo

--
M87 TECH
Jon Clayton
From: Timo T. <tim...@ik...> - 2017-08-01 08:54:12

On Tue, 01 Aug 2017 08:49:07 +0000 "M87tech [Jon]" <m8...@gm...> wrote:
> I think that is why there is not automatic SA established, because
> there is no GRE traffic to trigger the swanctl policy in the first
> place. Thats why only the manual command establishes the child SA.

No. Again, nhrpd requests strongSwan to establish the SA. Until strongSwan acks an active SA back to nhrpd, it is not going to attempt to send any nhrp messages. In dmvpn, nhrp is driving IKE; IKE is not being driven by a traffic acquire like in IKE tunnel mode.

So, after starting from a clean slate: is strongSwan now establishing SAs? Are they fully established?

What do these say:

  swanctl --list-conns
  swanctl --list-sas

And nhrpd's:

  show dmvpn
  show ip nhrp cache

Timo
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 08:49:24

I think that is why there is not automatic SA established, because there is no GRE traffic to trigger the swanctl policy in the first place. Thats why only the manual command establishes the child SA.

On Tue, 1 Aug 2017 at 09:48 M87tech [Jon] <m8...@gm...> wrote:
> Nothing on any of the GRE interfaces, not a peep, no counters no nothing.
> :-(
>
> On Tue, 1 Aug 2017 at 09:43 Timo Teras <tim...@ik...> wrote:
>> On Tue, 01 Aug 2017 08:29:40 +0000
>> "M87tech [Jon]" <m8...@gm...> wrote:
>>
>> > Ok understood, in that case ipsec encapsulates it and provides the
>> > tunnel to reach the endpoint that lives (in my case) behind nat and
>> > as you say, as I'm only stating the public address to be resolved in
>> > the nhrp section, so I can imagine then it could get confused and
>> > possibly drop the packet.
>> >
>> > I don't even think its getting to the point of getting confused so far
>> > though. Ideally I need to look at the encrypted traffic to see if
>> > there is any nhrp being sent down it. Also I'm not seeing anything
>> > in the debug logs on either side to give me some hints into why its
>> > failing?
>> >
>> > I'll have another bash at this tonight.
>>
>> tcpdump the GRE interface. You should be able to see all NHRP traffic
>> there in plaintext. Wireshark is also able to analyze it.
>>
>> Timo
>
> --
> M87 TECH
> Jon Clayton

--
M87 TECH
Jon Clayton
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 08:48:38
|
Nothing on any of the GRE interfaces, not a peep, no counters, no nothing. :-(

On Tue, 1 Aug 2017 at 09:43 Timo Teras <tim...@ik...> wrote:

> On Tue, 01 Aug 2017 08:29:40 +0000
> "M87tech [Jon]" <m8...@gm...> wrote:
>
> > Ok understood, in that case ipsec encapsulates it and provides the
> > tunnel to reach the endpoint that lives (in my case) behind nat and
> > as you say, as I'm only stating the public address to be resolved in
> > the nhrp section, so I can imagine then it could get confused and
> > possibly drop the packet.
> >
> > I don't even think its getting to the point of getting confused so far
> > though. Ideally I need to look at the encrypted traffic to see if
> > there is any nhrp being sent down it. Also I'm not seeing anything
> > in the debug logs on either side to give me some hints into why its
> > failing?
> >
> > I'll have another bash at this tonight.
>
> tcpdump the GRE interface. You should be able to see all NHRP traffic
> there in plaintext. Wireshark is also able to analyze it.
>
> Timo

--
M87 TECH
Jon Clayton
|
From: Timo T. <tim...@ik...> - 2017-08-01 08:44:00
|
On Tue, 01 Aug 2017 08:29:40 +0000 "M87tech [Jon]" <m8...@gm...> wrote:

> Ok understood, in that case ipsec encapsulates it and provides the
> tunnel to reach the endpoint that lives (in my case) behind nat and
> as you say, as I'm only stating the public address to be resolved in
> the nhrp section, so I can imagine then it could get confused and
> possibly drop the packet.
>
> I don't even think its getting to the point of getting confused so far
> though. Ideally I need to look at the encrypted traffic to see if
> there is any nhrp being sent down it. Also I'm not seeing anything
> in the debug logs on either side to give me some hints into why its
> failing?
>
> I'll have another bash at this tonight.

tcpdump the GRE interface. You should be able to see all NHRP traffic there in plaintext. Wireshark is also able to analyze it.

Timo
|
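[Editor's note] Once tcpdump shows packets on the GRE interface, the NHRP payload can also be decoded by hand. Below is a sketch of a parser for the 20-byte NHRP fixed header as defined in RFC 2332 section 5.1 (as seen after the GRE header is stripped). It is an illustration, not the parser nhrpd itself uses.

```python
import struct

# NHRP fixed header, RFC 2332 section 5.1 (network byte order):
# ar$afn(2) ar$pro.type(2) ar$pro.snap(5) ar$hopcnt(1) ar$pktsz(2)
# ar$chksum(2) ar$extoff(2) ar$op.version(1) ar$op.type(1) ar$shtl(1) ar$sstl(1)
FIXED_HDR = struct.Struct(">HH5sBHHHBBBB")

OP_TYPES = {1: "Resolution Request", 2: "Resolution Reply",
            3: "Registration Request", 4: "Registration Reply",
            5: "Purge Request", 6: "Purge Reply", 7: "Error Indication"}

def parse_fixed_header(data):
    (afn, pro_type, _snap, hopcnt, pktsz, _chksum,
     extoff, version, op_type, shtl, sstl) = FIXED_HDR.unpack_from(data)
    return {"afn": afn, "pro_type": pro_type, "hopcnt": hopcnt,
            "pktsz": pktsz, "version": version,
            "op": OP_TYPES.get(op_type, "Unknown"),
            "shtl": shtl, "sstl": sstl}

# Hand-built example header: AFN 1 (IPv4 NBMA), protocol 0x0800 (IPv4),
# hop count 255, 20-byte packet, NHRP version 1, op type 3 (registration),
# 4-byte source NBMA address length.
pkt = FIXED_HDR.pack(1, 0x0800, b"\x00" * 5, 255, 20, 0, 0, 1, 3, 4, 0)
hdr = parse_fixed_header(pkt)
assert hdr["op"] == "Registration Request"
assert hdr["afn"] == 1 and hdr["pktsz"] == 20
```

Wireshark implements the same dissection, which is why it can analyze the plaintext capture directly.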
From: M87tech [Jon] <m8...@gm...> - 2017-08-01 08:29:58
|
Hi,

Ok understood; in that case IPsec encapsulates it and provides the tunnel to reach the endpoint that lives (in my case) behind NAT. As you say, since I'm only stating the public address to be resolved in the nhrp section, I can imagine it could get confused and possibly drop the packet.

I don't even think it's getting to the point of getting confused so far, though. Ideally I need to look at the encrypted traffic to see if there is any NHRP being sent down it. Also, I'm not seeing anything in the debug logs on either side to give me some hints into why it's failing.

I'll have another bash at this tonight.

Cheers!
Jon.

On Tue, 1 Aug 2017 at 06:24 Timo Teras <tim...@ik...> wrote:

> On Mon, 31 Jul 2017 19:27:41 +0000
> "M87tech [Jon]" <m8...@gm...> wrote:
>
> > Out of interest, what address will nhrp bind to and source its
> > messages from? I'm assuming it binds to the gre tunnel address? If
> > so I'm still not seeing any packets being sent from the gre tunnel?
>
> NHRP works directly on top of layer 2, so there's no addresses to bind
> to. The protocol number is 0x2001.
>
> > does it matter that both ends are behind NAT? I thought transport
> > mode allowed this.?
>
> It might confuse nhrpd. All spokes can be behind NAT. But the hub nodes
> by design should not be behind NAT.
>
> Timo

--
M87 TECH
Jon Clayton
|
From: Timo T. <tim...@ik...> - 2017-08-01 05:24:20
|
On Mon, 31 Jul 2017 19:27:41 +0000 "M87tech [Jon]" <m8...@gm...> wrote:

> Out of interest, what address will nhrp bind to and source its
> messages from? I'm assuming it binds to the gre tunnel address? If
> so I'm still not seeing any packets being sent from the gre tunnel?

NHRP works directly on top of layer 2, so there are no addresses to bind to. The protocol number is 0x2001.

> does it matter that both ends are behind NAT? I thought transport
> mode allowed this.?

It might confuse nhrpd. All spokes can be behind NAT. But the hub nodes by design should not be behind NAT.

Timo
|
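[Editor's note] "Directly on top of layer 2" means the NHRP payload inside GRE is tagged with protocol type 0x2001 rather than being carried over UDP/TCP, so there is no socket address or port to bind to. A minimal sketch of that encapsulation follows (an illustration, not kernel code):

```python
import struct

GRE_PROTO_NHRP = 0x2001  # protocol type for NHRP inside GRE

def gre_encap(payload, proto=GRE_PROTO_NHRP):
    # Basic GRE header (RFC 2784): 2 bytes of flags/version (all zero
    # here: no checksum, key or sequence), then the 2-byte protocol type.
    return struct.pack(">HH", 0x0000, proto) + payload

def gre_proto(frame):
    # Read back the protocol type to classify the inner payload.
    _flags, proto = struct.unpack_from(">HH", frame)
    return proto

frame = gre_encap(b"nhrp-fixed-header-and-body")
assert gre_proto(frame) == 0x2001
```

This is also why the tcpdump advice earlier in the thread works: capturing on the GRE interface shows the decapsulated payload, and dissectors key off protocol 0x2001 to decode it as NHRP.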
From: Timo T. <tim...@ik...> - 2017-07-31 11:13:02
|
On Mon, 31 Jul 2017 11:09:09 +0000 "M87tech [Jon]" <m8...@gm...> wrote:

> should gre1 be a /32? still not seeing any traffic try and go down
> gre1

Yes. Read the README.nhrpd. It's a /32, and nhrpd will publish more routes once IKE + NHRP is working.

Timo
|
From: Timo T. <tim...@ik...> - 2017-07-31 11:12:02
|
On Mon, 31 Jul 2017 10:59:19 +0000 "M87tech [Jon]" <m8...@gm...> wrote:

> I've added "update-source" 172.16.0.20 in the bgp commands to see if
> any difference but doesn't seem to have done anything. I thought
> possibly it was binding to wrong interface and not causing ipsec to
> kick in.

Do note that quagga/frr nhrpd drives strongSwan directly. It will request IKE before anything else. Only after IKE and the NHRP registration are done will nhrpd publish the routes to zebra, which allows the bgpd link to succeed.

Timo
|
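[Editor's note] The ordering Timo describes (IKE first, then NHRP registration, and only then route publication to zebra so bgpd can connect) can be sketched as a simple stage gate. This is an illustration with invented names, not FRR code:

```python
# Bring-up stages in the order nhrpd enforces them; each stage requires
# all earlier stages to have completed. Illustrative sketch only.
STAGES = ["ike_sa", "nhrp_registered", "routes_published"]

class SpokeState:
    def __init__(self):
        self.done = set()

    def complete(self, stage):
        idx = STAGES.index(stage)
        missing = [s for s in STAGES[:idx] if s not in self.done]
        if missing:
            raise RuntimeError(f"{stage} requires {missing} first")
        self.done.add(stage)

    def bgp_can_connect(self):
        # bgpd only succeeds once nhrpd has published routes to zebra
        return "routes_published" in self.done

spoke = SpokeState()
assert not spoke.bgp_can_connect()
spoke.complete("ike_sa")
spoke.complete("nhrp_registered")
spoke.complete("routes_published")
assert spoke.bgp_can_connect()
```

In other words, tweaking bgpd settings such as `update-source` cannot kick IPsec into life, because BGP sits at the end of this chain, not the start.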
From: M87tech [Jon] <m8...@gm...> - 2017-07-31 11:09:27
|
should gre1 be a /32? still not seeing any traffic try and go down gre1

On Mon, 31 Jul 2017 at 11:59 M87tech [Jon] <m8...@gm...> wrote:

> I've added "update-source" 172.16.0.20 in the bgp commands to see if any
> difference but doesn't seem to have done anything. I thought possibly it
> was binding to wrong interface and not causing ipsec to kick in.
>
> Cheers,
> Jon.
>
> On Mon, 31 Jul 2017 at 11:32 M87tech [Jon] <m8...@gm...> wrote:
>
>> ok got systemd working
>>
>> Wondering if these earlier messages in frr.log are related, although
>> they seem to stop.
>>
>> *2017/07/31 11:27:58.27 NHRP: INTERFACE_STATE: Cannot find IF ens18 in VRF 0*
>>
>> 2017/07/31 11:23:26.70 NHRP: NHS: Waiting link for 51.15.49.245
>> 2017/07/31 11:23:32.69 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:23:32.72 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:24:06.23 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:25:11.76 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:25:24.05 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:25:26.70 NHRP: NHS: Waiting link for 51.15.49.245
>> 2017/07/31 11:25:54.77 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:26:04.24 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:26:04.24 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:26:37.78 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:27:26.71 NHRP: NHS: Waiting link for 51.15.49.245
>> 2017/07/31 11:27:30.26 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> 2017/07/31 11:27:58.27 NHRP: INTERFACE_ADDRESS_DEL: Cannot find IF 2 in VRF 0
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> *2017/07/31 11:27:58.27 NHRP: INTERFACE_STATE: Cannot find IF ens18 in VRF 0*
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> 2017/07/31 11:27:58.27 NHRP: INTERFACE_ADDRESS_DEL: Cannot find IF 2 in VRF 0
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> 2017/07/31 11:27:58.27 NHRP: INTERFACE_ADDRESS_DEL: Cannot find IF 2 in VRF 0
>> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
>> 2017/07/31 11:27:59.42 NHRP: vici_reconnect: failure connecting VICI socket: Connection refused
>> 2017/07/31 11:28:00.21 BGP: Terminating on signal
>> 2017/07/31 11:28:00.21 ZEBRA: release_daemon_chunks: Released 0 label chunks
>> 2017/07/31 11:28:00.21 ZEBRA: client 15 disconnected. 0 vnc routes removed from the rib
>> 2017/07/31 11:28:00.21 ZEBRA: release_daemon_chunks: Released 0 label chunks
>> 2017/07/31 11:28:00.21 ZEBRA: client 14 disconnected. 0 bgp routes removed from the rib
>> 2017/07/31 11:28:00.23 NHRP: Exiting...
>> 2017/07/31 11:28:00.23 NHRP: Done.
>> 2017/07/31 11:28:00.23 ZEBRA: release_daemon_chunks: Released 0 label chunks
>> 2017/07/31 11:28:00.23 ZEBRA: client 16 disconnected. 0 nhrp routes removed from the rib
>> 2017/07/31 11:28:00.26 ZEBRA: Terminating on signal
>> 2017/07/31 11:28:00.26 ZEBRA: IRDP: Received shutdown notification.
>> 2017/07/31 11:28:27.12 NHRP: VICI: Connected
>> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
>> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
>> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
>> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
>> 2017/07/31 11:28:27.27 NHRP: VICI: Message 1, 1 bytes
>> 2017/07/31 11:28:27.33 NHRP: [0x563ae8cffc40] Resolving 'hub6.wizznet.co.uk'
>> 2017/07/31 11:28:27.33 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:28:27.36 NHRP: [0x563ae8cffc40] Resolved with 1 results
>> 2017/07/31 11:28:27.41 NHRP: NHS: Waiting link for 51.15.49.245
>>
>> On Mon, 31 Jul 2017 at 11:26 M87tech [Jon] <m8...@gm...> wrote:
>>
>>> Sorted the pre-shared key bit, but its still not loading at boot -
>>> that's another matter though.
>>>
>>> now I manually load it with load-conn I see some new info
>>>
>>> root@hub2-nhrp:/home/jon# swanctl --list-conn
>>>
>>> dmvpn: IKEv2, reauthentication every 46800s, rekeying every 14400s
>>>   local: %any
>>>   remote: %any
>>>   local pre-shared key authentication:
>>>     id: hu...@my...oud
>>>   remote pre-shared key authentication:
>>>     id: hu...@my...oud
>>>   dmvpn: TRANSPORT, rekeying every 6000s
>>>     local: dynamic[gre]
>>>     remote: dynamic[gre]
>>>
>>> However tcpdump still showing no attempts for 500 or 4500 :(
>>>
>>> and i see bgp sourcing from the wrong address, still not kicking it
>>> into life.
>>> also same messages in frr.log
>>>
>>> 2017/07/31 11:21:26.65 NHRP: [0x55e9835b6d30] Resolved with 1 results
>>> 2017/07/31 11:21:26.70 NHRP: NHS: Waiting link for 51.15.49.245
>>> 2017/07/31 11:22:11.38 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:22:27.67 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:23:01.20 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:23:26.70 NHRP: NHS: Waiting link for 51.15.49.245
>>> 2017/07/31 11:23:32.69 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:23:32.72 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:24:06.23 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>> 2017/07/31 11:25:11.76 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>>
>>> On Mon, 31 Jul 2017 at 11:22 Timo Teras <tim...@ik...> wrote:
>>>
>>>> On Mon, 31 Jul 2017 10:11:51 +0000
>>>> "M87tech [Jon]" <m8...@gm...> wrote:
>>>>
>>>> > I wonder now if its something to do with the PSK / secret as it just
>>>> > said it was ignoring the unsupported secret when I did the reload.
>>>>
>>>> Could be related.
>>>>
>>>> > root@hub2-nhrp:/home/jon# swanctl --list-conns
>>>>
>>>> If --list-conns returns nothing, it means that charon does not have the
>>>> configuration files loaded. This should print information about the
>>>> 'dmvpn' configuration.
>>>>
>>>> > root@hub2-nhrp:/home/jon# swanctl --reload-settings
>>>> >
>>>> > *root@hub2-nhrp:/home/jon# swanctl --load-all*
>>>> > *ignoring unsupported secret 'dmvpn-secret'*
>>>> > *no authorities found, 0 unloaded*
>>>> > *no pools found, 0 unloaded*
>>>> > *loaded connection 'dmvpn'*
>>>> > *successfully loaded 1 connections, 0 unloaded*
>>>>
>>>> So this loaded the config files. Sounds like systemd did not do it. The
>>>> problem with the secret is also an issue. But after swanctl says it's
>>>> loaded the connection and the secret, then things might look better.
>>>>
>>>> Timo
>>>
>>> --
>>> M87 TECH
>>> Jon Clayton
>>
>> --
>> M87 TECH
>> Jon Clayton
>
> --
> M87 TECH
> Jon Clayton

--
M87 TECH
Jon Clayton
|
From: M87tech [Jon] <m8...@gm...> - 2017-07-31 10:59:37
|
I've added "update-source" 172.16.0.20 in the bgp commands to see if any difference but doesn't seem to have done anything. I thought possibly it was binding to wrong interface and not causing ipsec to kick in.

Cheers,
Jon.

On Mon, 31 Jul 2017 at 11:32 M87tech [Jon] <m8...@gm...> wrote:

> ok got systemd working
>
> Wondering if these earlier messages in frr.log are related, although
> they seem to stop.
>
> *2017/07/31 11:27:58.27 NHRP: INTERFACE_STATE: Cannot find IF ens18 in VRF 0*
>
> 2017/07/31 11:23:26.70 NHRP: NHS: Waiting link for 51.15.49.245
> 2017/07/31 11:23:32.69 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:23:32.72 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:24:06.23 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:25:11.76 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:25:24.05 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:25:26.70 NHRP: NHS: Waiting link for 51.15.49.245
> 2017/07/31 11:25:54.77 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:26:04.24 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:26:04.24 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:26:37.78 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:27:26.71 NHRP: NHS: Waiting link for 51.15.49.245
> 2017/07/31 11:27:30.26 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> 2017/07/31 11:27:58.27 NHRP: INTERFACE_ADDRESS_DEL: Cannot find IF 2 in VRF 0
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> *2017/07/31 11:27:58.27 NHRP: INTERFACE_STATE: Cannot find IF ens18 in VRF 0*
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> 2017/07/31 11:27:58.27 NHRP: INTERFACE_ADDRESS_DEL: Cannot find IF 2 in VRF 0
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> 2017/07/31 11:27:58.27 NHRP: INTERFACE_ADDRESS_DEL: Cannot find IF 2 in VRF 0
> 2017/07/31 11:27:58.27 NHRP: Netlink: Received msg_type 29, msg_flags 0
> 2017/07/31 11:27:59.42 NHRP: vici_reconnect: failure connecting VICI socket: Connection refused
> 2017/07/31 11:28:00.21 BGP: Terminating on signal
> 2017/07/31 11:28:00.21 ZEBRA: release_daemon_chunks: Released 0 label chunks
> 2017/07/31 11:28:00.21 ZEBRA: client 15 disconnected. 0 vnc routes removed from the rib
> 2017/07/31 11:28:00.21 ZEBRA: release_daemon_chunks: Released 0 label chunks
> 2017/07/31 11:28:00.21 ZEBRA: client 14 disconnected. 0 bgp routes removed from the rib
> 2017/07/31 11:28:00.23 NHRP: Exiting...
> 2017/07/31 11:28:00.23 NHRP: Done.
> 2017/07/31 11:28:00.23 ZEBRA: release_daemon_chunks: Released 0 label chunks
> 2017/07/31 11:28:00.23 ZEBRA: client 16 disconnected. 0 nhrp routes removed from the rib
> 2017/07/31 11:28:00.26 ZEBRA: Terminating on signal
> 2017/07/31 11:28:00.26 ZEBRA: IRDP: Received shutdown notification.
> 2017/07/31 11:28:27.12 NHRP: VICI: Connected
> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
> 2017/07/31 11:28:27.27 NHRP: VICI: Message 5, 1 bytes
> 2017/07/31 11:28:27.27 NHRP: VICI: Message 1, 1 bytes
> 2017/07/31 11:28:27.33 NHRP: [0x563ae8cffc40] Resolving 'hub6.wizznet.co.uk'
> 2017/07/31 11:28:27.33 NHRP: Netlink: Received msg_type 28, msg_flags 0
> 2017/07/31 11:28:27.36 NHRP: [0x563ae8cffc40] Resolved with 1 results
> 2017/07/31 11:28:27.41 NHRP: NHS: Waiting link for 51.15.49.245
>
> On Mon, 31 Jul 2017 at 11:26 M87tech [Jon] <m8...@gm...> wrote:
>
>> Sorted the pre-shared key bit, but its still not loading at boot -
>> that's another matter though.
>>
>> now I manually load it with load-conn I see some new info
>>
>> root@hub2-nhrp:/home/jon# swanctl --list-conn
>>
>> dmvpn: IKEv2, reauthentication every 46800s, rekeying every 14400s
>>   local: %any
>>   remote: %any
>>   local pre-shared key authentication:
>>     id: hu...@my...oud
>>   remote pre-shared key authentication:
>>     id: hu...@my...oud
>>   dmvpn: TRANSPORT, rekeying every 6000s
>>     local: dynamic[gre]
>>     remote: dynamic[gre]
>>
>> However tcpdump still showing no attempts for 500 or 4500 :(
>>
>> and i see bgp sourcing from the wrong address, still not kicking it
>> into life.
>> also same messages in frr.log
>>
>> 2017/07/31 11:21:26.65 NHRP: [0x55e9835b6d30] Resolved with 1 results
>> 2017/07/31 11:21:26.70 NHRP: NHS: Waiting link for 51.15.49.245
>> 2017/07/31 11:22:11.38 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:22:27.67 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:23:01.20 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:23:26.70 NHRP: NHS: Waiting link for 51.15.49.245
>> 2017/07/31 11:23:32.69 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:23:32.72 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:24:06.23 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:24:37.97 NHRP: Netlink: Received msg_type 28, msg_flags 0
>> 2017/07/31 11:25:11.76 NHRP: Netlink: Received msg_type 28, msg_flags 0
>>
>> On Mon, 31 Jul 2017 at 11:22 Timo Teras <tim...@ik...> wrote:
>>
>>> On Mon, 31 Jul 2017 10:11:51 +0000
>>> "M87tech [Jon]" <m8...@gm...> wrote:
>>>
>>> > I wonder now if its something to do with the PSK / secret as it just
>>> > said it was ignoring the unsupported secret when I did the reload.
>>>
>>> Could be related.
>>>
>>> > root@hub2-nhrp:/home/jon# swanctl --list-conns
>>>
>>> If --list-conns returns nothing, it means that charon does not have the
>>> configuration files loaded. This should print information about the
>>> 'dmvpn' configuration.
>>>
>>> > root@hub2-nhrp:/home/jon# swanctl --reload-settings
>>> >
>>> > *root@hub2-nhrp:/home/jon# swanctl --load-all*
>>> > *ignoring unsupported secret 'dmvpn-secret'*
>>> > *no authorities found, 0 unloaded*
>>> > *no pools found, 0 unloaded*
>>> > *loaded connection 'dmvpn'*
>>> > *successfully loaded 1 connections, 0 unloaded*
>>>
>>> So this loaded the config files. Sounds like systemd did not do it. The
>>> problem with the secret is also an issue. But after swanctl says it's
>>> loaded the connection and the secret, then things might look better.
>>>
>>> Timo
>>
>> --
>> M87 TECH
>> Jon Clayton
>
> --
> M87 TECH
> Jon Clayton

--
M87 TECH
Jon Clayton
|