Re: [mpls-linux-general] ldpd on FC5
From: Adrian P. <adr...@gm...> - 2009-12-21 07:54:09
Hello,
I'm not sure if I remember correctly, but 3 or 4 years ago, when I tested
ldpd, I had similar problems (ldpd would die when I issued "mpls ip"). I
remember I couldn't use it for anything - I didn't manage to learn/distribute
labels, and it wasn't stable.
You should do a packet capture and see whether ldpd sends a notification
before dying - there might be a mismatch in parameters, or there could be a
problem with ldpd itself.
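Something like this should capture everything the two peers exchange
(assuming the LDP-facing interface is eth1 - adjust the interface and file
name to your setup):

# hellos are UDP port 646, the session itself is TCP port 646
tcpdump -i eth1 -s 0 -w ldp.pcap port 646

You can then open ldp.pcap in Ethereal/Wireshark and follow the session.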
Looking back over the archives, here's what we managed to do (remember, these
are mails from 2006!):
I've managed to get two LDP peers to communicate with each other (and
avoid that nasty hang after handshaking) using this setup:
LER1                                                  LER2
eth1-------------------------------------------------eth0
172.16.0.2                                      172.16.0.3
On LER1, in ldpd I issued these commands:
uml-1# conf t
uml-1(config)# mpls ldp
uml-1(config-ldp)# transport-address 172.16.0.2
uml-1(config-ldp)# exit
uml-1(config)# int eth1
uml-1(config-if)# mpls ip
uml-1(config-if-ldp)#
uml-1(config-if-ldp)# exit
uml-1(config-if)# exit
uml-1(config)# exit
uml-1# sh ldp
LSR-ID: 7f000001 Admin State: ENABLED
Transport Address: ac100002
Control Mode: ORDERED Repair Mode: GLOBAL
Propogate Release: TRUE Label Merge: TRUE
Retention Mode: LIBERAL Loop Detection Mode: NONE
TTL-less-domain: FALSE
Local TCP Port: 646 Local UDP Port: 646
Keep-alive Time: 45 Keep-alive Interval: 15
Hello Time: 15 Hello Interval: 5
------------------------------------------------------------------------------------------
On LER2, in ldpd I issued these commands:
uml-1# conf t
uml-1(config)# mpls ldp
uml-1(config-ldp)# transport-address 172.16.0.3
uml-1(config-ldp)# exit
uml-1(config)# int eth0
uml-1(config-if)# mpls ip
uml-1(config-if-ldp)#
uml-1(config-if-ldp)# exit
uml-1(config-if)# exit
uml-1(config)# exit
uml-1# sh ldp
LSR-ID: 7f000001 Admin State: ENABLED
Transport Address: ac100003
Control Mode: ORDERED Repair Mode: GLOBAL
Propogate Release: TRUE Label Merge: TRUE
Retention Mode: LIBERAL Loop Detection Mode: NONE
TTL-less-domain: FALSE
Local TCP Port: 646 Local UDP Port: 646
Keep-alive Time: 45 Keep-alive Interval: 15
Hello Time: 15 Hello Interval: 5
------------------------------------------------------------------------------------------
I thought of setting an explicit transport address because I found out
that, after the discovery of the potential LDP neighbours, LDP session
establishment can begin. The active and passive roles are determined using
the transport address: the peer with the greater transport address takes
the active role in establishing the session. If I didn't set a transport
address, the default one would be 0x00000002 for both entities, and I
guess that's where things would freeze - each side expected a different
transport address.
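For the setup above that works out like this (these are the same values you
see as "Transport Address" in "sh ldp", just in hex):

172.16.0.2 -> 0xac100002   (LER1 - passive)
172.16.0.3 -> 0xac100003   (LER2 - active)

so LER2 should be the one opening the TCP connection to port 646 on LER1.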
After this handshake, I get these periodic messages from ldpd:
On LER1:
OUT: ldp_session_attempt_setup: MPLS_NON_BLOCKING
addr delete: 0x82df450
addr delete: 0x82fd8e0
addr delete: 0x8306aa0
session delete
On LER2:
OUT: Receive an unknown notification: 00000000
addr delete: 0x80a3c08
addr delete: 0x80a3ce0
session delete
If I issue a "show ldp neighbor" on LER2, I get this:
uml-1# sh ldp neighbor
Peer LDP Ident: 127.0.0.1:0; Local LDP Ident: 127.0.0.1:0
TCP connection: n/a
State: discovery; Msgs sent/recv: -/-;
Up time: -
LDP discovery sources:
eth0
but I don't think a complete session is established. When I issue
"sh ldp session", this is the output:
uml-1# sh ldp session
no established sessions
This is probably because I didn't set some commands, but I don't know
which ones are necessary.
Another question would be: how do I set a FEC (to be advertised to the
other LSRs)?
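As far as I understand - and I haven't verified this - this ldpd has no
explicit "add fec" command; it builds its FECs from the routes it gets from
zebra. So my guess for getting an extra prefix advertised would be to simply
put the route into zebra, e.g. in zebra.conf (the prefix and next hop here
are just placeholders):

ip route 10.99.0.0/24 172.16.0.3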
... and later on:
Ok, I've managed to go one step ahead. Following the Ethereal packet stream,
I noticed that one peer was issuing an LDP Notification just before its
disconnection. The notification carried a status code of 0x11 (which,
according to the LDP RFC, means 'Session Rejected / Parameters
Advertisement Mode').
So, I changed the distribution mode on both peers from DoD (Downstream on
Demand) to DU (Downstream Unsolicited) and now it goes further, but only for
a little while.
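By the way, if you want to find that notification in a capture without
clicking through the whole stream in Ethereal, something along these lines
should work with tethereal/tshark (exact option names vary between versions):

tshark -r ldp.pcap -R ldp -V | grep -B 2 -A 6 Notification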
For a brief second I can get a session, with FECs and neighbour information,
like this:
uml-1# sh ldp neigh
Peer LDP Ident: 172.16.0.3:0; Local LDP Ident: 172.16.0.2:0
TCP connection: 0.0.0.0.646 - 172.16.0.3.44012
State: OPERATIONAL; Msgs sent/recv: 5/17; UNSOLICITED
Up time: 00:00:01
LDP discovery sources:
eth1
Addresses bound to peer:
172.16.0.2 172.16.0.3 172.16.1.3 10.1.1.1
141.85.43.122
The thing is that 2 seconds later, both ldpd's (on both peers) crash (the
output is: Aborted.). Ethereal captures all sorts of LDP packets indicating
that the session was being established, because I saw that FECs were
exchanged, together with proposed labels.
After all the handshaking is done, one of the peers sends a TCP FIN, and
then (after the other peer replies with its own FIN) they both die.
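One thing that might help (I haven't tried it myself) is running ldpd in the
foreground under gdb to get a backtrace from that abort - roughly like this,
assuming the binary and config file are in the usual quagga locations:

gdb /usr/sbin/ldpd
(gdb) run -f /etc/quagga/ldpd.conf
... reproduce the crash, then:
(gdb) bt

That backtrace would say a lot more than just "Aborted.".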
If you want, I have a packet dump captured with Ethereal and I will attach it
tomorrow.
Tomorrow I want to try out the new version of LDP, because today I had other
things to try out.
I guess that even if I used the older version, in the end I learned a few
things about LDP and the initialization process. :)
If you have any ideas, I'd like to hear them.
This is all I've got. Hope it helps!
Regards,
Adrian
On Sun, Dec 20, 2009 at 5:22 PM, Bojana Petrovic <bo...@gm...> wrote:
> Hi everybody,
>
> Due to compilation issues with quagga-mpls on Fedora 10, I have decided to
> try the only quagga-mpls package I could find at the moment on the internet,
> quagga-0.99.6-01.fc5.mpls.1.956b.i386.rpm. I set up this quagga-mpls RPM in a
> Fedora 5 environment, using the mpls packages found in the repo.
> I got the LDP daemon running, but when I tried to configure LDP like this:
>
> vtysh
> conf t
> mpls ldp
> exit
> int eth1
> mpls ip
> end
>
> I got: "Warning: closing connection to ldpd because of an I/O error", and the
> ldpd daemon is closed.
>
> If I try to make an ldpd.conf file with a line including "mpls ip", similar to
> this:
> !
> interface eth0
> mpls ip
> !
> and then start the ldpd daemon, I get this error:
> "Error occured during reading below line
> mpls ip"
>
> Additionally, if I configure the interface in zebra.conf like this:
> !
> interface eth1
> mpls labelspace 0
> ip address 100.0.0.1/24
> !
> After starting zebra, these lines are converted into this:
> !
> interface eth1
> mpls labelspace 0
> ip address 100.0.0.1/24
> ipv6 nd suppress-ra
> !
>
> Does anybody know whether this environment can work, and whether it is possible
> to solve these errors?
>
> Best regards,
> Bojana
>