Re: [mpls-linux-general] how much of LDP is actually working?
From: Razvan D. <raz...@gm...> - 2005-08-12 11:18:55
On 8/10/05, James R. Leu <jl...@mi...> wrote:
> I will try and look at your data tonight and see if I can find anything.
> But the first thing is you need to get OSPF running.
>
I have finally managed to get ospfd and ldpd working, using the quagga-mpls
from your development tree. The problem was a piece of code that had to be
removed (I just wrapped it inside a comment), at line 2433:
  else if (oi->state == ISM_InterfaceDown)
    {
      zlog_warn ("Ignoring packet from [%s] received on interface that is "
                 "down [%s]",
                 inet_ntoa (iph->ip_src), ifp->name);
      stream_free (ibuf);
      return 0;
    }
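For reference, my local change is nothing more elaborate than commenting that
block out in place, roughly like this (a quick workaround so I could keep
testing, not a proposed fix):

  /* disabled locally: with this check in place ospfd never gets to
     process packets on an interface that zebra reports as down */
  /*
  else if (oi->state == ISM_InterfaceDown)
    {
      zlog_warn ("Ignoring packet from [%s] received on interface that is "
                 "down [%s]",
                 inet_ntoa (iph->ip_src), ifp->name);
      stream_free (ibuf);
      return 0;
    }
  */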
ospfd can't run properly because it is told (probably by the zebra daemon)
that the interface is down; however, if I touch that interface from outside
ospfd (for example, by adding a static route with ip route), the interface
comes up and ospfd starts running.
I don't know whether this is a quagga bug; I haven't dug into the code yet
(I'll probably do that soon).
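To be concrete, the kind of command that kicked the interface back to life
was just a static route over it, something like the line below (the prefix
and next hop are only an example taken from my topology, not the exact
command I typed):

# example only: adding any static route out of the OSPF-enabled interface
# was enough to make it come up and get ospfd going
ip route add 11.0.1.0/30 via 11.0.2.1 dev eth1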
I looked at the original quagga-0.98.0 sources and this piece of code is
absent there. It makes sense to add it, but what was your reason for
adding it?
I don't know if that piece of code was supposed to keep things on track;
removing it made LDP work, although "work" is not quite the right word for
how it behaves. I know it's a dynamic protocol, but it seems to be too
dynamic: in less than 3 minutes it kept changing label values (I've been
using 4 machines in a linear topology), and a ping didn't work (it reached
the other end but got lost on the way back). Here is the output of two
consecutive "show ldp database" commands on the ldpd CLI:
ldpd_mpls2# show ldp database
11.0.3.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.2.0/30 local binding: label: gen 10000
11.0.3.0/30 local binding: label: gen 10001
11.0.4.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.2.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.1.0/30 local binding: label: gen 10002
11.0.1.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.2.0/30 local binding: label: gen 10102
11.0.3.0/30 local binding: label: gen 10103
11.0.1.0/30 remote binding: label: gen 10090 lsr: 11.0.1.2:0 ingress
11.0.1.0/30 local binding: label: gen 10105
11.0.2.0/30 remote binding: no outlabel lsr: 11.0.1.2:0
11.0.3.0/30 remote binding: no outlabel lsr: 11.0.1.2:0
ldpd_mpls2# show ld
ldpd_mpls2# show ldp da
ldpd_mpls2# show ldp database
11.0.3.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.2.0/30 local binding: label: gen 10000
11.0.3.0/30 local binding: label: gen 10001
11.0.4.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.2.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
11.0.1.0/30 local binding: label: gen 10002
11.0.1.0/30 remote binding: no outlabel lsr: 11.0.3.2:0
As you can see, some labels start disappearing (and a little later ldpd
crashes :) ).
Somewhere between those two commands I used the mpls utility to look at some
of the kernel MPLS objects that ldpd had created; here is the output:
root@MPLS-2:/home/mpls2# mpls ilm show
ILM entry label gen 10117 labelspace 0 proto ipv4
pop peek (0 bytes, 0 pkts, 0 dropped)
ILM entry label gen 10115 labelspace 0 proto ipv4
pop peek (0 bytes, 0 pkts, 0 dropped)
ILM entry label gen 10114 labelspace 0 proto ipv4
pop peek (0 bytes, 0 pkts, 0 dropped)
ILM entry label gen 10002 labelspace 0 proto ipv4
pop peek (59928 bytes, 681 pkts, 0 dropped)
ILM entry label gen 10001 labelspace 0 proto ipv4
pop peek (0 bytes, 0 pkts, 0 dropped)
ILM entry label gen 10000 labelspace 0 proto ipv4
pop peek (0 bytes, 0 pkts, 0 dropped)
root@MPLS-2:/home/mpls2# mpls nhlfe show
NHLFE entry key 0x0000001f mtu 1496 propagate_ttl
push gen 10106 set eth1 ipv4 11.0.2.1 (1024 bytes, 14 pkts, 0 dropped)
NHLFE entry key 0x0000001e mtu 1496 propagate_ttl
push gen 10102 set eth1 ipv4 11.0.2.1 (5150 bytes, 65 pkts, 0 dropped)
NHLFE entry key 0x0000001d mtu 1496 propagate_ttl
push gen 10098 set eth1 ipv4 11.0.2.1 (5150 bytes, 65 pkts, 0 dropped)
NHLFE entry key 0x0000001c mtu 1496 propagate_ttl
push gen 10094 set eth1 ipv4 11.0.2.1 (5150 bytes, 65 pkts, 0 dropped)
NHLFE entry key 0x0000001b mtu 1496 propagate_ttl
push gen 10090 set eth1 ipv4 11.0.2.1 (5150 bytes, 65 pkts, 0 dropped)
NHLFE entry key 0x0000001a mtu 1496 propagate_ttl
push gen 10086 set eth1 ipv4 11.0.2.1 (5062 bytes, 64 pkts, 0 dropped)
NHLFE entry key 0x00000019 mtu 1496 propagate_ttl
push gen 10082 set eth1 ipv4 11.0.2.1 (4998 bytes, 63 pkts, 0 dropped)
NHLFE entry key 0x00000018 mtu 1496 propagate_ttl
push gen 10078 set eth1 ipv4 11.0.2.1 (4718 bytes, 59 pkts, 0 dropped)
NHLFE entry key 0x00000017 mtu 1496 propagate_ttl
push gen 10074 set eth1 ipv4 11.0.2.1 (4884 bytes, 61 pkts, 0 dropped)
NHLFE entry key 0x00000016 mtu 1496 propagate_ttl
push gen 10070 set eth1 ipv4 11.0.2.1 (5094 bytes, 62 pkts, 0 dropped)
NHLFE entry key 0x00000015 mtu 1496 propagate_ttl
push gen 10066 set eth1 ipv4 11.0.2.1 (4902 bytes, 61 pkts, 0 dropped)
NHLFE entry key 0x00000014 mtu 1496 propagate_ttl
push gen 10064 set eth1 ipv4 11.0.2.1 (4990 bytes, 60 pkts, 0 dropped)
NHLFE entry key 0x00000013 mtu 1496 propagate_ttl
push gen 10060 set eth1 ipv4 11.0.2.1 (4864 bytes, 61 pkts, 0 dropped)
NHLFE entry key 0x00000012 mtu 1496 propagate_ttl
push gen 10056 set eth1 ipv4 11.0.2.1 (4846 bytes, 61 pkts, 0 dropped)
NHLFE entry key 0x00000011 mtu 1496 propagate_ttl
push gen 10052 set eth1 ipv4 11.0.2.1 (3960 bytes, 50 pkts, 0 dropped)
NHLFE entry key 0x00000010 mtu 1496 propagate_ttl
push gen 10050 set eth1 ipv4 11.0.2.1 (1650 bytes, 20 pkts, 0 dropped)
NHLFE entry key 0x0000000f mtu 1496 propagate_ttl
push gen 10046 set eth1 ipv4 11.0.2.1 (1384 bytes, 17 pkts, 0 dropped)
NHLFE entry key 0x0000000e mtu 1496 propagate_ttl
push gen 10044 set eth1 ipv4 11.0.2.1 (1496 bytes, 15 pkts, 0 dropped)
NHLFE entry key 0x0000000d mtu 1496 propagate_ttl
push gen 10040 set eth1 ipv4 11.0.2.1 (1066 bytes, 16 pkts, 0 dropped)
NHLFE entry key 0x0000000c mtu 1496 propagate_ttl
push gen 10036 set eth1 ipv4 11.0.2.1 (1102 bytes, 15 pkts, 0 dropped)
NHLFE entry key 0x0000000b mtu 1496 propagate_ttl
push gen 10034 set eth1 ipv4 11.0.2.1 (1546 bytes, 15 pkts, 0 dropped)
NHLFE entry key 0x0000000a mtu 1496 propagate_ttl
push gen 10030 set eth1 ipv4 11.0.2.1 (1308 bytes, 17 pkts, 0 dropped)
NHLFE entry key 0x00000009 mtu 1496 propagate_ttl
push gen 10026 set eth1 ipv4 11.0.2.1 (1104 bytes, 16 pkts, 0 dropped)
NHLFE entry key 0x00000008 mtu 1496 propagate_ttl
push gen 10024 set eth1 ipv4 11.0.2.1 (1192 bytes, 15 pkts, 0 dropped)
NHLFE entry key 0x00000007 mtu 1496 propagate_ttl
push gen 10020 set eth1 ipv4 11.0.2.1 (1066 bytes, 16 pkts, 0 dropped)
NHLFE entry key 0x00000006 mtu 1496 propagate_ttl
push gen 10016 set eth1 ipv4 11.0.2.1 (1066 bytes, 16 pkts, 0 dropped)
NHLFE entry key 0x00000005 mtu 1496 propagate_ttl
push gen 10012 set eth1 ipv4 11.0.2.1 (1014 bytes, 15 pkts, 0 dropped)
NHLFE entry key 0x00000004 mtu 1496 propagate_ttl
push gen 10008 set eth1 ipv4 11.0.2.1 (1026 bytes, 15 pkts, 0 dropped)
NHLFE entry key 0x00000003 mtu 1496 propagate_ttl
push gen 10004 set eth1 ipv4 11.0.2.1 (1142 bytes, 16 pkts, 0 dropped)
NHLFE entry key 0x00000002 mtu 1496 propagate_ttl
push gen 10000 set eth1 ipv4 11.0.2.1 (1408 bytes, 14 pkts, 0 dropped)
root@MPLS-2:/home/mpls2# mpls xc show
root@MPLS-2:/home/mpls2#
I don't know why ldpd keeps creating NHLFE entries (which seems to be the
cause, or at least the effect, of so many labels being generated).
Do you have any ideas why these things happen? If you do, some pointers on
where to start looking for bugs in the code would be appreciated.
Razvan