mpls-linux-general Mailing List for MPLS for Linux (Page 126)
From: GroupShield f. E. (EXCHANGE_GERAL)
<NAI...@pt...> - 2002-11-06 19:04:12
|
Action Taken: The message was quarantined and replaced with a text informing the recipient of the action taken.
To: mpl...@li... <mpl...@li...>
From: Mail Delivery System <Mai...@li...>
Sent: 1820251392,29525448
Subject: [mpls-linux-general] Mail delivery failed: returning message to sender
Attachment Details:-
Attachment Name: N/A
File: Infected.msg
Infected? Yes Repaired? No Blocked? No Deleted? No
Virus Name: Exploit-MIME.gen.exe |
|
From: ANTIGEN_BRIGADOON <ANT...@sp...> - 2002-11-06 18:48:47
|
Antigen for Exchange found Body of Message infected with VIRUS= Exploit-MIME.gen.exe (NAI) virus. The file is currently Removed. The message, "[mpls-linux-general] Mail delivery failed: returning message to sender", was sent from Mail Delivery System and was discovered in Realtime Scan Job\Yoon, Jaebin\Inbox located at Netcom Systems/NETCOMSYSTEMS/BRIGADOON. ========================================================= E-mail confidentiality -------------------------------- This e-mail contains confidential and / or privileged information belonging to Spirent Communications Inc, its affiliates and / or subsidiaries. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution and / or the taking of any action based upon reliance on the contents of this transmission is strictly forbidden. If you have received this message in error please notify the sender by return e-mail and delete it from your system. If you require assistance, please contact our IT department. Spirent Communications, 26750 Agoura Rd. Calabasas, Ca 91302 Tel No. 818-676-2586 Fax No. 818-676-2700 ========================================================= |
|
From: James R. L. <jl...@ne...> - 2002-11-05 15:36:21
|
From what I've seen below, it looks like I need to do a host-to-network
conversion before storing anything in the zebra rib. The next release
will contain the fix.
I will also include the change for distributing static routes. In the
future I would like to have an access-list or a route-map control this, but
until I (or someone else) has time to code it up, I think it would be
useful to be egress for static routes by default.
Thanks for your work!
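As an illustration of the kind of change involved (a sketch only, not
necessarily the exact code that will ship), the conversion could be applied
where the prefix is copied into zebra's prefix structure, e.g. in
impl_tree.c:ldp_tree_get() from the gdb listing below; the same conversion
would also be needed wherever prefixes are inserted into the tree, so that
inserts and lookups agree:
---------------------------------------------------------------
--- impl_tree.c.orig
+++ impl_tree.c
@@ -85,3 +85,3 @@
 p.family = AF_INET;
 p.prefixlen = length;
- p.u.prefix4.s_addr = key;
+ p.u.prefix4.s_addr = htonl(key);  /* key arrives in host byte order */
---------------------------------------------------------------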
> But I noticed that of the following static routes only one was propagated:
>
> 1.110.10.0/24
> 2.110.10.0/24
>
> The problem is that the prefix_match() function only works correctly when
> called with prefixes in network order. It gets these two prefixes in host
> order and regards the two routes as identical.
>
> Please find attached the interesting part of my debug session. It starts where
> the router wants to export its routes (a session was just established) and
> ldp_label_mapping_initial_callback() is called for the first time.
>
> Here is what I understood from the debug output:
>
> The function ldp_label_mapping_initial_callback() goes through a list of FECs
> it knows via the Recognize_New_Fec() function from startup.
> It then tries to check whether these FECs have already been sent to the
> session's partner. For this, the function ldp_attr_find_upstream_state()
> usually returns NULL.
> This is also true when it comes to the route 1.110.10.0/24. But when it comes
> to the next route 2.110.10.0/24 it returns non-NULL! This is because the
> function finds the FEC entry of 1.110.10.0/24 and thinks they are identical.
>
> At the end of the gdb session I executed the backtrace command.
>
> What do you think?
>
>
> ----------------------- Attached gdb session protocol -----------------------
> Breakpoint 5, ldp_label_mapping_initial_callback (timer=0x80b9ea8,
> extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:444
> 444 if (ldp_policy_export_check(g->user_data, &fec, nh_addr,
> nh_session) ==
> (gdb) p/x fec
> $67 = {_refcnt = 0x0, _global = {lle_next = 0x0, lle_prev = 0x0}, _inlabel = {
> lle_next = 0x0, lle_prev = 0x0}, _outlabel = {lle_next = 0x0,
> lle_prev = 0x0}, _tree = {lle_next = 0x0, lle_prev = 0x0}, _addr = {
> lle_next = 0x0, lle_prev = 0x0}, _fec = {lle_next = 0x0, lle_prev = 0x0},
> _nh = {lle_next = 0x0, lle_prev = 0x0}, fs_root_us = {llh_first = 0x0,
> llh_last = 0x0}, fs_root_ds = {llh_first = 0x0, llh_last = 0x0}, prefix = {
> protocol = 0x2, u = {ipv6 = {0x0, 0xc, 0x2, 0x1, 0x0 <repeats 12 times>},
> ipv4 = 0x1020c00}}, prefix_len = 0x18, type = 0x1}
> (gdb) c
> Continuing.
>
> Breakpoint 5, ldp_label_mapping_initial_callback (timer=0x80b9ea8,
> extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:444
> 444 if (ldp_policy_export_check(g->user_data, &fec, nh_addr,
> nh_session) ==
> (gdb) p/x fec
> $68 = {_refcnt = 0x0, _global = {lle_next = 0x0, lle_prev = 0x0}, _inlabel = {
> lle_next = 0x0, lle_prev = 0x0}, _outlabel = {lle_next = 0x0,
> lle_prev = 0x0}, _tree = {lle_next = 0x0, lle_prev = 0x0}, _addr = {
> lle_next = 0x0, lle_prev = 0x0}, _fec = {lle_next = 0x0, lle_prev = 0x0},
> _nh = {lle_next = 0x0, lle_prev = 0x0}, fs_root_us = {llh_first = 0x0,
> llh_last = 0x0}, fs_root_ds = {llh_first = 0x0, llh_last = 0x0}, prefix = {
> protocol = 0x2, u = {ipv6 = {0x0, 0xa, 0x6e, 0x1, 0x0 <repeats 12 times>},
> ipv4 = 0x16e0a00}}, prefix_len = 0x18, type = 0x1}
> (gdb) c
> Continuing.
> ldp_label_mapping_with_xc: enter
> ENTER: Prepare_Label_Mapping_Attributes
> EXIT: Prepare_Label_Mapping_Attributes
> ENTER: ldp_label_mapping_send
> OUT: In Label Added
> OUT: LPD Header : protocolVersion = 1
> OUT: pduLength = 33
> OUT: lsrAddress = a0a0964
> OUT: labelSpace = 0
> OUT: LABEL MAPPING MSG ***START***:
> OUT: baseMsg : uBit = 0
> OUT: msgType = 400
> OUT: msgLength = 23
> OUT: msgId = 8
> OUT: fecTlv:
> OUT: Tlv:
> OUT: BaseTlv: uBit = 0
> OUT: fBit = 0
> OUT: type = 100
> OUT: length = 7
> OUT: fecTlv->numberFecElements = 1
> OUT: elem 0 type is 2
> OUT: Fec Element : type = 2, addFam = 1, preLen = 24, address =
> 16e0a00
> OUT:
> OUT: fecTlv.wcElemExists = 0
> OUT: genLblTlv:
> OUT: Tlv:
> OUT: BaseTlv: uBit = 0
> OUT: fBit = 0
> OUT: type = 200
> OUT: length = 4
> OUT: genLbl data: label = 16
> OUT: Label mapping msg does not have atm label Tlv
> OUT: Label mapping msg does not have fr label Tlv
> OUT: Label mapping msg does not have hop count Tlv
> OUT: Label mapping msg does not have path vector Tlv
> OUT: Label mapping msg does not have label messageId Tlv
> OUT: Label mapping msg does not have LSPID Tlv
> OUT: Label mapping msg does not have traffic Tlv
> OUT: LABEL MAPPING MSG ***END***:
> OUT: Label Mapping Sent to 0a0a0d01:0 for 016e0a00/24
> EXIT: ldp_label_mapping_send
> ldp_label_mapping_with_xc: exit
>
> Breakpoint 5, ldp_label_mapping_initial_callback (timer=0x80b9ea8,
> extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:444
> 444 if (ldp_policy_export_check(g->user_data, &fec, nh_addr,
> nh_session) ==
> (gdb) p/x fec
> $69 = {_refcnt = 0x0, _global = {lle_next = 0x0, lle_prev = 0x0}, _inlabel = {
> lle_next = 0x0, lle_prev = 0x0}, _outlabel = {lle_next = 0x0,
> lle_prev = 0x0}, _tree = {lle_next = 0x0, lle_prev = 0x0}, _addr = {
> lle_next = 0x0, lle_prev = 0x0}, _fec = {lle_next = 0x0, lle_prev = 0x0},
> _nh = {lle_next = 0x0, lle_prev = 0x0}, fs_root_us = {llh_first = 0x0,
> llh_last = 0x0}, fs_root_ds = {llh_first = 0x0, llh_last = 0x0}, prefix = {
> protocol = 0x2, u = {ipv6 = {0x0, 0xa, 0x6e, 0x2, 0x0 <repeats 12 times>},
> ipv4 = 0x26e0a00}}, prefix_len = 0x18, type = 0x1}
> (gdb) n
> 452 if ((us_attr = ldp_attr_find_upstream_state(g, s, &fec,
> (gdb) s
> ldp_attr_find_upstream_state (g=0x80b1020, s=0x80b7bf8, f=0xbffff850,
> state=LDP_LSP_STATE_MAP_SENT) at ldp_attr.c:172
> 172 ldp_attr_list *us_list = ldp_attr_find_upstream_all(g, s, f);
> (gdb) s
> ldp_attr_find_upstream_all (g=0x80b1020, s=0x80b7bf8, f=0xbffff850)
> at ldp_attr.c:574
> 574 LDP_ASSERT(s && f && g);
> (gdb) s
> 576 if ((fnode = _ldp_attr_get_fec2(g, f, LDP_FALSE)) == NULL) {
> (gdb) s
> _ldp_attr_get_fec2 (g=0x80b1020, f=0xbffff850, flag=LDP_FALSE)
> at ldp_attr.c:747
> 747 ldp_fec *fnode = NULL;
> (gdb) n
> 749 if (ldp_tree_get(g->fec_tree, f->prefix.u.ipv4, f->prefix_len,
> (gdb) s
> ldp_tree_get (tree=0x80b0b80, key=40765952, length=24, info=0xbffff7a8)
> at impl_tree.c:85
> 85 p.family = AF_INET;
> (gdb) p/x tree
> $70 = 0x80b0b80
> (gdb) p/x *tree
> Attempt to dereference a generic pointer.
> (gdb) list
> 80 void **info)
> 81 {
> 82 struct route_node *node;
> 83 struct prefix p;
> 84
> 85 p.family = AF_INET;
> 86 p.prefixlen = length;
> 87 p.u.prefix4.s_addr = key;
> 88
> 89 if ((node = route_node_lookup(tree,&p))) {
> (gdb) s
> 86 p.prefixlen = length;
> (gdb)
> 87 p.u.prefix4.s_addr = key;
> (gdb)
> 89 if ((node = route_node_lookup(tree,&p))) {
> (gdb)
> route_node_lookup (table=0x80b0b80, p=0xbffff76c) at table.c:300
> 300 node = table->top;
> (gdb) p/x *table
> $71 = {top = 0x80bb588}
> (gdb) s
> 302 while (node && node->p.prefixlen <= p->prefixlen &&
> (gdb) list -
> 292 #endif /* HAVE_IPV6 */
> 293
> 294 /* Lookup same prefix node. Return NULL when we can't find route. */
> 295 struct route_node *
> 296 route_node_lookup (struct route_table *table, struct prefix *p)
> 297 {
> 298 struct route_node *node;
> 299
> 300 node = table->top;
> 301
> (gdb) list
> 302 while (node && node->p.prefixlen <= p->prefixlen &&
> 303 prefix_match (&node->p, p))
> 304 {
> 305 if (node->p.prefixlen == p->prefixlen && node->info)
> 306 return route_lock_node (node);
> 307
> 308 node = node->link[check_bit(&p->u.prefix, node->p.prefixlen)];
> 309 }
> 310
> 311 return NULL;
> (gdb) p/x node
> $72 = 0x80bb588
> (gdb) p/x *node
> $73 = {p = {family = 0x2, prefixlen = 0x18, u = {prefix = 0x0, prefix4 = {
> s_addr = 0x16e0a00}, lp = {id = {s_addr = 0x16e0a00}, adv_router = {
> s_addr = 0x0}}, val = {0x0, 0xa, 0x6e, 0x1, 0x0, 0x0, 0x0, 0x0}}},
> table = 0x80b0b80, parent = 0x0, link = {0x0, 0x0}, lock = 0x1,
> info = 0x80bb518, aggregate = 0x0}
> (gdb) s
> 303 prefix_match (&node->p, p))
> (gdb) p/x p
> $74 = 0xbffff76c
> (gdb) p/x *p
> $75 = {family = 0x2, prefixlen = 0x18, u = {prefix = 0x0, prefix4 = {
> s_addr = 0x26e0a00}, lp = {id = {s_addr = 0x26e0a00}, adv_router = {
> s_addr = 0xbffff7fc}}, val = {0x0, 0xa, 0x6e, 0x2, 0xfc, 0xf7, 0xff,
> 0xbf}}}
> (gdb) p/x &node->p
> $76 = 0x80bb588
> (gdb) p/x *&node->p
> $77 = {family = 0x2, prefixlen = 0x18, u = {prefix = 0x0, prefix4 = {
> s_addr = 0x16e0a00}, lp = {id = {s_addr = 0x16e0a00}, adv_router = {
> s_addr = 0x0}}, val = {0x0, 0xa, 0x6e, 0x1, 0x0, 0x0, 0x0, 0x0}}}
> (gdb) s
> prefix_match (n=0x80bb588, p=0xbffff76c) at prefix.c:75
> 75 u_char *np = (u_char *)&n->u.prefix;
> (gdb) list -
> 65 }
> 66
> 67 /* If n includes p prefix then return 1 else return 0. */
> 68 int
> 69 prefix_match (struct prefix *n, struct prefix *p)
> 70 {
> 71 int offset;
> 72 int shift;
> 73
> 74 /* Set both prefix's head pointer. */
> (gdb) list
> 75 u_char *np = (u_char *)&n->u.prefix;
> 76 u_char *pp = (u_char *)&p->u.prefix;
> 77
> 78 /* If n's prefix is longer than p's one return 0. */
> 79 if (n->prefixlen > p->prefixlen)
> 80 return 0;
> 81
> 82 offset = n->prefixlen / PNBBY;
> 83 shift = n->prefixlen % PNBBY;
> 84
> (gdb) n
> 76 u_char *pp = (u_char *)&p->u.prefix;
> (gdb) n
> 79 if (n->prefixlen > p->prefixlen)
> (gdb) p/x *np
> $78 = 0x0
> (gdb) p/x *pp
> $79 = 0x0
> (gdb) n
> 82 offset = n->prefixlen / PNBBY;
> (gdb) n
> 83 shift = n->prefixlen % PNBBY;
> (gdb) n
> 85 if (shift)
> (gdb) p offset
> $80 = 3
> (gdb) p shift
> $81 = 0
> (gdb) n
> 89 while (offset--)
> (gdb) n
> 90 if (np[offset] != pp[offset])
> (gdb) p np[offset]
> $82 = 110 'n'
> (gdb) n
> 92 return 1;
> (gdb) p np[0]
> $83 = 0 '\0'
> (gdb) p np[1]
> $84 = 10 '\n'
> (gdb) p np[2]
> $85 = 110 'n'
> (gdb) p np[3]
> $86 = 1 '\001'
> (gdb) p pp[0]
> $87 = 0 '\0'
> (gdb) p pp[1]
> $88 = 10 '\n'
> (gdb) p pp[2]
> $89 = 110 'n'
> (gdb) p pp[3]
> $90 = 2 '\002'
> (gdb)
> (gdb) bt
> #0 prefix_match (n=0x80bb588, p=0xbffff76c) at prefix.c:92
> #1 0x0807a572 in route_node_lookup (table=0x80b0b80, p=0xbffff76c)
> at table.c:303
> #2 0x0804af86 in ldp_tree_get (tree=0x80b0b80, key=40765952, length=24,
> info=0xbffff7a8) at impl_tree.c:89
> #3 0x08051e2f in _ldp_attr_get_fec2 (g=0x80b1020, f=0xbffff850,
> flag=LDP_FALSE) at ldp_attr.c:749
> #4 0x08051aa1 in ldp_attr_find_upstream_all (g=0x80b1020, s=0x80b7bf8,
> f=0xbffff850) at ldp_attr.c:576
> #5 0x08050f1b in ldp_attr_find_upstream_state (g=0x80b1020, s=0x80b7bf8,
> f=0xbffff850, state=LDP_LSP_STATE_MAP_SENT) at ldp_attr.c:172
> #6 0x0805db11 in ldp_label_mapping_initial_callback (timer=0x80b9ea8,
> extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:452
> #7 0x0804ad09 in mpls_timer (thread=0xbffff9e4) at impl_timer.c:28
> #8 0x080787a2 in thread_call (thread=0xbffff9e4) at thread.c:647
> #9 0x0804c229 in main (argc=1, argv=0xbffffab4) at mpls_main.c:224
> (gdb)
>
>
> Kind regards,
> Georg Klug
>
>
> > Thanks for the fix to zebra. I will apply it to my tree.
> >
> > After creating the static routes did you check the contents of
> > mplsd:show ip route? If the routes do not show up in mplsd's routing table
> > there is no chance for LDP to operate on them. Also remember that LDP only
> > originates label mappings for directly connected routes (this can be changed
> > simply by changing the code in impl_policy.c:ldp_policy_egress_check()); it
> > should propagate label mappings for anything that matches the routing table.
> >
> > James R. Leu
> >
>
--
James R. Leu
|
|
From: Georg K. <gk...@gi...> - 2002-11-05 15:29:10
|
Hi Jim,
thanks for the hint. With the following patch:
--------------------------------------------------
diff -u impl_policy.c.orig impl_policy.c
--- impl_policy.c.orig Tue Nov 5 10:21:44 2002
+++ impl_policy.c Tue Nov 5 10:22:22 2002
@@ -29,7 +29,7 @@
ldp_prefix2zebra_prefix(route,&p);
if ((rn = route_node_lookup(rib_table_ipv4->table,&p))) {
rib = rn->info;
- if (rib->type == ZEBRA_ROUTE_CONNECT) {
+ if (rib->type == ZEBRA_ROUTE_CONNECT || rib->type == ZEBRA_ROUTE_STATIC) {
// || p.prefixlen == 32) {
result = LDP_TRUE;
}
----------------------------------------------------
the static routes were seen by the LDP, and also they were propagated! Thanks
a lot!
But I noticed that of the following static routes only one was propagated:
1.110.10.0/24
2.110.10.0/24
The problem is that the prefix_match() function only works correctly when
called with prefixes in network order. It gets these two prefixes in host
order and regards the two routes as identical.
Please find attached the interesting part of my debug session. It starts where
the router wants to export its routes (a session was just established) and
ldp_label_mapping_initial_callback() is called for the first time.
Here is what I understood from the debug output:
The function ldp_label_mapping_initial_callback() goes through a list of FECs
it knows via the Recognize_New_Fec() function from startup.
It then tries to check whether these FECs have already been sent to the
session's partner. For this, the function ldp_attr_find_upstream_state()
usually returns NULL.
This is also true when it comes to the route 1.110.10.0/24. But when it comes
to the next route 2.110.10.0/24 it returns non-NULL! This is because the
function finds the FEC entry of 1.110.10.0/24 and thinks they are identical.
At the end of the gdb session I executed the backtrace command.
What do you think?
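To make the failure mode concrete, here is a small stand-alone program
(written just for this mail, not code taken from zebra or ldp-portable) that
mimics what prefix_match() does for a /24 on a little-endian machine:
---------------------------------------------------------------
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

/* prefix_match() compares the first prefixlen/8 bytes of the two
 * addresses as they sit in memory. */
static int match24(in_addr_t a, in_addr_t b)
{
    return memcmp(&a, &b, 24 / 8) == 0;
}

int main(void)
{
    in_addr_t n1 = inet_addr("1.110.10.0");  /* wire order: 01 6e 0a 00 */
    in_addr_t n2 = inet_addr("2.110.10.0");  /* wire order: 02 6e 0a 00 */
    in_addr_t h1 = ntohl(n1);                /* host order on LE: 00 0a 6e 01 */
    in_addr_t h2 = ntohl(n2);                /* host order on LE: 00 0a 6e 02 */

    /* network order: the first octets differ, so no match (correct) */
    printf("network order match: %d\n", match24(n1, n2));
    /* host order: the differing octet ends up in the 4th byte, which a
     * /24 comparison never looks at, so a bogus match is reported */
    printf("host order match:    %d\n", match24(h1, h2));
    return 0;
}
---------------------------------------------------------------
With the prefixes in network order the leading octet is compared and the two
routes stay distinct; in host order the differing octet lands in the byte a
/24 comparison never reaches, which is exactly what the gdb session shows.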
----------------------- Attached gdb session protocol -----------------------
Breakpoint 5, ldp_label_mapping_initial_callback (timer=0x80b9ea8,
extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:444
444 if (ldp_policy_export_check(g->user_data, &fec, nh_addr,
nh_session) ==
(gdb) p/x fec
$67 = {_refcnt = 0x0, _global = {lle_next = 0x0, lle_prev = 0x0}, _inlabel = {
lle_next = 0x0, lle_prev = 0x0}, _outlabel = {lle_next = 0x0,
lle_prev = 0x0}, _tree = {lle_next = 0x0, lle_prev = 0x0}, _addr = {
lle_next = 0x0, lle_prev = 0x0}, _fec = {lle_next = 0x0, lle_prev = 0x0},
_nh = {lle_next = 0x0, lle_prev = 0x0}, fs_root_us = {llh_first = 0x0,
llh_last = 0x0}, fs_root_ds = {llh_first = 0x0, llh_last = 0x0}, prefix = {
protocol = 0x2, u = {ipv6 = {0x0, 0xc, 0x2, 0x1, 0x0 <repeats 12 times>},
ipv4 = 0x1020c00}}, prefix_len = 0x18, type = 0x1}
(gdb) c
Continuing.
Breakpoint 5, ldp_label_mapping_initial_callback (timer=0x80b9ea8,
extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:444
444 if (ldp_policy_export_check(g->user_data, &fec, nh_addr,
nh_session) ==
(gdb) p/x fec
$68 = {_refcnt = 0x0, _global = {lle_next = 0x0, lle_prev = 0x0}, _inlabel = {
lle_next = 0x0, lle_prev = 0x0}, _outlabel = {lle_next = 0x0,
lle_prev = 0x0}, _tree = {lle_next = 0x0, lle_prev = 0x0}, _addr = {
lle_next = 0x0, lle_prev = 0x0}, _fec = {lle_next = 0x0, lle_prev = 0x0},
_nh = {lle_next = 0x0, lle_prev = 0x0}, fs_root_us = {llh_first = 0x0,
llh_last = 0x0}, fs_root_ds = {llh_first = 0x0, llh_last = 0x0}, prefix = {
protocol = 0x2, u = {ipv6 = {0x0, 0xa, 0x6e, 0x1, 0x0 <repeats 12 times>},
ipv4 = 0x16e0a00}}, prefix_len = 0x18, type = 0x1}
(gdb) c
Continuing.
ldp_label_mapping_with_xc: enter
ENTER: Prepare_Label_Mapping_Attributes
EXIT: Prepare_Label_Mapping_Attributes
ENTER: ldp_label_mapping_send
OUT: In Label Added
OUT: LPD Header : protocolVersion = 1
OUT: pduLength = 33
OUT: lsrAddress = a0a0964
OUT: labelSpace = 0
OUT: LABEL MAPPING MSG ***START***:
OUT: baseMsg : uBit = 0
OUT: msgType = 400
OUT: msgLength = 23
OUT: msgId = 8
OUT: fecTlv:
OUT: Tlv:
OUT: BaseTlv: uBit = 0
OUT: fBit = 0
OUT: type = 100
OUT: length = 7
OUT: fecTlv->numberFecElements = 1
OUT: elem 0 type is 2
OUT: Fec Element : type = 2, addFam = 1, preLen = 24, address =
16e0a00
OUT:
OUT: fecTlv.wcElemExists = 0
OUT: genLblTlv:
OUT: Tlv:
OUT: BaseTlv: uBit = 0
OUT: fBit = 0
OUT: type = 200
OUT: length = 4
OUT: genLbl data: label = 16
OUT: Label mapping msg does not have atm label Tlv
OUT: Label mapping msg does not have fr label Tlv
OUT: Label mapping msg does not have hop count Tlv
OUT: Label mapping msg does not have path vector Tlv
OUT: Label mapping msg does not have label messageId Tlv
OUT: Label mapping msg does not have LSPID Tlv
OUT: Label mapping msg does not have traffic Tlv
OUT: LABEL MAPPING MSG ***END***:
OUT: Label Mapping Sent to 0a0a0d01:0 for 016e0a00/24
EXIT: ldp_label_mapping_send
ldp_label_mapping_with_xc: exit
Breakpoint 5, ldp_label_mapping_initial_callback (timer=0x80b9ea8,
extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:444
444 if (ldp_policy_export_check(g->user_data, &fec, nh_addr,
nh_session) ==
(gdb) p/x fec
$69 = {_refcnt = 0x0, _global = {lle_next = 0x0, lle_prev = 0x0}, _inlabel = {
lle_next = 0x0, lle_prev = 0x0}, _outlabel = {lle_next = 0x0,
lle_prev = 0x0}, _tree = {lle_next = 0x0, lle_prev = 0x0}, _addr = {
lle_next = 0x0, lle_prev = 0x0}, _fec = {lle_next = 0x0, lle_prev = 0x0},
_nh = {lle_next = 0x0, lle_prev = 0x0}, fs_root_us = {llh_first = 0x0,
llh_last = 0x0}, fs_root_ds = {llh_first = 0x0, llh_last = 0x0}, prefix = {
protocol = 0x2, u = {ipv6 = {0x0, 0xa, 0x6e, 0x2, 0x0 <repeats 12 times>},
ipv4 = 0x26e0a00}}, prefix_len = 0x18, type = 0x1}
(gdb) n
452 if ((us_attr = ldp_attr_find_upstream_state(g, s, &fec,
(gdb) s
ldp_attr_find_upstream_state (g=0x80b1020, s=0x80b7bf8, f=0xbffff850,
state=LDP_LSP_STATE_MAP_SENT) at ldp_attr.c:172
172 ldp_attr_list *us_list = ldp_attr_find_upstream_all(g, s, f);
(gdb) s
ldp_attr_find_upstream_all (g=0x80b1020, s=0x80b7bf8, f=0xbffff850)
at ldp_attr.c:574
574 LDP_ASSERT(s && f && g);
(gdb) s
576 if ((fnode = _ldp_attr_get_fec2(g, f, LDP_FALSE)) == NULL) {
(gdb) s
_ldp_attr_get_fec2 (g=0x80b1020, f=0xbffff850, flag=LDP_FALSE)
at ldp_attr.c:747
747 ldp_fec *fnode = NULL;
(gdb) n
749 if (ldp_tree_get(g->fec_tree, f->prefix.u.ipv4, f->prefix_len,
(gdb) s
ldp_tree_get (tree=0x80b0b80, key=40765952, length=24, info=0xbffff7a8)
at impl_tree.c:85
85 p.family = AF_INET;
(gdb) p/x tree
$70 = 0x80b0b80
(gdb) p/x *tree
Attempt to dereference a generic pointer.
(gdb) list
80 void **info)
81 {
82 struct route_node *node;
83 struct prefix p;
84
85 p.family = AF_INET;
86 p.prefixlen = length;
87 p.u.prefix4.s_addr = key;
88
89 if ((node = route_node_lookup(tree,&p))) {
(gdb) s
86 p.prefixlen = length;
(gdb)
87 p.u.prefix4.s_addr = key;
(gdb)
89 if ((node = route_node_lookup(tree,&p))) {
(gdb)
route_node_lookup (table=0x80b0b80, p=0xbffff76c) at table.c:300
300 node = table->top;
(gdb) p/x *table
$71 = {top = 0x80bb588}
(gdb) s
302 while (node && node->p.prefixlen <= p->prefixlen &&
(gdb) list -
292 #endif /* HAVE_IPV6 */
293
294 /* Lookup same prefix node. Return NULL when we can't find route. */
295 struct route_node *
296 route_node_lookup (struct route_table *table, struct prefix *p)
297 {
298 struct route_node *node;
299
300 node = table->top;
301
(gdb) list
302 while (node && node->p.prefixlen <= p->prefixlen &&
303 prefix_match (&node->p, p))
304 {
305 if (node->p.prefixlen == p->prefixlen && node->info)
306 return route_lock_node (node);
307
308 node = node->link[check_bit(&p->u.prefix, node->p.prefixlen)];
309 }
310
311 return NULL;
(gdb) p/x node
$72 = 0x80bb588
(gdb) p/x *node
$73 = {p = {family = 0x2, prefixlen = 0x18, u = {prefix = 0x0, prefix4 = {
s_addr = 0x16e0a00}, lp = {id = {s_addr = 0x16e0a00}, adv_router = {
s_addr = 0x0}}, val = {0x0, 0xa, 0x6e, 0x1, 0x0, 0x0, 0x0, 0x0}}},
table = 0x80b0b80, parent = 0x0, link = {0x0, 0x0}, lock = 0x1,
info = 0x80bb518, aggregate = 0x0}
(gdb) s
303 prefix_match (&node->p, p))
(gdb) p/x p
$74 = 0xbffff76c
(gdb) p/x *p
$75 = {family = 0x2, prefixlen = 0x18, u = {prefix = 0x0, prefix4 = {
s_addr = 0x26e0a00}, lp = {id = {s_addr = 0x26e0a00}, adv_router = {
s_addr = 0xbffff7fc}}, val = {0x0, 0xa, 0x6e, 0x2, 0xfc, 0xf7, 0xff,
0xbf}}}
(gdb) p/x &node->p
$76 = 0x80bb588
(gdb) p/x *&node->p
$77 = {family = 0x2, prefixlen = 0x18, u = {prefix = 0x0, prefix4 = {
s_addr = 0x16e0a00}, lp = {id = {s_addr = 0x16e0a00}, adv_router = {
s_addr = 0x0}}, val = {0x0, 0xa, 0x6e, 0x1, 0x0, 0x0, 0x0, 0x0}}}
(gdb) s
prefix_match (n=0x80bb588, p=0xbffff76c) at prefix.c:75
75 u_char *np = (u_char *)&n->u.prefix;
(gdb) list -
65 }
66
67 /* If n includes p prefix then return 1 else return 0. */
68 int
69 prefix_match (struct prefix *n, struct prefix *p)
70 {
71 int offset;
72 int shift;
73
74 /* Set both prefix's head pointer. */
(gdb) list
75 u_char *np = (u_char *)&n->u.prefix;
76 u_char *pp = (u_char *)&p->u.prefix;
77
78 /* If n's prefix is longer than p's one return 0. */
79 if (n->prefixlen > p->prefixlen)
80 return 0;
81
82 offset = n->prefixlen / PNBBY;
83 shift = n->prefixlen % PNBBY;
84
(gdb) n
76 u_char *pp = (u_char *)&p->u.prefix;
(gdb) n
79 if (n->prefixlen > p->prefixlen)
(gdb) p/x *np
$78 = 0x0
(gdb) p/x *pp
$79 = 0x0
(gdb) n
82 offset = n->prefixlen / PNBBY;
(gdb) n
83 shift = n->prefixlen % PNBBY;
(gdb) n
85 if (shift)
(gdb) p offset
$80 = 3
(gdb) p shift
$81 = 0
(gdb) n
89 while (offset--)
(gdb) n
90 if (np[offset] != pp[offset])
(gdb) p np[offset]
$82 = 110 'n'
(gdb) n
92 return 1;
(gdb) p np[0]
$83 = 0 '\0'
(gdb) p np[1]
$84 = 10 '\n'
(gdb) p np[2]
$85 = 110 'n'
(gdb) p np[3]
$86 = 1 '\001'
(gdb) p pp[0]
$87 = 0 '\0'
(gdb) p pp[1]
$88 = 10 '\n'
(gdb) p pp[2]
$89 = 110 'n'
(gdb) p pp[3]
$90 = 2 '\002'
(gdb)
(gdb) bt
#0 prefix_match (n=0x80bb588, p=0xbffff76c) at prefix.c:92
#1 0x0807a572 in route_node_lookup (table=0x80b0b80, p=0xbffff76c)
at table.c:303
#2 0x0804af86 in ldp_tree_get (tree=0x80b0b80, key=40765952, length=24,
info=0xbffff7a8) at impl_tree.c:89
#3 0x08051e2f in _ldp_attr_get_fec2 (g=0x80b1020, f=0xbffff850,
flag=LDP_FALSE) at ldp_attr.c:749
#4 0x08051aa1 in ldp_attr_find_upstream_all (g=0x80b1020, s=0x80b7bf8,
f=0xbffff850) at ldp_attr.c:576
#5 0x08050f1b in ldp_attr_find_upstream_state (g=0x80b1020, s=0x80b7bf8,
f=0xbffff850, state=LDP_LSP_STATE_MAP_SENT) at ldp_attr.c:172
#6 0x0805db11 in ldp_label_mapping_initial_callback (timer=0x80b9ea8,
extra=0x80b7bf8, g=0x80b1020) at ldp_label_mapping.c:452
#7 0x0804ad09 in mpls_timer (thread=0xbffff9e4) at impl_timer.c:28
#8 0x080787a2 in thread_call (thread=0xbffff9e4) at thread.c:647
#9 0x0804c229 in main (argc=1, argv=0xbffffab4) at mpls_main.c:224
(gdb)
Kind regards,
Georg Klug
> Thanks for the fix to zebra. I will apply it to my tree.
>
> After creating the static routes did you check the contents of
> mplsd:show ip route? If the routes do not show up in mplsd's routing table
> there is no chance for LDP to operate on them. Also remember that LDP only
> originates label mappings for directly connected routes (this can be changed
> simply by changing the code in impl_policy.c:ldp_policy_egress_check()); it
> should propagate label mappings for anything that matches the routing table.
>
> James R. Leu
>
|
|
From: James R. L. <jl...@ne...> - 2002-11-04 21:44:45
|
test2 -- James R. Leu |
|
From: Georg K. <gk...@gi...> - 2002-11-04 07:59:30
|
Hi all,
I had the same problem here. The problem lies in the zebra code. The rib
structure (especially rib_table) is not initialized for static routes. Please
try the additional line in zrib.c:
---------------------------------------------------------------
--- zrib.c.orig Mon Nov 4 08:51:09 2002
+++ zrib.c Mon Nov 4 08:51:31 2002
@@ -133,6 +133,7 @@
 rib->type = ZEBRA_ROUTE_STATIC;
 rib->distance = si->distance;
 rib->metric = 0;
+ rib->rib_table = rib_table_ipv4;
 rib->nexthop_num = 0;
 switch (si->type)
---------------------------------------------------------------
Unfortunately, there might be other members of the struct which are not
assigned correctly. Anyway, the zebra code won't crash anymore.
Please note: ospfd will only propagate these static routes if you add the
"redistribute static" line to ospfd.conf.
Unfortunately, LDP is not aware of these static routes and I couldn't get it
to assign labels for those routes :-(
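For completeness, a minimal ospfd.conf fragment for the "redistribute static"
part mentioned above (the network statement is only a placeholder for whatever
the existing configuration already contains):
---------------------------------------------------------------
router ospf
 redistribute static
 network 10.10.0.0/16 area 0
---------------------------------------------------------------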
Kind regards,
Georg Klug
> After you patched zebra to create mplsd, did you do a 'make distclean' from
> the top level? Are you sure you're running the zebra binary created at the
> same time as the mplsd binary?
>
> I will look into this failure, but it looks like it is in some code that I
> don't mess with.
>
> Thanks for the backtrace, this is very helpful.
>
> On Fri, Nov 01, 2002 at 12:08:14PM -0300, Plínio de Paula wrote:
> > Zebra stopped generating core dump (maybe because I upgraded the
> > OS), but problem persists...
> > I compiled zebra with debug info and here is the backtrace:
> > --------------------------------------
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x0805c8fb in rib_install_lower (rn=0x8087600, rib=0x8087638) at rib.c:661
> > 661 if (rib->rib_table->rib_install_kernel)
> > (gdb) bt
> > #0 0x0805c8fb in rib_install_lower (rn=0x8087600, rib=0x8087638) at rib.c:661
> > #1 0x0805cae0 in rib_process (rn=0x8087600, del=0x0) at rib.c:785
> > #2 0x0804d6d6 in static_ipv4_add (p=0xbffff8c0, gate=0x0,
> > ifname=0x8087a20 "eth4", distance=1 '\001', table=0) at zrib.c:297
> > #3 0x0804da5a in static_ipv4_func (vty=0xbffff8b8, add_cmd=1,
> > dest_str=0x80870b8 "10.10.1.10/32",
> > mask_str=0x1 <Address 0x1 out of bounds>, gate_str=0x8087a20 "eth4",
> > distance_str=0x0) at zrib.c:440
> > #4 0x0804daa4 in ip_route (self=0x806b4e0, vty=0x8087240, argc=2, argv=0x2)
> > at zrib.c:457
> > #5 0x08054952 in cmd_execute_command_strict (vline=0x8087550, vty=0x8087240,
> > cmd=0x0) at command.c:1963
> > #6 0x08054a7b in config_from_file (vty=0x8087240, fp=0x80870d0)
> > at command.c:2001
> > #7 0x08051755 in vty_read_file (confp=0x80870d0) at vty.c:2079
> > #8 0x08051a23 in vty_read_config (config_file=0x0,
> > config_current_dir=0x806b260 "zebra.conf",
> > config_default_dir=0x806b26b "/usr/local/etc/zebra.conf") at vty.c:2266
> > #9 0x0804b682 in main (argc=0, argv=0xbffffb54) at main.c:287
> > #10 0x420158d4 in __libc_start_main () from /lib/i686/libc.so.6
> > --------------------------------------
> >
> > This is my zebra.conf:
> > --------------------------------------
> > hostname routerA
> >
> > interface eth5
> > description Fiber1000 Interface -> routerB
> > ip address 10.10.2.1/24
> > shutdown
> >
> > interface eth4
> > description Fiber1000 Interface -> clientA
> > ip address 10.10.1.1/24
> > no shutdown ***
> >
> > interface eth3
> > description Fiber100 Interface -> Optical Network
> > no shutdown
> >
> > interface eth2
> > description Fiber100 Interface -> Optical Network
> > no shutdown
> >
> > interface eth1
> > description Fiber100 Interface -> Optical Network
> > no shutdown
> >
> > ip route 10.10.1.10/32 eth4 <- This causes seg fault if eth4 (***) is up
> > | Instantaneous seg fault if initially down then brought up within zebra vty
> > | Same problem applies to Giga and Fast NICs (All opticals)
> > ----------------------------------------
> >
> >
> > -----Original Message-----
> > From: James R. Leu [mailto:jl...@mi...]
> > Sent: Thursday, October 31, 2002 20:45
> > To: Plínio de Paula
> > Cc: mpl...@li...
> > Subject: Re: RES: [mpls-linux-general] Zebra LDP Crash (Discovered
> > source of problem)
> >
> >
> > No one else is using static routes (that I know of). I know I've never
> > tried it. Do you get a core file? Give me the backtrace from it and I'll
> > try to fix it.
> >
> > On Thu, Oct 31, 2002 at 06:48:07PM -0300, Plínio de Paula wrote:
> > > My configuration of zebra included static routes! With LDP patch
> > > they cause zebra segmentation fault!
> > >
> > > Without static routes, LDP-patched-zebra runs OK...
> > >
> > > Is this happening with everybody?
> > >
> > > See you!
> > >
> > > Plínio de Paula
> > > UNICAMP/Brazil
> > >
> > > -----Original Message-----
> > > From: James R. Leu [mailto:jl...@mi...]
> > > Sent: Thursday, October 31, 2002 15:11
> > > To: Plínio de Paula
> > > Cc: Gianfranco Delli Carri; mpl...@li...
> > > Subject: Re: [mpls-linux-general] Zebra LDP Crash
> > >
> > >
> > > Do you have a core file? Can you get me the backtrace from the core dump?
> > >
> > > On Thu, Oct 31, 2002 at 02:04:17PM -0300, Plínio de Paula wrote:
> > > > Hello Gianfranco,
> > > >
> > > > I'm trying to compile zebra with the LDP patch in the same
> > > > configuration as yours. The compilation goes OK, but
> > > > when I call zebra, it generates a core dump. Have you run into
> > > > similar problems? What did you do about them?
> > > >
> > > > Plínio de Paula
> > > > UNICAMP
> > > >
> > > > -----Original Message-----
> > > > From: Gianfranco Delli Carri [mailto:gf....@nc...]
> > > > Sent: Wednesday, October 30, 2002 22:13
> > > > To: 'mpl...@li...'
> > > > Subject: [mpls-linux-general] Zebra LDP session
> > > >
> > > >
> > > > Hi to all,
> > > >
> > > > I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra-0.93b
> > > > patched with ldp-portable-0.250.
> > > >
> > > > When I start mplsd after zebra and ospfd, on my CISCO router with MPLS/LDP
> > > > enabled, I can see the LDP connection come up, but after a few seconds (hold
> > > > timer) it comes down.
> > > >
> > > > Debugging MPLSD I can see:
> > > >
> > > > /usr/local/sbin/mplsd
> > > > ldp_if_new:
> > > > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts
> > > > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 mtu 16436
> > > > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo
> > > > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric 1 mtu 1500
> > > > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0
> > > > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250
> > > > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250
> > > > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0
> > > > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1
> > > > 2002/10/31 02:00:24 MPLS: ifindex 2
> > > > session delete
> > > >
> > > > Debugging CISCO LDP:
> > > >
> > > > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30,
> 10.254.2.6
> > > > <-> 10.254.0.250
> > > > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30,
> > > > 10.254.2.6:11439 <-> 10.254.0.250:646
> > > > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0=
x0)
> > > > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x6782=
7E30
> > > > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0
> > > > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <=
->
> > > > 10.254.0.250:646, adj 0x67827E30
> > > > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30,
> 10.254.2.6
> > > > <-> 10.254.0.250
> > > > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30,
> > > > 10.254.2.6:11440 <-> 10.254.0.250:646
> > > > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0=
x0)
> > > > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp=
0x0)
> > > > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to
> 10.254.0.250:0 (pp 0x0)
> > > > Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.25=
0:0 (pp
> > > > 0x0)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 =
(pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to
> 10.254.0.250:0 (pp
> > > > 0x6225D768)
> > > > etc...
> > > > Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for ad=
j
> > > > 0x67827E30, 10.254.0.250:0, will close conn
> > > > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (p=
p
> > > > 0x6225D768)
> > > > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (p=
p
> > > > 0x6225D768)
> > > > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj
> 0x67827E30
> > > > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <=
->
> > > > 10.254.0.250:646, adj 0x67827E30
> > > >
> > > > Ah... my MPLSD process comes to use all the CPU time:
> > > >
> > > > ps aux
> > > > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> > > > root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10 /usr/local/sbin/mplsd
> > > >
> > > > and I'm always unable to telnet to it; the session freezes.
> > > >
> > > > telnet 10.254.0.250 2610
> > > > Trying 10.254.0.250...
> > > > Connected to 10.254.0.250.
> > > > Escape character is '^]'.
> > > >
> > > >
> > > >
> > > > Have you any kind of idea ?
> > > >
> > > > Thanks in advance.
> > > >
> > > > Regards,
> > > >
> > > > Gianfranco
> > > >
> > > >
> > > > -------------------------------------------------------
> > > > This sf.net email is sponsored by: Influence the future
> > > > of Java(TM) technology. Join the Java Community
> > > > Process(SM) (JCP(SM)) program now.
> > > > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en
> > > > _______________________________________________
> > > > mpls-linux-general mailing list
> > > > mpl...@li...
> > > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
> > > >
> > > >
> > > > -------------------------------------------------------
> > > > This sf.net email is sponsored by: Influence the future
> > > > of Java(TM) technology. Join the Java Community
> > > > Process(SM) (JCP(SM)) program now.
> > > > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en
> > > > _______________________________________________
> > > > mpls-linux-general mailing list
> > > > mpl...@li...
> > > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
> > >
> > > --
> > > James R. Leu
> >
> > --
> > James R. Leu
> >
> >
> > -------------------------------------------------------
> > This sf.net email is sponsored by: See the NEW Palm
> > Tungsten T handheld. Power & Color in a compact size!
> > http://ads.sourceforge.net/cgi-bin/redirect.pl?palm0001en
> > _______________________________________________
> > mpls-linux-general mailing list
> > mpl...@li...
> > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
>
> --
> James R. Leu
>
>
> -------------------------------------------------------
> This sf.net email is sponsored by: See the NEW Palm
> Tungsten T handheld. Power & Color in a compact size!
> http://ads.sourceforge.net/cgi-bin/redirect.pl?palm0001en
> _______________________________________________
> mpls-linux-general mailing list
> mpl...@li...
> https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
>
|
|
From: Folke A. <aeo...@ho...> - 2002-11-02 02:31:55
|
hi, James :)
i happened to find the following data, which seems odd:
0x00000003 2450571/-678892776/0 3 PUSH(gen 3) SET(eth0,11.0.1.1)
~~~~~~~~~~~~
i sent about 3GB of data out of this label and
its counter became negative. can you tell me
how to change the definition of this counter
into long long int? or how to clear the counter
back to zero when it reaches MAX_INT?
thanks :)
folke.
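For reference, this is what a signed 32-bit byte counter does once it passes
2^31 bytes (a bit over 2 GB); widening it to a 64-bit type is the usual fix.
A stand-alone illustration follows; it is not the actual mpls-linux counter
code, whose field name and exact type may differ:
---------------------------------------------------------------
#include <stdio.h>

int main(void)
{
    unsigned int bytes32 = 0;   /* 32-bit counter, printed as signed below */
    long long    bytes64 = 0;   /* widened counter: no wrap at 2 GB */
    long long    sent;

    /* pretend ~3 GB of 1500-byte packets were pushed through the LSP */
    for (sent = 0; sent < 3LL * 1024 * 1024 * 1024; sent += 1500) {
        bytes32 += 1500;
        bytes64 += 1500;
    }
    /* the cast wraps on common two's-complement platforms, giving the
     * negative value seen in the label statistics */
    printf("32-bit counter shown as signed: %d\n", (int)bytes32);
    printf("64-bit counter:                 %lld\n", bytes64);
    return 0;
}
---------------------------------------------------------------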
_________________________________________________________________
Surf the Web without missing calls! Get MSN Broadband.
http://resourcecenter.msn.com/access/plans/freeactivation.asp
|
|
From: James R. L. <jl...@mi...> - 2002-11-01 19:47:54
|
On Thu, Oct 31, 2002 at 07:28:23PM +0100, mes...@em... wrote:
> I have linux Red Hat 7.3, 2.4.14 kernel, mpls-linux-1.1.170.
> I am having problems compiling mpls-linux. Kernel compiles fine with MPLS
> support included.
I run 7.3, I'm completely up-to-date with RedHat changes, except I'm still at
the 2.4.18-3 kernel.
I copied the redhat kernel source to another directory:
cp -a /usr/src/linux-2.4.18-3 /usr/src/linux-mpls
Patched that new directory:
cd /usr/src/linux-mpls/
patch -p1 < /usr/src/mpls-linux-1.1/patches/linux-kernel.diff
Compiled kernel .....
Modified the utils Makefile:
edit /usr/src/mpls-linux-1.1/utils/Makefile
(added -I /usr/src/linux-mpls/include/ to the CFLAGS)
Compiled fine for me.
> But when I run make mplsadm in /utils dir, it returns me these errors:
>
> gcc -g -Wall -I/usr/include/linux -c -o mplsadm.o mplsadm.c
> In file included from /usr/include/bits/types.h:143,
> from /usr/include/stdio.h:36,
> from mplsadm.c:2:
> /usr/include/bits/pthreadtypes.h:48: parse error before `size_t'
> /usr/include/bits/pthreadtypes.h:48: warning: no semicolon at end of struct or union
> /usr/include/bits/pthreadtypes.h:51: parse error before `__stacksize'
> /usr/include/bits/pthreadtypes.h:51: warning: data definition has no type or storage class
> /usr/include/bits/pthreadtypes.h:52: warning: data definition has no type or storage class
> In file included from /usr/include/_G_config.h:44,
> from /usr/include/libio.h:32,
> from /usr/include/stdio.h:65,
> from mplsadm.c:2:
> /usr/include/gconv.h:72: parse error before `size_t'
> /usr/include/gconv.h:85: parse error before `size_t'
> /usr/include/gconv.h:94: parse error before `size_t'
> /usr/include/gconv.h:170: parse error before `size_t'
> /usr/include/gconv.h:170: warning: no semicolon at end of struct or union
> /usr/include/gconv.h:173: parse error before `}'
> /usr/include/gconv.h:173: warning: data definition has no type or storage class
> In file included from /usr/include/libio.h:32,
> from /usr/include/stdio.h:65,
> from mplsadm.c:2:
> /usr/include/_G_config.h:47: field `__cd' has incomplete type
> /usr/include/_G_config.h:50: field `__cd' has incomplete type
> /usr/include/_G_config.h:53: confused by earlier errors, bailing out
> make: *** [mplsadm.o] Error 1
>
> Can anyone help me, please. Is there something missing in my Makefile?
>
> Thank you in advance.
> Matevz
>
> -------------------
> http://www.email.si
>
> -------------------------------------------------------
> This sf.net email is sponsored by: Influence the future
> of Java(TM) technology. Join the Java Community
> Process(SM) (JCP(SM)) program now.
> http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en
> _______________________________________________
> mpls-linux-general mailing list
> mpl...@li...
> https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
--
James R. Leu |
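The Makefile tweak described in the message above would look roughly like this
(a sketch only; the exact variable layout in utils/Makefile may differ, and the
existing flags are taken from the gcc line shown in the error output):
---------------------------------------------------------------
# utils/Makefile (sketch): point gcc at the patched kernel headers
CFLAGS = -g -Wall -I/usr/include/linux -I/usr/src/linux-mpls/include
---------------------------------------------------------------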
|
From: James R. L. <jl...@mi...> - 2002-11-01 15:15:23
|
After you patched zebra to created mplsd did you do a 'make distclean' fr= om the top level? Are you sure you're running the zebra binary created at t= he sametime as the mplsd binary? I will look into this failure, but it looks like it is in some code that = I don't mess with. Thank for the backtrace, this is very helpful. On Fri, Nov 01, 2002 at 12:08:14PM -0300, Pl=EDnio de Paula wrote: > Zebra stopped generating core dump (maybe because I upgraded the OS), b= ut problem persists... > I compiled zebra with debug info and here is the backtrace: > -------------------------------------- > Program received signal SIGSEGV, Segmentation fault. > 0x0805c8fb in rib_install_lower (rn=3D0x8087600, rib=3D0x8087638) at ri= b.c:661 > 661 if (rib->rib_table->rib_install_kernel) > (gdb) bt > #0 0x0805c8fb in rib_install_lower (rn=3D0x8087600, rib=3D0x8087638) a= t rib.c:661 > #1 0x0805cae0 in rib_process (rn=3D0x8087600, del=3D0x0) at rib.c:785 > #2 0x0804d6d6 in static_ipv4_add (p=3D0xbffff8c0, gate=3D0x0, > ifname=3D0x8087a20 "eth4", distance=3D1 '\001', table=3D0) at zrib.= c:297 > #3 0x0804da5a in static_ipv4_func (vty=3D0xbffff8b8, add_cmd=3D1, > dest_str=3D0x80870b8 "10.10.1.10/32", > mask_str=3D0x1 <Address 0x1 out of bounds>, gate_str=3D0x8087a20 "e= th4", > distance_str=3D0x0) at zrib.c:440 > #4 0x0804daa4 in ip_route (self=3D0x806b4e0, vty=3D0x8087240, argc=3D2= , argv=3D0x2) > at zrib.c:457 > #5 0x08054952 in cmd_execute_command_strict (vline=3D0x8087550, vty=3D= 0x8087240, > cmd=3D0x0) at command.c:1963 > #6 0x08054a7b in config_from_file (vty=3D0x8087240, fp=3D0x80870d0) > at command.c:2001 > #7 0x08051755 in vty_read_file (confp=3D0x80870d0) at vty.c:2079 > #8 0x08051a23 in vty_read_config (config_file=3D0x0, > config_current_dir=3D0x806b260 "zebra.conf", > config_default_dir=3D0x806b26b "/usr/local/etc/zebra.conf") at vty.= c:2266 > #9 0x0804b682 in main (argc=3D0, argv=3D0xbffffb54) at main.c:287 > #10 0x420158d4 in __libc_start_main () from /lib/i686/libc.so.6 > -------------------------------------- >=20 > This is my zebra.conf: > -------------------------------------- > hostname routerA >=20 > interface eth5 > description Fiber1000 Interface -> routerB > ip address 10.10.2.1/24 > shutdown >=20 > interface eth4 > description Fiber1000 Interface -> clientA > ip address 10.10.1.1/24 > no shutdown *** >=20 > interface eth3 > description Fiber100 Interface -> Optical Network > no shutdown >=20 > interface eth2 > description Fiber100 Interface -> Optical Network > no shutdown >=20 > interface eth1 > description Fiber100 Interface -> Optical Network > no shutdown >=20 > ip route 10.10.1.10/32 eth4 <- This causes seg fault if eth4 (***) is u= p > | Instantaneous seg fault if initially down then brought up within= zebra vty > | Same problem applies to Giga and Fast NICs (All opticals) > ---------------------------------------- >=20 >=20 > -----Mensagem original----- > De: James R. Leu [mailto:jl...@mi...] > Enviada em: quinta-feira, 31 de outubro de 2002 20:45 > Para: Pl=EDnio de Paula > Cc: mpl...@li... > Assunto: Re: RES: [mpls-linux-general] Zebra LDP Crash (Discovered > source of problem) >=20 >=20 > No one else is using static routes (that I know of). I know I've never > tried it. Do you get a core file? Give me the backtrace from it and I= 'll > try to fix it. >=20 > On Thu, Oct 31, 2002 at 06:48:07PM -0300, Pl=EDnio de Paula wrote: > > My configuration of zebra included static routes! With LDP patch they= cause zebra segmentation fault! 
> >=20 > > Without static routes, LDP-patched-zebra runs OK... > >=20 > > Is this happening with everybody? > >=20 > > See you! > >=20 > > Pl=EDnio de Paula > > UNICAMP/Brazil > >=20 > > -----Mensagem original----- > > De: James R. Leu [mailto:jl...@mi...] > > Enviada em: quinta-feira, 31 de outubro de 2002 15:11 > > Para: Pl=EDnio de Paula > > Cc: Gianfranco Delli Carri; mpl...@li... > > Assunto: Re: [mpls-linux-general] Zebra LDP Crash > >=20 > >=20 > > Do you have acore file? Can you get me the backtrace from the core d= ump? > >=20 > > On Thu, Oct 31, 2002 at 02:04:17PM -0300, Pl=EDnio de Paula wrote: > > > Hello Gianfranco, > > >=20 > > > I=B4m trying to compile zebra with LDP patch in the same configurat= ion as yours. The compilation goes OK, but > > > when I call zebra, it generates core dump. Have you crossed similar= problems? What did you do about them? > > >=20 > > > Pl=EDnio de Paula > > > UNICAMP > > >=20 > > > -----Mensagem original----- > > > De: Gianfranco Delli Carri [mailto:gf....@nc...] > > > Enviada em: quarta-feira, 30 de outubro de 2002 22:13 > > > Para: 'mpl...@li...' > > > Assunto: [mpls-linux-general] Zebra LDP session > > >=20 > > >=20 > > > Hi to all, > > >=20 > > > I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra= -0.93b > > > patched with ldp-portable-0.250. > > >=20 > > > When I a start mlpsd after zebra and ospfs, in my CISCO router MPLS= /LDP > > > enabled, I can see the LDP connection setting UP, but after few sec= ond (hold > > > timer) it come down. > > >=20 > > > Debugging MPLSD I can see: > > >=20 > > > /usr/local/sbin/mplsd > > > ldp_if_new: > > > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts > > > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric = 1 mtu > > > 16436 > > > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo > > > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 met= ric 1 mtu > > > 1500 > > > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth= 0 > > > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250 > > > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250 > > > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0 > > > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1 > > > 2002/10/31 02:00:24 MPLS: ifindex 2=20 > > > session delete > > >=20 > > > Debugging CISCO LDP: > > >=20 > > > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.= 254.2.6 > > > <-> 10.254.0.250 > > > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30, > > > 10.254.2.6:11439 <-> 10.254.0.250:646 > > > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0= ) > > > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E= 30 > > > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0 > > > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <-> > > > 10.254.0.250:646, adj 0x67827E30 > > > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.= 254.2.6 > > > <-> 10.254.0.250 > > > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30, > > > 10.254.2.6:11440 <-> 10.254.0.250:646 > > > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0= ) > > > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0= x0) > > > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 = (pp 0x0) > > > Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:= 0 (pp > > > 0x0) > > > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (p= p > > > 0x6225D768) > > > 
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.25= 0:0 (pp > > > 0x6225D768) > > > etc... > > > Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj > > > 0x67827E30, 10.254.0.250:0, will close conn > > > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > > > 0x6225D768) > > > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x67= 827E30 > > > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <-> > > > 10.254.0.250:646, adj 0x67827E30 > > >=20 > > > Ah... my MPLSD process come to use all the CPU time: > > >=20 > > > ps aux > > > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMM= AND > > > root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10 > > > /usr/local/sbin/mplsd > > >=20 > > > and I'm always unable to telnet on it, the session freeze. > > >=20 > > > telnet 10.254.0.250 2610 > > > Trying 10.254.0.250... > > > Connected to 10.254.0.250. > > > Escape character is '^]'. > > >=20 > > >=20 > > >=20 > > > Have you any kind of idea ? > > >=20 > > > Thanks in advance. > > >=20 > > > Regards, > > >=20 > > > Gianfranco > > >=20 > > >=20 > > > ------------------------------------------------------- > > > This sf.net email is sponsored by: Influence the future=20 > > > of Java(TM) technology. Join the Java Community=20 > > > Process(SM) (JCP(SM)) program now.=20 > > > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > > > _______________________________________________ > > > mpls-linux-general mailing list > > > mpl...@li... > > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > > >=20 > > >=20 > > > ------------------------------------------------------- > > > This sf.net email is sponsored by: Influence the future=20 > > > of Java(TM) technology. Join the Java Community=20 > > > Process(SM) (JCP(SM)) program now.=20 > > > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > > > _______________________________________________ > > > mpls-linux-general mailing list > > > mpl...@li... > > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > >=20 > > --=20 > > James R. Leu >=20 > --=20 > James R. Leu >=20 >=20 > ------------------------------------------------------- > This sf.net email is sponsored by: See the NEW Palm=20 > Tungsten T handheld. Power & Color in a compact size! > http://ads.sourceforge.net/cgi-bin/redirect.pl?palm0001en > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general --=20 James R. Leu |
|
From: <pp...@cp...> - 2002-11-01 15:07:54
|
Zebra stopped generating core dump (maybe because I upgraded the OS),
but problem persists...
I compiled zebra with debug info and here is the backtrace:
--------------------------------------
Program received signal SIGSEGV, Segmentation fault.
0x0805c8fb in rib_install_lower (rn=0x8087600, rib=0x8087638) at rib.c:661
661 if (rib->rib_table->rib_install_kernel)
(gdb) bt
#0 0x0805c8fb in rib_install_lower (rn=0x8087600, rib=0x8087638) at rib.c:661
#1 0x0805cae0 in rib_process (rn=0x8087600, del=0x0) at rib.c:785
#2 0x0804d6d6 in static_ipv4_add (p=0xbffff8c0, gate=0x0,
ifname=0x8087a20 "eth4", distance=1 '\001', table=0) at zrib.c:297
#3 0x0804da5a in static_ipv4_func (vty=0xbffff8b8, add_cmd=1,
dest_str=0x80870b8 "10.10.1.10/32",
mask_str=0x1 <Address 0x1 out of bounds>, gate_str=0x8087a20 "eth4",
distance_str=0x0) at zrib.c:440
#4 0x0804daa4 in ip_route (self=0x806b4e0, vty=0x8087240, argc=2, argv=0x2)
at zrib.c:457
#5 0x08054952 in cmd_execute_command_strict (vline=0x8087550, vty=0x8087240,
cmd=0x0) at command.c:1963
#6 0x08054a7b in config_from_file (vty=0x8087240, fp=0x80870d0)
at command.c:2001
#7 0x08051755 in vty_read_file (confp=0x80870d0) at vty.c:2079
#8 0x08051a23 in vty_read_config (config_file=0x0,
config_current_dir=0x806b260 "zebra.conf",
config_default_dir=0x806b26b "/usr/local/etc/zebra.conf") at vty.c:2266
#9 0x0804b682 in main (argc=0, argv=0xbffffb54) at main.c:287
#10 0x420158d4 in __libc_start_main () from /lib/i686/libc.so.6
--------------------------------------
This is my zebra.conf:
--------------------------------------
hostname routerA
interface eth5
description Fiber1000 Interface -> routerB
ip address 10.10.2.1/24
shutdown
interface eth4
description Fiber1000 Interface -> clientA
ip address 10.10.1.1/24
no shutdown ***
interface eth3
description Fiber100 Interface -> Optical Network
no shutdown
interface eth2
description Fiber100 Interface -> Optical Network
no shutdown
interface eth1
description Fiber100 Interface -> Optical Network
no shutdown
ip route 10.10.1.10/32 eth4 <- This causes seg fault if eth4 (***) is up
| Instantaneous seg fault if initially down then brought up within zebra vty
| Same problem applies to Giga and Fast NICs (All opticals)
----------------------------------------
-----Original Message-----
From: James R. Leu [mailto:jl...@mi...]
Sent: Thursday, October 31, 2002 20:45
To: Plínio de Paula
Cc: mpl...@li...
Subject: Re: RES: [mpls-linux-general] Zebra LDP Crash (Discovered
source of problem)
No one else is using static routes (that I know of). I know I've never
tried it. Do you get a core file? Give me the backtrace from it and I'll
try to fix it.
On Thu, Oct 31, 2002 at 06:48:07PM -0300, Plínio de Paula wrote:
> My configuration of zebra included static routes! With LDP patch they
> cause zebra segmentation fault!
>
> Without static routes, LDP-patched-zebra runs OK...
>
> Is this happening with everybody?
>
> See you!
>
> Plínio de Paula
> UNICAMP/Brazil
>
> -----Original Message-----
> From: James R. Leu [mailto:jl...@mi...]
> Sent: Thursday, October 31, 2002 15:11
> To: Plínio de Paula
> Cc: Gianfranco Delli Carri; mpl...@li...
> Subject: Re: [mpls-linux-general] Zebra LDP Crash
>
>
> Do you have a core file? Can you get me the backtrace from the core dump?
>
> On Thu, Oct 31, 2002 at 02:04:17PM -0300, Plínio de Paula wrote:
> > Hello Gianfranco,
> >
> > I'm trying to compile zebra with the LDP patch in the same
> > configuration as yours. The compilation goes OK, but
> > when I call zebra, it generates a core dump. Have you run into
> > similar problems? What did you do about them?
> >
> > Plínio de Paula
> > UNICAMP
> >
> > -----Original Message-----
> > From: Gianfranco Delli Carri [mailto:gf....@nc...]
> > Sent: Wednesday, October 30, 2002 22:13
> > To: 'mpl...@li...'
> > Subject: [mpls-linux-general] Zebra LDP session
> >
> >=20
> > Hi to all,
> >=20
> > I have a linux box (2.4.19) patched with mpls-linux-1.170 and =
zebra-0.93b
> > patched with ldp-portable-0.250.
> >=20
> > When I a start mlpsd after zebra and ospfs, in my CISCO router =
MPLS/LDP
> > enabled, I can see the LDP connection setting UP, but after few =
second (hold
> > timer) it come down.
> >
> > Debugging MPLSD I can see:
> >
> > /usr/local/sbin/mplsd
> > ldp_if_new:
> > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts
> > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 mtu
> > 16436
> > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo
> > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric 1 mtu
> > 1500
> > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0
> > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250
> > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250
> > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0
> > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1
> > 2002/10/31 02:00:24 MPLS: ifindex 2
> > session delete
> >
> > Debugging CISCO LDP:
> >
> > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6
> > <-> 10.254.0.250
> > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30,
> > 10.254.2.6:11439 <-> 10.254.0.250:646
> > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0)
> > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30
> > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0
> > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <->
> > 10.254.0.250:646, adj 0x67827E30
> > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6
> > <-> 10.254.0.250
> > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30,
> > 10.254.2.6:11440 <-> 10.254.0.250:646
> > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0)
> > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0)
> > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (pp 0x0)
> > Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:0 (pp
> > 0x0)
> > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > etc...
> > Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj
> > 0x67827E30, 10.254.0.250:0, will close conn
> > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp
> > 0x6225D768)
> > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x67827E30
> > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <->
> > 10.254.0.250:646, adj 0x67827E30
> >
> > Ah... my MPLSD process comes to use all the CPU time:
> >
> > ps aux
> > USER       PID %CPU %MEM  VSZ  RSS TTY      STAT START   TIME COMMAND
> > root       769 98.4  0.7 2076  904 pts/0    R    02:00   5:10
> > /usr/local/sbin/mplsd
> >
> > and I'm always unable to telnet to it; the session freezes.
> >
> > telnet 10.254.0.250 2610
> > Trying 10.254.0.250...
> > Connected to 10.254.0.250.
> > Escape character is '^]'.
> >
> >
> >
> > Have you any kind of idea?
> >
> > Thanks in advance.
> >
> > Regards,
> >
> > Gianfranco
> >
> >
> > -------------------------------------------------------
> > This sf.net email is sponsored by: Influence the future
> > of Java(TM) technology. Join the Java Community
> > Process(SM) (JCP(SM)) program now.
> > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en
> > _______________________________________________
> > mpls-linux-general mailing list
> > mpl...@li...
> > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
> >
> >
> > -------------------------------------------------------
> > This sf.net email is sponsored by: Influence the future
> > of Java(TM) technology. Join the Java Community
> > Process(SM) (JCP(SM)) program now.
> > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en
> > _______________________________________________
> > mpls-linux-general mailing list
> > mpl...@li...
> > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
>
> --
> James R. Leu
--
James R. Leu
|
|
From: James R. L. <jl...@mi...> - 2002-10-31 23:19:13
|
Now that mplsd/zebra are being used by a more varied group of _users_, I figure it is time to send an e-mail explaining what I expect from someone who has encountered a segfault, an mplsd infinite loop, or another serious failure. This does _not_ include things like:
-incorrect routing information (OSPF routes aren't installed in the forwarding table)
-incorrect LSP setup (LDP didn't create the LSPs you were expecting)
-LDP never forms adjacencies (or sessions)
These problems (and others like them) need to be handled differently.
First off, if you experience ANY sort of problem with mpls-linux or ldp-portable, please search the mailing-list archives for others with the same problem. I try to answer all new questions by responding to the mailing list, so if it's been asked and answered before, you should be able to find it in the archive.
Second, when you experience a serious failure that no one else has reported (or for which no solution has been found), please file a bug report on http://sf.net/projects/mpls-linux/. In that bug report include:
-a _good_ ASCII drawing (or a URL to a GIF or JPG image) of the topology you are using. Include all interface addresses in that drawing.
-the zebra/ospfd/mplsd/bgpd configs you are using.
-a description of what you did that caused the problem, and whether it is easily reproducible.
-if a "core" file resulted, please include a stack trace. Read "Debugging A Crashed Program" in the gdb tutorial listed at the bottom of this e-mail.
-if an infinite loop is occurring, run the offending program inside gdb and hit ctrl-C when the loop is encountered. Read "Running A Program Inside The Debugger" in the tutorial listed at the bottom of this e-mail.
-if you have any coding skills at all, try to dig into the problem. The more you dig into it, the more info you will gather for me, and the quicker it will get fixed.
-if you find the bug, provide me a patch generated with the 'diff' program. I like using 'diff -uNr'.
Finally, realize that I work on the mpls-linux project in my spare time. I do not get paid for it. The only reason I work on it is that I like it. So please do not do anything that will cause me not to like working on this project.
If anyone has any further info they would like to add, let me know. I will include this document in the ldp-portable distribution.
GDB tutorial:
http://users.actcom.co.il/~choo/lupg/tutorials/debugging/debugging-with-gdb.html
--
James R. Leu |
|
From: James R. L. <jl...@mi...> - 2002-10-31 22:44:32
|
No one else is using static routes (that I know of). I know I've never tried it. Do you get a core file? Give me the backtrace from it and I'l= l try to fix it. On Thu, Oct 31, 2002 at 06:48:07PM -0300, Pl=EDnio de Paula wrote: > My configuration of zebra included static routes! With LDP patch they c= ause zebra segmentation fault! >=20 > Without static routes, LDP-patched-zebra runs OK... >=20 > Is this happening with everybody? >=20 > See you! >=20 > Pl=EDnio de Paula > UNICAMP/Brazil >=20 > -----Mensagem original----- > De: James R. Leu [mailto:jl...@mi...] > Enviada em: quinta-feira, 31 de outubro de 2002 15:11 > Para: Pl=EDnio de Paula > Cc: Gianfranco Delli Carri; mpl...@li... > Assunto: Re: [mpls-linux-general] Zebra LDP Crash >=20 >=20 > Do you have acore file? Can you get me the backtrace from the core dum= p? >=20 > On Thu, Oct 31, 2002 at 02:04:17PM -0300, Pl=EDnio de Paula wrote: > > Hello Gianfranco, > >=20 > > I=B4m trying to compile zebra with LDP patch in the same configuratio= n as yours. The compilation goes OK, but > > when I call zebra, it generates core dump. Have you crossed similar p= roblems? What did you do about them? > >=20 > > Pl=EDnio de Paula > > UNICAMP > >=20 > > -----Mensagem original----- > > De: Gianfranco Delli Carri [mailto:gf....@nc...] > > Enviada em: quarta-feira, 30 de outubro de 2002 22:13 > > Para: 'mpl...@li...' > > Assunto: [mpls-linux-general] Zebra LDP session > >=20 > >=20 > > Hi to all, > >=20 > > I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra-0= .93b > > patched with ldp-portable-0.250. > >=20 > > When I a start mlpsd after zebra and ospfs, in my CISCO router MPLS/L= DP > > enabled, I can see the LDP connection setting UP, but after few secon= d (hold > > timer) it come down. > >=20 > > Debugging MPLSD I can see: > >=20 > > /usr/local/sbin/mplsd > > ldp_if_new: > > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts > > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 = mtu > > 16436 > > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo > > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metri= c 1 mtu > > 1500 > > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0 > > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250 > > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250 > > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0 > > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1 > > 2002/10/31 02:00:24 MPLS: ifindex 2=20 > > session delete > >=20 > > Debugging CISCO LDP: > >=20 > > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.25= 4.2.6 > > <-> 10.254.0.250 > > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30, > > 10.254.2.6:11439 <-> 10.254.0.250:646 > > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30 > > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0 > > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <-> > > 10.254.0.250:646, adj 0x67827E30 > > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.25= 4.2.6 > > <-> 10.254.0.250 > > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30, > > 10.254.2.6:11440 <-> 10.254.0.250:646 > > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0= ) > > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (p= p 0x0) > > Oct 31 02:00:29.604 CET: 
ldp: Rcvd keepalive msg from 10.254.0.250:0 = (pp > > 0x0) > > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:= 0 (pp > > 0x6225D768) > > etc... > > Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj > > 0x67827E30, 10.254.0.250:0, will close conn > > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > > 0x6225D768) > > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > > 0x6225D768) > > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x6782= 7E30 > > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <-> > > 10.254.0.250:646, adj 0x67827E30 > >=20 > > Ah... my MPLSD process come to use all the CPU time: > >=20 > > ps aux > > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAN= D > > root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10 > > /usr/local/sbin/mplsd > >=20 > > and I'm always unable to telnet on it, the session freeze. > >=20 > > telnet 10.254.0.250 2610 > > Trying 10.254.0.250... > > Connected to 10.254.0.250. > > Escape character is '^]'. > >=20 > >=20 > >=20 > > Have you any kind of idea ? > >=20 > > Thanks in advance. > >=20 > > Regards, > >=20 > > Gianfranco > >=20 > >=20 > > ------------------------------------------------------- > > This sf.net email is sponsored by: Influence the future=20 > > of Java(TM) technology. Join the Java Community=20 > > Process(SM) (JCP(SM)) program now.=20 > > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > > _______________________________________________ > > mpls-linux-general mailing list > > mpl...@li... > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > >=20 > >=20 > > ------------------------------------------------------- > > This sf.net email is sponsored by: Influence the future=20 > > of Java(TM) technology. Join the Java Community=20 > > Process(SM) (JCP(SM)) program now.=20 > > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > > _______________________________________________ > > mpls-linux-general mailing list > > mpl...@li... > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general >=20 > --=20 > James R. Leu --=20 James R. Leu |
|
From: <pp...@cp...> - 2002-10-31 21:47:46
|
My configuration of zebra included static routes! With LDP patch they = cause zebra segmentation fault! Without static routes, LDP-patched-zebra runs OK... Is this happening with everybody? See you! Pl=EDnio de Paula UNICAMP/Brazil -----Mensagem original----- De: James R. Leu [mailto:jl...@mi...] Enviada em: quinta-feira, 31 de outubro de 2002 15:11 Para: Pl=EDnio de Paula Cc: Gianfranco Delli Carri; mpl...@li... Assunto: Re: [mpls-linux-general] Zebra LDP Crash Do you have acore file? Can you get me the backtrace from the core = dump? On Thu, Oct 31, 2002 at 02:04:17PM -0300, Pl=EDnio de Paula wrote: > Hello Gianfranco, >=20 > I=B4m trying to compile zebra with LDP patch in the same configuration = as yours. The compilation goes OK, but > when I call zebra, it generates core dump. Have you crossed similar = problems? What did you do about them? >=20 > Pl=EDnio de Paula > UNICAMP >=20 > -----Mensagem original----- > De: Gianfranco Delli Carri [mailto:gf....@nc...] > Enviada em: quarta-feira, 30 de outubro de 2002 22:13 > Para: 'mpl...@li...' > Assunto: [mpls-linux-general] Zebra LDP session >=20 >=20 > Hi to all, >=20 > I have a linux box (2.4.19) patched with mpls-linux-1.170 and = zebra-0.93b > patched with ldp-portable-0.250. >=20 > When I a start mlpsd after zebra and ospfs, in my CISCO router = MPLS/LDP > enabled, I can see the LDP connection setting UP, but after few second = (hold > timer) it come down. >=20 > Debugging MPLSD I can see: >=20 > /usr/local/sbin/mplsd > ldp_if_new: > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 = mtu > 16436 > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric = 1 mtu > 1500 > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0 > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250 > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250 > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0 > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1 > 2002/10/31 02:00:24 MPLS: ifindex 2=20 > session delete >=20 > Debugging CISCO LDP: >=20 > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, = 10.254.2.6 > <-> 10.254.0.250 > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30, > 10.254.2.6:11439 <-> 10.254.0.250:646 > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30 > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0 > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <-> > 10.254.0.250:646, adj 0x67827E30 > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, = 10.254.2.6 > <-> 10.254.0.250 > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30, > 10.254.2.6:11440 <-> 10.254.0.250:646 > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0) > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (pp = 0x0) > Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:0 = (pp > 0x0) > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) 
> Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > etc... > Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj > 0x67827E30, 10.254.0.250:0, will close conn > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj = 0x67827E30 > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <-> > 10.254.0.250:646, adj 0x67827E30 >=20 > Ah... my MPLSD process come to use all the CPU time: >=20 > ps aux > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10 > /usr/local/sbin/mplsd >=20 > and I'm always unable to telnet on it, the session freeze. >=20 > telnet 10.254.0.250 2610 > Trying 10.254.0.250... > Connected to 10.254.0.250. > Escape character is '^]'. >=20 >=20 >=20 > Have you any kind of idea ? >=20 > Thanks in advance. >=20 > Regards, >=20 > Gianfranco >=20 >=20 > ------------------------------------------------------- > This sf.net email is sponsored by: Influence the future=20 > of Java(TM) technology. Join the Java Community=20 > Process(SM) (JCP(SM)) program now.=20 > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general >=20 >=20 > ------------------------------------------------------- > This sf.net email is sponsored by: Influence the future=20 > of Java(TM) technology. Join the Java Community=20 > Process(SM) (JCP(SM)) program now.=20 > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general --=20 James R. Leu |
|
From: <mes...@em...> - 2002-10-31 18:28:22
|
I have Red Hat Linux 7.3, a 2.4.14 kernel, and mpls-linux-1.1.170.
I am having problems compiling mpls-linux. The kernel compiles fine with MPLS
support included.
But when I run make mplsadm in the /utils dir, it returns these errors:
gcc -g -Wall -I/usr/include/linux -c -o mplsadm.o mplsadm.c
In file included from /usr/include/bits/types.h:143,
from /usr/include/stdio.h:36,
from mplsadm.c:2:
/usr/include/bits/pthreadtypes.h:48: parse error before `size_t'
/usr/include/bits/pthreadtypes.h:48: warning: no semicolon at end of struct or
union
/usr/include/bits/pthreadtypes.h:51: parse error before `__stacksize'
/usr/include/bits/pthreadtypes.h:51: warning: data definition has no type or
storage class
/usr/include/bits/pthreadtypes.h:52: warning: data definition has no type or
storage class
In file included from /usr/include/_G_config.h:44,
from /usr/include/libio.h:32,
from /usr/include/stdio.h:65,
from mplsadm.c:2:
/usr/include/gconv.h:72: parse error before `size_t'
/usr/include/gconv.h:85: parse error before `size_t'
/usr/include/gconv.h:94: parse error before `size_t'
/usr/include/gconv.h:170: parse error before `size_t'
/usr/include/gconv.h:170: warning: no semicolon at end of struct or union
/usr/include/gconv.h:173: parse error before `}'
/usr/include/gconv.h:173: warning: data definition has no type or storage
class
In file included from /usr/include/libio.h:32,
from /usr/include/stdio.h:65,
from mplsadm.c:2:
/usr/include/_G_config.h:47: field `__cd' has incomplete type
/usr/include/_G_config.h:50: field `__cd' has incomplete type
/usr/include/_G_config.h:53: confused by earlier errors, bailing out
make: *** [mplsadm.o] Error 1
Can anyone help me, please? Is there something missing in my Makefile?
Thank you in advance.
Matevz
-------------------
http://www.email.si
|
|
From: James R. L. <jl...@mi...> - 2002-10-31 17:10:23
|
Do you have acore file? Can you get me the backtrace from the core dump? On Thu, Oct 31, 2002 at 02:04:17PM -0300, Pl=EDnio de Paula wrote: > Hello Gianfranco, >=20 > I=B4m trying to compile zebra with LDP patch in the same configuration = as yours. The compilation goes OK, but > when I call zebra, it generates core dump. Have you crossed similar pro= blems? What did you do about them? >=20 > Pl=EDnio de Paula > UNICAMP >=20 > -----Mensagem original----- > De: Gianfranco Delli Carri [mailto:gf....@nc...] > Enviada em: quarta-feira, 30 de outubro de 2002 22:13 > Para: 'mpl...@li...' > Assunto: [mpls-linux-general] Zebra LDP session >=20 >=20 > Hi to all, >=20 > I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra-0.9= 3b > patched with ldp-portable-0.250. >=20 > When I a start mlpsd after zebra and ospfs, in my CISCO router MPLS/LDP > enabled, I can see the LDP connection setting UP, but after few second = (hold > timer) it come down. >=20 > Debugging MPLSD I can see: >=20 > /usr/local/sbin/mplsd > ldp_if_new: > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 mt= u > 16436 > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric = 1 mtu > 1500 > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0 > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250 > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250 > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0 > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1 > 2002/10/31 02:00:24 MPLS: ifindex 2=20 > session delete >=20 > Debugging CISCO LDP: >=20 > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.= 2.6 > <-> 10.254.0.250 > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30, > 10.254.2.6:11439 <-> 10.254.0.250:646 > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30 > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0 > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <-> > 10.254.0.250:646, adj 0x67827E30 > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.= 2.6 > <-> 10.254.0.250 > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30, > 10.254.2.6:11440 <-> 10.254.0.250:646 > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0) > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (pp = 0x0) > Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:0 (p= p > 0x0) > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 = (pp > 0x6225D768) > etc... 
> Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj > 0x67827E30, 10.254.0.250:0, will close conn > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x67827E= 30 > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <-> > 10.254.0.250:646, adj 0x67827E30 >=20 > Ah... my MPLSD process come to use all the CPU time: >=20 > ps aux > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10 > /usr/local/sbin/mplsd >=20 > and I'm always unable to telnet on it, the session freeze. >=20 > telnet 10.254.0.250 2610 > Trying 10.254.0.250... > Connected to 10.254.0.250. > Escape character is '^]'. >=20 >=20 >=20 > Have you any kind of idea ? >=20 > Thanks in advance. >=20 > Regards, >=20 > Gianfranco >=20 >=20 > ------------------------------------------------------- > This sf.net email is sponsored by: Influence the future=20 > of Java(TM) technology. Join the Java Community=20 > Process(SM) (JCP(SM)) program now.=20 > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general >=20 >=20 > ------------------------------------------------------- > This sf.net email is sponsored by: Influence the future=20 > of Java(TM) technology. Join the Java Community=20 > Process(SM) (JCP(SM)) program now.=20 > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general --=20 James R. Leu |
|
From: <pp...@cp...> - 2002-10-31 17:04:00
|
Hello Gianfranco,
I'm trying to compile zebra with the LDP patch in the same configuration
as yours. The compilation goes OK, but
when I call zebra, it generates a core dump. Have you run into similar
problems? What did you do about them?
Plínio de Paula
UNICAMP
-----Original Message-----
From: Gianfranco Delli Carri [mailto:gf....@nc...]
Sent: Wednesday, October 30, 2002 22:13
To: 'mpl...@li...'
Subject: [mpls-linux-general] Zebra LDP session
Hi to all,
I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra-0.93b
patched with ldp-portable-0.250.
When I start mplsd after zebra and ospfd, on my Cisco router with MPLS/LDP
enabled I can see the LDP connection coming up, but after a few seconds (hold
timer) it comes down.
Debugging MPLSD I can see:
/usr/local/sbin/mplsd
ldp_if_new:
2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts
2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 mtu
16436
2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo
2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric 1 mtu
1500
2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0
2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250
2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250
2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0
2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1
2002/10/31 02:00:24 MPLS: ifindex 2
session delete
Debugging CISCO LDP:
Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6
<-> 10.254.0.250
Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30,
10.254.2.6:11439 <-> 10.254.0.250:646
Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0)
Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30
10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0
Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <->
10.254.0.250:646, adj 0x67827E30
Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6
<-> 10.254.0.250
Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30,
10.254.2.6:11440 <-> 10.254.0.250:646
Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0)
Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0)
Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (pp 0x0)
Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:0 (pp
0x0)
Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
etc...
Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj
0x67827E30, 10.254.0.250:0, will close conn
Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x67827E30
Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <->
10.254.0.250:646, adj 0x67827E30
Ah... my MPLSD process comes to use all the CPU time:
ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10
/usr/local/sbin/mplsd
and I'm always unable to telnet to it; the session freezes.
telnet 10.254.0.250 2610
Trying 10.254.0.250...
Connected to 10.254.0.250.
Escape character is '^]'.
Have you any kind of idea?
Thanks in advance.
Regards,
Gianfranco
-------------------------------------------------------
This sf.net email is sponsored by: Influence the future
of Java(TM) technology. Join the Java Community
Process(SM) (JCP(SM)) program now.
http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en
_______________________________________________
mpls-linux-general mailing list
mpl...@li...
https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
|
|
From: James R. L. <jl...@mi...> - 2002-10-31 01:27:52
|
One other person has seen this. Try these things to help me debug it: -turn on mplsd tracing -run under gdb and hit cntrl C and get a stack trace. On Thu, Oct 31, 2002 at 02:13:11AM +0100, Gianfranco Delli Carri wrote: > Hi to all, > > I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra-0.93b > patched with ldp-portable-0.250. > > When I a start mlpsd after zebra and ospfs, in my CISCO router MPLS/LDP > enabled, I can see the LDP connection setting UP, but after few second (hold > timer) it come down. > > Debugging MPLSD I can see: > > /usr/local/sbin/mplsd > ldp_if_new: > 2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts > 2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 mtu > 16436 > 2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo > 2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric 1 mtu > 1500 > 2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0 > 2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250 > 2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250 > 2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0 > 2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1 > 2002/10/31 02:00:24 MPLS: ifindex 2 > session delete > > Debugging CISCO LDP: > > Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6 > <-> 10.254.0.250 > Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30, > 10.254.2.6:11439 <-> 10.254.0.250:646 > Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30 > 10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0 > Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <-> > 10.254.0.250:646, adj 0x67827E30 > Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6 > <-> 10.254.0.250 > Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30, > 10.254.2.6:11440 <-> 10.254.0.250:646 > Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0) > Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0) > Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (pp 0x0) > Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:0 (pp > 0x0) > Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp > 0x6225D768) > etc... > Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj > 0x67827E30, 10.254.0.250:0, will close conn > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp > 0x6225D768) > Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x67827E30 > Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <-> > 10.254.0.250:646, adj 0x67827E30 > > Ah... 
my MPLSD process come to use all the CPU time: > > ps aux > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10 > /usr/local/sbin/mplsd > > and I'm always unable to telnet on it, the session freeze. > > telnet 10.254.0.250 2610 > Trying 10.254.0.250... > Connected to 10.254.0.250. > Escape character is '^]'. > > > > Have you any kind of idea ? > > Thanks in advance. > > Regards, > > Gianfranco > > > ------------------------------------------------------- > This sf.net email is sponsored by: Influence the future > of Java(TM) technology. Join the Java Community > Process(SM) (JCP(SM)) program now. > http://ads.sourceforge.net/cgi-bin/redirect.pl?sunm0004en > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general -- James R. Leu |
|
From: Gianfranco D. C. <gf....@nc...> - 2002-10-31 01:11:49
|
Hi to all,
I have a linux box (2.4.19) patched with mpls-linux-1.170 and zebra-0.93b
patched with ldp-portable-0.250.
When I start mplsd after zebra and ospfd, on my Cisco router with MPLS/LDP
enabled I can see the LDP connection coming up, but after a few seconds (hold
timer) it comes down.
Debugging MPLSD I can see:
/usr/local/sbin/mplsd
ldp_if_new:
2002/10/31 02:00:24 MPLS: MPLSd (0.93b) starts
2002/10/31 02:00:24 MPLS: interface add lo index 1 flags 73 metric 1 mtu
16436
2002/10/31 02:00:24 MPLS: address add 127.0.0.1 to interface lo
2002/10/31 02:00:24 MPLS: interface add eth0 index 2 flags 4419 metric 1 mtu
1500
2002/10/31 02:00:24 MPLS: address add 10.254.0.250 to interface eth0
2002/10/31 02:00:24 MPLS: router-id change 10.254.0.250
2002/10/31 02:00:24 MPLS: router-id update 10.254.0.250
2002/10/31 02:00:24 MPLS: router add 0.0.0.0/0
2002/10/31 02:00:24 MPLS: nexthop 10.254.0.1
2002/10/31 02:00:24 MPLS: ifindex 2
session delete
Debugging CISCO LDP:
Oct 31 02:00:24.584 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6
<-> 10.254.0.250
Oct 31 02:00:24.584 CET: ldp: ldp conn is up; adj 0x67827E30,
10.254.2.6:11439 <-> 10.254.0.250:646
Oct 31 02:00:24.584 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0)
Oct 31 02:00:24.604 CET: ldp: ldp conn closed by peer; adj 0x67827E30
10.254.2.6:11439 <-> 10.254.0.250:646, FastEthernet0/0
Oct 31 02:00:24.604 CET: ldp: Closing ldp conn 10.254.2.6:11439 <->
10.254.0.250:646, adj 0x67827E30
Oct 31 02:00:29.588 CET: ldp: Opening ldp conn; adj 0x67827E30, 10.254.2.6
<-> 10.254.0.250
Oct 31 02:00:29.588 CET: ldp: ldp conn is up; adj 0x67827E30,
10.254.2.6:11440 <-> 10.254.0.250:646
Oct 31 02:00:29.588 CET: ldp: Sent init msg to 10.254.0.250 (pp 0x0)
Oct 31 02:00:29.600 CET: ldp: Rcvd init msg from 10.254.0.250 (pp 0x0)
Oct 31 02:00:29.600 CET: ldp: Sent keepalive msg to 10.254.0.250:0 (pp 0x0)
Oct 31 02:00:29.604 CET: ldp: Rcvd keepalive msg from 10.254.0.250:0 (pp
0x0)
Oct 31 02:00:29.608 CET: ldp: Sent address msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:29.608 CET: ldp: Sent label mapping msg to 10.254.0.250:0 (pp
0x6225D768)
etc...
Oct 31 02:00:44.605 CET: ldp: Discovery hold timer expired for adj
0x67827E30, 10.254.0.250:0, will close conn
Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:44.605 CET: ldp: Sent notif msg to 10.254.0.250:0 (pp
0x6225D768)
Oct 31 02:00:44.605 CET: ldp: Close LDP transport conn for adj 0x67827E30
Oct 31 02:00:44.605 CET: ldp: Closing ldp conn 10.254.2.6:11440 <->
10.254.0.250:646, adj 0x67827E30
Ah... my MPLSD process comes to use all the CPU time:
ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 769 98.4 0.7 2076 904 pts/0 R 02:00 5:10
/usr/local/sbin/mplsd
and I'm always unable to telnet to it; the session freezes.
telnet 10.254.0.250 2610
Trying 10.254.0.250...
Connected to 10.254.0.250.
Escape character is '^]'.
Have you any kind of idea?
Thanks in advance.
Regards,
Gianfranco
|
|
From: Georg K. <gk...@gi...> - 2002-10-30 17:27:00
|
Hi Jim,
> If the code was relying on the location of the bits within the unsigned
> int, then I would have to take into account endianess.
yes, you are right with that. But the MPLS kernel code assigns outgoing labels
as unsigned ints as you can see from the following code:
------------------------------------------
static unsigned int mpls_out_info_key = 1;
unsigned int mpls_get_out_key(void) {
mpls_out_info_key++;
return mpls_out_info_key;
}
------------------------------------------
These keys are then printed as bit fields in the mpls_print_key() function.
That is exactly what we are seeing: different prints on Pentium and PPC:
Pentium log:
mpls_add_out_label: enter
mpls_out_info_hold: enter
mpls_out_info_hold: new count 1
mpls_out_info_hold: exit
Key UNKNOWN 00000005
mpls_add_out_label: enter
and the PPC Log:
mpls_add_out_label: enter
mpls_out_info_hold: enter
mpls_out_info_hold: new count 1
mpls_out_info_hold: exit
Key GEN 1 0
mpls_add_out_label: enter
In both cases the key was assigned via the function mpls_get_out_key() to
the value 5!
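To make the difference concrete, here is a minimal stand-alone sketch. This is not code from the mpls-linux tree: the field widths are copied from the mpls_gen_key struct quoted in this thread, the wrapper names (gen_key, key_view) are made up for the example, and the results in the comments are simply what the two logs above show.
#include <stdio.h>

/* same field widths as struct mpls_gen_key in the kernel patch */
struct gen_key {
    unsigned int index:10;
    unsigned int gen:20;
    unsigned int type:2;
};

union key_view {
    struct gen_key gen;
    unsigned int mark;
};

int main(void)
{
    union key_view k;

    k.mark = 5;  /* the value handed out by mpls_get_out_key() */

    /*
     * x86 (gcc fills bit fields starting at the least significant bit):
     *   index=5 gen=0 type=0  -> mpls_print_key() reports "Key UNKNOWN 00000005"
     * PPC (gcc fills bit fields starting at the most significant bit):
     *   index=0 gen=1 type=1  -> mpls_print_key() reports "Key GEN 1 0"
     */
    printf("index=%u gen=%u type=%u\n",
           (unsigned) k.gen.index, (unsigned) k.gen.gen, (unsigned) k.gen.type);
    return 0;
}
The program itself runs fine everywhere; what differs between the two machines is how an integer that was not built through the same bit fields gets interpreted, which is exactly the mpls_get_out_key()/mpls_print_key() path.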
> ... Since the code only
> uses the bit fields to form a unique key and does not try to access the bit
> values once they have been converted to a key, there is no need to be
> concerned about byte-ordering.
So that is not true: the keys are assigned in the above case as integers and
afterwards (in the print function) interpreted as bitfields.
Please also note that I am not concerned about the byte ordering, but about the
bitfield order, which is a different issue! The question is whether the first
bit field is the most significant one or the least significant one.
Kind regards,
Georg Klug
|
|
From: James R. L. <jl...@mi...> - 2002-10-30 14:59:59
|
If the code were relying on the location of the bits within the unsigned
int, then I would have to take endianness into account. Since the code only
uses the bit fields to form a unique key and does not try to access the bit
values once they have been converted to a key, there is no need to be
concerned about byte ordering.
Try this simple program on an x86 and a PPC and you will see it just doesn't
matter.
#include <stdio.h>
#include <string.h>
struct bitfield {
unsigned int foo:10;
unsigned int bar:20;
unsigned int shemp:2;
};
struct holder {
union {
struct bitfield bits;
unsigned int mark;
} u;
};
int main(int argc, char **argv) {
unsigned int intermediate;
struct holder test;
/* fill in the individual bit fields */
memset(&test,0,sizeof(test));
test.u.bits.foo = 1022;
test.u.bits.bar = 33;
test.u.bits.shemp = 2;
/* flatten them into a plain unsigned int, i.e. the "key" */
intermediate = test.u.mark;
/* convert back on the same machine; the values survive no matter how the
   compiler ordered the bit fields (build and run this on both x86 and PPC
   to check) */
memset(&test,0,sizeof(test));
test.u.mark = intermediate;
fprintf(stdout, "foo: %d bar: %d shemp: %d\n", test.u.bits.foo,
test.u.bits.bar, test.u.bits.shemp);
return 0;
}
On Wed, Oct 30, 2002 at 01:33:45PM +0100, Georg Klug wrote:
> Hi Jim,
>
> > When I first heard of people trying to run mpls-linux on PPC the first
> > thing I though about was byte-ordering issues with these keys. After
> > thinking about it for a long time, it turns out that there is no need
> > to worry. Since the keys are only used inside the kernel and never
> > undergo a net-host or host-net conversion, it doesn't matter what the
> > resulting bit layout of the key is.
>
> I would usually agree, if the code would only interpret the bitfields
> when they were set that way. But have a look at the following lines in
> the function mpls_print_key:
>
> --------------------------- snip ---------------------
> void mpls_print_key(u32 key) {
> struct mpls_key mlk;
>
> mlk.u.mark = key;
> switch(mlk.u.gen.type) {
> case MPLS_LABEL_GEN:
> printk("Key GEN %d %d\n",mlk.u.gen.gen,mlk.u.gen.index);
> break;
> --------------------------- snip ---------------------
>
> Those construction, where a union is assigned via u32 and then interpreted
> as a bitfield, need the bitfield ordered correctly. (Especiall if a key is
> assigned through the function mpls_get_out_key() and then printed via
> mpls_print_key() it will lead to wrong output).
>
> So I would recommend at least to use the define __LITTLE_ENDIAN_BITFIELD.
>
> > mpls-linux uses all 32bits to store
> > it in the radix tree. The only thing that matters is that the value
> > is unique, which it is no matter what byte ordering we are dealing with.
> >
> > ldp-portable has to deal with this quite a bit. Look in
> > ldp-portable/lib/ldp_nortel.h and you will see many example of preservation
> > of bit ordering in a bitfield.
>
> They use a BITFIELD_ASCENDING macro depending on the
> LITTLE/BIG_ENDIAN_BYTE_ORDER,
> which is not portable in general, but works for gcc used with Linux.
>
> As I pointed out before, using bitfields is not portable in general,
> since the ordering is compiler and machine dependent and not defined in the
> C standard. Please see the following thread in the ppc-dev List as a reference.
>
> http://www.geocrawler.com/mail/thread.php3?subject=Help+with+cross-endian+bitfie
> lds%3F&list=3
>
> Do you agree?
>
> Kind regards,
> Georg klug
>
>
> >
> > On Tue, Oct 29, 2002 at 10:03:27PM +0100, Georg Klug wrote:
> > > Hi all,
> > >
> > > today I was wrestling with some endian issues in the MPLS kernel patch.
> > > Especially the following bitfield definition is interesting:
> > >
> > > struct mpls_gen_key {
> > > unsigned int index:10;
> > > unsigned int gen:20;
> > > unsigned int type:2;
> > > };
> > >
> > > The C language does not define exactly how these bits are stored into
> > > memory. It might either be right-to-left assigned or left-to-right
> > > assigned depending on the machine or even on the compiler used. So using
> > > this kind of struct is not portable in general. But looking into the
> > > kernel sources there are also such structure definitions (i.e. struct iphdr
> > > in include/linux/ip.h where the first byte of the ip header is defined as
> > > a bitfield).
> > >
> > > Anyway gcc on powerpc uses left-to-right assigned bit-fields, which
> > > causes the struct above not to work correctly on powerpc.
> > >
> > > Here is what the struct would look like on powerpc machines:
> > >
> > > Byte 0 Byte 1 Byte 2 Byte 3
> > > 76543210 76543210 76543210 76543210
> > > | index || ----------- gen -------|||< (last 2 bits are type)
> > >
> > > On the little endian x86 architecture with right-to-left assigned bit-
> > > fields the struct look like:
> > >
> > > Byte 0 Byte 1 Byte 2 Byte 3
> > > 76543210 76543210 76543210 76543210
> > > |in-lo | |g-lo||| | g-mid| |||g-hi|
> > >
> > > So the original fields are stored as follows:
> > > index: Byte1[Bits 1-0] Byte0[Bits 7-0]
> > > gen: Byte3[Bits 5-0] Byte2[Bits 7-0] Byte1[Bits 7-2]
> > > type Byte3[Bits 7-6]
> > >
> > > This looks rather complex but after byte swapping which must be applied
> > > on little-endian machines before sending the data on the wire it looks
> > > as follows:
> > >
> > > Byte 0 Byte 1 Byte 2 Byte 3
> > > 76543210 76543210 76543210 76543210
> > > ||| ------- gen -----------||-----index-|
> > >
> > > which is what we wanted.
> > >
> > > As we saw from the C standard there is no good solution working under
> > > any circumstances. But the kernel uses such structs with the define
> > > __LITTLE_ENDIAN_BITFIELD, and therfore a solution to make the struct
> > > above working on big endian machines too would be:
> > >
> > > struct mpls_gen_key {
> > > #if defined (__LITTLE_ENDIAN_BITFIELD)
> > > unsigned int index:10;
> > > unsigned int gen:20;
> > > unsigned int type:2;
> > > #else
> > > unsigned int type:2;
> > > unsigned int gen:20;
> > > unsigned int index:10;
> > > #endif
> > > };
> > >
> > > What do you think?
> > >
> > > Kind regards,
> > > Georg Klug
> > >
> > >
> > >
> > > -------------------------------------------------------
> > > This sf.net email is sponsored by:ThinkGeek
> > > Welcome to geek heaven.
> > > http://thinkgeek.com/sf
> > > _______________________________________________
> > > mpls-linux-general mailing list
> > > mpl...@li...
> > > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
> >
> > --
> > James R. Leu
> >
> >
> > -------------------------------------------------------
> > This sf.net email is sponsored by:ThinkGeek
> > Welcome to geek heaven.
> > http://thinkgeek.com/sf
> > _______________________________________________
> > mpls-linux-general mailing list
> > mpl...@li...
> > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
> >
>
--
James R. Leu
|
|
From: James R. L. <jl...@mi...> - 2002-10-30 12:59:13
|
What version of the kernel are you trying to patch? On Wed, Oct 30, 2002 at 05:27:30PM +0530, Amritpal Singh wrote: > Hi All > > I am trying to upgrade the MPLS version running in my small setup from 1.0 to 1.170(latest) > > But Iam facing problems when I try to patch the kernel with the latest patch. > > Here is the log of the error I get when I issue the command patch -p1 < ../mpls-linux-1.1/patches/linux-kernel.diff > > [root@Terminus linux]# patch -p1 < ../mpls-linux-1.1/patches/linux-kernel.diff > > patching file drivers/net/ppp_generic.c > > Hunk #1 succeeded at 55 (offset -2 lines). > > Hunk #2 succeeded at 257 (offset -25 lines). > > Hunk #3 succeeded at 294 (offset -2 lines). > > Hunk #4 succeeded at 289 (offset -25 lines). > > Hunk #5 succeeded at 326 (offset -2 lines). > > patching file include/linux/if_arp.h > > Hunk #1 FAILED at 83. > > 1 out of 1 hunk FAILED -- saving rejects to file include/linux/if_arp.h.rej > > patching file include/linux/if_ether.h > > Hunk #1 succeeded at 60 (offset -1 lines). > > patching file include/linux/mpls.h > > patching file include/linux/netdevice.h > > Hunk #1 succeeded at 317 (offset -12 lines). > > patching file include/linux/netfilter_ipv4/ipt_MPLS.h > > patching file include/linux/ppp_defs.h > > patching file include/linux/rtnetlink.h > > patching file include/net/dst.h > > patching file include/net/ip.h > > patching file include/net/ip_fib.h > > patching file include/net/mpls.h > > patching file net/Config.in > > Hunk #1 succeeded at 75 with fuzz 1 (offset -6 lines). > > patching file net/Makefile > > patching file net/core/dst.c > > patching file net/core/neighbour.c > > Hunk #1 succeeded at 963 (offset 11 lines). > > patching file net/ipv4/af_inet.c > > Hunk #1 succeeded at 116 with fuzz 2. > > Hunk #2 succeeded at 928 (offset -8 lines). > > patching file net/ipv4/fib_semantics.c > > Hunk #1 succeeded at 148 (offset -2 lines). > > Hunk #2 succeeded at 238 (offset -1 lines). > > Hunk #3 succeeded at 474 (offset -10 lines). > > Hunk #4 succeeded at 500 (offset -1 lines). > > Hunk #5 succeeded at 680 (offset -8 lines). > > Hunk #6 succeeded at 707 (offset -1 lines). > > patching file net/ipv4/ip_output.c > > Hunk #1 succeeded at 112 (offset -1 lines). > > patching file net/ipv4/netfilter/Config.in > > Hunk #1 succeeded at 59 with fuzz 1 (offset -15 lines). > > patching file net/ipv4/netfilter/Makefile > > Hunk #1 succeeded at 61 with fuzz 1 (offset -7 lines). > > patching file net/ipv4/netfilter/ipt_MPLS.c > > patching file net/ipv4/route.c > > Hunk #2 succeeded at 1201 (offset -10 lines). > > Hunk #4 succeeded at 2064 (offset -7 lines). > > Hunk #6 succeeded at 2158 (offset -7 lines). > > patching file net/mpls/Makefile > > patching file net/mpls/mpls_if.c > > patching file net/mpls/mpls_in_info.c > > patching file net/mpls/mpls_init.c > > patching file net/mpls/mpls_input.c > > patching file net/mpls/mpls_ioctls.c > > patching file net/mpls/mpls_opcode.c > > patching file net/mpls/mpls_out_info.c > > patching file net/mpls/mpls_output.c > > patching file net/mpls/mpls_proc.c > > patching file net/mpls/mpls_ref.c > > patching file net/mpls/mpls_tunnel.c > > patching file net/mpls/mpls_utils.c > > patching file net/netsyms.c > > "Hunk #1 FAILED at 577. > > 1 out of 1 hunk FAILED -- saving rejects to file net/netsyms.c.rej" > > [root@Terminus linux]# > > > > And when I went ahead and did a make bzImage(having done make dep and make clean) , I got the following error. 
> > > > rict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o igmp.o igmp.c > > gcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o sysctl_net_ipv4.o sysctl_net_ipv4.c > > gcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o fib_frontend.o fib_frontend.c > > gcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o fib_semantics.o fib_semantics.c > > gcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o fib_hash.o fib_hash.c > > rm -f ipv4.o > > ld -m elf_i386 -r -o ipv4.o utils.o route.o inetpeer.o proc.o protocol.o ip_input.o ip_fragment.o ip_forward.o ip_options.o ip_output.o ip_sockglue.o tcp.o tcp_input.o tcp_output.o tcp_timer.o tcp_ipv4.o tcp_minisocks.o raw.o udp.o arp.o icmp.o devinet.o af_inet.o igmp.o sysctl_net_ipv4.o fib_frontend.o fib_semantics.o fib_hash.o > > make[3]: Leaving directory `/usr/src/linux/net/ipv4' > > make[2]: Leaving directory `/usr/src/linux/net/ipv4' > > make -C ipv4/netfilter > > make[2]: Entering directory `/usr/src/linux/net/ipv4/netfilter' > > make all_targets > > make[3]: Entering directory `/usr/src/linux/net/ipv4/netfilter' > > gcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -DEXPORT_SYMTAB -c ip_tables.c > > gcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o iptable_mangle.o iptable_mangle.cgcc -D__KERNEL__ -I/usr/src/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -c -o ipt_MPLS.o ipt_MPLS.c > > ipt_MPLS.c: In function `target': > > ipt_MPLS.c:20: warning: unused variable `i' > > ipt_MPLS.c: At top level: > > "ipt_MPLS.c:70: parse error before string constant > > ipt_MPLS.c:70: warning: type defaults to `int' in declaration of `MODULE_LICENSE' > > ipt_MPLS.c:70: warning: function declaration isn't a prototype > > ipt_MPLS.c:70: warning: data definition has no type or storage class > > make[3]: *** [ipt_MPLS.o] Error 1" > > make[3]: Leaving directory `/usr/src/linux/net/ipv4/netfilter' > > make[2]: *** [first_rule] Error 2 > > make[2]: Leaving directory `/usr/src/linux/net/ipv4/netfilter' > > make[1]: *** [_subdir_ipv4/netfilter] Error 2 > > make[1]: Leaving directory `/usr/src/linux/net' > > make: *** [_dir_net] Error 2 > > [root@Terminus linux]# > > > > Is this a known problem with the patch or a misconfiguration at my end ? > > I didnt see any problem when I applied the mpls-linux-1.0 patch 2-3 months back. > > > > Please help > > Amrit > > -------------------------------- > -- James R. Leu |
|
From: Georg K. <gk...@gi...> - 2002-10-30 12:30:36
|
Hi Jim,
> When I first heard of people trying to run mpls-linux on PPC the first
> thing I though about was byte-ordering issues with these keys. After
> thinking about it for a long time, it turns out that there is no need
> to worry. Since the keys are only used inside the kernel and never
> undergo a net-host or host-net conversion, it doesn't matter what the
> resulting bit layout of the key is.
I would usually agree if the code only interpreted the bitfields
when they were set that way. But have a look at the following lines in
the function mpls_print_key():
--------------------------- snip ---------------------
void mpls_print_key(u32 key) {
struct mpls_key mlk;
mlk.u.mark = key;
switch(mlk.u.gen.type) {
case MPLS_LABEL_GEN:
printk("Key GEN %d %d\n",mlk.u.gen.gen,mlk.u.gen.index);
break;
--------------------------- snip ---------------------
Those constructions, where a union is assigned via a u32 and then interpreted
as a bitfield, need the bitfields ordered correctly. (Especially if a key is
assigned through the function mpls_get_out_key() and then printed via
mpls_print_key(), it will lead to wrong output.)
So I would recommend at least to use the define __LITTLE_ENDIAN_BITFIELD.
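Another option that sidesteps the bitfield-order question entirely is to pack the key with explicit shifts and masks. A minimal sketch, assuming only the field widths from mpls_gen_key (index:10, gen:20, type:2); the macro and function names here are invented for the example and are not from the mpls-linux tree:
#include <stdio.h>

#define KEY_INDEX_BITS  10
#define KEY_GEN_BITS    20
#define KEY_INDEX_MASK  ((1u << KEY_INDEX_BITS) - 1)
#define KEY_GEN_MASK    ((1u << KEY_GEN_BITS) - 1)
#define KEY_GEN_SHIFT   KEY_INDEX_BITS
#define KEY_TYPE_SHIFT  (KEY_INDEX_BITS + KEY_GEN_BITS)

/* pack type/gen/index into one u32; the bit layout is fixed by the shifts,
   so it is identical on every compiler and architecture */
static unsigned int key_pack(unsigned int type, unsigned int gen,
                             unsigned int index)
{
    return (type << KEY_TYPE_SHIFT) |
           ((gen & KEY_GEN_MASK) << KEY_GEN_SHIFT) |
           (index & KEY_INDEX_MASK);
}

int main(void)
{
    unsigned int key = key_pack(1, 1, 0);

    /* decode with the same shifts; prints
       "key=0x40000400 type=1 gen=1 index=0" on x86 and PPC alike */
    printf("key=0x%08x type=%u gen=%u index=%u\n", key,
           key >> KEY_TYPE_SHIFT,
           (key >> KEY_GEN_SHIFT) & KEY_GEN_MASK,
           key & KEY_INDEX_MASK);
    return 0;
}
Bit fields could still be kept where they read better, but then guarded with __LITTLE_ENDIAN_BITFIELD / __BIG_ENDIAN_BITFIELD as in the struct proposed below.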
> mpls-linux uses all 32bits to store
> it in the radix tree. The only thing that matters is that the value
> is unique, which it is no matter what byte ordering we are dealing with.
>
> ldp-portable has to deal with this quite a bit. Look in
> ldp-portable/lib/ldp_nortel.h and you will see many example of preservation
> of bit ordering in a bitfield.
They use a BITFIELD_ASCENDING macro depending on the
LITTLE/BIG_ENDIAN_BYTE_ORDER,
which is not portable in general, but works for gcc used with Linux.
As I pointed out before, using bitfields is not portable in general,
since the ordering is compiler and machine dependent and not defined in the
C standard. Please see the following thread in the ppc-dev List as a reference.
http://www.geocrawler.com/mail/thread.php3?subject=Help+with+cross-endian+bitfields%3F&list=3
Do you agree?
Kind regards,
Georg Klug
>
> On Tue, Oct 29, 2002 at 10:03:27PM +0100, Georg Klug wrote:
> > Hi all,
> >
> > today I was wrestling with some endian issues in the MPLS kernel patch.
> > Especially the following bitfield definition is interesting:
> >
> > struct mpls_gen_key {
> > unsigned int index:10;
> > unsigned int gen:20;
> > unsigned int type:2;
> > };
> >
> > The C language does not define exactly how these bits are stored into
> > memory. It might either be right-to-left assigned or left-to-right
> > assigned depending on the machine or even on the compiler used. So using
> > this kind of struct is not portable in general. But looking into the
> > kernel sources there are also such structure definitions (i.e. struct iphdr
> > in include/linux/ip.h where the first byte of the ip header is defined as
> > a bitfield).
> >
> > Anyway gcc on powerpc uses left-to-right assigned bit-fields, which
> > causes the struct above not to work correctly on powerpc.
> >
> > Here is what the struct would look like on powerpc machines:
> >
> > Byte 0 Byte 1 Byte 2 Byte 3
> > 76543210 76543210 76543210 76543210
> > | index || ----------- gen -------|||< (last 2 bits are type)
> >
> > On the little endian x86 architecture with right-to-left assigned bit-
> > fields the struct looks like:
> >
> > Byte 0 Byte 1 Byte 2 Byte 3
> > 76543210 76543210 76543210 76543210
> > |in-lo | |g-lo||| | g-mid| |||g-hi|
> >
> > So the original fields are stored as follows:
> > index: Byte1[Bits 1-0] Byte0[Bits 7-0]
> > gen: Byte3[Bits 5-0] Byte2[Bits 7-0] Byte1[Bits 7-2]
> > type Byte3[Bits 7-6]
> >
> > This looks rather complex but after byte swapping which must be applied
> > on little-endian machines before sending the data on the wire it looks
> > as follows:
> >
> > Byte 0 Byte 1 Byte 2 Byte 3
> > 76543210 76543210 76543210 76543210
> > ||| ------- gen -----------||-----index-|
> >
> > which is what we wanted.
> >
> > As we saw from the C standard there is no good solution working under
> > any circumstances. But the kernel uses such structs with the define
> > __LITTLE_ENDIAN_BITFIELD, and therefore a solution to make the struct
> > above work on big endian machines too would be:
> >
> > struct mpls_gen_key {
> > #if defined (__LITTLE_ENDIAN_BITFIELD)
> > unsigned int index:10;
> > unsigned int gen:20;
> > unsigned int type:2;
> > #else
> > unsigned int type:2;
> > unsigned int gen:20;
> > unsigned int index:10;
> > #endif
> > };
> >
> > What do you think?
> >
> > Kind regards,
> > Georg Klug
> >
> >
> >
> > -------------------------------------------------------
> > This sf.net email is sponsored by:ThinkGeek
> > Welcome to geek heaven.
> > http://thinkgeek.com/sf
> > _______________________________________________
> > mpls-linux-general mailing list
> > mpl...@li...
> > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
>
> --
> James R. Leu
>
>
> -------------------------------------------------------
> This sf.net email is sponsored by:ThinkGeek
> Welcome to geek heaven.
> http://thinkgeek.com/sf
> _______________________________________________
> mpls-linux-general mailing list
> mpl...@li...
> https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
>
|
|
From: James R. L. <jl...@mi...> - 2002-10-30 02:19:39
|
Hello,
When I first heard of people trying to run mpls-linux on PPC the first
thing I thought about was byte-ordering issues with these keys. After
thinking about it for a long time, it turns out that there is no need
to worry. Since the keys are only used inside the kernel and never
undergo a net-host or host-net conversion, it doesn't matter what the
resulting bit layout of the key is. mpls-linux uses all 32 bits to store
it in the radix tree. The only thing that matters is that the value
is unique, which it is no matter what byte ordering we are dealing with.
ldp-portable has to deal with this quite a bit. Look in
ldp-portable/lib/ldp_nortel.h and you will see many examples of preservation
of bit ordering in a bitfield.
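A small userspace sketch of the distinction being drawn here (the numbers and the shim-header math are illustrative, not taken from mpls-linux): a key that never leaves the host can be used as an opaque 32-bit value, while anything written into a packet has to go through htonl()/ntohl():
--------------------------- snip ---------------------
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
        unsigned int key = 0xc00003ffu;  /* internal lookup key */
        unsigned int label = 100;        /* label value headed for a packet */
        unsigned int shim;

        /* The key is only ever compared with other keys on the same host,
         * so its bit layout does not matter as long as the value is unique. */
        printf("internal key, used as-is: 0x%08x\n", key);

        /* Anything written into a packet is different: the 20-bit label goes
         * into the top of the shim word, which must be in network byte order. */
        shim = htonl((label << 12) | (1 << 8) | 64);  /* S bit set, TTL 64 */
        printf("shim word in network order: 0x%08x\n", shim);
        return 0;
}
--------------------------- snip ---------------------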
On Tue, Oct 29, 2002 at 10:03:27PM +0100, Georg Klug wrote:
> Hi all,
>
> today I was wrestling with some endian issues in the MPLS kernel patch.
> Especially the following bitfield definition is interesting:
>
> struct mpls_gen_key {
> unsigned int index:10;
> unsigned int gen:20;
> unsigned int type:2;
> };
>
> The C language does not define exactly how these bits are stored into
> memory. It might either be right-to-left assigned or left-to-right
> assigned depending on the machine or even on the compiler used. So using
> this kind of struct is not portable in general. But looking into the
> kernel sources there are also such structure definitions (i.e. struct iphdr
> in include/linux/ip.h where the first byte of the ip header is defined as
> a bitfield).
>
> Anyway gcc on powerpc uses left-to-right assigned bit-fields, which
> causes the struct above not to work correctly on powerpc.
>
> Here is what the struct would look like on powerpc machines:
>
> Byte 0 Byte 1 Byte 2 Byte 3
> 76543210 76543210 76543210 76543210
> | index || ----------- gen -------|||< (last 2 bits are type)
>
> On the little endian x86 architecture with right-to-left assigned bit-
> fields the struct looks like:
>
> Byte 0 Byte 1 Byte 2 Byte 3
> 76543210 76543210 76543210 76543210
> |in-lo | |g-lo||| | g-mid| |||g-hi|
>
> So the original fields are stored as follows:
> index: Byte1[Bits 1-0] Byte0[Bits 7-0]
> gen: Byte3[Bits 5-0] Byte2[Bits 7-0] Byte1[Bits 7-2]
> type Byte3[Bits 7-6]
>
> This looks rather complex but after byte swapping which must be applied
> on little-endian machines before sending the data on the wire it looks
> as follows:
>
> Byte 0 Byte 1 Byte 2 Byte 3
> 76543210 76543210 76543210 76543210
> ||| ------- gen -----------||-----index-|
>
> which is what we wanted.
>
> As we saw from the C standard there is no good solution working under
> any circumstances. But the kernel uses such structs with the define
> __LITTLE_ENDIAN_BITFIELD, and therefore a solution to make the struct
> above work on big endian machines too would be:
>
> struct mpls_gen_key {
> #if defined (__LITTLE_ENDIAN_BITFIELD)
> unsigned int index:10;
> unsigned int gen:20;
> unsigned int type:2;
> #else
> unsigned int type:2;
> unsigned int gen:20;
> unsigned int index:10;
> #endif
> };
>
> What do you think?
>
> Kind regards,
> Georg Klug
>
>
>
> -------------------------------------------------------
> This sf.net email is sponsored by:ThinkGeek
> Welcome to geek heaven.
> http://thinkgeek.com/sf
> _______________________________________________
> mpls-linux-general mailing list
> mpl...@li...
> https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
--
James R. Leu
|
|
From: Georg K. <gk...@gi...> - 2002-10-29 21:00:10
|
Hi all,
today I was wrestling with some endian issues in the MPLS kernel patch.
Especially the following bitfield definition is interesting:
struct mpls_gen_key {
        unsigned int index:10;
        unsigned int gen:20;
        unsigned int type:2;
};
The C language does not define exactly how these bits are stored into
memory. It might either be right-to-left assigned or left-to-right
assigned depending on the machine or even on the compiler used. So using
this kind of struct is not portable in general. But looking into the
kernel sources there are also such structure definitions (i.e. struct iphdr
in include/linux/ip.h where the first byte of the ip header is defined as
a bitfield).
Anyway gcc on powerpc uses left-to-right assigned bit-fields, which
causes the struct above not to work correctly on powerpc.
Here is what the struct would look like on powerpc machines:
Byte 0 Byte 1 Byte 2 Byte 3
76543210 76543210 76543210 76543210
| index || ----------- gen -------|||< (last 2 bits are type)
On the little endian x86 architecture with right-to-left assigned bit-
fields the struct looks like:
Byte 0 Byte 1 Byte 2 Byte 3
76543210 76543210 76543210 76543210
|in-lo | |g-lo||| | g-mid| |||g-hi|
So the original fields are stored as follows:
index: Byte1[Bits 1-0] Byte0[Bits 7-0]
gen: Byte3[Bits 5-0] Byte2[Bits 7-0] Byte1[Bits 7-2]
type Byte3[Bits 7-6]
This looks rather complex but after byte swapping which must be applied
on little-endian machines before sending the data on the wire it looks
as follows:
Byte 0 Byte 1 Byte 2 Byte 3
76543210 76543210 76543210 76543210
||| ------- gen -----------||-----index-|
which is what we wanted.
As we saw from the C standard there is no good solution working under
any circumstances. But the kernel uses such structs with the define
__LITTLE_ENDIAN_BITFIELD, and therefore a solution to make the struct
above work on big endian machines too would be:
struct mpls_gen_key {
#if defined (__LITTLE_ENDIAN_BITFIELD)
        unsigned int index:10;
        unsigned int gen:20;
        unsigned int type:2;
#else
        unsigned int type:2;
        unsigned int gen:20;
        unsigned int index:10;
#endif
};
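A quick way to check this proposal on both targets is a standalone program that builds the key and dumps the resulting 32-bit value. This is only a test sketch: its own LITTLE_BITFIELDS switch stands in for the kernel's __LITTLE_ENDIAN_BITFIELD, so compile with -DLITTLE_BITFIELDS on x86 and without it on big-endian PPC:
--------------------------- snip ---------------------
#include <stdio.h>
#include <string.h>

struct mpls_gen_key {
#if defined(LITTLE_BITFIELDS)
        unsigned int index:10;
        unsigned int gen:20;
        unsigned int type:2;
#else
        unsigned int type:2;
        unsigned int gen:20;
        unsigned int index:10;
#endif
};

int main(void)
{
        struct mpls_gen_key k;
        unsigned int value;

        k.index = 0x3ff;   /* all index bits set */
        k.gen = 0;
        k.type = 0x3;      /* all type bits set */
        memcpy(&value, &k, sizeof(value));

        /* With the reordered fields both builds should report 0xc00003ff;
         * forcing the unmodified order on PPC (-DLITTLE_BITFIELDS) should
         * report 0xffc00003 instead, showing the mismatch. */
        printf("key as a 32-bit value: 0x%08x\n", value);
        return 0;
}
--------------------------- snip ---------------------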
What do you think?
Kind regards,
Georg Klug
|
|
From: James R. L. <jl...@mi...> - 2002-10-28 21:48:37
|
I've applied your fix to my development tree.
Thank you.
On Mon, Oct 28, 2002 at 03:25:54PM +0100, Georg Klug wrote:
> Hi all,
>
> playing a bit with the nice IOCTL functions of the MPLS patched
> kernel (v1.170), I found out that the SIOCMPLSNHLFEGET ioctl does
> not work correctly. The problem probably lies in the function
> mpls_get_out_label(), where a memcpy of the looked-up info just
> changes the local variable label.
> IMHO a memcpy is wrong here, because the two arguments
> point to different types. So I tried the following change in net/mpls/:
>
>
> -------------------------------- snip -------------
> diff -u mpls_out_info.c mpls_out_info.c.orig
> --- mpls_out_info.c Mon Oct 28 14:50:00 2002
> +++ mpls_out_info.c.orig Fri Sep 27 11:01:44 2002
> @@ -290,14 +290,8 @@
> retval = -ESRCH;
> goto mpls_get_out_label_cleanup;
> }
> -// memcpy(&(out->mol_label),&label,sizeof(struct mpls_label));
> - out->mol_age = moi->moi_age;
> - out->mol_mtu = moi->moi_mtu;
> - out->mol_propogate_ttl = moi->moi_propogate_ttl;
> - out->mol_label.ml_index = moi->moi_ifindex;
> - out->mol_label.ml_type = MPLS_LABEL_KEY;
> - out->mol_label.u.ml_key = moi->moi_key;
> - out->mol_label.__refcnt = moi->__refcnt;
> + memcpy(&(out->mol_label),&label,sizeof(struct mpls_label));
> + out->mol_age = moi->moi_age;
> mpls_out_info_release(moi);
> } else {
> retval = -ENXIO;
> -------------------------------------------------
>
> What do you think about this patch?
>
> Kind regards,
> Georg Klug
>
>
>
>
>
>
> -------------------------------------------------------
> This sf.net email is sponsored by:ThinkGeek
> Welcome to geek heaven.
> http://thinkgeek.com/sf
> _______________________________________________
> mpls-linux-general mailing list
> mpl...@li...
> https://lists.sourceforge.net/lists/listinfo/mpls-linux-general
--
James R. Leu
|