Re: [mpls-linux-devel] mpls-linux on 64 bit kernel
From: Chris R. <Chr...@nr...> - 2010-05-18 14:49:14
Hi Jorge

Good and bad news. The good news is that the lock-up is fixed: I can see, via
Wireshark and debug, ping packets being received without the system locking.
The bad news is that the reply is not going out. Although I'm not sure the
back-to-back host configuration is correct, so that may be the reason.
Ergo, here is what is configured:

Host A:
  eth1   - 10.50.1.1/24
  eth0:1 - 10.50.210.1/24

  /sbin/mpls labelspace set dev eth1 labelspace 0
  LSPKey=`/sbin/mpls nhlfe add key 0 instructions push gen 1000 nexthop eth1 ipv4 10.50.1.2 | grep key | cut -c 17-26`
  /sbin/ip route add 10.50.110.0/24 via 10.50.1.2 mpls $LSPKey
  /sbin/mpls ilm add label gen 2000 labelspace 0

Host B:
  eth1   - 10.50.1.2/24
  eth0:1 - 10.50.110.1/24

  /sbin/mpls labelspace set dev eth1 labelspace 0
  LSPKey=`/sbin/mpls nhlfe add key 0 instructions push gen 2000 nexthop eth1 ipv4 10.50.1.1 | grep key | cut -c 17-26`
  /sbin/ip route add 10.50.210.0/24 via 10.50.1.1 mpls $LSPKey
  /sbin/mpls ilm add label gen 1000 labelspace 0

+++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++

Host A configuration info:

eth0      Link encap:Ethernet  HWaddr 00:21:86:52:8D:E4
          inet addr:10.128.112.6  Bcast:10.128.112.255  Mask:255.255.255.0
          inet6 addr: fe80::221:86ff:fe52:8de4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:512 errors:0 dropped:0 overruns:0 frame:0
          TX packets:666 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:70643 (68.9 KiB)  TX bytes:84260 (82.2 KiB)
          Memory:fa200000-fa220000

eth0:1    Link encap:Ethernet  HWaddr 00:21:86:52:8D:E4
          inet addr:10.50.110.1  Bcast:10.50.110.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Memory:fa200000-fa220000

eth1      Link encap:Ethernet  HWaddr 00:05:1B:00:47:5E
          inet addr:10.50.1.2  Bcast:10.50.1.255  Mask:255.255.255.0
          inet6 addr: fe80::205:1bff:fe00:475e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3469 (3.3 KiB)  TX bytes:5209 (5.0 KiB)

# mpls nhlfe show
NHLFE entry key 0x00000002 mtu 1496 propagate_ttl
        push gen 2000 set eth1 ipv4 10.50.1.1
        (0 bytes, 0 pkts)

# mpls ilm show
ILM entry label gen 1000 labelspace 0 proto ipv4
        pop peek
        (0 bytes, 0 pkts)

# ip route show
10.50.210.0/24 via 10.50.1.1 dev eth1  mpls 0x2

++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++

Host B configuration dump:

eth0      Link encap:Ethernet  HWaddr 00:21:86:52:8D:E4
          inet addr:10.128.112.6  Bcast:10.128.112.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:947 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1293 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:141125 (137.8 KiB)  TX bytes:156121 (152.4 KiB)
          Memory:fa200000-fa220000

eth0:1    Link encap:Ethernet  HWaddr 00:21:86:52:8D:E4
          inet addr:10.50.110.1  Bcast:10.50.110.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Memory:fa200000-fa220000

eth1      Link encap:Ethernet  HWaddr 00:05:1B:00:47:5E
          inet addr:10.50.1.2  Bcast:10.50.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3551 (3.4 KiB)  TX bytes:5209 (5.0 KiB)

# mpls nhlfe show
NHLFE entry key 0x00000002 mtu 1496 propagate_ttl
        push gen 2000 set eth1 ipv4 10.50.1.1
        (1260 bytes, 15 pkts)

[root@FARP ~]# mpls ilm show
ILM entry label gen 1000 labelspace 0 proto ipv4
        pop peek
        (0 bytes, 0 pkts)

...................Chris

On 05/18/2010 06:00 AM, Jorge Boncompte [DTI2] wrote:
> On 18/05/2010 11:09, Jorge Boncompte [DTI2] wrote:
>
>> On 18/05/2010 0:15, Chris Robson wrote:
>>
>>> Ashan
>>>
>>> I'm currently trying to test this but am having lock-up problems. However,
>>> I have cornered the problem to the following code snippet. Note the
>>> conditional compile statement. This may be the cause of the x86_64
>>> system locking up, but (and a big but) it also locks up on my x86_32
>>> systems as well. I will be collecting more data tomorrow, but if anyone
>>> has any ideas I'm all ears. FYI, the lock-up occurs when
>>> mpls_opcode_peek is called from mpls_skb_recv().
>>>
>>> int mpls_opcode_peek(struct sk_buff *skb)
>>> {
>>>     u32 shim;
>>>
>>> #define CAN_WE_ASSUME_32BIT_ALIGNED 0
>>> #if CAN_WE_ASSUME_32BIT_ALIGNED
>>>     shim = ntohl(*((u32*)&skb->network_header));
>>> #else
>>>     memcpy(&shim, skb->network_header, sizeof(u32));
>>>     shim = ntohl(shim);
>>> #endif
>>>
>>>     if (!(MPLSCB(skb)->flag)) {
>>>         MPLSCB(skb)->ttl = shim & 0xFF;
>>>         MPLSCB(skb)->flag = 1;
>>>     }
>>>     MPLSCB(skb)->bos   = (shim >> 8)  & 0x1;
>>>     MPLSCB(skb)->exp   = (shim >> 9)  & 0x7;
>>>     MPLSCB(skb)->label = (shim >> 12) & 0xFFFFF;
>>>
>>>     return MPLS_RESULT_RECURSE;
>>> }
>>>
>> Ok, that rang a bell. The problem is that on arches with BITS_PER_LONG > 32,
>> some of the members of the skbuff are not pointers (network_header,
>> transport_header, ...) but offsets.
>>
>> I'll cook a patch and send it to you to test.
>>
> Try this. Maybe it does not apply to your tree because it is against the
> codebase I am cleaning up, but it should be simple to resolve.
>
>
> ------------------------------------------------------------------------------
>
> _______________________________________________
> mpls-linux-devel mailing list
> mpl...@li...
> https://lists.sourceforge.net/lists/listinfo/mpls-linux-devel
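
The change being discussed amounts to reading the shim through the
skb_network_header() accessor, which hides the pointer-versus-offset
difference on 64-bit kernels, and copying it out without assuming 32-bit
alignment. Below is a minimal sketch of that kind of fix; it is an
illustration only, not Jorge's actual patch. MPLSCB() and
MPLS_RESULT_RECURSE are taken from the snippet quoted above, i.e. from the
mpls-linux module's own headers.

#include <linux/skbuff.h>   /* struct sk_buff, skb_network_header() */
#include <linux/string.h>   /* memcpy() */
#include <asm/byteorder.h>  /* ntohl() */

/*
 * Sketch only. skb_network_header() resolves to skb->head + offset when
 * the kernel stores header positions as offsets (64-bit builds) and to
 * the stored pointer otherwise, so the same code works on both word
 * sizes. memcpy() keeps the read safe even if the label stack is not
 * 32-bit aligned.
 */
int mpls_opcode_peek(struct sk_buff *skb)
{
    u32 shim;

    memcpy(&shim, skb_network_header(skb), sizeof(u32));
    shim = ntohl(shim);

    if (!(MPLSCB(skb)->flag)) {
        MPLSCB(skb)->ttl = shim & 0xFF;
        MPLSCB(skb)->flag = 1;
    }
    MPLSCB(skb)->bos   = (shim >> 8)  & 0x1;
    MPLSCB(skb)->exp   = (shim >> 9)  & 0x7;
    MPLSCB(skb)->label = (shim >> 12) & 0xFFFFF;

    return MPLS_RESULT_RECURSE;
}

On a 32-bit kernel skb_network_header() simply returns the stored pointer,
so the same source should build and behave identically on x86_32 and x86_64.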