mpls-linux-general Mailing List for MPLS for Linux (Page 5)
Status: Beta
Brought to you by:
jleu
From: Ahsan T. <ahs...@gm...> - 2010-04-25 10:48:10
|
Hi, where is the source for mpls-linux 1.63 (containing the diff files for the kernel, ebtables, etc.)? Could someone post the link? Thanks, Ahsan |
From: <jl...@mi...> - 2010-04-21 04:48:16
|
There was a patch just posted to the devel list for 2.6.31 On Fri, Feb 19, 2010 at 11:43:51AM +0100, Arnaud Delcasse wrote: > Hello, > > Is it possible to apply MPLS functionalities on kernels different from > the fedora 2.6.27 one ? > > I would like to build a 2.6.29 or 2.6.30 kernel with MPLS > functionalities. Is there any patch available that I could try to apply > or adapt to other kernels ? > > Thank you. > > Regards > > Arnaud Delcasse > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general -- James R. Leu jl...@mi... |
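Applying such a kernel patch generally follows the usual patch-and-rebuild workflow. The sketch below is a hedged outline only: the patch filename (mpls-linux-2.6.31.diff) and the source-tree path are assumptions for illustration, not the actual artifact names from the devel list.

```shell
# Hedged sketch of the usual patch-and-rebuild workflow.
# The patch filename below is an assumption, not the real
# artifact name posted to the devel list.
cd /usr/src/linux-2.6.31

# Dry-run first so rejected hunks show up before the tree is touched
patch -p1 --dry-run < ../mpls-linux-2.6.31.diff

# Apply for real, then enable the new MPLS options and rebuild
patch -p1 < ../mpls-linux-2.6.31.diff
make oldconfig          # answer the new MPLS-related prompts
make && make modules_install && make install
```

If the dry run reports rejects against a nearby kernel version (2.6.29/2.6.30), the failing hunks can often be adapted by hand before applying.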
From: Renato W. <ren...@gm...> - 2010-04-20 18:37:24
|
Right, network namespaces can act as VRFs. There's no need for patches, but you should have a kernel greater than 2.6.29 for full network namespaces support. (without the need for disabling sysfs) Luckily I already ported the MPLS core for kernel 2.6.32.11, you can grab the patches in my previous email. 2010/4/20 David Fraser <da...@ab...> > > Hi James and all, > > Apologies for the newbie question. > > I gather that there is a build of MPLS-Linux for Fedora Core 10. I also > gather that FC10 features network namespaces. (And subsequently, FC12 > features linux containers (LXC). > > Now has anyone got MPLS-Linux working with network namespaces (or LXC > containers) - or are additional patches needed (and do those patches > exist)? Or is there an architectural barrier preventing this? > > From my mail trawling - I got the impression that with older kernels > the linux-vrf functionality could be used in conjunction with MPLS-Linux > (correct me if I am wrong). However, I'd like something based on the > newer kernel-supported features if possible. > > Thanks a lot, > > David > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > |
From: Maneesh S. <son...@gm...> - 2010-04-20 15:08:58
|
Hi Adrian Popa, Here is the output of "dmesg | grep -i mpls":

root@maneesh:~# dmesg | grep -i mpls
[17179574.512000] MPLS: version 1.950
[17179574.512000] MPLS: protocol driver interface - <jl...@mi...>
[17179574.512000] MPLS DEBUG net/mpls/mpls_sysfs.c:130:mpls_sysfs_init: enter
[17179574.512000] MPLS DEBUG net/mpls/mpls_sysfs.c:139:mpls_sysfs_init: exit
[17179574.512000] Registered MPLS tunnel mpls0
[17179574.548000] MPLS: IPv4 over MPLS support
[17179574.548000] MPLS DEBUG net/mpls/mpls_ilm.c:126:mpls_ilm_dst_alloc: enter
[17179574.552000] MPLS DEBUG net/mpls/mpls_ilm.c:156:mpls_ilm_dst_alloc: exit
[17179574.720000] MPLS: Ethernet over MPLS support

But after I enabled "Network Options -> IP: MPLS support", it now works. Thanks to you and Wind Dong.

On Tue, Apr 20, 2010 at 4:46 PM, Adrian Popa <adr...@gm...> wrote:
> Maneesh, do a dmesg | grep -i mpls and see if MPLS support is loaded. You
> can have it as a module (mpls4), or maybe it's already built into the
> kernel.
[earlier quoted message and list footer snipped]

-- Thanks and Regards, Maneesh Soni. TELECOM Bretagne 2,rue de la châtaigneraie CS17607 35576 Cesson Sévigné Cedex France Tel: +33 6 46 25 88 13 |
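The check Adrian suggests can be wrapped in a small script. The sketch below simulates the dmesg output (sample lines copied from Maneesh's reply) so the detection logic is visible even without an MPLS-patched kernel; on a real box you would pipe `dmesg` itself instead of the sample variable.

```shell
# Sketch of Adrian's check. $sample stands in for real `dmesg` output
# here; the lines are copied from Maneesh's reply in this thread.
sample='[17179574.512000] MPLS: version 1.950
[17179574.548000] MPLS: IPv4 over MPLS support
[17179574.720000] MPLS: Ethernet over MPLS support'

if printf '%s\n' "$sample" | grep -qi 'mpls'; then
  echo "MPLS support loaded"
else
  echo "MPLS support missing - try modprobe mpls4 or rebuild the kernel"
fi
```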
From: Adrian P. <adr...@gm...> - 2010-04-20 14:46:51
|
Maneesh, do a dmesg | grep -i mpls and see if MPLS support is loaded. You can have it as a module (mpls4), or maybe it's already built into the kernel. On Tue, Apr 20, 2010 at 11:27 AM, Maneesh Soni <son...@gm...>wrote: > Hello > I am using ubuntu linux 2.6.31-4. > Now I want to implement MPLS in linux kernel. For that I am using linux > kernel 2.6.15.1 and mpls-linux version 1.950 package. I followed > mpls-linux-1.950 installation guide. But during the practicals, when I give > command: > > *# **mpls labelspace set dev eth2 labelspace 0 > # mpls ilm add label gen 1000 labelspace 0** > RTNETLINK answers: Cannot allocate memory > > *I also tried to do: > > *# modprobe mpls4 > FATAL: Module mpls4 not found. > * > > Please tell me what are the possible solutions to this !!!! > > > -- > Regards, > Maneesh Soni. > > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > > |
From: David F. <da...@ab...> - 2010-04-20 11:38:41
|
Hi James and all, Apologies for the newbie question. I gather that there is a build of MPLS-Linux for Fedora Core 10. I also gather that FC10 features network namespaces. (And subsequently, FC12 features linux containers (LXC). Now has anyone got MPLS-Linux working with network namespaces (or LXC containers) - or are additional patches needed (and do those patches exist)? Or is there an architectural barrier preventing this? From my mail trawling - I got the impression that with older kernels the linux-vrf functionality could be used in conjunction with MPLS-Linux (correct me if I am wrong). However, I'd like something based on the newer kernel-supported features if possible. Thanks a lot, David |
From: <win...@fr...> - 2010-04-20 09:37:32
|
Hello Maneesh, On my server mpls4 is located at "/lib/modules/2.6.15.1-100418/kernel/net/ipv4/mpls4.ko". In the kernel menuconfig this probably corresponds to "Network Options -> IP: MPLS support", which according to .config is CONFIG_IP_MPLS=y/m, but I'm not sure. Thanks, Wind Quoting Maneesh Soni <son...@gm...>: [quoted message snipped - see Maneesh's original post below] |
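Wind's uncertainty about the config symbol can be settled by grepping the build's .config directly. The sketch below uses a simulated .config fragment (the CONFIG_IP_MPLS symbol is the one Wind names; whether it applies to other mpls-linux versions is an assumption); on a real system, point CONFIG at /boot/config-$(uname -r) or your source tree's .config instead.

```shell
# Check how MPLS was configured in a built tree. The .config fragment is
# simulated; on a real system use /boot/config-$(uname -r) or
# /usr/src/linux/.config instead of the temp file.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
CONFIG_IP_MPLS=m
CONFIG_NET=y
EOF

case "$(grep '^CONFIG_IP_MPLS=' "$CONFIG" | cut -d= -f2)" in
  y) echo "MPLS built into the kernel" ;;
  m) echo "MPLS built as a module - load it with: modprobe mpls4" ;;
  *) echo "MPLS not enabled in this kernel" ;;
esac
rm -f "$CONFIG"
```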
From: Maneesh S. <son...@gm...> - 2010-04-20 08:27:29
|
Hello, I am using Ubuntu Linux 2.6.31-4. I want to implement MPLS in the Linux kernel, so I am using kernel 2.6.15.1 and the mpls-linux version 1.950 package, following the mpls-linux-1.950 installation guide. But during the practical steps, when I give these commands:

# mpls labelspace set dev eth2 labelspace 0
# mpls ilm add label gen 1000 labelspace 0
RTNETLINK answers: Cannot allocate memory

I also tried:

# modprobe mpls4
FATAL: Module mpls4 not found.

Please tell me what the possible solutions to this are. -- Regards, Maneesh Soni. |
From: Maneesh S. <son...@gm...> - 2010-04-13 09:27:09
|
Hi, I am using Ubuntu Linux 2.6.31-4. I want to implement MPLS in the Linux kernel, so I am using kernel 2.6.15.1 and the mpls-linux version 1.950 package, following the mpls-linux installation guide. After patching the kernel with the MPLS package, when I compile the kernel (using make) I get the following errors:

fs/binfmt_aout.c: Assembler messages:
fs/binfmt_aout.c:156: Error: suffix or operands invalid for `cmp'
make[1]: *** [fs/binfmt_aout.o] Error 1
make: *** [fs] Error 2

Please tell me what the possible solutions to this are. If this problem is related to the Linux distribution, I am not bound to using only Ubuntu, but I tried to find an MPLS installation guide for Fedora and didn't find any. -- Thanks and Regards, Maneesh Soni. TELECOM Bretagne 2,rue de la châtaigneraie CS17607 35576 Cesson Sévigné Cedex France Tel: +33 6 46 25 88 13 |
From: 亮 崔 <fun...@ya...> - 2010-03-29 07:21:39
|
Hello everyone, I have compiled ldp-portable/quagga-mpls.diff on Fedora 6, and the build succeeded. I can launch zebra and bgpd, but I can't launch ldpd:
==============================================
[root@localhost /]# service ldpd start
Starting ldpd: [ OK ]
[root@localhost /]# telnet 127.0.0.1 2610
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host: Connection refused
================================================
I don't know what the reason is. Thanks for your reply. e-mail: fun...@ya... |
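A first diagnostic step for a symptom like this — the init script reports OK but the vty port refuses connections — is to check whether the daemon is actually still running and listening. This is a hedged sketch using standard tools; an init script can print OK even when the daemon exits immediately afterwards.

```shell
# Sketch: after `service ldpd start`, verify the daemon survived startup
# and is listening on the ldpd vty port (2610). If either check fails,
# the daemon most likely crashed or has no vty configured.
pgrep -l ldpd \
  || echo "ldpd is not running - check /var/log/messages and the ldpd config"
netstat -lnt | grep ':2610 ' \
  || echo "nothing listening on 2610 - ldpd may have crashed at startup"
```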
From: Adrian P. <adr...@gm...> - 2010-03-07 10:59:51
|
Hi there, Just make sure you have IP connectivity between your routers and your computers, and traceroute will work both for IP and for MPLS. Just add the necessary routes on your routers so that they too can reach the PCs. It's just that the replies for traceroute will come over IP, not over MPLS - but that is not a problem. Good luck, Adrian

On Sun, Mar 7, 2010 at 4:36 AM, kuldonk doenk <ku...@ya...> wrote:
[quoted exchange snipped - see the two messages below] |
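Adrian's advice in this thread — that the transit LSR needs plain-IP return routes so its ICMP TTL-exceeded replies can reach the traceroute source — can be sketched against the addressing used in kuldonk's testbed. This is an illustration only: the addresses are taken from the configurations posted in the thread, and the exact next hops depend on the actual topology.

```shell
# Sketch only: give the transit LSR plain-IP return routes toward both
# hosts, so ICMP time-exceeded replies generated mid-path can get back
# to the traceroute source over IP. Addresses are from this thread's
# testbed (Host A in 192.168.1.0/24 behind LER1, Host B in
# 192.168.7.0/24 behind LER2).

# On the LSR:
ip route add 192.168.1.0/24 via 192.168.2.1   # back toward Host A
ip route add 192.168.7.0/24 via 192.168.4.2   # forward toward Host B

# Then, while traceroute runs on Host A, confirm the LSR actually emits
# ICMP time-exceeded (type 11):
tcpdump -ni any 'icmp[icmptype] == icmp-timxceed'
```

With those routes in place, the middle hops should answer instead of showing `* * *`.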
From: kuldonk d. <ku...@ya...> - 2010-03-07 02:36:42
|
________________________________ From: Adrian Popa <adr...@gm...> To: kuldonk doenk <ku...@ya...> Cc: mpls milis <mpl...@li...>; james leu <jl...@mi...>; james leu <jl...@us...> Sent: Fri, March 5, 2010 1:51:06 AM Subject: Re: [mpls-linux-general] can't traceroute and RTT MPLS greater than Ordinary IP Network.

> 1) for the traceroute command - did you expect to see the hops in the mpls
> network? If yes, then make sure your MPLS routers have the necessary routing
> information to return the ICMP TTL exceeded message back to the source, but
> via IP. [...] In order for traceroute to work properly, you should check that
> it works on your setup but only over IP (no mpls configuration), and also,
> you can use tcpdump on "any" interface on one router to see if it sends back
> ICMP TTL exceeded (or it can be ICMP port unreachable - depending on your
> traceroute type).

Yes, I want to see it. So, for static MPLS on Linux, can traceroute not be made to show the hops?

> 2) Your question also has your answer in it! Since you can capture traffic
> and see the label, then your network must be running MPLS.

I just wanted to prove whether my MPLS configuration really does run.

> 3) [...] You can try feeding ~100,000 routes in your routing tables and then
> you should see some speed difference...

[remainder of the quoted configuration and ping output snipped - see Adrian's full answer below] |
From: Adrian P. <adr...@gm...> - 2010-03-04 18:51:14
|
Hello, Here are the answers to your questions: 1) for the traceroute command - did you expect to see the hops in the mpls network? If yes, then make sure your MPLS routers have the necessary routing information to return the ICMP TTL exceeded message back to the source, but via IP. This happens because your "traceroute" packets are encapsulated in MPLS at the edge, their TTL is copied in the MPLS TTL, and the frame is forwarded using MPLS. When the MPLS TTL expires, the node should (it may depend on the configuration) issue an ICMP TTL expired back to the source. Since this ICMP traffic is between your router and your Host A, it will not travel through MPLS, because your router doesn't have the routing mapping necessary to encapsulate and forward your traffic. So, it will rely on IP to send the message. If your routing table(s) don't have the necessary information to route the packet, it will be dropped, and traceroute will show *. In order for traceroute to work properly, you should check that it works on your setup but only over IP (no mpls configuration), and also, you can use tcpdump on "any" interface on one router to see if it sends back ICMP TTL exceeded (or it can be ICMP port unreachable - depending on your traceroute type). 2) Your question also has your answer in it! Since you can capture traffic and see the label, then your network must be running MPLS. I don't know who you have to prove this (are the persons knowledgeable in networking?), but it's good enough evidence for me... 3) There are two aspects of the performance you are mentioning. MPLS was intended to reduce IP lookup time, but I would say that todays networks don't use MPLS because of that reason. Advances in hardware (like Cisco's CEF) have made lookup very fast and it's no longer a software process in most high end routers. Most often MPLS is used to reduce the router's need for a internet/global routing table... 
In my experiments I managed to get MPLS performance close to IP performance, by disabling the debug messages in the mpls-linux implementation. You can do that by running "echo 0 > /sys/net/mpls/debug" (or something like that - haven't done it for years). Even with debugs disabled, there is no guarantee that you will see any speedup over IP. Keep in mind that your setup is small and your routing tables are small - so IP lookup time (even done in software) is comparable with MPLS switching. You can try feeding ~100.000 routes in your routing tables and then you should see some speed difference... Hope my answers helped, Regards, Adrian On Thu, Mar 4, 2010 at 6:01 PM, kuldonk doenk <ku...@ya...> wrote: > hi james and all .. > I tried build basic testbed MPLS with make a label and routing protocolmanually(static). > I use fedora 8 MPLS 1962. > below is a picture and configuration : > http://wenkul.files.wordpress.com/2010/03/lsp-1.jpg > > Host A (ubuntu) : > > Ifconfig eth0 192.168.1.1 netmask 255.255.255.0 up > > ip route add default via 192.168.1.2 > > > LER 1 : > > echo "1" > /proc/sys/net/mpls/debug > > Modprobe mpls4 > > mpls nhlfe add key 0 instructions push gen 100 nexthop eth1 ipv4 > 192.168.2.2 > > ip route add 192.168.7.0/24 via 192.168.2.2 mpls 0x2 > > mpls nhlfe add key 0 instructions nexthop eth0 ipv4 192.168.1.1 > > mpls labelspace set dev eth1 labelspace 0 > > mpls ilm add label gen 200 labelspace 0 > > mpls xc add ilm_label gen 200 ilm_labelspace 0 nhlfe_key 0x3 > > > LSR : > > echo "1" > /proc/sys/net/mpls/debug > modprobe mpls4 > mpls labelspace set dev eth0 labelspace 0 > mpls ilm add label gen 100 labelspace 0 > mpls nhlfe add key 0 instructions push gen 700 nexthop eth2 ipv4 > 192.168.4.2 > mpls xc add ilm_label gen 100 ilm_labelspace 0 nhlfe_key 0x2 > mpls labelspace set dev eth2 labelspace 0 > mpls ilm add label gen 800 labelspace 0 > mpls nhlfe add key 0 instructions push gen 200 nexthop eth0 ipv4 > 192.168.2.1 > mpls xc add ilm_label gen 800 
ilm_labelspace 0 nhlfe_key 0x3 > > > LER 2 : > > echo "1" > /proc/sys/net/mpls/debug > modprobe mpls4 > mpls labelspace set dev eth1 labelspace 0 > mpls ilm add label gen 700 labelspace 0 > mpls nhlfe add key 0 instructions nexthop eth0 ipv4 192.168.7.2 > mpls xc add ilm_label gen 700 ilm_labelspace 0 nhlfe_key 0x2 > mpls nhlfe add key 0 instructions push gen 800 nexthop eth1 ipv4 > 192.168.4.1 > ip route add 192.168.1.0/24 via 192.168.4.1 mpls 0x3 > > Host B (windows) : > > IPADDRESS 192.168.7.2 > > NETMASK 255.255.255.0 > > GATEWAY 192.168.7.1 > > > 1. First Question > > for Both of Host, i using default gateway. > > then I do ping from host A to host B. Such as this results : > > host A: > root@mpls5-desktop:~# ping 192.168.7.2 > PING 192.168.7.2 (192.168.7.2) 56(84) bytes of data. > 64 bytes from 192.168.7.2: icmp_seq=1 ttl=125 time=1.81 ms > 64 bytes from 192.168.7.2: icmp_seq=2 ttl=125 time=1.79 ms > 64 bytes from 192.168.7.2: icmp_seq=3 ttl=125 time=1.78 ms > > but when i do "traceoute" the result doesn't match what I expected. > > Such as this results : > > host A: > > root@mpls5-desktop:~# traceroute > 192.168.7.2 > > traceroute to 192.168.7.2 (192.168.7.2), 30 hops max, 60 byte packets > 1 mpls1.local (192.168.1.2) 0.096 ms 0.059 ms 0.036 ms > 2 * * * > 3 * * * > 4 192.168.7.2 (192.168.7.2) 0.263 ms 0.245 ms 0.260 ms > > > My first question is "what is the problem and what should I do"? > > > 2. Second Question. how to prove or to know whether the MPLS network has > been able to work on a network?? > > because when i using ethereal or tcpdump in LSR, i got the package MPLS, such > as the following picture : > > http://wenkul.files.wordpress.com/2010/01/screenshot-ethereal.png >> > (capture using ethereal) > > and > > http://wenkul.files.wordpress.com/2010/03/eth1-ping.png >> (capture with > tcpdump) > > > 3. 
Third Question > I also tried to compare the Round Trip Time (RTT) from ordinary IP > networks with MPLS networks by using the "ping" command as shown above. > > when RTT ordinary IP networks : > host A >> > root@mpls5-desktop:~# ping 192.168.7.2 -c 30 > PING 192.168.7.2 (192.168.7.2) 56(84) bytes of data. > 64 bytes from 192.168.7.2: icmp_seq=1 ttl=125 time=0.344 ms > 64 bytes from 192.168.7.2: icmp_seq=2 ttl=125 time=0.308 ms > 64 bytes from 192.168.7.2: icmp_seq=3 ttl=125 time=0.321 ms > > When RTT of MPLS network: > host A: > root@mpls5-desktop:~# ping 192.168.7.2 -c 30 > PING 192.168.7.2 (192.168.7.2) 56(84) bytes of data. > 64 bytes from 192.168.7.2: icmp_seq=1 ttl=125 time=1.81 ms > 64 bytes from 192.168.7.2: icmp_seq=2 ttl=125 time=1.79 ms > 64 bytes from 192.168.7.2: icmp_seq=3 ttl=125 time=1.78 ms > > it should theoretically MPLS network RTT is smaller than the normal IP > networks. MPLS goal is to streamline the process of analyzing the data > packets in an MPLS router making faster than normal IP networks. but Why > the value of common IP network ping faster than MPLS network ???? > > I want to ask for help to all , because I need a short time to solve this > problem. please help me. > > > > > > > > ------------------------------------------------------------------------------ > Download Intel® Parallel Studio Eval > Try the new software tools for yourself. Speed compiling, find bugs > proactively, and fine-tune applications for parallel performance. > See why Intel Parallel Studio got high marks during beta. > http://p.sf.net/sfu/intel-sw-dev > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > > |
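The ~100,000-route experiment Adrian suggests above can be driven by a small generator loop. A sketch (the 10.0.0.0 prefix block and the nexthop are arbitrary choices, and the loop is scaled down to 1000 commands here):

```shell
# Print "ip route add" commands for a block of /32 routes; widen the
# loops toward ~100,000 and pipe into sh (as root, on the router) to
# actually load them. Prefixes and nexthop are arbitrary for the sketch.
gen_routes() {
  for a in $(seq 0 99); do
    for b in $(seq 0 9); do
      echo "ip route add 10.0.$a.$b/32 via 192.168.2.2"
    done
  done
}
gen_routes | wc -l
```

With tables that size, the software IP lookup cost becomes measurable and the comparison with MPLS label switching is fairer.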
From: kuldonk d. <ku...@ya...> - 2010-03-04 16:01:32
|
hi james and all ..

I tried to build a basic MPLS testbed, creating the labels and routes manually (static). I use Fedora 8 with MPLS 1962. Below is a picture and the configuration:

http://wenkul.files.wordpress.com/2010/03/lsp-1.jpg

Host A (ubuntu):

ifconfig eth0 192.168.1.1 netmask 255.255.255.0 up
ip route add default via 192.168.1.2

LER 1:

echo "1" > /proc/sys/net/mpls/debug
modprobe mpls4
mpls nhlfe add key 0 instructions push gen 100 nexthop eth1 ipv4 192.168.2.2
ip route add 192.168.7.0/24 via 192.168.2.2 mpls 0x2
mpls nhlfe add key 0 instructions nexthop eth0 ipv4 192.168.1.1
mpls labelspace set dev eth1 labelspace 0
mpls ilm add label gen 200 labelspace 0
mpls xc add ilm_label gen 200 ilm_labelspace 0 nhlfe_key 0x3

LSR:

echo "1" > /proc/sys/net/mpls/debug
modprobe mpls4
mpls labelspace set dev eth0 labelspace 0
mpls ilm add label gen 100 labelspace 0
mpls nhlfe add key 0 instructions push gen 700 nexthop eth2 ipv4 192.168.4.2
mpls xc add ilm_label gen 100 ilm_labelspace 0 nhlfe_key 0x2
mpls labelspace set dev eth2 labelspace 0
mpls ilm add label gen 800 labelspace 0
mpls nhlfe add key 0 instructions push gen 200 nexthop eth0 ipv4 192.168.2.1
mpls xc add ilm_label gen 800 ilm_labelspace 0 nhlfe_key 0x3

LER 2:

echo "1" > /proc/sys/net/mpls/debug
modprobe mpls4
mpls labelspace set dev eth1 labelspace 0
mpls ilm add label gen 700 labelspace 0
mpls nhlfe add key 0 instructions nexthop eth0 ipv4 192.168.7.2
mpls xc add ilm_label gen 700 ilm_labelspace 0 nhlfe_key 0x2
mpls nhlfe add key 0 instructions push gen 800 nexthop eth1 ipv4 192.168.4.1
ip route add 192.168.1.0/24 via 192.168.4.1 mpls 0x3

Host B (windows):

IPADDRESS 192.168.7.2
NETMASK 255.255.255.0
GATEWAY 192.168.7.1

1. First Question

Both hosts use their default gateway. Then I ping from Host A to Host B, with these results:

host A:
root@mpls5-desktop:~# ping 192.168.7.2
PING 192.168.7.2 (192.168.7.2) 56(84) bytes of data.
64 bytes from 192.168.7.2: icmp_seq=1 ttl=125 time=1.81 ms
64 bytes from 192.168.7.2: icmp_seq=2 ttl=125 time=1.79 ms
64 bytes from 192.168.7.2: icmp_seq=3 ttl=125 time=1.78 ms

But when I run "traceroute", the result doesn't match what I expected:

host A:
root@mpls5-desktop:~# traceroute 192.168.7.2
traceroute to 192.168.7.2 (192.168.7.2), 30 hops max, 60 byte packets
 1 mpls1.local (192.168.1.2) 0.096 ms 0.059 ms 0.036 ms
 2 * * *
 3 * * *
 4 192.168.7.2 (192.168.7.2) 0.263 ms 0.245 ms 0.260 ms

My first question is: what is the problem and what should I do?

2. Second Question

How can I prove or verify that MPLS is actually working on the network? When I use Ethereal or tcpdump on the LSR, I do capture MPLS packets, as in the following pictures:

http://wenkul.files.wordpress.com/2010/01/screenshot-ethereal.png (capture using Ethereal)

http://wenkul.files.wordpress.com/2010/03/eth1-ping.png (capture with tcpdump)

3. Third Question

I also tried to compare the round-trip time (RTT) of an ordinary IP network with the MPLS network, using the "ping" command as shown above.

RTT on the ordinary IP network:

host A:
root@mpls5-desktop:~# ping 192.168.7.2 -c 30
PING 192.168.7.2 (192.168.7.2) 56(84) bytes of data.
64 bytes from 192.168.7.2: icmp_seq=1 ttl=125 time=0.344 ms
64 bytes from 192.168.7.2: icmp_seq=2 ttl=125 time=0.308 ms
64 bytes from 192.168.7.2: icmp_seq=3 ttl=125 time=0.321 ms

RTT on the MPLS network:

host A:
root@mpls5-desktop:~# ping 192.168.7.2 -c 30
PING 192.168.7.2 (192.168.7.2) 56(84) bytes of data.
64 bytes from 192.168.7.2: icmp_seq=1 ttl=125 time=1.81 ms
64 bytes from 192.168.7.2: icmp_seq=2 ttl=125 time=1.79 ms
64 bytes from 192.168.7.2: icmp_seq=3 ttl=125 time=1.78 ms

Theoretically the MPLS network's RTT should be smaller than the ordinary IP network's: the goal of MPLS is to streamline packet processing in the router, making it faster than plain IP forwarding. So why is the ping of the ordinary IP network faster than the MPLS network?

I'm asking everyone for help, because I have little time to solve this problem. Please help me. |
From: Arnaud D. <ade...@ci...> - 2010-02-19 11:00:44
|
Hello,

Is it possible to apply the MPLS functionality to kernels other than the Fedora 2.6.27 one? I would like to build a 2.6.29 or 2.6.30 kernel with MPLS functionality. Is there any patch available that I could try to apply or adapt to other kernels?

Thank you.

Regards,
Arnaud Delcasse |
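I'm not aware of a ready-made patch for 2.6.29/2.6.30; the usual route is to take the MPLS diff out of the patched kernel tree and forward-port it, previewing with "patch --dry-run" and fixing any *.rej hunks by hand. A toy illustration of that dry-run workflow (synthetic files stand in for the kernel sources):

```shell
# Toy forward-port workflow: build a diff, then preview it against a
# pristine tree before applying. With a real kernel diff, failed hunks
# land in *.rej files that must be ported by hand.
mkdir -p /tmp/port/old /tmp/port/new /tmp/port/target
cd /tmp/port
printf 'a\nb\nc\n' > old/f.c
printf 'a\nB\nc\n' > new/f.c
diff -u old/f.c new/f.c > mpls.diff || true  # diff exits 1 when files differ
printf 'a\nb\nc\n' > target/f.c
cd target
patch -p1 --dry-run < ../mpls.diff && echo "applies cleanly"
```

Only once the dry run is clean (or the rejects are understood) would you apply for real, without --dry-run.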
From: James L. <jl...@mi...> - 2010-01-27 05:12:44
|
On Tue, Jan 26, 2010 at 03:01:16PM -0600, Max Veprinsky wrote:
> Hello list,
>
> I noticed that the mpls-quagga git repo has been updated to the latest
> quagga release. I'm not seeing the mpls ldp related patches, I may be
> just blind. Can someone confirm please

The default branch is master, which tracks quagga-git. You will want to check out mpls-master.

> Thanks
>
> --
> Max Veprinsky

--
James R. Leu
jl...@mi... |
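James's tip in shell form; a toy local repo stands in for the real quagga-mpls clone (in the real repo the mpls-master branch already exists, so only the final two commands are needed):

```shell
# Demonstrate switching from the default branch to the one carrying the
# MPLS/LDP work. The throwaway repo here is an assumption for the demo.
cd /tmp && rm -rf branch-demo
git init -q branch-demo && cd branch-demo
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m init
git branch mpls-master           # already present in the real repo
git checkout -q mpls-master
git rev-parse --abbrev-ref HEAD  # confirm which branch you will build
```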
From: Max V. <mve...@xc...> - 2010-01-26 21:02:32
|
Hello list,

I noticed that the mpls-quagga git repo has been updated to the latest quagga release. I'm not seeing the mpls ldp related patches - I may be just blind. Can someone confirm, please?

Thanks

--
Max Veprinsky |
From: Max V. <mve...@xc...> - 2010-01-21 20:27:43
|
Hello List -- Max Veprinsky |
From: Adrian P. <adr...@gm...> - 2009-12-21 07:54:09
|
Hello,

I'm not sure if I remember correctly, but 3 or 4 years ago when I tested ldpd I had similar problems (when issuing "mpls ip", ldpd would die). You should do a packet capture and see if ldp sends a notification before dying - there might be a mismatch in parameters, or there could be a problem with ldpd itself. I remember I couldn't use it for anything - meaning I didn't learn/distribute labels, or it wasn't stable. Looking back over the archives, here's what we managed to do (remember, these are mails from 2006!):

I've managed to get two ldp peers to communicate with each other (and avoid that nasty hang after handshaking) using this setup:

LER1                                                  LER2
eth1-------------------------------------------------eth0
172.16.0.2                                      172.16.0.3

On LER1, in ldpd I issued these commands:

uml-1# conf t
uml-1(config)# mpls ldp
uml-1(config-ldp)# transport-address 172.16.0.2
uml-1(config-ldp)# exit
uml-1(config)# int eth1
uml-1(config-if)# mpls ip
uml-1(config-if-ldp)#
uml-1(config-if-ldp)# exit
uml-1(config-if)# exit
uml-1(config)# exit
uml-1# sh ldp
LSR-ID: 7f000001  Admin State: ENABLED
Transport Address: ac100002
Control Mode: ORDERED  Repair Mode: GLOBAL
Propogate Release: TRUE  Label Merge: TRUE
Retention Mode: LIBERAL  Loop Detection Mode: NONE
TTL-less-domain: FALSE
Local TCP Port: 646  Local UDP Port: 646
Keep-alive Time: 45  Keep-alive Interval: 15
Hello Time: 15  Hello Interval: 5

------------------------------------------------------------------------------------------

On LER2, in ldpd I issued these commands:

uml-1# conf t
uml-1(config)# mpls ldp
uml-1(config-ldp)# transport-address 172.16.0.3
uml-1(config-ldp)# exit
uml-1(config)# int eth0
uml-1(config-if)# mpls ip
uml-1(config-if-ldp)#
uml-1(config-if-ldp)# exit
uml-1(config-if)# exit
uml-1(config)# exit
uml-1# sh ldp
LSR-ID: 7f000001  Admin State: ENABLED
Transport Address: ac100003
Control Mode: ORDERED  Repair Mode: GLOBAL
Propogate Release: TRUE  Label Merge: TRUE
Retention Mode: LIBERAL  Loop Detection Mode: NONE
TTL-less-domain: FALSE
Local TCP Port: 646  Local UDP Port: 646
Keep-alive Time: 45  Keep-alive Interval: 15
Hello Time: 15  Hello Interval: 5

------------------------------------------------------------------------------------------

I thought of setting an explicit transport address because I found out that after the discovery of the potential LDP neighbours, LDP session establishment can begin. The active and passive roles are determined using the transport address. If I didn't set a transport address, the default one would be 0x00000002 for both entities. I guess this is the place where one peer would freeze - because it expected a different transport address. In this case, the one with the greater transport address takes the active role in establishing the session.

After this handshake, I get these periodic messages from ldpd:

On LER1:
OUT: ldp_session_attempt_setup: MPLS_NON_BLOCKING
addr delete: 0x82df450
addr delete: 0x82fd8e0
addr delete: 0x8306aa0
session delete

On LER2:
OUT: Receive an unknown notification: 00000000
addr delete: 0x80a3c08
addr delete: 0x80a3ce0
session delete

If I issue a "show ldp neighbour" on LER2 I get this:

uml-1# sh ldp neighbor
Peer LDP Ident: 127.0.0.1:0; Local LDP Ident: 127.0.0.1:0
  TCP connection: n/a
  State: discovery; Msgs sent/recv: -/-; Up time: -
  LDP discovery sources:
    eth0

but I don't think a complete session is established. When I issue "sh ldp session" this is the output:

uml-1# sh ldp session
no established sessions

This is probably because I didn't set some commands, but I don't know which commands are necessary. Another question would be how do I set a fec (to be advertised to the other LSRs)?

... and later on:

Ok. I've managed to go one step ahead. Following the Ethereal packet stream, I noticed that one peer was issuing an LDP Notification prior to its disconnection. The notification had status data with the value 0x11 (which according to the LDP RFC means 'Session rejected - Parameters advertisement mode').
So, I changed the distribution mode on both peers from DoD to DU and now it goes further, but only for a little while. For a brief second I can get a session, with fecs and neighbour information like this:

uml-1# sh ldp neigh
Peer LDP Ident: 172.16.0.3:0; Local LDP Ident: 172.16.0.2:0
  TCP connection: 0.0.0.0.646 - 172.16.0.3.44012
  State: OPERATIONAL; Msgs sent/recv: 5/17; UNSOLICITED
  Up time: 00:00:01
  LDP discovery sources:
    eth1
  Addresses bound to peer:
    172.16.0.2 172.16.0.3 172.16.1.3 10.1.1.1 141.85.43.122

The thing is that 2 seconds later, both ldpd's (on both peers) crash (output is: Aborted.). Ethereal captures all sorts of LDP packets indicating that the session was establish(ed)(ing), because I saw that fecs were exchanged, together with proposed labels. After all the handshaking is done, one of the peers sends a TCP FIN, and then (after the other peer replies with another FIN) they both die. If you want, I have a packet dump captured in Ethereal and I will attach it tomorrow. Tomorrow I want to try out the new version of LDP, because today I had other things to try out. I guess that even if I used the older version, in the end I learned a few things about LDP and the initialization process. :)

If you have any ideas, I'd like to hear them. This is all I've got. Hope it helps!

Regards,
Adrian

On Sun, Dec 20, 2009 at 5:22 PM, Bojana Petrovic <bo...@gm...> wrote:
> Hi everybody,
>
> Due to compilation issues with quagga-mpls on Fedora 10, I have decided to
> try with only quagga-0.99.6-01.fc5.mpls.1.956b.i386.rpm package I could
> find at the moment on the internet. This quagga-mpls.rpm I set on Fedora 5
> environment using mpls packages found in the repo.
> I got LDP daemon running, but when tried to configure LDP like this:
>
> vtysh
> conf t
> mpls ldp
> exit
> int eth1
> mpls ip
> end
>
> I got: *Warning: closing connection to ldpd because of an I/O error, and
> ldpd daemon is closed*.
> If I try to make ldpd.conf file with a line including "mpls ip" similar to this:
> !
> interface eth0
>  mpls ip
> !
> and then to start ldpd daemon, I get this error:
> *Error occured during reading below line
> mpls ip*
>
> Additionally if I configure interface in zebra.conf like this:
> !
> interface eth1
>  mpls labelspace 0
>  ip address 100.0.0.1/24
> !
> After starting zebra, these lines are converting into this:
> !
> interface eth1
>  mpls labelspace 0
>  ip address 100.0.0.1/24
>  ipv6 nd suppress-ra
> !
>
> Does anybody know whether this environment can work, and is it possible
> to solve these errors?
>
> Best regards,
> Bojana
|
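The transport-address tie-break Adrian describes (the peer with the numerically greater address takes the active role when opening the LDP session) can be sketched as:

```shell
# Compare two LDP transport addresses as 32-bit integers to decide which
# peer plays the active role (addresses from Adrian's setup above).
addr_to_int() {
  # dotted-quad -> integer, POSIX sh field splitting on '.'
  oldifs=$IFS; IFS=.
  set -- $1
  IFS=$oldifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
A=$(addr_to_int 172.16.0.2)
B=$(addr_to_int 172.16.0.3)
if [ "$A" -gt "$B" ]; then
  echo "172.16.0.2 takes the active role"
else
  echo "172.16.0.3 takes the active role"
fi
```

This matches the behaviour Adrian saw: with identical default transport addresses neither side can decide who connects, and the session setup freezes.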
From: Bojana P. <bo...@gm...> - 2009-12-20 15:22:15
|
Hi everybody,

Due to compilation issues with quagga-mpls on Fedora 10, I have decided to try with only the quagga-0.99.6-01.fc5.mpls.1.956b.i386.rpm package I could find at the moment on the internet. I set this quagga-mpls rpm up in a Fedora 5 environment using the mpls packages found in the repo. I got the LDP daemon running, but when I tried to configure LDP like this:

vtysh
conf t
mpls ldp
exit
int eth1
mpls ip
end

I got: *Warning: closing connection to ldpd because of an I/O error*, and the ldpd daemon is closed.

If I try to make an ldpd.conf file with a line including "mpls ip", similar to this:

!
interface eth0
 mpls ip
!

and then start the ldpd daemon, I get this error:

*Error occured during reading below line
mpls ip*

Additionally, if I configure an interface in zebra.conf like this:

!
interface eth1
 mpls labelspace 0
 ip address 100.0.0.1/24
!

after starting zebra, these lines are converted into this:

!
interface eth1
 mpls labelspace 0
 ip address 100.0.0.1/24
 ipv6 nd suppress-ra
!

Does anybody know whether this environment can work, and is it possible to solve these errors?

Best regards,
Bojana |
From: 亮 崔 <fun...@ya...> - 2009-12-19 07:12:26
|
hello:

I already downloaded mpls-linux (mpls-linux-1.950.tar.bz2) and installed it on Fedora 4. Now I have downloaded ldp (ldp-0.1.3.tar.gz) and want to install it on Fedora 4 as well, but I don't know whether mpls and ldp can be integrated, or how to make them work together. Thanks for your reply.

e-mail: fun...@ya... |
From: Bojana P. <bo...@gm...> - 2009-12-12 22:15:30
|
Hi everybody,

I'm trying very hard these weeks to get mpls-quagga and LDP working on Fedora 8 (mpls kernel 2.6.26.6-49.fc8.mpls.1.962), and I'm facing the following problems. I read the old posts carefully and concluded that the recent problems are connected with Fedora 10, so I figured a solution for F8 might work. Following this list I tried these approaches:

1) Downloaded from http://repo.or.cz/
mpls-ldp-portable-60f294f3e2263031be4e7de64e57dd4b721a72d7.tar.gz
mpls-quagga-c1ff1abca7f16db0fbbb64ef07cbfefa44e03cb4.tar.gz
and compiled according to the instructions:
- change the variable DEFSRC in the ldpd/create-links file to the ldp-portable directory and execute the "create-links" script
- chmod 777 bootstrap.sh
- ./bootstrap.sh
- ./configure
- make
- make install

I didn't notice any errors, but I cannot access the LDP daemon. I can run all the other daemons (zebra, ospfd...). But I can not use ldpd??

2) I tried to use the *make-rpm-jleu* batch file, but I hit the following error:

*install: cannot stat `/usr/src/redhat/BUILD/quagga-0.99.11/redhat/ldpd.init': No such file or directory*

Is it possible to make the RPM this way? Is there something that should be done before running this script?

3) I tried to apply quagga-mpls.diff to the quagga source distribution (0.99.6) with a simple *patch -p1 < quagga-mpls.diff*. I found that some files cannot be successfully patched, and I got errors similar to this one:

*11 out of 42 hunks FAILED -- saving rejects to file zebra/zebra_rib.c.rej
can't find file to patch at input line 76700
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|diff -uNr --exclude=.p4config --exclude=make-rpm-jleu --exclude=update-from-kernel.sh quagga/zebra/zebra_routemap.c quagga-mpls/zebra/zebra_routemap.c
|--- quagga/zebra/zebra_routemap.c 2007-06-14 05:02:14.000000000 -0500
|+++ quagga-mpls/zebra/zebra_routemap.c 2008-02-19 22:55:08.000000000 -0600
--------------------------
File to patch:*

Since none of these attempts were successful, can anyone please point out what I'm doing wrong, or offer an alternative solution? I'm ready to accept any combination - Fedora distribution 5/8/10, mpls kernel, and quagga version - just to make this work. I can see that many people are facing similar problems, and I wonder whether anybody has a combination running successfully.

I would appreciate any advice.

Best Regards,
Bojana Petrovic |
From: Leucio R. <leu...@gm...> - 2009-12-03 11:38:32
|
hi list,

I'm at the same point as Wind. I've tried to modify the impl_lock.c file (and so on with the next compile error!), redefining the mpls_malloc and mpls_free calls:

mpls_malloc(MTYPE_LDP, sizeof(int)); instead of mpls_malloc(sizeof(int));

and

mpls_free(MTYPE_LDP, handle); instead of mpls_free(handle);

according to the prototypes of the XMALLOC and XFREE functions in impl_mm.c. It seems to be a good idea, but in the next step I have a problem with a zmalloc function. Has anyone found a patch for this problem?

Thanks a lot

2009/11/16 <win...@fr...> > Hi James, > > How long will it take to repair mpls-quagga tree. Or is it possible > that we try to test first some old but stable one? > > Thanks, > -Wind > > Quoting James Leu <jl...@mi...>: > > > Wind, > > > > The quagga-mpls tree is in a bit of flux. I will try to repair it and > > email the list when it is ready to be built. > > > > On Sun, Nov 08, 2009 at 04:43:41PM +0800, win...@fr... wrote: > >> Hello list, > >> > >> I have been successfully install mpls-kernel, mpls-ebtables, > >> mpls-iptables and mpls-iproute2 via source with git clone from > >> git://repo.or.cz according instruction from mpls-linux wiki homepage. > >> For I am using debian/lenny so I could not use RPMs. > >> When I try to compile mpls-quagga with mpls-ldp-portable, I meet problem. > >> Following is what I do compiling, > >> First I change DEFSRC in create-links according to my path of > >> mpls-ldp-portable then "sh create-links", then I use "sh bootstrap.sh" > >> to generate configure file. > >> Then I use ./configure --enable-mpls --enable-ldpd --enable-rsvpd to > >> configure to make sure including mpls & ldpd with quagga.
After make, > >> I notice some error, so I use make check to see, and find the below > >> error: > >> impl_lock.c:8:35: error: macro "mpls_malloc" requires 2 arguments, but > >> only 1 given > >> impl_lock.c: In function ‘mpls_lock_create’: > >> impl_lock.c:8: error: ‘mpls_malloc’ undeclared (first use in this > function) > >> impl_lock.c:8: error: (Each undeclared identifier is reported only once > >> impl_lock.c:8: error: for each function it appears in.) > >> impl_lock.c:28:19: error: macro "mpls_free" requires 2 arguments, but > >> only 1 given > >> impl_lock.c: In function ‘mpls_lock_delete’: > >> impl_lock.c:28: error: ‘mpls_free’ undeclared (first use in this > function) > >> make: *** [impl_lock.o] Error 1 > >> It seems mpls_malloc need 2 argument, but only 1 given. > >> I wonder if the sources from git is the suitable one I should use? > >> Could anyone help me? > >> > >> Thanks, > >> -Wind > >> > >> > >> > ------------------------------------------------------------------------------ > >> Let Crystal Reports handle the reporting - Free Crystal Reports 2008 > 30-Day > >> trial. Simplify your report design, integration and deployment - > >> and focus on > >> what you do best, core application coding. Discover what's new with > >> Crystal Reports now. http://p.sf.net/sfu/bobj-july > >> _______________________________________________ > >> mpls-linux-general mailing list > >> mpl...@li... > >> https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > > > > -- > > James R. Leu > > jl...@mi... > > > > > > ------------------------------------------------------------------------------ > Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day > trial. Simplify your report design, integration and deployment - and focus > on > what you do best, core application coding. Discover what's new with > Crystal Reports now. 
http://p.sf.net/sfu/bobj-july > _______________________________________________ > mpls-linux-general mailing list > mpl...@li... > https://lists.sourceforge.net/lists/listinfo/mpls-linux-general > -- Ricci Leuciantonio |
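Leucio's signature change can be applied mechanically across the call sites; a sketch (MTYPE_LDP follows the quagga XMALLOC/XFREE convention he cites; the sed patterns are naive and would also touch the macro definitions themselves, so review the result):

```shell
# Rewrite old one-argument mpls_malloc/mpls_free calls into the
# two-argument form. The demo file stands in for impl_lock.c.
cat > /tmp/impl_lock_demo.c <<'EOF'
lock = mpls_malloc(sizeof(int));
mpls_free(handle);
EOF
sed -i -e 's/mpls_malloc(/mpls_malloc(MTYPE_LDP, /' \
       -e 's/mpls_free(/mpls_free(MTYPE_LDP, /' /tmp/impl_lock_demo.c
cat /tmp/impl_lock_demo.c
```

As Leucio notes, this only moves the failure to the next spot (zmalloc), so a proper patch to the tree is still the real fix.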
From: kuldonk d. <ku...@ya...> - 2009-11-24 10:19:10
|
Hi..

I tried to build a simple MPLS network, just like "MPLS for Linux: IPv4 over MPLS: two LER one LSR example for mpls-linux-1.95x", and I got the source rpms from here:

- kernel-2.6.26.6-49.fc8.mpls.1.962.i686
- iproute-2.6.26-2.fc8.mpls.1.962.i386
- ebtables-2.0.8-3.fc8.mpls.1.962.i386
- iptables-1.4.1.1-2.fc8.mpls.1.962.i386
- kernel-devel-2.6.26.6-49.fc8.mpls.1.962.i686
- kernel-headers-2.6.26.6-49.fc8.mpls.1.962.i386
- iptables-ipv6-1.4.1.1-2.fc8.mpls.1.962.i386
- iptables-devel-1.4.1.1-2.fc8.mpls.1.962.i386

Note: I removed the old iproute and the old iptables. ip_forward is set to "1".

The network design is like this (addresses are 192.168.x.y):

   eth0  eth1      eth1  eth1      eth2  eth0
[host A]------[LER1]--------[LER2]------[host B]
   2.1   2.2       3.1  3.2        6.1   6.2

My configuration is:

LER1
echo "1" > /proc/sys/net/mpls/debug
modprobe mpls4
mpls nhlfe add key 0 instructions push gen 1000 nexthop eth1 ipv4 192.168.3.2
mpls labelspace set dev eth1 labelspace 0
mpls ilm add label gen 2000 labelspace 0
mpls nhlfe add key 0 instructions nexthop eth0 ipv4 192.168.2.1
mpls xc add ilm_label gen 2000 ilm_labelspace 0 nhlfe_key 0x3
ip route add 192.168.6.0/24 via 192.168.3.2 mpls 0x2

LER2
echo "1" > /proc/sys/net/mpls/debug
modprobe mpls4
mpls nhlfe add key 0 instructions push gen 2000 nexthop eth0 ipv4 192.168.3.21
mpls labelspace set dev eth1 labelspace 0
mpls ilm add label gen 1000 labelspace 0
mpls nhlfe add key 0 instructions nexthop eth0 ipv4 192.168.6.2
mpls xc add ilm_label gen 1000 ilm_labelspace 0 nhlfe_key 0x3
ip route add 192.168.2.0/24 via 192.168.3.1 mpls 0x2

The problem is: I can't ping Host A from LER2, while pinging Host B from LER1 is okay. The result is:

[root@localhost ~]# ping 192.168.2.1

This is a capture with tcpdump on LER1, while LER1 pings Host B and LER2 pings Host A:

[root@localhost ~]# tcpdump -i eth1 -c 30
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
17:31:43.995713 MPLS (label 1000, exp 0, [S], ttl 64), IP, length: 88
17:31:43.996063 IP 192.168.6.2 > 192.168.3.1: ICMP echo reply, id 43275, seq 111, length 64
17:31:44.200161 MPLS (label 2000, exp 0, [S], ttl 64), IP, length: 88
17:31:44.995725 MPLS (label 1000, exp 0, [S], ttl 64), IP, length: 88
17:31:44.996229 IP 192.168.6.2 > 192.168.3.1: ICMP echo reply, id 43275, seq 112, length 64
17:31:45.200138 MPLS (label 2000, exp 0, [S], ttl 64), IP, length: 88
17:31:45.995760 MPLS (label 1000, exp 0, [S], ttl 64), IP, length: 88

Note: the settings on every router are the same; iptables, ip6tables, ebtables and the firewall are disabled. |
From: <win...@fr...> - 2009-11-16 02:28:39
|
Hi James,

How long will it take to repair the mpls-quagga tree? Or is it possible that we try to test an old but stable one first?

Thanks,
-Wind

Quoting James Leu <jl...@mi...>:

> Wind,
>
> The quagga-mpls tree is in a bit of flux. I will try to repair it and
> email the list when it is ready to be built.
>
> On Sun, Nov 08, 2009 at 04:43:41PM +0800, win...@fr... wrote:
>> Hello list,
>>
>> I have successfully installed mpls-kernel, mpls-ebtables,
>> mpls-iptables and mpls-iproute2 from source with git clone from
>> git://repo.or.cz according to the instructions on the mpls-linux wiki homepage.
>> Since I am using debian/lenny, I could not use the RPMs.
>> When I try to compile mpls-quagga with mpls-ldp-portable, I run into a problem.
>> Here is how I compile:
>> First I change DEFSRC in create-links according to my path to
>> mpls-ldp-portable, then "sh create-links", then I use "sh bootstrap.sh"
>> to generate the configure file.
>> Then I use ./configure --enable-mpls --enable-ldpd --enable-rsvpd
>> to make sure mpls & ldpd are included with quagga.
>> After make I noticed some errors, so I used make check, and found the
>> error below:
>> impl_lock.c:8:35: error: macro "mpls_malloc" requires 2 arguments, but only 1 given
>> impl_lock.c: In function ‘mpls_lock_create’:
>> impl_lock.c:8: error: ‘mpls_malloc’ undeclared (first use in this function)
>> impl_lock.c:8: error: (Each undeclared identifier is reported only once
>> impl_lock.c:8: error: for each function it appears in.)
>> impl_lock.c:28:19: error: macro "mpls_free" requires 2 arguments, but only 1 given
>> impl_lock.c: In function ‘mpls_lock_delete’:
>> impl_lock.c:28: error: ‘mpls_free’ undeclared (first use in this function)
>> make: *** [impl_lock.o] Error 1
>> It seems mpls_malloc needs 2 arguments, but only 1 is given.
>> I wonder if the sources from git are the suitable ones I should use?
>> Could anyone help me?
>>
>> Thanks,
>> -Wind
>
> --
> James R. Leu
> jl...@mi...
|