Re: [mpls-linux-general] MPLS Throughput
From: Renato W. <ren...@gm...> - 2011-08-16 21:18:55
Hello,

James wrote a text about MPLS vs IPv4 performance a while ago (2007?). Please take a look below.

"Why is MPLS Linux slower at forwarding packets than Linux's IPv4 stack? There are some misconceptions out there regarding the speed of MPLS vs IPv4 packet processing...

Back in the mid '90s the state of the art in edge and core routing technology was processor-based packet forwarding. At the same time, the requirements for per-packet forwarding decisions were getting more complicated. Edge and core routers were being asked to consider source and destination addresses, incoming and outgoing interfaces, as well as TCP/UDP port numbers. This forced router vendors to switch to some sort of "flow" or "hash" based lookup to determine the forwarding treatment (next hop and/or queuing). As any CS major knows, both flow- and hash-based lookup schemes can suffer from large numbers of "key collisions" when thousands of packet flows per second are being considered. This in essence changed the lookup depth from 32 bits to something greater than 32 bits, depending on the technique and the amount of key collisions. So per-packet decision making was becoming a bottleneck in the core of the network.

Along came various "IP Switching" techniques and "tag switching", all of which contributed to MPLS. One of the benefits of MPLS at that time was that the complex decision making for forwarding treatment was done once, at "LSP setup time", and per-packet processing would be a consistent 20-bit lookup.

If the state of the art in packet forwarding had stood still, MPLS would have been the savior of core routing, but in the time it took for MPLS to become a standard the world of packet forwarding was revolutionized by ASICs and FPGAs. These hardware-based packet lookup engines could do the complex lookups required by core and edge routers faster than the pipes could transport the packets.

So when people said "MPLS should be faster than IPv4 at packet processing", they were not referring to standard destination-based IPv4 forwarding; they were talking about complex forwarding decision making. Theoretically, standard IPv4 destination-only processing has a worst case of 32 bits of lookup and MPLS has a constant 20 bits of lookup, not enough of a difference to show up in throughput tests.

So if your comparison of MPLS Linux forwarding versus Linux IPv4 forwarding is only based on IPv4 destination lookups, you should not expect to see a performance benefit (in fact MPLS Linux forces all ILM keys into a 32-bit number, so it too is doing a 32-bit lookup :-). Add to that the fact that MPLS Linux has not undergone any sort of optimization and contains an enormous amount of debug/tracing code, while the Linux IPv4 stack has undergone years of optimization by some of the brightest minds in the world. I'm surprised that MPLS Linux has performed as well as it has in the test results I've seen."
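To make the "constant-width lookup" point concrete, here is a minimal sketch of the idea. It is my own illustration, not the actual MPLS-Linux ILM code; the table size, entry layout, and names are all made up:

/* Illustrative sketch only; not the MPLS-Linux ILM implementation.
 * The 20-bit label is packed into a 32-bit key and resolved with a
 * single hash probe, so the per-packet lookup cost is constant
 * regardless of the traffic mix. */
#include <stdint.h>
#include <stddef.h>

#define LABEL_BITS  20
#define LABEL_MASK  ((1u << LABEL_BITS) - 1)  /* 0x000FFFFF */
#define ILM_BUCKETS 4096                      /* assumed table size */

struct ilm_entry {
    uint32_t key;           /* packed label (upper 12 bits spare) */
    int out_ifindex;        /* forwarding result */
    struct ilm_entry *next; /* hash chain */
};

static struct ilm_entry *ilm_table[ILM_BUCKETS];

/* Force the 20-bit label into a 32-bit key, as James describes. */
static uint32_t ilm_key(uint32_t label)
{
    return label & LABEL_MASK;
}

/* One probe per packet: hash, then walk a (hopefully short) chain. */
static struct ilm_entry *ilm_lookup(uint32_t label)
{
    uint32_t key = ilm_key(label);
    struct ilm_entry *e = ilm_table[key % ILM_BUCKETS];

    while (e != NULL && e->key != key)
        e = e->next;
    return e;
}

Contrast this with an IPv4 longest-prefix match, where the number of probes can vary with the prefix length, or with the flow/hash classifiers James describes, where heavy collision chains push the effective lookup depth past 32 bits.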
2011/8/16 Adrian Popa <adr...@gm...>:
> Hello Topit,
>
> Make sure you disable debugging - I think it's still enabled by default.
> With debugging on you will get lousy performance, because it has to
> write a lot of information to the logs.
>
> If memory serves me right, you need to do something like
> echo 0 > /sys/net/mpls/debug (don't remember the path exactly). With
> debugging off, performance is about the same as regular IP.
>
> Cheers,
> Adrian
>
> On Tue, Aug 16, 2011 at 1:06 AM, Topit <top...@ya...> wrote:
>>
>> Hi James and everybody, I am from Indonesia and right now I am at the
>> end of my final project, but I have a problem with a simple MPLS
>> network. My topology is shown below:
>> http://i307.photobucket.com/albums/nn309/top_itlhox/Mpls.jpg
>> I used kernel 2.6.26.6-49.fc8.mpls version 1.962.i686 on Fedora 8 as
>> the PC router.
>>
>> To measure THROUGHPUT on the MPLS network from client 1 to client 2, I
>> used the iperf tool as a TCP packet generator.
>> iperf -c 192.168.1.1
>> ------------------------------------------------------------
>> Client connecting to 192.168.1.1, TCP port 5001
>> TCP window size: 8.00 KByte (default)
>> ------------------------------------------------------------
>> [128] local 192.168.4.3 port 49176 connected with 192.168.1.1 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [128]  0.0-10.0 sec  5.29 MBytes  4.43 Mbits/sec
>> The problem is that the bandwidth result is just 4.43 Mbits/sec, even
>> though the network uses Fast Ethernet NICs. A 100 Mbps Fast Ethernet
>> link should show a throughput close to 100 Mbps, and a lot of
>> simulations get an average throughput of 80-90 Mbps, but I only got
>> 4.43 Mbits/sec. Could you tell me what is wrong in my case?
>>
>> I need your help soon; my lecturer gave me limited time to solve the
>> problem.
>> Please, I'll be waiting for your answer.
>> Regards,
>> Topit

--
Renato Westphal
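P.S. Independent of MPLS, note that Topit's iperf output shows the default 8.00 KByte TCP window. TCP throughput is bounded by roughly window / RTT, so a small window caps the rate well below line speed: 8 KB is 65,536 bits, which allows about 65 Mbit/s at a 1 ms round-trip time but only about 4.7 Mbit/s at 14 ms, suspiciously close to the 4.43 Mbit/s measured. If per-packet debug logging adds a few milliseconds of delay per hop, that alone would explain the result. It may be worth retesting with a larger window and parallel streams (the -w and -P options exist in standard iperf; the values here are just examples):

iperf -c 192.168.1.1 -w 256K -P 4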