Re: [bwm-tools-tech] Where do bwmd processed packets get sent next?
From: Andrew K. <and...@ad...> - 2008-04-09 20:08:50
Reply is inline below.

On 9 Apr 2008 at 15:04, Nigel Kukard wrote:

> > However, we cannot do the same with the OUTPUT chain of the filter
> > table. If we jump those packets over to bwmd, bwmd starts spewing
> > messages related to truncation.
>
> Please provide full logs. Have you tried latest snapshot?

I'll look around to see what logs there are. The problem is easily
reproducible, so it should be easy to create additional logging if
needed.

The version I used is stable 0.3.0 from January 2, 2008. It looks like
the last snapshot was March 8, 2006 (shown on SourceForge). Is the
March 8, 2006 snapshot a later revision than the stable release from
January 2, 2008?

> > There is an initial burst of data let through, then the stream
> > sputters and dies. It looks to me like it creates a loop for the
> > data, which causes trouble. It's only processing one 256Kbit/sec
> > flow via a fast ethernet interface on a Celeron 2.53GHz (Pentium D
> > generation) with 512MB RAM. The machine is only running bwm_tools
> > and is dedicated to this testing process. Our production machine
> > will be much beefier and will have multiple gigabit interfaces run
> > in bridge mode (packets hitting the FORWARD chains mainly).
> >
> > So, my question is, where does bwmd reinsert the packets that it
> > processes?
>
> Kernel sends the packet header to bwmd, it then acks or nacks it
> back to the kernel. I think there are one or two threads which
> handle this; I'll have to check and make sure if you get stuck.

So only the header is sent to bwmd. bwmd then tells the kernel queue
to trash the packet or let it through. It doesn't actually insert
itself into the processing of the stream; it just issues instructions
to the kernel about what to do with packets currently in transit. Am
I understanding correctly?

> > wire in
> > eth0
> > raw PREROUTING
> > conntrack
> > mangle PREROUTING
> > routing
> > mangle INPUT (here we mark inbound packets for bwmd)
> > filter INPUT (at the end of this chain, we jump to bwmd)
> > application (browser, ftp client, web server, whatever)
> > ---
> > application
> > routing
> > raw OUTPUT
> > conntrack
> > mangle OUTPUT (here we mark outbound packets for bwmd)
> > routing (second pass in case mangle changed something)
> > filter OUTPUT (logically, this is what should jump to bwmd, but
> > doesn't work)
> > mangle POSTROUTING
> > eth0
> > wire out
>
> That's the flow ..... bwmtools needs to ack or nack the packets in
> the kernel queue. This would occur when you jump to bwmd. Packets
> which are ack'd will come out of the kernel queue and carry on
> traversal of the chains.

So bwmd acks or nacks the packets at the end of filter INPUT/OUTPUT,
but the packet never leaves the kernel queue. It just gets held
temporarily, pending the results of the additional userspace
processing. If that's the case, then there really isn't any
opportunity to create a data loop, and the problem must be something
else.
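For the archives, that matches my reading of the libipq(3) man page.
I don't know whether bwmd actually uses libipq or speaks the netlink
protocol to ip_queue directly, so treat this as an illustration of
the mechanism rather than bwmd's code, but the userspace side of the
verdict loop looks roughly like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <linux/netfilter.h>
    #include <libipq.h>

    #define BUFSIZE 2048

    static void die(struct ipq_handle *h)
    {
        ipq_perror("verdict-loop");
        ipq_destroy_handle(h);
        exit(1);
    }

    int main(void)
    {
        unsigned char buf[BUFSIZE];
        struct ipq_handle *h;

        /* Bind to the kernel's ip_queue for IPv4 packets. */
        h = ipq_create_handle(0, PF_INET);
        if (!h)
            die(h);

        /* Ask the kernel to copy only the start of each packet
           (the headers) to userspace; the payload never leaves
           the kernel. */
        if (ipq_set_mode(h, IPQ_COPY_PACKET, 96) < 0)
            die(h);

        for (;;) {
            /* Block until the kernel queues a packet for us. */
            if (ipq_read(h, buf, BUFSIZE, 0) < 0)
                die(h);

            if (ipq_message_type(buf) == IPQM_PACKET) {
                ipq_packet_msg_t *m = ipq_get_packet(buf);

                /* The verdict is the "ack or nack": NF_ACCEPT
                   releases the packet to continue traversing the
                   chains, NF_DROP discards it. Nothing is
                   reinjected from userspace. */
                if (ipq_set_verdict(h, m->packet_id,
                                    NF_ACCEPT, 0, NULL) < 0)
                    die(h);
            }
        }
    }

The detail that matters for my loop question: ipq_set_verdict() only
releases or discards a packet the kernel is already holding, so no
data re-enters the stack from userspace and no loop should be
possible.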
> > I looked through the code and it was obvious to me that it just
> > waits for packets, checks link budgets, trims flows, but then I
> > lose track of where that data exits bwmd and heads back into the
> > networking stack of the OS. I'm sure I just missed it while
> > skimming the code.
>
> The packet header is added to a queue after it's been received.
> There is code there regarding the kernel queue which acks the
> packet ... if you have trouble finding it, let me know and I'll
> check the code and show you :)

I suspected that the code I needed to look at was actually kernel (or
kernel module) code. What you're saying would seem to confirm that.
Am I understanding correctly?

> > Can anyone lend some insight on this? To me, it is most important
> > that it work well with the FORWARD chains, which seems to be the
> > original design intent, but it would be nice if it could be used
> > for limiting flows originating/terminating on the host itself, as
> > well.
>
> Both ways should work, I've tested both :) I'm curious as to those
> errors you're getting.

It could just be that the netfilter code changed at some point after
you last tested. I know from experience that the Linux kernels are in
a constant state of flux. I'll see what I can do to better pinpoint
what is happening and the cause.

> > Thanks in advance. If I've boneheaded something, please be gentle.
> > If this suits our basic needs, we may very well fund future
> > development to smooth some rough edges and extend the features
> > (particularly, support for more threads). Heck, I might even write
> > a little code and/or documentation.
>
> That would be excellent and very much appreciated.
>
> We were also looking at tun/tap support so it would run on other
> OS's ... but unfortunately funding is a bit of an issue :(

The only OS's we're interested in are Linux and BSD variants. *Maybe*
a Solaris version might be useful, but only with much more threading
(think T1 or T2 processors).

I don't know your time and financial constraints. How much do you
think you'd need to extend the threading model to scale to 8 or more
processor cores? Right now, it appears there are 4 threads, with the
bulk of the processor-intensive work done on 2 of those threads. We'd
like to deploy this on a dual-socket quad-core (8 cores total)
system. In a perfect world, we'd like to be able to handle a few
thousand rules, eight gigabit interfaces, and about 400Mbits/sec of
throughput.

Sincerely,
Andrew Kinney
President and Chief Technology Officer
Advantagecom Networks, Inc.
http://www.advantagecom.net
phone: 509-522-3696 ext. 101
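P.S. To make the threading question concrete: what I'm picturing is
the packet-header queue drained by a pool of workers sized to the
core count, rather than a fixed two threads. A rough sketch, with all
names hypothetical (this is not the bwmd source):

    #include <pthread.h>
    #include <unistd.h>

    #define NWORKERS  8     /* one per core on a dual-socket quad */
    #define QLEN      1024

    /* Hypothetical work item: a queued packet awaiting a verdict. */
    struct work { unsigned long packet_id; };

    static struct work q[QLEN];
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

    /* The single reader thread (the one calling ipq_read) would
       push each packet header here. */
    static void enqueue(struct work w)
    {
        pthread_mutex_lock(&lock);
        if (count < QLEN) {
            q[tail] = w;
            tail = (tail + 1) % QLEN;
            count++;
            pthread_cond_signal(&nonempty);
        }
        pthread_mutex_unlock(&lock);
    }

    /* Each worker independently classifies a packet against the
       flow rules and issues its verdict, so the CPU-heavy work
       spreads across all cores instead of two threads. */
    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            struct work w;
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&nonempty, &lock);
            w = q[head];
            head = (head + 1) % QLEN;
            count--;
            pthread_mutex_unlock(&lock);
            /* classify + ipq_set_verdict(...) would happen here,
               using w.packet_id */
            (void)w;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NWORKERS];
        int i;
        for (i = 0; i < NWORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        (void)enqueue;  /* reader thread omitted in this sketch */
        pause();
        return 0;
    }

The open question would be whether verdicts can safely be issued from
multiple threads on one kernel-queue handle, or whether each worker
needs its own; that's the part I'd want your input on.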