Re: [bwm-tools-tech] Where do bwmd processed packets get sent next?
From: Nigel K. <nk...@lb...> - 2008-04-10 08:05:31
> The version I used is stable 0.3.0 from January 2, 2008. It looks like the last snapshot was March 8, 2006 (shown on sourceforge). Is the March 8, 2006 snapshot a later revision than the stable release on January 2, 2008?

Ah, you're right, the latest is the stable. You're using the right version.

> > > There is an initial burst of data let through, then the stream sputters and dies. It looks to me like it creates a loop for the data which causes trouble. It's only processing one 256Kbit/sec flow via a fast ethernet interface on a Celeron 2.53GHz (Pentium D generation) with 512MB RAM. The machine is only running bwm_tools and is dedicated to this testing process. Our production machine will be much beefier and will have multiple gigabit interfaces run in bridge mode (packets hitting the FORWARD chains mainly).
> > >
> > > So, my question is, where does bwmd reinsert the packets that it processes?

> > Kernel sends the packet header to bwmd, which then acks or nacks it back to the kernel. I think there are one or two threads which handle this; I'll have to check and make sure if you get stuck.

> So only the header is sent to bwmd. bwmd then tells the kernel queue to trash the packet or let it through. It doesn't actually insert itself into the processing of the stream and just issues instructions to the kernel about what to do with packets currently in transit. Am I understanding correctly?

Well, there is a queue of packets that a verdict needs to be sent for. BWMD sends verdicts in the order it sees fit to shape the traffic, which may not be FIFO order. So you could say the kernel keeps the packets in a buffer and only sends them through once BWMD has seen them and decided what to do with them.

> So, bwmd acks or nacks the packets at the end of filter INPUT/OUTPUT, but the packet never leaves the kernel queue. It just gets temporarily held for the results of additional userspace processing. If that's the case, then there really isn't any opportunity to cause a data loop. It must be something else, if that's the case.

Correct. I was pondering using tun/tap to bring the queueing to userspace entirely. The problem is, you create a device which is handled by userspace ... but then what? You'd need some magic to make sure all traffic hits it, then more magic to get the traffic out again ... so yeah.

> > > I looked through the code and it was obvious to me that it just waits for packets, checks link budgets, trims flows, but then I lose track of where that data exits bwmd and heads back into the networking stack of the OS. I'm sure I just missed it while skimming the code.

> > The packet header is added to a queue after it's been received. There is code there regarding the kernel queue which acks the packet ... if you have trouble finding it, let me know and I'll check the code and show you :)

> I suspected that the code I needed to look at was actually kernel (or kernel module) code. What you're saying would seem to confirm that. Am I understanding correctly?

BWMD uses netlink sockets to communicate the packet's verdict back to the kernel. If it's a BWMD problem, you'll need to look in the BWMD source. There is one file that handles the queue interface, and I think the main BWMD daemon handles the inbound packet headers and sends the verdicts back. It's only about 3 lines of code, so it's easily missed.
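In case it helps you navigate the source, a rough sketch of what such a verdict loop looks like against the stock ip_queue module via libipq (the standard userspace interface for the iptables QUEUE target) is below. Illustrative only, not the actual bwmd code; the 128-byte copy range and names are made up for the example:

/*
 * Minimal sketch of a QUEUE verdict loop using the stock libipq API.
 * Not the actual bwmd source.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>         /* PF_INET */
#include <linux/netfilter.h>    /* NF_ACCEPT, NF_DROP */
#include <libipq.h>

#define BUFSIZE 2048

int main(void)
{
    unsigned char buf[BUFSIZE];
    struct ipq_handle *h;

    h = ipq_create_handle(0, PF_INET);
    if (!h) {
        ipq_perror("sketch");
        exit(1);
    }

    /* Ask the kernel for just the first 128 bytes of each packet
     * (enough for the headers); the packet itself stays buffered
     * in the kernel until we send a verdict for it. */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, 128) < 0) {
        ipq_perror("sketch");
        exit(1);
    }

    for (;;) {
        if (ipq_read(h, buf, BUFSIZE, 0) < 0)
            break;

        if (ipq_message_type(buf) == IPQM_PACKET) {
            ipq_packet_msg_t *m = ipq_get_packet(buf);

            /* These are "the 3 lines": ack (NF_ACCEPT) or nack
             * (NF_DROP) the queued packet back to the kernel.
             * A shaper would instead hold on to m->packet_id and
             * send the verdict later, in whatever order suits
             * the flow, rather than accepting immediately. */
            ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
        }
    }

    ipq_destroy_handle(h);
    return 0;
}

BWMD obviously does a lot more between the read and the verdict (classification, link budgets, queueing), but that's the entire kernel round trip.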
> > > Can anyone lend some insight on this? To me, it is most important that it work well with the FORWARD chains, which seems to be the original design intent, but it would be nice if it could be used for limiting flows originating/terminating on the host itself, as well.

> > Both ways should work, I've tested both :) I'm curious as to those errors you're getting.

> It could just be that the netfilter code was changed up at some point after you last tested. I know from experience that the linux kernels are in a constant state of flux.

It's possible, but if it's the kernel that changed, wouldn't that constitute a kernel bug more than anything?

> > > Thanks in advance. If I've boneheaded something, please be gentle. If this suits our basic needs, we may very well fund future development to smooth some rough edges and extend the features (particularly, support for more threads). Heck, I might even write a little code and/or documentation.

> > That would be excellent and very much appreciated.
> >
> > We were also looking at tun/tap support so it would run on other OS's ... but unfortunately funding is a bit of an issue :(

> The only OS's we're interested in are linux and BSD variants. *Maybe* a Solaris version might be useful, but only with much more threading (think T1 or T2 processors).

*nod*

> I don't know your time and financial constraints. How much do you think you'd need to extend the threading model to scale to 8 or more processor cores? Right now, it appears there are 4 threads with the bulk of the processor-intensive work done on 2 of those threads. We'd like to deploy this on a dual-socket quad-core (8 cores total) system. In a perfect world, we'd like to be able to handle a few thousand rules, eight gigabit interfaces, and about 400Mbits/sec of throughput.

That's pretty hard to say; let me think about it a bit. Possibly contact me offlist at nk...@lb... to discuss.

-N
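P.S. For completeness: packets only reach that kernel queue if an iptables rule sends them there with the QUEUE target, so the chain you put the rule in decides which flows get shaped. Illustrative placement only; your real rules will be more specific:

iptables -A FORWARD -j QUEUE    # bridged/forwarded traffic, the main design case
iptables -A INPUT -j QUEUE      # flows terminating on the host
iptables -A OUTPUT -j QUEUE     # flows originating from the host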