From: Iain W. <iw...@op...> - 2007-09-26 16:19:52
On 9/27/07, Ryan Barnes <rb...@no...> wrote:
>
> Has anyone on this list encountered any issues with traffic flow when
> running a cluster with two machines where a majority of the connected
> sessions are throttled? As per the documentation, the master handles all
> setup/teardown, throttled, and gardened packets.
>
> This situation ensures non-optimal traffic flow, as the inbound traffic
> flows to the slave (which advertises its BGP routes by default), which
> then has to pass the traffic back to the master for processing. While
> this hasn't yet proven to be an issue per se, it seems rather
> inefficient. Are there any suggestions on optimising this traffic flow,
> or am I simply stuck?
>
> It also seems to be a scalability problem: even if I implement a third
> box acting as a slave to take advantage of BGP load balancing, both
> these slaves would simply pass ALL my traffic back to a single master
> to be forwarded anyway.

Yes, the problem is well understood. Throttling was designed for the current
broadband product offerings in the Australian marketplace, where customers
receive full line rate and throttling is used as a punitive measure once
they exceed their usage quota. Only a small fraction of users are expected
to trigger the limit.

For the throttle rate to be met accurately, the code needs to track all
packets being delivered to the client, which would be a signalling
nightmare if the packets were just handled on the machine that originally
received them. Sending them to the master just seemed like a good idea at
the time(!).

There has been some discussion around the idea of bouncing packets to any
cluster member using a deterministic hashing algorithm based on, e.g., a
session id. I don't believe anyone is currently working on developing the
patch.

> Also had a second question on how L2TPNS gets its default GW. There is a
> setting in the config called "set peer_address" which seems to be the
> L2TPNS default GW.
> Does this override the GW set on the actual system itself? Can this
> default be learned via BGP, or is that meant for advertising routes
> only?

The peer_address is only sent to the PPP client in the IPCP phase so that
it can configure its routing table appropriately. I vaguely recall it
being important in Windows IPsec+L2TP tunnels. I would expect it to be the
cluster address rather than the default gateway.

L2TPNS passes all packets received via L2TP to the tun device, where Linux
picks them up and deals with them according to the host routing table.
When L2TPNS receives a packet destined for a user via the tun interface,
it maps the destination address to a tunnel and session id, wraps it up in
L2TP goodness and fires it off to the LAC.

When advertising a route via BGP, each L2TPNS node uses my_address, which
is automatically detected as the primary IP on the cluster interface. This
behaviour seems a little bit suspect.

As you can see, L2TPNS doesn't do a whole lot of routing, which is good,
because you can do much cleverer things using Quagga or similar on Linux
directly.

Regards,
--Iain