From: Karl H. <ka...@hi...> - 2009-09-15 14:57:50
Karl Hiramoto wrote:
> David Miller wrote:
>> From: Karl Hiramoto <ka...@hi...>
>> Date: Thu, 10 Sep 2009 23:30:44 +0200
>>
>>> I'm not really sure if, or how many, packets the upper layers buffer.
>>
>> This is determined by ->tx_queue_len, so whatever value is being
>> set for ATM network devices is what the core will use for backlog
>> limiting while the device's TX queue is stopped.
>
> I tried varying tx_queue_len by 10x, 100x, and 1000x, but it didn't
> seem to help much. Whenever the ATM device called netif_wake_queue(),
> it seemed the driver still starved for packets and still took time to
> get going again.
>
> It seems that when the driver calls netif_wake_queue(), its TX
> hardware queue is nearly full, but it has space to accept new packets.
> The TX hardware queue has time to empty, the device starves for
> packets (goes idle), and then finally a packet comes in from the upper
> networking layers. I'm not really sure at the moment where the problem
> that caps my maximum throughput lies.
>
> I did try changing sk_sndbuf to 256K, but that didn't seem to help
> either.

Actually, I think I spoke too soon: after tuning the TCP parameters and
txqueuelen on all of the machines (the server, the router, and the
client), my performance came back.

--
Karl