From: Jay V. <fu...@us...> - 2009-03-31 22:56:42
Support Team <su...@ca...> wrote:
>All the bonding interfaces have proper values. But in this test, I did not
>plug all the cables into the bonded interfaces. There are four independent
>PCI-X buses, each with an 82546 dual-port controller. That is eight ports
>in total, and they are all bonded together. But I only plugged four cables
>into them, since I did not expect the throughput to go beyond 4Gb/s.
>
>For sure this patch made the throughput go higher, but I think it could go
>higher on this system...

	If the values in /proc/net/bonding/bond* are all fine during the
low throughput (1Gb/sec) state without the patch applied, then I am very
skeptical that the patch is responsible for any throughput increase.

	By "fine," I mean all active slaves in the same aggregator, and
that aggregator is the active aggregator.

	To put it another way, if the membership of the active aggregator
is the same with and without the patch applied, then I do not believe
that it can have any effect on overall performance (presuming nothing
else changes, e.g., switch configuration, bonding device configuration,
testing environment, etc).

	All the patch does is keep 802.3ad notified of speed or duplex
changes amongst the slaves, which in turn keeps the aggregator membership
up to date. In the 2.4.x kernels, that's already taken care of by the
miimon, so my belief is that the patch should have no practical effect.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fu...@us...

>-----Original Message-----
>From: Jay Vosburgh [mailto:fu...@us...]
>Sent: Tuesday, March 31, 2009 9:45 AM
>To: su...@ca...
>Cc: 'Ronciak, John'; e10...@li...
>Subject: [work] Re: [E1000-devel] Bug report E1000 driver bonding in
>802.3ad mode can not go beyond 1GB/s throughput
>
>Support Team <su...@ca...> wrote:
>
>>Hi Jay,
>>
>>With this patch applied to the 2.4 kernel, we were able to get to
>>1.8Gbits/s. From this result, I think that to get bonding working
>>correctly in the 2.4 kernel, I probably have to back-port a lot of
>>2.6 changes into 2.4.
>>
>>Thanks!
>
>	If the patch actually fixed your problem, then your problem in
>the first place had to do with the aggregation not forming correctly
>(i.e., the links coming up individually, not as a set).
>
>	I think this is unlikely. I think this because the 2.4 bonding
>driver should already inspect the speed / duplex on each mii check (the
>rate of which depends upon the miimon parameter value). All the patch
>does is additionally check the speed and duplex again when the system
>notifies bonding that a slave's state has changed.
>
>	Could you check a failing configuration again, and see if
>/proc/net/bonding/bond* shows that all of the slaves have the same
>"aggregator ID," that there is an active aggregator, and that its
>"partner MAC address" is not all zeroes?
>
>	If the failing system shows all slaves with the same agg ID and
>an active partner MAC, then I don't think this patch really fixed
>anything, and something else changed that resolved the problem.
>
>	I would appreciate it if you could check this, since if there is
>an actual issue, I'll want to submit that patch to the stable 2.4
>release (if it fixes an actual bug).
>
>	-J
>
>---
>	-Jay Vosburgh, IBM Linux Technology Center, fu...@us...
>
>
>
>------------------------------------------------------------------------------
>_______________________________________________
>E1000-devel mailing list
>E10...@li...
>https://lists.sourceforge.net/lists/listinfo/e1000-devel
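[Editor's note: the check Jay asks for (same "Aggregator ID" on every slave, and a non-zero "partner MAC address" for the active aggregator) can be sketched as a small parser over the /proc/net/bonding/bondX text. This is only an illustration: the field names approximate the format of that era's bonding driver, and the sample output below is invented, not captured from a real system.]

```python
# Sketch of the aggregation sanity check described above: confirm that
# every slave in /proc/net/bonding/bondX output reports the same
# "Aggregator ID" and that the partner MAC is not all zeroes.
# Field names and the SAMPLE text are illustrative assumptions.
import re

SAMPLE = """\
802.3ad info
LACP rate: slow
Active Aggregator Info:
	Aggregator ID: 1
	Partner Mac Address: 00:1b:21:aa:bb:cc

Slave Interface: eth0
Aggregator ID: 1

Slave Interface: eth1
Aggregator ID: 1
"""

def aggregation_looks_sane(text):
    # Collect every "Aggregator ID" line (active aggregator and slaves).
    agg_ids = re.findall(r"^\s*Aggregator ID:\s*(\d+)", text, re.M)
    partner = re.search(r"Partner Mac Address:\s*([0-9a-fA-F:]+)", text)
    if not agg_ids or partner is None:
        return False
    same_agg = len(set(agg_ids)) == 1
    partner_ok = partner.group(1) != "00:00:00:00:00:00"
    return same_agg and partner_ok

if __name__ == "__main__":
    print(aggregation_looks_sane(SAMPLE))  # True for the sample above
```

On a live system one would feed this the contents of /proc/net/bonding/bond0; a False result would point to the links coming up individually rather than as one LACP aggregate.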