I am bonding several ISPs to get a high-speed connection between two specific points over the Internet.
I am using vtun to get several Ethernet tunnels over TCP, so I have tap0, tap1 and tap2 at both end points.
All these virtual Ethernet devices are bonded together with channel bonding.
This is working fine.
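For reference, a minimal sketch of a setup like the one described above. The bond options, addresses and the assumption that vtund has already created the tap devices are mine, not the poster's actual config:

```shell
# Assumed: vtund is already running and has created tap0, tap1, tap2.
# Load the bonding driver in round-robin mode (stripes packets across all slaves).
modprobe bonding mode=0 miimon=100

# Enslave the three vtun tap devices.
ip link set tap0 master bond0
ip link set tap1 master bond0
ip link set tap2 master bond0

# Give the bond a private address for this tunnel endpoint and bring it up.
ip addr add 10.0.0.1/24 dev bond0
ip link set bond0 up
```

The same commands run on the other end point with the peer address (e.g. 10.0.0.2/24).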
Nevertheless, I am trying to get a specific behavior: if one ISP is weaker (less bandwidth) than the others, I observe that the traffic is still distributed equally.
Let's say I have 100 kB/s for ISP1, 100 kB/s for ISP2, and 50 kB/s for ISP3. When I transfer data over TCP (with netcat, for example), I can see that all three virtual devices settle at 50 kB/s, so I only get ~150 kB/s at most. I guess that as soon as one link starts losing packets, the netcat TCP transfer slows down, and so all the underlying TCP tunnel connections slow down the same way.
I have tried the balance-alb/tlb modes, but without any change; I guess the load is calculated from the theoretical speed (100 Mb/s) of the devices.
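One way to check that guess: balance-tlb weighs slaves by the speed the driver reports, and tap devices report a fixed (or no) link speed regardless of the ISP behind them. You can see what the kernel believes with standard tools (device name from the setup above):

```shell
# What speed does the kernel think the tap slave has?
ethtool tap0 | grep -i speed
cat /sys/class/net/tap0/speed 2>/dev/null
```

If this prints a constant value (or nothing) for every tap, tlb/alb have no real information to balance on.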
Is there any way to achieve a more complex distribution in my case?
I am thinking of adaptive load balancing based on the current traffic speed of each link (ignoring the theoretical speed); these measured speeds could be used as weights, or something like that.
Thanks for any comment
I built a similar "thing" and ran into the same problem. Luckily for you, I was crazy enough to "attack" the kernel source ;) I made a new mode which can use links with different speeds. You can specify the link speeds relative to one another when the bonding module is inserted. You cannot, however, change the "weights" while it is in use...
I made my diff against a 2.6 kernel source. It has been running without a glitch for over two months now.
Here are the patches:
modprobe bonding mode=7 arp_interval=2000 arp_ip_target=192.168.x.x weights=tap0:18,tap1:1
Here tap0 sends 18 packets while tap1 sends only one (7200 kbit for tap0 and 400 kbit for tap1). ARP link checking is my preference; it's not a necessity!
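If you would rather derive the weights parameter from measured link speeds than pick the numbers by hand, the smallest integer ratio can be computed like this. This is my sketch, not part of the patch; the kbit/s values are the example figures above, and gcd-based reduction gives the smallest whole-number weights:

```shell
# Measured per-link throughput in kbit/s (example values from above).
tap0_kbit=7200
tap1_kbit=400

# Greatest common divisor, to reduce the ratio to the smallest integers.
gcd() {
  a=$1; b=$2
  while [ "$b" -ne 0 ]; do
    t=$((a % b)); a=$b; b=$t
  done
  echo "$a"
}

g=$(gcd "$tap0_kbit" "$tap1_kbit")
w0=$((tap0_kbit / g))
w1=$((tap1_kbit / g))

echo "weights=tap0:${w0},tap1:${w1}"   # -> weights=tap0:18,tap1:1
```

The printed string can be pasted straight into the modprobe line above.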
Please note: the other end does not need to be patched; in that case it should be in mode 0 (round-robin)! This behavior only applies to SENDING packets, so you have to patch the other end too if you wish to use it for downstream...
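For the unpatched receiving end, the plain round-robin setup would look something like this (interface names are assumed to match the sender's side):

```shell
# On the unpatched peer: standard bonding in mode 0 (round-robin).
modprobe bonding mode=0 miimon=100
ip link set tap0 master bond0
ip link set tap1 master bond0
ip link set bond0 up
```

Only if you also want weighted downstream traffic do you apply the patch and the weights= option on this side as well.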
I will try to write up a better description for it; when finished, it will be found here:
Please drop me a message if you have any questions, and in case you use it and works for you ;)
Oops, I gave you the wrong links :) http://anubis.mw.hu/bonding/* is the right one...
For lazy people:
Static allocation might perform well if you are quite accurate in estimating the relative speeds of the links. For dynamic resource allocation algorithms for bonding multiple WAN links, I'd recommend the TRUFFLE BBNA box from Mushroom Networks. They even have a device that can bond cellular data cards, which obviously have wildly varying WAN speeds on each bonded link. We have their TRUFFLE in our main office and the smaller unit in our trailer office.
As the mentioned links are no longer available, I republished them:
These patches are applicable to kernel 3.2.41, not to 2.6.