Thread: [Madwifi-devel] Max throughput? Bonding? WDS? Turbo?
From: Daniel P. <da...@ci...> - 2004-03-24 15:52:56
Just a few quick questions, all more or less unrelated I guess.

What's the max throughput anyone has gotten with madwifi? So far my experience has been 21-22 Mbps on 802.11a using some card with the Atheros a/b/g chipsets. Has anyone gotten faster? These are only about 10 feet apart too, with external antennas and nothing but air between them.

Has anyone been able to bond two or more cards together to achieve higher throughput? I've tried it with ifenslave and bonding in the kernel, but it thinks they're ethernet cards (which I guess makes sense, because bonding is usually for ethernet) and tries to set the cards to full duplex and 100 Mbps. Needless to say, this doesn't work.

Next, is WDS present in any of the madwifi builds currently? I know that it was talked about before; someone had written code for it, but it was not incorporated into the official repository. Is there anything other than WDS that can be done to make two access points talk to one another? Or does anyone know if WDS will happen very soon?

Lastly, is it possible to get Turbo mode in 802.11a to work on user-defined channels? It seems to only work somewhere around 5.2 or 5.3 GHz and refuses to let you set the channel to anything else. I'd like to see if I could make it operate in the 5.725 to 5.825 GHz range.

Thanks for any information you guys can give me! I appreciate it :)

--
Daniel Prather
CityNet, LLC
da...@ci...
From: Darrell B. <bu...@on...> - 2004-03-24 17:59:31
Humm, user list questions on the dev list (IMHO)... Well, I suppose I can save the devs some time:

On Mar 24, 2004, at 9:52 AM, Daniel Prather wrote:
> What's the max throughput anyone has gotten with madwifi? So far my
> experience has been 21-22mbps on 802.11a using some card with the
> Atheros a/b/g chipsets. Has anyone gotten faster?

You're doing well then. See http://www.atheros.com/pt/papers.htm for some theoretical numbers. You can push 40 if it's all UDP data, 24 or so with TCP.

> Has anyone been able to bond two or more cards together to achieve
> higher throughput? I've tried it with ifenslave and bonding in the
> kernel, but it thinks they're ethernet cards [...]

Haven't ever tried this, but if the kernel bonding always assumes it's an ethernet card, you might be stuck. You could try building a multi-link PPP tunnel over the different cards, or just enable multi-path routing in the kernel and ensure you have a route over both cards. Another trick would be to use one link for inbound and one link for outbound; taking advantage of the half-duplex nature of the wireless link might net you higher overall throughput if your traffic is relatively balanced.

> Next, is WDS present in any of the madwifi builds currently? [...]

No WDS right now; apparently planned, but low on the list.

> Lastly, is it possible to get Turbo mode in 802.11a to work on
> user-defined channels? [...]

Turbo mode restricts the available channels because of the way it works. I suppose some of the devs may contradict me, but unless someone goofed up in the channel selection code, it's an FCC/operating-mode restriction.

-Darrell
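[Editor's note: the UDP-vs-TCP numbers quoted above are the kind of thing you can measure with the iperf tool of that era; a sketch, where the server address 10.1.1.2 is hypothetical:]

```shell
# On the receiving host: run an iperf server (listens on TCP/UDP port 5001)
iperf -s

# On the sending host: a 30-second TCP throughput test,
# then a UDP test at a 40 Mbit/s offered load to probe the higher UDP ceiling
iperf -c 10.1.1.2 -t 30
iperf -c 10.1.1.2 -u -b 40M -t 30
```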
From: Daniel P. <da...@ci...> - 2004-03-24 18:13:15
Awwww crap, I didn't mean to send it to the devel list! Oh well. You and Shane have answered my questions anyway. I greatly appreciate it!

I may give multilink PPP a try with this setup, or try the inbound/outbound thing. Thanks again!

On Wed, 2004-03-24 at 11:59, Darrell Budic wrote:
> [...]

--
Daniel Prather
CityNet, LLC
da...@ci...
From: Shane S. <sh...@bo...> - 2004-03-24 19:28:45
How does the inbound/outbound bonding work?

On Wed, 2004-03-24 at 11:13, Daniel Prather wrote:
> [...]
From: Darrell B. <bu...@on...> - 2004-03-24 20:26:55
On Mar 24, 2004, at 1:28 PM, Shane Spencer wrote:
> How does the inbound/outbound bonding work?

It's an asymmetric routing hack, not really bonding. You've got something like this:

10.1.1.1/30 ----> Link A ----> 10.1.1.2/30
HOST A                         HOST B
10.1.1.5/30 <---- Link B <---- 10.1.1.6/30

So if you've got network 10.2.0.0/24 off of Host A, and 10.2.1.0/24 off of Host B, you set up your routing tables like this:

Host A:
route add 10.2.1.0/24 gw 10.1.1.2

Host B (also assume the 0/0 route is off of Host A):
route add 10.2.0.0/24 gw 10.1.1.5
route add default gw 10.1.1.5

Now all of your traffic destined for 10.2.1.0/24 should use Link A, and everything going to 10.2.0.0/24 and the rest of the world should use Link B. This will typically include SYN/ACKs and other IP control traffic. You might be able to get 40 Mbps TCP this way, since all the ACK traffic is on the second link. Note that you may need to be careful to set up services on Hosts A and B to use the address of a different interface (say, eth0 as 10.2.0.1 or 10.2.1.1, respectively); otherwise you may have two-way traffic on a link if your name server is on 10.1.1.2, for instance. And of course you've got problems if a link goes away, but you can work around that with higher-cost routes, if the link type supports it.
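[Editor's note: the same split can be written with the newer iproute2 tooling instead of the legacy route command; a sketch only, reusing the addressing from the example above:]

```shell
# Host A: send traffic for Host B's network out over Link A
ip route add 10.2.1.0/24 via 10.1.1.2

# Host B: send Host A's network and the default route back over Link B,
# so forward data and returning ACKs ride different radios
ip route add 10.2.0.0/24 via 10.1.1.5
ip route add default via 10.1.1.5
```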
A simple alternative to bonding is to just set up routes like this (after enabling "IP: equal cost multipath" under "IP: advanced router" in the kernel's Networking Options):

Host A:
route add 10.2.1.0/24 gw 10.1.1.2
route add 10.2.1.0/24 gw 10.1.1.6

Host B:
route add 10.2.2.0/24 gw 10.1.1.1
route add 10.2.2.0/24 gw 10.1.1.5
route add default gw 10.1.1.1
route add default gw 10.1.1.1

And the kernel will theoretically use both links in a roughly equal fashion. (I don't know what algorithm it uses to throw packets around in this case. Typical approaches are to alternate packets; more complex routines (Cisco defaults, etc.) split things up by stream, which keeps packets in order better. The kernel doc says "non-deterministic method", which sounds random to me.) If the routes are attached to devices, they should go away if the device goes down, but this gets strange with wireless. It seems to me that the master side would never go down, although the managed side might if it lost sync with the AP, so I don't know if I'd use it in production. Your multilink vtuns should be safer, because the ppp session would go down if it didn't get enough ACKs, I think. Or run a dynamic routing protocol over both links; proper configuration could probably get you the same effect.

This usually won't get any single stream faster than one link can handle, but now you can run two of them. Bonding should let you get faster throughput than any individual link, so the choice of which to use generally depends on what you want it to do.

-Darrell
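[Editor's note: with iproute2, the equal-cost multipath idea above is expressed as one route carrying two nexthops rather than two duplicate routes; a sketch only, assuming CONFIG_IP_ROUTE_MULTIPATH is enabled and reusing the addresses from the example:]

```shell
# Host A: one multipath route, both wireless links as equal-cost nexthops
ip route add 10.2.1.0/24 nexthop via 10.1.1.2 nexthop via 10.1.1.6

# Host B: multipath back toward Host A's side, plus a multipath default route
ip route add 10.2.2.0/24 nexthop via 10.1.1.1 nexthop via 10.1.1.5
ip route add default nexthop via 10.1.1.1 nexthop via 10.1.1.5
```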
From: Shane S. <sh...@bo...> - 2004-03-24 20:35:04
On Wed, 2004-03-24 at 13:26, Darrell Budic wrote:
> [...]
>
> Host B
> route add 10.2.2.0/24 gw 10.1.1.1
> route add 10.2.2.0/24 gw 10.1.1.5
> route add default gw 10.1.1.1
> route add default gw 10.1.1.1

Is that a mistype at the end? Two gw's that are the same? Or is that a metric thingy?
From: Darrell B. <bu...@on...> - 2004-03-24 21:08:13
On Mar 24, 2004, at 2:35 PM, Shane Spencer wrote:
>> Host B
>> route add 10.2.2.0/24 gw 10.1.1.1
>> route add 10.2.2.0/24 gw 10.1.1.5
>> route add default gw 10.1.1.1
>> route add default gw 10.1.1.1
>
> Is that a mistype at the end? Two gw's that are the same? Or is that a
> metric thingy?

Whups, typo: the last line should read "route add default gw 10.1.1.5". Use identical metrics in this case if you want to add metrics.
From: Jim T. <ji...@ne...> - 2004-03-25 00:04:01
On Mar 24, 2004, at 12:26 PM, Darrell Budic wrote:
>> How does the inbound/outbound bonding work?
>
> It's an asymmetric routing hack, not really bonding, you've got
> something like this:
> [...]
> Now all of your traffic destined for 10.2.1.0/24 should use Link A,
> and everything going to 10.2.0.0/24 & the rest of the world should use
> link B. This will typically include SYN/ACKs and other IP control
> traffic.

If you're going to do this, you might want to experiment with turning off 802.11-layer ACKs. Also, don't bother trying this with two radios in the same spectrum (e.g. two 11g radios, or two 11a radios within the same sub-band). The ACR (adjacent-channel rejection) of the Atheros radios (and, as far as I know, of any available direct-conversion chipset) won't support it. If you do, any transmit by the sending side will blind the receiver on the same side, and the last thing you want to do is smash packets that have already traversed the wireless media.

Jim
From: Shane S. <sh...@bo...> - 2004-03-24 18:03:24
On Wed, 2004-03-24 at 08:52, Daniel Prather wrote:
> What's the max throughput anyone has gotten with madwifi? So far my
> experience has been 21-22mbps on 802.11a [...]

Same here.

> Has anyone been able to bond two or more cards together to achieve
> higher throughput? I've tried it with ifenslave and bonding in the
> kernel [...]

I did the same, to the point of wanting to hack madwifi's driver to report speeds via fake MII registers. However, it was simpler to hack the bonding driver and set the speeds manually, which I did. I then bonded two vtun ethernet-style tunnels over two wireless links (802.11a and 802.11g) running at symmetrical speeds. I was able to get barely any better throughput, and claim now that it's just hard to use unstable bitrates and bonding to do what you want.

Here is my solution for bonding: I used two vtun PPP tunnels in multilink mode (ask me more about the config if you like) and was able to achieve a bonded, faster connection with redundancy, which is all I wanted. However, the SBC I am using isn't able to push more than 50 MB/s burst to any one PCI device, and the PPP connection itself over vtun used up too much CPU on these poor boards as well. I got a nice 26 Mbps redundant link. I could easily see that getting close to double the TX rate of the two cards (the lowest card rate limits the entire connection) if I had better hardware.

> Next, is WDS present in any of the madwifi builds currently? [...]

Search the list.. we all want it.. we can't have it yet..

> Lastly, is it possible to get Turbo mode in 802.11a to work on
> user-defined channels? [...]

No clue ;)

> Thanks for any information you guys can give me! I appreciate it :)

NP

Shane Spencer