Thread: [Madwifi-devel] rate control algorithms
From: Mathieu L. <Mat...@so...> - 2004-08-16 11:40:04
hi all,

In my previous mail (http://sourceforge.net/mailarchive/forum.php?thread_id=4943228&forum_id=33966) about rate control algorithms, I proposed a patch to implement a new algorithm for the madwifi driver. This algorithm is described in http://www-sop.inria.fr/rapports/sophia/RR-5208.html. Sam suggested privately that I look into a previously proposed patch by Nicholas Jones (http://sourceforge.net/tracker/index.php?func=detail&aid=961384&group_id=82936&atid=567755) and into the NetBSD rssadapt algorithm. I did this last week and implemented both algorithms in my simulation engine. The following describes the performance results I got.

1) The Jones algorithm is a variation on the original madwifi algorithm. I personally did not expect it to perform very well but, much to my surprise, it performed much better than the madwifi algorithm, although not as well as our proposed algorithm. The figure (http://www-sop.inria.fr/dream/personnel/Mathieu.Lacage/jones.ps) shows AMRR (our algorithm), Jones, and madwifi. It shows two stations moving closer to each other and exchanging packets while saturating the transmission medium. The goodput (application-level throughput) is plotted against distance. Readers who want to really understand this graph may want to read our paper linked above for background.

2) The algorithm implemented by David Young is inspired by a paper published a year ago (Javier del Prado Pavon and Sunghyun Choi, "Link Adaptation Strategy for IEEE 802.11 WLAN via Received Signal Strength Measurement", ICC'03, pp. 1108-1113, May 2003). This paper is, sadly, merely a description of a nice idea for a rate control algorithm and lacks precision on critical details of the proposed algorithm. It is thus impossible to implement the exact algorithm the authors used to publish their results (their answer by email is "confidential issues").
David did, however, implement something based on their ideas, and I performed numerous simulations with his algorithm. So far, I have been unable to produce sensible results with it: it shows great instability even when the RSS is highly stable. I corrected this by artificially introducing uniform RSS randomness; while this fixed the major instability problems, it did not fix the abysmal performance of the algorithm. The graph (http://www-sop.inria.fr/dream/personnel/Mathieu.Lacage/dyoung.ps) compares ARF/RBAR/AARF and DYOUNG.

As such, my suggestion is to go with our proposed algorithm, which does not use any RSSI information from the driver.

regards,
Mathieu

--
Mathieu Lacage <mat...@so...>
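A minimal sketch of the AARF idea referenced above, as this editor understands it from the paper's description: like ARF, the sender climbs to a higher rate after a run of consecutive successes, but when a probe at the new rate fails immediately, the required success run is doubled. The class structure, constants, and the simplification of dropping a rate on any failure (plain ARF waits for two consecutive failures) are illustrative assumptions, not the authors' exact code.

```python
class AARF:
    """Sketch of Adaptive ARF: double the success threshold on failed probes."""
    MIN_THRESHOLD = 10   # consecutive successes before probing a higher rate
    MAX_THRESHOLD = 50   # cap on the (doubled) threshold

    def __init__(self, n_rates):
        self.n_rates = n_rates
        self.rate = 0                # index into the bit-rate table
        self.success = 0             # consecutive successes at current rate
        self.threshold = self.MIN_THRESHOLD
        self.just_raised = False     # last packet was a probe at a new rate

    def report_success(self):
        self.success += 1
        self.just_raised = False
        if self.success >= self.threshold and self.rate + 1 < self.n_rates:
            self.rate += 1           # probe the next higher rate
            self.success = 0
            self.just_raised = True

    def report_failure(self):
        if self.just_raised:
            # The probe failed immediately: back off and, unlike plain ARF,
            # double the number of successes required before the next probe.
            self.threshold = min(2 * self.threshold, self.MAX_THRESHOLD)
            self.rate -= 1
        else:
            self.threshold = self.MIN_THRESHOLD
            self.rate = max(self.rate - 1, 0)
        self.success = 0
        self.just_raised = False
```

The doubling is what makes AARF cheaper than ARF on stable links: failed probes become exponentially rarer instead of recurring every ten packets.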
From: David Y. <dy...@po...> - 2004-08-17 05:38:44
Mathieu,

For what it's worth, rssadapt works about as I expect it to in my office. I have not put it to a scientific test, however.

Regarding the instability, I will look at your graph later and see if anything stands out. Have you used my code directly, or re-written it?

I will base an attempt at an improved link-adaptation algorithm on some publications by the MIT Roofnet research group---see <http://pdos.lcs.mit.edu/~rtm/>. They observe on both indoor and outdoor testbed networks that RSS is not a strong predictor of the best transmission rate. They also observe that when multipath delays are an integer multiple of the symbol duration, a RAKE receiver cannot counteract the inter-symbol interference. This means that on the same RF "path," a packet sent at 11Mbps may actually have a better chance of being received than a packet sent at 1Mbps.

[Some radios tell us the "Barker code lock quality"; does it let us indirectly measure the amount of multipath interference? IIUC, code lock is essentially the RMS difference between the received signal and the modem's "playback" of the bits it decoded. Suppose this tells us N+I. Then the difference between N+I(rate A) and N+I(rate B) may tell us something about the amount of multipath interference at each rate.]

I have two burning questions about the MIT study. First, I wonder how their results would change if they controlled for the transmission duration. Also, I wonder whether the results can be duplicated with a receiver other than the Intersil Prism. I hope that future research will provide the answers.

I will describe my idea for a new link-adaptor in broad outline. I will appreciate your feedback. My link-adaptor will use feedback from the transmit section of the MAC to compute the Expected Transmission Time (ETT---see the Roofnet literature) for unicast packets between 256 and 512, 512 and 1024, ..., 8192 and 16384 microseconds long (say), at each bit rate, for each receiver. That is, it will keep a table, ETT[receiver][duration][bit rate]. The MAC will provide a retry count for each transmission; the link-adaptor will use the tuple [receiver, bit rate, fragmentation threshold, retransmission count] to update the table.

The link-adaptor will choose the bit rate and fragmentation threshold for a data packet by a pseudo-random process. With highest probability, it will choose the bit rate and fragmentation threshold that optimize a packet's ETT; with lowest probability, it will choose the combination with the highest (worst) ETT; and it will choose all other rate/threshold combinations with probability inversely proportional to their ETT. In this way, the ETT table is kept up-to-date and adaptation to changing channel conditions is ensured. Since most packets are sent with optimal ETT, I am hopeful that the link-adaptor will achieve high throughput.

Dave

--
David Young             OJC Technologies
dy...@oj...      Urbana, IL * (217) 278-3933
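The table-driven chooser David outlines can be sketched as follows. The rate set, duration buckets, EWMA smoothing rule, and all names here are this editor's assumptions, not his implementation; his three probability cases are collapsed into a single inverse-ETT weighting, which preserves the ordering he describes (best ETT most likely, worst ETT least likely).

```python
import random

RATES = [1, 2, 5, 11]             # Mb/s (802.11b rate set, for concreteness)
BUCKETS = [256, 512, 1024, 2048]  # packet-duration buckets, microseconds

class EttAdaptor:
    """Sketch of an ETT[receiver][duration][bit rate] table with
    inverse-ETT pseudo-random rate selection."""

    def __init__(self):
        self.ett = {}   # receiver -> per-bucket, per-rate smoothed ETT (us)

    def _table(self, receiver):
        return self.ett.setdefault(
            receiver, [[1000.0] * len(RATES) for _ in BUCKETS])

    def update(self, receiver, bucket, rate_idx, airtime_us, retries):
        # Each retry roughly multiplies the on-air time; smooth with an EWMA
        # (the 0.75/0.25 split is an arbitrary illustrative choice).
        sample = airtime_us * (1 + retries)
        t = self._table(receiver)
        t[bucket][rate_idx] = 0.75 * t[bucket][rate_idx] + 0.25 * sample

    def pick_rate(self, receiver, bucket, rng=random):
        # Choose with probability inversely proportional to ETT, so the
        # best (lowest-ETT) rate is most likely while every rate keeps
        # being sampled and the table stays fresh.
        etts = self._table(receiver)[bucket]
        weights = [1.0 / e for e in etts]
        total = sum(weights)
        x = rng.random() * total
        for i, w in enumerate(weights):
            x -= w
            if x <= 0:
                return i
        return len(weights) - 1
```

With a few dozen status reports showing 11 Mb/s succeeding without retries and 1 Mb/s needing three retries, most picks land on the 11 Mb/s index while the slower rates are still probed occasionally.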
From: Mathieu L. <Mat...@so...> - 2004-08-17 06:24:52
On Tue, 2004-08-17 at 07:37, David Young wrote:
> Mathieu,
>
> For what it's worth, rssadapt works about as I expect it to in my office.
> I have not put it to a scientific test, however.

I can imagine this. I suspect something is going wrong in my simulations but I have not had enough time to thoroughly investigate these issues.

> Regarding the instability, I will look at your graph later and see if
> anything stands out. Have you used my code directly, or re-written it?

I have rewritten it. The attached files show the code. I have used the exact same fixed-point arithmetic you used (i.e., 8.8). The only difference in this version is the way I initialize the thresholds; look in the Threshold::Threshold constructor. This code also assumes that all packets are in the same size range, because it was too much pain to fix the overall codebase I am using to correctly propagate the size information for each packet to this class.

A quick tour of the code:
- there exists one instance of class Dyoung for each wifi card on a host
- there exists one instance of DyoungAlgo for each potential destination STA reachable from a given wifi card, associated to a Dyoung instance
- DyoungAlgo::reportRxSuccess is invoked once for each packet (data or other) successfully received on the wifi card
- DyoungAlgo::reportOk is invoked once for each data packet successfully transmitted (that is, we successfully received an ACK)
- DyoungAlgo::reportFailed is invoked once for each data packet whose transmission failed (that is, we did not receive an ACK)
- DyoungAlgo::reportFinalFailed is invoked once for each data packet whose retry count reached MaxRetryCount
- DyoungAlgo::pickrate is invoked once for each data packet which must be sent, to choose its rate

Please note that we always use RTS/CTS, although the results can be extended easily to the non-RTS/CTS case.

Finally, the instability issues I have noticed are mainly related to what happens when RSS_avg is highly stable. Here is a sample scenario:
1) RSS_avg stays at the same value for an extended period of time.
2) I assume that to use a rate, you need Th[rate] < RSS_avg. If Th[rate] = RSS_avg, you use rate-1.
3) The current rate sees both failures and successes. The failures make Th[rate] slowly converge towards RSS_avg until it is strictly equal to RSS_avg. The successes make Th[rate+1] slowly converge towards Th[rate], that is, RSS_avg. In the end, both thresholds are equal to RSS_avg, which means that the algorithm uses rate-1.

You could argue that Th[rate] will eventually decrease if we use rate-1 and have a lot of successes, which means we will switch back to rate. This does not happen when rate=1 (that is, the rate just above the lowest rate) because Th[0] = 0 (always) and because your code makes an explicit test against Th[rate-1] == 0 when updating the higher rate after a successful transmission. When rate is 1 initially and it switches to rate-1 (that is, 0), Th[1] = RSS_avg and we keep making it converge towards RSS_avg, which means it never changes its value. In that case, we are effectively stuck at rate 0, unable to switch to any other rate until RSS_avg changes. Arguably, this scenario is a bit far-fetched, but it does happen in my simulations when I don't randomize the RSSI.

> I will base an attempt at an improved link-adaptation algorithm
> on some publications by the MIT Roofnet research group---see
> <http://pdos.lcs.mit.edu/~rtm/>.

That is interesting. I will have a look at these.

> They observe on both indoor and
> outdoor testbed networks that RSS is not a strong predictor of the
> best transmission rate. They also observe that when multipath delays
> are an integer multiple of the symbol duration, a RAKE receiver cannot
> counteract the inter-symbol interference. This means that on the same
> RF "path," a packet sent at 11Mbps may actually have a better chance at
> being received than a packet sent at 1Mbps.
>
> [Some radios tell us the "Barker code lock quality"; does it let us
> indirectly measure the amount of multipath interference? IIUC, code lock
> is essentially the RMS difference between the received signal and the
> modem's "playback" of the bits it decoded. Suppose this tells us N+I.
> Then the difference between N+I(rate A) and N+I(rate B) may tell us
> something about the amount of multipath interference at each rate.]
>
> I have two burning questions about the MIT study. First, I wonder
> how their results would change if they controlled for the transmission
> duration. Also, I wonder whether the results can be duplicated with a
> different receiver than the Intersil Prism receiver. I hope that future
> research will provide the answers.
>
> I will describe my idea for a new link-adaptor in broad outline. I will

I am, of course, biased, but the AARF algorithm we describe in our paper offers excellent results (that is, very close to the optimum) with very little overhead and complexity. The relatively good measurements we obtained with our AMRR algorithm on madwifi cards are proof to me that the idea behind AARF (which is reused in AMRR) is sound and might really work well in practice. Alas, I must say that even though I can probably easily implement these algorithms in any driver upon request, I lack the experience and the setup to do useful and thorough testing.

> appreciate your feedback. My link-adaptor will use feedback from the
> transmit section of the MAC to compute the Expected Transmission Time
> (ETT---see the Roofnet literature) for unicast packets between 256
> and 512, 512 and 1024, ..., 8192 and 16384 microseconds long (say),
> at each bit rate, for each receiver. That is, it will keep a table,
> ETT[receiver][duration][bit rate]. The MAC will provide a retry count
> for each transmission; the link-adaptor will use the tuple [receiver, bit
> rate, fragmentation threshold, retransmission count] to update the table.
> The link-adaptor will choose the bit rate and fragmentation threshold
> for a data packet by a pseudo-random process. With highest probability,
> the link-adaptor will choose the bit rate and fragmentation threshold that
> optimizes a packet's ETT; with lowest probability, the link-adaptor will
> choose the bit rate and fragmentation threshold with highest (worst) ETT;
> and it will choose all other rate/threshold combinations with probability
> inversely proportional to their ETT. In this way, the ETT table is kept
> up-to-date and adaptation to changing channel conditions is ensured.
> Since most packets are sent with optimal ETT, I am hopeful that the
> link-adaptor will achieve high throughput.

Alas, I cannot really read the MIT references you point to and comment on this until mid-to-late September: I am supposed to complete another project in the meantime, but I will try to think about this and make comments as soon as possible.

thanks for your feedback,
Mathieu

--
Mathieu Lacage <mat...@so...>
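The stuck-threshold scenario Mathieu describes in this message can be reproduced with a toy model of the threshold dynamics. The update constants and the convergence-rate factor are invented; only the direction of each update (failures push Th[rate] up toward RSS_avg, successes pull Th[rate+1] down toward Th[rate]) follows his description.

```python
RSS_AVG = 100.0
th = [0.0, 90.0, 120.0]   # Th[0] pinned at 0, as in the rssadapt code

def current_rate():
    # Highest rate whose threshold is strictly below RSS_avg
    # (a tiny tolerance stands in for the fixed-point equality test).
    best = 0
    for i in range(len(th)):
        if th[i] < RSS_AVG - 1e-9:
            best = i
    return best

def step(success_at, fail_at):
    # Failure at a rate pushes its threshold up toward RSS_avg;
    # success pulls the next rate's threshold down toward this one's.
    # The 0.25 convergence factor is arbitrary.
    th[fail_at] += 0.25 * (RSS_AVG - th[fail_at])
    if success_at + 1 < len(th):
        th[success_at + 1] += 0.25 * (th[success_at] - th[success_at + 1])

# Initially rate 1 is usable (Th[1] = 90 < RSS_avg).
# Now mix successes and failures at rate 1 while RSS_avg never moves:
for _ in range(200):
    step(success_at=1, fail_at=1)

# Both Th[1] and Th[2] have converged onto RSS_avg, so the station is
# stuck at rate 0 even though rate 1 was partly working.
```

Because Th[0] never moves, nothing ever pulls Th[1] back below RSS_avg once it has converged, which is the trap described above.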
From: Mathieu L. <Mat...@so...> - 2004-08-17 22:55:10
Hi David,

I should not have done this (my boss would curse me ;-) but I took a few hours to go through some of the Roofnet papers and your email. Comments are inserted below.

On Tue, 2004-08-17 at 07:37, David Young wrote:
[snip]
> I will base an attempt at an improved link-adaptation algorithm
> on some publications by the MIT Roofnet research group---see
> <http://pdos.lcs.mit.edu/~rtm/>. They observe on both indoor and
> outdoor testbed networks that RSS is not a strong predictor of the

The SIGCOMM paper you refer to is indeed interesting and provides some useful information. I am not personally sure that section 8, which analyzes the impact of interference from multiple sources on the operation of Roofnet, is reasonable. For example, if you can receive 50 packets from interference sources when you don't transmit yourself, the number of collisions is likely to skyrocket very quickly: each of these 50 packets will interfere with your own packets, and the retransmission attempts for those packets (unless they are mainly broadcast packets, which are never retransmitted) and for your own packets will interfere with each other. Although I do not claim that the conclusions of section 8 are wrong, it seems to me that further work would be required to prove them. Specifically, the (maybe impossible) experiment where these secondary interference sources are shut off would provide a clear answer to that question. An alternative would be to simulate such a collision scenario with a simple model of the 802.11 MAC and see if the resulting loss patterns are compatible with the observed ones. Another interesting line of analysis would be to look at the proportion of broadcast packets among the foreign packets and see if there is a correlation between the number of lost packets per second and that proportion.

I must say that, at one point, we did simulate the behavior of many rate control algorithms in ad-hoc networks where the interference rate is much higher than in the infrastructure networks we used to publish our results. Although it is hard to extend these simulation results to the physical world, the main source of SNR variation was node interference (we used free-space path loss, an SNIR interference model, and a Jakes channel model). Specifically, the classic assumption that the SNR is constant over one packet transmission was barely valid, and the SNR varied a lot from one packet transmission to the next. In this context, algorithms such as ARF, which require numerous packet transmissions before adapting their transmission rates, perform extremely poorly. Of course, our proposed AARF was pretty bad too.

> best transmission rate. They also observe that when multipath delays
> are an integer multiple of the symbol duration, a RAKE receiver cannot
> counteract the inter-symbol interference. This means that on the same
> RF "path," a packet sent at 11Mbps may actually have a better chance at
> being received than a packet sent at 1Mbps.

Yes. However, I would like to know whether they used the standard MAC retransmission mechanism. Namely, I wonder if "successful reception" means "a given packet was transmitted successfully, possibly after numerous retransmissions".

> [Some radios tell us the "Barker code lock quality"; does it let us
> indirectly measure the amount of multipath interference? IIUC, code lock
> is essentially the RMS difference between the received signal and the
> modem's "playback" of the bits it decoded. Suppose this tells us N+I.
> Then the difference between N+I(rate A) and N+I(rate B) may tell us
> something about the amount of multipath interference at each rate.]

That sounds right. However, I lack the digital communications background required to comment on this: although I learned this at university, I was never interested in it (i.e., I was a real slacker there :) and never really got back to learn it again for good. Bad me.

[snip]
> I will describe my idea for a new link-adaptor in broad outline. I will
> appreciate your feedback. My link-adaptor will use feedback from the
> transmit section of the MAC to compute the Expected Transmission Time

ok.

> (ETT---see the Roofnet literature) for unicast packets between 256
> and 512, 512 and 1024, ..., 8192 and 16384 microseconds long (say),
> at each bit rate, for each receiver. That is, it will keep a table,
> ETT[receiver][duration][bit rate]. The MAC will provide a retry count

ok. I assume you meant "number of bits" when you wrote "duration" above.

> for each transmission; the link-adaptor will use the tuple [receiver, bit
> rate, fragmentation threshold, retransmission count] to update the table.

The exact rules you will use to update the table are critical to the performance and behavior of your algorithm. The simulations I have run on your rssadapt algorithm show that slightly changing these rules has a major impact on the resulting throughput.

> The link-adaptor will choose the bit rate and fragmentation threshold
> for a data packet by a pseudo-random process. With highest probability,
> the link-adaptor will choose the bit rate and fragmentation threshold that
> optimizes a packet's ETT; with lowest probability, the link-adaptor will
> choose the bit rate and fragmentation threshold with highest (worst) ETT;

I did not see any other reference to the fragmentation threshold before: it is not clear to me how you could integrate it, and how you could balance the choice of the bit rate against the fragmentation threshold.

> and it will choose all other rate/threshold combinations with probability
> inversely proportional to their ETT. In this way, the ETT table is kept

It might be interesting to see how other probability functions (not necessarily 1/ETT, but maybe 1/ETT^2 or an exponential function) perform.

> up-to-date and adaptation to changing channel conditions is ensured.
> Since most packets are sent with optimal ETT, I am hopeful that the
> link-adaptor will achieve high throughput.

That might work, but critical details are missing and, although I lack experience in this domain, I think they are very important. Specifically, there is no discussion of:
- the rules used to update the table
- how the fragmentation threshold enters the picture

I hope you found these comments useful.

regards,
Mathieu

--
Mathieu Lacage <mat...@so...>
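The alternative weighting functions Mathieu suggests can be compared on a made-up ETT row. The ETT values and the exponential temperature are hypothetical; the point is only that sharper weightings concentrate more probability on the best rate, at the cost of probing the other rates less often.

```python
import math

etts = [900.0, 400.0, 250.0, 600.0]  # hypothetical per-rate ETT, microseconds

def probs(weight):
    # Normalize a weighting function over the ETT row into probabilities.
    w = [weight(e) for e in etts]
    s = sum(w)
    return [x / s for x in w]

p_inv  = probs(lambda e: 1.0 / e)             # David's 1/ETT proposal
p_inv2 = probs(lambda e: 1.0 / e ** 2)        # sharper: 1/ETT^2
p_exp  = probs(lambda e: math.exp(-e / 100))  # exponential, arbitrary scale

# All three schemes make the lowest-ETT rate (index 2) the most likely
# pick, but the exponential scheme gives it the largest share, so it
# exploits more and explores less.
```

Whether the extra exploitation is worth the slower reaction to channel changes is exactly the kind of trade-off that would need simulation, as the message above argues.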
From: David Y. <dy...@po...> - 2004-08-18 17:06:53
On Tue, Aug 17, 2004 at 02:16:44PM +0200, Mathieu Lacage wrote:
> Hi David,
>
> I should not have done this (my boss would curse me ;-) but I took a
> few hours to go through some of the Roofnet papers and your email.
> Comments are inserted below.
>
> On Tue, 2004-08-17 at 07:37, David Young wrote:
> > I will describe my idea for a new link-adaptor in broad outline. I will
> > appreciate your feedback. My link-adaptor will use feedback from the
> > transmit section of the MAC to compute the Expected Transmission Time
>
> ok.
>
> > (ETT---see the Roofnet literature) for unicast packets between 256
> > and 512, 512 and 1024, ..., 8192 and 16384 microseconds long (say),
> > at each bit rate, for each receiver. That is, it will keep a table,
> > ETT[receiver][duration][bit rate]. The MAC will provide a retry count
>
> ok. I assume you meant "number of bits" when you wrote "duration" above.

I mean microseconds of duration; however, the implementation may store the number of bits (or bytes). I strongly suspect that at some receivers, long packets will virtually always be destroyed by interference from microwave ovens, cordless phones, and hidden 802.11 nodes, while short packets will get through. For this reason, I want the rate-adaptor to explore the packet-duration space to find the duration that optimizes ETT. The optimal duration determines the fragmentation threshold at each bit rate. I have a hunch that the optimal packet duration for a receiver will be independent of bit rate, but I will find out with some operating experience.

> The exact rules you will use to update the table are critical to the
> performance and behavior of your algorithm. The simulations I have run
> on your rssadapt algorithm show that slightly changing these rules has a
> major impact on the resulting throughput.

[snip]

> It might be interesting to try to see how other probability functions
> (not necessarily 1/ETT but maybe 1/ETT^2 or an exponential function)
> perform.

I agree.

Dave

--
David Young             OJC Technologies
dy...@oj...      Urbana, IL * (217) 278-3933
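If, as David conjectures, the optimal packet duration is independent of bit rate, the per-rate fragmentation threshold follows from it by simple arithmetic: threshold_bytes = duration_us * rate_Mbps / 8. This ignores PLCP and MAC header overhead for simplicity; the numbers below are illustrative.

```python
def frag_threshold(duration_us, rate_mbps):
    # Bytes that fit in the duration budget at the given rate
    # (payload only; PLCP/MAC overhead deliberately ignored here).
    return int(duration_us * rate_mbps / 8)

# A 2048 us budget allows whole 1500-byte frames at 11 Mb/s
# but forces heavy fragmentation at 1 Mb/s:
# frag_threshold(2048, 11) -> 2816
# frag_threshold(2048, 1)  -> 256
```

This illustrates why the adaptor must balance the two knobs jointly: dropping to a robust low rate can push the frame over the duration budget and force fragmentation of its own.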
From: John B. <jb...@am...> - 2004-08-17 07:57:36
Hi David,

I have some comments, since I've also been looking at rate control for the past few months. We have observed similar results regarding s2n ratios and delivery rates on our indoor network (which runs Atheros cards and a madwifi-based driver) as we did for our SIGCOMM paper (which used prism2 cards outdoors): we have trouble predicting delivery rate based on s2n ratios.

I have also been specifically looking at an ETT-based rate control, and have done quite a bit of testing of it on an indoor 40-node network. So far, I have been comparing auto-rate fallback (arf), the rate control in the current madwifi driver (madwifi), as well as a probing ETT-based algorithm (probe). What we have been seeing is that our indoor testbed seems to be quite different from our outdoor one - links indoors are more bimodal, especially at lower bitrates.

Here is a cdf of the broadcast delivery rates across all the links in the system for a bunch of different bitrates with 1500-byte packets:
http://pdos.lcs.mit.edu/~jbicket/link_distr.1500.cdf.eps

Here is the cdf of how the rate control algorithms did on the links:
http://pdos.lcs.mit.edu/~jbicket/rate.g.cdf.eps

It seems as though arf consistently performs poorly - it either sticks with bitrates that require many retries (but don't fail often) or it is too sensitive to failures. What is interesting is that the simple rate control currently implemented in the madwifi driver seems to do very well. In fact, the ETT-based method seems to consistently perform better only on high-throughput links. I believe this is because the delivery rates at the higher bitrates, even for high-throughput links, tend to be lossy - the link_distr figure showed a pretty steady drop-off in link quality at the highest bitrate.

From a few examples in our outdoor network, it also appears that the ETT-based algorithm performs much better outdoors than these graphs from our indoor network would suggest. Our hunch is that this is because outdoors all the bitrates are more lossy, and picking the bitrate isn't straightforward unless you take the ETT of a packet into account.

--John Bicket

David Young [dy...@po...] wrote:
>Mathieu,
>
>For what it's worth, rssadapt works about as I expect it to in my office.
>I have not put it to a scientific test, however.
[snip]
From: Mathieu L. <Mat...@so...> - 2004-08-18 06:00:30
Hi John,

On Tue, 2004-08-17 at 09:47, John Bicket wrote:
[snip]
> Here is the cdf of how the rate control algorithms did on the links:
> http://pdos.lcs.mit.edu/~jbicket/rate.g.cdf.eps
> It seems as though arf consistently performs poorly - it either sticks
> with bitrates that require many retries (but don't fail often) or it
> is too sensitive to failures.

I wonder how you implemented it. Did you merely add code in ath_rx_tasklet to update the ARF counters and in ath_tx_start to pick the rate? If you did this and you are using saturation experiments for these results, then I know what is wrong :)

What happens is that under saturation, the tx queue of the madwifi hardware fills quickly (I often measured 80 to 100 items). When you update the ARF counters from ath_tx_tasklet (or ath_tx_processq in the latest version of the madwifi driver), you update them with information from packets which are at the head of the queue. When you pick the rate in ath_tx_start, you configure the rate for packets at the tail of the queue. Needless to say, this triggers wide oscillations in your algorithm, which make it consistently pick the wrong rate.

I wrote a "fix" for this which is, however, rather yucky: it walks the hardware transmission queue to update the rate chosen for each queued packet, rather than just setting the rate for the packet at the tail of the queue. Another fix is to allow only one packet at a time in the queue. This is the solution Sam tends towards, because it has the nice side-effect that you can use it to implement multi-rate retry for older Atheros hardware. I have not had time to implement this, though.

Of course, the madwifi algorithm does not really suffer from this problem (it has other problems) because it runs from a periodic timer whose period is longer than the time usually needed to empty the transmission queue.

I hope this helps,
regards,
Mathieu

--
Mathieu Lacage <mat...@so...>
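The feedback lag Mathieu describes can be illustrated with a toy queue: the rate decision happens at enqueue time (tail), but tx status arrives for the head of the queue, so with a deep queue every status report describes a decision made many packets earlier. The queue depth matches his 80-100 measurement; everything else is illustrative.

```python
from collections import deque

QUEUE_DEPTH = 80   # items observed in the madwifi hardware tx queue
queue = deque()
lags = []          # age, in decisions, of each piece of feedback

for seq in range(200):
    queue.append(seq)           # rate picked for this packet now, at the tail
    if len(queue) > QUEUE_DEPTH:
        head = queue.popleft()  # tx status reported for the head packet
        lags.append(seq - head) # feedback describes a decision this old

# Under saturation, every status report is QUEUE_DEPTH decisions stale,
# so counters updated on completion always describe long-gone channel
# conditions - the source of the oscillations described above.
```

Both fixes in the message attack this lag directly: rewriting the rates of already-queued descriptors shrinks the effective lag to zero, and capping the queue at one packet removes it entirely.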
From: Sam L. <sa...@er...> - 2004-08-16 15:57:45
I'm very interested in getting a reasonable rate control algorithm into the driver. But, as I've said to you previously, a prerequisite to changing the driver is to break the rate control code out into a separate loadable module. The last patch you sent me nearly did this. Perhaps someone else can follow through on the details.

Sam