Thread: [Madwifi-devel] Bad Sensitivity in 11g-Mode
From: Roman S. <rsk...@aa...> - 2007-09-21 10:00:58
|
On a Wistron CM9-GP Mini PCI card I did some sensitivity measurements using a calibrated Rohde & Schwarz RF test system.

I tested three different configurations:

a) Host: IBM R40 ThinkPad, Windows XP, Driver 4.12.38 (1/04/2005)
b) madwifi 0.9.3 svn r2100 with Linux 2.4.20
c) madwifi 0.9.3.2

Results:

a) On the Windows XP system the card shows excellent receiver sensitivity, far better than -90dBm at the low rates in both 11b and 11g modes.

b) Running svn r2100, the sensitivity in 11b mode is also excellent: I have measured down to -98dBm at 8% PER using 1Mbit/s.
But the behaviour at all 11g OFDM rates is really strange: the receiver seems to refuse operation at signal levels below -77dBm.
Characteristic of all WLAN receivers is that the packet error rate rises when the RF level is about 3dB to 5dB above the sensitivity limit; the typical limit is better than -90dBm for the lowest 11g OFDM rate, 6Mbit/s.
The interesting thing is that in this configuration the receiver suddenly loses about 70% of all packets at -77dBm RF level without any CRC errors in the received packets, i.e. while really having a PER of 0%, at all 11g rates.
(54Mbit/s is excluded from this observation, because it reaches its typical sensitivity limit at about -75dBm +-2dB anyway.)
And there is a hysteresis in the dependence on the RF level: once the receiver is in the mode of losing >70% of all packets, it won't stop losing packets until the level is increased by at least 10dB (to -67dBm) for at least one second. The receiver then starts to work properly again.
The level can then be slowly decreased again to -76dBm and nothing changes; all packets are received.
When -77dBm is reached, the receiver loses >70% of all packets again. This procedure can be repeated as often as you want.
That means this limit is not caused by physical reception problems; the receiver has not reached its limit by far!
It seems to be a configuration problem of the driver initialising the card, and it is obviously dependent only on the RSSI level, not on the quality of the radio reception...

c) We tried 0.9.3.2 and the result was the same as in b), with one significant difference: the hysteresis behaviour was gone. There was one hard threshold: below -77dBm RF level, >70% of packets were still lost at 0% PER, but after increasing the RF level by about 1-2dB, all packets were received again.

For the low 11g rates, the user will see the expected coverage range reduced to about 1/10 of what is usual with this technology.

Any idea how we can get better receive sensitivity?

Regards,
Roman
|
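[Editor's note: the hysteresis Roman measured can be summarised as a two-state machine. The C sketch below is a toy model of the *reported behaviour* only, using the thresholds from his measurements (-77dBm dropout, recovery at -67dBm); the type and function names are illustrative, not driver code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the hysteresis reported for madwifi 0.9.3 svn r2100:
 * below -77 dBm the receiver drops >70% of packets; it only recovers
 * once the level is raised to at least -67 dBm, after which it keeps
 * working down to -76 dBm. Purely illustrative. */
typedef struct {
    bool degraded;              /* currently losing >70% of packets */
} rx_model;

/* Returns true while the modelled receiver is working normally. */
static bool rx_ok(rx_model *m, int level_dbm)
{
    if (m->degraded) {
        if (level_dbm >= -67)   /* needs +10 dB to recover */
            m->degraded = false;
    } else {
        if (level_dbm <= -77)   /* hard dropout threshold going down */
            m->degraded = true;
    }
    return !m->degraded;
}
```

Walking the level down past -77dBm, part-way back up, and finally to -67dBm reproduces the repeatable loop Roman describes.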
From: Derek S. <de...@in...> - 2007-09-21 22:20:37
|
Hi,

I have written the minstrel rate control algorithm, which reports the percentage of lost packets at each of the available data rates. It shows results similar to yours: on very strong links (receiver and transmitter adjacent) all rates work perfectly. As the link distance increases, the OFDM rates drop off and fail; all that keeps working is the 11b rates.

Admittedly, the OFDM rates are a different encoding technology and will be more adversely affected by noise/interference. Maybe. But there are many 11g cards that do heaps better than Atheros cards, as your report below illustrates.

=============================================================
> Any idea how we can get better receive sensitivity ?

Yes. In a recent letter to this list from Nick Kossifidis, the issue of calibration was raised. Currently, the ath_calibrate() method is called by a timer (every 30 seconds if the last calibration worked, or every second if the last calibration failed). Nick was suggesting the need to drive the ath_calibrate() method from the rate control algorithm.

On rereading your letter, Nick's letter, and some offline correspondence with Nick, I am wondering if there is a need for the ath_calibrate() method to be run on changing to an OFDM rate, or to be run more frequently, or something. Nick was also suggesting removing the noise floor calculation code from the ath_calibrate() method, so that the two functions are executed at quite different times.

To resolve these questions, I would really appreciate some input from those more knowledgeable about the inner workings of the HAL, or suggested experiments.

Derek.

On Fri, 21 Sep 2007, Roman Skrobotz wrote:
> [quoted message trimmed]

--
Derek Smithies Ph.D.
IndraNet Technologies Ltd.
Email: de...@in...
ph +64 3 365 6485
Web: http://www.indranet-technologies.com/ |
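[Editor's note: the adaptive calibration interval Derek describes can be sketched as follows. The constant names and the helper `next_calinterval()` are hypothetical, for illustration only; this is not the actual madwifi timer code, just the policy it implements.]

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the timer policy described above: re-run calibration every
 * 30 seconds while it succeeds, and retry every second after a failure.
 * Names are illustrative, not taken from the madwifi source. */
enum {
    CAL_INTERVAL_OK_SECS   = 30, /* last calibration succeeded */
    CAL_INTERVAL_FAIL_SECS = 1,  /* last calibration failed */
};

/* Hypothetical helper: pick the delay before the next calibration run. */
static int next_calinterval(bool last_cal_ok)
{
    return last_cal_ok ? CAL_INTERVAL_OK_SECS : CAL_INTERVAL_FAIL_SECS;
}
```

The open question in the thread is whether this purely time-driven policy should instead be triggered by the rate control algorithm, e.g. when switching to an OFDM rate.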
From: Scott R. <sco...@gm...> - 2007-10-04 02:22:29
|
Hello all,

The issue of bad sensitivity in OFDM modes is one that we have come across in our networks. For many links, the theory says the link should work, and in 11b mode it does. However, when we run these links in 11a, we see very high packet loss (>70%, as was mentioned). This is not due to ACK timeouts; we simply do not see the packets, and this has been verified using monitor mode captures. The packets we do see have sensible SNRs, in line with our theoretical link models.

Unfortunately we have no idea where to start with debugging this problem. However, we are extremely keen to talk to people who have some knowledge of the workings of the chipsets, as well as others who also experience this problem. We are happy to provide real-world environments to measure and to test ideas in.

The issue of calibration has come up. My only concern is that we are not really clear about what calibration actually achieves, how often it should be done, etc. I plan to write an email to Sam Leffler asking for some first-hand knowledge of the calibration procedure, what its effects are, and when it should occur, in order to try and solve this. I'll post any results from that discussion here.

In summary, we also see the problems mentioned with OFDM sensitivity and are very keen to spend some time trying to solve them. Maybe others who see this problem can chime in, even if it is only to add another data point.

Cheers,

--
Scott Raynel
WAND Network Research Group
Department of Computer Science
University of Waikato
New Zealand |
From: Tjalling H. <tja...@ti...> - 2007-11-06 10:10:10
|
Scott Raynel <scottraynel <at> gmail.com> writes:
> [quoted message trimmed]

Please take a look at ticket #705 (http://madwifi.org/ticket/705) on the MadWifi wiki. I've updated it with a patch that could fix the sensitivity problems described above. Please test it out and post your findings.

Regards,

Tjalling Hattink |
From: Scott R. <sco...@gm...> - 2007-11-06 10:15:32
|
Good stuff - pity that the new HAL doesn't support this. I'll see what we can do about testing it on an older HAL, though this might be tricky. Thanks for the effort!

On 6/11/2007, at 11:06 PM, Tjalling Hattink wrote:
> Please take a look at ticket #705 (http://madwifi.org/ticket/705) on
> the MadWifi wiki. I've updated it with a patch that could fix the
> sensitivity problems below. Please test it out and post your findings.

--
Scott Raynel
WAND Network Research Group
Department of Computer Science
University of Waikato
New Zealand |