From: Georg Lukas <georg@bo...> - 2007-11-30 14:15:47
Hello developers :)
I'm following up here because this is clearly a driver issue and not a user problem.
* Georg Lukas <georg@...> [2007-11-28 22:13]:
> the per-node IEEE80211_NODE_QOS flag is evaluated when sending packets,
> but the ieee80211_add_neighbor() function doesn't set it, it just copies
> the WME data from the beacon frame to the ni.
My quick'n'dirty patch for this issue (http://pastebin.ca/800541) has
led to a pretty strange problem, and I hope one of you can enlighten me:
With the patch (which simply marks all neighbors advertising WME in their
beacons as QoS capable), our own transmit performance is greatly reduced:
when sending packets in ad-hoc mode on the (default) BE (best effort)
queue, the packet rate drops to that of the BK (background) queue.
OTOH, when I instead patch just the ieee80211_classify() function to set
the QOS flag (leading to basically the same result, only later in the
code flow), the BE queue works as expected (the results are comparable
to an unpatched driver). The packets sent by the two hacks look exactly
the same on the air, and IEEE80211_NODE_QOS stays set after the first
packet.
Still, it seems that with the WME-IE-based patch, the priority is
somehow handled wrongly (and thus the wrong contention window is used).
I cannot find the place in the code where the difference arises.
Maybe someone can help me out with this?
|| http://op-co.de ++ GCS/CM d? s: a-- C+++ UL+++ !P L+++ E--- W++ ++
|| gpg: 0x962FD2DE || N++ o? K- w---() O M V? PS+ PE-- Y+ PGP++ t* ||
|| Ge0rG: euIRCnet || 5 X+ R tv b+(+++) DI+(+++) D+ G e* h! r* !y+ ||
++ IRCnet OFTC OPN ||________________________________________________||