

#2 Contention in Mac80211


In Mac80211::handleLowerMsg(cMessage *msg):
    if (contention->isScheduled()) {
        error("Gaack! I am changing the IFS on an ongoing contention");

It seems to me that it should be possible for a node in contention (e.g. counting down its backoff counter) to receive a message: it freezes its backoff, receives the message, and then resumes the backoff. This line prevents that behaviour. Should there not be a call to suspendContention() here, followed by reception of the message (without currentIFS = DIFS;, assuming suspendContention() takes care of that in this case)?

Also, in
void Mac80211::suspendContention() {
Why does suspendContention() assert that no contention timer is active? Does suspendContention() not "freeze" the contention? I am afraid I do not completely understand, but I am getting "Gaack!!" errors while simulating a scenario with collisions - which should be perfectly acceptable to an 802.11 MAC...


  • Galactix

    I compared the code to Mac80211 in MFw 2.0p3 and noticed an overhaul. Indeed, the PHY changed, so it makes sense that the channel sensing (modelled with a ChannelSenseRequest message called "contention") changed as well. In MFw, though, suspendContention() worked properly.

    I cannot find where contention is scheduled; apparently it is scheduled, because I get the "Gaack!!" error. I would like to unschedule it (cancelEvent) in the suspendContention() method, but it is not a self-message, so Mac80211 is not allowed to call cancelEvent(contention).

    It seems to me the ChannelSenseRequest should have a "freeze" method which unschedules it, so that sensing can resume when the backoff continues after an intermediate reception. At the moment the behaviour does not conform to the standard, as a node cannot receive during backoff.

  • Galactix

    A workaround is simply commenting out the assert(!contention->isScheduled()) and the error("Gaack") lines. It works, although I am a bit hesitant to blatantly modify code I do not fully understand. Could it be that the Mac80211 in MiXiM 1.0 is still in a transitional state from MFw to MiXiM, and still a bit beta-ish?

  • Karl Wessel

    The contention is an "UNTIL_BUSY" ChannelSenseRequest, which means the Decider has to send it back to the MAC layer as soon as the channel turns busy. So if a frame is received during contention, the request is sent back to the MAC automatically. That is why handleEndContentionTimer() in Mac80211 checks whether the channel was actually idle; if not, it calls suspendContention(), which then freezes the contention. Since this is the only place where suspendContention() is called, the contention message normally cannot be scheduled anymore at this point, which is the reason for the assert(!contention->isScheduled()); line.

    Which settings did you use? Maybe I can reproduce the "Gaack" and see what caused it.

  • Galactix

    Thanks for the reaction! I went back to an unmodified Mac80211. The Gaack in question is the one in the handleLowerMsg() function.

    I am expecting it has something to do with some modifications I made to the PHY in Consts80211.h to model 802.11p; these include:
    const double BITRATES_80211[] = {

    I have interpolated the SNR values in omnetpp.ini for these bitrates (I actually only use 3 and 6 Mbps).

    const double PHY_HEADER_LENGTH=40;
    const double HEADER_WITHOUT_PREAMBLE=8;
    const double BANDWIDTH=10E+6;

    (Basically, 802.11p's PLCP header and preamble are shorter in microseconds, and BANDWIDTH is halved, effectively halving the BITRATES given above.)

    const const_simtime_t ST = 16E-6;
    const const_simtime_t SIFS = 32E-6;

    and of course this also influences DIFS.

    But when I switch back to the original Consts80211.h, the problem remains.

    My network layer implements a 'beaconing' function. 10 nodes transmit 25 beacons per second, at 40 ms intervals. This gives 4 ms timeslots, more than enough to fit a 400-byte message at 3 Mbps. As a base case, nodes start with an initial delay of nodeId*4ms, so we have an ideal TDMA scenario and no collisions. All is fine.

    Now, when I initialise nodes with a randomly chosen initial delay: uniform(0,1)*(1.0/lambda);

    Here lambda is the beaconing frequency (25 Hz). Now messages very likely overlap in time and collisions occur. This is when the Gaack messages show up.

    I have 160 bytes of MAC (802.11p) headers and 0 network headers; for abstraction I only provide a 400-byte payload at the network level.

    I imagine there must be a timing calculation going on somewhere under the hood... I noticed some hardcoded 802.11b settings (such as the carrier frequency and half the bandwidth in Mac80211::createSignal), so maybe there are more in modules I have not yet checked, and a discrepancy between the hardcoded b values and the p values is causing a conflict in calculating when the medium should be idle?

    I really want to help sort this out, as I think it will improve the quality of the simulator. I have been thinking about how Mac80211 could be made more generic, so it could be initialised as an a, b, g, p, n or whatever MAC. To get it working 'quickly' I just changed Mac80211's b values into p values and (as mentioned in previous comments) disabled the Gaack message - not the nicest option :)

    If it is any help I could of course also send my entire simulation source.

  • Karl Wessel

    Can you please send me the omnetpp.ini with which the "gaacks" occur? That way I can reproduce them and find out what caused them.