QS_EventKBytesPerSecLimit, mod_qos 10.30, httpd 2.2.latest

  • Jeff Trawick

    Jeff Trawick - 2014-04-02

    I'm having trouble getting downloads slowed down for a certain class of users. I'm routing requests through mod_proxy to limit brigade size. (I verified that the brigades were 8000 bytes in size for my testcase.)

    Here's the configuration I'm using:

    LoadModule qos_module modules/mod_qos.so

    Listen 10101
    <VirtualHost *:10101>
    ProxyPass /
    SetEnvIf Request_URI "." TESTING
    QS_EventKBytesPerSecLimit TESTING 10000
    QS_EventPerSecLimit TESTING 999999
    QS_EventRequestLimit TESTING 999999
    LogLevel warn
    </VirtualHost>

    Listen 10100
    LogLevel warn

    worker MPM

    ThreadsPerChild 25
    MaxSpareThreads 25
    MinSpareThreads 1
    MaxClients 25
    MaxRequestsPerChild 0

    I'm requesting a 100MB file 15 times in a row from wget (1 at a time) to port 10101 (reverse proxy).

    About 9 requests go through very quickly; after that wget receives only about 10-13 KB/second (not measured exactly) and the downloads essentially never finish, even though they should proceed at about 10 MB/sec according to my configuration (IIUC :) ).

    The error log has this around the time the last completed request finishes:

    [Wed Apr 02 19:24:40 2014] [warn] [client] mod_qos(052): byte rate limit, rule: var={TESTING}(10000), kbytes/sec=81818, delay=359ms

    Is there anything obviously wrong with my configuration?

    Could it be that so much data was let through initially that, according to the calculations, the transfer is now being slowed down excessively to even it out?
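
    One back-of-the-envelope reading of that hypothesis (all numbers below are assumed, purely for illustration -- this is not mod_qos's actual algorithm): if a limiter works off a cumulative average, an initial fast burst forces a long near-stall before the average falls back under the limit.

```python
# Illustrative arithmetic only -- not mod_qos's actual algorithm.
limit_mb_s = 10     # QS_EventKBytesPerSecLimit 10000 is roughly 10 MB/s
burst_mb = 900      # ~9 x 100 MB requests that went through quickly
burst_s = 10.0      # assumed duration of that initial burst

# Time at which the cumulative average drops back to the limit, assuming
# (pessimistically) that almost nothing more is sent after the burst:
t_even = burst_mb / limit_mb_s      # 90.0 s since the start
stall_s = t_even - burst_s          # ~80 s of near-zero throughput
print(t_even, stall_s)              # 90.0 80.0
```

    If the averaging worked that way, it would match the observed symptom of downloads that essentially never finish.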

    How straightforward should I expect the throttling to be? E.g., how much slower than 10 MB/sec (with the above config) could it get due to mod_qos throttling?

    Is there something I can check at runtime? In qos_out_filter_delay I see

    (gdb) p *rctx
    $1 = {entry = 0x0, entry_cond = 0x0, event_entries = 0x7fda44006a58, evmsg = 0x7fda44006b98 "L;", is_vip = 0, maxpostcount = 0,
    event_kbytes_per_sec_block_rate = 359, cc_event_req_set = 0, cc_serialize_set = 0, body_window = 0x0}

    Thanks very much for any advice!

    (BTW, this seems related to the old thread at https://sourceforge.net/p/mod-qos/discussion/697421/thread/c78d7da5/?limit=25#6856)

  • Jeff Trawick

    Jeff Trawick - 2014-04-29

    Hello again,

    I have found a solution for my issue, which I have published at https://github.com/trawick/tweak-qos

    A basic description is in the README which GitHub displays on that page.

    The main commit:


    Additional fixes follow that one. Note that the commits in the repository prior to that one only fix gcc warnings in my own build environment or a minor issue with httpd 2.4; they don't fix any mod_qos bugs. (See
    https://github.com/trawick/tweak-qos/commits/master for a description of all commits.)

    As you can see from the directive name and the implementation, this is not integrated into the existing mod_qos implementation for bandwidth limiting. Any notes from Pascal on suitability or means of integration would be much appreciated.

    Thanks all!

  • Pascal Buchbinder

    Hi Jeff.
    I'll be happy to incorporate your proposal into the module to improve the current behaviour and to let everybody benefit from your enhancements.

    I think the main issue with the current/old implementation is the fact that the module measures only finished requests (downloading big files causes "isolated peaks" that disturb the measurement and make the module overreact).
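
    The "isolated peaks" effect can be sketched with a toy timeline (assumed numbers; this is an illustration, not the mod_qos code): one 100 MB download over 10 seconds, sampled once per second.

```python
# Toy comparison of the two measurement points (illustrative only).
total_bytes = 100_000_000   # one 100 MB download
duration = 10               # seconds, i.e. a true rate of 10 MB/s

# In-filter measurement credits bytes as they are sent: a steady 10 MB/s.
in_filter = [total_bytes // duration] * duration

# Logger-handler measurement credits everything at completion: nine silent
# seconds, then one 100 MB "peak" that makes the limiter overreact.
at_completion = [0] * (duration - 1) + [total_bytes]

print(max(in_filter), max(at_completion))   # 10000000 100000000
```

    Both timelines carry the same total bytes, but only the in-filter view reflects the true, steady rate.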

    Many thanks!

    Last edit: Pascal Buchbinder 2014-04-29
  • Jeff Trawick

    Jeff Trawick - 2014-04-30

    Thanks, Pascal!

  • Pascal Buchbinder

    Hi Jeff.
    Your input is very useful, and I would like to thank you for letting mod_qos benefit from your knowledge and findings.

    • Doing the calculations within the filter is the most important change. The old implementation did this in the logger handler, which was slow and error-prone, especially for very large files that took a long time to download while there was no other traffic from other clients.
    • Using millisecond (rather than nanosecond) resolution was not fine-grained enough.
    • Splitting large bucket brigades into smaller blocks solves the problem with local files (it now works in non-mod_proxy setups as well).
    • Running the filter after mod_deflate is a welcome improvement as well.
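
    The in-filter pacing described in these points can be sketched roughly as follows (a simplified model, not the mod_qos implementation; send_block is a hypothetical callback standing in for passing a brigade down the filter chain):

```python
import time

BLOCK = 8192                   # split large writes into ~8 KB blocks
LIMIT_BPS = 10_000 * 1024      # e.g. QS_EventKBytesPerSecLimit 10000

def paced_send(data, send_block):
    """Send data in BLOCK-sized pieces, sleeping so the cumulative rate
    stays at or below LIMIT_BPS (sub-millisecond clock resolution)."""
    start = time.monotonic()
    sent = 0
    for off in range(0, len(data), BLOCK):
        piece = data[off:off + BLOCK]
        send_block(piece)
        sent += len(piece)
        due = sent / LIMIT_BPS              # earliest time this much is allowed
        elapsed = time.monotonic() - start
        if due > elapsed:                   # ahead of schedule: wait
            time.sleep(due - elapsed)
```

    Because the deficit is recomputed from the cumulative totals on every block, a single oversleep is absorbed by the following blocks instead of accumulating, which is the self-correcting behaviour a closed-loop controller needs.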

    In a first phase, I'm going to adapt the existing code while still using a closed-loop control system that measures the actual transfer rate.
    A first test [1] (multiple clients downloading files of different sizes between 7k and 10M) looks very promising. It shows that the old 10.30 code had an accuracy of only 52%, while the new code achieves 97% (your proposal also achieves a high accuracy of 86%).
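
    As a rough illustration of what such an accuracy percentage could mean (an assumption on my part; the linked KBytesPerSecLimit.sh test script defines the actual metric), one simple formulation compares the observed rate against the configured limit:

```python
# Hypothetical accuracy metric (illustrative; see the linked test script
# for the real measurement): ratio of observed to configured rate, folded
# so that both overshooting and undershooting reduce the score.
def accuracy(observed_kbps, limit_kbps):
    return min(observed_kbps, limit_kbps) / max(observed_kbps, limit_kbps)

print(round(accuracy(9_700, 10_000), 2))    # 0.97 -> a "97%" style figure
print(round(accuracy(19_200, 10_000), 2))   # 0.52 -> overshoot penalised too
```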

    More tests will follow... and so will new mod_qos releases (I'm looking forward to releasing a new version with a first fix soon).

    Best regards, Pascal

    [1] http://mod-qos.cvs.sourceforge.net/viewvc/mod-qos/src/test/KBytesPerSecLimit.sh?revision=1.2&content-type=text%2Fplain

  • Jeff Trawick

    Jeff Trawick - 2014-05-03

    Thanks so much, Pascal. I appreciate your responsiveness, and I look forward to trying out the new code!

    Apart from the issue of accuracy (97% is definitely much better than 52% :) ), my use case involves very large files such as ISOs or movies, so getting the filter involved from the start was critical.

