#65 Infinite access() loop checking cache

Status: closed-fixed
Owner: nobody
Priority: 5
Updated: 2006-11-07
Created: 2006-10-27
Private: No

Thanks for davfs, it's cool.

I use jungledisk with davfs2 v1.1.2 and coda on (a
modified) FC5 system with Linux kernel 2.6.18.
Mount.davfs frequently gets into an infinite loop in
which (according to strace) it does nothing but call
access(..., F_OK) on files in its cache over and over.
It appears to traverse a long list of cache files and
then repeat.

On closer inspection, it looks like the list it's
iterating over changes over time. Looking at my strace
output, a particular file appears on line 1, then again
on line 2193, then again on line 4244, then 6389, then
8346. The deltas in each case are 2192, 2051, 2145,
1957. Perhaps this reflects the (slow) rate at which
jungledisk uploads drain out of davfs2's cache? In any
case, even if this apparent infinite loop is not
actually infinite (because eventually the entire cache
will drain), it still shouldn't race repeatedly through
all these access() calls, pinning my CPU at 100%.
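
In case it helps reproduce this, a command roughly like the following
(assuming a single mount.davfs process; the log file name is arbitrary)
captures just the access() calls:

    strace -f -e trace=access -p "$(pidof mount.davfs)" -o davfs-access.log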

I'm a software engineer and am available to help debug
this with a little guidance.

Discussion

  • Werner Baumann - 2006-10-28

    Hello Bob,

    thanks for the report.

    There is indeed a loop that runs regularly: the function
    dav_tidy_cache() in cache.c. It is called at intervals set by the
    configuration option idle_time (default 10 seconds). Its purpose is to
    save back dirty files, release locks that are no longer needed, refresh
    locks, and remove older files from the cache if the cache exceeds its
    size limit.

    You may increase the idle_time.
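
    For example, in davfs2.conf (system-wide /etc/davfs2/davfs2.conf or
    per-user ~/.davfs2/davfs2.conf), assuming the usual one "option value"
    pair per line; the value 60 is only an illustration:

    # run the tidying loop every 60 seconds instead of every 10
    idle_time 60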

    But your report also shows that this does not work as effectively as
    it should: it should only access the disc when necessary. The reason
    for these unnecessary access() calls is some (too) paranoid programming
    in the inline function is_cached(). But before I change this function I
    will have to review all of its uses.
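
    For illustration only (a rough sketch; the real is_cached() and the
    dav_node structure may differ in detail), the difference between the
    paranoid check and a cheap in-memory check is roughly this:

    #include <unistd.h>

    /* minimal stand-in for the real dav_node; only the field used here */
    typedef struct {
        char *cache_path;
    } dav_node;

    /* "paranoid" variant: hits the disc with access() for every node on
       every tidy pass, which is what shows up in the strace output */
    static inline int is_cached_paranoid(const dav_node *node)
    {
        return node->cache_path != NULL
               && access(node->cache_path, F_OK) == 0;
    }

    /* cheap variant, as in the replacements below: in-memory state only */
    static inline int is_cached_cheap(const dav_node *node)
    {
        return node->cache_path != NULL;
    }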

    There is a quick way to improve the efficiency. There are two places
    in dav_tidy_cache() where this paranoid checking is really useless and
    can be removed without risk:

    In function dav_tidy_cache(), line 578:

    if (node != NULL && is_cached(node))
        cache_size += node->size;

    Please replace it with:

    if (node != NULL && node->cache_path != NULL)
        cache_size += node->size;

    And in function resize_cache(), called by dav_tidy_cache(),
    line 2412:

    if (is_cached(node) && !is_open(node) && !is_dirty(node)
        && !is_created(node) && !is_backup(node)
        && (least_recent == NULL || node->atime < least_recent->atime))

    Replace it with:

    if (node->cache_path != NULL && !is_open(node) && !is_dirty(node)
        && !is_created(node) && !is_backup(node)
        && (least_recent == NULL || node->atime < least_recent->atime))

    I will also have to review the code that evaluates the size of the
    cache. Meanwhile you might check whether your cache size (configuration
    option cache_size, default 50 MiByte) is too small, so that the function
    resize_cache() is called too often.
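
    For example (again assuming the usual davfs2.conf syntax; the value is
    only an illustration):

    # allow the cache to grow to 200 MiByte before resize_cache() trims it
    cache_size 200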

    Greetings
    Werner

     
  • Bob Glickstein - 2006-10-30

    I applied that patch as soon as you sent it and have been
    using davfs heavily since then. It definitely clears up the
    problem. Thanks!

     
  • Werner Baumann - 2006-11-07

    The patch is now integrated into version 1.1.3.

    Greetings
    Werner

     
  • Werner Baumann - 2006-11-07
    • status: open --> closed-fixed
     
