#91 davfs2 1.3.3 crashes SUSE Linux 9.3 -- memory leak?



I've installed davfs2 1.3.3 with Neon 2.6 from the most
recent build. With the default configuration file, I run an ls -lRt dump of the mounted filesystem and it eventually causes the whole machine to hang.

1) Has anyone seen this issue before?
2) Is it a Neon vs. FUSE filesystem issue
(i.e. should we use one rather than the other)?
3) Could the default 50 MB cache file limit be
the cause (i.e. we raised it to 2 GB, among other changes)?

None of the filesystems filled up, but we do notice that all of the memory on the machine gets consumed. Note: the virtual and resident process sizes seem to grow normally. The overall memory on the machine is still above 3 GB at the "lock-up".
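To separate the client process's growth from overall memory use, it can help to sample the davfs2 process directly. A minimal sketch, assuming the process is named mount.davfs (the usual name for davfs2 1.3.x) and the share is mounted:

```shell
# Sample the virtual and resident size (in KiB) of the davfs2 client
# every 10 seconds until the process exits.
pid=$(pidof mount.davfs 2>/dev/null)
if [ -n "$pid" ]; then
    while kill -0 "$pid" 2>/dev/null; do
        ps -o vsz=,rss= -p "$pid"
        sleep 10
    done
fi
```

Logging these numbers alongside the ls -lRt run shows whether the client process itself is growing or the memory is going elsewhere (e.g. the kernel page cache).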


  • Werner Baumann

    Werner Baumann - 2008-12-23
    • assigned_to: nobody --> wbaumann
  • Werner Baumann

    Werner Baumann - 2008-12-23

    Do you think this is standard behaviour of davfs2 and nobody noticed?

    If not: how about some information about your case that would allow us to see what may cause this non-standard behaviour?

    First *guess*: there is some loop in your file system. Maybe you mounted a local server which includes the davfs2 cache?

    Anyway: even if davfs2 is eating up all memory, you should always be able to kill the mount.davfs process with SIGTERM or SIGKILL from a virtual terminal.
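    That escalation path can be scripted. A sketch, assuming the client process is named mount.davfs; the 5-second grace period is an arbitrary choice:

    ```shell
    # Send SIGTERM first so davfs2 can shut down cleanly; fall back to
    # SIGKILL only if the process is still alive a few seconds later.
    pid=$(pidof mount.davfs 2>/dev/null)
    if [ -n "$pid" ]; then
        kill -TERM $pid
        sleep 5
        kill -0 $pid 2>/dev/null && kill -KILL $pid
    fi
    ```

    Note that $pid is left unquoted in the kill lines on purpose, so that multiple mounts (multiple PIDs from pidof) are each signalled.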


  • Nobody/Anonymous

    We are not mounting a filesystem that includes the cache location (i.e. no loop).

    We do have a large number of files. Currently we have the cache size set to 500 MB
    and table_size set to 2^20.

    What would you suggest we tune?
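    For reference, those knobs live in /etc/davfs2/davfs2.conf. A sketch of the settings described above (the values are the ones from this thread, not recommendations; check davfs2.conf(5) for your version):

    ```
    # /etc/davfs2/davfs2.conf (excerpt)
    cache_size 500        # cache size in MiB (default is 50)
    table_size 1048576    # hash table entries, 2^20 (default is 1024)
    ```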

    It's currently mounting a WebDAV repository as a filesystem for Apache to use. I've recreated
    the problem by simply doing an ls -lRt on the filesystem and it eventually causes the server to hang.

    Anything else you might need to know?

  • Werner Baumann

    Werner Baumann - 2008-12-23

    "Anything else you might need to know ?"
    Everything that might affect davfs2:
    - your overall configuration
    - kind of connection
    - number of files
    - nesting of directories
    - ...

    Some more concrete questions:
    - does ls -ltR work when issued on the server side?
    - what is "a large number of files"? A number please.
    - does ls -l for just one directory work?
    - when you increase the number of recursions, dirs and files step by step; at what point does the problem occur?
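    The step-by-step narrowing can be scripted. A sketch, assuming the share is mounted at /mnt/dav (substitute your own mount point):

    ```shell
    # List one directory level deeper on each pass; the pass at which
    # memory use explodes brackets the problematic depth or file count.
    for depth in 1 2 3 4 5; do
        echo "== depth $depth =="
        find /mnt/dav -maxdepth "$depth" -exec ls -ld {} + | wc -l
    done
    ```

    Watching the mount.davfs process size between passes shows roughly how much memory each additional level of the tree costs.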

    Hint: davfs2 needs about 200 to 500 bytes of memory for each node (file or dir).
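    That figure makes the footprint easy to estimate. A back-of-the-envelope sketch; the node count of 1,000,000 is an assumed example, not a number from this thread:

    ```shell
    # Memory estimate from the 200-500 bytes-per-node figure above.
    nodes=1000000
    echo "low estimate:  $(( nodes * 200 / 1024 / 1024 )) MiB"   # 190 MiB
    echo "high estimate: $(( nodes * 500 / 1024 / 1024 )) MiB"   # 476 MiB
    ```

    So a tree of a million nodes alone would account for a few hundred MiB of client memory, before the cache is counted.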

    With ls -R davfs2 has to do a HTTP request for every single directory. On slow connections it will have to repeat many of them (because they might have become stale while retrieving the other ones).

    davfs2 surely was not intended to be the backend of a web server? What's the intention of this anyway (why not use a proxy)? Please note: WebDAV is not a terribly efficient protocol when used as a file system.

    "and it eventually causes the server to hang."
    To avoid misunderstandings: it is the client (the system that davfs2 runs on) that hangs? Or is it the WebDAV server?


  • Werner Baumann

    Werner Baumann - 2009-05-03
    • status: open --> closed-out-of-date
