

#25 /tmp/, filesize limits?


I plan to use davfs2 to handle large files,
but I have problems with files larger than 2 GB.

Is this a known limitation? Can it be fixed?
Is this a coda issue or a davfs2 issue?

I have also read about problems with running out of disk
space in /tmp. Is this still the case? Is the location of
the temporary directory easily configurable?

My current situation:


Uploading files larger than 2 GB:
they are truncated to 2 GB,
and the client (the shell at the mount point) reports
"Filesize limit exceeded."
There is nothing of interest in /var/log/messages.


Stat-ting files larger than 2 GB:
the size is truncated to 2 GB:
-rw------- 1 2147483647 ~/davmnt2/2gigs

although the WebDAV response itself reported the correct size.


Downloading files larger than 2 GB:
these also fail or are truncated.


  • Werner Baumann


    There are currently two quite different versions of davfs2:

    - the older davfs2-0.x.x, which does almost no caching,
    at the cost of causing more traffic;

    - the newer davfs2-1.x.x, which maintains a persistent
    disk cache and a directory cache in memory; it reduces
    traffic and increases speed at the cost of more disk and
    memory usage.

    But both versions have one thing in common:
    files that are opened will always be downloaded in full (no
    partial GET) and kept on disk for as long as they are open.
    There are currently no plans to change this.


    davfs2-0.x.x stores cached files in /tmp. When a file is
    closed (and written back to the server), the cached copy in
    /tmp is deleted.

    davfs2-1.x.x uses a cache directory in the user's home
    directory; if mounted by root, the cache is in
    /var/cache/davfs2. Both locations can be configured in the
    davfs2.conf files (system-wide and per-user). Files that
    are closed are not deleted but kept in the cache. The next
    time a file is opened, it is downloaded from the server
    only if it has changed remotely (much as browsers do). The
    size of this cache is limited by configuration.

    But in both versions there must always be enough free disk
    space to hold copies of all open files.
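    For reference, a sketch of how the cache location and size
    limit might be set in a per-user davfs2.conf. The option
    names `cache_dir` and `cache_size` are those of later
    davfs2 1.x releases; check the davfs2.conf man page of the
    installed version before relying on them:

```
# ~/.davfs2/davfs2.conf (sketch; verify option names for your version)
cache_dir  /var/tmp/davfs2-cache   # keep the cache off a small /tmp
cache_size 4096                    # approximate cache limit, in MiByte
```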

    Large file support:

    Most development on davfs2 was done on the basis of the
    neon library version 0.2.4. That library did not support
    large files, and so neither does davfs2.

    Newer versions of neon support large files. I am working on
    LFS for the next release of davfs2-1.x.x (in fact I compiled
    it with LFS yesterday, but it is not tested; I would be glad
    to send you a prerelease package for testing if you are
    interested).

    I have not yet bothered with LFS for davfs2-0.x.x, but once
    it works with davfs2-1.x.x it should not be that difficult.

    I do not know for sure whether coda supports large files,
    but it probably does (it uses 64 bits for communicating the
    file size). The next release of davfs2-1.x.x will also be
    able to use the fuse kernel file system, so the kernel will
    pose no LFS problem.


  • Werner Baumann


    The new version, davfs2 1.1.1, comes with support for large
    files. Please test it and report whether it works for you.


  • Werner Baumann

    • status: open --> closed