cache size becoming too big

Forum: Help
Creator: butchie55
Created: 2008-02-28
Updated: 2013-04-16
  • butchie55
    2008-02-28

    Our davfs2 shared folders are working great here (version 1.2.2 on Gentoo). The only thing is the cache folders are becoming quite big.
    We have 2 main davfs mount points, one is about 19 GB and 30000 files, the other one is only 1.2 GB and 4000 files.
    Access time is really good.

    But the ./davfs/cache folder is eating up disk space (more than 2.5 GB).

    Can I clean this cache manually, or should I keep it this way for performance reasons (in which case I will need to move it to another partition, as I didn't plan for it to grow so big)?

    Yours,

    Butchie

     
    • Werner Baumann
      2008-02-28

      Hello Butchie,

      Before deleting the cache, you should check why it is growing that big. There are also a few things to consider before removing it.

      Why is it that big:

      In davfs2 the cache size is limited. By default the limit is 50 MiByte, but you may have changed this in davfs2.conf with the option "cache_size". davfs2 periodically scans the cache. When the cache exceeds this limit, it will delete cached files, oldest access time first. But it will never delete cache files that are open, files that are changed and not yet saved back to the server, or files in the backup directory "lost+found".
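
      For example, the relevant line in davfs2.conf might look like this (the value is only an illustration, adjust it to your needs):

      # maximum cache size in MiByte before tidying starts
      cache_size 500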

      So, when your cache permanently exceeds the configured cache_size, there seem to be many files in the cache that cannot be removed. This indicates a problem with uploading files to the server.

      Please check the directory lost+found first.
      If there are only a few files, this may be due to conflicting write access to the server. But if there are many files, there is a serious problem.

      Maybe there are applications running that hold many files open?
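
      A quick way to check both (the mount point /mnt/dav is only an example):

      ls -l /mnt/dav/lost+found     # unsaved backup files, if any
      du -sh /mnt/dav/lost+found    # disk space they occupy
      lsof +D /mnt/dav              # applications that still hold files open below the mount point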

      To see whether cache maintenance is working, you may add the option "debug cache" to your davfs2.conf. This will cause a lot of entries in one of your system's log files. They should show "tidy_cache" running every 10 to 20 seconds and reporting the size of the cache. (You have to unmount and mount again for this option to take effect.)
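
      For example (paths and URL are only placeholders, yours may differ):

      # add to /etc/davfs2/davfs2.conf (or ~/.davfs2/davfs2.conf)
      debug cache

      # remount so the option takes effect, then watch the log
      umount /mnt/dav
      mount -t davfs http://your.server/dav /mnt/dav
      grep tidy /var/log/messages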

      Cache size:
      The appropriate size depends on the number of users of the file system and on how many different files they open. It is hard to tell in advance. If the same file is downloaded from the server again within minutes, even though it has not changed on the server, the cache is too small.

      Deleting the cache:

      You should only do it when there are no files in "lost+found", or when the owners of these files agree that they are no longer needed.

      You should only do it while the file system is unmounted. Deleting the cache while the file system is mounted may cause data loss.

      There is a directory for every mount point in ./davfs2/cache. You may delete the directory completely, or only part of the files. After mounting again, davfs2 will of course have to gradually rebuild the cache by downloading files that are opened but not found in the cache.
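
      A minimal sketch of the safe procedure (mount point and cache path are only examples):

      ls -l /mnt/dav/lost+found                        # while still mounted: check for unsaved files first
      umount /mnt/dav                                  # never delete the cache while mounted
      rm -rf /var/cache/davfs2/your.server+mnt-dav+root
      mount -t davfs http://your.server/dav /mnt/dav   # davfs2 rebuilds the cache as needed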

      Last remark:

      When I designed the cache, I had in mind that only a small group of users would access the file system. So if there are many users and a heavy load, it may turn out that cache maintenance is not optimal or even insufficient.

      Cheers
      Werner

       
    • poutinen
      2008-05-08

      Hi!

      I'm having similar problems. The cache just keeps growing.
      Environment: Core 2 Duo (laptop), almost vanilla Fedora 8 x86 (with all the latest yum updates!) thus running kernel 2.6.24.5-85.fc8, file system ext3

      davfs2-1.3.1, make install, using default /usr/local/etc/davfs2/davfs2.conf

      mounted WebDAV site is actually local Subversion version 1.4.4 repository (with Apache 2.2.8) mounted through http
      mount -t davfs http://localhost/repos/test/ /mnt/svn/

      Then simply copying something e.g.
      cp -r /mnt/svn/trunk /tmp/
      works nice and puts everything in /tmp AND in the davfs cache directory.

      None of these files are then opened with other programs.

      BUT nothing happens to the cache size; it keeps its size (which is >50MB, whatever amount is copied from /mnt/svn!), e.g. 196MB like here:

      du -s -m /var/cache/davfs2/localhost-repos-test+mnt-svn+root/
      196     /var/cache/davfs2/localhost-repos-test+mnt-svn+root/

      Then, even more weird: when one unmounts, the normal message appears:
      /sbin/umount.davfs: waiting while mount.davfs (pid 4456) synchronizes the cache .. OK

      Then when one mounts the same again:
      mount -t davfs http://localhost/repos/test/ /mnt/svn/

      /var/log/messages
      is filled with messages about orphaned files (and it takes ages to mount!)

      ...
      May  9 01:15:00 montreal mount.davfs: found orphaned file in cache:
      May  9 01:15:00 montreal mount.davfs:   /var/cache/davfs2/localhost-repos-test+mnt-svn+root/c180.dat-xtfBjV
      May  9 01:15:00 montreal mount.davfs: found orphaned file in cache:
      May  9 01:15:00 montreal mount.davfs:   /var/cache/davfs2/localhost-repos-test+mnt-svn+root/c2b1.dat-5bCSNl
      May  9 01:15:10 montreal mount.davfs: open files exceed max cache size by 114457 MiBytes
      May  9 01:16:20 montreal mount.davfs:last message repeated 7 times

      The actual size of the cache is still the same:
      du -s -m /var/cache/davfs2/localhost-repos-test+mnt-svn+root/
      196     /var/cache/davfs2/localhost-repos-test+mnt-svn+root/

      I tried raising the debug level by adding:
      debug cache

      That produces a lot more data, some here:

      May  9 01:28:53 montreal mount.davfs: open files exceed max cache size by 114457 MiBytes
      May  9 01:29:18 montreal mount.davfs: davfs2 1.3.1
      May  9 01:29:21 montreal mount.davfs: Initializing cache
      May  9 01:29:21 montreal mount.davfs: Alignment of dav_node: 8
      May  9 01:29:21 montreal mount.davfs: Checking cache directory
      May  9 01:29:21 montreal mount.davfs:   /var/cache/davfs2/localhost-repos-test+mnt-svn+root
      May  9 01:29:21 montreal mount.davfs: new node: (nil)->0x9b95110
      May  9 01:29:21 montreal mount.davfs: Reading stored cache data
      May  9 01:29:21 montreal mount.davfs: new node: 0x9b95110->0x9b9f910
      May  9 01:29:21 montreal mount.davfs: new node: 0x9b9f910->0x9b9fa18
      ...
      May  9 01:29:21 montreal mount.davfs: new node: 0x9b9f910->0x9bab2f8
      May  9 01:29:21 montreal mount.davfs: error parsing /var/cache/davfs2/localhost-repos-test+mnt-svn+root/index
      May  9 01:29:21 montreal mount.davfs: deleting node 0x9bab2f8
      ...
      May  9 01:29:21 montreal mount.davfs: deleting node 0x9b9f910
      May  9 01:29:21 montreal mount.davfs: new node: 0x9b95110->0x9b95178
      May  9 01:29:21 montreal mount.davfs: found orphaned file in cache:
      May  9 01:29:21 montreal mount.davfs:   /var/cache/davfs2/localhost-repos-test+mnt-svn+root/GraphicsCards.class-VouAvc
      May  9 01:29:21 montreal mount.davfs: new node: 0x9b95178->0x9b95238
      May  9 01:29:21 montreal mount.davfs: found orphaned file in cache:
      May  9 01:29:21 montreal mount.davfs:   /var/cache/davfs2/localhost-repos-test+mnt-svn+root/memmonitor.js-PiNbn7
      May  9 01:29:21 montreal mount.davfs: new node: 0x9b95178->0x9b95fb0
      ...
      May  9 01:34:00 montreal mount.davfs: added /repos/test/trunk/
      May  9 01:34:00 montreal mount.davfs: new node: 0x9b95110->0x9c595a0
      May  9 01:34:00 montreal mount.davfs: added /repos/test/davfs2-1.3.0/
      May  9 01:34:00 montreal mount.davfs: new node: 0x9b95110->0x9c59698
      May  9 01:34:00 montreal mount.davfs: added /repos/test/jdk1.6.0_06/
      May  9 01:34:00 montreal mount.davfs: directory updated: (nil)->0x9b95110
      May  9 01:34:00 montreal mount.davfs:   /repos/test/
      May  9 01:34:10 montreal mount.davfs: resize cache: 182 of 50 MiBytes used.
      May  9 01:34:10 montreal mount.davfs: open files exceed max cache size by 105620 MiBytes
      May  9 01:34:10 montreal mount.davfs:               105670 of 50 MiBytes used.
      May  9 01:34:10 montreal mount.davfs: tidy: 0 of 3541 nodes changed
      May  9 01:34:20 montreal mount.davfs: resize cache: 105670 of 50 MiBytes used.
      May  9 01:34:20 montreal mount.davfs: open files exceed max cache size by 105620 MiBytes
      May  9 01:34:20 montreal mount.davfs:               105670 of 50 MiBytes used.
      May  9 01:34:20 montreal mount.davfs: tidy: 0 of 3541 nodes changed

      Any suggestions on what to try?

        - Panu

       
    • Werner Baumann
      2008-05-09

      Hello Panu,

      Usually this happens when files cannot be stored back to the server. This leads to many files that cannot be removed from the cache, because they have been changed or are opened by some application. You will get files in the lost+found directory in this case.

      Please check:

      - Are there files in the lost+found directory of your mounted davfs2 file system?
        If so: how much disk space is occupied by these files?

      - Does uploading of files work? Copy some file to the davfs2 file system and check
        whether it is stored on the server, using a browser (or see the example below).
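
      For the second check, something like this should work from the command line instead of a browser (the file name is only a placeholder):

      cp /etc/hostname /mnt/svn/upload-test.txt
      # wait a little so davfs2 can upload the file, then ask the server directly:
      curl -I http://localhost/repos/test/upload-test.txt
      # "HTTP/1.1 200 OK" means the upload reached the server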

      There also seems to be a problem with the index file. Maybe you have already run out of disk space and davfs2 cannot save the index file in the cache? In this case: completely remove the cache directory; set the option "debug most"; mount the file system; copy some directory from the file system to /tmp; unmount the file system; send me the complete log entries from davfs2 as well as the index file from the new cache directory.

      Cheers
      Werner

       
    • poutinen
      2008-05-10

      Well, none of the files are open or changed after the cp command (checked with lsof).
      There's plenty of disk space (22 GB to be exact) and all of / is in one ext3 partition.
      And one can even do the mount read-only:

      mount -t davfs -o ro  http://localhost/repos/test/ /mnt/svn/

      And the same thing happens. Cache keeps growing ...

        - Panu

       
    • Werner Baumann
      2008-05-10

      I still believe an answer to my questions would be helpful.

      Cheers
      Werner

       
    • poutinen
      2008-05-11

      Sorry, I thought I had quickly answered the relevant ones. My mind was too focused on the caching part, since I don't even want to write to the server; I just want to provide the latest data there (a Subversion repository) as a simple file system area, and in practice even do the mount read-only.

      But I have now set up the Subversion repository as an autoversioning one. I had to change the following default setting to:
      use_locks 0

      Then I mounted it:
      mount -t davfs -o rw http://localhost/repos/test/ /mnt/svn/

      Copied the files:
      cp -r /mnt/svn/jdk1.6.0_06 /tmp/

      Unmounted it:
      umount /mnt/svn

      Remounted.
      Yes, /mnt/svn/lost+found now has lots of files:
      du -s -m /mnt/svn/lost+found
      185    /mnt/svn/lost+found

      Uploading works ok (with svn autoversioning).
      And there's plenty of disk space as said before.

      I can provide you with debug files if you want (where exactly should I send them?).
      Should I set the default cache size smaller and maybe run a simpler case?

        - Panu

       
    • Werner Baumann
      2008-05-11

      Hello Panu,

      As you did only read operations (cp) and did not change any files, there is no reason for davfs2 to put any files into lost+found. So the next step is to track down why this happens.

      As I understand, you did your test with a clean cache directory (or no cache directory at all), so this test does not interfere with older data. The name of the cache directory should be something like "/var/cache/davfs2/localhost-repos-test+mnt-svn+root".

      I would need:
      - a directory listing of this directory (long form "ls -l")
      - the index-file from this directory
      - the complete debug log (starting with mount and ending with umount)

      I hope this will show why davfs2 puts files in the lost+found directory for no reason (I think it is these files that let your cache grow, as davfs2 cannot remove them from the cache).

      This will be a lot of data. You may want to send it as a .tar.gz to my email address:
      werner.baumann@onlinehome.de
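
      A possible way to collect everything (directory name taken from your logs; the log file location may differ on your system):

      cd /var/cache/davfs2
      ls -l localhost-repos-test+mnt-svn+root > listing.txt
      grep mount.davfs /var/log/messages > davfs2.log
      tar czf davfs2-debug.tar.gz listing.txt davfs2.log \
          localhost-repos-test+mnt-svn+root/index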

      Cheers
      Werner

       
    • Werner Baumann
      2008-05-12

      Hello Panu,

      Thanks for the debugging information.

      State of investigation:
      mounting, copying and unmounting work fine.
      The error occurs when the file system is mounted again and davfs2 tries to parse the index file. It gets a parse error. As it cannot read the index file, there are lots of "orphaned files" (= files not referenced in the index). These files are put into the lost+found directory and cannot be removed by davfs2, causing the cache to grow.

      This error seems to happen only when the neon library uses libexpat as its XML parser. Everything works fine with libxml2 as the parser. I will try to track down the reason for this and fix the problem, but this may take some time.

      Meanwhile:
      If you have any chance to use a neon library linked against libxml2, you should try it.
      Maybe your distribution offers different versions (you need the dev package too).
      You may also get the sources from http://www.webdav.org/neon/. You will have to use configure options to force the use of libxml2, e.g.:
      "./configure --enable-shared --with-libxml2 --with-ssl=gnutls"
      After installation you may have to run ldconfig (see man ldconfig).

      When configuring davfs2, you may need to force the use of your neon library with "./configure --with-neon=/usr/local", assuming your neon package is in /usr/local/include and /usr/local/lib.
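
      Putting it together, the build might look roughly like this (version numbers and install prefix are only examples):

      tar xzf neon-0.28.2.tar.gz && cd neon-0.28.2
      ./configure --enable-shared --with-libxml2 --with-ssl=gnutls
      make && make install          # installs to /usr/local by default
      ldconfig

      cd ../davfs2-1.3.1
      ./configure --with-neon=/usr/local
      make && make install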

      Cheers
      Werner

       
    • poutinen
      2008-05-12

      Hi again!

      OK, I tried using libxml2 as instructed (http://www.webdav.org/neon/neon-0.28.2.tar.gz):

      [root@montreal ~]# ps -ef|grep davfs
      davfs2   29614     1  0 18:38 ?        00:00:00 /sbin/mount.davfs http://localhost/repos/test/ /mnt/svn/ -o rw
      [root@montreal ~]# lsof -p 29614
      COMMAND     PID   USER   FD   TYPE     DEVICE     SIZE    NODE NAME
      mount.dav 29614 davfs2  cwd    DIR        8,7     4096 3303105 /root
      mount.dav 29614 davfs2  rtd    DIR        8,7     4096       2 /
      mount.dav 29614 davfs2  txt    REG        8,7   304593 3571771 /usr/local/sbin/mount.davfs
      mount.dav 29614 davfs2  mem    REG        8,7   374915 3579644 /usr/local/lib/libneon.so.27.1.2
      mount.dav 29614 davfs2  mem    REG        8,7   109580 3321264 /lib/libnsl-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7    50768 3319564 /lib/libnss_files-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7     9304 3319572 /lib/libcom_err.so.2.1
      mount.dav 29614 davfs2  mem    REG        8,7    33972 3569187 /usr/lib/libkrb5support.so.0.1
      mount.dav 29614 davfs2  mem    REG        8,7   157404 3578538 /usr/lib/libk5crypto.so.3.1
      mount.dav 29614 davfs2  mem    REG        8,7  1262752 3579764 /usr/lib/libxml2.so.2.6.32
      mount.dav 29614 davfs2  mem    REG        8,7     8072 3321265 /lib/libkeyutils-1.2.so
      mount.dav 29614 davfs2  mem    REG        8,7   191036 3578540 /usr/lib/libgssapi_krb5.so.2.2
      mount.dav 29614 davfs2  mem    REG        8,7   128952 3321242 /lib/ld-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7  1692524 3321244 /lib/libc-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7   210324 3321247 /lib/libm-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7    20564 3321245 /lib/libdl-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7    74928 3321246 /lib/libz.so.1.2.3
      mount.dav 29614 davfs2  mem    REG        8,7    84772 3319639 /lib/libresolv-2.7.so
      mount.dav 29614 davfs2  mem    REG        8,7   105968 3321266 /lib/libselinux.so.1
      mount.dav 29614 davfs2  mem    REG        8,7   347832 3321277 /lib/libgcrypt.so.11.2.3
      mount.dav 29614 davfs2  mem    REG        8,7   509396 3567065 /usr/lib/libgnutls.so.13.3.0
      mount.dav 29614 davfs2  mem    REG        8,7   600824 3578539 /usr/lib/libkrb5.so.3.3
      mount.dav 29614 davfs2  mem    REG        8,7    13324 3321276 /lib/libgpg-error.so.0.3.1
      mount.dav 29614 davfs2  mem    REG        8,7 77962480 3565811 /usr/lib/locale/locale-archive
      mount.dav 29614 davfs2  mem    REG        8,7    25700 3616382 /usr/lib/gconv/gconv-modules.cache
      mount.dav 29614 davfs2    0r   CHR        1,3              190 /dev/null
      mount.dav 29614 davfs2    1w   CHR        1,3              190 /dev/null
      mount.dav 29614 davfs2    2w   CHR        1,3              190 /dev/null
      mount.dav 29614 davfs2    3u  unix 0xf083c8c0            42113 socket
      mount.dav 29614 davfs2    4r   CHR        1,9              952 /dev/urandom
      mount.dav 29614 davfs2    5u   CHR     10,229              591 /dev/fuse
      [root@montreal ~]#

      After mounting and copying some data with:
      cp -r /mnt/svn/jdk1.6.0_06/jre/ /tmp/
      the cache size stayed the same (no tidying every 10 seconds).

      When unmounted, size was:
      du -s -m /var/cache/davfs2/localhost-repos-test+mnt-svn+root/
      106     /var/cache/davfs2/localhost-repos-test+mnt-svn+root/

      But when one re-mounted, the orphan list got shorter and thus the amount of data in the cache got smaller too:
      du -s -m /var/cache/davfs2/localhost-repos-test+mnt-svn+root/
      14      /var/cache/davfs2/localhost-repos-test+mnt-svn+root/

        - Panu

       
    • Werner Baumann
      2008-05-14

      Hello Panu,

      Glad to hear that it is working now.
      Please have an occasional look at lost+found. As you are mounting read-only, davfs2 should never put any files in this directory.

      Just for information:
      Besides the problem parsing the index file using expat, there were two other bugs.

      - when davfs2 checked for orphaned files in the cache, it missed some of the existing file nodes. Because of this it classified some cache files as orphaned that belonged to existing file nodes.

      - when downloading a file for the first time, it miscalculated the new cache size. This way the calculated cache size could be much smaller than the real one and davfs2 did not call resize_cache.

      Cheers
      Werner

       
    • poutinen
      2008-05-15

      Hi Werner!

      I'm afraid there are still problems with the caching (with the 1.3.2 2nd test version!). No more orphans, but resizing the cache down doesn't always work. If, for example, I copy the data more than once:
      cp -r /mnt/svn/jdk1.6.0_06/ /tmp/j2
      cp -r /mnt/svn/jdk1.6.0_06/ /tmp/j3
      the amount of data in /var/cache/davfs2/ grows with every cp, and while mounted it does not resize down to the default 50 MB (judging by the cache directory itself), although the log file says it does.

      But if one unmounts and mounts again, some resizing is done. Second, if there is a lot of data it doesn't even resize down to 50 MB in one "remount"; but if one simply repeats the remount (= unmount + mount) enough times, it eventually gets down to the 50 MB level. Did I explain this clearly enough?

      [root@montreal mnt]# mount -t davfs http://localhost/repos/test /mnt/svn/
      Please enter the username to authenticate with server
      http://localhost/repos/test or hit enter for none.
      Username:
      [root@montreal mnt]# cp -r /mnt/svn/jdk1.6.0_06 /tmp/j2
      [root@montreal mnt]#
      [root@montreal mnt]# cp -r /mnt/svn/jdk1.6.0_06 /tmp/j3
      [root@montreal mnt]#
      [root@montreal mnt]# cp -r /mnt/svn/jdk1.6.0_06 /tmp/j4
      [root@montreal mnt]#
      [root@montreal mnt]# umount /mnt/svn
      /sbin/umount.davfs: waiting while mount.davfs (pid 6445) synchronizes the cache .. OK
      [root@montreal mnt]# mount -t davfs http://localhost/repos/test /mnt/svn/
      Please enter the username to authenticate with server
      http://localhost/repos/test or hit enter for none.
      Username:
      [root@montreal mnt]# umount /mnt/svn
      /sbin/umount.davfs: waiting while mount.davfs (pid 6497) synchronizes the cache .. OK
      [root@montreal mnt]# mount -t davfs http://localhost/repos/test /mnt/svn/
      Please enter the username to authenticate with server
      http://localhost/repos/test or hit enter for none.
      Username:
      [root@montreal mnt]#

      tail -f /var/log/debug.log|grep "cache-size"

      ...
      May 15 19:10:35 montreal mount.davfs: cache-size: 3 MiBytes.
      May 15 19:10:45 montreal mount.davfs: cache-size: 3 MiBytes.
      May 15 19:10:55 montreal mount.davfs: cache-size: 3 MiBytes.
      May 15 19:11:19 montreal mount.davfs: cache-size: 136 MiBytes.   <--- remount here!
      May 15 19:11:29 montreal mount.davfs: cache-size: 50 MiBytes.
      May 15 19:11:39 montreal mount.davfs: cache-size: 50 MiBytes.
      May 15 19:12:10 montreal mount.davfs: cache-size: 89 MiBytes.    <--- remount here!
      May 15 19:12:20 montreal mount.davfs: cache-size: 24 MiBytes.
      May 15 19:12:30 montreal mount.davfs: cache-size: 24 MiBytes.

      Same time:

      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      161    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      144    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]#
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      144    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]#
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      144    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]#
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      144    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]#
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      144    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      94    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]#
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      93    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]# du -s -m localhost-repos-test+mnt-svn+root/
      11    localhost-repos-test+mnt-svn+root/
      [root@montreal davfs2]#

      Anyway if you want I can send additional log data if needed!

      Thanks

        - Panu

       
    • Werner Baumann
      2008-05-15

      Hello Panu,

      I think the behaviour you observed is due to limitations in the design of the cache. These limitations are:

      - when davfs2 has to download files, it will first download - and store - the files, and check for cache_size later

      - it only checks for cache_size in intervals of 10 seconds (default value)

      Because of this, the cache can always temporarily (for 10 seconds) become bigger than the configured cache_size. If you download huge files (which is not the case in your example), cache_size can be exceeded by the size of the largest file. If you copy whole trees (as in your example), cache_size can be exceeded by the amount of data that can be downloaded within 10 seconds. As your server is localhost, this can be quite a lot.

      So, with the current cache design, davfs2 cannot guarantee that the configured cache size is never exceeded, sometimes dramatically. It only guarantees (if there isn't a bug) that the cache size will not permanently stay bigger than the configured value.

      You can indirectly influence this by changing the interval time. The interval for checking the cache is 10 seconds by default. It is the same as the configuration option "delay_upload". If you are in danger of running out of disk space, you may reduce this value. But if you set delay_upload to 0, the interval falls back to the default of 10 seconds.
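
      For example, to check the cache every 2 seconds instead of every 10 (the value is only an illustration):

      # in davfs2.conf; remount for the change to take effect
      delay_upload 2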

      When unmounting: if you want to make sure that the cache is resized before unmounting, you should wait for delay_upload seconds to give davfs2 a chance.

      Cheers
      Werner

      P.S.: There are surely ways to improve the design of the cache in this respect. But unless it is a really big problem, I would like to postpone this for some indefinite time.

       
    • poutinen
      2008-05-16

      Hi Werner!

      Sorry again, maybe I didn't explain the problem clearly enough :-)

      The current design of the cache growing over the "soft limit" (e.g. the default 50 MB) is not the problem here. The problem is that when the cache is resized (e.g. every 10 seconds), its size is miscalculated. The cache thinks its size is under 50 MB, but the real size on disk is not.

      When remounted, the cache calculates its correct size at start-up, but once again it miscalculates the size while resizing down. Thus, when one does enough remounts (nothing else!), it finally gets under e.g. 50 MB.

      So the above example:
      ...
      May 15 19:10:35 montreal mount.davfs: cache-size: 3 MiBytes.
      May 15 19:10:45 montreal mount.davfs: cache-size: 3 MiBytes.
      May 15 19:10:55 montreal mount.davfs: cache-size: 3 MiBytes.   <--- every 10s, NOT correct size
      May 15 19:11:19 montreal mount.davfs: cache-size: 136 MiBytes. <--- remount here, correct size!
      May 15 19:11:29 montreal mount.davfs: cache-size: 50 MiBytes.  <--- every 10s, NOT correct size
      May 15 19:11:39 montreal mount.davfs: cache-size: 50 MiBytes.
      May 15 19:12:10 montreal mount.davfs: cache-size: 89 MiBytes.  <--- remount here, correct size!
      May 15 19:12:20 montreal mount.davfs: cache-size: 24 MiBytes.  <--- finally the correct size!
      May 15 19:12:30 montreal mount.davfs: cache-size: 24 MiBytes.

        - Panu

       
    • Werner Baumann
      2008-05-16

      Hello Panu,

      Sorry for the misunderstanding.

      And yes, there is another bug. It's the same stupid error in iterating over the hash-table that caused the wrong "orphaned files". It is in resize_cache() too.

      Could you please just change one line?
      In src/cache.c, function resize_cache(void), line 2695:

      wrong:
                          cache_size += node->size;
                      }
                      node = node->next;              <<< bug
                  }
              }

      Please change into:
                          cache_size += node->size;
                      }
                      node = node->table_next;
                  }
              }

      Thanks for sticking with this. Nevertheless, I hope you will not find another bug this time.

      Cheers
      Werner

       
    • poutinen
      2008-05-16

      Hi Werner!

      Looks pretty good with the above patch!

      Just one thing.

      I tested it also with a huge svn repository we have, copied some data over, and noticed that the directory entries dir-* are not accounted for in the cache size. Do they even get "cleared" from the cache directory within e.g. this default 10-second period?

      But when one unmounts, all the dir-* entries are deleted.

      Is this simply a design decision?

      Thanks!

        - Panu

       
    • Werner Baumann
      2008-05-17

      Hello Panu!

      > Looks pretty good with the above patch!

      Big relief! This weekend I will release a bug-fix version.

      > Is this simply a design decision?

      Yes. But now I'm forced to write down the design decisions, some of which I made unconsciously.

      The disk cache is mainly intended to
      - hold open files
      - permanently save *files*, to be able to do a *conditional* GET request the next time the file is accessed.

      cache_size is merely meant to limit the disk space that is occupied permanently. Downloaded files that are not requested for a longer period should be deleted and not block disk space.

      Directory information is retrieved using PROPFIND requests. There is no conditional PROPFIND request in WebDAV, so permanent caching of directory information is not of much use. davfs2 has to request this information regularly to avoid stale information. The proper balance between avoiding unnecessary traffic and avoiding stale information is hard to find, and there is no best solution.
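
      To illustrate the difference: a conditional GET for a file looks roughly like this (URL and ETag are made up), and there is no equivalent for PROPFIND:

      curl -I -H 'If-None-Match: "1234-abcd"' http://your.server/dav/file.txt
      # the server answers "304 Not Modified" if the cached copy is still valid,
      # so the file body is not transferred again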

      When a davfs2 file system is unmounted, only the files are stored permanently, plus the directory information that is needed to reference these files. This information is in the index file.

      The dir-* files hold directory information in the format demanded by the kernel file system. It differs between fuse and coda. coda requires them to be regular files, as it will access them directly, without involvement of davfs2 (once they are opened). davfs2 creates these dir-* files from its node information in memory. It keeps them in the cache to reuse them if the directory information has not changed. But this is only a minor performance gain.

      Cheers
      Werner