

Nothing saved until end of operation?

  • Paul

    I have been using 0.28 happily on Sarge for my nightly backup -- moving 200 or so 5MB files to the DAV mount, which took a few hours.

    Then suddenly I started getting "400 Bad Request" errors intermittently, so I upgraded to v1.1.1. The errors are gone, but davfs's behavior is different. The mv (or cp) command speeds through all the files in just a minute, but the actual upload doesn't happen until after mv has terminated. There is no feedback. If the upload is interrupted, none of the 200 files ever arrive on the DAV mount. If I want to umount immediately after the mv, that command hangs for a few hours with no explanation.

    Am I overlooking some option that will make davfs write one file at a time, like a normal filesystem?

    • Werner Baumann


      This is intended to be a feature.

      davfs2 v1 does the real upload in the background, so the file system calls will not hang on slow connections.

      This behaviour is comparable to what your OS does when you use slow media, like a floppy. It will do the file operations in memory (not on the floppy) and will only save to the floppy when it gets bored. Of course, unmounting a floppy will hang until everything is saved to the floppy.

      Concerning notification:
      As davfs2 is a file system (and it runs as a daemon), the only interaction with the user is via the file system interface. File systems on Unix-like systems do not know of any message like "Wait until I have saved the cached files back to the real medium".

      As storing 1 GByte of data to the server takes some hours on your connection, davfs2 will need this time, and it correctly will not finish before this is done.


      P.S.: While cached files are waiting for the real upload, you can safely use them, e.g. open them for writing. There will be no data corruption in this case.
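      The write-behind idea can be sketched like this (an illustrative Python sketch, not davfs2's actual code; davfs2 is written in C and caches files on disk rather than in an in-memory queue):

```python
import queue
import threading

class Uploader:
    """Write-behind sketch: writes return at once, a worker uploads later."""

    def __init__(self, upload):
        # `upload` stands in for the slow HTTP PUT to the server.
        self.todo = queue.Queue()
        self.done = []
        self._upload = upload
        threading.Thread(target=self._worker, daemon=True).start()

    def write(self, name, data):
        self.todo.put((name, data))   # returns immediately, like mv/cp did

    def _worker(self):
        while True:
            name, data = self.todo.get()
            self._upload(name, data)  # the hours-long part
            self.done.append(name)
            self.todo.task_done()

    def sync(self):
        self.todo.join()              # what unmounting effectively waits on
```

      Here mv speeds through `write()` calls in a minute, while `sync()` blocks until the worker has drained the queue, which is why the umount appears to hang.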

      • Paul

        Thanks! But is there a way to make v1.1.1 act more like v0.28? The remote DAV server is slightly flaky. It would be very useful for each mv'd file to be written to the server before the next one begins. If the connection is slow, I'd like the filesystem call to be slow too.

        As it is, I keep losing data -- mv unlinks the source files before they're written to the destination server, and if the connection drops the files are gone.

        (Alternately, is there a solution to the "400 Bad Request" issue with 0.28? I am able to reproduce that problem testing with )

    • Paul

      Also, after the one-minute mv, I umount the DAV fs, which seems to work fine, but I can't remount because the .pid file exists -- does that get deleted when the upload is done?

      Meanwhile, it's been 2 hours since the mv terminated, and still none of the files have appeared on the backup server -- where are they right now? Is davfs2 trying to move all 200 files as a unit, rather than one at a time, so the first file won't appear until the whole 1GB has been uploaded?


    • Werner Baumann

      > But is there a way to make v1.1.1 act more like v0.28?

      > is there a solution to the "400 Bad Request" issue with 0.28?
      I would have to know whether it is a bad request from davfs2 or a bad response from the server, and what exactly the badness is. The best way to learn this would be to record the HTTP traffic. You can do this with e.g. Ethereal. You might remove most of the HTTP body (the data) from GET and PUT requests (but not from PROPFIND, LOCK and UNLOCK) and send it to me.

      Data Loss?
      There is one important difference between the caching that your OS does and the caching of davfs2: when the file system is unmounted, davfs2 will in *no way* be notified by the kernel file system. It has to detect unmounting by other means, and by the time it can detect this, unmounting is already finished.
      This is not a big problem for davfs2. It will not terminate, but will do everything that is necessary and possible to synchronize data with the server. But there is no chance for davfs2 to block the unmounting or to notify the user. (It is a file system and can only respond to upcalls it gets from the kernel.)

      If you transfer a huge amount of data, you must avoid killing davfs2; e.g. you must not shut down the system before all mount.davfs processes have terminated of their own will. As a good citizen, davfs2 will not refuse when the system is going to kill it.

      Unreliable connections:
      These will not cause data loss with davfs2 1.1.x. When davfs2 cannot upload the files (for whatever reason) it will store a backup in the lost+found directory.

      davfs2 crashes or is killed:
      Even in this case there usually will be no data loss. The files will still be in the cache. But you must *not* mount the file system, because at startup davfs2 will synchronize with the server and delete cached files that do not exist on the server or in the index file (which it could not write because you killed it).
      Instead you may search the cache directory for missing files and save them. Now you can mount again and copy these files into the davfs2 mount. davfs2 will take another try at uploading the files.
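      A minimal sketch of such a rescue, assuming the cache is a directory tree of plain files (the real cache location varies, often /var/cache/davfs2 or ~/.davfs2/cache, and davfs2 also keeps an index file that this sketch ignores):

```python
import os
import shutil

def rescue_cache(cache_dir, rescue_dir):
    """Copy every regular file out of `cache_dir` into `rescue_dir`.

    Run this *before* remounting, so the startup synchronization cannot
    delete the cached copies. Timestamps are preserved for comparison.
    """
    os.makedirs(rescue_dir, exist_ok=True)
    saved = []
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            shutil.copy2(os.path.join(root, name),
                         os.path.join(rescue_dir, name))
            saved.append(name)
    return saved
```

      After mounting again, the rescued files can be copied into the davfs2 mount so it takes another try at uploading them.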

      > Is davfs2 trying to move all 200 files as a unit, rather than one at a time?
      one at a time. Simple HTTP PUT.

      > after the one-minute mv, I umount the DAV fs
      As you know davfs2 will need two or three hours, so you should not do this. But davfs2 will not terminate until it has finished its tasks. As long as the mount.davfs process is running, the pid file will exist. You may use ps.
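      On Linux the ps check can also be scripted by reading /proc directly (a sketch; adjust for non-Linux systems):

```python
import os

def running(names):
    """Return the subset of process names in `names` currently running,
    by scanning /proc/<pid>/comm (Linux only)."""
    found = set()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(os.path.join("/proc", pid, "comm")) as f:
                comm = f.read().strip()
        except OSError:
            continue  # process exited while we were looking
        if comm in names:
            found.add(comm)
    return found

# e.g. only power off once running({"mount.davfs"}) returns an empty set
```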


      BTW: Isn't there a better backup strategy than uploading 1 GByte over a slow DSL-connection every night?

      • Paul

        >BTW: Isn't there a better backup strategy than uploading 1 GByte over a slow DSL-connection every night?

        Yes, much better. But this seems like a good way to test davfs2!

        Okay, I reinstalled 0.28 from the .deb and ran tethereal while I tried to mount -- here is the entirety of the output:

          217.922345 -> HTTP OPTIONS /dav/ HTTP/1.1
          238.128603 -> HTTP OPTIONS /dav/ HTTP/1.1
          218.014355 -> HTTP HTTP/1.1 400 Bad Request (text/html)
          238.220613 -> HTTP HTTP/1.1 400 Bad Request (text/html)

        Mounting the same mountpoint running v1.1.1, I get:

          553.231567 -> HTTP PROPFIND /dav/ HTTP/1.1
          553.232886 -> HTTP Continuation or non-HTTP traffic
          553.348045 -> HTTP HTTP/1.1 401 Authorization Required
          553.349491 -> HTTP PROPFIND /dav/ HTTP/1.1
          553.349535 -> HTTP Continuation or non-HTTP traffic
          553.483511 -> HTTP HTTP/1.1 207 Multi-Status (text/html)

    • Werner Baumann

      I believe I know the reason for the davfs2 0.2.8 problem. It is an additional 'Connection' header that the server does not accept. But removing it might cause trouble with older versions of Apache. So I will have to wait for the opinion of another developer before (probably) changing it.

      > I have been using 0.28 happily on Sarge ...
      > Then suddenly I started getting "400 Bad Request" errors ...

      Any idea what was changed in between to cause this change in the behaviour of davfs2 0.2.8?
      I assume you did not change davfs2 and you also did not change the version of the neon library.
      - Do you know of any changes on the server side?
      - Did you use a proxy and don't use it any more?
      - You probably don't use the IP address instead of the hostname? (This can cause a lot of trouble.)

      davfs2 1.1.1:
      The problem is that uploads fail. These uploads are done using HEAD and PUT requests, so I would need to see traffic with HEAD and PUT requests.
      What you sent only contains the request and response lines. But there are a lot of additional headers in each request and response that I need to know. Also the first and the last line of the data body might be useful.


      • Paul

        >Any idea what has been changed in between to cause this change in the behaviour of davfs2 0.2.8?

        I didn't change anything at all on my end. I assume it was a change on the server side, although I have no trouble reaching it with cadaver.

        >davfs2 1.1.1:
        >I would need to see traffic with HEAD and PUT requests.

        Here you go. I can create a zero-byte file with touch, but I can't upload anything more than that.

        Here's me creating a file on the DAV filesystem with touch, successfully:

        365.625928 -> HTTP HEAD /dav/testfile-touch HTTP/1.1
        365.723006 -> HTTP HEAD /dav/testfile-touch HTTP/1.1
        365.841925 -> HTTP HTTP/1.1 404 Not Found
        372.355129 -> HTTP HEAD /dav/testfile-touch HTTP/1.1
        372.474557 -> HTTP HTTP/1.1 404 Not Found
        372.476066 -> HTTP PUT /dav/testfile-touch HTTP/1.1
        372.735897 -> HTTP HTTP/1.1 201 Created
        384.838611 -> HTTP HEAD /dav/testfile-touch HTTP/1.1
        384.975884 -> HTTP HTTP/1.1 200 OK

        Here's me attempting to mv a 2-byte file to the DAV filesystem:

        726.958166 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        727.054513 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        727.176816 -> HTTP HTTP/1.1 404 Not Found
        737.183932 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        737.301018 -> HTTP HTTP/1.1 404 Not Found
        737.302501 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        777.858730 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        777.981260 -> HTTP HTTP/1.1 404 Not Found
        777.982752 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        818.543165 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        818.670105 -> HTTP HTTP/1.1 404 Not Found
        818.671456 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        859.231217 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        859.356087 -> HTTP HTTP/1.1 404 Not Found
        859.357810 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        899.936076 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        900.058113 -> HTTP HTTP/1.1 404 Not Found
        900.059459 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        940.612870 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        940.738486 -> HTTP HTTP/1.1 404 Not Found
        940.739919 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        54225.712291 -> HTTP PUT /dav/testfile-2-byte HTTP/1.1
        54266.265324 -> HTTP HEAD /dav/testfile-2-byte HTTP/1.1
        54266.390361 -> HTTP HTTP/1.1 404 Not Found

        ... endlessly, until I kill the mount.davfs process. (I can't kill mv, because it terminated ok.)

    • Werner Baumann

      davfs2 0.2.8:
      I just released 0.2.9. This should solve the Bad Request problem, as the additional (and most probably protocol-violating) second Connection header is removed.

      davfs2 1.1.1:
      There is no answer to the PUT request (in the case of the 2-byte file). Because of this the connection will time out (on the davfs2 side) and davfs2 will try again and again. But it should not be necessary to kill it; unmounting should suffice. davfs2 will then try one last time and move the two bytes into lost+found.
      Most probably the server does not understand the Expect: 100-continue header. It should understand it, or return an error code.
      But to see what is going on, I need to see the complete traffic. I need the headers and the body. And in this case I also need the information from the TCP layer to know if, and which side, terminates the connection. I don't know tethereal, but ethereal and tcpdump will provide all of this.

      You may also try this (you will have to compile davfs2 for this). In the sources, in file webdav.c, comment out these lines:
      197 ne_set_expect100(session, 1);
      1005 ne_set_request_expect100(req, 1);
      1007 ne_set_request_flag(req, NE_REQFLAG_EXPECT100, 1);
      1024 ne_set_request_expect100(req, 1);
      1026 ne_set_request_flag(req, NE_REQFLAG_EXPECT100, 1);
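      For illustration, here is what the difference amounts to on the wire: the same PUT with and without the Expect header (a Python sketch; the host name is a placeholder). With the header the client pauses for an interim "HTTP/1.1 100 Continue" response before sending the body, so a server that never answers leaves the client timing out and retrying, which matches the repeated PUTs in the trace above.

```python
def build_put(path, body, expect100):
    """Build the raw request a PUT of `body` to `path` produces on the wire."""
    lines = [
        "PUT %s HTTP/1.1" % path,
        "Host: dav.example",            # placeholder host
        "Content-Length: %d" % len(body),
    ]
    if expect100:
        # Client must now wait for "100 Continue" before sending the body.
        lines.append("Expect: 100-continue")
    return "\r\n".join(lines) + "\r\n\r\n" + body.decode()
```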


    • Werner Baumann

      Just created an account and tested.

      The server indeed cannot handle the Expect: 100-continue header. It works without this header. I will have to look at whether it is possible to detect this behaviour, or whether I will have to make it a configuration option.

      But there seem to be more curious effects with this server. It seems not to support locks and answers a LOCK request with Not Found. It aborted an almost finished 50 MByte upload with 409 Conflict.

      Looks to me like it holds what it promises under 'System Requirements':
      ' only supports Windows 2000 and higher, or Mac OSX.'
      This is a really idiotic statement for a web service.