

Temporary file locks

Help
2008-10-10
2013-04-16
  • Matt A Gregory
    2008-10-10

    I am using davfs2 to edit content on Zope.  Unfortunately, when I use sed, vim, gvim, perl or anything else to move, alter, create or edit files, davfs2 creates a lock on the file (if it exists) or creates a webdavLock on the file if it does not exist, but never actually completes the changes.

    I either need to know how to push/commit changes or otherwise alter davfs2 to actually do what I want it to do in a reasonable amount of time.

    I've waited over an hour for davfs2 to commit changes, and it never seems to do it.

     
    • Matt A Gregory
      2008-10-10

      And this is what my boss and other co-workers see while davfs2 has all the files locked (this is the actual webserver content returned from Zope; please notice the "LockNullResource"):

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
      <head><LockNullResource at index_html_header></head>
      <body>
      <div class="page_context">

              <div class="page_masthead"><LockNullResource at index_html_masthead></div>
              <div class="page_left_col"><LockNullResource at index_html_left_column></div>
              <div class="page_mid_col"><LockNullResource at index_html_main></div>
              <div class="page_right_col"><LockNullResource at index_html_right_column></div>
              <div class="page_footer"></div>
      </div>

      </body>
      </html>

      Should I grow tired of waiting and umount the davfs2 share, all created files (but not directories) and all changes are lost.

       
    • Matt A Gregory
      2008-10-10

      logfiles show that davfs2 did call unlock... :-/

      127.0.0.1 - mgregory [09/Oct/2008:12:14:54 -0400] "MKCOL /ttv/sedonaaz.tv/ HTTP/1.1" 201 176 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [09/Oct/2008:14:20:44 -0400] "PROPFIND /ttv/sedonaaz.tv/ HTTP/1.1" 207 1006 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:07:23 -0400] "PROPFIND /ttv/sedonaaz.tv/ HTTP/1.1" 207 1006 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:08:37 -0400] "HEAD /ttv/sedHSBtS2 HTTP/1.1" 404 1331 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:08:37 -0400] "LOCK /ttv/sedHSBtS2 HTTP/1.1" 200 771 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:07 -0400] "UNLOCK /ttv/sedHSBtS2 HTTP/1.1" 204 160 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:07 -0400] "HEAD /ttv/sedXXhcp4 HTTP/1.1" 404 1331 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:07 -0400] "LOCK /ttv/sedXXhcp4 HTTP/1.1" 200 769 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:08 -0400] "UNLOCK /ttv/sedXXhcp4 HTTP/1.1" 204 160 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:08 -0400] "HEAD /ttv/sednqU1A7 HTTP/1.1" 404 1331 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:08 -0400] "LOCK /ttv/sednqU1A7 HTTP/1.1" 200 769 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "UNLOCK /ttv/sednqU1A7 HTTP/1.1" 204 160 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "HEAD /ttv/sedpUZjLU HTTP/1.1" 404 1331 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "LOCK /ttv/sedpUZjLU HTTP/1.1" 200 769 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "UNLOCK /ttv/sedpUZjLU HTTP/1.1" 204 160 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "HEAD /ttv/sedTWQikJ HTTP/1.1" 404 1331 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "LOCK /ttv/sedTWQikJ HTTP/1.1" 200 771 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:39 -0400] "UNLOCK /ttv/sedTWQikJ HTTP/1.1" 204 160 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:40 -0400] "HEAD /ttv/sedz3Aogz HTTP/1.1" 404 1331 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:40 -0400] "LOCK /ttv/sedz3Aogz HTTP/1.1" 200 769 "" "davfs2/1.3.0 neon/0.28.3"
      127.0.0.1 - mgregory [10/Oct/2008:11:09:40 -0400] "UNLOCK /ttv/sedz3Aogz HTTP/1.1" 204 160 "" "davfs2/1.3.0 neon/0.28.3"

       
    • Matt A Gregory
      2008-10-10

      Nope... it's davfs2 doing things wrong.

      Here's the whole transaction log.  My content gets deleted and there is never a PUT operation, effectively breaking my Zope transaction log and, of course, not actually doing anything useful in the end result:

      http://pastebin.com/d22fab0c2

      davfs2 locks the resource, then unlocks and deletes it, but never PUTs the new resource.  That's not WebDAV friendly, much less content management system friendly.

       
    • Matt A Gregory
      2008-10-10

      I will have to stop using this resource until this is resolved.  It's not even close to working for me.

       
    • Werner Baumann
      2008-10-10

      - Why use version 1.3.0?  There is version 1.3.3 with known bugs fixed.  Though, I don't think this problem is related to those fixed bugs.

      - What version of Zope, running on what brand of server?  There are known server bugs.  And there is a known bug in Zope that causes PUT requests from davfs2 to fail.  It should be fixed in the latest release of Zope.

      - "when I use sed, vim, gvim, perl or anything else to move, alter, create or edit files"
      It would be helpful if the logs could be related to *one specific* action.  A simple cp where the target does not yet exist would be a good starting point for basic testing.
      For instance: davfs2 will issue a DELETE request in two cases only: when the application requests to delete the file, or when your application does a "mv" with the target existing.  So I really need to know what your application requests from the file system.

      - "Here's the whole transaction log."  Are you sure the server logs everything?  It might have an error.log as well.  I don't think this is a complete log of the davfs2-Zope interaction.
      The debug log from davfs2 would be more helpful.  It logs calls from the kernel file system (caused by your application) as well as the HTTP requests and the responses from the server.  Option "debug most" would be good for a start.  Alternatively, capturing the HTTP traffic with wireshark or tcpdump could show the complete traffic and not only what the server chooses to log.

      - Did you look into the lost+found directory?  If there are entries (the files you edited), this is almost always caused by failed PUT requests.

      - As servers have different capabilities and different bugs, davfs2 can be configured.  There is no configuration that will work with every server.  The default is intended to work with Apache and most other servers, but is not optimal for many non-Apache servers, and may also fail with some servers.  You should look at the davfs2.conf man page, especially options
      if_match_bug
      use_expect100

      - Last: you know davfs2 is a caching WebDAV client?  It will not upload changed files before they are closed, and even then with a delay of approximately 10 seconds.

      Cheers
      Werner

       
    • Matt A Gregory
      2008-10-13

      >> - Why use version 1.3.0. There is version 1.3.3 with known bugs fixed. Though, I don't think this problem is related to these fixed bugs.

      I am using Gentoo and version 1.3.0 is the latest stable ebuild.  I will take on the responsibility of putting a new ~x86 ebuild into Portage and issuing the proper bug reports to bugs.gentoo.org.  However, as you have already pointed out, for my case this is a moot point.

      >> - What version of Zope, running on what brand of server. There are known server bugs. And there is a known bug in Zope that causes PUT requests from davfs2 to fail. It should be fixed in the latest release of Zope.

      I have attempted this on Zope 2.9.8 and 2.9.9, but not on Zope 3.3.1, because of business decisions to go with the older Zope due to more supported Zope products.  As to the second part of your question, there were never any PUT requests at all (I will re-up the Zope server logs as it appears that pastebin deleted them...

      here: http://pastebin.com/d22fab0c2 [stored for 1 month] )

      >> - "when I use sed, vim, gvim, perl or anything else to move, alter, create or edit files"
      >> It would be helpful, if the logs could be related to *one specific* action. A simple cp where the target does not yet exist, would be a good starting point for basic testing.
      >> For instance: davfs2 will issue a DELETE-request in two cases only: when the application requests to delete the file or when your application does a "mv", with the target existing. So I really need to know what your application requests from the file system.

      Ok, as requested: in the log I pasted to pastebin, the specific operation is an in-place edit of several files.  The command I ran was this: 'sed -i -e "s/container/context/g" index_html*'

      >> - "Here's the whole transaction log." Are you sure the server logs everything? It might have a error.log as well. I don't think this is a complete log of the davfs2-Zope-interaction.
      >> The debug log from davfs2 would be more helpful. It logs calls from the kernel file system (caused by your application) as well as the HTTP-requests and the responses from the server. Option "debug most" would be good for a start. Alternatively capturing the HTTP-traffic with wireshark or tcpdump could show the complete traffic and not only what the server chooses to log.

      This is a server request log; yes, it logs every request that comes through.  I don't mind running a debug log on davfs2 to help you resolve this, as this functionality is very important to me.

      I enabled debug most in the davfs2.conf file and will re-run these tests and post the results.

      >> - did you look into the lost+found-directory. If there are entries (the files you edited) this is almost ever caused by failed PUT-requests.

      AFAIK that is an ext3-specific path, and we are talking about Zope and the ZODB, not a filesystem.  My local filesystem is ReiserFS.

      >> - As servers have different capabilities and different bugs, davfs2 can be configured. There is no configuration that will work with every servers. The default is intended to work with Apache and most other servers, but is not optimal for many non-Apache servers, and may also fail with some servers. You should look at the davfs2.conf man page, especially options
      >> if_match_bug
      >> use_expect100

      I will look into it; however, my time is relatively limited.  I can certainly run davfs2 in debug mode, do some testing, and forward the results to you, but I have a huge workload, and attempting to write specific WebDAV handling for davfs2 is outside of my immediate scope right now and would require too much of my time for my employer to tolerate.

      >> - Last: you know, davfs2 is a caching WebDAV-client? It will not upload changed files before they are closed, and even then with a delay of approximately 10 seconds.

      Yes, I know.  This is why I gave it over an hour with no result.

      Let me know what else I can do to help you resolve this issue.

       
    • Matt A Gregory
      2008-10-13

      Out of respect for this thread I have decided to avoid polluting it with long logs.  I have uploaded the logs to my own server for you to download and view.

      davfs.messages is the result of 'sudo grep mount.davfs /var/log/messages' during this test
      davfs.Z2log is the result of 'sudo grep matt.*davfs2 /var/log/zope/zope-test/Z2.log'

      the relevant test was from mounting my Zope server in a test directory and running 'sed -i -e "s/context/container/g"' to do an in-place edit on several files.

      I believe this is either a race condition or how Zope handles LOCK and PUT respectively, although I certainly am nowhere near the expert on WebDAV that you are.

      the files can be downloaded from http://www.skyleach.org/davfs.messages and http://www.skyleach.org/davfs.Z2log

      good luck, and please let me know if you need anything else.

       
    • Matt A Gregory
      2008-10-13

      I updated the ebuild and submitted it to this bug: https://bugs.gentoo.org/show_bug.cgi?id=220359

      now, the same test results with 1.3.3

      http://www.skyleach.org/davfs.messages
      http://www.skyleach.org/davfs.Z2log

       
    • Werner Baumann
      2008-10-13

      Thanks for the logs. They show two things:

      - there are a lot of PUT-requests not logged by Zope

      - it is the known Zope bug in HEAD-responses.

      There is already a bug report about this:
      http://sourceforge.net/tracker/index.php?func=detail&aid=1988565&group_id=26275&atid=386747
      This thread also shows what is going wrong.

      Solution:
      -----------
      Set option "if_match_bug 0" in your davfs2.conf file.  Uploads should work now.
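      As a sketch, that one-line change in davfs2.conf (the system-wide file is typically /etc/davfs2/davfs2.conf, with a per-user ~/.davfs2/davfs2.conf; see the man page):

```
# davfs2.conf
# 0 = this server handles If-Match correctly, so davfs2 can use
# conditional PUT instead of the HEAD-based check that Zope gets wrong:
if_match_bug 0
```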

      Background:
      The problem turns up because davfs2 tries to avoid the Lost-Update-Problem (unintentionally overwriting concurrent changes from other clients). But this is made difficult by server bugs.
      davfs2 knows two ways to avoid unintentional overwriting:
      1) use conditional PUT, i.e. headers If-Match and If-None-Match with etags
      2) use HEAD-requests to check for changes on the server before uploading with PUT

      1) A bug in the widely used Apache/mod_dav/mod_dav_fs server makes this impossible.
      2) A bug in Zope's HEAD implementation makes this impossible.
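      Method 1 amounts to attaching an etag precondition to the upload; a sketch of what such a conditional PUT looks like on the wire (URL and etag value are illustrative):

```
PUT /ttv/index_html HTTP/1.1
Host: localhost:8080
If-Match: "ts1223654947.07"
Content-Length: 1234

...new file contents...
```

      A server that implements this correctly accepts the PUT only if the resource's current etag still matches, and answers 412 Precondition Failed if the resource changed on the server side in the meantime.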

      There is no way but to manually configure which method to use. By default davfs2 uses method 2 (because Apache is very popular). So Zope users will have to change the configuration.

      The latest versions of Zope and Apache/mod_dav should have these bugs fixed.  In any case, you should use the "if_match_bug 0" option if it works, because this is the more reliable and more efficient way.  Using HEAD is a less reliable workaround for the Apache/mod_dav bug.

      Cheers
      Werner

       
    • Matt A Gregory
      2008-10-14

      I tried to remount my test directory, and the files that were deleted and apparently lost in Zope were still saved in the davfs2 cache.  That would be fine, except now davfs2 is acting really funky and slow, and the files still never appear in Zope...

      before I can run any real testing I need to know how to clean/clear this up.

       
    • Werner Baumann
      2008-10-14

      There is a cache directory for every mount point; you will know it from the name. If you don't need the cached data, simply remove the directory.

      If you do further testing:
      Don't mount a huge directory; a couple of files is enough.  More files will only increase the debug output with no additional information.
      Do just *one* action that requires a PUT request.  There is no use in repeating the same action on different files over and over again.  A simple cp of a local file to the mounted davfs2 file system would be good.  Or just "echo text > new_file_in_the_davfs2_file_system".
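      The suggested single-action test can be sketched like this (the mount point name is illustrative; substitute the directory where your davfs2 share is actually mounted, and the one write below should then correspond to a single upload in the debug log):

```shell
mnt=./davfs2-mount             # hypothetical mount point; use your real one
mkdir -p "$mnt"                # (only needed for this local stand-in)
echo "text" > "$mnt/new_file"  # exactly one action that requires a PUT
cat "$mnt/new_file"            # prints: text
```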

      Cheers
      Werner

       
    • Werner Baumann
      2008-10-14

      Why did you replace the first version of http://www.skyleach.org/davfs.messages with a new, rather useless version?  To see what is going on, at least "debug most" is needed.

      Werner

       
    • Matt A Gregory
      2008-10-14

      Ok, I've done about as much with this as I can for now.

      I would suggest a new design for davfs3.  Namely, a way to create per-server handlers, perhaps in XML configuration files, that allow for advanced content management.

      While Apache WebDAV is popular, real content management systems also use WebDAV.  Yet they handle many things differently than Apache does because they have to.

      Let's go down some examples:

      When you create a file on a WebDAV server, either the server stores the file as-is (Apache), or there is backend handling to determine the content-type and then create a resource in the database, either a flatfile database like the ZODB or an RDBMS like MySQL, Oracle or (god forbid) MSSQL.  In the case of content-type text/*, the content management system doesn't really know how to handle the file other than to create its default file type.  In the case of Zope, this is a DTML Document.

      Linux/GNU programs like sed, when doing an in-place edit on files, tell the OS to make a temporary file, rewrite the stream from the original file to the temporary file, and then delete the original file and rename or move the temporary file to replace the original.
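      That sequence can be re-enacted locally (file names are illustrative; GNU sed picks its own sedXXXXXX temp names, like the ones visible in the access logs earlier in this thread, and mktemp stands in for that here):

```shell
printf 'replace container here\n' > index_html
tmp=$(mktemp sedXXXXXX)                          # 1. temp file beside the original
sed 's/container/context/g' index_html > "$tmp"  # 2. rewrite the stream into it
mv "$tmp" index_html                             # 3. rename over the original
cat index_html                                   # prints: replace context here
```

      On a davfs2 mount, each of these file system steps maps onto its own WebDAV requests, which is why even a one-line sed produces a burst of HEAD/LOCK/UNLOCK traffic.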

      This means that special managed-content files such as Zope's CMF/Plone page_template filetypes get their type changed from page_template to DTML Document, breaking CMF sites.

      This is not a fault of davfs2 or of Zope or of WebDAV, it's simply that none of the technologies address the problem.

      For this reason, I make the following recommendations:

      davfs3 design changes:
      1.) WebDAV and/or ANY HTTP-based CMS supported through system-specific handlers
      2.) utility tools that allow you to call specific handler functions.  Example: in the case where you want to create a file of type page_template (having meaning only to Zope CMF): davfs-tools --handler/-h Zope-2.9.9 --type/-t page_template --content/-c ~/tmp/my_page
      3.) XML-based handler definition files for specific servers that define these methods and allow you to associate them with default OS actions (creating a file, modifying a file, moving a file, etc...)
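      A handler definition file of the proposed kind might look something like this.  To be clear, this is purely hypothetical: every element and attribute name below is invented for illustration, and nothing like it exists in davfs2 today.

```xml
<!-- Hypothetical davfs3 handler sketch; all names are invented. -->
<handler server="Zope-2.9.9">
  <filetype name="page_template" match="*.pt">
    <on action="create" method="PUT">
      <header name="Content-Type" value="text/html"/>
    </on>
    <on action="modify" method="PUT"/>
    <on action="move" method="MOVE"/>
  </filetype>
</handler>
```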

      This would make davfs REALLY useful, as there are many, many CMS systems out there and NONE of them stick entirely to WebDAV, since WebDAV is very simplistic.

      more thoughts later.

       
    • Matt A Gregory
      2008-10-14

      ok, I've cleared out the cache directory.

      Test started at 12:41pm sharp (logfile date/time synchronization using ntp)
      Test 1: (run 2x)
      Operation: edit a managed content file from zope
      Transaction log:

      Step 1: mount a small test directory with davfs2
      RESULT: success
      Step 2: copy index_html in zope and paste it into the webdav folder, verify it shows up in the davfs2 mountpoint
      RESULT: success
      Step 3: edit the index_html file with vim, save and close.  verify changes viewable in zope
      RESULT: failure
      Horribly slow opening the file and allowing me to actually edit it.  Horribly slow saving the file.  The changes never appear in the Zope version of the file.  (The file gets locked in Zope and unlocked, but never changed.)

      logfiles:
      http://www.skyleach.org/davfs2_test1_messages
      http://www.skyleach.org/davfs2_test1_bash
      http://www.skyleach.org/davfs2_test1_Z2log
      http://www.skyleach.org/davfs2_test1_system_bash_messages
      NOTE: there was nothing in the zope event log for the time of the tests (errors or otherwise).

      my davfs2 configuration file:
      http://www.skyleach.org/davfs2.conf

       
    • Werner Baumann
      2008-10-14

      - Whenever a request fails because davfs2 (actually the neon library) cannot read the malformed response, Zope does not log this request.  Maybe this is because neon sends RST when receiving a malformed response.

      - Whenever davfs2 uses the If-Match or If-None-Match header, neon can't read the server response.  Seems like your version of Zope implements both bugs: the if_match bug and the body-in-HEAD bug.  The only way to communicate with this broken server is to not check for server-side changes at all.  You can do this with options "precheck 0" and "if_match_bug 1".  (BTW: you know "man mount.davfs" and "man davfs2.conf"?)
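      As a sketch, the combination described above in davfs2.conf:

```
# davfs2.conf: give up on server-side change detection entirely, for a
# server with both the If-Match bug and the body-in-HEAD bug:
precheck 0
if_match_bug 1
```

      Note that this trades away the lost-update protection: concurrent changes from other clients can then be silently overwritten.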

      - The problem may also be caused by Zope not understanding the Expect: 100-continue header.  As mentioned earlier, you can turn off this header with option "use_expect100 0".

      To see if the reason is really a malformed response from Zope, and what is wrong with it, would require seeing the contents of the response from Zope.  This could be done either with Wireshark or tcpdump, or with the additional option "debug httpbody" in the davfs2.conf.
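      In davfs2.conf that would be, as a sketch:

```
# davfs2.conf: log the HTTP traffic including the message bodies
debug most
debug httpbody
```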

      Some remarks on your tests:
      - the problem is not reading files (GET). So I don't understand what this "cat" command is intended for.
      - the problem is not listing the contents of a directory (PROPFIND). So I do not understand what "Step 2: copy index_html in zope and paste it into the webdav folder, verify it shows up in the davfs2 mountpoint" is intended for.
      - the problem is uploading files that were created or changed on the client side. So what you should do for testing is the simplest command that just creates a file on the client side and nothing else. Do not use vim; it is not simple and does a lot more. Use echo or cp. The test should concentrate on exactly the problem in question.

      - your logs are out of sync:
      "davfs2_test1_system_bash_messages" lists three mounts and two unmounts, but "davfs2_test1_messages" only shows the last mount and umount.

      Your proposal:
      I don't really understand the relation to the problem in question. davfs2 is not a content management system. It is just a simple WebDAV client. Zope claims to support the WebDAV protocol, but has bugs in doing so. These Zope bugs are the problem.

      Each of your proposed handlers would require an agreement between the server and the client on how to handle things, i.e. it would require a special protocol or protocol extension. Is there anything like this for what Zope is doing? And if so, are there no clients for this protocol? I am not clear why you use a client like davfs2, which is for the basic WebDAV protocol, and expect it to handle (undocumented?) extensions. If Zope does non-WebDAV stuff and you want to use it, use a Zope client.

      Why should I expect that servers that fail to implement the "simplistic" WebDAV protocol correctly (and davfs2 uses only the basics) are able to handle more complex protocols?

      But davfs2 surely will not implement such extensions. One reason is the very nature of davfs2: it maps WebDAV resources into the unix file system to allow applications without built-in WebDAV support to access WebDAV resources via the file system interface. It is already impossible to map all of the basic WebDAV protocol without loss into the unix file system interface. It would be harder still to do this with your proposed extensions, unless you want to create a new unix file system interface too.

      One example:
      You can access Subversion repositories with davfs2. But this is suboptimal because of the restrictions of the file system. If you want to take advantage of the features of Subversion you should use the Subversion-client.

      davfs2 is a simple client for the "simplistic" WebDAV protocol. I believe in this it is useful. But it is not, and is not intended to be, more. But I don't think this is the problem. The problem is to upload a file to the Zope server.
      But to be clear: davfs2 expects that it can use PUT to upload some contents to the server (using a certain URL) and that it can retrieve exactly this content from the server with GET (using the same URL). If the server does not allow this, you can't use davfs2. The server should not claim to support WebDAV in this case.

      Cheers
      Werner

       
    • Matt A Gregory
      2008-10-14

      I didn't mean to put you on the defensive.  I'm a developer with more than 13 years of experience, so when I make a recommendation there is a fair amount of experience behind it.

      Allow me to state the obvious:
      davfs2 is a protocol bridge between a POSIX-compliant filesystem and a web service.  This web service, WebDAV, is a protocol (or, more properly, an extension to a protocol: HTTP).

      As you and the other developers have already found, different implementations work in different ways and in many cases rfc2518 isn't explicit.

      I am fully aware that davfs is all about webdav.  My proposal is to take the webdav-specific protocol out of the core system and instead have it as a configuration.  This would allow custom configurations for using non-webdav protocols over HTTP.

      Yes, I am fully aware that it isn't specifically related to my issue.  In the course of my problems I have come across multiple issues that make it suitable for certain tasks and not for others due to webdav limitations.

      Should the developers of davfs be completely opposed to this idea, there is no trouble with that.  A simple fork of the project to create a multi-protocol version would be possible.

      What would be changed:
      resource-specific handlers could be changed/altered by users for custom cases
      webdav would just be the start of the supported protocols.  Many other web/network services could be supported (PACS/DICOM, HL7, Any HTTP based CMS, etc)

      What would be kept:
      all of the features of davfs that make it a great option for webdav (caching, transparent to applications, etc...)

      As for my specific issues I will try your recommendations and see how they apply.

       
    • Matt A Gregory
      2008-10-15

      >> - whenever a request fails, because davfs2 (actually the neon library) cannot read the malformed response, Zope does not log this request. Maybe this is because neon sends RST when receiving a malformed response.

      I noticed this.  No answer at this time as I am not a zope guru.

      >> - whenever davfs2 uses the If-Match- or If-None-Match-header, neon can't read the server response. Seems like your version of Zope implements both bugs: the if_match-bug and the body-in-HEAD-bug. The only way to communicate with this broken server is to not check for server-side changes at all. You can do this with options "precheck 0" and "if_match_bug 1". (BTW: you know "man mount.davfs" and "man davfs2.conf"?)

      This would really not work well as content changes are being made constantly on the server.

      >> - the problem may also be caused by Zope not understanding the Expect-100-Header. As mentioned earlier, you can turn off this Header with option "use_expect100 0".

      Option set for now, but I will turn it off once I get sufficient functionality to see if it is really required.

      >> To see if the reason is really a malformed response from Zope and what is wrong with it, would require to see the contents of the response from Zope. This could be done either with Wireshark or tcpdump, or with the additional option "debug httpbody" in the davfs2.conf.

      Wireshark capturing packets on localhost.

      >> Some remarks on your tests:
      >> - the problem is not reading files (GET). So I don't understand what this "cat" command is intended for.

      I was checking to see if the change was made on the file locally after 'gvim :wq'.

      >> - the problem is not listing the contents of a directory (PROPFIND). So I do not understand what "Step 2: copy index_html in zope and paste it into the webdav folder, verify it shows up in the davfs2 mountpoint" is intended for.

      This is a test.   This is only a test.  Please bear with the tester.  Have you ever worked with a large development group before?

      >> - the problem is uploading files that were created or changed on the client side. So what you should do for testing is the simplest command that just creates a file on the client side and nothing else. Do not use vim; it is not simple and does a lot more. Use echo or cp. The test should concentrate on exactly the problem in question.

      No, it isn't.  The problem is editing files, often thousands of them, on the client side using GNU/Linux tools rather than the web browser on an application server and content management system that supports webdav.

      >> - your logs are out of sync:
      >> "davfs2_test1_system_bash_messages" lists three mounts and two unmounts, but "davfs2_test1_messages" only shows the last mount and umount.

      My log files are most definitely not out of sync.  All servers are on my local development machine, which is synchronized daily with a tier 2 NTP server.  I just included more of /var/log/messages than I meant to (an earlier mount was included).

      >> Your proposal:
      >> I don't really understand the relation to the problem in question. davfs2 is not a content management system. It is just a simple WebDAV-client. Zope claims to support the WebDAV-protocol, but has bugs in doing so. These Zope-bugs are the problem.

      Let's not play blame games, it's childish and a waste of valuable time.  Let's be professional developers and solve an issue.

      >> Each of your proposed handlers would require an agreement between the server and the client how to handle things, i.e. it would require a special protocol or protocol extension. Is there anything like this for what Zope is doing? And if so, are there no clients for this protocol? I am not clear why you use a client like davfs2, that is for the basic WebDAV-protocol, and expect it to handle (undocumented?) extensions. If Zope does non-WebDAV-stuff and you want to use it, use a Zope-client.

      Welcome to 'teh interwebz'.  The "Zope client", along with the "client" for EVERY web service application server of ANY kind, is served either over HTTP or over WebDAV or both.  Every HTTP client handler is different, and as you have already seen, nearly all WebDAV servers are different.  Thus the need for a configurable protocol is self-evident.  You may not like the idea for any reason you choose; this does not make it an invalid suggestion.

      >> Why should I expect, that servers that fail to implement the "simplistic" WebDAV-protocol (and davfs2 uses only the basics) correctly, are able to handle more complex protocols?

      Should I assume by quoting "simplistic" you have been offended by my qualification?  The WebDAV protocol extension was designed for editing flatfiles on a filesystem.  The reason that WebDAV (and consequently davfs2) is not extremely popular and used on nearly all sites is that flatfile/filesystem-based content management is clunky and unwieldy in large content environments, where the vast majority of content editors know how to write only from GUI tools.  They know little or no HTML and cannot be convinced to (or in many cases are unable to) learn these technologies.  Consider asking a graphic artist to manage all of her photos on a content management server and, oh yeah, please use WebDAV.  Nightmare incarnate!  Yet those of us schooled in deeper magicks are often called upon to do things that would take the content managers hours or even weeks, yet we can do them with a quickly written script in mere moments.  To do this, however, we are forced to either script a web client or use things like davfs2 for quick access to the files.  I have many web client scripts.  I would rather have the functionality of davfs and be able to specify the how-to-handle rules only one time.  Hell, imagine having armies of developers like myself uploading handlers for hundreds of different protocols and servers.  It's all win.

      >> But davfs2 surely will not implement such extensions. One reason is the very nature of davfs2: it maps WebDAV-resources into the unix file system to allow applications without built-in WebDAV-support to access WebDAV-resources via the file system interface. It is already impossible to map all of the basic WebDAV-protocol without loss into the unix file system interface. It will be still more impossible to do this with your proposed extensions, unless you want to create a new unix file system interface too.

      Completely untrue.  I have already cracked open the source for davfs2, and I am no padawan in C but rather a full master.  I will be happy to schedule this on my plate for a later time, though; unfortunately, I am feasting on things that are both more urgent and more appealing to my need for income at this time.

      >> One example:
      >> You can access Subversion repositories with davfs2. But this is suboptimal because of the restrictions of the file system. If you want to take advantage of the features of Subversion you should use the Subversion-client.

      My thoughts exactly when reading the Delta-V thread.  Yet Subversion already allows me to modify 10k files with a few bash commands, and no web-interface-based system does.

      >> davfs2 is a simple client for the "simplistic" WebDAV protocol. I believe in this it is useful. But it is not and is not intended to be more. But I don't think this is the problem. The problem is to upload a file to the Zope server.
      >> But to be clear: davfs2 expects that it can use PUT to upload some content to the server (using a certain url) and that it can retrieve exactly this content from the server with GET (using the same url). If the server does not allow this, you can't use davfs2. The server should not claim to support WebDAV in this case.

      YES!  In this it is useful.  In this it has drawn my interest.  My compliments to you, sir.  Now witness the meaning of FOSS, and see another developer work wonders with it that are beyond your wildest dreams!  ...er... when he gets some free time :-)

      >> Cheers
      >> Werner

      << Raises a glass of stout
      - Matt "SkyLeach" Gregory

       
    • Werner Baumann
      Werner Baumann
      2008-10-15

      Hello Matt.

      "Let's not play blame games, it's childish and a waste of valuable time. Let's be professional developers and solve an issue."

      Nice to hear this from an experienced developer and master of C. The issue:
      davfs2 cannot upload files to your Zope server because
      - Zope sends an illegal body in response to HEAD and causes the succeeding PUT to fail.
      After switching off the HEAD-requests:
      - all responses from Zope to conditional requests (if there are any) cannot be read by the neon library.

      As a simple hobby programmer I would like to know why. I just want to see the traffic going on between davfs2 and Zope. With my little experience and a hobbyist's knowledge of HTTP, I believe I can see from this traffic what is wrong in the davfs2-Zope communication. And I can only solve problems when I can see what is going wrong.

      Do you want to "solve an issue"?
      Yes? Do a simple cp of some file into the mounted davfs2 file system and capture the traffic between davfs2 and Zope. Send the captured traffic to me.

      You can turn davfs2 into a multiprotocol world wonder afterwards.

      Cheers
      Werner

      P.S.: "I have already cracked open the source for davfs". How did you "crack open" sources that are freely accessible in plain text by anyone from a public server?

       
    • Matt A Gregory
      Matt A Gregory
      2008-10-15

      >> Hello Matt.
      'sup?

      >> The issue:
      >> davfs2 cannot upload files to your Zope server because
      >> - Zope sends an illegal body in response to HEAD and causes the succeeding PUT to fail.
      >> After switching off the HEAD-requests:
      >> - all responses from Zope to conditional requests (if there are any) cannot be read by the neon library.

      >>I would like to know why. I just want to see the traffic going on between davfs2 and Zope. I believe I can see what is wrong in the davfs2-Zope communication from this traffic.

      >> Do a simple cp of some file into the mounted davfs2 file system and capture the traffic between davfs2 and Zope. Send the captured traffic to me.

      >> Cheers
      >> Werner

      I took the liberty of editing your original text and removing stuff that does not belong on sourceforge.  I hope you will understand.

      I will, for the sake of cooperation, perform the tests you have requested.  I will not go into length explaining why these tests do not address my issue, as I have already done this.

      >> P.S.: "I have already cracked open the source for davfs". How did you "crack open" sources that are freely accessible in plain text by anyone from a public server?

      To "crack open" has several meanings.

      It can mean "crack" as in "I'm a l33t hax0r and I cracked the gibson d00d!" or it can mean "to crack open" as in "to open".

      I meant the latter.

      -Matt

       
    • Matt A Gregory
      Matt A Gregory
      2008-10-15

      The PUT test completed successfully when the following options are in the config:

      use_expect100 0
      if_match_bug 1
      precheck 0

      logfiles:
      http://www.skyleach.org/davfs2_test2_messages
      http://www.skyleach.org/davfs2_test2_Z2log

      I must use davfs2 with extreme caution with these flags set, since content-change race conditions could cause me to clobber other users' changes.
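
      For anyone landing here later, these options can be scoped to a single mount point so other shares keep the safer defaults.  A sketch of what that could look like (the mount point /mnt/zope is made up; davfs2 normally reads /etc/davfs2/davfs2.conf system-wide or ~/.davfs2/davfs2.conf per-user, and here a local stand-in file is used for illustration):

      ```shell
      # Sketch: append the Zope workarounds to a davfs2 config file,
      # scoped to one mount point so other mounts keep conditional
      # requests and prechecking enabled.
      # /mnt/zope is a hypothetical mount point; $conf stands in for
      # /etc/davfs2/davfs2.conf or ~/.davfs2/davfs2.conf.
      conf=./davfs2.conf.example
      cat >> "$conf" <<'EOF'
      [/mnt/zope]
      use_expect100 0
      if_match_bug 1
      precheck 0
      EOF
      grep -c '^precheck 0' "$conf"    # sanity check the append worked
      ```

      Options given before the first `[mountpoint]` section apply globally, so keeping the workarounds inside the section limits the blast radius to the Zope share.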

       
    • Matt A Gregory
      Matt A Gregory
      2008-10-15

      When you are ready to proceed, please let me know and I will move on to the issue of editing files and the fact that they vanish after editing.

      -Matt

       
    • Werner Baumann
      Werner Baumann
      2008-10-15

      Hello Matt,

      As expected, it works when conditional requests and prechecking with HEAD are turned off. But conditional requests are important to make things reliable. So the question is still: why do conditionals not work with Zope?

      To see why this does not work, and what has to be done to make it work, I must - of course - see the traffic when conditionals are switched on and the request fails. And I must see the full traffic, including the request body. The debug output of davfs2 only shows what has already been processed by neon. As neon can't read the response from Zope, this will probably show only part of the traffic, and most probably not the part that is causing the trouble. I need the traffic of this failing request, captured with tcpdump or wireshark, to analyze the problem.

      What is the problem with doing the same action (cp -v ~/tmp/wbaumann ./zope-matt/) and capturing the traffic with wireshark? The options must be:
      use_expect100 0
      if_match_bug 0
      precheck 1 (is default)
      * This is the next step to take in solving the issue. *
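
      The requested capture can be sketched as a short shell session.  The interface (eth0), port (8080) and output filename are assumptions, not from the thread; this sketch only prints the commands, since the capture itself needs root:

      ```shell
      # Sketch of the capture Werner asks for. -s 0 (full snap length)
      # keeps whole packets, so request and response bodies survive
      # into the dump. eth0 and port 8080 are assumed values; adjust
      # them to the real Zope setup.
      IFACE=eth0
      PORT=8080
      CAPTURE="tcpdump -i $IFACE -s 0 -w davfs2-zope.pcap tcp port $PORT"
      echo "1. as root, start:  $CAPTURE"
      echo "2. repeat the copy: cp -v ~/tmp/wbaumann ./zope-matt/"
      echo "3. stop tcpdump and send davfs2-zope.pcap"
      ```

      The resulting .pcap can be opened in wireshark ("Follow TCP Stream") to read the failing PUT and the response neon chokes on.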

      Reliability and Lost-Update-Problem:
      Even with prechecking and conditionals not working, there are still means to prevent lost updates. davfs2 uses LOCKs. The problem with locks is that it may be possible to circumvent them - even unintentionally. This is not very likely, and unintentionally overwriting someone else's changes is not very probable, but conditional requests would be better.
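
      To make the alternative concrete: a conditional request ties the upload to the ETag seen at download time, so the server answers 412 Precondition Failed instead of silently overwriting a newer copy.  A sketch of what such a request looks like on the wire (the path, host and ETag value are all invented examples):

      ```shell
      # Sketch of a conditional PUT: the If-Match header makes the
      # server refuse the upload unless the resource still carries
      # this ETag, which closes the lost-update window that plain
      # PUT leaves open. All values below are made up.
      ETAG='"1845-400-2d5555"'
      REQUEST=$(cat <<EOF
      PUT /zope-matt/wbaumann HTTP/1.1
      Host: zope.example.org
      If-Match: $ETAG
      Content-Length: 12

      hello, zope
      EOF
      )
      printf '%s\n' "$REQUEST"
      ```

      When the precondition fails, the client knows the server-side copy changed and can re-fetch instead of clobbering it.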

      expect 100:
      This header is not very important and I will not use it by default in future versions. What it does is first ask the server whether it will accept the PUT-request before sending the data, to avoid useless traffic. But in almost all cases the server should and will accept the data, and expect 100 will just cause one additional roundtrip.
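
      The extra roundtrip looks like this on the wire: the client sends only the headers plus the Expect header, waits for the server's interim 100 status, and only then sends the body.  A sketch of the exchange (host, path and body size are invented):

      ```shell
      # Sketch of the handshake that use_expect100 controls. "C:" is
      # the client (davfs2), "S:" the server. The client pauses after
      # the headers; a compliant server answers "100 Continue" before
      # any body bytes are sent. All values are examples.
      EXCHANGE=$(cat <<'EOF'
      C: PUT /zope-matt/wbaumann HTTP/1.1
      C: Host: zope.example.org
      C: Content-Length: 1048576
      C: Expect: 100-continue
      C:                             <- client waits here
      S: HTTP/1.1 100 Continue      <- the extra roundtrip
      C: (1 MiB of body follows)
      S: HTTP/1.1 201 Created
      EOF
      )
      printf '%s\n' "$EXCHANGE"
      ```

      The saving only matters when the server would reject the upload anyway (auth, quota); otherwise, as Werner says, it is pure added latency.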

      "I will move on to the issue of editing files and the fact that they vanish after editing".
      They vanish because uploading them to Zope with PUT fails. That's what my proposed tests are all about.

      Cheers
      Werner