Sorry, it's me again. I've had enough of this davfs and want to massacre all of it.
Now seriously: due to the low responsiveness of our software providers, I may have to write a script that unmounts davfs when WebDAV stops being accessible. Our problem is that when WebDAV is down our application can't access the user profiles, but instead of giving up it creates empty profiles and then writes them back, with the end result that when WebDAV comes back these profiles get overwritten with the ones created from scratch.
The idea is to unmount davfs quickly when something happens to WebDAV, to prevent profiles from being written back. I don't care about the user data of current sessions, only about old profiles with data accumulated over months being overwritten with fresh from-scratch profiles.
Unmounting davfs can take time, and killing it may occasionally leave it hung. So I was thinking about something like deleting the index and then running unmount. What do you advise?
Deleting the index will have no effect. Changing anything else within the cache directory while davfs2 is running may have very strange effects.
You would have to kill davfs2 with SIGKILL; afterwards you can unmount it. This should work, but it may happen that a stale PID file remains and prevents davfs2 from starting the next time.
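That sequence, SIGKILL, unmount, then PID file cleanup, could be sketched like this. The PID file naming scheme under /var/run/mount.davfs/ (mount point with slashes turned into dashes) and the mount point are assumptions for illustration; verify them against your installation:

```shell
#!/bin/sh
# Forced davfs2 teardown: SIGKILL the daemon, unmount, then remove the
# stale PID file so davfs2 can start again next time.
# The PID file naming convention below is an assumption -- check locally.

pidfile_for() {
    # Map a mount point like /mnt/webdav to
    # /var/run/mount.davfs/mnt-webdav.pid (assumed convention).
    echo "/var/run/mount.davfs/$(echo "${1#/}" | tr / -).pid"
}

force_unmount() {
    pidfile=$(pidfile_for "$1")
    if [ -f "$pidfile" ]; then
        # SIGKILL the mount.davfs daemon named in the PID file.
        kill -KILL "$(cat "$pidfile")" 2>/dev/null
    fi
    umount "$1" 2>/dev/null
    rm -f "$pidfile"   # clean up the stale PID file mentioned above
}

# Example (hypothetical mount point): force_unmount /mnt/webdav
```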
The best way to prevent problems like this is to use locks. Your application should not be able to create or open files for writing while the connection is down, unless the file is already locked because it was changed before and not yet uploaded.
But what is your application really doing (in detail)? With the patch for empty files applied, your application should either not be able to open the file for writing (error: resource temporarily unavailable), or it should be able to edit the file in the cache, which is uploaded later (in this case the old content should not be lost). It looks like your application truncates the cached file to zero before editing it.
P.S.: If you think your application is not willingly truncating the cached file, the debug log of davfs2 would help (option "debug most").
This application willingly does whatever it likes at that particular moment. When it receives "resource not available", it simply creates a new profile and then writes it back to disk. I would compare this application to that bug you had in the era of double PUTs, when davfs wiped out any file written during WebDAV outages.
Actually, I see that umount has a -i switch which removes the mount immediately. If that is what it does, then it should be OK, as I would delete the cache manually later. I simply need something that sends davfs to hell in a split second after loss of connection to WebDAV is detected. Is this switch what I am looking for?
No. The -i switch is only there to prevent an infinite loop:
umount calls umount.davfs, umount.davfs calls umount, umount calls umount.davfs, and so on.
The umount.davfs helper is only there to prevent the umount command from *returning* before mount.davfs has synchronized and saved the cache. This shows users that mount.davfs is still synchronizing, so they will not shut down the computer. It is necessary because mount.davfs gets no notification from the kernel when the file system is unmounted; it only notices the unmounting when the connection to the kernel is closed (at which point, without umount.davfs, umount would already have returned) and then starts to write back dirty files and create the new index file.
What umount.davfs does:
1. read the PID file for the mount.davfs-daemon
2. check with ps whether this mount.davfs-daemon is still running
3. call umount -i; this will unmount the file system
4. print the message "Waiting while ... " and start a loop that checks whether mount.davfs is still running
5. when the mount.davfs daemon terminates (the cache is synchronized), umount.davfs returns, and so does umount. The user knows that the file system is synchronized (or, if the connection is down, that the index has all the information needed to restore the state at the next mount).
So only steps 1 and 2 add a very small delay. The main delay comes after the file system has already been unmounted.
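The sequence above could be sketched roughly like this; it is an illustration of the helper's logic, not the real umount.davfs, and the PID file path is an assumed convention:

```shell
#!/bin/sh
# Rough illustration of the steps umount.davfs performs.

umount_davfs() {
    mnt=$1
    # Assumed PID file location -- verify against your davfs2 install.
    pidfile="/var/run/mount.davfs/$(echo "${mnt#/}" | tr / -).pid"

    pid=$(cat "$pidfile") || return 1         # 1. read the PID file
    kill -0 "$pid" 2>/dev/null || return 1    # 2. check mount.davfs is running
    umount -i "$mnt" || return 1              # 3. unmount without re-invoking the helper
    echo "Waiting while mount.davfs synchronizes the cache ..."
    while kill -0 "$pid" 2>/dev/null; do      # 4. poll until the daemon
        sleep 1                               #    has finished writing back
    done                                      # 5. daemon gone: cache is saved
}
```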
One more hint:
You can simply remove umount.davfs (better: remove the symbolic link in /sbin).
umount will then unmount the file system immediately without calling umount.davfs, and mount.davfs will start to synchronize the cache. As long as you don't shut down the system while mount.davfs is still running and synchronizing, or otherwise kill mount.davfs prematurely, there is no (additional) danger for data integrity.
Whether this will help depends on whether you can call umount before your application tries some nonsense.
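With the symlink removed, a small watchdog along these lines could trigger the unmount as soon as the server stops answering. The URL, mount point, and polling interval are made-up placeholders:

```shell
#!/bin/sh
# Hypothetical watchdog: poll the WebDAV server and unmount immediately
# once it stops responding. Adjust URL/MNT/INTERVAL for your setup.

URL=${WEBDAV_URL:-https://webdav.example.com/}
MNT=${WEBDAV_MNT:-/mnt/webdav}
INTERVAL=${INTERVAL:-30}

server_up() {
    # An OPTIONS request is cheap and any WebDAV server should answer it.
    curl --silent --fail --max-time 10 --request OPTIONS "$URL" >/dev/null 2>&1
}

watchdog() {
    while server_up; do
        sleep "$INTERVAL"
    done
    # Server gone: with the /sbin/umount.davfs link removed this returns
    # at once; mount.davfs keeps writing back the cache in the background.
    umount "$MNT"
}

# Run it in the background, e.g. from an init script: watchdog &
```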
Removing the link seems to be what I am looking for. Basically, I have a few minutes from the start of the outage before the profiles of inactive sessions start getting flushed to disk, so this should suffice.