This is a totally novice question, but I did check the old mediawiki
and the trac and was still confused:
do the backends also manage the .dirinfo information?
I have a server installation that provides a dump spot for clients to
upload files; we're using wzdftpd for the POSTUPLOAD feature so we can
post-process these files for publication. What we're seeing is severe
performance degradation due to the massive size of the .dirinfo files:
the clients do not delete their uploads, which are only swept away
periodically by an external process, and as a result some of the
.dirinfo files have grown to 1 MB or more. It seems wzdftpd must load
this file for every file transfer, and that load takes considerable
time. We were tipped off to this when we noticed the files were
arriving with a precise spacing of 28 seconds.
I'm wondering if there is a way to avoid this. My workaround is to
keep a bare, dir-permission-only .dirinfo.0 file and periodically
overwrite the generated .dirinfo with it (a rough sketch of the script
is below). Is there a more elegant solution? If we switch to a
PostgreSQL or MySQL backend, would that store the dirinfo permissions,
or are the backends only for the login/password data?