Having a problem with Filelocker 2.6 on RHEL 7. Smaller files upload fine, but uploading a large file (~800 MB) fails. This shows up in the error log:
[06/Dec/2017:16:22:02] HTTP Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 656, in respond
    response.body = self.handler()
  File "/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py", line 188, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", line 34, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/local/filelocker/controller/FileController.py", line 362, in upload
    if long(os.path.getsize(upFile.file_object.name)) != long(fileSizeBytes): #The file transfer stopped prematurely, take out of transfers and queue partial file for deletion
  File "/usr/lib64/python2.7/genericpath.py", line 49, in getsize
    return os.stat(filename).st_size
OSError: [Errno 2] No such file or directory: '/filelocker-vault/[0]fltmp669180.tmp'
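The thing that blows up is the post-upload size comparison at FileController.py line 362: the temp file is gone by the time getsize() runs. As a rough sketch (not Filelocker's actual code, and the helper name is mine), wrapping that call so a vanished temp file is treated as a failed transfer instead of an unhandled exception would at least turn the 500 into the normal cleanup path:

import errno
import os

def getsize_or_none(path):
    """Size of path in bytes, or None if the file has vanished
    (e.g. unlinked out from under us on an NFS mount)."""
    try:
        return os.path.getsize(path)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return None
        raise

# Hypothetical use at the failing spot in upload():
#   size = getsize_or_none(upFile.file_object.name)
#   if size is None or long(size) != long(fileSizeBytes):
#       # transfer stopped prematurely; queue partial file for deletion

That only papers over the symptom, of course; the real question is why the temp file disappears.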
For reference, our architecture is an F5 BIG-IP load balancing two backend Filelocker servers. The database lives on a separate host, naturally.
I've tried increasing Apache's ProxyTimeout (60 to 300) and Filelocker's server.socket_timeout in CherryPy (60 to 300). Neither change helped.
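For anyone checking the same knobs: ProxyTimeout is a mod_proxy directive in the Apache vhost that fronts Filelocker, and server.socket_timeout can be set in the CherryPy config file or programmatically. A minimal sketch of the programmatic form, with the value I tried:

import cherrypy

# Equivalent of "server.socket_timeout: 300" in the CherryPy config file;
# this is the raw socket timeout, in seconds, for the built-in HTTP server.
cherrypy.config.update({'server.socket_timeout': 300})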
Further investigation leads me to believe the problem is directly related to the NFS volume mounted from our NetApp filer. Creating a subdirectory on the mount and using that as the vault, instead of the mount's root, seems to resolve the large-upload problem.
When trying to upload to the root of the NFS mount, I see [0]fltmp.. created, and it disappears shortly thereafter, replaced by a new .nfs000... file. Then, when it comes time to clamscan and encrypt, Filelocker can't find the original file. The .nfs name is the telling part: when a file on an NFS mount is unlinked while a process still holds it open, the NFS client "silly-renames" it to .nfsXXXX instead of removing it. So something appears to be deleting the temp file while the upload handler still has it open.
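To convince myself that's what the .nfs000... file is, here's a quick standalone demo (run with the working directory on the NFS mount; the filename is made up for the test, and on a local filesystem the unlink just removes the file):

import os

# Demonstrate NFS "silly rename": unlinking a file that a process
# still has open makes the NFS client rename it to .nfsXXXXXXXX
# instead of removing it, until the last open handle is closed.
f = open('fltmp_sillyrename_demo.tmp', 'w')
f.write('some data')
f.flush()

os.unlink('fltmp_sillyrename_demo.tmp')   # on NFS: renamed, not removed
print([n for n in os.listdir('.') if n.startswith('.nfs')])

f.close()   # the .nfs file goes away once the handle is closed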
I'll keep investigating and will post more as I learn it.