The current practice of uploading is quite insane. For
each write() you just inflate a buffer and do the real
upload on flush(). Try uploading a large file and it
will kill your machine for sure...
Logged In: YES
user_id=307089
I'm aware of the problem. I'll address this in a future version.
Logged In: YES
user_id=1053109
Could you give an idea of when you'll have the time to fix this?
Logged In: YES
user_id=1513774
This just happened to me, and it sent my load averages to 50 in the process.
Logged In: NO
Yep,
I'm using it to back up tars to another server, and it occupies 900+ MB of RAM; that's not good.
Logged In: NO
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14092 root 16 0 1970m 1.2g 1028 S 2.0 63.6 0:24.24 curlftpfs
And the memory isn't even released when cancelling!
Logged In: YES
user_id=290799
Originator: NO
Uhm, I also just ran into this problem with the Debian package of curlftpfs: uploading a tar that exceeds 1 GB in size just kills curlftpfs, although sufficient memory should still have been available.
However, using 1 GB of RAM to upload a file is not an option for me in any case. Is there any way to fix or work around this problem, or will I have to switch away from curlftpfs to another file upload solution?
Logged In: NO
Please define "future"...
Logged In: NO
Hello braga,
As a happy user of curlftpfs, I want to thank you for your time.
But, as others have pointed out, the current practice of uploading files makes it unusable for many of us.
Is there anything I can do to help you fix that? Unfortunately, coding in C is not in my skill set.
Best regards
Logged In: NO
Confirmed:
curlftpfs is unusable for uploading large files. It tries to cache the entire file in RAM, then crashes.
This was reported 2 years ago, so maybe it's not possible to fix?
Will it be fixed soon? If not, when will it be fixed? Do you need someone to fix it for you?
I see there has been no news for a year now; is this project dead?
Logged In: YES
user_id=1296790
Originator: NO
Will this issue be fixed someday? It has been open for 2 years and seems like a major one to me, because it breaks the upload of large files.