From: nf2 <nf...@sc...> - 2007-11-16 14:41:47

AFAIK fuse doesn't forward lseek() calls to the filesystems. Therefore it's
not possible to let lseek fail with ESPIPE if the filesystem can't handle
seeks. Perhaps letting lseek fail would be more informative for an
application than letting the next read() or write() fail.

Because FUSE always expects to be able to write to arbitrary locations in a
file, it seems to be pretty hard to implement filesystems like FTP. For
instance curlftpfs caches the whole file in memory before writing it, which
causes trouble with big files.

regards,
Norbert
From: Aditya R. <adi...@gm...> - 2008-07-25 21:19:35

Hi. I couldn't find anything on the mailing list about this except that
lseek() fails silently rather than return an error value (which is the
behavior I see). But what is the reason for lseek() not being part of the
FUSE API, given that it is a frequently used call? Are there any ways to
get around this problem without modifying existing applications?

Thanks,
Aditya
From: Miklos S. <mi...@sz...> - 2008-07-25 21:30:08

On Fri, 25 Jul 2008, Aditya Rajgarhia wrote:
> I couldn't find anything on the mailing list about this except that
> lseek() fails silently rather than return an error value (which is the
> behavior I see).

Fails in which case?

> But what is the reason for lseek() not being part of the FUSE API,
> given that it is a frequently used call? Are there any ways to get
> around this problem without modifying existing applications?

What problem?

Thanks,
Miklos
From: Aditya R. <adi...@gm...> - 2008-07-25 23:42:45

On Fri, Jul 25, 2008 at 2:30 PM, Miklos Szeredi <mi...@sz...> wrote:
> On Fri, 25 Jul 2008, Aditya Rajgarhia wrote:
> > I couldn't find anything on the mailing list about this except that
> > lseek() fails silently rather than return an error value (which is
> > the behavior I see).
>
> Fails in which case?

When a user application calls lseek(). Maybe I understand incorrectly and
it doesn't fail -- I just saw the thread at
http://article.gmane.org/gmane.comp.file-systems.fuse.devel/5409/match=lseek
and assumed that an lseek() on a FUSE filesystem does nothing and doesn't
return an error either. If that's not the case, what does an lseek() do?

> > But what is the reason for lseek() not being part of the FUSE API,
> > given that it is a frequently used call? Are there any ways to get
> > around this problem without modifying existing applications?
>
> What problem?

As pointed out above, I guess I'm not completely sure if there's a problem.
In any case, what's the reason for lseek() not being part of the API?

Thanks,
Aditya
From: Miklos S. <mi...@sz...> - 2008-07-28 20:14:22

On Fri, 25 Jul 2008, Aditya Rajgarhia wrote:
> When a user application calls lseek(). Maybe I understand incorrectly
> and it doesn't fail -- I just saw the thread at
> http://article.gmane.org/gmane.comp.file-systems.fuse.devel/5409/match=lseek
> and assumed that an lseek() on a FUSE filesystem does nothing and
> doesn't return an error either. If that's not the case, what does an
> lseek() do?

It does exactly the same as on any other filesystem: set the file
position.

The problem in the referenced mail is exactly that lseek() on a fuse
filesystem doesn't fail, even if the filesystem might not be able to
support seeking.

Miklos
From: Aditya R. <adi...@gm...> - 2008-07-28 22:11:55

Is the current file position set by the FUSE daemon in the offset parameter
for read/write() then? I did notice that parameter in FUSE which is not
part of the system calls, but didn't realize it may be for this purpose.
If that's the case it would explain my confusion, which is that filesystems
built on FUSE may need to do additional housekeeping when the file position
is moved using lseek(), but if the position is passed to read/write() the
work can be done there instead.

Thanks for your reply -- let me know if my understanding is still
incorrect!

-Aditya

On Mon, Jul 28, 2008 at 1:14 PM, Miklos Szeredi <mi...@sz...> wrote:
> It does exactly the same as on any other filesystem: set the file
> position.
>
> The problem in the referenced mail is exactly that lseek() on a fuse
> filesystem doesn't fail, even if the filesystem might not be able to
> support seeking.

--
(309) 750 0861
adi...@gm...
From: Goswin v. B. <gos...@we...> - 2008-08-07 11:29:09

"Aditya Rajgarhia" <adi...@gm...> writes:
> Is the current file position set by the FUSE daemon in the offset
> parameter for read/write() then? I did notice that parameter in FUSE
> which is not part of the system calls but didn't realize it may be for
> this purpose.

What fuse actually implements is

    ssize_t pread(int fd, void *buf, size_t count, off_t offset);
    ssize_t pwrite(int fd, const void *buf, size_t count, off_t offset);

MfG
Goswin
From: Miklos S. <mi...@sz...> - 2008-07-29 09:46:03

On Mon, 28 Jul 2008, Aditya Rajgarhia wrote:
> Is the current file position set by the FUSE daemon in the offset
> parameter for read/write() then?

Exactly, yes.

Miklos
From: Miklos S. <mi...@sz...> - 2007-11-19 12:11:23

> AFAIK fuse doesn't forward lseek() calls to the filesystems. Therefore
> it's not possible to let lseek fail with ESPIPE if the filesystem can't
> handle seeks. Perhaps letting lseek fail would be more informative for
> an application than letting the next read() or write() fail.

Makes sense, added to todo. Another possible implementation would be to
add a flag returned by open, meaning that seeks on the file are not
allowed.

> Because FUSE always expects to be able to write to arbitrary locations
> in a file, it seems to be pretty hard to implement filesystems like FTP.
> For instance curlftpfs caches the whole file in memory before writing
> it, which causes trouble with big files.

I have actually worked on this issue:

http://bugzilla.novell.com/show_bug.cgi?id=281052

Miklos
From: nf2 <nf...@sc...> - 2007-11-19 13:06:24

Miklos Szeredi wrote:
> Makes sense, added to todo. Another possible implementation would be
> to add a flag returned by open, meaning that seeks on the file are not
> allowed.

sounds good!

> I have actually worked on this issue:
>
> http://bugzilla.novell.com/show_bug.cgi?id=281052

braga has sent me this patch yesterday. didn't know it was yours. i already
tried to do some improvements:

* there has been a "hang" when curl_easy_perform(fh->write_conn) in the
write thread fails (the ftp server of my webspace seems to kill the write
connection, but i don't understand what really happens). now - at least -
writing fails with EIO and doesn't hang anymore.

* ftpfs_truncate creates empty files with no permissions set -> this causes
problems with gedit and others (nautilus and gedit are my favorite "test
suite" for FUSE filesystems ;-)) *)

if you want, i can send you the new patch.

braga told me that he doesn't have time for working on curlftpfs at the
moment. perhaps i could ask him for SVN access to commit those patches.

i would really like to improve filesystems like curlftpfs and also fusesmb,
because otherwise my work on libfusi/gfuse-manager is pretty pointless.

cheers,
norbert

*) what i really miss is a test suite for FUSE filesystems for complicated
operations like atomic save with backup, copying files on the same server
etc...
From: Miklos S. <mi...@sz...> - 2007-11-19 14:08:21

> braga has sent me this patch yesterday. didn't know it was yours. i
> already tried to do some improvements:
> [...]
> if you want, i can send you the new patch.
>
> braga told me that he doesn't have time for working on curlftpfs at the
> moment. perhaps i could ask him for SVN access to commit those patches.

That would be great. I just planned to apply the patch to the SUSE
package, but if you could work with Braga, and release a fixed upstream
version, it would be much better for everyone.

> *) what i really miss is a test suite for FUSE filesystems for
> complicated operations like atomic save with backup, copying files on
> the same server etc...

Yes, that would be nice.

Thanks,
Miklos
From: Robson B. A. <rob...@gm...> - 2007-11-19 14:23:42

Hi Miklos,

I didn't apply your patch because I tested it and saw that it had the bugs
that Norbert mentioned. I started tracking them down but ran out of time.
I also think it's great Norbert is willing to work on that and would be
more than happy to accept him as a contributor to curlftpfs.

On Nov 19, 2007 12:08 PM, Miklos Szeredi <mi...@sz...> wrote:
> That would be great. I just planned to apply the patch to the SUSE
> package, but if you could work with Braga, and release a fixed
> upstream version, it would be much better for everyone.

--
[]s,
Robson

That's the difference between me and the rest of the world! Happiness
isn't good enough for me! I demand euphoria!
-- Calvin
From: nf2 <nf...@sc...> - 2007-11-19 15:31:56

with wireshark i found the reason why writing to my ftp server failed:

451 Failure writing to local file

which means: my webspace is full.

with my changes the error gets reported for big files, but i have a problem
with small files which only need one ftpfs_write() call (like echo "hello
world" > file). this is really tricky, because there is no subsequent
ftpfs_write() to return an error. and returning an error in
ftpfs_release() won't work i guess.

regards
norbert

Robson Braga Araujo wrote:
> I didn't apply your patch because I tested it and saw that it had the
> bugs that Norbert mentioned. I started tracking them down but ran out
> of time. I also think it's great Norbert is willing to work on that
> and would be more than happy to accept him as a contributor to
> curlftpfs.
From: Mark W. <mwa...@gm...> - 2007-11-19 16:27:05

On Nov 19, 2007 10:31 AM, nf2 <nf...@sc...> wrote:
> with my changes the error gets reported for big files, but i have a
> problem with small files which only need one ftpfs_write() call (like
> echo "hello world" > file). this is really tricky, because there is no
> subsequent ftpfs_write() to return an error. and returning an error in
> ftpfs_release() won't work i guess.

I'm not at all familiar with ftpfs, but it sounds like you might want to
look into returning an error on the flush() operation. It will always be
called right before a release (although it is also called some other
times).

Cheers,
Mark
From: Miklos S. <mi...@sz...> - 2007-11-19 16:28:40

> with wireshark i found the reason why writing to my ftp server failed:
>
> 451 Failure writing to local file
>
> which means: my webspace is full.
>
> with my changes the error gets reported for big files, but i have a
> problem with small files which only need one ftpfs_write() call (like
> echo "hello world" > file). this is really tricky, because there is no
> subsequent ftpfs_write() to return an error. and returning an error in
> ftpfs_release() won't work i guess.

It should probably be returning the error in ftpfs_flush(). That means
the application will get the error from close(2).

Unfortunately many apps don't check the return value of close() so that
may not help (the shell should be OK, so your example would actually
work).

The only other possibility is to make writes synchronous, but I don't
think that can be done with libcurl.

Miklos
From: nf2 <nf...@sc...> - 2007-11-19 18:34:12

Miklos Szeredi wrote:
> It should probably be returning the error in ftpfs_flush(). That
> means, the application will get the error from close(2).
>
> The only other possibility is to make writes synchronous, but I don't
> think that can be done with libcurl.

this really sucks! i just realized that for tiny files the error can not
be detected before finishing the libcurl transfer, which obviously can't
be before ftpfs_release(). and returning -EIO in ftpfs_release() is
ignored by FUSE. ftpfs_flush() can't trigger the error, because libcurl
doesn't have a flush() function.

norbert
From: Miklos S. <mi...@sz...> - 2007-11-19 19:23:34

> this really sucks! i just realized that for tiny files the error can
> not be detected before finishing the libcurl transfer, which obviously
> can't be before ftpfs_release().

It could be finished in ftpfs_flush(). In the vast majority of cases
->release() will follow immediately after ->flush(). In the minority
there could be more reads or writes, but in those exceptional cases the
transfer could be restarted, no?

Miklos
From: nf2 <nf...@sc...> - 2007-11-19 20:15:11

Miklos Szeredi wrote:
> It could be finished in ftpfs_flush(). In the vast majority of cases
> ->release() will follow immediately after ->flush(). In the minority
> there could be more reads or writes, but in those exceptional cases
> the transfer could be restarted, no?

ftp-REST, but i don't know if that's always enabled for uploads...

in which situations do those multiple close() calls usually happen?
forking a process while writing a file?

norbert
From: Miklos S. <mi...@sz...> - 2007-11-19 20:35:10

> ftp-REST, but i don't know if that's always enabled for uploads...

It's not, that's why all this complication is needed.

No, I meant: start a new transfer, like after open. Since now it's not an
empty file, the optimized streaming thing will not work. But the old
buffered writing should still work.

> in which situations do those multiple close() calls usually happen?
> forking a process while writing a file?

Yes, that kind of thing. Forking will not result in close(), but if the
forked process closes the descriptor and the parent continues to write to
the file, then this could happen.

Miklos
From: nf2 <nf...@sc...> - 2007-11-19 21:09:07

Miklos Szeredi wrote:
> It's not, that's why all this complication is needed.
>
> No, I meant: start a new transfer, like after open. Since now it's
> not an empty file, the optimized streaming thing will not work. But
> the old buffered writing should still work.

but this would cause the memory allocation problems again.

perhaps FTP for FUSE requires a totally different approach: completely
decoupling the FUSE filesystem from the actual upload:

* an on-disk cache specified on the curlftpfs command line (-o cachedir=).
curlftpfs will just copy the data to the disk cache.

* a GUI for watching/cancelling the uploads.

apart from the memory problem, the "old buffered writing" is very
problematic for a second reason, especially on desktops: everything
happens in a single operation - the close() function. therefore it's
impossible to cancel a transfer. this is painful when someone - for
instance - selected the wrong DVD-image to upload and cannot correct that.

norbert
From: Miklos S. <mi...@sz...> - 2007-11-20 10:16:46

> but this would cause the memory allocation problems again.

Yes, but only in rare cases. And probably in those cases the file will
not be large, so it won't matter.

> perhaps FTP for FUSE requires a totally different approach:
>
> completely decoupling the FUSE filesystem from the actual upload:
>
> * an on-disk cache specified on the curlftpfs command line (-o
> cachedir=). curlftpfs will just copy the data to the disk cache.

A disk cache may be a good idea, but it won't solve all the problems.
Disk may fill up just as memory can, especially on PDA's and the like.

> * a GUI for watching/cancelling the uploads.

I think a GUI is a different scope. If it can't work without a GUI, it's
really no different than the gnome-vfs/KIO stuff.

> apart from the memory problem, the "old buffered writing" is very
> problematic for a second reason, especially on desktops: everything
> happens in a single operation - the close() function. therefore it's
> impossible to cancel a transfer. this is painful when someone - for
> instance - selected the wrong DVD-image to upload and cannot correct
> that.

Fuse can handle interrupts. I think that's the correct way to deal with
cancelling operations.

But I think we needn't really worry about DVD size images being uploaded
all-at-once, since the streaming upload will take care of the common case.

Miklos
From: nf2 <nf...@sc...> - 2007-11-23 11:31:50

Miklos Szeredi wrote:
> Fuse can handle interrupts. I think that's the correct way to deal
> with cancelling operations.
>
> But I think we needn't really worry about DVD size images being
> uploaded all-at-once, since the streaming upload will take care of the
> common case.

i don't understand how a client application hanging in close() can send
an interrupt to FUSE.

i think the "old buffered writing" should be completely removed, and only
streaming writes continued with APPE or REST after a premature flush()
should be used (because the premature flush might also occur in very big
files).

btw, i have committed your patch and my recent changes to
http://code.google.com/p/curlftpfs/.

regards,
norbert
From: Miklos S. <mi...@sz...> - 2007-11-24 09:15:30

> i don't understand how a client application hanging in close() can
> send an interrupt to FUSE.
>
> i think the "old buffered writing" should be completely removed, and
> only streaming writes continued with APPE

Yeah, APPE is good, because unlike REST, it should be supported on all
servers. The only problem is if the file is extended or truncated on the
remote end, because then APPE may do the wrong thing. But perhaps we can
live with this.

> or REST after a premature flush() should be used (because the
> premature flush might also occur in very big files).

In theory it might. In practice, I don't know.

Miklos
From: nf2 <nf...@sc...> - 2007-11-25 03:36:29

Miklos Szeredi wrote:
> Yeah, APPE is good, because unlike REST, it should be supported on all
> servers. The only problem is if the file is extended or truncated on
> the remote end, because then APPE may do the wrong thing. But perhaps
> we can live with this.

i have implemented APPE with checking the size of the written file in
flush() to see if the file has not been messed up.

> > or REST after a premature flush() should be used (because the
> > premature flush might also occur in very big files).
>
> In theory it might. In practice, I don't know.

premature flush() calls seem to happen quite often. for instance when
nautilus copies a file. but with APPE it works.

the problem is that many text editors like gedit save files with
open(O_RDWR) instead of open(O_WRONLY), and without buffering files it's
hard to deal with that case. because i think file buffering should be
removed, i would like to understand why they save with open(O_RDWR).

actually, i believe only the cases where a file is created or truncated
to zero before writing can be handled with FTP properly.

norbert
From: nf2 <nf...@sc...> - 2007-11-25 18:14:08

nf2 wrote:
> i have implemented APPE with checking the size of the written file in
> flush() to see if the file has not been messed up.
>
> the problem is that many text editors like gedit save files with
> open(O_RDWR) instead of open(O_WRONLY), and without buffering files
> it's hard to deal with that case.

... however, i have completely removed buffered writes and also made
open(O_RDWR) work - at least for cases where the application doesn't
switch between reads and writes. also, writing to an existing file works,
but only when the application has truncated it to zero.

text editors like gedit, Kate, leafpad seem to work with this solution.
openoffice is a bit more picky - there are problems with editing *.rtf and
*.txt documents, because it tries to reread the file from the beginning
with the same file-handle after writing. vim complains about a problem
with its swap file - it fails, because it does a non-sequential write.

nevertheless, file saving and copying, especially for GUI applications and
file-managers, now works a lot smoother than with buffered writes. i have
committed the changes to http://code.google.com/p/curlftpfs/ - feel free
to check out and test.

but, honestly, i think applications will need a way to find out whether a
filesystem only supports sequential writes. one possibility would be to
let lseek(0) fail. open(O_RDWR) should also fail. that's better than any
workaround: people will complain that applications don't work with
curlftpfs and applications will be fixed to deal with this special case.

another possibility would be extending the contract of open(): a new flag
O_SEQUENTIAL. without passing this flag, open() would fail on filesystems
which only allow sequential writes.

the third possibility would be to return information about file system
capabilities in fsstat.

cheers,
norbert