From: Sam D. <sei...@gm...> - 2008-09-22 10:00:48
|
Hi all I've been trying to use a fuse filesystem (mp3fs in this case) with apache to host some files. Mp3fs is a filesystem using fuse that converts flac files into mp3 files on the fly. Apache gets permission denied errors when trying to access the mp3fs mount point, but I can read the files myself just fine when it is mounted into the /var/www/ directory. I talked to the developer of mp3fs and he is just as stumped as I am, he suggested that it might be a fuse issue, hence why I am asking here :-). Does fuse have any known issues playing nicely with apache or is it more likely a bug in mp3fs? Sam |
From: Heikki O. <sh...@za...> - 2008-09-21 12:37:28
|
On Sun, Sep 21, 2008 at 11:56:13AM +0200, Tomas M wrote:
> I can see it's possible to add sshfs mount to /etc/fstab, but it
> still requires password to be typed to console.
>
> Is it possible to auto mount sshfs without typing password?

Yes, use ssh-keygen with empty passphrase.

--
Heikki Orsila
hei...@ik...
http://www.iki.fi/shd
From: Tomas M <to...@sl...> - 2008-09-21 11:59:43
|
Hello. I can see it's possible to add sshfs mount to /etc/fstab, but it still requires password to be typed to console. Is it possible to auto mount sshfs without typing password? Thank you. |
From: Chuanwen W. <wc...@gm...> - 2008-09-21 10:01:18
|
I think I have made a mistake. In fact, "write" callback can also write as much as 128K.

[...]
unique: 93055, opcode: WRITE (16), nodeid: 39, insize: 131152
WRITE[0] 131072 bytes to 2621440
WRITE[0] 131072 bytes
unique: 93055, success, outsize: 24
unique: 93056, opcode: WRITE (16), nodeid: 39, insize: 131152
WRITE[0] 131072 bytes to 2752512
WRITE[0] 131072 bytes
unique: 93056, success, outsize: 24
unique: 93057, opcode: WRITE (16), nodeid: 39, insize: 131152
WRITE[0] 131072 bytes to 2883584
WRITE[0] 131072 bytes
unique: 93057, success, outsize: 24
[...]

> $ fusexmp -d -o big_writes tmp
> [...]
> unique: 8, opcode: WRITE (16), nodeid: 2, insize: 8272
> WRITE[0] 8192 bytes to 0
> WRITE[0] 8192 bytes
> unique: 8, success, outsize: 24
> unique: 9, opcode: WRITE (16), nodeid: 2, insize: 8272
> WRITE[0] 8192 bytes to 8192
> WRITE[0] 8192 bytes
> unique: 9, success, outsize: 24
> unique: 10, opcode: WRITE (16), nodeid: 2, insize: 8272
> WRITE[0] 8192 bytes to 16384
> WRITE[0] 8192 bytes
> unique: 10, success, outsize: 24
> [...]

That was my fault and I'm sorry about that. And now I think the problem is fixed. :)

--
wcw
From: Ankush D. <ank...@gm...> - 2008-09-21 06:08:43
|
Sir, the site http://fuse.sourceforge.net/wiki/ is not working. I am planning to develop a filesystem that supports a versioning feature, and I need to learn FUSE for it. Can you please provide me with basic documentation for implementing FUSE? Thanks, Ankush
From: Chuanwen W. <wc...@gm...> - 2008-09-21 03:53:27
|
Hi, Brian Wang! Thanks for your help! And thanks to FUSE, it makes this work much easier!

On Wed, Sep 17, 2008 at 11:51 AM, Brian Wang <ywa...@ho...> wrote:
> try "big_writes" option. If you have all the big_write patches applied. I
> think the big_writes patches should be already in the most recent cvs
> version. I am not sure when it went to the kernel. but 2.6.27 should have
> it.

I have tried fuse-2.8.0-pre1 and kernel 2.6.26-r1. And now, I have a BIGGER "write":

$ fusexmp -d -o big_writes tmp
[...]
unique: 8, opcode: WRITE (16), nodeid: 2, insize: 8272
WRITE[0] 8192 bytes to 0
WRITE[0] 8192 bytes
unique: 8, success, outsize: 24
unique: 9, opcode: WRITE (16), nodeid: 2, insize: 8272
WRITE[0] 8192 bytes to 8192
WRITE[0] 8192 bytes
unique: 9, success, outsize: 24
unique: 10, opcode: WRITE (16), nodeid: 2, insize: 8272
WRITE[0] 8192 bytes to 16384
WRITE[0] 8192 bytes
unique: 10, success, outsize: 24
[...]

As you can see, the "write" callback now writes 8K each time, not 4K as I mentioned in the previous email. But I think 8K is still not big enough. It would be better if the "write" callback could write 128K each time, just like the "read" callback. So can I configure it to have even bigger writes?

Thanks in advance!

--
wcw
From: CITEC-PostMaster <mul...@gm...> - 2008-09-20 15:26:23
|
I have found the tutorial mentioned about automounting sshfs with autofs: http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/ I noticed that at step 4 there is a parameter named "ghost". What does it do, and why do I need to put it there? I have tried 'man' and googling already, but still don't see any answer.

/mnt/ssh /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost
From: Andre-John M. <aj...@sy...> - 2008-09-20 15:23:18
|
Hi,

I am dealing with IPv6 based hosts and I decided to see if I could use sshfs for mounting a remote drive. While I find that it can handle named IPv6 addresses, I found that it could not handle numerical IPv6 addresses; for example, the following fails:

sshfs -d mysuser@[::1]:/Users/ /test/ -oreconnect,volname=test

On the other hand, scp supports the same remote destination format, so this suggested that the issue is not a limitation of ssh. Investigation reveals that this is an issue with the parsing of the parameter, since the current algorithm assumes the only colon is the separator between the host and path components. Once the parsing issue is dealt with, ssh is quite capable of accepting a host connection request of the form: user@::1

I would have provided a patch, but my C is rusty. The pseudo-code solution would be (influenced by looking at scp):

1. Extract the user if present (put it in tuser) and remove it from the input string.
2. If the host string starts with '[', look for ']', then look for ':' and treat everything after it as the path and everything before as the host.
3. Remove the square brackets from the host name.
4. Create the sshfs.host string such that we append tuser (and '@') if it is not null, and then the hostname.

This means that when we call ssh we would pass myuser@::1, ::1, myuser@localhost or localhost.

André

Note: my tests were done using sshfs 2.1 on MacOS X, with the patches provided by Amit Singh in MacFUSE.
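[Editor's note] The parsing steps outlined in the message above can be sketched as follows. This is purely an illustration of the proposed algorithm, not the actual sshfs C code; the function and variable names are invented:

```python
def split_sshfs_target(spec):
    """Split '[user@]host:path', allowing '[...]'-bracketed IPv6 hosts.

    Illustrative sketch of the algorithm proposed on the list;
    not taken from sshfs itself.
    """
    user = None
    if "@" in spec:
        # Step 1: extract the optional user and strip it from the input.
        user, spec = spec.split("@", 1)
    if spec.startswith("["):
        # Step 2/3: bracketed numeric IPv6 address; the host ends at ']',
        # and the brackets themselves are dropped.
        end = spec.index("]")
        host = spec[1:end]
        rest = spec[end + 1:]
        path = rest[1:] if rest.startswith(":") else ""
    else:
        # Plain host name: the first ':' separates host from path.
        host, _, path = spec.partition(":")
    # Step 4: re-attach the user so ssh receives e.g. 'myuser@::1'.
    target = f"{user}@{host}" if user else host
    return target, path
```

With this split, `split_sshfs_target("mysuser@[::1]:/Users/")` yields the host string `mysuser@::1` and the path `/Users/`, which is exactly the form the message says ssh accepts.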
From: Sonal T. <son...@gm...> - 2008-09-18 04:24:10
|
Hi All, I am an engineering student and new to the NILFS group. Is there an implementation document on NILFS which describes how to implement the various structures like inodes, segments, etc.? I have to start everything right from scratch in order to write a whole log-structured file system. Thanks in Advance, Sonal.
From: Robert P <zen...@ya...> - 2008-09-17 23:29:32
|
Can someone explain to me why INIT is shown in the debug output twice? Below I am running the example program that came with 2.7.3. I have also noticed weird behavior in gdb where it seems that there are two threads stepping through my code. Is this related? TIA

$ ./hello -d -s /tmp/fuse
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x00000003
max_readahead=0x00020000
INIT: 7.8
flags=0x00000001
max_readahead=0x00020000
max_write=0x00020000
unique: 1, error: 0 (Success), outsize: 40
From: P. E. R. <ped...@gm...> - 2008-09-17 20:29:48
|
Right! Thanks guys. What happened with the fuse wiki?

Cheers

2008/9/17 Chuanwen Wu <wc...@gm...>

> On Thu, Sep 11, 2008 at 9:31 PM, Goswin von Brederlow <gos...@we...> wrote:
> > "Pedro Eugênio Rocha" <ped...@gm...> writes:
> >
> >> I have some doubts about how fuse executes its callbacks (get_attr, read_dir, etc.) after it daemonizes. If I just call fuse_main with the default parameters, just like the hello example on the fuse page, are the callbacks executed sequentially, or is each callback executed on a different thread? Is there a way to configure it?
> >>
> >> Sorry for the dumb question, but I searched a lot and didn't find the answer...
> >>
> >> Cheers,
> >
> > You can disable threading (-s) and then they will be done sequentially. Otherwise they will be done concurrently as much as possible. How much parallelism can be done depends on the actions, as they do need to lock parts (or sometimes all) of the fs tree.
>
> And as I noticed in the output of fuse (with -d but not -s), the "write" operation callback was always executed sequentially. I mean only the "write" operation callback; others may be parallel.
>
> Please correct me if I'm wrong.
>
> --
> wcw

--
Pedro Eugênio Rocha
Linux user #473848
C3SL - Centro de Computação Científica e Software Livre
UFPR - Ciência da Computação
From: Brian W. <ywa...@ho...> - 2008-09-17 03:52:54
|
Try the "big_writes" option, if you have all the big_writes patches applied. I think the big_writes patches should already be in the most recent cvs version. I am not sure when it went into the kernel, but 2.6.27 should have it.

----- Original Message -----
From: "Chuanwen Wu" <wc...@gm...>
To: <fus...@li...>
Sent: Tuesday, September 16, 2008 9:45 PM
Subject: Re: [fuse-devel] let fuse has BIG WRITE?

> Maybe I didn't describe it clearly.
>
> In FUSE, the "read" callback can read (4KB, 8KB, 16KB ... 128KB of) data each time:
> /**********************************************************/
> unique: 128, opcode: READ (15), nodeid: 24, insize: 80
> READ[0] 16384 bytes from 0 /* 16K */
> READ[0] 16384 bytes
> unique: 128, error: 0 (Success), outsize: 16400
> unique: 129, opcode: READ (15), nodeid: 24, insize: 80
> READ[0] 32768 bytes from 16384 /* 32K */
> READ[0] 32768 bytes
> unique: 129, error: 0 (Success), outsize: 32784
> unique: 130, opcode: READ (15), nodeid: 24, insize: 80
> READ[0] 65536 bytes from 49152 /* 64K */
> READ[0] 65536 bytes
> unique: 130, error: 0 (Success), outsize: 65552
> unique: 131, opcode: READ (15), nodeid: 24, insize: 80
> READ[0] 131072 bytes from 114688 /* 128K */
> READ[0] 131072 bytes
> unique: 131, error: 0 (Success), outsize: 131088
> [...]
> /******************************************************/
>
> But the "write" callback always writes 4KB each time:
> /******************************************************/
> unique: 37, opcode: WRITE (16), nodeid: 5, insize: 4160
> WRITE[0] 4096 bytes to 0 /* 4K */
> WRITE[0] 4096 bytes
> unique: 37, error: 0 (Success), outsize: 24
> unique: 38, opcode: WRITE (16), nodeid: 5, insize: 4160
> WRITE[0] 4096 bytes to 4096 /* 4K */
> WRITE[0] 4096 bytes
> unique: 38, error: 0 (Success), outsize: 24
> [...]
> /******************************************************/
>
> Now, I want to write more data (maybe 64K, 128K or even larger) in the "write" callback. So, is it possible to do that?
>
> Any help will be appreciated!
>
> On Mon, Sep 15, 2008 at 10:55 PM, Chuanwen Wu <wc...@gm...> wrote:
>> Hi, I ran fusexmp, and I found that, no matter how many bytes the user program writes each time, the "xmp_write" operation actually writes 4096 bytes. Just as the output below:
>>
>> unique: 37, opcode: WRITE (16), nodeid: 5, insize: 4160
>> WRITE[0] 4096 bytes to 0
>> WRITE[0] 4096 bytes
>> unique: 37, error: 0 (Success), outsize: 24
>> unique: 38, opcode: WRITE (16), nodeid: 5, insize: 4160
>> WRITE[0] 4096 bytes to 4096
>> WRITE[0] 4096 bytes
>> unique: 38, error: 0 (Success), outsize: 24
>> unique: 39, opcode: WRITE (16), nodeid: 5, insize: 4160
>> WRITE[0] 4096 bytes to 8192
>> WRITE[0] 4096 bytes
>> unique: 39, error: 0 (Success), outsize: 24
>> unique: 40, opcode: WRITE (16), nodeid: 5, insize: 4160
>> WRITE[0] 4096 bytes to 12288
>> WRITE[0] 4096 bytes
>> unique: 40, error: 0 (Success), outsize: 24
>> [...]
>>
>> I am not very sure whether the 4096-byte limitation is from FUSE or not, and I am looking for some way to let FUSE write more data each time, like the read operation - 128K each time. So, is there any way to do this?
>>
>> Thanks in advance!
>> --
>> wcw
From: Chuanwen W. <wc...@gm...> - 2008-09-17 03:44:52
|
Maybe I didn't describe it clearly.

In FUSE, the "read" callback can read (4KB, 8KB, 16KB ... 128KB of) data each time:

/**********************************************************/
unique: 128, opcode: READ (15), nodeid: 24, insize: 80
READ[0] 16384 bytes from 0 /* 16K */
READ[0] 16384 bytes
unique: 128, error: 0 (Success), outsize: 16400
unique: 129, opcode: READ (15), nodeid: 24, insize: 80
READ[0] 32768 bytes from 16384 /* 32K */
READ[0] 32768 bytes
unique: 129, error: 0 (Success), outsize: 32784
unique: 130, opcode: READ (15), nodeid: 24, insize: 80
READ[0] 65536 bytes from 49152 /* 64K */
READ[0] 65536 bytes
unique: 130, error: 0 (Success), outsize: 65552
unique: 131, opcode: READ (15), nodeid: 24, insize: 80
READ[0] 131072 bytes from 114688 /* 128K */
READ[0] 131072 bytes
unique: 131, error: 0 (Success), outsize: 131088
[...]
/******************************************************/

But the "write" callback always writes 4KB each time:

/******************************************************/
unique: 37, opcode: WRITE (16), nodeid: 5, insize: 4160
WRITE[0] 4096 bytes to 0 /* 4K */
WRITE[0] 4096 bytes
unique: 37, error: 0 (Success), outsize: 24
unique: 38, opcode: WRITE (16), nodeid: 5, insize: 4160
WRITE[0] 4096 bytes to 4096 /* 4K */
WRITE[0] 4096 bytes
unique: 38, error: 0 (Success), outsize: 24
[...]
/******************************************************/

Now, I want to write more data (maybe 64K, 128K or even larger) in the "write" callback. So, is it possible to do that?

Any help will be appreciated!

On Mon, Sep 15, 2008 at 10:55 PM, Chuanwen Wu <wc...@gm...> wrote:
> Hi, I ran fusexmp, and I found that, no matter how many bytes the user
> program writes each time, the "xmp_write" operation actually writes
> 4096 bytes. Just as the output below:
>
> unique: 37, opcode: WRITE (16), nodeid: 5, insize: 4160
> WRITE[0] 4096 bytes to 0
> WRITE[0] 4096 bytes
> unique: 37, error: 0 (Success), outsize: 24
> unique: 38, opcode: WRITE (16), nodeid: 5, insize: 4160
> WRITE[0] 4096 bytes to 4096
> WRITE[0] 4096 bytes
> unique: 38, error: 0 (Success), outsize: 24
> unique: 39, opcode: WRITE (16), nodeid: 5, insize: 4160
> WRITE[0] 4096 bytes to 8192
> WRITE[0] 4096 bytes
> unique: 39, error: 0 (Success), outsize: 24
> unique: 40, opcode: WRITE (16), nodeid: 5, insize: 4160
> WRITE[0] 4096 bytes to 12288
> WRITE[0] 4096 bytes
> unique: 40, error: 0 (Success), outsize: 24
> [...]
>
> I am not very sure whether the 4096-byte limitation is from FUSE or not,
> and I am looking for some way to let FUSE write more data each time,
> like the read operation - 128K each time.
> So, is there any way to do this?
>
> Thanks in advance!
> --
> wcw

--
wcw
From: Chuanwen W. <wc...@gm...> - 2008-09-17 03:29:24
|
On Thu, Sep 11, 2008 at 9:31 PM, Goswin von Brederlow <gos...@we...> wrote:
> "Pedro Eugênio Rocha" <ped...@gm...> writes:
>
>> I have some doubts about how fuse execute its callbacks (get_attr, read_dir,
>> etc.) after it daemonizes. If i just call fuse_main with the default
>> parameters, just like the hello example on the fuse's page, the callbacks
>> are executed sequentially, or each callback is executed on a different
>> thread? Is there a way to configure it?
>>
>> Sorry for the dumb question, but i search o lot and didnt't find the
>> anwser...
>>
>> Cheers,
>
> You can disable threading (-s) and then they will be done
> sequentially. Otherwise they will be done concurrently as much as
> possible. How much parallelism can be done depends on the actions as
> they do need to lock parts (or sometimes all) of the fs tree.

And as I noticed in the output of fuse (with -d but not -s), the "write" operation callback was always executed sequentially. I mean only the "write" operation callback; others may be parallel.

Please correct me if I'm wrong.

--
wcw
From: Brian W. <ywa...@ho...> - 2008-09-15 15:58:18
|
These are what I found about FUSE over NFS:

1. You have to use "use_ino" and "noforget" for the FUSE mount. If "noforget" is not used, you will get ESTALE when the inode is purged from the cache.
2. If the server reboots (or nfs restarts), you will get ESTALE. You have to remount from the client side.
3. If you restart fuse without restarting NFS, it is even worse; your NFS client may have to reboot for the NFS mount to recover (umount/remount won't work most of the time).

The FUSE high-level implementation implements the "." and ".." lookup already, but the lookup only works with the "use_ino, noforget" options (no restart/reboot, of course) because the FUSE high-level implementation generates its own inode number for each file/dir and they are not persistent inode numbers.

> From: mu...@no...
> To: bs...@q-...
> Date: Mon, 15 Sep 2008 11:25:10 -0400
> CC: fus...@li...
> Subject: Re: [fuse-devel] Stale NFS file handle FUSE 2.8.0-pre1 Linux 2.6.27-rc6
>
> On 15-Sep-08, at 11:10 AM, Bernd Schubert wrote:
> > On Monday 15 September 2008 05:28:39 Anoop Karollil wrote:
> >>
> >> I am still seeing stale NFS file handles when using FUSE 2.8.0-pre1 on Linux 2.6.27-rc6. In a previous post Miklos advised the 'use_ino' with a 'noforget' option after applying a patch. But I think the noforget option is specific to that patch and not implemented in FUSE in 2.8.0-pre1.
> >
> > I'm not so sure if there is a final solution at all presently. NFS will only work properly if you have support for inode generation numbers, but IMHO fuse doesn't support these (yet?).
>
> As far as I know, NFS is only fully supported if you use the low level interface and provide the necessary interfaces to look up a file by inode number, and its parent.
>
> When the NFS client mounts a file-system, all new operations will start from the root inode in the file-system. This will work fine with fuse. The NFS client's kernel will cache the file-system structure. The NFS server's kernel will also cache the file-system's structure enough to be able to handle most requests from the NFS client, which will come in by inode number.
>
> Should the NFS server stop (or be rebooted), and/or the serving file-system be unmounted and mounted again, then the kernel will lose all cached information, including the structure of the file-system (inode numbers and parents).
>
> The NFS client, however, will go merrily on, and it will expect that the server can respond to new requests based on the file-system id number and the inode number (and generation). When the NFS client makes a new request which is by inode number, the NFS server tries to find that inode. If it's not in the cache, then it requests information from the file-system by asking for the file by number, and will also fill in the structure of the file-system by asking for the parent directories. When they cannot be found, either because the file-system is not implementing lookup by inode number (name is "."), or lookup of the parent inode number (name is ".."), then the NFS server will receive the ESTALE failure from the FUSE kernel module, and return this to the NFS client.
>
> Not sure whether these functions are implemented within the high-level interface, but if not, then this is why NFS still doesn't work as you might expect.
>
> Regards,
>
> John.
From: John M. <mu...@no...> - 2008-09-15 15:25:25
|
On 15-Sep-08, at 11:10 AM, Bernd Schubert wrote:
> On Monday 15 September 2008 05:28:39 Anoop Karollil wrote:
>>
>> I am still seeing stale NFS file handles when using FUSE 2.8.0-pre1
>> on Linux 2.6.27-rc6. In a previous post Miklos advised the 'use_ino'
>> with a 'noforget' option after applying a patch. But I think the noforget
>> option is specific to that patch and not implemented in FUSE in
>> 2.8.0-pre1.
>
> I'm not so sure if there is a final solution at all presently. NFS
> will only work properly if you have support for inode generation
> numbers, but IMHO fuse doesn't support these (yet?).

As far as I know, NFS is only fully supported if you use the low level interface and provide the necessary interfaces to look up a file by inode number, and its parent.

When the NFS client mounts a file-system, all new operations will start from the root inode in the file-system. This will work fine with fuse. The NFS client's kernel will cache the file-system structure. The NFS server's kernel will also cache the file-system's structure enough to be able to handle most requests from the NFS client, which will come in by inode number.

Should the NFS server stop (or be rebooted), and/or the serving file-system be unmounted and mounted again, then the kernel will lose all cached information, including the structure of the file-system (inode numbers and parents).

The NFS client, however, will go merrily on, and it will expect that the server can respond to new requests based on the file-system id number and the inode number (and generation). When the NFS client makes a new request which is by inode number, the NFS server tries to find that inode. If it's not in the cache, then it requests information from the file-system by asking for the file by number, and will also fill in the structure of the file-system by asking for the parent directories. When they cannot be found, either because the file-system is not implementing lookup by inode number (name is "."), or lookup of the parent inode number (name is ".."), then the NFS server will receive the ESTALE failure from the FUSE kernel module, and return this to the NFS client.

Not sure whether these functions are implemented within the high-level interface, but if not, then this is why NFS still doesn't work as you might expect.

Regards,

John.
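[Editor's note] The failure mode described above — an NFS client holding an inode-number-based handle across a server restart while the filesystem hands out non-persistent inode numbers — can be illustrated with a toy model. This is purely illustrative; the class and names are invented and this is not NFS or FUSE code:

```python
class ToyServer:
    """Toy model of a server that resolves file handles by inode number.

    Mimics a filesystem without persistent inode numbers: every 'mount'
    assigns fresh numbers, so handles cached by a client go stale.
    """
    _next_ino = [1]  # shared counter: inode numbers are never reused

    def __init__(self, names):
        # At mount time, assign a fresh inode number to every file.
        self.by_ino = {}
        for name in names:
            ino = ToyServer._next_ino[0]
            ToyServer._next_ino[0] += 1
            self.by_ino[ino] = name

    def handle_for(self, name):
        # The client obtains a handle once and keeps using it.
        return next(ino for ino, n in self.by_ino.items() if n == name)

    def lookup(self, ino):
        # After a 'reboot' the old inode numbers are gone -> ESTALE.
        return self.by_ino.get(ino, "ESTALE")


server = ToyServer(["mail/inbox"])
handle = server.handle_for("mail/inbox")  # client caches the handle
server = ToyServer(["mail/inbox"])        # server 'reboots' and remounts
print(server.lookup(handle))              # old handle no longer resolves
```

The remounted server assigns a new inode number to the same file, so the client's cached handle maps to nothing — the toy analogue of the ESTALE errors discussed in this thread.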
From: Bernd S. <bs...@q-...> - 2008-09-15 15:10:11
|
On Monday 15 September 2008 05:28:39 Anoop Karollil wrote:
> Hello,
>
> I am still seeing stale NFS file handles when using FUSE 2.8.0-pre1 on
> Linux 2.6.27-rc6. In a previous post Miklos advised the 'use_ino' with a
> 'noforget' option after applying a patch. But I think the noforget
> option is specific to that patch and not implemented in FUSE in 2.8.0-pre1.
>
> So what do I do in 2.8.0-pre1 to get around the stale NFS file handle
> problem?

I'm not so sure if there is a final solution at all presently. NFS will only work properly if you have support for inode generation numbers, but IMHO fuse doesn't support these (yet?). Actually you are even lucky if you get stale nfs file handles; it is much worse if the nfs client doesn't detect that the inode number has been recycled...

Cheers,
Bernd

--
Bernd Schubert
Q-Leap Networks GmbH
From: Chuanwen W. <wc...@gm...> - 2008-09-15 14:55:33
|
Hi, I ran fusexmp, and I found that, no matter how many bytes the user program writes each time, the "xmp_write" operation actually writes 4096 bytes. Just as the output below:

unique: 37, opcode: WRITE (16), nodeid: 5, insize: 4160
WRITE[0] 4096 bytes to 0
WRITE[0] 4096 bytes
unique: 37, error: 0 (Success), outsize: 24
unique: 38, opcode: WRITE (16), nodeid: 5, insize: 4160
WRITE[0] 4096 bytes to 4096
WRITE[0] 4096 bytes
unique: 38, error: 0 (Success), outsize: 24
unique: 39, opcode: WRITE (16), nodeid: 5, insize: 4160
WRITE[0] 4096 bytes to 8192
WRITE[0] 4096 bytes
unique: 39, error: 0 (Success), outsize: 24
unique: 40, opcode: WRITE (16), nodeid: 5, insize: 4160
WRITE[0] 4096 bytes to 12288
WRITE[0] 4096 bytes
unique: 40, error: 0 (Success), outsize: 24
[...]

I am not very sure whether the 4096-byte limitation is from FUSE or not, and I am looking for some way to let FUSE write more data each time, like the read operation - 128K each time. So, is there any way to do this?

Thanks in advance!

--
wcw
From: Anoop K. <ka...@cs...> - 2008-09-15 03:28:31
|
Hello, I am still seeing stale NFS file handles when using FUSE 2.8.0-pre1 on Linux 2.6.27-rc6. In a previous post Miklos advised the 'use_ino' with a 'noforget' option after applying a patch. But I think the noforget option is specific to that patch and not implemented in FUSE in 2.8.0-pre1. So what do I do in 2.8.0-pre1 to get around the stale NFS file handle problem? Thanks, Anoop |
From: Nikolaus R. <Nik...@ra...> - 2008-09-14 22:58:52
|
CITEC-PostMaster <mul...@gm...> writes:
> Hi,
>
> First of all I would like to thank all contributors of this wonderful
> project.
> OK, this is my question: I host a webmail on my server but I plan to host
> the mail msgs on a remote host.
> I managed to set up and configure fuse with sshfs successfully.
>
> However, as I expected, since the mail msgs are stored on a remote host,
> it took a long time to display the msg list. So I am curious whether it
> is possible to optimize the connection speed by adjusting sshfs parameters?

Not without more information about how the mail is actually stored on the server. One message per file? Is there an index?

But why do you want to reinvent the wheel here? IMAP is an established protocol which does exactly what you want.

Best,
-Nikolaus

--
»It is not worth an intelligent man's time to be in the majority. By definition, there are already enough people to do that.« -J.H. Hardy

PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C
From: CITEC-PostMaster <mul...@gm...> - 2008-09-14 18:52:17
|
Hi,

First of all I would like to thank all contributors of this wonderful project. OK, this is my question: I host a webmail on my server but I plan to host the mail msgs on a remote host. I managed to set up and configure fuse with sshfs successfully.

However, as I expected, since the mail msgs are stored on the remote host, it took a long time to display the msg list. So I am curious whether it is possible to optimize the connection speed by adjusting sshfs parameters. Would any adjustment of the following params improve the performance? Any best practice?

-o large_read
-o max_read=N
-o async_read
-o sync_read
-o entry_timeout=T
-o negative_timeout=T
-o attr_timeout=T
-o ac_attr_timeout=T
-o intr
-o intr_signal=NUM

Thanks
From: Zach L. <zac...@gm...> - 2008-09-12 05:53:47
|
This is happening only with the -s option.

On Thu, Sep 11, 2008 at 7:34 PM, Zach Larpenteur <zac...@gm...> wrote:

> Well, zdd is very similar to dd: it reads in a block from a file and writes it to a file.
>
> static void
> read_write_data(void)
> {
>     unsigned i;
>     ssize_t ret;
>
>     for (i = 0; i < count; i++) {
>
>         if (timelimit_exceeded)
>             break;
>
>         ret = read(ifd, buf, bs);
>         if (ret <= 0) {
>             break;
>         }
>         ret = write(ofd, buf, bs);
>         if (ret <= 0) {
>             perror("write");
>             break;
>         }
>
>         data_sofar += bs;
>     }
>
>     return;
> }
>
> On Thu, Sep 11, 2008 at 6:56 PM, Franco Broi <fr...@fu...> wrote:
>
>> Seems to be working OK for us.
>>
>> fusermount version: 2.7.4
>> linux-2.6.24.7
>>
>> What exactly is zdd doing?
>>
>> On Thu, 2008-09-11 at 18:45 -1000, Zach Larpenteur wrote:
>> > Hey Guys,
>> >
>> > Is there any issue with fuse and the 2.6.24 kernel? I see the following
>> > behavior when I try to run dd over a fuse-mounted filesystem.
>> > Eventually fusexmp_fh hangs and I have to issue kill -9 to terminate it.
>> >
>> > I'm using fuse-2.7.4 and the kernel is compiled with fuse.
>> >
>> > ./fusexmp_fh -s /mnt/fuse
>> >
>> > ./zdd -i /dev/zero -o /mnt/fuse/tmp/ZeroChunks2 -b 4096 -c 8048576
>> > Time taken = 1 sec 51 usec, Bandwidth = 159.70 MBps
>> > Time taken = 1 sec 102 usec, Bandwidth = 106.44 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 107.17 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 116.03 MBps
>> > Time taken = 1 sec 47 usec, Bandwidth = 110.45 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 122.10 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 100.18 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 120.02 MBps
>> > Time taken = 1 sec 112 usec, Bandwidth = 122.06 MBps
>> > Time taken = 1 sec 47 usec, Bandwidth = 119.95 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 116.08 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 92.27 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 111.93 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 115.88 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 119.79 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 114.77 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 95.42 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 99.51 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 126.97 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 114.21 MBps
>> > Time taken = 1 sec 47 usec, Bandwidth = 111.48 MBps
>> > Time taken = 1 sec 711 usec, Bandwidth = 95.86 MBps
>> > Time taken = 1 sec 47 usec, Bandwidth = 97.16 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 119.59 MBps
>> > Time taken = 1 sec 47 usec, Bandwidth = 112.78 MBps
>> > Time taken = 1 sec 50 usec, Bandwidth = 123.65 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 93.73 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 115.15 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 117.66 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 122.32 MBps
>> > Time taken = 1 sec 52 usec, Bandwidth = 118.16 MBps
>> > Time taken = 1 sec 160 usec, Bandwidth = 100.54 MBps
>> > Time taken = 1 sec 53 usec, Bandwidth = 106.54 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 127.68 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 114.25 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 117.91 MBps
>> > Time taken = 1 sec 49 usec, Bandwidth = 79.21 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 97.36 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 140.80 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 118.34 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 113.24 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 82.50 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 109.25 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 32.79 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 32.67 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 27.28 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 30.69 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 25.66 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 21.21 MBps
>> > Time taken = 1 sec 40 usec, Bandwidth = 21.55 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 21.15 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 18.86 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 23.92 MBps
>> > Time taken = 1 sec 64 usec, Bandwidth = 28.39 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 23.42 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 28.96 MBps
>> > Time taken = 1 sec 57 usec, Bandwidth = 21.14 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 23.60 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 17.81 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 24.64 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 27.80 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 23.59 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 21.92 MBps
>> > Time taken = 1 sec 80 usec, Bandwidth = 28.10 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 29.67 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 29.47 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 27.40 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 22.35 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 30.55 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 27.20 MBps
>> > Time taken = 1 sec 52 usec, Bandwidth = 27.77 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 15.61 MBps
>> > Time taken = 1 sec 48 usec, Bandwidth = 23.37 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 28.22 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 25.01 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 26.18 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 23.93 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 27.29 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 28.33 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 25.81 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 24.54 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 21.57 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 28.28 MBps
>> > Time taken = 1 sec 61 usec, Bandwidth = 22.75 MBps
>> > Time taken = 1 sec 44 usec, Bandwidth = 25.16 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 25.40 MBps
>> > Time taken = 1 sec 42 usec, Bandwidth = 18.07 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 12.90 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 25.61 MBps
>> > Time taken = 1 sec 40 usec, Bandwidth = 25.40 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 25.96 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 24.63 MBps
>> > Time taken = 1 sec 88 usec, Bandwidth = 35.23 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 23.86 MBps
>> > Time taken = 1 sec 50 usec, Bandwidth = 35.30 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 19.12 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 26.07 MBps
>> > Time taken = 1 sec 45 usec, Bandwidth = 23.87 MBps
>> > Time taken = 1 sec 41 usec, Bandwidth = 26.15 MBps
>> > Time taken = 1 sec 46 usec, Bandwidth = 25.75 MBps
>> > Time taken = 1 sec 61 usec, Bandwidth = 24.95 MBps
>> > Time taken = 1 sec 52 usec, Bandwidth = 24.91 MBps
>> > Time taken = 1 sec 43 usec, Bandwidth = 21.81 MBps
>> > Time taken = 1 sec 47 usec, Bandwidth = 25.28 MBps
>> > Time taken = 1 sec 40 usec, Bandwidth = 37.00 MBps
>> > Time taken = 1 sec 38 usec, Bandwidth = 0.00 MBps
>> > Time taken = 1 sec 22 usec, Bandwidth = 0.00 MBps
>> >
>> > Any help is appreciated.
>> >
>> > -Zach
>> > -------------------------------------------------------------------------
>> > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
>> > Build the coolest Linux based applications with Moblin SDK & win great prizes
>> > Grand prize is a trip for two to an Open Source event anywhere in the world
>> > http://moblin-contest.org/redirect.php?banner_id=100&url=/
>> > _______________________________________________
>> > fuse-devel mailing list
>> > fus...@li...
>> > https://lists.sourceforge.net/lists/listinfo/fuse-devel
|
From: Zach L. <zac...@gm...> - 2008-09-12 05:33:51
|
Well, zdd is very similar to dd. It reads blocks from the input file and
writes them to the output file.

static void
read_write_data(void)
{
    unsigned i;
    ssize_t ret;

    for (i = 0; i < count; i++) {

        if (timelimit_exceeded)
            break;

        ret = read(ifd, buf, bs);
        if (ret <= 0) {
            break;
        }
        ret = write(ofd, buf, bs);
        if (ret <= 0) {
            perror("write");
            break;
        }

        data_sofar += bs;
    }

    return;
}

On Thu, Sep 11, 2008 at 6:56 PM, Franco Broi <fr...@fu...> wrote:

> Seems to be working OK for us.
>
> fusermount version: 2.7.4
> linux-2.6.24.7
>
> What exactly is zdd doing?
>
> On Thu, 2008-09-11 at 18:45 -1000, Zach Larpenteur wrote:
> > Hey Guys,
> >
> > Is there any issue with fuse and the 2.6.24 kernel? I see the following
> > behavior when I try to run dd over a fuse-mounted filesystem.
> > Eventually fusexmp_fh hangs and I have to issue kill -9 to terminate it.
> >
> > I'm using fuse-2.7.4 and the kernel is compiled with fuse.
> >
> > ./fusexmp_fh -s /mnt/fuse
> >
> > ./zdd -i /dev/zero -o /mnt/fuse/tmp/ZeroChunks2 -b 4096 -c 8048576
> > [bandwidth log snipped; identical figures are quoted in the message above]
> >
> > Any help is appreciated.
> >
> > -Zach
|
From: Franco B. <fr...@fu...> - 2008-09-12 04:54:14
|
Seems to be working OK for us.

fusermount version: 2.7.4
linux-2.6.24.7

What exactly is zdd doing?

On Thu, 2008-09-11 at 18:45 -1000, Zach Larpenteur wrote:
> Hey Guys,
>
> Is there any issue with fuse and the 2.6.24 kernel? I see the following
> behavior when I try to run dd over a fuse-mounted filesystem.
> Eventually fusexmp_fh hangs and I have to issue kill -9 to terminate it.
>
> I'm using fuse-2.7.4 and the kernel is compiled with fuse.
>
> ./fusexmp_fh -s /mnt/fuse
>
> ./zdd -i /dev/zero -o /mnt/fuse/tmp/ZeroChunks2 -b 4096 -c 8048576
> [bandwidth log snipped; identical figures are quoted in the first message above]
>
> Any help is appreciated.
>
> -Zach
|