From: Lorinc H. <lor...@gm...> - 2007-03-06 17:13:09
Thanks for the reply Mark, Bgs! I don't think it's a network/CPU issue, since I just unmounted SSHFS and tried NFS 4 on the same network. When I measured with bonnie++ I got similar results for NFS and SSHFS, and CPU utilization was good, but bonnie++ used a 16GB file size (double the memory, to avoid caching).

NFS:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
squash           2M +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++  1136   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   648   2  2397   4   586   3   640   2  2668   4   328   1
squash,2M,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,1135.7,2,16,648,2,2397,4,586,3,640,2,2668,4,328,1

SSHFS:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
squash           2M +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ 10995   8
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   354   1  8365   8  1745   2   218   0  3405   1   416   0
squash,2M,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,10995.2,8,16,354,1,8365,8,1745,2,218,0,3405,1,416,0

But when I started our application on 300 512KB files the difference was obvious. I repeated the test at least 10 times and tried every kind of optimization... Could it be that SSHFS works well with large files but has trouble with small ones (512KB)? Could kernel interface version 7.8 be too old for 2.6.x FUSE?

Thanks for your help!

Regards,
Lorinc

On 3/6/07, Bgs <bg...@bg...> wrote:
>
> Hi,
>
> There could be many reasons.
> Of course if there is any problem (network, bug, etc.) any fs can be
> slower than the other. But if you run a CPU-intensive application and
> you also have to decrypt/encrypt your data, it will slow down your
> overall performance (it's not about raw net throughput here).
>
> Bye
> bgs
>
>
> Mark Haney wrote:
> > Lorinc Hever wrote:
> >> Hi There,
> >>
> >> I did a test with one of our custom applications on remote data.
> >>
> >> This application first reads about 300 files to collect some
> >> orientation information and then rereads the files to build a 3D
> >> model from the pixel data. Each file is about 512KB.
> >>
> >> I measured the read time with NFS and SSHFS and found NFS to be 3
> >> times faster :( We would really prefer to use SSHFS for security
> >> reasons, but this performance loss is too much.
> >>
> >> I've tried different mount options without success (compression, no
> >> read ahead, etc.).
> >>
> >> You can find a debug fragment below.
> >>
> >> Any suggestions for tuning are welcome!
> >>
> >> Thanks in advance!
> >>
> >> Regards,
> >> Lorinc
> >
> > This is such an open-ended issue. I saw almost the exact reverse when I
> > was using sshfs to pull large files from a SAN to an SGI box a year ago.
> > I was getting much better throughput with SSHFS than with NFS. Maybe
> > it's due to large files, maybe it's just a fluke on my network. I don't
> > know, but there might be another reason for the slowdown.
>
> _______________________________________________
> fuse-devel mailing list
> fus...@li...
> https://lists.sourceforge.net/lists/listinfo/fuse-devel
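P.S. For anyone who wants to reproduce the small-file pattern without our application, here is a minimal sketch of the workload described above. It is not the original program; the file names, the use of a temp directory, and the two-pass read (orientation pass, then pixel-data pass) are assumptions drawn from the description. Replacing the temp directory with a path on the SSHFS or NFS mount should expose the per-file round-trip cost that dominates with many 512KB files:

```python
# Minimal sketch of the workload: ~300 files of 512 KB each, read twice
# (first pass for orientation info, second pass for pixel data).
import os
import tempfile
import time

FILE_COUNT = 300
FILE_SIZE = 512 * 1024  # 512 KB per file

def make_dataset(directory):
    """Create FILE_COUNT files of FILE_SIZE random bytes each."""
    for i in range(FILE_COUNT):
        path = os.path.join(directory, "slice_%03d.dat" % i)
        with open(path, "wb") as f:
            f.write(os.urandom(FILE_SIZE))

def read_all(directory):
    """Read every file in the directory fully; return total bytes read."""
    total = 0
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name), "rb") as f:
            total += len(f.read())
    return total

# Swap the temp dir for a directory on the sshfs/NFS mount to benchmark it.
with tempfile.TemporaryDirectory() as d:
    make_dataset(d)
    start = time.monotonic()
    total = read_all(d) + read_all(d)  # two passes, like the application
    elapsed = time.monotonic() - start
    # total is 2 * 300 * 512 KB = 314572800 bytes
    print("read %d bytes in %.2fs" % (total, elapsed))
```

Timing this loop on each mount separately isolates the open/read/close overhead per file, which is where SSHFS and NFS behave most differently.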