From: Howie <how...@gm...> - 2005-04-21 11:08:18
> I'm having the same problem with a small file system I'm working on.
> The loss in performance for file writing is about 30%, and I think it's
> caused by the 4K blocks.

I also have this problem. Here are the results of a performance test run with Bonnie:

                -----Sequential Output------    ---Sequential Input---  --Random--
                -Per Char-  ---Block--- -Rewrite- -Per Char- --Block--  --Seeks--
Machine   MB  K/sec %CPU   K/sec %CPU   K/sec %CPU  K/sec %CPU    K/sec %CPU     /sec
local    100  28045 99.7  195131 101.0 321515 100.5 38494 101.1 1217829 95.1 108324.8
fuse1    100  24253 66.6   85965  21.0  65853  18.0 32531  82.6  148139 14.5  15607.1
fuse2    100  29626 72.3   94936  22.3 123942  29.0 43572  99.6 1481160 100   39950.1

fuse1 was mounted without large_read; fuse2 was mounted with large_read and kernel_cache.

When we use large_read, read performance is as good as the local operation, but writes still lose some performance. Would it be possible to have a large_write option, analogous to large_read?

howie
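For reference, a minimal sketch of how the two mount configurations compared above might be set up. large_read and kernel_cache are real FUSE mount options; the filesystem binary name ./myfs and the mount points are placeholders, not taken from the original post:

```shell
# fuse1: mount the filesystem without large_read (placeholder names)
./myfs /mnt/fuse1

# fuse2: mount with large_read and kernel_cache enabled
./myfs /mnt/fuse2 -o large_read,kernel_cache

# unmount when finished
fusermount -u /mnt/fuse1
fusermount -u /mnt/fuse2
```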