From: Marco B <mar...@gm...> - 2011-10-23 22:36:14
Hi All,

we are doing some benchmarks with MooseFS using a single server:
Linux CentOS 5.7, HP DL585G6, 4x 6-core Opteron 2.8GHz, 128GB RAM, 160GB
ioDrive SLC.

Here is the Bonnie result over the ioDrive partition (unoptimized ext3
filesystem):

Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
mfs1.domain   1000M 72004  92 623230  99 1007946 99 87689  99 +++++ +++ +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++

Here is the Bonnie result on the same ioDrive mounted over MooseFS:

Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
mfs1.domain   1000M 46785  65 61631   8 85611   9 84549  98 2026423 99  4230   7
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   669   4 29247  20  5808  12   699   3  3405   5  1544   5

It seems there is a big bottleneck in mfs or maybe Fuse. During the tests we
saw the mfschunkserver and mfsmount processes each eating 100% of a single
CPU.

Waiting for comments/suggestions.

Regards,
Marco
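For anyone wanting to reproduce numbers in this format: output like the above
typically comes from a bonnie++ 1.03 run along the following lines. This is a
sketch, not Marco's exact command; the target directory is a placeholder,
-s 1000 matches the 1000M file size, -n 16 matches the "files 16" create
phase, and -u is only needed when running as root:

  # run once against the raw ext3 partition, once against the MooseFS mount
  bonnie++ -d /mnt/test -s 1000 -n 16 -m mfs1.domain -u root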
From: Robert S. <rsa...@ne...> - 2011-10-24 12:36:27
On 10/23/11 6:36 PM, Marco B wrote:
> It seems there is a big bottleneck in mfs or maybe Fuse.
> During tests we saw mfschunkserver and mfsmount processes eating 100%
> of a single CPU each.

There are a few bottlenecks in MFS. Most of these are caused by too many
connections to a single daemon. mfsmount seems to perform badly if you make
more than 5-10 simultaneous reads or writes per mount. mfschunkserver also
seems to perform badly if you make more than around 10 simultaneous
connections per instance. mfsmaster seems to cause some problems if the load
on the system becomes too high.

In general, MFS performs best with a high number of mfsmounts, a high number
of mfschunkservers, and a dedicated machine running mfsmaster.

Fuse can be a bottleneck for asynchronous I/O, but I doubt that is what
Bonnie is using.

Your setup is not quite how MFS was designed to be used; for a single-machine
file server you will find that traditional file systems like ext[234], zfs
and xfs perform significantly better. There are not many competitors to MFS
in the distributed file system space, and most of them are significantly
harder to get working and are even less stable. I cannot comment on the
performance of the competitors.

Robert
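A practical way to act on the per-mount limit described above is simply to
mount the file system several times and spread the workload across the mount
points. A minimal sketch, assuming the master is reachable as mfsmaster (the
host name and mount points are placeholders):

  mkdir -p /mnt/mfs1 /mnt/mfs2 /mnt/mfs3
  mfsmount /mnt/mfs1 -H mfsmaster
  mfsmount /mnt/mfs2 -H mfsmaster
  mfsmount /mnt/mfs3 -H mfsmaster
  # aim for roughly 5-10 concurrent readers/writers per mount point

Each mfsmount is a separate FUSE process, so this also spreads the
client-side CPU load Marco observed across several cores.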
From: Mike <isp...@gm...> - 2011-11-01 03:40:37
On 11-10-24 09:25 AM, Robert Sandilands wrote:
> There are a few bottlenecks in MFS.
>
> Most of these are caused by too many connections to a single daemon.
> mfsmount seems to perform badly if you make more than 5-10 simultaneous
> reads or writes per mount. mfschunkserver also seems to perform badly if
> you make more than around 10 simultaneous connections per instance.
> mfsmaster seems to cause some problems if the load on the system becomes
> too high.
>
> In general, MFS performs best with a high number of mfsmounts, a high
> number of mfschunkservers, and a dedicated machine running mfsmaster.

Ok, let's suppose I have a collection of PC hardware with 4 effective CPU
cores per machine running as chunkservers. Will I get better performance
running a number of chunkserver processes, each handling a subset of the
disks on the machine?

If so, will the best number of chunkservers be proportional to the number of
CPU cores, or to the number of disk spindles?

If I can come up with some spare hardware I may test this myself.
From: Robert S. <rsa...@ne...> - 2011-11-01 04:08:34
On Oct 31, 2011, at 11:24 PM, Mike <isp...@gm...> wrote:

> On 11-10-24 09:25 AM, Robert Sandilands wrote:
>> There are a few bottlenecks in MFS.
>>
>> In general, MFS performs best with a high number of mfsmounts, a high
>> number of mfschunkservers, and a dedicated machine running mfsmaster.
>
> Ok, let's suppose I have a collection of PC hardware with 4 effective
> CPU cores per machine running as chunkservers. Will I get better
> performance running a number of chunkserver processes, each handling a
> subset of the disks on the machine?

Yes, depending on the availability of RAM.

> If so, will the best number of chunkservers be proportional to the
> number of CPU cores, or to the number of disk spindles?

Spindles. My guess is around 10 spindles per instance.

> If I can come up with some spare hardware I may test this myself.
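For anyone who wants to try this: a second chunkserver instance on the same
machine needs its own config file, data path, listen port, and disk list. A
rough, untested sketch (all paths, the port, and the host name are
placeholders; option names follow the MooseFS 1.6 defaults):

  # /etc/mfs/mfschunkserver2.cfg - second instance on the same box
  MASTER_HOST = mfsmaster
  CSSERV_LISTEN_PORT = 9522            # the default instance listens on 9422
  DATA_PATH = /var/lib/mfs2            # must exist, owned by the mfs user
  HDD_CONF_FILENAME = /etc/mfs/mfshdd2.cfg
  SYSLOG_IDENT = mfschunkserver2

  # /etc/mfs/mfshdd2.cfg - this instance's subset of spindles
  /mnt/disk06
  /mnt/disk07
  /mnt/disk08

  # start it alongside the default instance
  mfschunkserver -c /etc/mfs/mfschunkserver2.cfg start

The master simply sees it as one more chunkserver, so no master-side changes
should be needed. Keep in mind, though, that the master treats each instance
as an independent server, so with goal=2 two copies of a chunk can end up on
the same physical machine.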