From: Aleksander W. <ale...@mo...> - 2015-09-23 06:59:23
Hello Joe,

First of all, thank you for pointing out the problem to us. The reason why you don't get high performance is FreeBSD's design. We did some investigating and it appears that the block size in all I/O is only 4kB.

All operating systems use a cache during I/O. The standard size of a cache block is 4kB (the standard page size), so transfers via the cache are done using the same block size. Some operating systems (for example Linux) have algorithms that join these small blocks into larger groups and therefore increase I/O performance, but that is not the case in FreeBSD. So in FreeBSD, even when you set the block size to 1M, inside the kernel all operations are split into 4k blocks (because of the cache).

Our developer noticed that during DIRECT I/O operations (without using the cache), all I/O is split into 128k blocks (the maximum transfer size our mfsmount sends to the kernel). This increases performance significantly. In our test environment we reached 900MB/s on a 10Gb network. Be aware that in this case the cache is not used at all.

So to sum it all up: *FreeBSD can use a block size larger than 4k, but only without the cache.*

Mainly for FreeBSD we added a special cache option to the MooseFS client called DIRECT. This option has been available in the MooseFS client since version 3.0.49. To disable the local cache and enable DIRECT communication, please use this option during mount:

mfsmount -H mfsmaster.your.domain.com -o mfscachemode=DIRECT /mount/point

You can find more details about the current version of MooseFS on the http://moosefs.com/download.html page.

Please test this option. We are waiting for your feedback.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>
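For example, you could compare throughput with and without the option using a simple sequential dd test (a rough sketch; the hostname and mount point are the placeholders from the command above, and the test file name is just an example):

# mount with the local cache disabled
mfsmount -H mfsmaster.your.domain.com -o mfscachemode=DIRECT /mount/point

# sequential write test: 1M blocks, ~4GB of data
dd if=/dev/zero of=/mount/point/testfile bs=1M count=4096

# sequential read test of the same file
dd if=/mount/point/testfile of=/dev/null bs=1M

dd reports the achieved transfer rate when it finishes, so mounting once with mfscachemode=DIRECT and once without makes the difference easy to see.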
On 10.09.2015 14:56, Joseph Love wrote:
> Thanks. I look forward to hearing what you’ve uncovered.
>
> -Joe
>
>> On Sep 10, 2015, at 6:53 AM, Aleksander Wieliczko
>> <ale...@mo... <mailto:ale...@mo...>> wrote:
>>
>> Hi.
>>
>> Yes.
>> We are in the process of resolving this problem.
>> We found a few potential reasons for this behaviour, but we need some
>> more time to find the best solution.
>>
>> Once we solve this problem, we will get back to you as quickly as
>> possible with more details.
>>
>> Best regards
>> Aleksander Wieliczko
>> Technical Support Engineer
>> MooseFS.com <x-msg://11/moosefs.com>
>>
>> On 09.09.2015 22:23, Joseph Love wrote:
>>> Hi Aleksander,
>>>
>>> Will there be time for some investigation into the performance of
>>> the FreeBSD client coming up?
>>>
>>> Thanks,
>>> -Joe
>>>
>>>> On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko
>>>> <ale...@mo... <mailto:ale...@mo...>> wrote:
>>>>
>>>> Hi.
>>>> Thank you for this information.
>>>>
>>>> Could you do one more simple test?
>>>> What network bandwidth can you achieve between two FreeBSD machines?
>>>>
>>>> I mean, something like:
>>>> FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null
>>>> (simple nc and dd tools will tell you a lot)
>>>>
>>>> We know that FUSE on FreeBSD systems had some problems, but we need
>>>> to take a close look at this issue.
>>>> We will try to repeat this scenario in our test environment and get
>>>> back to you after 08.09.2015, because we are busy with other tests
>>>> until then.
>>>>
>>>> I would like to add one more aspect.
>>>> The mfsmaster application is single-threaded, so we compared your
>>>> CPU with ours.
>>>> These are the results:
>>>> *(source: cpubenchmark.net <http://cpubenchmark.net/>)*
>>>>
>>>> CPU Xeon E5-1620 v2 @ 3.70GHz
>>>> Average points: 9508
>>>> *Single Thread points: 1920*
>>>>
>>>> CPU Intel Atom C2758 @ 2.40GHz
>>>> Average points: 3620
>>>> *Single Thread points: 520*
>>>>
>>>> Best regards
>>>> Aleksander Wieliczko
>>>> Technical Support Engineer
>>>> MooseFS.com <x-msg://9/moosefs.com>
>>>>
>>>
>>
>
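PS: The raw network test I suggested earlier (dd + nc between the two FreeBSD machines) could look like this (just a sketch; the hostname, port and transfer size are examples, adjust them to your setup):

# on FreeBSD 2 (receiver): listen on a port and discard the data
nc -l 12345 > /dev/null

# on FreeBSD 1 (sender): push ~10GB of zeroes across the 10Gb link
dd if=/dev/zero bs=1M count=10240 | nc freebsd2.your.domain.com 12345

The dd on the sender prints the achieved transfer rate when it finishes, which tells you how much of the 10Gb link the raw network path can actually deliver, independent of MooseFS and FUSE.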