From: Joseph L. <jo...@ge...> - 2015-08-28 16:21:05
> On Aug 28, 2015, at 2:21 AM, pen...@ic... wrote:
>
> Hi joe:
>
> Do the performance test results to share?
> Many thanks.
>
> pen...@ic... <mailto:pen...@ic...>

Hi,

Given the most recent changes I'd made to the configuration, the only numbers I have on hand are a couple of benchmark runs with "tiotest". The cluster setup is described below the test results.

Note: due to an oddity in tiobench.pl's reporting, the Ubuntu numbers all show a CPU Efficiency of 0; for some reason the script kept hitting a divide-by-zero error.

FreeBSD Client

> tiobench.pl --size 20480 --numruns 3 --block 65536
Run #3: /usr/local/bin/tiotest -t 8 -f 2560 -r 500 -b 65536 -d . -TTT

Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load

                              File   Blk   Num           Avg       Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate   (CPU%)   Latency      Latency     >2s     >10s    Eff
---------------------------- ------ ----- ---  -------- ------  ---------  -----------  -------- -------- -----
Sequential Reads
10.2-RELEASE                  20480  65536   1    139.64 38.22%      1.342      4197.20   0.00031  0.00000   365
10.2-RELEASE                  20480  65536   2    318.74 144.6%      1.149      6922.27   0.00123  0.00000   220
10.2-RELEASE                  20480  65536   4    373.62 598.3%      1.957      7664.83   0.00245  0.00000    62
10.2-RELEASE                  20480  65536   8    552.13 1976.%      2.509      6891.88   0.00610  0.00000    28

Random Reads
10.2-RELEASE                  20480  65536   1    105.52 18.25%      1.774      2097.17   0.02500  0.00000   578
10.2-RELEASE                  20480  65536   2    268.93 101.4%      1.350       115.08   0.00000  0.00000   265
10.2-RELEASE                  20480  65536   4    507.95 441.4%      1.387        21.11   0.00000  0.00000   115
10.2-RELEASE                  20480  65536   8    830.79 1786.%      1.498        20.77   0.00000  0.00000    47

Sequential Writes
10.2-RELEASE                  20480  65536   1     40.39 18.38%      4.641      2196.91   0.00000  0.00000   220
10.2-RELEASE                  20480  65536   2     70.25 60.87%      5.304      2231.48   0.00000  0.00000   115
10.2-RELEASE                  20480  65536   4    131.59 234.7%      5.653      5631.18   0.00244  0.00000    56
10.2-RELEASE                  20480  65536   8    198.66 829.6%      7.447      8772.20   0.00732  0.00000    24

Random Writes
10.2-RELEASE                  20480  65536   1    126.75 38.97%      1.476         3.74   0.00000  0.00000   325
10.2-RELEASE                  20480  65536   2    240.33 164.2%      1.552         3.59   0.00000  0.00000   146
10.2-RELEASE                  20480  65536   4    381.63 717.1%      1.952         4.17   0.00000  0.00000    53
10.2-RELEASE                  20480  65536   8    535.87 3036.%      2.765         5.93   0.00000  0.00000    18
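(A quick aside on that last column before the Ubuntu numbers: CPU Eff is simply Rate divided by CPU%, so the single-threaded sequential read above works out to about 139.64 / 0.3822, i.e. roughly 365, matching the table. Something like

$ echo '139.64 / 0.3822' | bc -l

reproduces the arithmetic. Several of the Ubuntu runs report a CPU% of 0, which is presumably exactly where tiobench.pl's divide-by-zero comes from and why the CPU Eff column reads 0 below.)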
Ubuntu Client

$ tiobench.pl --size 20480 --numruns 3 --block 65536
Run #3: /usr/local/bin/tiotest -t 8 -f 2560 -r 500 -b 65536 -d . -TTT

Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load

                              File   Blk   Num           Avg       Maximum      Lat%     Lat%    CPU
Identifier                    Size   Size  Thr   Rate   (CPU%)   Latency      Latency     >2s     >10s    Eff
---------------------------- ------ ----- ---  -------- ------  ---------  -----------  -------- -------- -----
Sequential Reads
3.19.0-25-generic             20480  65536   1   4741.47 98.03%      0.039         0.86   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   2   8710.00 369.7%      0.043         0.79   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   4  13688.96 1258.%      0.053         0.98   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   8  17183.37 5364.%      0.082         1.08   0.00000  0.00000     0

Random Reads
3.19.0-25-generic             20480  65536   1   4836.21     0%      0.037         0.22   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   2   8089.74     0%      0.043         0.40   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   4  10498.32     0%      0.058         0.47   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   8  12353.81     0%      0.085         0.65   0.00000  0.00000     0

Sequential Writes
3.19.0-25-generic             20480  65536   1    297.05 23.24%      0.629         6.21   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   2    584.97 99.80%      0.622      2255.13   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   4    457.10 206.3%      1.606      2462.01   0.00122  0.00000     0
3.19.0-25-generic             20480  65536   8    417.21 398.1%      3.197       319.45   0.00000  0.00000     0

Random Writes
3.19.0-25-generic             20480  65536   1    323.18 0.861%      0.574         3.23   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   2     49.09     0%      0.523         2.72   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   4    863.38     0%      0.716         4.48   0.00000  0.00000     0
3.19.0-25-generic             20480  65536   8    324.29     0%      1.168         5.90   0.00000  0.00000     0

I'm guessing there's some massive read caching happening in the mfsclient on Ubuntu, since a read rate of up to ~17 GB/s is obviously far beyond what 10GbE can carry (roughly 1.25 GB/s), let alone what the 3x SSDs in the machines can deliver. I can certainly retry this with file sizes that don't fit into the client's memory (rough command in the P.S. below).

My testing configuration:

3 Chunkservers:
- Dual Xeon X5560
- 24GB memory
- FreeBSD 10.2-RELEASE
- MooseFS 3.0.39
- Intel X520-DA2 10Gbit NIC
- OS installed on a 1TB WD RE3 HDD
- MooseFS 'hdd' uses an Intel DC S3610 200GB SSD

MFSMaster:
- Installed on one of the chunkservers. It does not use the SSD for its storage, which should be inconsequential, since the master keeps its data in memory anyway.

Client:
- Dual Xeon L5650
- 24GB memory
- Either FreeBSD 10.2 or Ubuntu 14.04 Server, depending on the test
- Local HDD is a 1TB WD RE3
- Intel X520-DA2 10Gbit NIC

Network:
- Brocade 8000b switch, used purely as a 10Gbit switch, ignoring the FC features.

-Joe
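P.S. For the larger-than-memory retest, it would be something along these lines (the size here is just an illustrative pick of ~60GB, roughly 3x the client's 24GB of RAM; same block size and run count as above):

$ tiobench.pl --size 61440 --numruns 3 --block 65536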