From: Neddy, N. N. <na...@nd...> - 2015-09-09 06:41:55
|
Hi, I suppose it depends on your hardware and network setup. But from my experience with old PCs (4 boxes of Intel dual core, SATA2 disks) attached to a 1G low-end switch, the recovery process for 1TB of data (goal = 3) after 2 boxes had been down took about 20 minutes to fully complete. On Wed, Sep 9, 2015 at 3:04 AM, Warren Myers <wa...@an...> wrote: > How resilient is MooseFS to temporarily losing a chunk server - say doing > hardware maintenance, migration to a new datacenter, VM migration, etc? > > *Warren Myers* > http://antipaucity.com > https://www.digitalocean.com/?refcode=d197a961987a > > > ------------------------------------------------------------------------------ > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: Aleksander W. <ale...@mo...> - 2015-09-09 06:38:54
|
Hello. MooseFS is very resilient in many respects, not only with regard to chunkserver actions. I would like to describe two main chunkserver conditions: 1. Chunkserver loss: Let's say that our MooseFS cluster consists of 4 components (1x mfsmaster, 3x mfschunkserver) and the minimum GOAL is set to 2. In this case it means that one chunkserver can be powered down, unplugged from the network or die without any data loss. After losing a chunkserver the replication process starts. It means that all chunks without an extra copy are replicated to other chunkservers to achieve GOAL 2. GOAL means how many copies of each chunk will be available on different chunkservers. MooseFS never keeps more than one copy on the same chunkserver! 2. Maintenance mode: When we turn on maintenance mode and stop the process for a specific chunkserver, no extra copies of under-goal chunks will be made. Also, no write operations will be executed. More details about maintenance mode can be found in paragraph 4.3 "Maintenance mode" in our MooseFS 2.0 manual: http://moosefs.com/Content/Downloads/MooseFS-2-0-60-User-Manual.pdf The maintenance mode option can be turned on and off from the CGI web page or from the command line. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 08.09.2015 22:04, Warren Myers wrote: > How resilient is MooseFS to temporarily losing a chunk server - say > doing hardware maintenance, migration to a new datacenter, VM > migration, etc? > > *Warren Myers* > http://antipaucity.com > https://www.digitalocean.com/?refcode=d197a961987a > > > ------------------------------------------------------------------------------ > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
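For context, a minimal shell sketch (the mount point and file names are assumptions, not taken from the thread) of how a goal is set and verified with the standard MooseFS client tools:

    # set goal 2 recursively on a directory of the mounted filesystem
    mfssetgoal -r 2 /mnt/mfs/data
    # confirm the effective goal
    mfsgetgoal /mnt/mfs/data
    # show how many valid copies each chunk of a file currently has
    mfscheckfile /mnt/mfs/data/somefile

With goal 2 and three chunkservers, mfscheckfile should report two valid copies per chunk again once the replication triggered by a chunkserver loss has finished.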
From: Warren M. <wa...@an...> - 2015-09-08 20:17:06
|
How resilient is MooseFS to temporarily losing a chunk server - say doing hardware maintenance, migration to a new datacenter, VM migration, etc? Warren Myers http://antipaucity.com https://www.digitalocean.com/?refcode=d197a961987a |
From: Jakub Kruszona-Z. <ac...@mo...> - 2015-09-08 10:36:16
|
> On 08 Sep 2015, at 12:00, pen...@ic... wrote: > > hi all: > > When I compiling mfs-1.6.27.5 the terminal display is > <Catch.jpg> > but compiling moosefs-2.0.72 the terminal display > <Catch3397.jpg> > the compiler output is much clean than compile mfs-1.6.27.5. > How did that? It’s been done by adding "AM_SILENT_RULES([yes])" to ‘configure.ac’. You can also force using silent-rules in mfs-1.6.27-5 by: running ./configure --enable-silent-rules or running make V=0 You can read more about this in this article: https://autotools.io/automake/silent.html regards, Jakub Kruszona-Zawadzki. |
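To make the two options from the reply concrete, a short shell sketch (run from the top of an automake-based source tree; the commands are generic, not MooseFS-specific):

    # enable silent rules at configure time, then build
    ./configure --enable-silent-rules && make
    # or toggle verbosity per invocation without reconfiguring
    make V=0    # quiet output, one summary line per compiled file
    make V=1    # full command lines again, useful when a rule fails

The V variable and --enable-silent-rules belong to automake's silent-rules machinery; AM_SILENT_RULES([yes]) in configure.ac only changes the default, which is why moosefs-2.0.x builds quietly out of the box.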
From: <pen...@ic...> - 2015-09-08 10:02:20
|
hi all: When I compile mfs-1.6.27.5 the terminal display is <Catch.jpg>, but when compiling moosefs-2.0.72 the terminal display is <Catch3397.jpg>; the compiler output is much cleaner than when compiling mfs-1.6.27.5. How did you do that? Many thanks for help. Best regards. pen...@ic... |
From: Neddy, N. N. <na...@nd...> - 2015-09-08 07:43:18
|
It works, but the static files can't be loaded. I changed the config based on your idea: upstream mfscgi { server 10.2.192.130:9425; } server { listen 81; location /mfs.cgi { proxy_pass http://mfscgi; } location /mfs.css { proxy_pass http://mfscgi; } location /acidtab.js { proxy_pass http://mfscgi; } location /logomini.png { proxy_pass http://mfscgi; } } Now it's OK, though you'll have to redirect to mfs.cgi or add it to your index directive. So the original config failed because mfscgiserv sent a 301 redirect to mfs.cgi and closed the current connection? Below is nginx's error log: [error] 12224#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.2.x.x, server: , request: "GET / HTTP/1.1", upstream: "http://10.2.192.130:9425/" Thanks, On Tue, Sep 8, 2015 at 1:52 PM, 黑铁柱 <kan...@gm...> wrote: > try this > server { > listen 81; > location / { > proxy_pass http://10.2.192.130:9425/mfs.cgi; > } > } > > 2015-09-07 17:01 GMT+08:00 Neddy, NH. Nam <na...@nd...>: >> >> Hi, >> >> I try to access MooseFS CGI monitor via nginx proxy but no luck. >> >> With CGI monitor was started on master server, nginx was configured to >> pass request to mfscgiserv >> >> server { >> listen 81; >> location / { >> proxy_pass http://10.2.192.130:9425; >> } >> } >> >> But clients only get "502 Bad Gateway" error when open nginx server >> address in their browser. >> >> I wonder did anybody try to do the same thing? >> Appreciate your advice. >> >> >> ------------------------------------------------------------------------------ >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
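A more compact variant of the same workaround (an untested sketch extrapolated from the configs above, not something confirmed against mfscgiserv in the thread) redirects the root URL itself and proxies the CGI script plus its static assets through a single regex location:

    upstream mfscgi {
        server 10.2.192.130:9425;
    }
    server {
        listen 81;
        # answer "/" with a redirect so nginx never proxies the problematic root request
        location = / {
            return 301 /mfs.cgi;
        }
        # one location for the CGI script and the static files it references
        location ~ ^/(mfs\.cgi|mfs\.css|acidtab\.js|logomini\.png)$ {
            proxy_pass http://mfscgi;
        }
    }

The 301 on "/" mimics the redirect that mfscgiserv was suspected of sending, but nginx answers it locally instead of forwarding the request upstream.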
From: 黑铁柱 <kan...@gm...> - 2015-09-08 06:53:13
|
try this server { listen 81; location / { proxy_pass http://10.2.192.130:9425/mfs.cgi; } } 2015-09-07 17:01 GMT+08:00 Neddy, NH. Nam <na...@nd...>: > Hi, > > I try to access MooseFS CGI monitor via nginx proxy but no luck. > > With CGI monitor was started on master server, nginx was configured to > pass request to mfscgiserv > > server { > listen 81; > location / { > proxy_pass http://10.2.192.130:9425; > } > } > > But clients only get "502 Bad Gateway" error when open nginx server > address in their browser. > > I wonder did anybody try to do the same thing? > Appreciate your advice. > > > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Neddy, N. N. <na...@nd...> - 2015-09-07 09:33:01
|
Hi, I am trying to access the MooseFS CGI monitor via an nginx proxy, but with no luck. With the CGI monitor started on the master server, nginx was configured to pass requests to mfscgiserv: server { listen 81; location / { proxy_pass http://10.2.192.130:9425; } } But clients only get a "502 Bad Gateway" error when they open the nginx server address in their browser. I wonder, did anybody try to do the same thing? Appreciate your advice. |
From: Valerio S. <val...@gm...> - 2015-09-02 08:09:03
|
Hello Aleksander, for now I need simply a short answer, that is to understand if MooseFS guarantees eventual, weak, causal, strong consistency (or other flavours). The how's and gory details are also relevant and useful but I can quietly wait for the longer answer. Thanks a lot, -- Valerio On Wed, Sep 2, 2015 at 7:53 AM, Aleksander Wieliczko < ale...@mo...> wrote: > Hi. > We made so many changes till 2011 so you need to give as some time to > generate answer on your question. > We will return to this topic as fast as it is possible. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <http://moosefs.com> > > On 31.08.2015 19:02, Valerio Schiavoni wrote: > > Hello, > what is the consistency model of MooseFS? > I've found a 2011 thread [1] on this ML that seems to be remained > unanswered, and since we need to know the same details... > > Thanks. > > 1 - http://sourceforge.net/p/moosefs/mailman/message/26888539/ > > -- > Valerio > > _________________________________________ > moosefs-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > ------------------------------------------------------------------------------ > Monitor Your Dynamic Infrastructure at Any Scale With Datadog! > Get real-time metrics from all of your servers, apps and tools > in one place. > SourceForge users - Click here to start your Free Trial of Datadog now! > http://pubads.g.doubleclick.net/gampad/clk?id=241902991&iu=/4140 > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: Aleksander W. <ale...@mo...> - 2015-09-02 05:53:19
|
Hi. We have made so many changes since 2011 that you need to give us some time to prepare an answer to your question. We will return to this topic as soon as possible. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 31.08.2015 19:02, Valerio Schiavoni wrote: > Hello, > what is the consistency model of MooseFS? > I've found a 2011 thread [1] on this ML that seems to be remained > unanswered, and since we need to know the same details... > > Thanks. > > 1 - http://sourceforge.net/p/moosefs/mailman/message/26888539/ > > -- > Valerio > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Valerio S. <val...@gm...> - 2015-08-31 17:02:53
|
Hello, what is the consistency model of MooseFS? I've found a 2011 thread [1] on this ML that seems to have remained unanswered, and since we need to know the same details... Thanks. 1 - http://sourceforge.net/p/moosefs/mailman/message/26888539/ -- Valerio |
From: Joseph L. <jo...@ge...> - 2015-08-28 16:21:05
|
> > On Aug 28, 2015, at 2:21 AM, pen...@ic... wrote: > > Hi joe: > > Do the performance test results to share? > Many thanks. > > pen...@ic... <mailto:pen...@ic...> Hi, Based on the most recent changes I’d made to the configuration, the only information I have available is a couple of benchmarks with “tiotest”. Cluster setup is located below the test results. Note: Due to an oddity with the scripts (tiobench.pl) reporting, the ubuntu numbers all have a CPU Efficiency column of 0, because for some reason, the script kept causing a divide-by-0 error. FreeBSD Client > tiobench.pl --size 20480 --numruns 3 --block 65536 Run #3: /usr/local/bin/tiotest -t 8 -f 2560 -r 500 -b 65536 -d . -TTT Unit information ================ File size = megabytes Blk Size = bytes Rate = megabytes per second CPU% = percentage of CPU used during the test Latency = milliseconds Lat% = percent of requests that took longer than X seconds CPU Eff = Rate divided by CPU% - throughput per cpu load File Blk Num Avg Maximum Lat% Lat% CPU Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff ---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- ----- Sequential Reads 10.2-RELEASE 20480 65536 1 139.64 38.22% 1.342 4197.20 0.00031 0.00000 365 10.2-RELEASE 20480 65536 2 318.74 144.6% 1.149 6922.27 0.00123 0.00000 220 10.2-RELEASE 20480 65536 4 373.62 598.3% 1.957 7664.83 0.00245 0.00000 62 10.2-RELEASE 20480 65536 8 552.13 1976.% 2.509 6891.88 0.00610 0.00000 28 Random Reads 10.2-RELEASE 20480 65536 1 105.52 18.25% 1.774 2097.17 0.02500 0.00000 578 10.2-RELEASE 20480 65536 2 268.93 101.4% 1.350 115.08 0.00000 0.00000 265 10.2-RELEASE 20480 65536 4 507.95 441.4% 1.387 21.11 0.00000 0.00000 115 10.2-RELEASE 20480 65536 8 830.79 1786.% 1.498 20.77 0.00000 0.00000 47 Sequential Writes 10.2-RELEASE 20480 65536 1 40.39 18.38% 4.641 2196.91 0.00000 0.00000 220 10.2-RELEASE 20480 65536 2 70.25 60.87% 5.304 2231.48 0.00000 0.00000 115 10.2-RELEASE 20480 65536 4 131.59 234.7% 5.653 5631.18 0.00244 0.00000 56 10.2-RELEASE 20480 65536 8 198.66 829.6% 7.447 8772.20 0.00732 0.00000 24 Random Writes 10.2-RELEASE 20480 65536 1 126.75 38.97% 1.476 3.74 0.00000 0.00000 325 10.2-RELEASE 20480 65536 2 240.33 164.2% 1.552 3.59 0.00000 0.00000 146 10.2-RELEASE 20480 65536 4 381.63 717.1% 1.952 4.17 0.00000 0.00000 53 10.2-RELEASE 20480 65536 8 535.87 3036.% 2.765 5.93 0.00000 0.00000 18 Ubuntu Client $ tiobench.pl --size 20480 --numruns 3 --block 65536 Run #3: /usr/local/bin/tiotest -t 8 -f 2560 -r 500 -b 65536 -d . 
-TTT Unit information ================ File size = megabytes Blk Size = bytes Rate = megabytes per second CPU% = percentage of CPU used during the test Latency = milliseconds Lat% = percent of requests that took longer than X seconds CPU Eff = Rate divided by CPU% - throughput per cpu load Sequential Reads 3.19.0-25-generic 20480 65536 1 4741.47 98.03% 0.039 0.86 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 2 8710.00 369.7% 0.043 0.79 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 4 13688.96 1258.% 0.053 0.98 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 8 17183.37 5364.% 0.082 1.08 0.00000 0.00000 0 Random Reads 3.19.0-25-generic 20480 65536 1 4836.21 0% 0.037 0.22 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 2 8089.74 0% 0.043 0.40 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 4 10498.32 0% 0.058 0.47 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 8 12353.81 0% 0.085 0.65 0.00000 0.00000 0 Sequential Writes 3.19.0-25-generic 20480 65536 1 297.05 23.24% 0.629 6.21 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 2 584.97 99.80% 0.622 2255.13 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 4 457.10 206.3% 1.606 2462.01 0.00122 0.00000 0 3.19.0-25-generic 20480 65536 8 417.21 398.1% 3.197 319.45 0.00000 0.00000 0 Random Writes 3.19.0-25-generic 20480 65536 1 323.18 0.861% 0.574 3.23 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 2 49.09 0% 0.523 2.72 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 4 863.38 0% 0.716 4.48 0.00000 0.00000 0 3.19.0-25-generic 20480 65536 8 324.29 0% 1.168 5.90 0.00000 0.00000 0 I’m guessing that there’s some massive read-caching that happens in the mfsclient on ubuntu, since that read rate is obviously much greater than 10gbe, much less past the 3x SSDs in the machines. I can certainly try this with larger sizes that don’t fit into memory on the client. My testing configuration: 3 Chunkservers: - Dual Xeon X5560 - 24GB memory. - FreeBSD 10.2-Release - MooseFS 3.0.39 - Intel X520-DA2 10gbit nic - OS installed on 1tb WD RE3 HDD. - MooseFS ‘hdd’ uses an Intel DC S3610 200GB SSD. MFSMaster: - Installed on one of the chunk servers. Does not use the SSD for its storage (which should be inconsequential, since it keeps its data in memory too). Client: - Dual Xeon L5650 - 24GB Memory - Either FreeBSD 10.2, or Ubuntu 14.04 server, depending on test. - Local HDD is 1tb WD RE3. - Intel X520-DA2 10gbit nic Network: - Brocade 8000b switch, used as 10gbit switch, ignoring the FC features. -Joe |
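The Ubuntu read rates are far beyond what 10GbE or three SSDs can deliver, which supports the page-cache suspicion at the end of the message. One way to take the cache out of the picture (a suggestion reusing the flags already shown above, not a run from the thread) is to make the working set larger than the client's 24GB of RAM:

    # ~48GB per run, so the files cannot stay resident in the client's page cache
    tiobench.pl --size 49152 --numruns 3 --block 65536

With the data set no longer fitting in memory, the read phases have to fetch from the chunkservers and should land much closer to the network and SSD limits.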
From: Joseph L. <jo...@ge...> - 2015-08-28 14:20:18
|
Sorry, habit to hit ‘reply’ not ‘reply-all’ with mailing lists.. Meant to send this to the list, and not just to Aleksander. -Joe > On Aug 28, 2015, at 9:18 AM, Joseph Love <jo...@ge...> wrote: > > Hi, > > I ran iperf for 120 seconds between two of the servers. Below are the results between them. > > iperf -s > ------------------------------------------------------------ > Server listening on TCP port 5001 > TCP window size: 64.0 KByte (default) > ------------------------------------------------------------ > [ 4] local 172.20.20.92 port 5001 connected with 172.20.20.91 port 39950 > [ ID] Interval Transfer Bandwidth > [ 4] 0.0-120.0 sec 117 GBytes 8.35 Gbits/sec > > > iperf -c 172.20.20.92 -t 120 > ------------------------------------------------------------ > Client connecting to 172.20.20.92, TCP port 5001 > TCP window size: 32.5 KByte (default) > ------------------------------------------------------------ > [ 3] local 172.20.20.91 port 39950 connected with 172.20.20.92 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-120.0 sec 117 GBytes 8.35 Gbits/sec > > Based on this, it seems that the servers are able to maintain a decent amount of data transfer. > > > That’s a good note on the single-threaded issue on the mfsmaster. > The X5560 I relocated it to has the following passmark numbers on that site: > Average CPU Mark: 5442 > Single Thread rating: 1288 > > Not as awesome as a Xeon E5, but quite a bit better than the Atom. It did have a noticeable effect on performance, but less than 10%. > > > I look forward to seeing what you can identify after your current testing. > > Thanks, > -Joe > >> On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote: >> >> Hi. >> Thank you for this information. >> >> Can you do one more simple test. >> What network bandwidth can you achieve between two FreeBSD machines? >> >> I mean, something like: >> FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null >> (simple nc and dd tool will tell you a lot.) >> >> We know that FUSE on FreeBSD systems had some problems but we need to take close look to this issue. >> We will try to repeat this scenario in our test environment and return to you after 08.09.2015, because we are in progress of different tests till this day. >> >> I would like to add one more aspect. >> Mfsmaster application is single-thread so we compared your cpu with our. >> This are the results: >> (source: cpubenchmark.net <http://cpubenchmark.net/>) >> >> CPU Xeon E5-1620 v2 @ 3.70GHz >> Avarage points: 9508 >> Single Thread points: 1920 >> >> CPU Intel Atom C2758 @ 2.40GHz >> Avarage points: 3620 >> Single Thread points: 520 >> >> Best regards >> Aleksander Wieliczko >> Technical Support Engineer >> MooseFS.com <x-msg://6/moosefs.com> >> >> On 27.08.2015 23:19, Joseph Love wrote: >>> Hi, >>> >>> So, I did the following: >>> - Moved the mfsmaster to one of the chunk servers (which has Xeons in it). >>> - Installed ubuntu 14.04 on the 4th xeon server, which I was trying as a client in the latest tests. >>> >>> A single ‘dd’ instance on ubuntu 14.04 was able to write to the cluster at just under 500MB/s (494MB/s, 20GB from /dev/zero). >>> I also ran (though it failed to generate the statistics) tiobench, which ran substantially faster, before generating a divide by zero error. >>> >>> I noticed something interesting and different when doing this. >>> >>> I’ve been running nload on the chunk servers, showing the bandwidth used by the chunk servers on my screen. 
It’s a little interesting how this varies between when the freebsd client is writing and when the ubuntu client is writing (even from just dd from /dev/zero). >>> >>> The freebsd client neither achieves the speeds the ubuntu client does, nor the consistency of sending data to all the chunk servers that the ubuntu client achieves. >>> >>> http://www.getsomewhere.net/nload_freebsd_client.png <http://www.getsomewhere.net/nload_freebsd_client.png> >>> http://www.getsomewhere.net/nload_ubuntu_client.png <http://www.getsomewhere.net/nload_ubuntu_client.png> >>> >>> -Joe >>> >>> >>>> On Aug 26, 2015, at 8:56 AM, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote: >>>> >>>> Thank you for this information. >>>> >>>> We have two ideas to test in your environment: >>>> 1. Can you test mfsclient on Linux OS(like Ubuntu 14/Debian 8) with FUSE >= 2.9.3 ? >>>> 2. Can you switch from Atom to Xeon CPU for master during your tests? >>>> >>>> We are waiting for your feedback. >>>> >>>> Best regards >>>> Aleksander Wieliczko >>>> Technical Support Engineer >>>> MooseFS.com <x-msg://49/moosefs.com> >>>> >>>> On 26.08.2015 15:23, Joseph Love wrote: >>>>> Hi, >>>>> >>>>> Sure. >>>>> All parts are running 3.0.39, on FreeBSD 10.2. >>>>> >>>>> Chunk servers are dual xeon X5560s, 24gb memory. >>>>> Master is an atom C2758, 4gb memory >>>>> Clients vary between an atom C2758 4gb memory (not the master), and a dual xeon L5650, 24gb memory. >>>>> >>>>> I’ve tried two different disk setups with the chunk servers: >>>>> - with a pair of 1tb WD RE3s, mirrored; and >>>>> - with a single Intel DC S3500 SSD. >>>>> >>>>> Goal is set to 1. >>>>> NICs are Intel X520-DA2 (10gbe) with direct-attach SFP+ cables to a Broadcom 8000b switch. >>>>> >>>>> Latency from client 1 (atom C2758) - 30 packets: >>>>> to Master: round-trip min/avg/max/stddev = 0.069/0.086/0.096/0.007 ms >>>>> to chunk1: round-trip min/avg/max/stddev = 0.059/0.125/0.612/0.159 ms >>>>> to chunk2: round-trip min/avg/max/stddev = 0.049/0.087/0.611/0.098 ms >>>>> to chunk3: round-trip min/avg/max/stddev = 0.054/0.071/0.100/0.010 ms >>>>> >>>>> Latency from client 2 (xeon L5650) - 30 packets: >>>>> to Master: round-trip min/avg/max/stddev = 0.045/0.056/0.073/0.006 ms >>>>> to chunk1: round-trip min/avg/max/stddev = 0.029/0.036/0.043/0.003 ms >>>>> to chunk2: round-trip min/avg/max/stddev = 0.033/0.037/0.044/0.003 ms >>>>> to chunk3: >>>>> >>>>> There’s no traffic shaping/QOS on this LAN, and the master & chunk servers are all on a network dedicated just to them. >>>>> >>>>> I’ve also tried this from client2, partially out of curiosity: >>>>> > dd if=/dev/zero of=test.zero.dd bs=1m count=10000 & && dd if=/dev/zero of=test.zero-2.dd bs=1m count=10000 & && dd if=/dev/zero of=test.zero-3.dd bs=1m count=10000 & >>>>> >>>>> 10000+0 records in >>>>> 10000+0 records out >>>>> 10485760000 bytes transferred in 101.117837 secs (103698421 bytes/sec) >>>>> 10000+0 records in >>>>> 10000+0 records out >>>>> 10485760000 bytes transferred in 101.644066 secs (103161556 bytes/sec) >>>>> 10000+0 records in >>>>> 10000+0 records out >>>>> 10485760000 bytes transferred in 101.900907 secs (102901537 bytes/sec) >>>>> >>>>> Running 3 instances, I was able to write at 100MB/s per instance of dd. Which suggests being able to write at at least 300MB/s from a single client (just not in a single process/thread). >>>>> >>>>> I tried the same thing with 7 instances. I think it actually hit the limits on the chunk server’s disk speeds (approximately 220MB/s per ssd, iirc). 
7 instances of dd gave me these results: >>>>> 10485760000 bytes transferred in 124.690526 secs (84094280 bytes/sec) >>>>> 10485760000 bytes transferred in 124.719954 secs (84074438 bytes/sec) >>>>> 10485760000 bytes transferred in 126.579289 secs (82839460 bytes/sec) >>>>> 10485760000 bytes transferred in 124.954106 secs (83916890 bytes/sec) >>>>> 10485760000 bytes transferred in 126.181140 secs (83100850 bytes/sec) >>>>> 10485760000 bytes transferred in 126.294929 secs (83025978 bytes/sec) >>>>> 10485760000 bytes transferred in 103.810845 secs (101008329 bytes/sec) >>>>> Add that all up, and it’s about 600MB/s, which is about 200MB/s/chunkserver - pretty close to the theoretical max for the SSDs. >>>>> >>>>> So, I guess a single client can reach the speeds, just not in a single process/thread. >>>>> >>>>> -Joe >>>>> >>>>> >>>>>> On Aug 26, 2015, at 1:17 AM, Aleksander Wieliczko < <mailto:ale...@mo...>ale...@mo... <mailto:ale...@mo...>> wrote: >>>>>> >>>>>> Hi. >>>>>> >>>>>> Can we get some more details about your configuration? >>>>>> >>>>>> - MooseFS master version? >>>>>> - MooseFS chunkserver version? >>>>>> - MooseFS client version? >>>>>> - Kernel version ? >>>>>> - CPU speed ? >>>>>> - RAM size ? >>>>>> - Number of disks per chunkserver? >>>>>> - What GOAL you set for test folder? >>>>>> - NIC interface type - Coper or Fiber? >>>>>> - Network latency from client to master and chunkservers(ping mfsmaster)? >>>>>> - Do you have some traffic shaping/QOS in you LAN? >>>>>> >>>>>> Best regards >>>>>> Aleksander Wieliczko >>>>>> Technical Support Engineer >>>>>> MooseFS.com <x-msg://27/moosefs.com> >>>>>> >>>>>> >>>>>> On 25.08.2015 18:29, Joseph Love wrote: >>>>>>> They’re all on 10gbe. I did turn on Jumbo frames during my testing, it didn’t seem to make a really big difference. >>>>>>> >>>>>>> I just tried with SSDs in the chunk servers (Intel DC S3500 200GB), and still seeing about the same performance characteristic. >>>>>>> >>>>>>> I know in the middle of some other synthetic tests that I can break 200MB/s (sequential) read, 150MB/s write, but that’s a multithreaded test application. >>>>>>> Actually, now that I say that, I suppose it might be a single-thread performance characteristic with FUSE on FreeBSD. >>>>>>> >>>>>>> Anyone have statistic from a Linux system that shows > 100MB/s per thread on a client? >>>>>>> >>>>>>> -Joe >>>>>>> >>>>>>>> On Aug 25, 2015, at 11:12 AM, Ricardo J. Barberis <ric...@do...> <mailto:ric...@do...> wrote: >>>>>>>> >>>>>>>> El Martes 25/08/2015, Joseph Love escribió: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I’ve been doing some tests to get an idea as to what sort of speeds I can >>>>>>>>> maybe expect from moosefs, and ran into something unexpected. From >>>>>>>>> multiple clients, I can sustain 80-100MB/s per client (only tested up to 3 >>>>>>>>> clients) to my 3-node cluster (3 chunk servers). From a single client >>>>>>>>> (while everything else is idle) I get the same result. It occurred to me >>>>>>>>> that the write speed to a disk in each chunk server is roughly 100MB/s, and >>>>>>>>> I was curious if this seems to be the likely culprit for performance >>>>>>>>> limitations for a single stream from a single client. >>>>>>>>> >>>>>>>>> I’m about to try it again with SSDs, but I have a bit of time before that’s >>>>>>>>> ready, and I figured I’d try to pose the question early. >>>>>>>>> >>>>>>>>> Thoughts? >>>>>>>>> >>>>>>>>> -Joe >>>>>>>> How about network? If your clients are connected to 1 Gbps, 100 MB/s is nearly >>>>>>>> saturating the network. 
>>>>>>>> >>>>>>>> Also, using Jumbo frames might give you a few extra MB/s. >>>>>>>> >>>>>>>> Regards, >>>>>>>> -- >>>>>>>> Ricardo J. Barberis >>>>>>>> Senior SysAdmin / IT Architect >>>>>>>> DonWeb >>>>>>>> La Actitud Es Todo >>>>>>>> www.DonWeb.com <http://www.donweb.com/> >>>>>>>> _____ >>>>>>>> >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> _________________________________________ >>>>>>>> moosefs-users mailing list >>>>>>>> moo...@li... <mailto:moo...@li...> >>>>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> _________________________________________ >>>>>>> moosefs-users mailing list >>>>>>> moo...@li... <mailto:moo...@li...> >>>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> _________________________________________ >>>>>> moosefs-users mailing list >>>>>> moo...@li... <mailto:moo...@li...> >>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>> >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> >>> >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... <mailto:moo...@li...> >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >> >> ------------------------------------------------------------------------------ >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: <pen...@ic...> - 2015-08-28 08:22:15
|
Hi joe: Do the performance test results to share? Many thanks. pen...@ic... From: Joseph Love Date: 2015-08-28 05:19 To: moosefs-users Subject: Re: [MooseFS-Users] performance inquiry Hi, So, I did the following: - Moved the mfsmaster to one of the chunk servers (which has Xeons in it). - Installed ubuntu 14.04 on the 4th xeon server, which I was trying as a client in the latest tests. A single ‘dd’ instance on ubuntu 14.04 was able to write to the cluster at just under 500MB/s (494MB/s, 20GB from /dev/zero). I also ran (though it failed to generate the statistics) tiobench, which ran substantially faster, before generating a divide by zero error. I noticed something interesting and different when doing this. I’ve been running nload on the chunk servers, showing the bandwidth used by the chunk servers on my screen. It’s a little interesting how this varies between when the freebsd client is writing and when the ubuntu client is writing (even from just dd from /dev/zero). The freebsd client neither achieves the speeds the ubuntu client does, nor the consistency of sending data to all the chunk servers that the ubuntu client achieves. http://www.getsomewhere.net/nload_freebsd_client.png http://www.getsomewhere.net/nload_ubuntu_client.png -Joe On Aug 26, 2015, at 8:56 AM, Aleksander Wieliczko <ale...@mo...> wrote: Thank you for this information. We have two ideas to test in your environment: 1. Can you test mfsclient on Linux OS(like Ubuntu 14/Debian 8) with FUSE >= 2.9.3 ? 2. Can you switch from Atom to Xeon CPU for master during your tests? We are waiting for your feedback. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 26.08.2015 15:23, Joseph Love wrote: Hi, Sure. All parts are running 3.0.39, on FreeBSD 10.2. Chunk servers are dual xeon X5560s, 24gb memory. Master is an atom C2758, 4gb memory Clients vary between an atom C2758 4gb memory (not the master), and a dual xeon L5650, 24gb memory. I’ve tried two different disk setups with the chunk servers: - with a pair of 1tb WD RE3s, mirrored; and - with a single Intel DC S3500 SSD. Goal is set to 1. NICs are Intel X520-DA2 (10gbe) with direct-attach SFP+ cables to a Broadcom 8000b switch. Latency from client 1 (atom C2758) - 30 packets: to Master: round-trip min/avg/max/stddev = 0.069/0.086/0.096/0.007 ms to chunk1: round-trip min/avg/max/stddev = 0.059/0.125/0.612/0.159 ms to chunk2: round-trip min/avg/max/stddev = 0.049/0.087/0.611/0.098 ms to chunk3: round-trip min/avg/max/stddev = 0.054/0.071/0.100/0.010 ms Latency from client 2 (xeon L5650) - 30 packets: to Master: round-trip min/avg/max/stddev = 0.045/0.056/0.073/0.006 ms to chunk1: round-trip min/avg/max/stddev = 0.029/0.036/0.043/0.003 ms to chunk2: round-trip min/avg/max/stddev = 0.033/0.037/0.044/0.003 ms to chunk3: There’s no traffic shaping/QOS on this LAN, and the master & chunk servers are all on a network dedicated just to them. 
I’ve also tried this from client2, partially out of curiosity: > dd if=/dev/zero of=test.zero.dd bs=1m count=10000 & && dd if=/dev/zero of=test.zero-2.dd bs=1m count=10000 & && dd if=/dev/zero of=test.zero-3.dd bs=1m count=10000 & 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 101.117837 secs (103698421 bytes/sec) 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 101.644066 secs (103161556 bytes/sec) 10000+0 records in 10000+0 records out 10485760000 bytes transferred in 101.900907 secs (102901537 bytes/sec) Running 3 instances, I was able to write at 100MB/s per instance of dd. Which suggests being able to write at at least 300MB/s from a single client (just not in a single process/thread). I tried the same thing with 7 instances. I think it actually hit the limits on the chunk server’s disk speeds (approximately 220MB/s per ssd, iirc). 7 instances of dd gave me these results: 10485760000 bytes transferred in 124.690526 secs (84094280 bytes/sec) 10485760000 bytes transferred in 124.719954 secs (84074438 bytes/sec) 10485760000 bytes transferred in 126.579289 secs (82839460 bytes/sec) 10485760000 bytes transferred in 124.954106 secs (83916890 bytes/sec) 10485760000 bytes transferred in 126.181140 secs (83100850 bytes/sec) 10485760000 bytes transferred in 126.294929 secs (83025978 bytes/sec) 10485760000 bytes transferred in 103.810845 secs (101008329 bytes/sec) Add that all up, and it’s about 600MB/s, which is about 200MB/s/chunkserver - pretty close to the theoretical max for the SSDs. So, I guess a single client can reach the speeds, just not in a single process/thread. -Joe On Aug 26, 2015, at 1:17 AM, Aleksander Wieliczko <ale...@mo...> wrote: Hi. Can we get some more details about your configuration? - MooseFS master version? - MooseFS chunkserver version? - MooseFS client version? - Kernel version ? - CPU speed ? - RAM size ? - Number of disks per chunkserver? - What GOAL you set for test folder? - NIC interface type - Coper or Fiber? - Network latency from client to master and chunkservers(ping mfsmaster)? - Do you have some traffic shaping/QOS in you LAN? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 25.08.2015 18:29, Joseph Love wrote: They’re all on 10gbe. I did turn on Jumbo frames during my testing, it didn’t seem to make a really big difference. I just tried with SSDs in the chunk servers (Intel DC S3500 200GB), and still seeing about the same performance characteristic. I know in the middle of some other synthetic tests that I can break 200MB/s (sequential) read, 150MB/s write, but that’s a multithreaded test application. Actually, now that I say that, I suppose it might be a single-thread performance characteristic with FUSE on FreeBSD. Anyone have statistic from a Linux system that shows > 100MB/s per thread on a client? -Joe On Aug 25, 2015, at 11:12 AM, Ricardo J. Barberis <ric...@do...> wrote: El Martes 25/08/2015, Joseph Love escribió: Hi, I’ve been doing some tests to get an idea as to what sort of speeds I can maybe expect from moosefs, and ran into something unexpected. From multiple clients, I can sustain 80-100MB/s per client (only tested up to 3 clients) to my 3-node cluster (3 chunk servers). From a single client (while everything else is idle) I get the same result. It occurred to me that the write speed to a disk in each chunk server is roughly 100MB/s, and I was curious if this seems to be the likely culprit for performance limitations for a single stream from a single client. 
I’m about to try it again with SSDs, but I have a bit of time before that’s ready, and I figured I’d try to pose the question early. Thoughts? -Joe How about network? If your clients are connected to 1 Gbps, 100 MB/s is nearly saturating the network. Also, using Jumbo frames might give you a few extra MB/s. Regards, -- Ricardo J. Barberis Senior SysAdmin / IT Architect DonWeb La Actitud Es Todo www.DonWeb.com _____ ------------------------------------------------------------------------------ _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users ------------------------------------------------------------------------------ _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users ------------------------------------------------------------------------------ _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Aleksander W. <ale...@mo...> - 2015-08-28 06:38:56
|
Hi. Thank you for this information. Can you do one more simple test. What network bandwidth can you achieve between two FreeBSD machines? I mean, something like: FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null (simple nc and dd tool will tell you a lot.) We know that FUSE on FreeBSD systems had some problems but we need to take close look to this issue. We will try to repeat this scenario in our test environment and return to you after 08.09.2015, because we are in progress of different tests till this day. I would like to add one more aspect. Mfsmaster application is single-thread so we compared your cpu with our. This are the results: *(source: cpubenchmark.net)* CPU Xeon E5-1620 v2 @ 3.70GHz Avarage points: 9508 *Single Thread points**: 1920* CPU Intel Atom C2758 @ 2.40GHz Avarage points: 3620 *Single Thread points: 520* Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 27.08.2015 23:19, Joseph Love wrote: > Hi, > > So, I did the following: > - Moved the mfsmaster to one of the chunk servers (which has Xeons in it). > - Installed ubuntu 14.04 on the 4th xeon server, which I was trying as > a client in the latest tests. > > A single ‘dd’ instance on ubuntu 14.04 was able to write to the > cluster at just under 500MB/s (494MB/s, 20GB from /dev/zero). > I also ran (though it failed to generate the statistics) tiobench, > which ran substantially faster, before generating a divide by zero error. > > I noticed something interesting and different when doing this. > > I’ve been running nload on the chunk servers, showing the bandwidth > used by the chunk servers on my screen. It’s a little interesting how > this varies between when the freebsd client is writing and when the > ubuntu client is writing (even from just dd from /dev/zero). > > The freebsd client neither achieves the speeds the ubuntu client does, > nor the consistency of sending data to all the chunk servers that the > ubuntu client achieves. > > http://www.getsomewhere.net/nload_freebsd_client.png > http://www.getsomewhere.net/nload_ubuntu_client.png > > -Joe > > >> On Aug 26, 2015, at 8:56 AM, Aleksander Wieliczko >> <ale...@mo... >> <mailto:ale...@mo...>> wrote: >> >> Thank you for this information. >> >> We have two ideas to test in your environment: >> 1. Can you test mfsclient on Linux OS(like Ubuntu 14/Debian 8) with >> FUSE >= 2.9.3 ? >> 2. Can you switch from Atom to Xeon CPU for master during your tests? >> >> We are waiting for your feedback. >> >> Best regards >> Aleksander Wieliczko >> Technical Support Engineer >> MooseFS.com <x-msg://49/moosefs.com> >> >> On 26.08.2015 15:23, Joseph Love wrote: >>> Hi, >>> >>> Sure. >>> All parts are running 3.0.39, on FreeBSD 10.2. >>> >>> Chunk servers are dual xeon X5560s, 24gb memory. >>> Master is an atom C2758, 4gb memory >>> Clients vary between an atom C2758 4gb memory (not the master), and >>> a dual xeon L5650, 24gb memory. >>> >>> I’ve tried two different disk setups with the chunk servers: >>> - with a pair of 1tb WD RE3s, mirrored; and >>> - with a single Intel DC S3500 SSD. >>> >>> Goal is set to 1. >>> NICs are Intel X520-DA2 (10gbe) with direct-attach SFP+ cables to a >>> Broadcom 8000b switch. 
>>> >>> Latency from client 1 (atom C2758) - 30 packets: >>> to Master: round-trip min/avg/max/stddev = 0.069/0.086/0.096/0.007 ms >>> to chunk1: round-trip min/avg/max/stddev = 0.059/0.125/0.612/0.159 ms >>> to chunk2: round-trip min/avg/max/stddev = 0.049/0.087/0.611/0.098 ms >>> to chunk3: round-trip min/avg/max/stddev = 0.054/0.071/0.100/0.010 ms >>> >>> Latency from client 2 (xeon L5650) - 30 packets: >>> to Master: round-trip min/avg/max/stddev = 0.045/0.056/0.073/0.006 ms >>> to chunk1: round-trip min/avg/max/stddev = 0.029/0.036/0.043/0.003 ms >>> to chunk2: round-trip min/avg/max/stddev = 0.033/0.037/0.044/0.003 ms >>> to chunk3: >>> >>> There’s no traffic shaping/QOS on this LAN, and the master & chunk >>> servers are all on a network dedicated just to them. >>> >>> I’ve also tried this from client2, partially out of curiosity: >>> > dd if=/dev/zero of=test.zero.dd bs=1m count=10000 & && dd >>> if=/dev/zero of=test.zero-2.dd bs=1m count=10000 & && dd >>> if=/dev/zero of=test.zero-3.dd bs=1m count=10000 & >>> >>> 10000+0 records in >>> 10000+0 records out >>> 10485760000 bytes transferred in 101.117837 secs (103698421 bytes/sec) >>> 10000+0 records in >>> 10000+0 records out >>> 10485760000 bytes transferred in 101.644066 secs (103161556 bytes/sec) >>> 10000+0 records in >>> 10000+0 records out >>> 10485760000 bytes transferred in 101.900907 secs (102901537 bytes/sec) >>> >>> Running 3 instances, I was able to write at 100MB/s per instance of >>> dd. Which suggests being able to write at at least 300MB/s from a >>> single client (just not in a single process/thread). >>> >>> I tried the same thing with 7 instances. I think it actually hit >>> the limits on the chunk server’s disk speeds (approximately 220MB/s >>> per ssd, iirc). 7 instances of dd gave me these results: >>> 10485760000 bytes transferred in 124.690526 secs (84094280 bytes/sec) >>> 10485760000 bytes transferred in 124.719954 secs (84074438 bytes/sec) >>> 10485760000 bytes transferred in 126.579289 secs (82839460 bytes/sec) >>> 10485760000 bytes transferred in 124.954106 secs (83916890 bytes/sec) >>> 10485760000 bytes transferred in 126.181140 secs (83100850 bytes/sec) >>> 10485760000 bytes transferred in 126.294929 secs (83025978 bytes/sec) >>> 10485760000 bytes transferred in 103.810845 secs (101008329 bytes/sec) >>> Add that all up, and it’s about 600MB/s, which is about >>> 200MB/s/chunkserver - pretty close to the theoretical max for the SSDs. >>> >>> So, I guess a single client can reach the speeds, just not in a >>> single process/thread. >>> >>> -Joe >>> >>> >>>> On Aug 26, 2015, at 1:17 AM, Aleksander Wieliczko >>>> <ale...@mo... >>>> <mailto:ale...@mo...>> wrote: >>>> >>>> Hi. >>>> >>>> Can we get some more details about your configuration? >>>> >>>> - MooseFS master version? >>>> - MooseFS chunkserver version? >>>> - MooseFS client version? >>>> - Kernel version ? >>>> - CPU speed ? >>>> - RAM size ? >>>> - Number of disks per chunkserver? >>>> - What GOAL you set for test folder? >>>> - NIC interface type - Coper or Fiber? >>>> - Network latency from client to master and chunkservers(ping >>>> mfsmaster)? >>>> - Do you have some traffic shaping/QOS in you LAN? >>>> >>>> Best regards >>>> Aleksander Wieliczko >>>> Technical Support Engineer >>>> MooseFS.com <x-msg://27/moosefs.com> >>>> >>>> >>>> On 25.08.2015 18:29, Joseph Love wrote: >>>>> They’re all on 10gbe. I did turn on Jumbo frames during my testing, it didn’t seem to make a really big difference. 
>>>>> >>>>> I just tried with SSDs in the chunk servers (Intel DC S3500 200GB), and still seeing about the same performance characteristic. >>>>> >>>>> I know in the middle of some other synthetic tests that I can break 200MB/s (sequential) read, 150MB/s write, but that’s a multithreaded test application. >>>>> Actually, now that I say that, I suppose it might be a single-thread performance characteristic with FUSE on FreeBSD. >>>>> >>>>> Anyone have statistic from a Linux system that shows > 100MB/s per thread on a client? >>>>> >>>>> -Joe >>>>> >>>>>> On Aug 25, 2015, at 11:12 AM, Ricardo J. Barberis <ric...@do...> wrote: >>>>>> >>>>>> El Martes 25/08/2015, Joseph Love escribió: >>>>>>> Hi, >>>>>>> >>>>>>> I’ve been doing some tests to get an idea as to what sort of speeds I can >>>>>>> maybe expect from moosefs, and ran into something unexpected. From >>>>>>> multiple clients, I can sustain 80-100MB/s per client (only tested up to 3 >>>>>>> clients) to my 3-node cluster (3 chunk servers). From a single client >>>>>>> (while everything else is idle) I get the same result. It occurred to me >>>>>>> that the write speed to a disk in each chunk server is roughly 100MB/s, and >>>>>>> I was curious if this seems to be the likely culprit for performance >>>>>>> limitations for a single stream from a single client. >>>>>>> >>>>>>> I’m about to try it again with SSDs, but I have a bit of time before that’s >>>>>>> ready, and I figured I’d try to pose the question early. >>>>>>> >>>>>>> Thoughts? >>>>>>> >>>>>>> -Joe >>>>>> How about network? If your clients are connected to 1 Gbps, 100 MB/s is nearly >>>>>> saturating the network. >>>>>> >>>>>> Also, using Jumbo frames might give you a few extra MB/s. >>>>>> >>>>>> Regards, >>>>>> -- >>>>>> Ricardo J. Barberis >>>>>> Senior SysAdmin / IT Architect >>>>>> DonWeb >>>>>> La Actitud Es Todo >>>>>> www.DonWeb.com >>>>>> _____ >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> _________________________________________ >>>>>> moosefs-users mailing list >>>>>> moo...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>>>> ------------------------------------------------------------------------------ >>>>> _________________________________________ >>>>> moosefs-users mailing list >>>>> moo...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>>> >>>> ------------------------------------------------------------------------------ >>>> _________________________________________ >>>> moosefs-users mailing list >>>> moo...@li... >>>> <mailto:moo...@li...> >>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> > > > > ------------------------------------------------------------------------------ > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Joseph L. <jo...@ge...> - 2015-08-27 21:19:37
|
Hi, So, I did the following: - Moved the mfsmaster to one of the chunk servers (which has Xeons in it). - Installed ubuntu 14.04 on the 4th xeon server, which I was trying as a client in the latest tests. A single ‘dd’ instance on ubuntu 14.04 was able to write to the cluster at just under 500MB/s (494MB/s, 20GB from /dev/zero). I also ran (though it failed to generate the statistics) tiobench, which ran substantially faster, before generating a divide by zero error. I noticed something interesting and different when doing this. I’ve been running nload on the chunk servers, showing the bandwidth used by the chunk servers on my screen. It’s a little interesting how this varies between when the freebsd client is writing and when the ubuntu client is writing (even from just dd from /dev/zero). The freebsd client neither achieves the speeds the ubuntu client does, nor the consistency of sending data to all the chunk servers that the ubuntu client achieves. http://www.getsomewhere.net/nload_freebsd_client.png <http://www.getsomewhere.net/nload_freebsd_client.png> http://www.getsomewhere.net/nload_ubuntu_client.png <http://www.getsomewhere.net/nload_ubuntu_client.png> -Joe > On Aug 26, 2015, at 8:56 AM, Aleksander Wieliczko <ale...@mo...> wrote: > > Thank you for this information. > > We have two ideas to test in your environment: > 1. Can you test mfsclient on Linux OS(like Ubuntu 14/Debian 8) with FUSE >= 2.9.3 ? > 2. Can you switch from Atom to Xeon CPU for master during your tests? > > We are waiting for your feedback. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <x-msg://49/moosefs.com> > > On 26.08.2015 15:23, Joseph Love wrote: >> Hi, >> >> Sure. >> All parts are running 3.0.39, on FreeBSD 10.2. >> >> Chunk servers are dual xeon X5560s, 24gb memory. >> Master is an atom C2758, 4gb memory >> Clients vary between an atom C2758 4gb memory (not the master), and a dual xeon L5650, 24gb memory. >> >> I’ve tried two different disk setups with the chunk servers: >> - with a pair of 1tb WD RE3s, mirrored; and >> - with a single Intel DC S3500 SSD. >> >> Goal is set to 1. >> NICs are Intel X520-DA2 (10gbe) with direct-attach SFP+ cables to a Broadcom 8000b switch. >> >> Latency from client 1 (atom C2758) - 30 packets: >> to Master: round-trip min/avg/max/stddev = 0.069/0.086/0.096/0.007 ms >> to chunk1: round-trip min/avg/max/stddev = 0.059/0.125/0.612/0.159 ms >> to chunk2: round-trip min/avg/max/stddev = 0.049/0.087/0.611/0.098 ms >> to chunk3: round-trip min/avg/max/stddev = 0.054/0.071/0.100/0.010 ms >> >> Latency from client 2 (xeon L5650) - 30 packets: >> to Master: round-trip min/avg/max/stddev = 0.045/0.056/0.073/0.006 ms >> to chunk1: round-trip min/avg/max/stddev = 0.029/0.036/0.043/0.003 ms >> to chunk2: round-trip min/avg/max/stddev = 0.033/0.037/0.044/0.003 ms >> to chunk3: >> >> There’s no traffic shaping/QOS on this LAN, and the master & chunk servers are all on a network dedicated just to them. 
>> >> I’ve also tried this from client2, partially out of curiosity: >> > dd if=/dev/zero of=test.zero.dd bs=1m count=10000 & && dd if=/dev/zero of=test.zero-2.dd bs=1m count=10000 & && dd if=/dev/zero of=test.zero-3.dd bs=1m count=10000 & >> >> 10000+0 records in >> 10000+0 records out >> 10485760000 bytes transferred in 101.117837 secs (103698421 bytes/sec) >> 10000+0 records in >> 10000+0 records out >> 10485760000 bytes transferred in 101.644066 secs (103161556 bytes/sec) >> 10000+0 records in >> 10000+0 records out >> 10485760000 bytes transferred in 101.900907 secs (102901537 bytes/sec) >> >> Running 3 instances, I was able to write at 100MB/s per instance of dd. Which suggests being able to write at at least 300MB/s from a single client (just not in a single process/thread). >> >> I tried the same thing with 7 instances. I think it actually hit the limits on the chunk server’s disk speeds (approximately 220MB/s per ssd, iirc). 7 instances of dd gave me these results: >> 10485760000 bytes transferred in 124.690526 secs (84094280 bytes/sec) >> 10485760000 bytes transferred in 124.719954 secs (84074438 bytes/sec) >> 10485760000 bytes transferred in 126.579289 secs (82839460 bytes/sec) >> 10485760000 bytes transferred in 124.954106 secs (83916890 bytes/sec) >> 10485760000 bytes transferred in 126.181140 secs (83100850 bytes/sec) >> 10485760000 bytes transferred in 126.294929 secs (83025978 bytes/sec) >> 10485760000 bytes transferred in 103.810845 secs (101008329 bytes/sec) >> Add that all up, and it’s about 600MB/s, which is about 200MB/s/chunkserver - pretty close to the theoretical max for the SSDs. >> >> So, I guess a single client can reach the speeds, just not in a single process/thread. >> >> -Joe >> >> >>> On Aug 26, 2015, at 1:17 AM, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote: >>> >>> Hi. >>> >>> Can we get some more details about your configuration? >>> >>> - MooseFS master version? >>> - MooseFS chunkserver version? >>> - MooseFS client version? >>> - Kernel version ? >>> - CPU speed ? >>> - RAM size ? >>> - Number of disks per chunkserver? >>> - What GOAL you set for test folder? >>> - NIC interface type - Coper or Fiber? >>> - Network latency from client to master and chunkservers(ping mfsmaster)? >>> - Do you have some traffic shaping/QOS in you LAN? >>> >>> Best regards >>> Aleksander Wieliczko >>> Technical Support Engineer >>> MooseFS.com <x-msg://27/moosefs.com> >>> >>> >>> On 25.08.2015 18:29, Joseph Love wrote: >>>> They’re all on 10gbe. I did turn on Jumbo frames during my testing, it didn’t seem to make a really big difference. >>>> >>>> I just tried with SSDs in the chunk servers (Intel DC S3500 200GB), and still seeing about the same performance characteristic. >>>> >>>> I know in the middle of some other synthetic tests that I can break 200MB/s (sequential) read, 150MB/s write, but that’s a multithreaded test application. >>>> Actually, now that I say that, I suppose it might be a single-thread performance characteristic with FUSE on FreeBSD. >>>> >>>> Anyone have statistic from a Linux system that shows > 100MB/s per thread on a client? >>>> >>>> -Joe >>>> >>>>> On Aug 25, 2015, at 11:12 AM, Ricardo J. Barberis <ric...@do...> <mailto:ric...@do...> wrote: >>>>> >>>>> El Martes 25/08/2015, Joseph Love escribió: >>>>>> Hi, >>>>>> >>>>>> I’ve been doing some tests to get an idea as to what sort of speeds I can >>>>>> maybe expect from moosefs, and ran into something unexpected. 
From >>>>>> multiple clients, I can sustain 80-100MB/s per client (only tested up to 3 >>>>>> clients) to my 3-node cluster (3 chunk servers). From a single client >>>>>> (while everything else is idle) I get the same result. It occurred to me >>>>>> that the write speed to a disk in each chunk server is roughly 100MB/s, and >>>>>> I was curious if this seems to be the likely culprit for performance >>>>>> limitations for a single stream from a single client. >>>>>> >>>>>> I’m about to try it again with SSDs, but I have a bit of time before that’s >>>>>> ready, and I figured I’d try to pose the question early. >>>>>> >>>>>> Thoughts? >>>>>> >>>>>> -Joe >>>>> How about network? If your clients are connected to 1 Gbps, 100 MB/s is nearly >>>>> saturating the network. >>>>> >>>>> Also, using Jumbo frames might give you a few extra MB/s. >>>>> >>>>> Regards, >>>>> -- >>>>> Ricardo J. Barberis >>>>> Senior SysAdmin / IT Architect >>>>> DonWeb >>>>> La Actitud Es Todo >>>>> www.DonWeb.com <http://www.donweb.com/> >>>>> _____ >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> _________________________________________ >>>>> moosefs-users mailing list >>>>> moo...@li... <mailto:moo...@li...> >>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>> ------------------------------------------------------------------------------ >>>> _________________________________________ >>>> moosefs-users mailing list >>>> moo...@li... <mailto:moo...@li...> >>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>> >>> ------------------------------------------------------------------------------ >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... <mailto:moo...@li...> >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >> > |
From: Aleksander W. <ale...@mo...> - 2015-08-26 06:17:45
|
Hi. Can we get some more details about your configuration? - MooseFS master version? - MooseFS chunkserver version? - MooseFS client version? - Kernel version ? - CPU speed ? - RAM size ? - Number of disks per chunkserver? - What GOAL you set for test folder? - NIC interface type - Coper or Fiber? - Network latency from client to master and chunkservers(ping mfsmaster)? - Do you have some traffic shaping/QOS in you LAN? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 25.08.2015 18:29, Joseph Love wrote: > They’re all on 10gbe. I did turn on Jumbo frames during my testing, it didn’t seem to make a really big difference. > > I just tried with SSDs in the chunk servers (Intel DC S3500 200GB), and still seeing about the same performance characteristic. > > I know in the middle of some other synthetic tests that I can break 200MB/s (sequential) read, 150MB/s write, but that’s a multithreaded test application. > Actually, now that I say that, I suppose it might be a single-thread performance characteristic with FUSE on FreeBSD. > > Anyone have statistic from a Linux system that shows > 100MB/s per thread on a client? > > -Joe > >> On Aug 25, 2015, at 11:12 AM, Ricardo J. Barberis <ric...@do...> wrote: >> >> El Martes 25/08/2015, Joseph Love escribió: >>> Hi, >>> >>> I’ve been doing some tests to get an idea as to what sort of speeds I can >>> maybe expect from moosefs, and ran into something unexpected. From >>> multiple clients, I can sustain 80-100MB/s per client (only tested up to 3 >>> clients) to my 3-node cluster (3 chunk servers). From a single client >>> (while everything else is idle) I get the same result. It occurred to me >>> that the write speed to a disk in each chunk server is roughly 100MB/s, and >>> I was curious if this seems to be the likely culprit for performance >>> limitations for a single stream from a single client. >>> >>> I’m about to try it again with SSDs, but I have a bit of time before that’s >>> ready, and I figured I’d try to pose the question early. >>> >>> Thoughts? >>> >>> -Joe >> How about network? If your clients are connected to 1 Gbps, 100 MB/s is nearly >> saturating the network. >> >> Also, using Jumbo frames might give you a few extra MB/s. >> >> Regards, >> -- >> Ricardo J. Barberis >> Senior SysAdmin / IT Architect >> DonWeb >> La Actitud Es Todo >> www.DonWeb.com >> _____ >> >> ------------------------------------------------------------------------------ >> _________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > ------------------------------------------------------------------------------ > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Joseph L. <jo...@ge...> - 2015-08-25 16:29:21
|
They’re all on 10GbE. I did turn on Jumbo frames during my testing; it didn’t seem to make a really big difference.

I just tried with SSDs in the chunk servers (Intel DC S3500 200GB), and I’m still seeing about the same performance characteristic.

I know, in the middle of some other synthetic tests, that I can break 200MB/s (sequential) read and 150MB/s write, but that’s a multithreaded test application.
Actually, now that I say that, I suppose it might be a single-thread performance characteristic of FUSE on FreeBSD.

Does anyone have statistics from a Linux system that show >100MB/s per thread on a client?

-Joe

> On Aug 25, 2015, at 11:12 AM, Ricardo J. Barberis <ric...@do...> wrote:
>
> How about the network? If your clients are connected at 1 Gbps, 100 MB/s is nearly
> saturating the network.
>
> Also, using Jumbo frames might give you a few extra MB/s.
>
> Regards,
> --
> Ricardo J. Barberis
> Senior SysAdmin / IT Architect
> DonWeb
> La Actitud Es Todo
> www.DonWeb.com
|
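One way to narrow down whether this is a per-stream limit (FUSE/client side) rather than a cluster limit is to compare a single dd stream with a few streams running in parallel on the same mount. A rough sketch, not a rigorous benchmark; /mnt/mfs and the file names are placeholders, and conv=fdatasync is used so the final flush is counted.

MNT=/mnt/mfs    # placeholder: your mfsmount mount point

# one sequential write stream (4 GiB)
dd if=/dev/zero of=$MNT/stream1.bin bs=1M count=4096 conv=fdatasync

# three parallel write streams; compare the aggregate with the single-stream figure
for i in 1 2 3; do
    dd if=/dev/zero of=$MNT/par$i.bin bs=1M count=2048 conv=fdatasync &
done
wait

# sequential read back (drop the page cache first; Linux only, run as root)
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=$MNT/stream1.bin of=/dev/null bs=1M

If the parallel streams together clearly exceed what one stream achieves, the ceiling is per stream rather than in the network or the disks.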
From: Ricardo J. B. <ric...@do...> - 2015-08-25 16:12:53
|
On Tuesday 25/08/2015, Joseph Love wrote:
> Hi,
>
> I’ve been doing some tests to get an idea of what sort of speeds I might expect
> from MooseFS, and ran into something unexpected. From multiple clients, I can
> sustain 80-100MB/s per client (only tested up to 3 clients) to my 3-node cluster
> (3 chunk servers). From a single client (while everything else is idle) I get the
> same result. It occurred to me that the write speed to a disk in each chunk server
> is roughly 100MB/s, and I was curious if this seems to be the likely culprit for
> performance limitations for a single stream from a single client.
>
> I’m about to try it again with SSDs, but I have a bit of time before that’s ready,
> and I figured I’d try to pose the question early.
>
> Thoughts?
>
> -Joe

How about the network? If your clients are connected at 1 Gbps, 100 MB/s is nearly
saturating the network.

Also, using Jumbo frames might give you a few extra MB/s.

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
|
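Ricardo's point is easy to sanity-check with quick arithmetic and a raw TCP test: 1 Gbit/s is 125 MB/s before protocol overhead, and with Ethernet, IP and TCP headers a single stream usually tops out around 110-118 MB/s, so 80-100 MB/s is already close to line rate on gigabit. A hedged sketch, assuming iperf3 is installed on both ends; the host and interface names are placeholders. On 10GbE the same test shows whether the path really delivers much more than that.

# on a chunkserver (placeholder name: chunk1)
iperf3 -s

# on the client: raw single-stream TCP throughput to that chunkserver
iperf3 -c chunk1 -t 30

# check the negotiated link speed and MTU on both ends
ethtool eth0 | grep -i speed
ip link show eth0 | grep -o 'mtu [0-9]*'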
From: Joseph L. <jo...@ge...> - 2015-08-25 14:38:45
|
Hi,

I’ve been doing some tests to get an idea of what sort of speeds I might expect from MooseFS, and ran into something unexpected. From multiple clients, I can sustain 80-100MB/s per client (only tested up to 3 clients) to my 3-node cluster (3 chunk servers). From a single client (while everything else is idle) I get the same result. It occurred to me that the write speed to a disk in each chunk server is roughly 100MB/s, and I was curious if this seems to be the likely culprit for performance limitations for a single stream from a single client.

I’m about to try it again with SSDs, but I have a bit of time before that’s ready, and I figured I’d try to pose the question early.

Thoughts?

-Joe
|
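To test the single-disk suspicion, it is worth measuring a chunkserver disk directly, outside MooseFS. A minimal sketch, assuming /mnt/chunk1 is one of the paths listed in mfshdd.cfg and /dev/sda is the device behind it (both are placeholders). Also worth remembering: if the goal is 2 or higher, a client write is forwarded along a chain of chunkservers, so a single stream cannot go faster than the slowest disk in that chain.

# sequential write straight to one chunkserver data disk (2 GiB, flush included)
dd if=/dev/zero of=/mnt/chunk1/ddtest.bin bs=1M count=2048 conv=fdatasync
rm /mnt/chunk1/ddtest.bin

# raw sequential read speed of the underlying device
hdparm -t /dev/sda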
From: R. C. <mil...@gm...> - 2015-08-25 11:09:01
|
I designed my MFS deployment so that the master and chunkservers work in their own silent, isolated physical LAN and only the master is reachable by the rest of the company computers. Now I'm stuck with NFS or SAMBA... bummer!

Thank you Aleksander.

Bye
Raf

2015-08-25 12:33 GMT+02:00 Aleksander Wieliczko <ale...@mo...>:
> Hi.
>
> Please check your firewall configuration.
> The MooseFS client needs to talk not only to the master but also to the chunkservers.
>
> Chunkservers listen on port 9422 for client connections.
> All ports used by MooseFS are in the range 9419-9422.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
|
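If the chunkservers really have to stay on an isolated LAN, the usual workaround is a gateway box that mounts MooseFS natively and re-shares it over Samba (the thread above discourages nfs-kernel-server on top of mfsmount). A minimal, hedged sketch; the master name, paths, share name and service name are placeholders and will need adjusting.

# on the gateway: mount MooseFS natively, then re-share over Samba
mfsmount -H mfsmaster -S / /mnt/mfs

cat >> /etc/samba/smb.conf <<'EOF'
[mfs]
    path = /mnt/mfs
    read only = no
    browseable = yes
EOF

testparm -s              # sanity-check the resulting configuration
systemctl restart smb    # service name varies by distro (smb, smbd, samba)

The gateway then becomes the single data path, so its NIC and FUSE throughput cap what the office clients see.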
From: Aleksander W. <ale...@mo...> - 2015-08-25 10:33:44
|
Hi.

Please check your firewall configuration.
The MooseFS client needs to talk not only to the master but also to the chunkservers.

Chunkservers listen on port 9422 for client connections.
All ports used by MooseFS are in the range 9419-9422.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 25.08.2015 12:25, R. C. wrote:
> Hi All,
>
> here is the setup:
> moosefs-2.0.73 source installation on CentOS 6.7 x64
> The master can mount the MFS tree and share it through NFS or SAMBA. Very good performance.
>
> The client can mount:
>
> [root@kvm1 /]# mfsmount -H san -S / /mnt/san/
> mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
>
> The client can create subfolders, delete subfolders, and touch new empty files, but cannot copy files of any size. cp and even Midnight Commander start the copy but do not copy a single byte; they lock forever. The file is created empty on MFS.
>
> Client configuration:
> CentOS 7
> moosefs-2.0.73 installed from source
> fuse-2.9.2-5.el7.x86_64
> fuse-devel-2.9.2-5.el7.x86_64
> kernel 3.10.0-229.11.1.el7.x86_64
>
> DNS resolves names correctly.
>
> Am I missing something?
>
> Thanks
>
> Bye
>
> Raf
|
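A quick way to verify this from the client: metadata operations (mkdir, touch) only need the master on port 9421, while actual reads and writes need every chunkserver on port 9422, which is exactly why empty files appear but copies hang. A hedged sketch; the chunkserver names are placeholders and the firewalld commands assume the block is a host firewall on the chunkservers (CentOS 7 style).

# from the client: check TCP reachability of the master (9421) and each chunkserver (9422)
for hp in mfsmaster:9421 chunk1:9422 chunk2:9422 chunk3:9422; do
    h=${hp%:*}; p=${hp#*:}
    timeout 3 bash -c "</dev/tcp/$h/$p" && echo "$h:$p open" || echo "$h:$p blocked"
done

# if a host firewall on the chunkservers is the blocker, allow the MooseFS port range
firewall-cmd --permanent --add-port=9419-9422/tcp
firewall-cmd --reload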
From: R. C. <mil...@gm...> - 2015-08-25 10:25:07
|
Hi All,

here is the setup:
moosefs-2.0.73 source installation on CentOS 6.7 x64
The master can mount the MFS tree and share it through NFS or SAMBA. Very good performance.

The client can mount:

[root@kvm1 /]# mfsmount -H san -S / /mnt/san/
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root

The client can create subfolders, delete subfolders, and touch new empty files, but cannot copy files of any size. cp and even Midnight Commander start the copy but do not copy a single byte; they lock forever. The file is created empty on MFS.

Client configuration:
CentOS 7
moosefs-2.0.73 installed from source
fuse-2.9.2-5.el7.x86_64
fuse-devel-2.9.2-5.el7.x86_64
kernel 3.10.0-229.11.1.el7.x86_64

DNS resolves names correctly.

Am I missing something?

Thanks

Bye

Raf
|
From: Michael T. <mic...@ho...> - 2015-08-25 03:42:47
|
I will be upgrading to 2.x soon-ish, including "dist-upgrade"-ing all the MFS servers (master, chunkservers, and metaloggers) to Ubuntu 14.04 and, quite possibly, installing a v4.1.x kernel from the kernel-ppa mainline archive (to take advantage of filesystem performance and stability improvements).

I currently activated a third chunkserver, made all goals=3, and it is about 60% done replicating. Hopefully the replication finishes sometime this week. Once it is done replicating, I will take it offline, plus one of the metaloggers, and attempt to upgrade them to the latest stable version. If everything goes well (including validating that there is no corruption of any file), I will then schedule the actual upgrade.

The Zimbra VM, though, will remain on Ubuntu 8.04. But there is a separate plan to eventually upgrade it to the latest stable version.

Best regards,

--- mike t.

On Mon, 24 Aug 2015 09:14:55 +0200, ale...@mo... wrote (Re: [MooseFS-Users] Network overhead and best method of sharing):
> Hi
>
> Thank you for this information.
> Do you consider updating all these systems?
> MooseFS version 1.6 and Ubuntu 8 are no longer supported.
> A lot of improvements were made since MFS version 1.6 and Ubuntu 8.04 LTS.
>
> If you are interested in an HA solution for MooseFS or some support, please visit our web site at http://moosefs.com
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
>
> On 22.08.2015 05:37, Michael Tinsay wrote:
>> We have a relatively old Zimbra Mail system up and running on a KVM-based VM running Ubuntu 8.04. Zimbra recommends not using SMB/CIFS to network-mount its message store -- actually they recommend using local disks, but if you really have to use a network mount, they say use NFS.
>>
>> So what I have currently: the Zimbra VM mounts its message store via NFS; the NFS is exported by the host machine (running Ubuntu 12.04); and the exported directory is an mfsmount-ed folder. Initially we experienced a few problems (missing mails), but they went away when we exported the folder with the *sync* parameter.
>>
>> So far so good. Our email system is used by close to 500 users and its message store will hit 2TB soon (25% of our available MFS space).
>>
>> Specs:
>> MooseFS version: 1.6.26
>> Host Machine: Ubuntu 12.04, nfs-kernel-server 1:1.2.5-3ubuntu3.2
>> VM Guest: Ubuntu 8.04, nfs-common 1:1.1.2-2ubuntu2.4
>>
>> --- mike t.
|
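For the goal change described above, the client-side tools can both raise the goal recursively and help watch replication progress. A minimal sketch; /mnt/mfs and the example path are placeholders, and the tool names are those shipped with the 1.6/2.0 client packages.

# raise the goal of everything under the mount point to 3 copies
mfssetgoal -r 3 /mnt/mfs

# spot-check a file: its goal and how many valid copies each chunk currently has
mfsgetgoal /mnt/mfs/path/to/file
mfsfileinfo /mnt/mfs/path/to/file

# rough view of a whole tree (sizes, chunk counts)
mfsdirinfo /mnt/mfs

The CGI monitor's chunk matrix is probably the easiest place to watch how many chunks are still below the requested goal.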
From: Aleksander W. <ale...@mo...> - 2015-08-21 12:04:25
|
The question is: what kind of software are you using as the NFS server?

We have had a lot of bad experience with performance and stability when combining nfs-kernel-server and mfsmount. It's rather not a good idea to use nfs-kernel-server on top of mfsmount in a production environment.

Can you tell us something more about your configuration? What NFS software are you using?

I would like to add that MooseFS has a native client for the most popular operating systems:
- Ubuntu 10/12/14
- Debian 5/6/7/8
- RHEL/CentOS versions 5/6/7
- OpenSUSE 12
- FreeBSD 9.3/10
- MacOS X 10.9/10.10
- Raspberry Pi 2

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 21.08.2015 05:18, Michael Tinsay wrote:
> Why not NFS? Are there any incompatibilities between the MFS client and NFS?
>
>> On Wed, 19 Aug 2015 08:17:26 +0200, Aleksander Wieliczko wrote:
>>
>> Hi Raffaello
>>
>> If you use an OS which supports the MooseFS client, try not to use any re-share method. You will always add another layer of communication during re-share. Direct use of mfsmount will always be better.
>>
>> But if you would like to use MooseFS on Windows or as storage for virtualisation, try to use a SAMBA/iSCSI target.
>> We are not advising the use of NFS as a re-share method.
>>
>> Best Regards
>> Aleksander Wieliczko
>> Technical Support Engineer
>> MooseFS.com
|
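To make the "direct use of mfsmount" advice concrete, here is a minimal sketch of a native mount on a supported client; the master name, subfolder and mount point are placeholders, and the fstab option names may differ slightly between client versions.

# one-off mount: export subfolder / from the master to /mnt/mfs
mkdir -p /mnt/mfs
mfsmount -H mfsmaster -S / /mnt/mfs

# persistent mount: an /etc/fstab line of roughly this shape
# mfsmount  /mnt/mfs  fuse  defaults,mfsmaster=mfsmaster,mfssubfolder=/  0  0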