From: Piotr R. K. <pio...@mo...> - 2015-09-24 09:35:11
|
> Hi,
>
> I have a question about high availability for master servers in MooseFS Pro.
>
> If two masters replicate changes in metadata, is this replication
> synchronous or asynchronous?

Asynchronous. Synchronous replication is on our roadmap. Of course, synchronous replication means that operations on metadata will be much slower, so we don't want to change the behaviour globally. Instead, we plan to add a per-directory flag indicating that all changes inside that directory should be replicated synchronously.

> I mean the situation that a user created, deleted or renamed a file
> or appended data to a file just before the primary master for this
> operation goes down. The operation was reported to the user as finished.
>
> Does this operation always survive master failure and is visible
> after another master takes over the responsibility of the failed master?
> Can latest filesystem operations be lost?

Theoretically, yes. We write the changes to the sockets connected to the other masters first, and only then send the positive answer to the client, but the kernel can of course change the order: it may send the packet to the client first, and the machine can then go down before the changes reach the other masters.

> How it is in case of chunk server failure? Can latest write
> operations be lost in this type of failure?

Write operations are repeated in this case (also in case of chunkserver malfunction). The only problem you may encounter in this scenario is the size of the file. At the end of a write session the client sends the new size of the file, like any other metadata change; if the master goes down just after sending the proper answer to the client, but before sending this change to the other masters, your file may be smaller than expected. Of course this is a theoretical scenario - we have never witnessed it in practice.

> Thanks

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>

> On 17 Sep 2015, at 2:27 pm, Arek Wojna <are...@in...> wrote:
>
> Hi,
>
> I have a question about high availability for master servers in MooseFS Pro.
>
> If two masters replicate changes in metadata, is this replication
> synchronous or asynchronous?
> I mean the situation that a user created, deleted or renamed a file or
> appended data to a file just before the primary master for this
> operation goes down. The operation was reported to the user as finished.
>
> Does this operation always survive master failure and is visible after
> another master takes over the responsibility of the failed master?
> Can latest filesystem operations be lost?
>
> How it is in case of chunk server failure? Can latest write operations
> be lost in this type of failure?
>
> Thanks
> Arek |
From: Aleksander W. <ale...@mo...> - 2015-09-23 06:59:23
|
Hello Joe,

First of all, thank you for pointing the problem out to us.

The reason why you don't get high performance is FreeBSD's design. We made some investigations and it appears that the block size in all I/O is only 4 kB. All operating systems use a cache during I/O. The standard size of a cache block is 4 kB (the standard page size), so transfers via the cache are done using the same block size. Some operating systems (for example Linux) have algorithms that join these small blocks into larger groups and therefore increase I/O performance, but that's not the case in FreeBSD. So in FreeBSD, even when you set the block size to 1 MB, inside the kernel all operations are split into 4 kB blocks (because of the cache).

Our developer noticed that during DIRECT I/O operations (without using the cache), all I/O is split into 128 kB blocks (the maximum allowable transfer size sent by our mfsmount to the kernel). This increases performance significantly. In our test environment we reached 900 MB/s on a 10 Gb network. Be aware that in this case the cache is not used at all.

To sum it all up: *FreeBSD can use a block size larger than 4 kB, but only without the cache.* Mainly for FreeBSD we added a special cache option for the MooseFS client called DIRECT. This option is available in the MooseFS client since version 3.0.49. To disable the local cache and enable DIRECT communication, please use this option during mount:

mfsmount -H mfsmaster.your.domain.com -o mfscachemode=DIRECT /mount/point

More details about the current version of MooseFS can be found on the http://moosefs.com/download.html page. Please test this option - we are waiting for your feedback.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>

On 10.09.2015 14:56, Joseph Love wrote:
> Thanks. I look forward to hearing what you’ve uncovered.
>
> -Joe
>
>> On Sep 10, 2015, at 6:53 AM, Aleksander Wieliczko
>> <ale...@mo... <mailto:ale...@mo...>> wrote:
>>
>> Hi.
>>
>> Yes.
>> We are in progress of resolving this problem.
>> We found few potential reasons of this behaviour, but we need some
>> more time to find the best solution.
>>
>> If we solve this problem, we respond as quick as it is possible with
>> some more details.
>>
>> Best regards
>> Aleksander Wieliczko
>> Technical Support Engineer
>> MooseFS.com <x-msg://11/moosefs.com>
>>
>> On 09.09.2015 22:23, Joseph Love wrote:
>>> Hi Aleksander,
>>>
>>> Will there be time for some investigation into the performance of
>>> the FreeBSD client coming up?
>>>
>>> Thanks,
>>> -Joe
>>>
>>>> On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko
>>>> <ale...@mo... <mailto:ale...@mo...>> wrote:
>>>>
>>>> Hi.
>>>> Thank you for this information.
>>>>
>>>> Can you do one more simple test: what network bandwidth can you
>>>> achieve between two FreeBSD machines?
>>>>
>>>> I mean something like:
>>>> FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null
>>>> (simple nc and dd tools will tell you a lot.)
>>>>
>>>> We know that FUSE on FreeBSD systems had some problems, but we need
>>>> to take a close look at this issue.
>>>> We will try to repeat this scenario in our test environment and
>>>> return to you after 08.09.2015, because we are in progress of
>>>> different tests till this day.
>>>>
>>>> I would like to add one more aspect.
>>>> The mfsmaster application is single-threaded, so we compared your CPU with ours.
>>>> These are the results:
>>>> *(source: cpubenchmark.net <http://cpubenchmark.net/>)*
>>>>
>>>> CPU Xeon E5-1620 v2 @ 3.70GHz
>>>> Average points: 9508
>>>> *Single Thread points: 1920*
>>>>
>>>> CPU Intel Atom C2758 @ 2.40GHz
>>>> Average points: 3620
>>>> *Single Thread points: 520*
>>>>
>>>> Best regards
>>>> Aleksander Wieliczko
>>>> Technical Support Engineer
>>>> MooseFS.com <x-msg://9/moosefs.com>
>>>>
>>>
>>
> |
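A quick way to compare the two cache modes for yourself is to mount the same filesystem twice, once per mode, and run a sequential transfer through each mount. The hostname, mount points and transfer size below are only placeholders - adjust them to your own setup:

# mount the same filesystem twice, once per cache mode
mfsmount -H mfsmaster.your.domain.com /mnt/mfs-cached
mfsmount -H mfsmaster.your.domain.com -o mfscachemode=DIRECT /mnt/mfs-direct

# sequential 1 GiB write through each mount
dd if=/dev/zero of=/mnt/mfs-cached/testfile bs=1M count=1024
dd if=/dev/zero of=/mnt/mfs-direct/testfile bs=1M count=1024

# sequential read back through the DIRECT mount
dd if=/mnt/mfs-direct/testfile of=/dev/null bs=1M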
From: Krzysztof K. <krz...@mo...> - 2015-09-22 07:42:02
|
Hi,

There is no simple answer to this question, since in MooseFS, to gain the best performance, we use different kinds of caching mechanisms on different levels:

- read-ahead cache (on clients and chunkservers),
- write-back cache (on clients and chunkservers),
- metadata cache on clients.

MooseFS does enforce a consistency model for synchronisation of writes to all copies of a single chunk - data written to all copies of the same chunk by different clients is always consistent. MooseFS, however, does not enforce any consistency model on the file level by default, due to the number of caches in the system, so let me discuss two typical model cases.

MapReduce Model

In the typical MapReduce computational model, where reads are scheduled after completed writes to new files (mapper results are read by reduce jobs), there is a guarantee that a read performed by any client in the system will get a consistent view of the last write operation performed by any other client.

Write operation (creating a new file - i.e. the mapper job):

fd = open("file.data",O_WRONLY|O_CREAT|O_EXCL);
write(fd,...);
fsync(fd);
close(fd);

Read operation (after successful close in the write operation - i.e. the reduce job on mapper results):

fd = open("file.data",O_RDONLY);
read(fd,...);
close(fd);

General Model

In general, the only schema of operations in MooseFS that guarantees synchronisation between read and write workers should be coded as follows (written as C pseudo-code):

Synchronised Write:

fdl = open("file.lock",O_RDWR|O_CREAT);
flock(fdl,LOCK_EX);
fd = open("file.data",O_WRONLY);
write(fd,.....);
fsync(fd);
close(fd);
flock(fdl,LOCK_UN); /* formally should be there, but in practice close should remove the lock */
close(fdl);

Synchronised Read:

fdl = open("file.lock",O_RDWR|O_CREAT);
flock(fdl,LOCK_SH);
fd = open("file.data",O_RDONLY);
read(fd,.....);
close(fd);
flock(fdl,LOCK_UN); /* formally */
close(fdl);

To achieve "full" synchronisation you must use the MooseFS locks mechanism (available in MooseFS 3.0+) on external objects - in our example "file.lock". You can think of these external objects as r/w locks in the operating system used for thread synchronisation in a typical consumer/producer scenario. In the above example, for clarity, we used the flock() mechanism (which is more common on BSD systems and requires FUSE 2.9+), but in your application you can use any POSIX-compliant lock mechanism that can be used with FUSE 2.6+.

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440

> On 02 Sep 2015, at 10:08, Valerio Schiavoni <val...@gm...> wrote:
>
> Hello Aleksander,
> for now I need simply a short answer, that is to understand if MooseFS guarantees eventual, weak, causal, strong consistency (or other flavours).
>
> The how's and gory details are also relevant and useful but I can quietly wait for the longer answer.
>
> Thanks a lot,
> --
> Valerio
>
> On Wed, Sep 2, 2015 at 7:53 AM, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote:
> Hi.
> We made so many changes since 2011 that you need to give us some time to generate an answer to your question.
> We will return to this topic as fast as possible.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com <http://moosefs.com/>
>
> On 31.08.2015 19:02, Valerio Schiavoni wrote:
>> Hello,
>> what is the consistency model of MooseFS?
>> I've found a 2011 thread [1] on this ML that seems to have remained unanswered, and since we need to know the same details...
>>
>> Thanks.
>>
>> 1 - http://sourceforge.net/p/moosefs/mailman/message/26888539/
>>
>> --
>> Valerio |
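As a concrete illustration, the "Synchronised Write" pseudo-code above expands to roughly the following runnable C program. The file paths are placeholders and should point at files under your MooseFS mount point; as noted above, flock() over MooseFS needs MooseFS 3.0+ and FUSE 2.9+:

/* Runnable sketch of the "Synchronised Write" schema above.
 * Paths are placeholders - adjust to files under your MooseFS mount. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    const char *data = "hello from a synchronised writer\n";

    /* Take the exclusive lock on the external lock object first. */
    int fdl = open("/mnt/mfs/file.lock", O_RDWR | O_CREAT, 0644);
    if (fdl < 0) { perror("open lock"); return 1; }
    if (flock(fdl, LOCK_EX) < 0) { perror("flock"); return 1; }

    /* Write and fsync the data file while the lock is held. */
    int fd = open("/mnt/mfs/file.data", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open data"); return 1; }
    if (write(fd, data, strlen(data)) < 0) { perror("write"); return 1; }
    if (fsync(fd) < 0) { perror("fsync"); return 1; }
    close(fd);

    /* Release the lock only after the data is durable. */
    flock(fdl, LOCK_UN);
    close(fdl);
    return 0;
}

A matching reader would take LOCK_SH on the same lock file before opening file.data for reading, mirroring the "Synchronised Read" schema.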
From: Alexander A. <akh...@ri...> - 2015-09-21 08:36:46
|
Hi All!

Has anybody run Samba's ping_pong test? It can be used to validate a cluster filesystem's locking facility before building a Samba HA cluster.

wbr
Alexander

======================================================

Aleksander Wieliczko <ale...@mo...> writes:

> Also your kernel part of fuse has to support this mechanism. We mainly
> tested it on Linux with kernel 3.13 (Ubuntu 14.04).
> As I remember, there is no support for locking in other operating systems.

I just ran some tests, using this tool:

ftp://ftp.software.ibm.com/software/nfs/backup/beta/testlock.c

Both flock and lockf style locking work exactly as they should between two clients running MooseFS 3.0.22 on NetBSD-current and Ubuntu 15.04, including correctly handling shared and exclusive locking with flock. Nice! :)

-tih
--
Popularity is the hallmark of mediocrity. --Niles Crane, "Frasier" |
From: Jakub Kruszona-Z. <jak...@ge...> - 2015-09-21 05:21:09
|
On 19 Sep, 2015, at 3:02, pen...@ic... wrote:

> That's great.
> But master is single thread, I think this is one of a limitation to master.

Not exactly. Very often single-threaded apps are as fast as multithreaded ones (sometimes even faster - and always much safer and much less buggy). In multithreaded apps the CPU spends a lot of time on synchronization, and the whole architecture of the application is usually much worse.

Davies is right - to handle thousands of clients, the most important improvement is to change poll to epoll/kqueue, and it is on our roadmap. This should speed up the master significantly in the case of a huge number of clients/chunkservers.

> pen...@ic...
>
> From: Davies Liu
> Date: 2015-09-19 01:21
> To: Aleksander Wieliczko
> CC: pen...@ic...; moosefs-users
> Subject: Re: [MooseFS-Users] How many clients dose MFS max support?
>
> I think MFS can't scale well if there are more than 2k clients. We saw
> the CPU usage of master went to almost 100% with about 1k clients and
> not high traffic.
>
> In order to scale to thousands of clients, MFS should use
> epoll/kqueue. I have a branch that use epoll instead of poll:
> https://github.com/davies/moosefs/commits/douban
>
> On Fri, Sep 18, 2015 at 2:31 AM, Aleksander Wieliczko
> <ale...@mo...> wrote:
> > Hi
> > All depends.
> > Question is how many clients you want to have and how many operations they
> > will perform on one master?
> >
> > We have instances with about 300 clients in production environment.
> > In our software we don't have any client connection limits, but a number of
> > parallel connections close to 10 000 creates a different level of problem.
> >
> > Best regards
> > Aleksander Wieliczko
> > Technical Support Engineer
> > MooseFS.com
> >
> > On 18.09.2015 09:20, pen...@ic... wrote:
> >
> > hi all:
> >
> > How many clients does Moosefs max support?
> >
> > Thanks for your help.
> >
> > best regards.
> > ________________________________
> > pen...@ic...
>
> --
> - Davies

--
Pozdrawiam,
Jakub Kruszona-Zawadzki
- - - - - - - - - - - - - - - -
Segmentation fault (core dumped)
Tel. +48 602 212 039 |
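To make the poll vs. epoll point concrete, here is a generic (not MooseFS) sketch of an epoll-based accept/read loop on Linux. With poll(), the whole descriptor array is passed to and scanned by the kernel on every call, so cost grows with the total number of clients; with epoll, the kernel keeps the interest set and each wakeup costs only as much as the number of descriptors that are actually ready. The function assumes a non-blocking listening socket listen_fd created elsewhere:

/* Generic epoll event loop sketch (Linux only). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void event_loop(int listen_fd) {
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); exit(1); }

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
        perror("epoll_ctl"); exit(1);
    }

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Unlike poll(), we do not pass the whole fd list on every call:
         * the kernel returns only the descriptors that are ready. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, 1000 /* ms */);
        if (n < 0) { perror("epoll_wait"); break; }

        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* new client connection: register it for read events */
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0) {
                    struct epoll_event cev;
                    cev.events = EPOLLIN;
                    cev.data.fd = client;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
                }
            } else {
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {
                    /* EOF or error: deregister and close */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
                /* otherwise: handle the request bytes in buf */
            }
        }
    }
    close(epfd);
}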
From: William K. <wk...@bn...> - 2015-09-19 03:40:11
|
So I imagine that mfsmakesnapshot is really just cloning the metadata and then fills in after the fact when the source changes. Does mfsappendchunks work the same way? (with the ability to merge multiple files) Is there an advantage to one method or the other (especially in regards to a VM image)? -wk |
From: William K. <wk...@bn...> - 2015-09-19 03:29:44
|
So after years of using 1.6.x we finally got a chance to try out version 2.x.

It went very well, and we love the new mfscli utility (we were using lynx on the cgi before) as well as the other improvements.

The only issue we had was that when we tried to do the yum install of moosefs-client on a CentOS 5 box, it was unhappy that we had the wrong fuse (2.7 vs. 2.8) and refused to install due to deps on fuse and libpcap. Since we previously always just compiled off the source tarball, we simply did that and moved on, so it was a minor issue, but we were a little surprised. I'm hoping that the older fuse isn't a problem, but everything seems fine.

Client installs on RH6/RH7 went smoothly, and we really like the ability to just do the yum install on a minimal chunkserver rather than deal with the minor compilation hassle.

All in all, well done. Now the guys are anxious to get 2.x on the other clusters even though they are all working fine. I am always pleasantly surprised with MooseFS. We run a number of other DFS systems (such as Gluster, DRBD, etc.) and the techs all prefer MooseFS because it's so straightforward and just works out of the box. |
From: <pen...@ic...> - 2015-09-19 01:04:03
|
That's great. But master is single thread, I think this is one of a limitation to master. pen...@ic... From: Davies Liu Date: 2015-09-19 01:21 To: Aleksander Wieliczko CC: pen...@ic...; moosefs-users Subject: Re: [MooseFS-Users] How many clients dose MFS max support? I think MFS can't scale well if there are more than 2k clients. We saw the CPU usage of master went to almost 100% with about 1k clients and not high traffic. In order to scale to thousands of clients, MFS should use epoll/kqueue. I have a branch that use epoll instead of poll: https://github.com/davies/moosefs/commits/douban On Fri, Sep 18, 2015 at 2:31 AM, Aleksander Wieliczko <ale...@mo...> wrote: > Hi > All depends. > Question is how many clients you want to have and how many operations they > will perform on one master? > > We have instances with about 300 client in production environment. > In our software we don't have any client connection limits, but number of > parallel connections close to 10 000 creates different level of problem. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com > > On 18.09.2015 09:20, pen...@ic... wrote: > > hi all: > > How many clients does Moosefs max support? > > Thanks for your help. > > best regards. > ________________________________ > pen...@ic... > > > ------------------------------------------------------------------------------ > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > ------------------------------------------------------------------------------ > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- - Davies |
From: Davies L. <dav...@gm...> - 2015-09-18 17:21:29
|
I think MFS can't scale well if there are more than 2k clients. We saw the CPU usage of master went to almost 100% with about 1k clients and not high traffic. In order to scale to thousands of clients, MFS should use epoll/kqueue. I have a branch that use epoll instead of poll: https://github.com/davies/moosefs/commits/douban On Fri, Sep 18, 2015 at 2:31 AM, Aleksander Wieliczko <ale...@mo...> wrote: > Hi > All depends. > Question is how many clients you want to have and how many operations they > will perform on one master? > > We have instances with about 300 client in production environment. > In our software we don't have any client connection limits, but number of > parallel connections close to 10 000 creates different level of problem. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com > > On 18.09.2015 09:20, pen...@ic... wrote: > > hi all: > > How many clients does Moosefs max support? > > Thanks for your help. > > best regards. > ________________________________ > pen...@ic... > > > ------------------------------------------------------------------------------ > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > ------------------------------------------------------------------------------ > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- - Davies |
From: Warren M. <wa...@an...> - 2015-09-18 16:16:29
|
There are a number of mitigation strategies to reduce the number of direct connections to the MFS Master:

- have MFS power a file server (ownCloud, CIFS, NFS, etc)
- have MFS power a web/ftp server
- etc

(see the sketch after this message for one example)

Warren Myers
http://antipaucity.com
https://www.digitalocean.com/?refcode=d197a961987a

Date: Fri, 18 Sep 2015 17:55:27 +0800
From: pen...@ic...
To: ale...@mo...; moo...@li...
Subject: Re: [MooseFS-Users] How many clients dose MFS max support?

Hi Aleksander,

Thanks for your answer. In our production environment, the number of clients is close to 100 * 1000. We have no way to test the performance of that many parallel connections before going online.

Best regards.
pen...@ic...

From: Aleksander Wieliczko
Date: 2015-09-18 17:31
To: pen...@ic...; moosefs-users
Subject: Re: [MooseFS-Users] How many clients dose MFS max support?

Hi

All depends. Question is how many clients you want to have and how many operations they will perform on one master?

We have instances with about 300 clients in production environment. In our software we don't have any client connection limits, but a number of parallel connections close to 10 000 creates a different level of problem.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 18.09.2015 09:20, pen...@ic... wrote:

hi all:

How many clients does Moosefs max support?

Thanks for your help.

best regards.
pen...@ic... |
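One common shape for the first strategy is a gateway host that mounts MooseFS once and re-exports it over NFS, so only the gateway holds a connection to the master. The hostname, network range and options below are illustrative only, and re-exporting a FUSE filesystem over NFS usually needs an explicit fsid:

# on the gateway host: mount MooseFS, then export the mount point over NFS
mfsmount -H mfsmaster.your.domain.com /mnt/mfs

# /etc/exports entry (illustrative):
/mnt/mfs  192.168.0.0/24(rw,sync,fsid=100,no_subtree_check)

# reload the export table
exportfs -ra

The trade-off is that the gateway becomes a single point of contention and failure, so this fits read-mostly workloads better than ones that need every client writing at full speed.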
From: <pen...@ic...> - 2015-09-18 09:57:11
|
Hi Aleksander Thanks for your answer. In our production environment, clients close to 100 * 1000. we have no idea to test the performance of parallel connections before online. Best regards. pen...@ic... From: Aleksander Wieliczko Date: 2015-09-18 17:31 To: pen...@ic...; moosefs-users Subject: Re: [MooseFS-Users] How many clients dose MFS max support? Hi All depends. Question is how many clients you want to have and how many operations they will perform on one master? We have instances with about 300 client in production environment. In our software we don't have any client connection limits, but number of parallel connections close to 10 000 creates different level of problem. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 18.09.2015 09:20, pen...@ic... wrote: hi all: How many clients does Moosefs max support? Thanks for your help. best regards. pen...@ic... ------------------------------------------------------------------------------ _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Aleksander W. <ale...@mo...> - 2015-09-18 09:31:00
|
Hi All depends. Question is how many clients you want to have and how many operations they will perform on one master? We have instances with about 300 client in production environment. In our software we don't have any client connection limits, but number of parallel connections close to 10 000 creates different level of problem. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 18.09.2015 09:20, pen...@ic... wrote: > hi all: > > How many clients does Moosefs max support? > > Thanks for your help. > > best regards. > ------------------------------------------------------------------------ > pen...@ic... > > > ------------------------------------------------------------------------------ > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: <pen...@ic...> - 2015-09-18 07:22:36
|
hi all: How many clients does Moosefs max support? Thanks for your help. best regards. pen...@ic... |
From: Arek W. <are...@in...> - 2015-09-17 12:39:52
|
Hi, I have a question about high availability for master servers in MooseFS Pro. If two masters replicate changes in metadata, is this replication synchronous or asynchronous? I mean the situation that a user created, deleted or renamed a file or appended data to a file just before the primary master for this operation goes down. The operation was reported to the user as finished. Does this operation always survive master failure and is visible after another master takes over the responsibility of the failed master? Can latest filesystem operations be lost? How it is in case of chunk server failure? Can latest write operations be lost in this type of failure? Thanks Arek |
From: Jakub Kruszona-Z. <jak...@ge...> - 2015-09-17 06:28:03
|
Do you have an entry like this in your syslog: "fork error (store data in foreground - it will block master for a while)"?

If yes, then this is the source of the problem with your master.

Linux systems use several different algorithms for estimating how much memory a single process needs when it is created. One of these algorithms assumes that if we fork a process, it will need exactly the same amount of memory as its parent. With a process taking 24GB of memory and a total amount of 40GB (32GB physical plus 8GB virtual), this algorithm makes the fork always unsuccessful. But in reality, the fork command does not copy the entire memory; only the modified fragments are copied as needed. Since the child process in MFS master only reads this memory and dumps it into a file, it is safe to assume not much of the memory content will change. Therefore such a "careful" estimating algorithm is not needed.

The solution is to switch the estimating algorithm the system uses. It can be done one-time by a root command:

echo 1 > /proc/sys/vm/overcommit_memory

To switch it permanently, so it stays this way even after the system is restarted, you need to put the following line into your "/etc/sysctl.conf" file:

vm.overcommit_memory=1

On 16 Sep, 2015, at 17:16, bil...@16... wrote:

> Hi,
>
> I have MFS with a master server with 64GB RAM, and 20 chunkservers, about 750TB of space.
> But there are almost 180,000,000 files in MFS, and the RAM usage is about 80%.
>
> Every hour, master server CPU usage goes up to 99%, and all the clients can't connect to MFS.
> I also get the messages below in /var/log/message:
>
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060956C4_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095862_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095880_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060958BB_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095B97_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095BA9_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFBB0_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFF7E_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060C00B4_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CB78A_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CBB4A_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC748_00000001), so create it for future deletion
> Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC8A3_00000001), so create it for future deletion

This looks like a result of the problem with forking - it shouldn't be dangerous. Just to be sure that this is not a result of some unknown bug, we will try to reproduce this in our testing environment.

> How can I solve this problem?
>
> bil...@16...

--
Regards,
Jakub Kruszona-Zawadzki
- - - - - - - - - - - - - - - -
Segmentation fault (core dumped)
Phone: +48 602 212 039 |
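For reference, sysctl -w is the usual equivalent of writing to /proc on Linux, and the current value can be checked at any time:

# apply immediately (either form works):
sysctl -w vm.overcommit_memory=1
echo 1 > /proc/sys/vm/overcommit_memory

# make it persistent across reboots:
echo "vm.overcommit_memory=1" >> /etc/sysctl.conf

# verify the current value:
cat /proc/sys/vm/overcommit_memory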
From: <bil...@16...> - 2015-09-17 05:22:24
|
thanks for your answers. both MFS master and metalogger version is 3.0.34, about the mounts, some are 3.0.34, other is 1.6.27, any other infomations should I need to provide? thanks! bil...@16... From: Piotr Robert Konopelko Date: 2015-09-16 23:29 To: billycatcat CC: moosefs-users Subject: Re: [MooseFS-Users] MFS hourly slow down. Hello, which MooseFS version do you use? Please specify MooseFS version on Master, Metalogger(s), Chunkservers, and Mounts. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com On 16 Sep 2015, at 5:16 pm, bil...@16... wrote: Hi, I have MFS with a master server with 64GB RAM, and 20 chunkserver , about 750TB space. But there are almost 180,000,000 files in MFS, the RAM usage is about 80%. in every hour, master server cpu usage up to 99%, and all the client can't connent with MFS. I also get the message below, in /var/log/message Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060956C4_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095862_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095880_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060958BB_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095B97_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095BA9_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFBB0_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFF7E_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060C00B4_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CB78A_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CBB4A_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC748_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC8A3_00000001), so create it for future deletion How can I solove this problem? bil...@16... ------------------------------------------------------------------------------ Monitor Your Dynamic Infrastructure at Any Scale With Datadog! Get real-time metrics from all of your servers, apps and tools in one place. SourceForge users - Click here to start your Free Trial of Datadog now! http://pubads.g.doubleclick.net/gampad/clk?id=241902991&iu=/4140_________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Markus K. <mar...@tu...> - 2015-09-16 19:14:36
|
On Wednesday 16 September 2015 23:16:20 bil...@16... wrote: > > Hi, > > I have MFS with a master server with 64GB RAM, and 20 chunkserver , about 750TB space. > But there are almost 180,000,000 files in MFS, the RAM usage is about 80%. > > in every hour, master server cpu usage up to 99%, and all the client can't connent with MFS. > I also get the message below, in /var/log/message > > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060956C4_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095862_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095880_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060958BB_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095B97_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095BA9_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFBB0_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFF7E_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060C00B4_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CB78A_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CBB4A_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC748_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC8A3_00000001), so create it for future deletion > > How can I solove this problem? In /etc/mfs/mfsmaster.cfg you can influence the behavior of the master when he saves the metadata which I guess has to to with that if I remember right: # whether to perform mlockall() to avoid swapping out mfsmaster process (default is 0, i.e. no) # LOCK_MEMORY = 0 You need enough virtual memory to hold 2 copies of the metadata so I guess you need more swap. |
From: Piotr R. K. <pio...@mo...> - 2015-09-16 15:30:00
|
Hello, which MooseFS version do you use? Please specify MooseFS version on Master, Metalogger(s), Chunkservers, and Mounts. Best regards, -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 16 Sep 2015, at 5:16 pm, bil...@16... wrote: > > > Hi, > > I have MFS with a master server with 64GB RAM, and 20 chunkserver , about 750TB space. > But there are almost 180,000,000 files in MFS, the RAM usage is about 80%. > > in every hour, master server cpu usage up to 99%, and all the client can't connent with MFS. > I also get the message below, in /var/log/message > > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060956C4_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095862_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095880_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060958BB_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095B97_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095BA9_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFBB0_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFF7E_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060C00B4_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CB78A_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CBB4A_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC748_00000001), so create it for future deletion > Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC8A3_00000001), so create it for future deletion > > How can I solove this problem? > > bil...@16... <mailto:bil...@16...>------------------------------------------------------------------------------ > Monitor Your Dynamic Infrastructure at Any Scale With Datadog! > Get real-time metrics from all of your servers, apps and tools > in one place. > SourceForge users - Click here to start your Free Trial of Datadog now! > http://pubads.g.doubleclick.net/gampad/clk?id=241902991&iu=/4140_________________________________________ <http://pubads.g.doubleclick.net/gampad/clk?id=241902991&iu=/4140_________________________________________> > moosefs-users mailing list > moo...@li... <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> |
From: <bil...@16...> - 2015-09-16 15:15:34
|
Hi, I have MFS with a master server with 64GB RAM, and 20 chunkserver , about 750TB space. But there are almost 180,000,000 files in MFS, the RAM usage is about 80%. in every hour, master server cpu usage up to 99%, and all the client can't connent with MFS. I also get the message below, in /var/log/message Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060956C4_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095862_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095880_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060958BB_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095B97_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (0000000006095BA9_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFBB0_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060BFF7E_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060C00B4_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CB78A_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CBB4A_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC748_00000001), so create it for future deletion Sep 16 19:08:59 mfsmaster01 mfsmaster[37325]: chunkserver has nonexistent chunk (00000000060CC8A3_00000001), so create it for future deletion How can I solove this problem? bil...@16... |
From: Warren M. <wa...@an...> - 2015-09-10 15:43:58
|
Thanks, all - that's just what I was hunting for! Warren Myers http://antipaucity.com https://www.digitalocean.com/?refcode=d197a961987a Subject: Re: [MooseFS-Users] Resiliency to temporary loss of chunk server From: krz...@mo... Date: Thu, 10 Sep 2015 08:40:32 +0200 CC: moo...@li... To: wa...@an... Hi, Under normal conditions chunkserver disconnection results in immediate start of the replication process for chunks stored on that chunkserver, but you can control this behaviour using so called maintenance mode (mfscli command line tool or mfscgi interface can used to set this mode). With this mode set for the chunkserver the replication process does not start while doing maintenance or migration. On 08 Sep 2015, at 22:04, Warren Myers <wa...@an...> wrote:How resilient is MooseFS to temporarily losing a chunk server - say doing hardware maintenance, migration to a new datacenter, VM migration, etc? Warren Myers http://antipaucity.com https://www.digitalocean.com/?refcode=d197a961987a------------------------------------------------------------------------------_________________________________________moosefs-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users Best Regards, Krzysztof Kielak Director of Operations and Customer Support Mobile: +48 601 476 440 |
From: Joseph L. <jo...@ge...> - 2015-09-10 12:57:04
|
Thanks. I look forward to hearing what you’ve uncovered. -Joe > On Sep 10, 2015, at 6:53 AM, Aleksander Wieliczko <ale...@mo...> wrote: > > Hi. > > Yes. > We are in progress of resolving this problem. > We found few potential reasons of this behaviour, but we need some more time to find the best solution. > > If we solve this problem, we respond as quick as it is possible with some more details. > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <x-msg://11/moosefs.com> > > On 09.09.2015 22:23, Joseph Love wrote: >> Hi Aleksander, >> >> Will there be time for some investigation into the performance of the FreeBSD client coming up? >> >> Thanks, >> -Joe >> >>> On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> wrote: >>> >>> Hi. >>> Thank you for this information. >>> >>> Can you do one more simple test. >>> What network bandwidth can you achieve between two FreeBSD machines? >>> >>> I mean, something like: >>> FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null >>> (simple nc and dd tool will tell you a lot.) >>> >>> We know that FUSE on FreeBSD systems had some problems but we need to take close look to this issue. >>> We will try to repeat this scenario in our test environment and return to you after 08.09.2015, because we are in progress of different tests till this day. >>> >>> I would like to add one more aspect. >>> Mfsmaster application is single-thread so we compared your cpu with our. >>> This are the results: >>> (source: cpubenchmark.net <http://cpubenchmark.net/>) >>> >>> CPU Xeon E5-1620 v2 @ 3.70GHz >>> Avarage points: 9508 >>> Single Thread points: 1920 >>> >>> CPU Intel Atom C2758 @ 2.40GHz >>> Avarage points: 3620 >>> Single Thread points: 520 >>> >>> Best regards >>> Aleksander Wieliczko >>> Technical Support Engineer >>> MooseFS.com <x-msg://9/moosefs.com> >>> >> > |
From: Aleksander W. <ale...@mo...> - 2015-09-10 11:53:04
|
Hi. Yes. We are in progress of resolving this problem. We found few potential reasons of this behaviour, but we need some more time to find the best solution. If we solve this problem, we respond as quick as it is possible with some more details. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 09.09.2015 22:23, Joseph Love wrote: > Hi Aleksander, > > Will there be time for some investigation into the performance of the > FreeBSD client coming up? > > Thanks, > -Joe > >> On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko >> <ale...@mo... >> <mailto:ale...@mo...>> wrote: >> >> Hi. >> Thank you for this information. >> >> Can you do one more simple test. >> What network bandwidth can you achieve between two FreeBSD machines? >> >> I mean, something like: >> FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null >> (simple nc and dd tool will tell you a lot.) >> >> We know that FUSE on FreeBSD systems had some problems but we need to >> take close look to this issue. >> We will try to repeat this scenario in our test environment and >> return to you after 08.09.2015, because we are in progress of >> different tests till this day. >> >> I would like to add one more aspect. >> Mfsmaster application is single-thread so we compared your cpu with our. >> This are the results: >> *(source: cpubenchmark.net <http://cpubenchmark.net>)* >> >> CPU Xeon E5-1620 v2 @ 3.70GHz >> Avarage points: 9508 >> *Single Thread points**: 1920* >> >> CPU Intel Atom C2758 @ 2.40GHz >> Avarage points: 3620 >> *Single Thread points: 520* >> >> Best regards >> Aleksander Wieliczko >> Technical Support Engineer >> MooseFS.com <x-msg://9/moosefs.com> >> > |
From: Krzysztof K. <krz...@mo...> - 2015-09-10 06:40:40
|
Hi,

Under normal conditions a chunkserver disconnection results in an immediate start of the replication process for the chunks stored on that chunkserver, but you can control this behaviour using the so-called maintenance mode (the mfscli command line tool or the mfscgi interface can be used to set this mode). With this mode set for the chunkserver, the replication process does not start while you are doing maintenance or migration.

> On 08 Sep 2015, at 22:04, Warren Myers <wa...@an...> wrote:
>
> How resilient is MooseFS to temporarily losing a chunk server - say doing hardware maintenance, migration to a new datacenter, VM migration, etc?
>
> Warren Myers
> http://antipaucity.com <http://antipaucity.com/>
> https://www.digitalocean.com/?refcode=d197a961987a <https://www.digitalocean.com/?refcode=d197a961987a>

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440 |
From: Joseph L. <jo...@ge...> - 2015-09-09 20:23:23
|
Hi Aleksander, Will there be time for some investigation into the performance of the FreeBSD client coming up? Thanks, -Joe > On Aug 28, 2015, at 1:38 AM, Aleksander Wieliczko <ale...@mo...> wrote: > > Hi. > Thank you for this information. > > Can you do one more simple test. > What network bandwidth can you achieve between two FreeBSD machines? > > I mean, something like: > FreeBSD 1 /dev/zero > 10Gb NIC > FreeBSD 2 /dev/null > (simple nc and dd tool will tell you a lot.) > > We know that FUSE on FreeBSD systems had some problems but we need to take close look to this issue. > We will try to repeat this scenario in our test environment and return to you after 08.09.2015, because we are in progress of different tests till this day. > > I would like to add one more aspect. > Mfsmaster application is single-thread so we compared your cpu with our. > This are the results: > (source: cpubenchmark.net) > > CPU Xeon E5-1620 v2 @ 3.70GHz > Avarage points: 9508 > Single Thread points: 1920 > > CPU Intel Atom C2758 @ 2.40GHz > Avarage points: 3620 > Single Thread points: 520 > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <x-msg://9/moosefs.com> > |
From: Mihail V. <vuk...@gm...> - 2015-09-09 12:38:44
|
Hello,

We have the task of finding a highly available, distributed file system for a heavily loaded application. We want to use MooseFS and we're testing it now. So far it works pretty well.

What concerns me is that the environment needs to be scalable and we might want to add more than 4 chunk servers. We want to keep the high availability, and for this we plan to use goal=2 so that the writes are distributed and not on all servers. However, we'll have 2 data centers and we want to split the chunk servers: 2 in DC1 and 2 in DC2. We would like the system to be able to work even if one datacenter is not available.

We read that in version 3.0 there is the concept of labels, where we can mark the 2 servers in DC1 as "A" and the other two in DC2 as "B", and set the goal so that there is at least one copy in A and one in B. We want to have a supported, stable version, however - is there an equivalent for this in the stable version 2?

Thanks in advance. |