From: Boyko Y. <b.y...@ex...> - 2011-04-20 07:57:32
Hi,

Just want to add a question: as I'm not familiar with VMware's fault tolerance mode, is it possible to configure more than one 'backup' VM, e.g. to have two or more backup copies of the same VM?

Boyko

On Apr 20, 2011, at 10:28 AM, Michal Borychowski wrote:

> Hi!
>
> The solution is very smart. In what environment have you done the tests?
> What operating systems (for the physical machine and virtual machine)? How
> many files were there in MooseFS? How big was the metadata file? How many
> clients were connected? Was the whole MooseFS quite busy? What about the
> performance of the master running in virtual machines?
>
> Kind regards
> -Michal
>
> -----Original Message-----
> From: Krzysztof Janiszewski - ecenter sp. z o.o. [mailto:k.j...@ec...]
> Sent: Wednesday, April 20, 2011 9:06 AM
> To: 'Boyko Yordanov'
> Cc: moo...@li...
> Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices
>
> We have found a failover solution for the mfsmaster. Our mfsmaster is running on
> a VMware virtual machine, which is configured in Fault Tolerance mode. This
> means that there are in fact two running virtual machines on two physical
> hosts. The primary VM and the secondary VM perform the same CPU operations. When
> the primary VM fails, the secondary takes over all tasks and everything keeps
> running fine without interruption or data loss.
>
> Best regards
> Krzysztof Janiszewski
> ecenter sp. z o.o.
> Domains, hosting, mail, video :: http://www.ecenter.pl ::
>
> -----Original Message-----
> From: Boyko Yordanov [mailto:b.y...@ex...]
> Sent: Wednesday, April 20, 2011 8:48 AM
> To: Michal Borychowski
> Cc: moo...@li...
> Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices
>
> Hi,
>
> Great news indeed; however, isn't it more important to implement a reliable
> failover solution and fix the single point of failure?
>
> So far there is no real working failover solution, not even Thomas Hatch's
> one. It's just that mfsmetalogger can't be trusted.
>
> Boyko
>
> On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote:
>
>>> I've got great news - we are going to introduce big improvements in the
>>> upcoming 1.6.21 version, which also include "rack awareness" :) This is a
>>> feature lots of people were waiting for and I hope it will cater to your
>>> needs.
From: Boyko Y. <b.y...@ex...> - 2011-04-20 07:33:06
Yep, I can imagine it isn't that easy indeed. It does just work, and works well, until the mfsmaster goes down. Then it's a lottery: if you're lucky, you'll get the metadata successfully recovered. If not, you're lost.

I've been with Thomas implementing his solution on a few boxes for test purposes, and it really looks promising, however it is not completely reliable yet. Hopefully he'll have the time to work on it further.

Boyko

On Apr 20, 2011, at 9:59 AM, Michal Borychowski wrote:

> Hi
>
> This is a difficult question. If preparing a failover solution had been that easy, we would have done it already. And it is not. On the other hand, we've been using MooseFS for more than 5 years with this architecture in a very busy production environment and it just works. Let Thomas finish his failover solution :) I am sure it will satisfy the needs of all users.
>
> Regards
> Michal
>
> -----Original Message-----
> From: Boyko Yordanov [mailto:b.y...@ex...]
> Sent: Wednesday, April 20, 2011 8:48 AM
> To: Michal Borychowski
> Cc: 'Léon Keijser'; moo...@li...
> Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices
>
> Hi,
>
> Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure?
>
> So far there is no real working failover solution, not even Thomas Hatch's one. It's just that mfsmetalogger can't be trusted.
>
> Boyko
>
> On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote:
>
>>> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version, which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
From: Michal B. <mic...@ge...> - 2011-04-20 07:29:17
Hi!

The solution is very smart. In what environment have you done the tests? What operating systems (for the physical machine and virtual machine)? How many files were there in MooseFS? How big was the metadata file? How many clients were connected? Was the whole MooseFS quite busy? What about the performance of the master running in virtual machines?

Kind regards
-Michal

-----Original Message-----
From: Krzysztof Janiszewski - ecenter sp. z o.o. [mailto:k.j...@ec...]
Sent: Wednesday, April 20, 2011 9:06 AM
To: 'Boyko Yordanov'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

We have found a failover solution for the mfsmaster. Our mfsmaster is running on a VMware virtual machine, which is configured in Fault Tolerance mode. This means that there are in fact two running virtual machines on two physical hosts. The primary VM and the secondary VM perform the same CPU operations. When the primary VM fails, the secondary takes over all tasks and everything keeps running fine without interruption or data loss.

Best regards
Krzysztof Janiszewski
ecenter sp. z o.o.
Domains, hosting, mail, video :: http://www.ecenter.pl ::

-----Original Message-----
From: Boyko Yordanov [mailto:b.y...@ex...]
Sent: Wednesday, April 20, 2011 8:48 AM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

Hi,

Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure?

So far there is no real working failover solution, not even Thomas Hatch's one. It's just that mfsmetalogger can't be trusted.

Boyko

On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote:

>> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version, which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
From: Krzysztof J. - e. s. z o.o. <k.j...@ec...> - 2011-04-20 07:22:47
We have found a failover solution for the mfsmaster. Our mfsmaster is running on a VMware virtual machine, which is configured in Fault Tolerance mode. This means that there are in fact two running virtual machines on two physical hosts. The primary VM and the secondary VM perform the same CPU operations. When the primary VM fails, the secondary takes over all tasks and everything keeps running fine without interruption or data loss.

Best regards
Krzysztof Janiszewski
ecenter sp. z o.o.
Domains, hosting, mail, video :: http://www.ecenter.pl ::

-----Original Message-----
From: Boyko Yordanov [mailto:b.y...@ex...]
Sent: Wednesday, April 20, 2011 8:48 AM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

Hi,

Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure?

So far there is no real working failover solution, not even Thomas Hatch's one. It's just that mfsmetalogger can't be trusted.

Boyko

On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote:

>> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version, which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
From: Michal B. <mic...@ge...> - 2011-04-20 07:00:13
Hi

This is a difficult question. If preparing a failover solution had been that easy, we would have done it already. And it is not. On the other hand, we've been using MooseFS for more than 5 years with this architecture in a very busy production environment and it just works. Let Thomas finish his failover solution :) I am sure it will satisfy the needs of all users.

Regards
Michal

-----Original Message-----
From: Boyko Yordanov [mailto:b.y...@ex...]
Sent: Wednesday, April 20, 2011 8:48 AM
To: Michal Borychowski
Cc: 'Léon Keijser'; moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

Hi,

Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure?

So far there is no real working failover solution, not even Thomas Hatch's one. It's just that mfsmetalogger can't be trusted.

Boyko

On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote:

>> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version, which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
From: Boyko Y. <b.y...@ex...> - 2011-04-20 06:48:07
Hi,

Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure?

So far there is no real working failover solution, not even Thomas Hatch's one. It's just that mfsmetalogger can't be trusted.

Boyko

On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote:

>> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version, which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
From: Michal B. <mic...@ge...> - 2011-04-20 05:19:16
I cannot give any fixed date. It will probably still be in April, but I do not want to promise anything. It is being tested in our production environment now.

Regards
Michal

-----Original Message-----
From: Léon Keijser [mailto:ke...@st...]
Sent: Tuesday, April 19, 2011 10:47 PM
To: moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

On Mon, 2011-04-18 at 12:02 +0000, Michal Borychowski wrote:
> Hi!
>
> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.

Hi Michal,

This is great news indeed. When (approximately) do you think 1.6.21 will be released?

Kind regards,

Léon
From: Michal B. <mic...@ge...> - 2011-04-20 05:17:21
Hello Jose! Yes, Pater Noster qui es in Polonia ;)

Regards!
Michal

-----Original Message-----
From: jose maria [mailto:let...@us...]
Sent: Wednesday, April 20, 2011 12:44 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

>> On Fri, 2010-12-03 at 18:44 +0100
>> * Until the feature "Goal per Rack" (and it is not foreseen), not "Location awareness", comes,
>> Pater Noster qui es in Polonia ... you need N independent clusters, and rsync or similar ...

> On Mon, 2011-04-18 at 14:02 +0200, Michal Borychowski wrote:
> Hi!
>
> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
> PS. If you want to have chunkservers across different locations you still would have to have a very good connection between them.

* Yeaaaah !!! ;-)
* I appreciate your effort, sincerely.

>> On Mon, 2010-11-08 at 15:05 -0700
>> * MooseFS does not provide HA; unfortunately, it appears that I have to light a candle to the Black Virgin of Chestokova.
>> * cheers ...

* Pater Noster qui es in Polonia ... && Chestokova Rulez .. ;-)
From: jose m. <let...@us...> - 2011-04-19 22:44:29
>> On Fri, 2010-12-03 at 18:44 +0100
>> * Until the feature "Goal per Rack" (and it is not foreseen), not "Location awareness", comes,
>> Pater Noster qui es in Polonia ... you need N independent clusters, and rsync or similar ...

> On Mon, 2011-04-18 at 14:02 +0200, Michal Borychowski wrote:
> Hi!
>
> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
> PS. If you want to have chunkservers across different locations you still would have to have a very good connection between them.

* Yeaaaah !!! ;-)
* I appreciate your effort, sincerely.

>> On Mon, 2010-11-08 at 15:05 -0700
>> * MooseFS does not provide HA; unfortunately, it appears that I have to light a candle to the Black Virgin of Chestokova.
>> * cheers ...

* Pater Noster qui es in Polonia ... && Chestokova Rulez .. ;-)
From: Léon K. <ke...@st...> - 2011-04-19 21:06:44
On Mon, 2011-04-18 at 12:02 +0000, Michal Borychowski wrote:
> Hi!
>
> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.

Hi Michal,

This is great news indeed. When (approximately) do you think 1.6.21 will be released?

Kind regards,

Léon
From: Eric K. <ek...@op...> - 2011-04-19 19:05:31
Hi all,

We would like to be able to simply redirect clients to a new master if needed, by changing a DNS entry. This could be used for failover, master upgrades, etc. Right now, when the master goes down or is otherwise unreachable, the client just hangs and retries the same IP again and again, without resolving the master's name (i.e. mfsmaster) again.

I see that support for that already exists in the code for "session lost", and treating a master connection failure as a lost session seems to work very well. Specifically, in mfsmount/mastercomm.c, line 783 in 1.6.20:

    if (tcpnumconnect(fd,masterip,masterport)<0) {
        syslog(LOG_WARNING,"can't connect to master (\"%s\":\"%"PRIu16"\")",masterstrip,masterport);
        sessionlost=1;   // TREAT IT AS SESSION LOST, SO CLIENT RECONNECTS TO THE MASTER
        tcpclose(fd);
        fd=-1;
        return;
    }

Now, is there any particular reason why we shouldn't treat failing to connect to the master as a lost session?

Thanks guys.
From: Davies L. <dav...@gm...> - 2011-04-19 14:17:08
Changelog files are plain text; you could merge the changelogs from the mfsmaster and the mfsmetalogger by hand with a text editor, if the holes do not cover each other.

If this method does not work, some changes were lost in both changelog files. You could delete the changes after the "hole"; then part of the data could be recovered.

Do not give up, try again.

Davies

On Thu, Mar 31, 2011 at 6:54 PM, <da...@sq...> wrote:
> It appears that both the master and the metalogger servers have gotten
> corrupted. When I try to run an mfsmetarestore -a on the master I get the
> following:
>
> hole in change files (entries from 10772493 to 624987257 are missing) - add
> more files
>
> So I go to the metalogger and I get the same message but with a different
> set of numbers. Is there a way to run a partial restore, so I can at least
> get to some of the data? Or is my data just gone entirely? (I still have the
> chunkservers, which are intact)
>
> # mfsmetarestore -v
> version: 1.6.20
>
> I've been searching for several days, but haven't really been able to find
> much in relation to this. I know that it isn't related to the snapshot bug;
> I haven't used snapshots yet.
>
> Thanks,
> Dallin Jones

--
- Davies
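A rough sketch of the manual merge Davies describes, assuming MooseFS 1.6.x defaults: metadata and changelogs under /var/lib/mfs, and metalogger changelogs named changelog_ml.*.mfs (these paths and names are assumptions, not taken from the thread):

    # gather changelogs from both the master and the metalogger in one place
    mkdir -p /tmp/restore
    cp /var/lib/mfs/changelog.*.mfs /tmp/restore/
    scp metalogger:/var/lib/mfs/changelog_ml.*.mfs /tmp/restore/

    # splice the missing entry ranges from one set into the other with a text
    # editor, keeping the lines ordered by their leading change number, then rebuild:
    mfsmetarestore -m /var/lib/mfs/metadata.mfs.back \
                   -o /var/lib/mfs/metadata.mfs \
                   /tmp/restore/changelog*.mfs

If the holes overlap, stopping the merge at the first missing entry (as suggested above) recovers at least the changes up to that point.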
From: Michal B. <mic...@ge...> - 2011-04-19 08:19:44
From: boblin [mailto:ai...@qq...]
Sent: Friday, April 15, 2011 5:00 PM
To: Michal Borychowski
Subject: Re: RE: [Moosefs-users] Questions about MooseFS before putting it on the production env

Thanks for the reply. That article really helps me a lot; it's very, very useful and most of my puzzles are now gone ^_^. I've translated it into Chinese - how can I put it on the website to help others?

[MB] That's very nice! We are preparing a new version (1.6.21) and I would have to make some updates to the article, and then we could publish it in Chinese, too.

In addition I'd like to test Thomas's patch, but there seems to be no doc in the tarball, so I don't know how to use it ...

[MB] Thomas lately informed me that there is some error and the project needs some time to get better.

I still have some suggestions and questions, here down below:

#### Questions

Question#1: I still can't find any option in mfsmaster.cfg to change how often the metadata in RAM is dumped to files. -_-! In that article, you said it's one hour by default.

[MB] Yes, this is one hour by default and there is no way to change it in the config files. Honestly you do not need to change it, as changelogs run "online".

Question#2: file1's size is 1GB and file2's size is 1MB. Do they cost the same memory for metadata in RAM? Is there any formula to estimate the memory usage?

[MB] Yes, they use the same memory in RAM. 1 million files takes approximately 300 MiB of RAM.

Question#3: Does the metalogger download just the difference of metadata.mfs.back against the master's copy (like rsync), or simply the full file?

[MB] The metalogger downloads the full metadata.mfs file every 24 hours and constantly gets the changelog files.

Question#4: When client-A requests file1, can it read in parallel from multiple chunkservers, just like RAID-0?

[MB] MooseFS could read files like this, but it doesn't do so. It could speed up single operations, but globally (due to lots of seeks on hard drives) the performance of the system would be reduced.

#### Suggestions

1. For now I can only get statistics from mfscgiserv through the web interface. Have you ever considered providing a command-line tool like "squidclient mgr:info|5min", so I can just write a shell script and send that data to my alarm system without watching the graphs periodically and manually?

[MB] We have on our roadmap so-called "mfstools" and we would introduce such functionality into them.

2. Why can mfscgiserv only run on the mfsmaster? Can it query the master's metadata remotely?

[MB] You can run mfscgiserv on any machine. In mfs.cgi you just need to enter the proper IP address of the master. Running it on the master is simply convenient. You can also run the files from "PREFIX/share/mfscgi/" on any HTTP server which supports CGI (Apache, Lighttpd, Nginx, etc.). See the examples in "index.html".

Kind regards
Michal

Best regards
Boblin
Tencent Company, High-Tech Park, Shenzhen, Guangdong Province, China

------------------ Original ------------------
From: "Michal Borychowski" <mic...@ge...>
Date: Fri, Apr 15, 2011 08:18 PM
To: "'ailms'" <ai...@qq...>
Cc: "'moosefs-users'" <moo...@li...>
Subject: RE: [Moosefs-users] Questions about MooseFS before putting it on the production env

Hi Bob!

My replies are in the text. And for a start please read this article:
http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html

From: ailms [mailto:ai...@qq...]
Sent: Thursday, April 14, 2011 5:52 PM
To: moosefs-users
Subject: [Moosefs-users] Questions about MooseFS before putting it on the production env

Dear guys,

We plan to use MooseFS as our storage mechanism, which will hold more than 5TB of off-line command logs. But now we have some concerns. Please find my queries below:

1. Is there any plan to resolve the single point of failure of the master?
[MB] Please check a project developed by Thomas Hatch: https://github.com/thatch45/mfs-failover
It still needs some improvements but is a great way to test the failovers. And Thomas needs some beta testers.

2. The data will be lost if the master crashes without dumping metadata from RAM to disk, is that right?
[MB] Changelogs are saved continuously on the fly; there is very little probability you would lose your data upon a crash of the master server.

3. How can I know which data were lost if question #2 happened?
[MB] When you run the metadata recovery process you would know whether it was successful or not.

4. Does the master only cache metadata read-only, or will it update the data in RAM and flush it to disk periodically?
[MB] The changes to metadata are saved to changelogs continuously on the fly.

5. Are the changelogs based on metadata.mfs.back or on the data in RAM?
[MB] We can consider them as based on RAM.

6. Can I change the default dumping behaviour for master metadata from once per hour to a user-defined value, such as 30 min?
[MB] As far as I remember, yes you can.

7. How can we make sure no data is lost at any time with an HA (Keepalived) + DRBD + MFS solution?
[MB] We have not done such tests. Please first check the solution given in question 1.

8. The mfsmetalogger didn't reconnect automatically even though I had stopped and restarted the mfsmaster; can it only be done manually?
[MB] It should have reconnected automatically. What was your environment?

9. Which option in mfsmaster.cfg controls the rotation of the changelog? According to the size? Or each time the mfsmaster dumps the metadata?
[MB] Please see "man mfsmaster.cfg" - you should have all the options listed there.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

Any help will be appreciated. Looking forward to your answer.

Best regards
Bob Lin
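As a rough worked example of the "300 MiB per 1 million files" figure quoted in the answer to Question#2 (the object count below is purely illustrative, not from the thread):

    # hypothetical sizing only: ~300 MiB of master RAM per 1 million filesystem objects
    files=25000000                            # assumed number of files/directories
    echo "$(( files / 1000000 * 300 )) MiB"   # prints "7500 MiB", i.e. roughly 7.5 GiB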
From: Steve <st...@bo...> - 2011-04-18 13:15:46
Indeed!! "Big improvements" sounds intriguing, even though I think MooseFS is already perfect!!

-------Original Message-------
From: Alexander Akhobadze
Date: 18/04/2011 13:37:18
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices

WOW!!!! That is great news!!!

wbr
Alexander

======================================================

Hi!

I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.

PS. If you want to have chunkservers across different locations you still would have to have a very good connection between them.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Steve [mailto:st...@bo...]
Sent: Monday, April 18, 2011 1:45 PM
To: moo...@li...
Subject: [Moosefs-users] Fw: Re: chunkserver over several offices

I will forward this to the list, as your more detailed description may help someone chip in. I'm sure Michal will be along.

I think what you're asking is beyond the current scope of Moose, which is primarily designed by Gemius for their in-house use, although some rack awareness has been discussed and may be on the roadmap. Maybe search the list archive and/or the MooseFS website.

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 12:02:55
To: Steve
Subject: Re: [Moosefs-users] chunkserver over several offices

Hi,

In fact, we have 4 datacenters and I need, for some partitions (not all), at least one copy of those partitions in each datacenter or in a subset of these datacenters (at least 2). And for some datacenters the number of chunkservers is greater than 4, so I have to ensure that these chunks are not all in the same datacenter.

The goal is to ensure recovery in another datacenter in a crash case. Do you have an idea to solve my problem from a distributed point of view?

Thanks

2011/4/18 Steve <st...@bo...>

What are you trying to gain over having them in one place?

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 09:40:09
To: moo...@li...
Subject: [Moosefs-users] chunkserver over several offices

Hi,

How can I ensure at least one copy of each chunk on servers spread over several offices? The solution is easy if we have one chunkserver in each office. How can I do this when some offices have, for example, 5 chunkservers and others only one?

Thanks for your help

--
Eric Mourgaya,
Let's respect the planet! Let's fight mediocrity!
From: youngcow <you...@gm...> - 2011-04-18 13:09:29
Wow! That's great!

> Hi!
>
> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.
>
> PS. If you want to have chunkservers across different locations you still would have to have a very good connection between them.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
> -----Original Message-----
> From: Steve [mailto:st...@bo...]
> Sent: Monday, April 18, 2011 1:45 PM
> To: moo...@li...
> Subject: [Moosefs-users] Fw: Re: chunkserver over several offices
>
> I will forward this to the list, as your more detailed description may help someone chip in. I'm sure Michal will be along.
>
> I think what you're asking is beyond the current scope of Moose, which is primarily designed by Gemius for their in-house use, although some rack awareness has been discussed and may be on the roadmap. Maybe search the list archive and/or the MooseFS website.
>
> -------Original Message-------
> From: Eric mourgaya
> Date: 18/04/2011 12:02:55
> To: Steve
> Subject: Re: [Moosefs-users] chunkserver over several offices
>
> Hi,
>
> In fact, we have 4 datacenters and I need, for some partitions (not all), at least one copy of those partitions in each datacenter or in a subset of these datacenters (at least 2). And for some datacenters the number of chunkservers is greater than 4, so I have to ensure that these chunks are not all in the same datacenter.
>
> The goal is to ensure recovery in another datacenter in a crash case. Do you have an idea to solve my problem from a distributed point of view?
>
> Thanks
>
> 2011/4/18 Steve <st...@bo...>
>
> What are you trying to gain over having them in one place?
>
> -------Original Message-------
> From: Eric mourgaya
> Date: 18/04/2011 09:40:09
> To: moo...@li...
> Subject: [Moosefs-users] chunkserver over several offices
>
> Hi,
>
> How can I ensure at least one copy of each chunk on servers spread over several offices? The solution is easy if we have one chunkserver in each office. How can I do this when some offices have, for example, 5 chunkservers and others only one?
>
> Thanks for your help
From: Alexander A. <akh...@ri...> - 2011-04-18 12:36:47
WOW!!!! That is great news!!!

wbr
Alexander

======================================================

Hi!

I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.

PS. If you want to have chunkservers across different locations you still would have to have a very good connection between them.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Steve [mailto:st...@bo...]
Sent: Monday, April 18, 2011 1:45 PM
To: moo...@li...
Subject: [Moosefs-users] Fw: Re: chunkserver over several offices

I will forward this to the list, as your more detailed description may help someone chip in. I'm sure Michal will be along.

I think what you're asking is beyond the current scope of Moose, which is primarily designed by Gemius for their in-house use, although some rack awareness has been discussed and may be on the roadmap. Maybe search the list archive and/or the MooseFS website.

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 12:02:55
To: Steve
Subject: Re: [Moosefs-users] chunkserver over several offices

Hi,

In fact, we have 4 datacenters and I need, for some partitions (not all), at least one copy of those partitions in each datacenter or in a subset of these datacenters (at least 2). And for some datacenters the number of chunkservers is greater than 4, so I have to ensure that these chunks are not all in the same datacenter.

The goal is to ensure recovery in another datacenter in a crash case. Do you have an idea to solve my problem from a distributed point of view?

Thanks

2011/4/18 Steve <st...@bo...>

What are you trying to gain over having them in one place?

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 09:40:09
To: moo...@li...
Subject: [Moosefs-users] chunkserver over several offices

Hi,

How can I ensure at least one copy of each chunk on servers spread over several offices? The solution is easy if we have one chunkserver in each office. How can I do this when some offices have, for example, 5 chunkservers and others only one?

Thanks for your help

--
Eric Mourgaya,
Let's respect the planet! Let's fight mediocrity!
From: Michal B. <mic...@ge...> - 2011-04-18 12:03:07
Hi!

I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people were waiting for and I hope it will cater to your needs.

PS. If you want to have chunkservers across different locations you still would have to have a very good connection between them.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Steve [mailto:st...@bo...]
Sent: Monday, April 18, 2011 1:45 PM
To: moo...@li...
Subject: [Moosefs-users] Fw: Re: chunkserver over several offices

I will forward this to the list, as your more detailed description may help someone chip in. I'm sure Michal will be along.

I think what you're asking is beyond the current scope of Moose, which is primarily designed by Gemius for their in-house use, although some rack awareness has been discussed and may be on the roadmap. Maybe search the list archive and/or the MooseFS website.

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 12:02:55
To: Steve
Subject: Re: [Moosefs-users] chunkserver over several offices

Hi,

In fact, we have 4 datacenters and I need, for some partitions (not all), at least one copy of those partitions in each datacenter or in a subset of these datacenters (at least 2). And for some datacenters the number of chunkservers is greater than 4, so I have to ensure that these chunks are not all in the same datacenter.

The goal is to ensure recovery in another datacenter in a crash case. Do you have an idea to solve my problem from a distributed point of view?

Thanks

2011/4/18 Steve <st...@bo...>

What are you trying to gain over having them in one place?

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 09:40:09
To: moo...@li...
Subject: [Moosefs-users] chunkserver over several offices

Hi,

How can I ensure at least one copy of each chunk on servers spread over several offices? The solution is easy if we have one chunkserver in each office. How can I do this when some offices have, for example, 5 chunkservers and others only one?

Thanks for your help

--
Eric Mourgaya,
Let's respect the planet! Let's fight mediocrity!
From: Michal B. <mic...@ge...> - 2011-04-18 11:51:52
Hi Steven!

Here is a link to search the archives:
https://sourceforge.net/search/?group_id=228631&type_of_search=mlists

(You can also get to the search from our main list page http://sourceforge.net/mailarchive/forum.php?forum_name=moosefs-users when you hover over "Mailing Lists" in the top menu and select "Search Mail Lists".)

Regards
-Michal

From: Scoleri, Steven [mailto:Sco...@gs...]
Sent: Friday, April 15, 2011 10:01 PM
To: moo...@li...
Subject: [Moosefs-users] Few Questions?

1. Does anyone know a way to search this list? There doesn't seem to be a search engine on http://sourceforge.net/mailarchive/forum.php?forum_name=moosefs-users

2. I lost a couple of chunkservers (human error). Anyway, we lost some chunks to the zero column. How do I clean that up now? I've deleted the files that were in the 0 column.

3. When I try to mount the mfsmeta I get: "mfsmaster register error: Permission denied"

Thanks,
-Scoleri
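Regarding question 3, a minimal sketch of how the meta filesystem is typically mounted, assuming the master is reachable as "mfsmaster"; a "Permission denied" on registration often suggests the meta export entry in mfsexports.cfg is missing or restricted, though the thread does not confirm the actual cause here:

    # on the master, mfsexports.cfg needs a meta-filesystem entry (the "." line), e.g.
    #   *   .   rw
    # then reload the master so the change takes effect (assumption, not verified in this thread)

    mkdir -p /mnt/mfsmeta
    mfsmount -m /mnt/mfsmeta -H mfsmaster   # -m / --meta mounts the meta filesystem
    ls /mnt/mfsmeta                         # should list the trash and reserved directories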
From: Steve <st...@bo...> - 2011-04-18 11:44:53
I will forward this to the list, as your more detailed description may help someone chip in. I'm sure Michal will be along.

I think what you're asking is beyond the current scope of Moose, which is primarily designed by Gemius for their in-house use, although some rack awareness has been discussed and may be on the roadmap. Maybe search the list archive and/or the MooseFS website.

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 12:02:55
To: Steve
Subject: Re: [Moosefs-users] chunkserver over several offices

Hi,

In fact, we have 4 datacenters and I need, for some partitions (not all), at least one copy of those partitions in each datacenter or in a subset of these datacenters (at least 2). And for some datacenters the number of chunkservers is greater than 4, so I have to ensure that these chunks are not all in the same datacenter.

The goal is to ensure recovery in another datacenter in a crash case. Do you have an idea to solve my problem from a distributed point of view?

Thanks

2011/4/18 Steve <st...@bo...>

What are you trying to gain over having them in one place?

-------Original Message-------
From: Eric mourgaya
Date: 18/04/2011 09:40:09
To: moo...@li...
Subject: [Moosefs-users] chunkserver over several offices

Hi,

How can I ensure at least one copy of each chunk on servers spread over several offices? The solution is easy if we have one chunkserver in each office. How can I do this when some offices have, for example, 5 chunkservers and others only one?

Thanks for your help

--
Eric Mourgaya,
Let's respect the planet! Let's fight mediocrity!
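As background to this thread, the number of copies (though not their location) is set per file or directory with the goal tools; a small sketch with a hypothetical mount point and paths - this is precisely the gap the rack-awareness feature discussed above is meant to fill, since goals alone cannot pin copies to a particular office:

    # keep 3 copies of everything under this directory, recursively
    mfssetgoal -r 3 /mnt/mfs/important
    # check how many valid copies a given file currently has
    mfscheckfile /mnt/mfs/important/somefile.dat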
From: Michal B. <mic...@ge...> - 2011-04-18 11:37:25
Hi!

I assure you that MooseFS is not "eco friendly" in this way :) There are no delaying loops, sleeps or anything else like this. We are not sure whether any restrictions may come from the protocol itself, but I guess they shouldn't. You can try to run several "dd"s at the same moment - for sure CPU and HDD usage will go up in the CS. And I wonder what the results for SSD disks would be. Do you have any possibility to test them?

Kind regards
Michal

-----Original Message-----
From: Razvan Dumitrescu [mailto:ra...@re...]
Sent: Friday, April 15, 2011 6:04 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: [Moosefs-users] RE-2: Is there a limit set for network transfer of 1Gbit set in moosefs-fuse?

Hello again Michal!

I agree with you that I would not get 600MB/s when I have 10 reads and 10 writes of video files, but the total will be close to this value, because in my case we only deal with large video files 2 to 10 GB in size, and I can tell from experience that this storage, used directly via FTP, delivers to the users around 400-500MB/s on each controller with around 5 reads and 5 writes each of ~45MB/s.

To me, from what I could see in all the tests I've done, it looks like somewhere the filesystem is not reaching its true transfer potential. I say this because when I read at 123MB/s, the disks are used 15% each, 70% of the CPU power is free, RAM is also plentiful, but the mfschunkserver processes are in sleep state (S) at 40% CPU time. So what I understand from this is that the chunkservers, instead of delivering data as fast as possible, just sit and take a nap ( :D ) for 60% of the CPU time. If there is some filesystem IO moderation in action that is part of how the system works, then it is understandable, but if it is just sleeping, then we can say that MooseFS is also a "green" filesystem (eco-friendly) :P since it consumes less CPU power.

I will also try to look into the code in the following days and see if I can answer these questions myself and also get a better understanding of how the FS works. Well, I hope I haven't been too obnoxious with my jokes; I'm looking forward to your reply :).
Kind regards, Razvan Dumitrescu PS: In order to justify my conclusions and answer to the questions you asked, i sumbit also the following test logs: ====================================== start test logs ================================= newsdesk-storage-02 / # uname -a Linux newsdesk-storage-02 2.6.38-gentoo-r1 #1 SMP Tue Apr 5 18:28:19 EEST 2011 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux RAID 6 128k stripe size, total 12 disks, 10 data disks, 2 parity disks Each volume has been set with XFS and coresponding RAID array stripe aligning mkfs.xfs -f -d su=128k,sw=10 -l version=2,su=128k /dev/sdc mkfs.xfs -f -d su=128k,sw=10 -l version=2,su=128k /dev/sdd READ & WRITE TEST DIRECTLY TO DISKS =================================== (reads were done simultaneus on both controllers /mnt/osd0 and /mnt/osd1) (writes were done also simultaneous on both controllers as you will be able to see below in atop screen) controller 1 write: newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd0/test.file bs=4k count=2048k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 14.3845 s, 597 MB/s newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd0/test1.file bs=4k count=2048k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 14.2794 s, 602 MB/s newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd0/test2.file bs=4k count=2048k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 14.9359 s, 575 MB/s read: newsdesk-storage-02 / # dd if=/mnt/osd0/test.file of=/dev/null bs=4k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 11.7127 s, 733 MB/s newsdesk-storage-02 / # dd if=/mnt/osd0/test1.file of=/dev/null bs=4k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 12.3034 s, 698 MB/s newsdesk-storage-02 / # dd if=/mnt/osd0/test2.file of=/dev/null bs=4k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 13.2949 s, 646 MB/s controller 2 write: newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd1/test.file bs=4k count=2048k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 14.8654 s, 578 MB/s newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd1/test1.file bs=4k count=2048k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 14.9257 s, 576 MB/s newsdesk-storage-02 / # dd if=/dev/zero of=/mnt/osd1/test2.file bs=4k count=2048k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 14.6867 s, 585 MB/s read: newsdesk-storage-02 / # dd if=/mnt/osd1/test.file of=/dev/null bs=4k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 11.5464 s, 744 MB/s newsdesk-storage-02 / # dd if=/mnt/osd1/test1.file of=/dev/null bs=4k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 11.6056 s, 740 MB/s newsdesk-storage-02 / # dd if=/mnt/osd1/test2.file of=/dev/null bs=4k 2097152+0 records in 2097152+0 records out 8589934592 bytes (8.6 GB) copied, 12.2195 s, 703 MB/s ATOP - newsdesk-storage-0 2011/04/15 16:40:41 5 seconds elapsed PRC | sys 7.70s | user 0.22s | #proc 136 | #zombie 0 | #exit 0 | CPU | sys 147% | user 4% | irq 12% | idle 573% | wait 64% | cpu | sys 55% | user 3% | irq 5% | idle 11% | cpu000 w 27% | cpu | sys 51% | user 1% | irq 6% | idle 12% | cpu002 w 29% | cpu | sys 22% | user 0% | irq 0% | idle 78% | cpu001 w 0% | cpu | sys 7% | user 0% | irq 1% | idle 88% | cpu006 w 4% | cpu | sys 6% | user 0% | irq 0% | idle 91% | cpu004 w 2% | cpu | 
sys 5% | user 0% | irq 0% | idle 95% | cpu005 w 0% | CPL | avg1 1.05 | avg5 0.54 | avg15 0.31 | csw 10777 | intr 28418 | MEM | tot 3.9G | free 34.6M | cache 3.6G | buff 0.0M | slab 38.3M | SWP | tot 988.4M | free 979.4M | | vmcom 513.7M | vmlim 2.9G | PAG | scan 1769e3 | stall 0 | | swin 0 | swout 0 | DSK | sdc | busy 101% | read 6635 | write 0 | avio 0 ms | DSK | sdd | busy 101% | read 7144 | write 0 | avio 0 ms | NET | transport | tcpi 29 | tcpo 29 | udpi 0 | udpo 0 | NET | network | ipi 46 | ipo 29 | ipfrw 0 | deliv 37 | NET | eth1 0% | pcki 54 | pcko 7 | si 6 Kbps | so 4 Kbps | NET | eth0 0% | pcki 34 | pcko 14 | si 4 Kbps | so 1 Kbps | NET | bond0 ---- | pcki 88 | pcko 21 | si 11 Kbps | so 6 Kbps | NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps | PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1 17681 3.19s 0.13s 0K 0K root 1 -- - R 67% dd 17680 3.07s 0.08s 0K 0K root 1 -- - R 64% dd 437 1.39s 0.00s 0K 0K root 1 -- - R 28% kswapd0 17450 0.00s 0.01s 0K 0K mfs 1 -- - S 0% mfsmaster 17677 0.01s 0.00s 0K 0K root 1 -- - R 0% atop 9 0.01s 0.00s 0K 0K root 1 -- - S 0% ksoftirqd/1 12 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/2:0 24 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/6:0 16809 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/0:2 4574 0.00s 0.00s 0K 0K root 1 -- - S 0% kworker/u:2 13791 0.00s 0.00s 0K 0K root 1 -- - S 0% xfsbufd/sdc 17648 0.00s 0.00s 0K 0K root 1 -- - S 0% flush-8:48 As atop shows this is the maximum transfer RAID provides sdc/sdd being busy 100% which is quite ok since that means around 70MB/s read and 60MB/s write for each of the 10 data disks in each array so far so good and pleased how things work, when i try to copy from one storage to another i get the following results which again i can understand since the limitation is the CPU. 
copy from storage to storage using cp newsdesk-storage-02 / # time cp /mnt/osd0/test2.file /mnt/osd1/test6.file real 0m19.376s user 0m0.050s sys 0m15.280s newsdesk-storage-02 / # time cp /mnt/osd1/test2.file /mnt/osd0/test6.file real 0m15.691s user 0m0.030s sys 0m15.560s newsdesk-storage-02 / # time cp /mnt/osd0/test4.file /mnt/osd1/test7.file real 0m16.652s user 0m0.070s sys 0m16.390s newsdesk-storage-02 / # time cp /mnt/osd1/test4.file /mnt/osd0/test7.file real 0m16.551s user 0m0.030s sys 0m15.760s 8589934592 bytes (8.6 GB) copied in 19.376s = 422.79 MB/s 8589934592 bytes (8.6 GB) copied in 15.691s = 522.08 MB/s 8589934592 bytes (8.6 GB) copied in 16.652s = 491.95 MB/s 8589934592 bytes (8.6 GB) copied in 16.551s = 494.95 MB/s The transfer reaches only this values cause cp caps in CPU while the disks are used half of available speed as you can see below ATOP - newsdesk-storage-0 2011/04/15 17:04:36 5 seconds elapsed PRC | sys 7.39s | user 0.01s | #proc 133 | #zombie 0 | #exit 0 | CPU | sys 141% | user 0% | irq 5% | idle 655% | wait 0% | cpu | sys 96% | user 0% | irq 4% | idle 0% | cpu001 w 0% | cpu | sys 27% | user 0% | irq 0% | idle 73% | cpu006 w 0% | cpu | sys 18% | user 0% | irq 0% | idle 82% | cpu002 w 0% | cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu000 w 0% | cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu005 w 0% | cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu007 w 0% | CPL | avg1 0.11 | avg5 0.27 | avg15 0.26 | csw 6104 | intr 23475 | MEM | tot 3.9G | free 31.6M | cache 3.5G | buff 0.0M | slab 160.8M | SWP | tot 988.4M | free 978.4M | | vmcom 513.4M | vmlim 2.9G | PAG | scan 1339e3 | stall 0 | | swin 0 | swout 0 | DSK | sdc | busy 56% | read 0 | write 5905 | avio 0 ms | DSK | sdd | busy 40% | read 5216 | write 1 | avio 0 ms | DSK | sda | busy 3% | read 0 | write 3 | avio 43 ms | DSK | sdb | busy 1% | read 0 | write 3 | avio 10 ms | NET | transport | tcpi 42 | tcpo 41 | udpi 0 | udpo 0 | NET | network | ipi 85 | ipo 41 | ipfrw 0 | deliv 56 | NET | eth0 0% | pcki 70 | pcko 22 | si 10 Kbps | so 2 Kbps | NET | eth1 0% | pcki 61 | pcko 9 | si 7 Kbps | so 3 Kbps | NET | bond0 ---- | pcki 131 | pcko 31 | si 18 Kbps | so 6 Kbps | NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps | PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1 17745 4.97s 0.01s 0K -12K root 1 -- - R 100% cp 437 1.23s 0.00s 0K 0K root 1 -- - S 25% kswapd0 17700 0.76s 0.00s 0K 0K root 1 -- - S 15% flush-8:32 17650 0.19s 0.00s 0K 0K root 1 -- - S 4% kworker/2:2 17702 0.13s 0.00s 0K 0K root 1 -- - S 3% kworker/2:1 24 0.05s 0.00s 0K 0K root 1 -- - S 1% kworker/6:0 17742 0.02s 0.00s 0K 0K root 1 -- - R 0% atop 17450 0.01s 0.00s 0K 0K mfs 1 -- - S 0% mfsmaster 17456 0.01s 0.00s 0K 0K mfs 24 -- - S 0% mfschunkserver 581 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/1:2 17645 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/1:0 17234 0.00s 0.00s 0K -4K root 1 -- - S 0% bash 17483 0.00s 0.00s 0K 0K mfs 24 -- - S 0% mfschunkserver 121 0.00s 0.00s 0K 0K root 1 -- - S 0% sync_supers 13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1 While doing dd read symultaneous with 2 clients using MooseFS i get these results: - disks not used - CPU not used - memory again plenty newsdesk-storage-01 / # dd if=/mnt/mfs/test2.file of=/dev/null bs=16k 524288+0 records in 524288+0 records out 8589934592 bytes (8.6 GB) copied, 201.276 s, 42.7 MB/s newsdesk-storage-01 / # dd if=/mnt/mfs/test2.file of=/dev/null bs=16k 524288+0 records in 524288+0 records out 8589934592 bytes (8.6 GB) copied, 183.349 s, 46.9 MB/s fluxuri / # dd 
fluxuri / # dd if=/mnt/mfs/test1.file of=/dev/null bs=16k
524288+0 records in
524288+0 records out
8589934592 bytes (8.6 GB) copied, 144.376 s, 59.5 MB/s
fluxuri / # dd if=/mnt/mfs/test1.file of=/dev/null bs=16k
524288+0 records in
524288+0 records out
8589934592 bytes (8.6 GB) copied, 144.132 s, 59.6 MB/s

ATOP - newsdesk-storage-0 2011/04/15 16:21:24 5 seconds elapsed
PRC | sys 1.72s | user 2.27s | #proc 132 | #zombie 0 | #exit 0 |
CPU | sys 18% | user 23% | irq 6% | idle 750% | wait 4% |
cpu | sys 3% | user 5% | irq 1% | idle 91% | cpu005 w 0% |
cpu | sys 3% | user 3% | irq 1% | idle 94% | cpu001 w 0% |
cpu | sys 2% | user 2% | irq 1% | idle 94% | cpu007 w 1% |
cpu | sys 1% | user 3% | irq 1% | idle 94% | cpu003 w 2% |
cpu | sys 3% | user 2% | irq 0% | idle 94% | cpu000 w 1% |
cpu | sys 3% | user 3% | irq 1% | idle 93% | cpu002 w 0% |
cpu | sys 2% | user 2% | irq 0% | idle 96% | cpu004 w 0% |
cpu | sys 1% | user 3% | irq 1% | idle 94% | cpu006 w 0% |
CPL | avg1 0.08 | avg5 0.11 | avg15 0.07 | csw 61782 | intr 71792 |
MEM | tot 3.9G | free 32.1M | cache 3.6G | buff 0.0M | slab 37.8M |
SWP | tot 988.4M | free 979.6M | | vmcom 530.8M | vmlim 2.9G |
PAG | scan 121157 | stall 0 | | swin 0 | swout 0 |
DSK | sdd | busy 11% | read 492 | write 0 | avio 1 ms |
DSK | sdc | busy 5% | read 671 | write 0 | avio 0 ms |
DSK | sda | busy 4% | read 2 | write 3 | avio 38 ms |
DSK | sdb | busy 1% | read 1 | write 3 | avio 12 ms |
NET | transport | tcpi 137827 | tcpo 356731 | udpi 0 | udpo 0 |
NET | network | ipi 137862 | ipo 27661 | ipfrw 0 | deliv 137832 |
NET | eth0 46% | pcki 70537 | pcko 193565 | si 7525 Kbps | so 463 Mbps |
NET | eth1 39% | pcki 67369 | pcko 163226 | si 7221 Kbps | so 392 Mbps |
NET | bond0 ---- | pcki 137906 | pcko 356791 | si 14 Mbps | so 856 Mbps |
NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
17456 0.79s 1.22s 144K -28K mfs 24 -- - S 42% mfschunkserver
17483 0.82s 1.04s -44K -192K mfs 24 -- - S 39% mfschunkserver
437 0.08s 0.00s 0K 0K root 1 -- - S 2% kswapd0
17613 0.02s 0.00s 8676K 516K root 1 -- - R 0% atop
17450 0.01s 0.00s 0K 0K mfs 1 -- - S 0% mfsmaster
16376 0.00s 0.01s 0K 0K root 1 -- - S 0% apache2
12 0.00s 0.00s 0K 0K root 1 -- - R 0% kworker/2:0
13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1
13277 0.00s 0.00s 0K 0K root 1 -- - S 0% xfsbufd/md1

Bonding works fine on newsdesk-storage-02: when I run iperf from the same 2 clients I get around 1.9Gbit/s.

ATOP - newsdesk-storage-0 2011/04/15 17:29:05 5 seconds elapsed
PRC | sys 0.96s | user 0.04s | #proc 132 | #zombie 0 | #exit 0 |
CPU | sys 10% | user 1% | irq 2% | idle 788% | wait 0% |
cpu | sys 2% | user 0% | irq 1% | idle 97% | cpu005 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 98% | cpu003 w 0% |
cpu | sys 2% | user 0% | irq 0% | idle 98% | cpu000 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 98% | cpu001 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 98% | cpu006 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu004 w 0% |
cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu002 w 0% |
cpu | sys 0% | user 0% | irq 0% | idle 100% | cpu007 w 0% |
CPL | avg1 0.06 | avg5 0.23 | avg15 0.28 | csw 18503 | intr 45720 |
MEM | tot 3.9G | free 40.7M | cache 3.5G | buff 0.0M | slab 158.8M |
SWP | tot 988.4M | free 978.4M | | vmcom 545.8M | vmlim 2.9G |
DSK | sda | busy 3% | read 0 | write 4 | avio 37 ms |
DSK | sdb | busy 0% | read 0 | write 4 | avio 2 ms |
DSK | sdc | busy 0% | read 0 | write 1 | avio 0 ms |
DSK | sdd | busy 0% | read 0 | write 1 | avio 0 ms |
NET | transport | tcpi 782002 | tcpo 372077 | udpi 0 | udpo 0 |
NET | network | ipi 782031 | ipo 372084 | ipfrw 0 | deliv 782021 |
NET | eth0 98% | pcki 406520 | pcko 372072 | si 984 Mbps | so 39 Mbps |
NET | eth1 90% | pcki 375564 | pcko 7 | si 909 Mbps | so 2 Kbps |
NET | bond0 ---- | pcki 782084 | pcko 372079 | si 1893 Mbps | so 39 Mbps |
NET | lo ---- | pcki 8 | pcko 8 | si 0 Kbps | so 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
17785 0.92s 0.03s 0K 0K root 5 -- - S 19% iperf
17450 0.01s 0.00s 0K 0K mfs 1 -- - S 0% mfsmaster
17789 0.01s 0.00s 0K 0K root 1 -- - R 0% atop
17456 0.01s 0.00s 0K 0K mfs 24 -- - S 0% mfschunkserver
17483 0.00s 0.01s 0K 0K mfs 24 -- - S 0% mfschunkserver
16809 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/0:2
4574 0.00s 0.00s 0K 0K root 1 -- - S 0% kworker/u:2
13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1

======================================= end test logs ==================================

On Fri, 15 Apr 2011 14:51:03 +0200, Michal Borychowski wrote:
> Hi Razvan!
>
> Thanks for the nice words about MooseFS! :)
>
> There is absolutely no built-in limit for transfers in MooseFS. And 130 MB/s
> for SATA2 disks is a very good result - compare it to these benchmarks:
> http://hothardware.com/printarticle.aspx?articleid=881. I would never expect
> 600MB/s on SATA2 disks. And what are the results of 'dd' for the disks alone?
> But remember that such tests do not reflect real-life environments where
> lots of users use the system at the same time.
>
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
>
> -----Original Message-----
> From: Razvan Dumitrescu [mailto:ra...@re...]
> Sent: Thursday, April 14, 2011 7:11 PM
> To: moo...@li...
> Subject: [Moosefs-users] Is there a limit set for network transfer of 1Gbit
> set in moosefs-fuse?
>
> Hello guys!
>
> First of all I have to say you made a great job with MooseFS; of all the
> open source distributed filesystems I looked over, this is by far the best
> in features, documentation and simple efficiency (installation,
> configuration, tools). Great piece of work!
> Now, in order to be in pure ecstasy while looking at how well my storage
> performs, I need to get an answer :).
> I have set up a MooseFS on a server box, 2 x Xeon X5355, 4GB RAM,
> Intel 5000P board, 2 x Areca 1321 controllers, 24 x ST3750640NS disks.
> What puzzles me is that when I mount the MooseFS on the same machine and
> try to make different transfers, the bottleneck I hit is always
> around 1Gbit. My 2 storages sdc and sdd are both used around 15-20%
> while reaching a transfer from MooseFS of ~130MB/s (both controllers
> deliver around 600MB/s read and write).
> For this local test the loopback device is used (mounted on the same
> machine) and from what I can see using atop,
> the only section that seems to cap is the network at the loopback device,
> with a value slightly under 1Gbit.
> Using iperf on the same machine I get 20Gb/s, so it seems the loopback device
> can't be blamed for the limitation.
>
> Mounts:
>
> storage-02 / # df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/md1 149G 15G 135G 10% /
> udev 10M 192K 9.9M 2% /dev
> shm 2.0G 0 2.0G 0% /dev/shm
> /dev/sdc 6.9T 31G 6.8T 1% /mnt/osd0
> /dev/sdd 6.9T 31G 6.8T 1% /mnt/osd1
> 192.168.8.88:9421 14T 62G 14T 1% /mnt/mfs
>
> Network test:
>
> storage-02 / # iperf -s -B 127.0.0.1
> ------------------------------------------------------------
> Server listening on TCP port 5001
> Binding to local address 127.0.0.1
> TCP window size: 1.00 MByte (default)
> ------------------------------------------------------------
> [ 4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 52464
> [ ID] Interval Transfer Bandwidth
> [ 4] 0.0-20.0 sec 47.6 GBytes 20.4 Gbits/sec
>
> storage-02 / # ls -la /mnt/mfs/
> -rw-r--r-- 1 root root 17179869184 Apr 14 18:43 test.file
>
> Read test from MooseFS:
>
> dd if=/mnt/mfs/test.file of=/dev/null bs=4k
> 4194304+0 records in
> 4194304+0 records out
> 17179869184 bytes (17 GB) copied, 139.273 s, 123 MB/s
>
> atop while doing the read test:
>
> PRC | sys 4.28s | user 3.10s | #proc 120 | #zombie 0 | #exit 0 |
> CPU | sys 58% | user 43% | irq 3% | idle 617% | wait 80% |
> cpu | sys 12% | user 12% | irq 1% | idle 57% | cpu003 w 18% |
> cpu | sys 14% | user 7% | irq 0% | idle 57% | cpu007 w 21% |
> cpu | sys 9% | user 6% | irq 1% | idle 64% | cpu000 w 20% |
> cpu | sys 9% | user 7% | irq 0% | idle 71% | cpu004 w 13% |
> cpu | sys 5% | user 5% | irq 0% | idle 90% | cpu005 w 0% |
> cpu | sys 5% | user 3% | irq 1% | idle 87% | cpu001 w 4% |
> cpu | sys 2% | user 2% | irq 0% | idle 96% | cpu002 w 0% |
> cpu | sys 3% | user 2% | irq 0% | idle 92% | cpu006 w 3% |
> CPL | avg1 0.68 | avg5 0.24 | avg15 0.15 | csw 123346 | intr 61464 |
> MEM | tot 3.9G | free 33.3M | cache 3.5G | buff 0.0M | slab 184.4M |
> SWP | tot 988.4M | free 983.9M | | vmcom 344.5M | vmlim 2.9G |
> PAG | scan 297315 | stall 0 | | swin 0 | swout 0 |
> DSK | sdd | busy 10% | read 540 | write 1 | avio 0 ms |
> DSK | sda | busy 6% | read 0 | write 6 | avio 50 ms |
> DSK | sdc | busy 6% | read 692 | write 0 | avio 0 ms |
> DSK | sdb | busy 1% | read 0 | write 6 | avio 6 ms |
> NET | transport | tcpi 62264 | tcpo 80771 | udpi 0 | udpo 0 |
> NET | network | ipi 62277 | ipo 62251 | ipfrw 0 | deliv 62270 |
> NET | eth1 0% | pcki 75 | pcko 7 | si 9 Kbps | so 6 Kbps |
> NET | eth0 0% | pcki 22 | pcko 24 | si 3 Kbps | so 3 Kbps |
> NET | lo ---- | pcki 62221 | pcko 62221 | si 978 Mbps | so 978 Mbps |
> NET | bond0 ---- | pcki 97 | pcko 31 | si 12 Kbps | so 10 Kbps |
>
> PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/1
> 28379 1.11s 1.31s -44K 408K mfs 24 -- - S 52% mfschunkserver
> 28459 1.53s 0.75s 0K 0K root 19 -- - S 49% mfsmount
> 28425 0.93s 1.02s 16K -256K mfs 24 -- - S 42% mfschunkserver
> 28687 0.48s 0.01s 0K 0K root 1 -- - D 10% dd
> 437 0.20s 0.00s 0K 0K root 1 -- - S 4% kswapd0
> 28697 0.02s 0.00s 8676K 516K root 1 -- - R 0% atop
> 28373 0.00s 0.01s 0K 0K mfs 1 -- - S 0% mfsmaster
> 28510 0.01s 0.00s 0K 0K root 1 -- - S 0% kworker/7:0
> 4574 0.00s 0.00s 0K 0K root 1 -- - S 0% kworker/u:2
> 13251 0.00s 0.00s 0K 0K root 1 -- - S 0% md1_raid1
> 17959 0.00s 0.00s 0K 0K root 1 -- - S 0% xfsbufd/sdc
>
> Is there a limitation set in MooseFS or the FUSE client that caps the
> maximum socket transfer at 1Gbit?
> Is there a trick to unleash the moose beast?
> :P
> If you have any idea how I can get the full potential from my
> storage, please let me know.
> Looking forward to your reply!
>
> Kind regards,
>
> Razvan Dumitrescu
> System engineer
> Realitatea TV
>
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users

_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
|
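One way to narrow down whether the roughly 1Gbit ceiling seen above is a per-stream limit or a global one is to read two of the existing test files through the MooseFS mount in parallel and look at the aggregate rate. A minimal sketch, assuming the 8GB test files created earlier in this thread are still present under /mnt/mfs (the script itself is not part of the original tests):

#!/bin/bash
# Read two files from the MooseFS mount in parallel and report the combined rate.
MNT=/mnt/mfs
start=$(date +%s)
dd if=$MNT/test1.file of=/dev/null bs=1M &
dd if=$MNT/test2.file of=/dev/null bs=1M &
wait
end=$(date +%s)
bytes=$(( $(stat -c%s $MNT/test1.file) + $(stat -c%s $MNT/test2.file) ))
echo "aggregate: $(( bytes / (end - start) / 1024 / 1024 )) MB/s"

If the aggregate clearly exceeds what a single dd achieves, the bottleneck is per connection (the mfsmount/loopback path) rather than a hard limit in MooseFS itself.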
From: Michal B. <mic...@ge...> - 2011-04-18 10:44:03
|
Hi!

You need to remember that MooseFS is a network distributed file system, so such tests are not very practical. For example, each write waits for a success confirmation from the chunkserver - this cannot be bypassed.

Kind regards
Michal

-----Original Message-----
From: Fyodor Ustinov [mailto:uf...@uf...]
Sent: Friday, April 15, 2011 3:20 AM
To: moo...@li...
Subject: [Moosefs-users] bonnie++ test

Hi!

I still do not understand why the bonnie++ "Sequential Output Per Chr" and "Sequential Output Rewrite" tests show dramatically low results. Does anyone know how to resolve this problem? During these tests the disk tps/bps, CPU and ethernet on the chunk/meta servers are not loaded. But the results look like this:

Version 1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
stb1-1        8G     128  30 91398  25 12128   2  4190  98 60089   3 620.8   7
Latency            85864us 77570us   276ms  3763us   200ms   147ms
Version 1.96       ------Sequential Create------ --------Random Create--------
stb1-1             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                16   825   4 32020  50  2891  10   691   4  3205   7  1059   5
Latency            39868us  4367us 13013us 27704us  4812us   178ms

Result in csv:
1.96,1.96,stb1-1,1,1302839383,8G,,128,30,91398,25,12128,2,4190,98,60089,3,620.8,7,16,,,,,825,4,32020,50,2891,10,691,4,3205,7,1059,5,85864us,77570us,276ms,3763us,200ms,147ms,39868us,4367us,13013us,27704us,4812us,178ms

mount command:
mfsmount -o mfscachefiles -o mfsentrycacheto=1 -o mfswritecachesize=256 /mnt

WBR, Fyodor.

_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
|
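Michal's point about each write waiting for confirmation is easiest to see by varying the write block size against the mount point: many tiny writes pay the per-operation round trip far more often than a few large ones, although how visible the effect is also depends on the client write cache (mfswritecachesize). A rough, hypothetical comparison - the path and sizes are illustrative and not taken from Fyodor's benchmark:

# Same 256MB of data, written as many small blocks vs. a few large blocks.
dd if=/dev/zero of=/mnt/mfs/bs-test bs=4k count=65536 conv=fdatasync
dd if=/dev/zero of=/mnt/mfs/bs-test bs=1M count=256   conv=fdatasync
rm /mnt/mfs/bs-test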
From: Giovanni T. <me...@gi...> - 2011-04-17 09:16:53
|
On Fri, Apr 15, 2011 at 10:12 PM, eric mourgaya <eri...@gm...> wrote:
> How can I ensure at least one copy of each chunk on servers spread across several
> offices? The solution is easy if we have one chunkserver in each office.
> How can I do this when some offices have, for example, 5 chunkservers and
> others only one?

This isn't possible with the current release. I was asking for this sort of feature some weeks ago: in a terminal-server-like environment with 2 big chunkservers and many small chunkservers that are not always online, something like a chunkserver priority in the master logic would solve these kinds of problems.

Bye.

--
Giovanni Toraldo
http://gionn.net/
|
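For reference, the goal mechanism only controls how many copies of each chunk exist, not where they are placed, so before the announced rack-awareness feature it cannot express "one copy per office". A quick sketch with the standard client-side tools (the directory path is just an example):

mfssetgoal -r 2 /mnt/mfs/shared        # keep two copies of every chunk under this tree
mfsgetgoal -r /mnt/mfs/shared          # verify the goal was applied
mfscheckfile /mnt/mfs/shared/somefile  # show how many valid copies each chunk currently has

The master still decides which chunkservers hold those copies, which is exactly why a per-office guarantee needs the upcoming rack awareness (or a chunkserver priority) in the master logic.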
From: Thomas S H. <tha...@gm...> - 2011-04-16 14:53:20
|
I have been able to cleanly fail over to a metalogger when the master crashes and keep VMs running. My schedule will be opening up soon and I will FINALLY be able to clean up and finish my failover suite so that it just runs.

-Thomas S Hatch

On Sat, Apr 16, 2011 at 1:59 AM, Giovanni Toraldo <me...@gi...> wrote:
> On Sat, Apr 16, 2011 at 4:15 AM, 颜秉珩 <rw...@12...> wrote:
> > If we store VM disk image files on MFS, then when the master crashes the image
> > file is destroyed and you can't boot the VM from it.
> > That means we lose this VM permanently.
>
> The metalogger exists to keep a recently consistent metadata backup. If
> you use the metalogger's metadata, you should get your VM images as they were
> immediately before the crash (in this specific situation, you probably
> need to restart the VMs).
>
> --
> Giovanni Toraldo
> http://gionn.net/
>
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
>
|
From: Giovanni T. <me...@gi...> - 2011-04-16 08:30:09
|
On Sat, Apr 16, 2011 at 4:15 AM, 颜秉珩 <rw...@12...> wrote:
> If we store VM disk image files on MFS, then when the master crashes the image
> file is destroyed and you can't boot the VM from it.
> That means we lose this VM permanently.

The metalogger exists to keep a recently consistent metadata backup. If you use the metalogger's metadata, you should get your VM images as they were immediately before the crash (in this specific situation, you probably need to restart the VMs).

--
Giovanni Toraldo
http://gionn.net/
|
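For completeness, the usual manual promotion path when falling back to a metalogger looks roughly like the sketch below. File names and exact steps vary between MooseFS versions, and this is not Thomas's automated failover suite - treat it as an outline only:

# On the metalogger host, once the old master is confirmed dead:
cd /var/lib/mfs
cp metadata_ml.mfs.back metadata.mfs.back                     # last downloaded metadata image
for f in changelog_ml.*.mfs; do cp "$f" "${f/_ml./.}"; done   # changelogs received since then
mfsmetarestore -a -d /var/lib/mfs                             # rebuild metadata from image + logs
mfsmaster start                                               # then repoint the master DNS/IP at this host

Whether the very last writes survive depends on how far behind the metalogger's changelog feed was at the moment of the crash, which is the caveat raised earlier in this thread.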
From: 颜. <rw...@12...> - 2011-04-16 02:19:11
|
Unfortunately, if the master crashes while a client is writing data to MFS, that data is lost for good. If we store VM disk image files on MFS, then when the master crashes the image file is destroyed and you can't boot the VM from it. That means we lose this VM permanently.

From: Michal Borychowski
Sent: 2011-04-15 20:19:37
To: 'ailms'
Cc: 'moosefs-users'
Subject: Re: [Moosefs-users] Questions about MooseFS before putting it on the production env

Hi Bob!

My replies are in the text. And for a start please read this article:
http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html

From: ailms [mailto:ai...@qq...]
Sent: Thursday, April 14, 2011 5:52 PM
To: moosefs-users
Subject: [Moosefs-users] Questions about MooseFS before putting it on the production env

Dear guys,

We are planning to use MooseFS as our storage mechanism, which will hold more than 5TB of off-line command logs. But we still have some concerns. Please find my queries below:
-----
1. Is there any plan to resolve the single point of failure of the master?
[MB] Please check a project developed by Thomas Hatch: https://github.com/thatch45/mfs-failover It still needs some improvements but is a great way to test failovers. And Thomas needs some beta testers.

2. Will data be lost if the master crashes without dumping the metadata from RAM to disk, is that right?
[MB] Changelogs are saved continuously on the fly, so there is very little probability that you would lose your data upon a crash of the master server.

3. How can I know which data was lost if question #2 happened?
[MB] When you run the metadata recovery process you will know whether it was successful or not.

4. Does the master cache metadata read-only, or does it update the data in RAM and flush it to disk periodically?
[MB] The changes to metadata are saved to changelogs continuously on the fly.

5. Are the changelogs based on metadata.mfs.back or on the data in RAM?
[MB] We can consider them as based on RAM.

6. Can I change the default dumping behaviour for master metadata from once per hour to a user-defined value, such as 30 min?
[MB] As far as I remember, yes you can.

7. How can we make sure no data is lost at any time with an HA (Keepalived) + DRBD + MFS solution?
[MB] We have not done such tests. Please first check the solution given in question 1.

8. The mfsmetalogger didn't reconnect automatically even though I had stopped and restarted the mfsmaster; can it only be done manually?
[MB] It should have reconnected automatically. What was your environment?

9. Which option in mfsmaster.cfg controls the rotation of the changelog? Is it according to size, or each time mfsmaster dumps the metadata?
[MB] Please see "man mfsmaster.cfg" - you should have all the options listed there.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

Any help will be appreciated. Looking forward to your answer.

Best regards
Bob Lin
|
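Regarding questions 6 and 9, the knobs I am aware of in the 1.6.x configuration files are shown below; the values are the usual defaults, so check man mfsmaster.cfg and man mfsmetalogger.cfg for your version rather than taking this as authoritative:

# mfsmaster.cfg - how many rotated hourly changelog files the master keeps
BACK_LOGS = 50

# mfsmetalogger.cfg - how often (in hours) the metalogger downloads a full
# metadata image, in addition to the continuous changelog stream
META_DOWNLOAD_FREQ = 24
BACK_LOGS = 50

As far as I can tell, changelog rotation happens together with the hourly metadata dump rather than by size, which is why the answer to question 9 points at the man page instead of a size-based setting.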