From: WK <wk...@bn...> - 2012-06-16 01:51:12
One of our MFS clusters is now running 1.6.25 with three chunkservers, each with a single drive and a goal of 3. chunkserver1 started exhibiting some smartd errors, so we added a fourth chunkserver (chunkserver4), marked the single drive in chunkserver1 as 'marked for removal', and restarted the chunkserver process.

The CGI immediately reflected the change, moving chunkserver1 into the 'marked for removal' column, and we can see that MFS is actively replicating new chunks onto chunkserver4 at the rate set in mfsmaster.cfg (2/10), as it should. However, it is also deleting chunks from the now-deprecated chunkserver1, and since the chunk deletion rate is higher than the replication rate, our undergoal count is climbing in the 'all' view. So over time we find ourselves closer to what would have happened if the chunkserver/hard drive had simply failed completely. We have verified that the number of chunks in the 'marked for removal' column for chunkserver1 is declining, and in the new per-chunkserver chart provided by 1.6.25 we see a steady green stream of chunk deletion activity on chunkserver1.

In the past (1.6.20) chunks were not removed from a deprecated disk, only copied. Has the behaviour changed in 1.6.25, or is the CGI not reflecting what is really going on? If this is the intended behaviour, it seems wrong, because it causes double work: since chunks are being deleted faster than they are replicated, many chunks will have to be repaired later when the replication process catches up, leaving us somewhat more vulnerable to an additional chunkserver outage as the gap grows. Furthermore, all the chunk deletion activity on the deprecated chunkserver seems like wasted effort. It's already marked for removal. Why bother deleting?

-bill
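For reference, the replication and deletion rates mentioned above come from mfsmaster.cfg. The excerpt below is only a sketch of the relevant knobs, assuming the option names from the 1.6.x sample config (CHUNKS_WRITE_REP_LIMIT, CHUNKS_READ_REP_LIMIT, CHUNKS_DEL_LIMIT, CHUNKS_LOOP_TIME); the values shown are illustrative, not defaults. Keeping the deletion limit at or below the write-replication limit is one way to stop deletions from outpacing re-replication:

```
# mfsmaster.cfg -- excerpt (option names assumed from the 1.6.x sample config;
# values are examples only)

# chunks a chunkserver may replicate per maintenance loop (destination / source)
CHUNKS_WRITE_REP_LIMIT = 4
CHUNKS_READ_REP_LIMIT = 10

# chunks that may be deleted per loop; keep this at or below CHUNKS_WRITE_REP_LIMIT
# so re-replication keeps pace with removal of the deprecated disk
CHUNKS_DEL_LIMIT = 4

# seconds between chunk maintenance loops
CHUNKS_LOOP_TIME = 300
```

The master has to be reloaded or restarted to pick up new limits.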
From: Ben B. <ben...@gm...> - 2012-06-15 10:55:57
I have a log string from mfsmount: mfsmount[2089]: file: 594358, index: 0, chunk: 363698. How do I get the real file name (i.e. /mnt/mfs/FILENAME.EXT) from this string?
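The file number in that log line is the MooseFS inode, so one way to map it back to a path — a sketch assuming the filesystem is mounted at /mnt/mfs and that a full tree walk is acceptable — is a plain find by inode:

```sh
# 594358 is the inode from the log line above; adjust the mount point as needed
find /mnt/mfs -inum 594358
```

If nothing turns up, the file may already have moved to the trash, which is only visible through the separately mounted meta filesystem (mfsmount's meta mode).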
From: Laurent W. <lw...@hy...> - 2012-06-15 09:07:39
Moreover, I strongly encourage you to use the packages available in the rpmforge repository.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Deon C. <deo...@gm...> - 2012-06-14 23:56:33
Chunks that do get deleted start off in the goal 0 / valid 1 cell, but I can see that they get moved to the goal 0 / valid 0 area in the grid, and then they get removed. These chunks with goal 0 and 1 valid copy have been there for almost a month now, and the number seems to grow as well.

I am sure that the underlying file system is not having problems deleting chunks; other chunks are being deleted. I am unsure how to track these chunks down manually. If I could delete them manually, I would. Is there a way to correlate the chunk file with the goal?

Deon
From: Scott D. <sd...@cl...> - 2012-06-14 21:40:47
You need the FUSE development libraries in order to build mfsmount:

    yum install fuse-devel

On Tue, Jun 12, 2012 at 7:45 PM, pete wituszynski <pe...@re...> wrote:
> I'm pretty new to the linux game but I wanted to give mfs a whirl in place of DFS-R. I followed the instructions on the quickstart guide on a CentOS 6.2 box and everything works great except mfsmount wasn't working. [...]
> Pete
From: Florent B. <fl...@co...> - 2012-06-14 08:31:55
Hi,

I think these are pending-deletion chunks, no? Chunks that are waiting to be deleted on a chunkserver. If they stay in this state for a long time, maybe the CS is having trouble deleting them on disk?

On 14/06/2012 06:01, Deon Cui wrote:
> in my MFS setup I have approx 6930 chunks which are displaying as goal of 0 but there are 1 valid copies of. How do I purge them? What are they? [...]
From: Deon C. <deo...@gm...> - 2012-06-14 04:01:26
Hi Moosemates,

In my MFS setup I have approximately 6930 chunks which display a goal of 0 but have 1 valid copy. How do I purge them? What are they? I have emptied the trash and they never seem to go away...

Deon
From: Olivier T. <Oli...@lm...> - 2012-06-13 13:52:29
Hi Karl,

On 11/06/2012 18:45, Karl Zander wrote:
> Thanks for the fast response. I'll give it a shot but there must be a way that isn't so time consuming. My moosefs is 500TB and I'd imagine this operation could take days or a week. [...]

As far as I understand, the logs on the mfsmaster show each missing chunk immediately followed by the files that are stored in that chunk. The logs on my mfsmaster look like this:

Jun 12 09:29:09 manganese mfsmaster[18364]: currently unavailable chunk 00000000003DDE38 (inode: 5122921 ; index: 0)
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu-lmpt/75/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu-lmpt/0/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu-lmpt/49/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu-lmpt/56/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu-lmpt/64/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu/0/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu/6/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/pc/ubuntu/8/f%252f/fusr/fshare/fxfig/fLibraries/fFlowchart/foffline_storage.fig
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 5122921: 2copies/backuppc/cpool/9/b/f/9bf659facce54906ee6f3dd1b1794e5e
Jun 12 09:29:09 manganese mfsmaster[18364]: currently unavailable chunk 00000000000A1073 (inode: 928652 ; index: 0)
Jun 12 09:29:09 manganese mfsmaster[18364]: * currently unavailable file 928652: 3copies/home_etu/20900601t/.dbus/session-bus/0ddb87ec776a0a55b80e2b184c09165c-1

So you can see two missing chunks here, 00000000003DDE38 and 00000000000A1073, with their inode numbers, 5122921 and 928652 respectively.

What I did in such a case was simply remove those files (I know they have been stored in other chunks lately). Before deletion, I set the trashtime to 1 second so the files don't even stay in the trash space. An automated way could then be something like:

#!/bin/bash
# Go to the mfs root
cd /mfsroot/

for f in `grep "currently unavailable file" mfsmaster.log | cut -d ' ' -f 11-`; do
    mfssettrashtime 1 $f
    rm -f $f
done

It may be necessary to escape some characters in the file name (such as spaces, for example). Before doing this, I would suggest making sure that every chunkserver is online and every disk/partition of each chunkserver is available, so that the listed missing files are really missing from the moosefs space.

Comments on this way of doing things are welcome. It worked very nicely for me, but I am not sure it's the best way.

Regards,

Olivier

--
Olivier THIBAULT
Université François Rabelais - UFR Sciences et Techniques
Laboratoire de Mathématiques et Physique Théorique (UMR CNRS 7350)
Service Informatique de l'UFR
Parc de Grandmont, 37200 Tours - France
Email: olivier.thibault at lmpt.univ-tours.fr
Tel: (33)(0)2 47 36 69 12
Fax: (33)(0)2 47 36 70 68
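A whitespace-safe variant of the loop above — a sketch under the same assumptions (the log format shown, the MooseFS root mounted at /mfsroot, and the master log available at /var/log/mfsmaster.log) — reads the paths line by line instead of relying on shell word-splitting:

```sh
#!/bin/bash
# Remove files reported as "currently unavailable" by mfsmaster, tolerating
# spaces in file names. Paths in the log are relative to the MooseFS root.
cd /mfsroot/ || exit 1

grep "currently unavailable file" /var/log/mfsmaster.log \
  | sed 's/.*currently unavailable file [0-9]*: //' \
  | while IFS= read -r f; do
      mfssettrashtime 1 "$f"   # expire the trash entry almost immediately
      rm -f -- "$f"
    done
```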
From: Karl Z. <pv...@gm...> - 2012-06-13 13:37:43
Hi Olivier,

Thanks for the response. The solution Chris gave of walking through the moose files worked, and I was able to locate and remove all the files quite quickly, actually. The next time I run into this (which I hope is never) I will have two great and easy ways to take care of it. Thank you both so much for your help.

Cheers,

Karl

2012/6/13 Olivier Thibault <Oli...@lm...> wrote:
> [...]
From: pete w. <pe...@re...> - 2012-06-13 00:17:33
Hiya!

I'm pretty new to the Linux game, but I wanted to give mfs a whirl in place of DFS-R. I followed the instructions in the quickstart guide on a CentOS 6.2 box and everything works great, except that mfsmount wasn't working. On closer inspection of the ./configure output, mfs is telling me that I don't have FUSE installed and won't build mfsmount. I followed the quickstart guide's directions on compiling FUSE, and yum even tells me that it's installed and up to date, but mfs doesn't like it.

Is there an obvious newbie trick that I'm missing here? Sorry for being vague, I really don't know what I'm doing.

Pete
From: Léon K. <ke...@st...> - 2012-06-12 19:07:22
Hi,

For a customer we're planning a storage solution that has to be redundant across two locations. I'm aware that the new 'location awareness' feature isn't what I first thought it was, namely a way to specify where exactly copies of files are stored. However, the customer still wants to have a storage solution in one location and have it instantly replicated to another, so that if location A becomes unavailable, the clients can continue working using the storage in location B. Then, when location A becomes available again, file synchronization should resume and clients should/could continue their work in location A.

I'm interested to hear if anyone has come up with a solution like this in combination with MooseFS.

kind regards,

Léon
From: Steve T. <sm...@cb...> - 2012-06-11 23:17:53
On Mon, 11 Jun 2012, Boris Epstein wrote:
> Has anybody gotten a MooseFS client on Mac OS X to mount automatically as the system boots?

Yes; I do it with a simple plist file in /Library/LaunchDaemons that executes a shell script which waits for the network to be available and then runs mfsmount.

What I can't do, however, is get MFS to work properly without I/O errors or hangs, on any OS X version from 10.5 on. Are you using OSXFUSE?

Steve
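A minimal sketch of the helper Steve describes — the paths, master host name, and mount point below are placeholders, and the script would be launched at boot from a small /Library/LaunchDaemons plist with RunAtLoad enabled:

```sh
#!/bin/sh
# Hypothetical /usr/local/sbin/mfs-automount.sh: wait for the network, then mount MooseFS.

MASTER="mfsmaster.example.com"   # placeholder master host
MOUNTPOINT="/Volumes/mfs"        # placeholder mount point

# Wait up to ~2 minutes for the master to become reachable.
i=0
until ping -c 1 -t 2 "$MASTER" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge 60 ] && exit 1
    sleep 2
done

mkdir -p "$MOUNTPOINT"
exec /usr/local/bin/mfsmount "$MOUNTPOINT" -H "$MASTER"
```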
From: Boris E. <bor...@gm...> - 2012-06-11 23:07:30
Hello listmates,

Has anybody gotten a MooseFS client on Mac OS X to mount automatically as the system boots?

Thanks.

Boris.
From: Boris E. <bor...@gm...> - 2012-06-11 21:16:32
That is true, and that is weird. I suspect it has to do with the fact that I have a router bouncing the IP packets to a local host for routing to a subnet. Thanks - your guess was correct!

Boris.

On Mon, Jun 11, 2012 at 5:10 PM, Atom Powers <ap...@di...> wrote:
> It sounds to me like your client can talk to the meta-master but can't talk to the chunk-servers. Have you verified that connection?
>
> [...]
From: Boris E. <bor...@gm...> - 2012-06-11 20:18:14
Hello listmates,

Here's a really strange behaviour I am seeing. I have a CentOS 6-based MooseFS installation with, at this point, CentOS and Ubuntu clients using it. One Ubuntu client mounts it just fine and traverses directories at lightning speed, but then hangs on trying to read actual data.

Something like:

    more .profile

which is just a request to output a small text file, sits there for minutes at a time but never returns a thing.

Does anybody know what this may be about?

Thanks.

Boris.
From: Chris P. <ch...@ec...> - 2012-06-11 17:05:43
Hi Karl

I use the following command to find files which have 0-copy chunks:

    find /mnt/mfs/root/ -type f | xargs mfscheckfile | grep -B1 ^0 | grep :$ | sed 's/:$//g'

In my case, I have mounted the root of mfs on /mnt/mfs/root.

If the above finds no files, they have to be in the trash (afaik).

Chris

On 2012/06/11 3:47 PM, Karl Zander wrote:
> I'm hoping someone has experienced this and can help me out. I have a fairly large Moose and I lost 2 nodes due to hardware failure. [...]
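For trees that contain file names with spaces, a null-delimited variant of the same pipeline (a sketch, assuming the same /mnt/mfs/root mount point and an xargs that supports -0) avoids word-splitting while keeping the rest of the command unchanged:

```sh
find /mnt/mfs/root/ -type f -print0 \
  | xargs -0 mfscheckfile \
  | grep -B1 '^0' | grep ':$' | sed 's/:$//g'
```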
From: Karl Z. <pv...@gm...> - 2012-06-11 16:45:15
Hi Chris,

Thanks for the fast response. I'll give it a shot, but there must be a way that isn't so time-consuming. My moosefs is 500TB and I'd imagine this operation could take days or a week. I'll get it started now and report back with my findings.

Regards,

Karl

On Jun 11, 2012 12:39 PM, "Chris Picton" <ch...@ec...> wrote:
> I use the following command to find files which have 0 copy chunks:
> [...]
From: Karl Z. <pv...@gm...> - 2012-06-11 13:48:06
Hi MooseFS Users,

I'm hoping someone has experienced this and can help me out. I have a fairly large Moose and I lost 2 nodes due to hardware failure. I now have some 30,000 missing chunks on my moosefs and I have no idea how to clear them. I've tried recursively checking each file with mfsfileinfo and I can't seem to find the missing chunks/files. I have files listed on the web interface that are missing, but I've already deleted them and the trash time has already been exceeded. I'm finding this to be very inefficient. Isn't there an easier way to delete chunks that are offline and never will be online again?

Also, is there a better way to query the moose besides the web interface? I'd like to be able to call this information from scripts, and the mfs* commands don't give me everything I need. I appreciate any help! Thanks.

MooseFS version 1.6.20-8

Karl
From: Boris E. <bor...@gm...> - 2012-06-08 14:59:42
Hello listmates,

If I am trying to put my master and meta servers on different machines - is there a canonical way of doing so?

Thanks.

Boris.
From: Michal B. <mic...@co...> - 2012-06-08 09:10:34
Hi!

Yes, we have thought about using SSD for storing metadata - either directly by MooseFS (e.g. less-used data on SSD, often-used data in RAM) or by letting the kernel handle the swapping onto SSD. Unfortunately we have not implemented it yet and cannot tell when it will happen.

Kind regards
Michał Borychowski
MooseFS Support Manager

-----Original Message-----
From: jyc [mailto:mai...@gm...]
Sent: Wednesday, June 06, 2012 6:44 PM
To: moo...@li...
Subject: Re: [Moosefs-users] ssd for metadatas ?

yes I agree with you. but above 64GB of RAM, you need a server that can handle more... and it costs much more...
for example: OCZ RevoDrive 3 PCI-Express 240 GB in France => 500 euros; an 8GB DIMM = from 50 to 120 euros (don't know if ECC). For 500 euros, I can have 10x 8GB => 80GB of el cheapo memory...
I think SSD is one of the futures of distributed filesystems... and I think moosefs has to think about it.

Wang Jian wrote:
> Under a normal or ideal workload, most metadata operation is 'append write' (for new files), some 'overwrite' (for file updates, like enlarge, delete, and relocate). This is OK for SSD.
> However, under a workload that modifies files a lot, it may be problematic.
> BTW, 64GB RAM alone is cheap. 8GB ~= 100 - 150 USD. It's expensive when you need 64GB x N and N is big.
>
> On 2012/6/6 22:44, Boris Epstein wrote:
>> On Wed, Jun 6, 2012 at 10:06 AM, jyc <mai...@gm...> wrote:
>>
>>     Hi every one !
>>     Hi michal !
>>     will there be in the near future a way to store the metadata on a device like an SSD, instead of putting it into memory? having 64GB of RAM or more isn't easily available (even nowadays), but having 120 GB of SSD (mirrored) on a pci-e bus is much larger and costs a lot less. but maybe slower...
>>     anyone with an opinion is welcome :-)
>>
>> I believe this may not be a bad idea. One thing that would trouble me is that SSD drives apparently don't like to do lots of writes - that causes their life expectancy to go down significantly. That's what I've heard.
>>
>> Boris.
From: Boris E. <bor...@gm...> - 2012-06-08 03:36:41
Hello listmates,

I tried setting up fairly massive (4TB and 20TB) MooseFS installations on server-class RAIDed machines, using VMs for my master and meta servers. In both cases I experienced the following:

1) Uneven performance.
2) Freezing (i.e., about 5 seconds before the first byte would be read).
3) I/O errors even though the underlying XFS file systems were fine.

The software used was CentOS 6.2 and VirtualBox 4.1.something for virtualization.

What would you think it was? I would really welcome any opinions, especially from people who have tried something along the same lines. Personally, I am inclined to think that delays associated with virtualization were to blame. Any other thoughts?

Thanks.

Boris.
From: wkmail <wk...@bn...> - 2012-06-07 17:14:48
On 6/6/12 6:18 PM, Robert Sandilands wrote:
> What I found on Linux is that the problem is related to swap + total RAM. mfsmaster does a fork() whenever it does the dump of the metadata. [...]

Well, I followed your excellent thread on the issue last year, and we actually checked for that earlier and made sure our swap space was 1.5x the RAM used on each of the MFS clusters we admin. We still have the issue.

If the mfsmaster process usage exceeds 50-60% of available RAM AND the previous hour's operations exceed some threshold, creating a huge changelog, we get the cutout at the top of the hour. If it's quiet (such as in the evening) then we never see it. In our case, we see the second mfsmaster process running, writing out the temp file and consuming lots of CPU.

I suspect it's really a tuning issue, but because our largest MFS process only uses 5GB of RAM, we've been able to easily double the memory from parts lying around, so the problem has not merited more attention. Eventually I intend to deliberately load a cluster and play around with schedulers, pages, etc. to see what works best (unless anyone on the list has any immediate suggestions). I'd also like to build a Super Master with ultra-fast RAM and SSD disks to see how much of a perceptible performance gain we would get, but we tend to use older-generation servers coming out of production for MFS components, and as long as they have lots of memory they work great.

> We use around 53GB of RAM for 155 million files.

Hmm, that's much closer to the suggested sizing in the FAQ (though still somewhat higher than the 300MB per 1 million files suggested; more like 350MB per 1 million). We are using very small files, a goal of 3, and an extremely long trashtime of 72 hours (IMAP users do really silly things at times <GRIN>), so one of those parameters must account for our higher usage per file.

-bill
From: Robert S. <rsa...@ne...> - 2012-06-07 01:18:40
> On top of that, we have found that if the mfsmaster RAM usage exceeds 50% of the available RAM, and it's a really busy cluster (lots of ops per hour), you can get mount connection losses for a minute or two at the top of the hour when it processes the change log and dumps that data into a temp file.

What I found on Linux is that the problem is related to swap + total RAM. mfsmaster does a fork() whenever it dumps the metadata. If the sum of the swap space + RAM is less than 2 x the size of the application, the fork() fails and the dump of the metadata is done in the same process that is handling the requests from mfsmount. This leads to the timeouts. We fixed it by increasing the amount of swap. It does not actually use the swap, but the Linux kernel does this check during fork().

We use around 53GB of RAM for 155 million files.

Robert
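Since the fork() failure comes from the kernel's overcommit accounting rather than from actual memory use, the workaround Robert describes amounts to adding swap that will normally sit idle. A sketch, assuming a Linux host and a file-backed swap area; the 64 GiB size is illustrative and should simply exceed the mfsmaster resident size comfortably:

```sh
# Create and enable a 64 GiB swap file (size is an example)
dd if=/dev/zero of=/swapfile bs=1M count=65536   # or: fallocate -l 64G /swapfile on ext4
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it persistent across reboots
echo '/swapfile  none  swap  sw  0  0' >> /etc/fstab
```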
From: wkmail <wk...@bn...> - 2012-06-07 01:10:26
On 6/6/2012 12:22 PM, mARK bLOORE wrote:
> http://www.moosefs.org/reference-guide.html says "The master server should have approximately 300 MiB of RAM allocated to handle 1 million files on chunkservers."
> Are those files as seen by clients, or do all the copies count separately? Or perhaps it is the number of chunk files that counts?
>
> I have nearly a billion (and growing) files to store, mostly just a few kilobytes in size, with a goal of at least 2, so is my RAM requirement 300 GiB or 600 GiB? (Either way, yikes!)

We have found that calculation to be too optimistic (at least for small files). A small MFS cluster we have here doing small files (IMAP Maildirs) supports almost 3.9 million files (actual files as seen by the OS mounting the MFS mount) in 10+ million chunks and uses about 1.8 GB of RAM. We have a goal of 3. I checked a few other clusters and they have the same ratios. So a more conservative sizing would be about 500MB of RAM per 1 million files.

On top of that, we have found that if the mfsmaster RAM usage exceeds 50% of the available RAM, and it's a really busy cluster (lots of ops per hour), you can get mount connection losses for a minute or two at the top of the hour when it processes the change log and dumps that data into a temp file. So now you are at 1GB per 1 million files.

-bill
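Plugging the roughly 500 MB-per-million-files figure above into the original question gives a quick back-of-the-envelope check (assuming the ratio holds at that scale and applying the 50% headroom rule from the previous paragraph):

```sh
# ~1000 million files at ~500 MB of master RAM per million files
echo "$(( 1000 * 500 / 1024 )) GiB resident"        # ~488 GiB for the mfsmaster process
echo "$(( 2 * 1000 * 500 / 1024 )) GiB total RAM"   # ~976 GiB with 50% headroom
```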
From: Davies L. <dav...@gm...> - 2012-06-07 00:34:11
MooseFS is not suitable for TOO MANY small files. You should try:

1. TFS from taobao.com (http://code.taobao.org/p/tfs/src/)
2. beansdb & beanseye from douban.com (http://github.com/douban/)
3. Any other blob or K-V storage designed for large scale (Riak, Mongo ...)

On Thu, Jun 7, 2012 at 3:22 AM, mARK bLOORE <mb...@gm...> wrote:
> http://www.moosefs.org/reference-guide.html says "The master server should have approximately 300 MiB of RAM allocated to handle 1 million files on chunkservers."
> [...]

--
- Davies