From: Jun C. P. <jun...@gm...> - 2010-12-23 21:25:21
|
Hi, Using nfs (v4 on CentOS 5.5) on top of mfs is definitely silly. However, some applications that I plan to use can run only with nfs, so I tried to export the mfs mount point as an nfs export. For simplicity, right now I am not using any iptables rules on any node. On the nfs server (10.1.2.22, which is also an mfs client): # cat /etc/exports /mnt/mfs 10.*(rw,no_root_squash) # df | grep mfs mfs#10.1.2.22:9421 on /mnt/mfs type fuse (rw,allow_other,default_permissions) On the nfs client (10.1.2.24), I got the following error: # mount -t nfs 10.1.2.22:/mnt/mfs /mnt/nfs_mfs/ mount: 10.1.2.22:/mnt/mfs failed, reason given by server: Permission denied # tail /var/log/messages Dec 23 14:23:23 host1a mountd[17279]: authenticated mount request from 10.1.2.24:805 for /mnt/mfs (/mnt/mfs) Dec 23 14:23:23 host1a mountd[17279]: Cannot export /mnt/mfs, possibly unsupported filesystem or fsid= required Does anyone know a workaround? Thanks, -Jun |
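The mountd message itself points at the usual cause: the kernel NFS server cannot derive a stable filesystem id for a FUSE mount, so the export needs an explicit fsid= option. A sketch of the first thing to try (the fsid value is arbitrary, it just has to be unique among this server's exports):

# cat /etc/exports
/mnt/mfs 10.*(rw,no_root_squash,fsid=1)
# exportfs -ra

Whether knfsd of that era will re-export a FUSE filesystem cleanly even with fsid set is a separate question, so treat this as the standard starting point rather than a guaranteed fix.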
From: Thomas S H. <tha...@gm...> - 2010-12-23 16:16:08
|
I am now the maintainer for the ArchLinux Moosefs packages, the previous maintainer was very gracious in allowing me to take them over. I am using MooseFS on ArchLinux in my QA environment and Ubuntu in my production environment until we can finish moving it to ArchLinux. These packages are not split yet, but I will have a split package up in the next couple of weeks, enjoy! https://aur.archlinux.org/packages.php?ID=23742 -Thomas S Hatch |
From: jose m. <let...@us...> - 2010-12-22 19:30:07
|
On Wed, 22-12-2010 at 08:14 +0100, Michal Borychowski wrote: > > Yes, we are aware about this. We have plans to change the operations priorities. > > * Priority is simply needed for the files that are left with only one valid copy, over those that still have more than one valid copy. * The performance priority is no problem; the time a full rebalance takes after a new chunkserver is added is not the problem. The number of missing copies is the problem, i.e. the time to regain "fault-tolerant" status. * The percentage of occupation of the chunkservers is a passing situation and trivial (IMO). * Thank you. |
From: Thomas S H. <tha...@gm...> - 2010-12-22 17:54:00
|
Thats good, the file perms can get tricky On Wed, Dec 22, 2010 at 10:50 AM, Steve <st...@bo...> wrote: > Thomas, > > Yes different owner. Ive changed the ownership and restarted. > > This wasn't reported before and these files are 2008. > > > > > *-------Original Message-------* > > *From:* Thomas S Hatch <tha...@gm...> > *Date:* 22/12/2010 15:33:18 > *To:* Steve <st...@bo...> > *Subject:* Re: [Moosefs-users] 1.6.19 > > > My guess would be a permissions issue, is the chunkserver running as the > same user as who own the files? > On Dec 22, 2010 4:24 AM, "Steve" <st...@bo...> wrote: > > > > Hi, > > > > > > > > Just upgrading, first chunkserver ok, then this on the 2nd > > > > > > > > root@chunk2:~# mfschunkserver > > > > working directory: /usr/local/var/mfs > > > > lockfile created and locked > > > > initializing mfschunkserver modules ... > > > > hdd space manager: scanning folder /mnt/disk1/ ... > > > > access to file: /mnt/disk1/02/chunk_0000000000003202_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/02/chunk_0000000000002D02_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/42/chunk_0000000000003142_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/42/chunk_0000000000003042_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/42/chunk_0000000000002E42_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/52/chunk_0000000000002E52_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/62/chunk_0000000000002F62_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/62/chunk_0000000000003262_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/62/chunk_0000000000002B62_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/92/chunk_0000000000002F92_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/92/chunk_0000000000002B92_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/A2/chunk_00000000000031A2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/A2/chunk_00000000000030A2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/B2/chunk_0000000000002FB2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/B2/chunk_0000000000002EB2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/D2/chunk_00000000000030D2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/E2/chunk_0000000000002BE2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/F2/chunk_0000000000002DF2_00000001.mfs: EACCES > > (Permission denied) > > > > access to file: /mnt/disk1/F2/chunk_0000000000002CF2_00000001.mfs: EACCES > > (Permission denied) > > > > hdd space manager: scanning complete > > > > hdd space manager: /mnt/disk1/: 87764 chunks found > > > > hdd space manager: scanning complete > > > > main server module: listen on *:9422 > > > > stats file has been loaded > > > > mfschunkserver daemon initialized properly > > > > > > > > Whats the problem/fix ? bad disk ? > > > > > > > > Steve > > > > > ------------------------------------------------------------------------------ > > Forrester recently released a report on the Return on Investment (ROI) of > > Google Apps. They found a 300% ROI, 38%-56% cost savings, and break-even > > within 7 months. 
Over 3 million businesses have gone Google with Google > Apps: > > an online email calendar, and document program that's accessible from > your > > browser. Read the Forrester report: http://p.sf.net/sfu/googleapps-sfnew > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > [image: FREE Christmas Animations for your email – by IncrediMail! Click > Here!] <http://www.incredimail.com/?id=613056&rui=41714715&sd=20101222> > |
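As the thread above concludes, those EACCES errors usually mean the chunk files are owned by a different account than the one mfschunkserver runs as. A minimal sketch of the fix, assuming the daemon runs as user and group "mfs" (adjust to whatever account your install actually uses):

# chown -R mfs:mfs /mnt/disk1
# mfschunkserver restart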
From: Steve <st...@bo...> - 2010-12-22 12:24:26
|
Hi, Just upgrading, first chunkserver ok, then this on the 2nd root@chunk2:~# mfschunkserver working directory: /usr/local/var/mfs lockfile created and locked initializing mfschunkserver modules ... hdd space manager: scanning folder /mnt/disk1/ ... access to file: /mnt/disk1/02/chunk_0000000000003202_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/02/chunk_0000000000002D02_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/42/chunk_0000000000003142_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/42/chunk_0000000000003042_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/42/chunk_0000000000002E42_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/52/chunk_0000000000002E52_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/62/chunk_0000000000002F62_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/62/chunk_0000000000003262_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/62/chunk_0000000000002B62_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/92/chunk_0000000000002F92_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/92/chunk_0000000000002B92_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/A2/chunk_00000000000031A2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/A2/chunk_00000000000030A2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/B2/chunk_0000000000002FB2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/B2/chunk_0000000000002EB2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/D2/chunk_00000000000030D2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/E2/chunk_0000000000002BE2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/F2/chunk_0000000000002DF2_00000001.mfs: EACCES (Permission denied) access to file: /mnt/disk1/F2/chunk_0000000000002CF2_00000001.mfs: EACCES (Permission denied) hdd space manager: scanning complete hdd space manager: /mnt/disk1/: 87764 chunks found hdd space manager: scanning complete main server module: listen on *:9422 stats file has been loaded mfschunkserver daemon initialized properly Whats the problem/fix ? bad disk ? Steve |
From: Michal B. <mic...@ge...> - 2010-12-22 07:45:09
|
Hi Josef! Thanks for your interest in MooseFS! Our experience tells us that the best algorithm for this is simply "random". Things like "load" or "network bandwidth" may change very quickly, and choosing a chunkserver based on them may bring the opposite result. But of course if you want to experiment we'd gladly hear your observations. Creating a new chunk: at the beginning of the "chunk_multi_modify" function there is: servcount = matocsserv_getservers_wrandom(ptrs,goal); The "matocsserv_getservers_wrandom" function writes into the "ptrs" array the servers on which the new chunk should be created and returns their number. The "goal" parameter tells how many servers should be inserted into the array. The returned number of servers is typically equal to the "goal" parameter, but if too few servers are available the value will be lower and the "ptrs" array will contain all the available servers. Servers for clients are returned, in order, by the "chunk_getversionandlocations" function. Generally speaking this function returns the servers in random order; the one exception is that the IP of the client's machine is put at the first position (if the client is also a chunkserver). In the future we will add a more complicated algorithm here to take "rack/switch awareness" into account. For the read process the client uses the first server, and for the write process it sets up the chain in the given order. While reading, the client can switch to the next server by itself: it checks how many operations it has in progress with each chunkserver and may choose the one with the smallest number of operations. I hope this helps and that you have some interesting experiments with the code. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Josef [mailto:pe...@p-...] Sent: Monday, December 13, 2010 2:02 PM To: moo...@li... Subject: [Moosefs-users] process planing Hello, I've been studying the moosefs source code to find the planning mechanism for where to place new chunks, but due to a lack of comments it's quite difficult. Could someone recommend where to search? The same interests me for reading: if there are multiple copies, which one does the client choose to read from? An ideal planning mechanism should take into account factors such as chunkserver load, disk usage, available network bandwidth and so on. The topic of my dissertation thesis at the CTU in Prague is process planning in distributed systems, so if there is a lack of such a mechanism, I would be interested in helping. Josef ---------------------------------------------------------------------------- -- Oracle to DB2 Conversion Guide: Learn about native support for PL/SQL, new data types, scalar functions, improved concurrency, built-in packages, OCI, SQL*Plus, data movement tools, best practices and more. http://p.sf.net/sfu/oracle-sfdev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michal B. <mic...@ge...> - 2010-12-22 07:20:50
|
Hi! We think that the best solution would be creating a dedicated module in HTTPD for MooseFS. But for now we cannot tell when we could prepare it. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: 丁赞 [mailto:di...@ba...] Sent: Wednesday, December 01, 2010 5:05 AM To: moo...@li... Subject: [Moosefs-users] about mfs_mount client performance Hi all, FUSE is an accept+fork model, which means that if N clients (500+ httpd threads in our environment) read data from mfs through one mfs_mount simultaneously, FUSE can create N threads to handle the requests. This mechanism spends much more time on context switches, which made the apache response time much longer in our environment. So, has anyone tried to improve the fuse or mfs_client performance? Would using a thread pool or a user-level cache work? BTW: I am a new guy here :) Thanks all DingZan @baidu.com |
From: Michal B. <mic...@ge...> - 2010-12-22 07:14:44
|
Hi Jose! Yes, we are aware about this. We have plans to change the operations priorities. Kind regards Michal -----Original Message----- From: jose maria [mailto:let...@us...] Sent: Monday, December 20, 2010 7:38 PM To: moosefs-users Subject: [Moosefs-users] Priority recovering goals * I have been doing some tests, provoking mistakes in a cluster and I have observed a behavior that should be corrected or be improved. * I have stopped a chunkserver and have provoked mistakes on discs and as a result, 600.000 files marked with Goal 3 have stayed with 2 valid copies and 15.000 files marked with Goal 2 have stayed with 1 valid copy, the case is that the valid copies have been regenerated in equal proportion in both cases. The result is that for 10 hours that has taken in provide valid copies, the cluster is vulnerable to loss of information before a disc mistake, I believe that of being possible it would be necessary to implement an absolute priority in the regenerate of copies cost those files that stay with 1 valid copy. * ¿Is this mechanism configurable in the code source? * Thank you, and again pardon for my poor English. ------------------------------------------------------------------------------ Lotusphere 2011 Register now for Lotusphere 2011 and learn how to connect the dots, take your collaborative environment to the next level, and enter the era of Social Business. http://p.sf.net/sfu/lotusphere-d2d _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Thomas S H. <tha...@gm...> - 2010-12-22 04:59:24
|
As far as I can tell, if you want to designate two disk locations you can, and this is very viable, but you would just need two mfsmaster servers and explicitly declare which one you are connecting too when you mount. As for being able to do that from just one mfsmaster, I do not believe that is possible -Tom Hatch On Tue, Dec 21, 2010 at 7:53 PM, Anh K. Huynh <ky...@vi...> wrote: > On Tue, 21 Dec 2010 16:54:02 -0700 > Jun Cheol Park <jun...@gm...> wrote: > > ... > > Now let me describe what I want in detail. For example, suppose > > that I have the following mfs-mount point using "mfsmount -H > > 10.1.2.22 -o suid -o dev -o rw -o exec /mnt/mfs." > > > > # cat /etc/mfs/mfshdd.cfg > > /home/mfs-rdvol1 > > /home/mfs-rdvol2 > > So "/home/mfs-rdvol{1,2}/" will be used by mfsmaster/chunk server (to store > data) and they are *transparent* to end users (to clients). You can write > data directory to /home/mfs-rdvol{1,2}. > > > ... > > However, I want to let /mnt/mfs points out only to /home/mfs-rdvol1 > > while /mnt/mfs2 uses /home/mfs-rdvol2, illustrated below as an > > example. > > > > # df -h | grep mfs > > mfs#10.1.2.22:9421 24T 399G 24T 2% /mnt/mfs > > mfs#10.1.2.22:?? 10T 1T 10T 10% /mnt/mfs2 > > > > However, I got the following error when I tried to add a separate > > mount point: > > > > # mfsmount /mnt/mfs2 -H 10.1.2.22 -S /home/mfs-rdvol2 -o suid -o dev > > -o rw -o exec > > mfsmaster register error: No such file or directory > > Because "/home/mfs-rdvol2" isn't a exported directory. If you want to mount > "/foo/bar/" from the mfsmaster, that directory should be mentioned in > "mfsexports.cfg". > > > How can I get a separate mfs-mount point (/mnt/mfs2) that sits on a > > different set of hdd drives? > > I have no idea why you need that. You can remove the second disk > (/home/mfs-rdvol2) from the current master, and install another > mfsmaster/mfschunk servers that use "/home/mfs-rdvol2" -- but this way is > too complex. I mean you would have two different setup of MooseFS. > > Regards, > > -- > Anh Ky Huynh at UTC+7 > > > ------------------------------------------------------------------------------ > Forrester recently released a report on the Return on Investment (ROI) of > Google Apps. They found a 300% ROI, 38%-56% cost savings, and break-even > within 7 months. Over 3 million businesses have gone Google with Google > Apps: > an online email calendar, and document program that's accessible from your > browser. Read the Forrester report: http://p.sf.net/sfu/googleapps-sfnew > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Anh K. H. <ky...@vi...> - 2010-12-22 03:01:03
|
On Wed, 22 Dec 2010 09:53:38 +0700 "Anh K. Huynh" <ky...@vi...> wrote: > You can write data directory to /home/mfs-rdvol{1,2}. Oops my English.. I meant "you can't write directly to /home/mfs-*" -- Anh Ky Huynh at UTC+7 |
From: Anh K. H. <ky...@vi...> - 2010-12-22 02:53:57
|
On Tue, 21 Dec 2010 16:54:02 -0700 Jun Cheol Park <jun...@gm...> wrote: > ... > Now let me describe what I want in detail. For example, suppose > that I have the following mfs-mount point using "mfsmount -H > 10.1.2.22 -o suid -o dev -o rw -o exec /mnt/mfs." > > # cat /etc/mfs/mfshdd.cfg > /home/mfs-rdvol1 > /home/mfs-rdvol2 So "/home/mfs-rdvol{1,2}/" will be used by mfsmaster/chunk server (to store data) and they are *transparent* to end users (to clients). You can write data directory to /home/mfs-rdvol{1,2}. > ... > However, I want to let /mnt/mfs points out only to /home/mfs-rdvol1 > while /mnt/mfs2 uses /home/mfs-rdvol2, illustrated below as an > example. > > # df -h | grep mfs > mfs#10.1.2.22:9421 24T 399G 24T 2% /mnt/mfs > mfs#10.1.2.22:?? 10T 1T 10T 10% /mnt/mfs2 > > However, I got the following error when I tried to add a separate > mount point: > > # mfsmount /mnt/mfs2 -H 10.1.2.22 -S /home/mfs-rdvol2 -o suid -o dev > -o rw -o exec > mfsmaster register error: No such file or directory Because "/home/mfs-rdvol2" isn't a exported directory. If you want to mount "/foo/bar/" from the mfsmaster, that directory should be mentioned in "mfsexports.cfg". > How can I get a separate mfs-mount point (/mnt/mfs2) that sits on a > different set of hdd drives? I have no idea why you need that. You can remove the second disk (/home/mfs-rdvol2) from the current master, and install another mfsmaster/mfschunk servers that use "/home/mfs-rdvol2" -- but this way is too complex. I mean you would have two different setup of MooseFS. Regards, -- Anh Ky Huynh at UTC+7 |
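For reference, a subfolder mount only works on directories that exist inside the MooseFS namespace and are listed in mfsexports.cfg on the master; the paths in mfshdd.cfg are chunkserver-local disks and are never visible to clients. A sketch with made-up directory names (/vol1 and /vol2 would first be created under a mount of the MFS root):

# cat /etc/mfs/mfsexports.cfg
10.1.2.23-10.1.2.25 /vol1 rw,alldirs,maproot=nobody
10.1.2.23-10.1.2.25 /vol2 rw,alldirs,maproot=nobody

# mfsmount /mnt/mfs1 -H 10.1.2.22 -S /vol1
# mfsmount /mnt/mfs2 -H 10.1.2.22 -S /vol2

Both mounts still draw on the same pool of chunkserver disks; this separates the namespace, not the underlying storage.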
From: Jun C. P. <jun...@gm...> - 2010-12-21 23:54:09
|
Thanks for the comment. Somehow my question seems not clear enough to get some help. Please, let me clarify what I want. First of all, I am using mfs-1.6.15-2. > mfsmount [-h master] [-p port] [-l path] [-w mount-point] I think that the right command manual would be the following (the current manual on mfsmount has conflicting guides). There is no -l path option in mfsmount in my version. mfsmount mountpoint [-d] [-f] [-s] [-m] [-n] [-p] [-H MASTER] [-P PORT] [-S PATH] [-o OPT[,OPT...]] Now let me describe what I want in detail. For example, suppose that I have the following mfs-mount point using "mfsmount -H 10.1.2.22 -o suid -o dev -o rw -o exec /mnt/mfs." # cat /etc/mfs/mfshdd.cfg /home/mfs-rdvol1 /home/mfs-rdvol2 # cat /etc/mfs/mfsexports.cfg 10.1.2.23-10.1.2.25 /home/mfs_rdvol1 rw,alldirs,maproot=nobody,password=test1 10.1.2.23-10.1.2.25 /home/mfs_rdvol2 rw,alldirs,maproot=nobody,password=test1 # df -h | grep mfs mfs#10.1.2.22:9421 24T 399G 24T 2% /mnt/mfs However, I want to let /mnt/mfs points out only to /home/mfs-rdvol1 while /mnt/mfs2 uses /home/mfs-rdvol2, illustrated below as an example. # df -h | grep mfs mfs#10.1.2.22:9421 24T 399G 24T 2% /mnt/mfs mfs#10.1.2.22:?? 10T 1T 10T 10% /mnt/mfs2 However, I got the following error when I tried to add a separate mount point: # mfsmount /mnt/mfs2 -H 10.1.2.22 -S /home/mfs-rdvol2 -o suid -o dev -o rw -o exec mfsmaster register error: No such file or directory How can I get a separate mfs-mount point (/mnt/mfs2) that sits on a different set of hdd drives? -Jun |
From: Fabien G. <fab...@gm...> - 2010-12-21 22:47:53
|
Hi Jose, On Mon, Dec 20, 2010 at 7:37 PM, jose maria <let...@us...> wrote: > > The result is that for 10 hours that has taken in provide valid copies, > the cluster is vulnerable to loss of information before a disc mistake, > I believe that of being possible it would be necessary to implement an > absolute priority in the regenerate of copies cost those files that stay > with 1 valid copy. As Michal explained a few days ago on this list (see "Message-ID: <00d901cb9852$fefb3f90$fcf1beb0$@bor...@ge...>") : "But please remember that this process takes quite a long time (it may be 2-3 weeks). Performance is the most important thing for the system, not rebalancing." Fabien |
From: Michal B. <mic...@ge...> - 2010-12-21 21:38:17
|
Hi! You don't need any hacks, look at http://www.moosefs.org/reference-guide.html at " Clients (mfsmount) " section. There you have: MooseFS is mounted with the following command: mfsmount [-h master] [-p port] [-l path] [-w mount-point] where master is the host name of the managing server, port is the same as given in MATOCU_LISTEN_PORT in file mfsmaster.cfg, path is mounted MooseFS subdirectory (default is /, which means mounting the whole file system), mount-point is the previously created directory for MooseFS. (Or you need still another functionality?) Kind regards Michal Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Jun Cheol Park [mailto:jun...@gm...] Sent: Tuesday, December 21, 2010 10:19 PM To: moo...@li... Subject: [Moosefs-users] Is it possible to make separate mfs-mount points on different sets of hdd drives? Hi, When I list up a set of hdd drives in mfshdd.cfg, all the given hdd drives are automatically managed as single storage by mfs which, I agree, is desired in many cases. However, is there any way to make separate mfs-mount points on different storage? For instance, what if one wants to have two different mfs-mount points on the same mfs client node while the two mount points are pointing out to different sets of hdd drivers or different directories? I got this question while comparing MFS with GlusterFS that provides the capability of allowing multiple mount points to sit on different storage devices. Any comment or workaround will be greatly appreciated. -Jun ---------------------------------------------------------------------------- -- Forrester recently released a report on the Return on Investment (ROI) of Google Apps. They found a 300% ROI, 38%-56% cost savings, and break-even within 7 months. Over 3 million businesses have gone Google with Google Apps: an online email calendar, and document program that's accessible from your browser. Read the Forrester report: http://p.sf.net/sfu/googleapps-sfnew _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Jun C. P. <jun...@gm...> - 2010-12-21 21:19:25
|
Hi, When I list a set of hdd drives in mfshdd.cfg, all the given drives are automatically managed as a single storage pool by mfs, which, I agree, is desired in many cases. However, is there any way to make separate mfs-mount points on different storage? For instance, what if one wants to have two different mfs-mount points on the same mfs client node while the two mount points point to different sets of hdd drives or different directories? I got this question while comparing MFS with GlusterFS, which provides the capability of allowing multiple mount points to sit on different storage devices. Any comment or workaround will be greatly appreciated. -Jun |
From: Linux E. <lin...@gm...> - 2010-12-21 18:32:29
|
I'm researching how to set up a 2-server filesystem with complete live replication across a WAN link. The total FS size is 1TB and each server will hold a complete replica, with users on each end modifying files. Users run Windows and will access the files via Samba. For various reasons, I think MooseFS is the best solution, but I have a question regarding writes. Moose will have a goal of 2 for the entire 1TB directory. If a user on Site A saves a large file, will they have to wait until that file is saved to the remote node across the WAN link, or will MooseFS save it to the local server the user is connected to, and replicate it asynchronously in the background? |
From: jose m. <let...@us...> - 2010-12-20 18:37:46
|
* I have been doing some tests, provoking failures in a cluster, and I have observed a behavior that should be corrected or improved. * I stopped a chunkserver and provoked failures on disks, and as a result 600,000 files marked with Goal 3 were left with 2 valid copies and 15,000 files marked with Goal 2 were left with 1 valid copy; the valid copies were then regenerated in equal proportion in both cases. The result is that for the 10 hours it took to provide valid copies, the cluster was vulnerable to loss of information in the face of a disk failure. I believe that, if possible, an absolute priority should be implemented in the regeneration of copies for those files that are left with only 1 valid copy. * Is this mechanism configurable in the source code? * Thank you, and again pardon for my poor English. |
From: Thomas S H. <tha...@gm...> - 2010-12-15 22:01:20
|
It is just the way I do it, I also have everything in my cloud running in such a way that if a certain vm cluster needs more or less resources they can be dynamically and even automatically scaled. Honestly I couldn't do it without MooseFS! But if there is anything I can help you with with on KVM Clouds feel free to ask! -Thomas S Hatch On Wed, Dec 15, 2010 at 2:52 PM, Jun Cheol Park <jun...@gm...>wrote: > Thanks a lot for sharing your experience with me. I will try your steps on > MFS. > > -Jun > > On Wed, Dec 15, 2010 at 2:44 PM, Thomas S Hatch <tha...@gm...> > wrote: > > Thats good that you have made progress! Personally I am using a slightly > > different setup... > > I am creating the xml configuration files for the vms and use pre > installed, > > generic vm images, I generate them with these applications: > > ArchLinux: Varch > > fedora/redhat: thincrust > > Ubuntu/Debian: Vmbuilder > > Then I copy the vm image for an individual vm and start start it with: > > virsh create <path to xml for the vm> > > So, I never use virt-install, I also configure the vms so that they run > > puppet as soon as they start so that everything is auto configured. > > As for libvirt, the problem I think you are seeing is with the storage > pool > > mechanism in libvirt, I have never been a fan of it, which is why I use > > custom xml configs. It can cause permissions issues with vm images when > they > > are not in the default location. > |
From: Jun C. P. <jun...@gm...> - 2010-12-15 21:52:50
|
Thanks a lot for sharing your experience with me. I will try your steps on MFS. -Jun On Wed, Dec 15, 2010 at 2:44 PM, Thomas S Hatch <tha...@gm...> wrote: > Thats good that you have made progress! Personally I am using a slightly > different setup... > I am creating the xml configuration files for the vms and use pre installed, > generic vm images, I generate them with these applications: > ArchLinux: Varch > fedora/redhat: thincrust > Ubuntu/Debian: Vmbuilder > Then I copy the vm image for an individual vm and start start it with: > virsh create <path to xml for the vm> > So, I never use virt-install, I also configure the vms so that they run > puppet as soon as they start so that everything is auto configured. > As for libvirt, the problem I think you are seeing is with the storage pool > mechanism in libvirt, I have never been a fan of it, which is why I use > custom xml configs. It can cause permissions issues with vm images when they > are not in the default location. |
From: Thomas S H. <tha...@gm...> - 2010-12-15 21:44:55
|
Thats good that you have made progress! Personally I am using a slightly different setup... I am creating the xml configuration files for the vms and use pre installed, generic vm images, I generate them with these applications: ArchLinux: Varch fedora/redhat: thincrust Ubuntu/Debian: Vmbuilder Then I copy the vm image for an individual vm and start start it with: virsh create <path to xml for the vm> So, I never use virt-install, I also configure the vms so that they run puppet as soon as they start so that everything is auto configured. As for libvirt, the problem I think you are seeing is with the storage pool mechanism in libvirt, I have never been a fan of it, which is why I use custom xml configs. It can cause permissions issues with vm images when they are not in the default location. On Wed, Dec 15, 2010 at 2:39 PM, Jun Cheol Park <jun...@gm...>wrote: > It turned out that, when I used "virsh start mfskvm1," it simply > worked well, no network issue any more. Interestingly enough, when I > manually invoked qemu-kvm with the same options, I got all weird > problems (network issue, udev fails, and so on...). So in my tentative > conclusion, "virsh start domain" does internally something more than > invoking the qemu-kvm process. > > Anyway, it seems that there is a way to successfully create a KVM VM > on the native MFS although virt-manger and virt-install still do not > work well on MFS for me. > > -Jun > > On Tue, Dec 14, 2010 at 3:13 PM, Jun Cheol Park > <jun...@gm...> wrote: > > Hi, > > > > I have been trying to create a KVM guest VM on the native MFS. But it > > has not been quite successful although I getting closer.... Before > > delving into the problems, I would like to clarify some relevant > > issues. When I used a loop device on top of MFS using sparse files > > (For instance, mount -o loop /mnt/mfs/mfs_sparse_200g > > /mnt/mfs/loop0/), I didn't have any problems in using KVM on > > /mnt/mfs/loop0/). > > > > The problems that I am experiencing now are from the cases where I try > > to create KVM guest VMs directly on the native MFS. Here I hope to get > > any successful story as to how to **manually** setup a KVM guest OS on > > the native MFS without using virt-manager or virt-install because > > those tools didn't work on the native MFS. > > > > The following are the details of the problems. > > > > First I made an MFS mountpoint on a KVM node (and MFS client) as follows. > > # mfsmount -H $metadata_server_ip -o suid -o dev -o rw -o exec /mnt/mfs > > # mount > > ... > > mfs#$metadata_server_ip:9421 on /mnt/mfs type fuse > > (rw,allow_other,default_permissions) > > > > Then, create a disk image file for the guest storage. > > # qemu-img create -f qcow2 /mnt/mfs/mfskvm1.img 10G > > # virt-install --name CentOS5 --ram 1000 --disk > > path=/mnt/mfs/mfskvm1.img,size=10 --network network:default > > --accelerate --vnc --cdrom /mnt/mfs/CentOS-5.5-x86_64-bin-DVD-1of2.iso > > --os-type=linux > > > > Then, I got the following error: > > > > Starting install... > > ERROR internal error unable to start guest: qemu: could not open > > disk image /mnt/mfs/mfsvkm1.img > > > > So it seems that the given KVM management tools including virt-install > > and virt-manager are not working well directly on the native MFS > > because (I guess) there are some incompatible options used by those > > tools, which conflicts with MFS. > > > > Then, I tried to manually create a KVM on the naitve MFS as follows > > via qemu-kvm. 
> > > > # /usr/libexec/qemu-kvm -M rhel5.4.0 -m 1024 -smp 1 -name mfskvm1 > > -uuid af1be3ae-daab-d450-293b-dc03ff7c3c35 -no-kvm-pit-reinjection > > -monitor pty -pidfile /var/run/libvirt/qemu/mfskvm1.pid -no-reboot > > -boot d -drive > file=/mnt/mfs/mfskvm1.img,if=virtio,index=0,format=qcow2,cache=writethrough > > -drive > file=/mnt/mfs/CentOS-5.5-x86_64-bin-DVD-1of2.iso,if=ide,media=cdrom,index=2,format=raw,cache=writethrough > > -net nic,macaddr=12:01:77:99:00:20,vlan=0,model=e1000 -net > > tap,fd=33,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb > > -usbdevice tablet -vnc :10 & > > > > Then, it worked well! The differences from virt-manager or > > virt-install are here I use 'cache=writethrough' rather than 'none' > > and don't use '-S' as opposed to the options generated by virt-manager > > virt-install. So I could successfully installed CentOS.iso on > > mfskvm1.img, and then rebooted it, which resulted in the qemu-kvm > > process gone due to the option of -no-reboot. > > > > Then, I invoked the qemu-kvm process in order to boot up mfskvm1 using > > the installed image on mfskvm1.img as follows: > > > > # /usr/libexec/qemu-kvm -M rhel5.4.0 -m 1024 -smp 1 -name mfskvm1 > > -uuid af1be3ae-daab-d450-293b-dc03ff7c3c35 -no-kvm-pit-reinjection > > -monitor pty -pidfile /var/run/libvirt/qemu/mfskvm1.pid -boot c -drive > > > file=/mnt/mfs/mfskvm1.img,if=virtio,index=0,boot=on,format=qcow2,cache=writethrough > > -drive file=,if=ide,media=cdrom,index=2 -net > > nic,macaddr=12:01:77:99:00:20,vlan=0,model=e1000 -net > > tap,fd=34,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb > > -usbdevice tablet -vnc :10 & > > > > Everything works fine except for the problem that the ip determination > > for eth0 fails. As I mentioned, when I used virt-install on the MFS > > **loop** device, the IP determination worked well, which means my dhcp > > server properly responded. > > > > I would like to get some information how to resolve this network setup > > issue on the native MFS. > > > > Thanks in advance, > > > > -Jun > > > > > ------------------------------------------------------------------------------ > Lotusphere 2011 > Register now for Lotusphere 2011 and learn how > to connect the dots, take your collaborative environment > to the next level, and enter the era of Social Business. > http://p.sf.net/sfu/lotusphere-d2d > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Jun C. P. <jun...@gm...> - 2010-12-15 21:39:31
|
It turned out that, when I used "virsh start mfskvm1," it simply worked well, no network issue any more. Interestingly enough, when I manually invoked qemu-kvm with the same options, I got all weird problems (network issue, udev fails, and so on...). So in my tentative conclusion, "virsh start domain" does internally something more than invoking the qemu-kvm process. Anyway, it seems that there is a way to successfully create a KVM VM on the native MFS although virt-manger and virt-install still do not work well on MFS for me. -Jun On Tue, Dec 14, 2010 at 3:13 PM, Jun Cheol Park <jun...@gm...> wrote: > Hi, > > I have been trying to create a KVM guest VM on the native MFS. But it > has not been quite successful although I getting closer.... Before > delving into the problems, I would like to clarify some relevant > issues. When I used a loop device on top of MFS using sparse files > (For instance, mount -o loop /mnt/mfs/mfs_sparse_200g > /mnt/mfs/loop0/), I didn't have any problems in using KVM on > /mnt/mfs/loop0/). > > The problems that I am experiencing now are from the cases where I try > to create KVM guest VMs directly on the native MFS. Here I hope to get > any successful story as to how to **manually** setup a KVM guest OS on > the native MFS without using virt-manager or virt-install because > those tools didn't work on the native MFS. > > The following are the details of the problems. > > First I made an MFS mountpoint on a KVM node (and MFS client) as follows. > # mfsmount -H $metadata_server_ip -o suid -o dev -o rw -o exec /mnt/mfs > # mount > ... > mfs#$metadata_server_ip:9421 on /mnt/mfs type fuse > (rw,allow_other,default_permissions) > > Then, create a disk image file for the guest storage. > # qemu-img create -f qcow2 /mnt/mfs/mfskvm1.img 10G > # virt-install --name CentOS5 --ram 1000 --disk > path=/mnt/mfs/mfskvm1.img,size=10 --network network:default > --accelerate --vnc --cdrom /mnt/mfs/CentOS-5.5-x86_64-bin-DVD-1of2.iso > --os-type=linux > > Then, I got the following error: > > Starting install... > ERROR internal error unable to start guest: qemu: could not open > disk image /mnt/mfs/mfsvkm1.img > > So it seems that the given KVM management tools including virt-install > and virt-manager are not working well directly on the native MFS > because (I guess) there are some incompatible options used by those > tools, which conflicts with MFS. > > Then, I tried to manually create a KVM on the naitve MFS as follows > via qemu-kvm. > > # /usr/libexec/qemu-kvm -M rhel5.4.0 -m 1024 -smp 1 -name mfskvm1 > -uuid af1be3ae-daab-d450-293b-dc03ff7c3c35 -no-kvm-pit-reinjection > -monitor pty -pidfile /var/run/libvirt/qemu/mfskvm1.pid -no-reboot > -boot d -drive file=/mnt/mfs/mfskvm1.img,if=virtio,index=0,format=qcow2,cache=writethrough > -drive file=/mnt/mfs/CentOS-5.5-x86_64-bin-DVD-1of2.iso,if=ide,media=cdrom,index=2,format=raw,cache=writethrough > -net nic,macaddr=12:01:77:99:00:20,vlan=0,model=e1000 -net > tap,fd=33,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb > -usbdevice tablet -vnc :10 & > > Then, it worked well! The differences from virt-manager or > virt-install are here I use 'cache=writethrough' rather than 'none' > and don't use '-S' as opposed to the options generated by virt-manager > virt-install. So I could successfully installed CentOS.iso on > mfskvm1.img, and then rebooted it, which resulted in the qemu-kvm > process gone due to the option of -no-reboot. 
> > Then, I invoked the qemu-kvm process in order to boot up mfskvm1 using > the installed image on mfskvm1.img as follows: > > # /usr/libexec/qemu-kvm -M rhel5.4.0 -m 1024 -smp 1 -name mfskvm1 > -uuid af1be3ae-daab-d450-293b-dc03ff7c3c35 -no-kvm-pit-reinjection > -monitor pty -pidfile /var/run/libvirt/qemu/mfskvm1.pid -boot c -drive > file=/mnt/mfs/mfskvm1.img,if=virtio,index=0,boot=on,format=qcow2,cache=writethrough > -drive file=,if=ide,media=cdrom,index=2 -net > nic,macaddr=12:01:77:99:00:20,vlan=0,model=e1000 -net > tap,fd=34,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb > -usbdevice tablet -vnc :10 & > > Everything works fine except for the problem that the ip determination > for eth0 fails. As I mentioned, when I used virt-install on the MFS > **loop** device, the IP determination worked well, which means my dhcp > server properly responded. > > I would like to get some information how to resolve this network setup > issue on the native MFS. > > Thanks in advance, > > -Jun > |
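If the guest should be managed by libvirt permanently instead of being launched by hand, the usual pattern is to register the XML once and then start the domain by name; a sketch, with a hypothetical path for the domain definition:

# virsh define /root/mfskvm1.xml
# virsh start mfskvm1
# virsh console mfskvm1

Going through libvirt this way also lets it set up the tap device and attach the interface to the intended bridge/network, which is the part that tends to break when qemu-kvm is invoked manually.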
From: 丁赞 <di...@ba...> - 2010-12-15 01:57:24
|
In mfs_mount it is 31 times, and I think the chunkserver would use the same value. The mount reconnecting behavior would last about 30 minutes. Di...@ba... -----Original Message----- From: moo...@li... [mailto:moo...@li...] Sent: December 1, 2010 16:06 To: moo...@li... Subject: moosefs-users Digest, Vol 12, Issue 1 Send moosefs-users mailing list submissions to moo...@li... To subscribe or unsubscribe via the World Wide Web, visit https://lists.sourceforge.net/lists/listinfo/moosefs-users or, via email, send a message with subject or body 'help' to moo...@li... You can reach the person managing the list at moo...@li... When replying, please edit your Subject line so it is more specific than "Re: Contents of moosefs-users digest..." Today's Topics: 1. Re: Problem with mfsmounts inside kvm (Thomas S Hatch) 2. about mfs_mount client performance (丁赞) 3. Re: A problem of reading the same file at the same moment (Michał Borychowski) 4. Re: reset time count in trash ? (Michał Borychowski) 5. Re: how many times chunckserver will retry when disconnecting from metaserver ? (Michał Borychowski) ---------------------------------------------------------------------- Message: 1 Date: Tue, 30 Nov 2010 09:34:23 -0700 From: Thomas S Hatch <tha...@gm...> Subject: Re: [Moosefs-users] Problem with mfsmounts inside kvm To: moosefs-users <moo...@li...> Message-ID: <AAN...@ma...> Content-Type: text/plain; charset="iso-8859-1" Ignore me :) We had a problem with our routes, our vms could not see all of our chunk servers. On Tue, Nov 30, 2010 at 9:00 AM, Thomas S Hatch <tha...@gm...> wrote: > I am experiencing problems with moosefs mounts inside of kvm virtual > machines. There are a number of files which I cannot read on my mfs mount > from inside a kvm virtual machine, (the read attempts hang and then issue an > IOError). But I am confident that the files are good because they can be > read without problem from an mfsmount on a bare metal system. I am running > moosefs 1.6.18 (a pre-release) on Ubuntu 10.04. > > Please let me know if there is any additional information I can send. > > -Thomas S Hatch > -------------- next part -------------- An HTML attachment was scrubbed... ------------------------------ Message: 2 Date: Wed, 1 Dec 2010 12:05:12 +0800 From: 丁赞 <di...@ba...> Subject: [Moosefs-users] about mfs_mount client performance To: <moo...@li...> Message-ID: <086001cb910c$f40eb230$dc2c1690$@com> Content-Type: text/plain; charset="gb2312" Hi all, FUSE is an accept+fork model, which means that if N clients (500+ httpd threads in our environment) read data from mfs through one mfs_mount simultaneously, FUSE can create N threads to handle the requests. This mechanism spends much more time on context switches, which made the apache response time much longer in our environment. So, has anyone tried to improve the fuse or mfs_client performance? Would using a thread pool or a user-level cache work? BTW: I am a new guy here :) Thanks all DingZan @baidu.com -------------- next part -------------- An HTML attachment was scrubbed... ------------------------------ Message: 3 Date: Wed, 1 Dec 2010 08:53:36 +0100 From: Michał Borychowski <mic...@ge...> Subject: Re: [Moosefs-users] A problem of reading the same file at the same moment To: "'Laurent Wandrebeck'" <lw...@hy...> Cc: moo...@li... Message-ID: <000601cb912c$df7f94e0$9e7ebea0$@bor...@ge...> Content-Type: text/plain; charset="utf-8" Hi Laurent! If master doesn't take full 100% of CPU it would work even better as one thread. Generally speaking multithreading is often overestimated.
Please have a look at this article: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf Regards Micha? -----Original Message----- From: Laurent Wandrebeck [mailto:lw...@hy...] Sent: Friday, November 26, 2010 12:00 PM To: moo...@li... Subject: Re: [Moosefs-users] A problem of reading the same file at the same moment On Thu, 25 Nov 2010 10:55:03 +0100 Micha? Borychowski <mic...@ge...> wrote: > Hi! > > As written on http://www.moosefs.org/moosefs-faq.html#goal increasing goal may only increase the reading speed under certain conditions. You can just try increasing the goal, wait for the replication and see if it helps. I'm wondering if such a behaviour could be due to mfsmaster being a monothread program. Thus, in high-load cases, the master being busy answering a request kind of queues the others, being a performance bottleneck by adding latency, preventing one than one request to be worked on at the same time. Comments ? -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C ------------------------------ Message: 4 Date: Wed, 1 Dec 2010 08:59:39 +0100 From: Micha? Borychowski <mic...@ge...> Subject: Re: [Moosefs-users] reset time count in trash ? To: "'jose maria'" <let...@us...> Cc: moo...@li... Message-ID: <000701cb912d$b73dd630$25b98290$@bor...@ge...> Content-Type: text/plain; charset="utf-8" Hola Jose! Yes, that's right - changing the goal for files in trash resets their timer. We will look into it further and probably make a patch. Saludos Michal -----Original Message----- From: jose maria [mailto:let...@us...] Sent: Saturday, November 27, 2010 4:23 PM To: moo...@li... Subject: Re: [Moosefs-users] reset time count in trash ? El vie, 26-11-2010 a las 15:57 +0100, jose maria escribi?: > * I have applied for testing, to the files of the cluster a trashtime of > 12 hours, a script is executed every hour and reduces to 2 goals the > files of the trash, for one week the number of files has been increasing > in the trash and it continues without becoming stable, at present there > is 400.000.- > > * ?Is it possible that on having applied setgoal 2, reset de time count? > > * The number of files in the reply of the secondary cluster with equal > configuration of trashtime 12 hours is trivial, without execution of the > script that applies mfssetgoal 2. > * Confirmed, I have disabled the cronjob that applies mfssetgoal to the files in trash and in 12 hours the number of files has reduced from 375.000 to 1.500. ?any other idea to reduce goals of trashfiles? I need 1 week retained trashfiles ...... ---------------------------------------------------------------------------- -- Increase Visibility of Your 3D Game App & Earn a Chance To Win $500! Tap into the largest installed PC base & get more eyes on your game by optimizing for Intel(R) Graphics Technology. Get started today with the Intel(R) Software Partner Program. Five $500 cash prizes are up for grabs. http://p.sf.net/sfu/intelisp-dev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users ------------------------------ Message: 5 Date: Wed, 1 Dec 2010 09:06:11 +0100 From: Micha? Borychowski <mic...@ge...> Subject: Re: [Moosefs-users] how many times chunckserver will retry when disconnecting from metaserver ? 
To: "'kuer ku'" <ku...@gm...> Cc: moo...@li... Message-ID: <000b01cb912e$a0e50d80$e2af2880$@bor...@ge...> Content-Type: text/plain; charset="utf-8" Hi! You have here some problem with starting the chunkserver. But for sure this has no connection with the TIMEMODE_RUNONCE constant (it says that after clock time change sth has to be run once and not 60 times). We also had some very rare cases that while starting master server, chunkserver got hung up. In 1.6.18 this problem should be eliminated. Kind regards Michal From: kuer ku [mailto:ku...@gm...] Sent: Monday, November 29, 2010 12:18 PM To: moo...@li... Subject: [Moosefs-users] how many times chunckserver will retry when disconnecting from metaserver ? Hi, all, I deloyed a mfs-1.6.15 in my environment, today I found a problem. The appearance is one of mfsmount (FUSE) complained that : Nov 29 18:26:11 storage04 mfsmount[32233]: file: 43, index: 0 - can't connect to proper chunkserver (try counter: 29) I donot know which chunkserver cause this. ??? On web interface, I found storage01, one of chunkservers, is not in the server list. and on storage01, there are some logs in /var/log/messages : Nov 29 14:43:27 storage01 mfsmount[13155]: master: connection lost (1) Nov 29 14:43:27 storage01 mfsmount[13155]: registered to master Nov 29 14:44:12 storage01 mfschunkserver[11730]: Master connection lost ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ mfschunkserver found connection lost, but there no logs indicate that mfschunkserver try to reconnect with master Nov 29 15:07:44 storage01 smartd[4268]: System clock time adjusted to the past. Resetting next wakeup time. # the following log happened because I restart chunkserver forcely Nov 29 18:29:59 storage01 h*U?2[11730]: closing *:19722 Nov 29 18:30:13 storage01 mfschunkserver[6764]: listen on *:19722 Nov 29 18:30:13 storage01 mfschunkserver[6764]: connecting ... Nov 29 18:30:13 storage01 mfschunkserver[6764]: open files limit: 10000 Nov 29 18:30:13 storage01 mfschunkserver[6764]: connected to Master and in chunkserver/masterconn.c , I found codes : 1311 main_eachloopregister(masterconn_check_hdd_reports); 1312 main_timeregister(TIMEMODE_RUNONCE,ReconnectionDelay,0,masterconn_reconnect) ; ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^ it will try to reconnect once ?????? 1313 main_destructregister(masterconn_term); 1314 main_pollregister(masterconn_desc,masterconn_serve); 1315 main_reloadregister(masterconn_reload); I think chunkserver should re-connect to master again and again, until it reachs master. but I does not find that in the code. P.S. I remember that I adjust storage01 's time by ntpdate dateserver. does this affect chunckserver so seriously ?? thanks -- kuer -------------- next part -------------- An HTML attachment was scrubbed... ------------------------------ ---------------------------------------------------------------------------- -- Increase Visibility of Your 3D Game App & Earn a Chance To Win $500! Tap into the largest installed PC base & get more eyes on your game by optimizing for Intel(R) Graphics Technology. Get started today with the Intel(R) Software Partner Program. Five $500 cash prizes are up for grabs. http://p.sf.net/sfu/intelisp-dev2dev ------------------------------ _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users End of moosefs-users Digest, Vol 12, Issue 1 ******************************************** |
From: Jun C. P. <jun...@gm...> - 2010-12-14 22:14:03
|
Hi, I have been trying to create a KVM guest VM on the native MFS. But it has not been quite successful although I getting closer.... Before delving into the problems, I would like to clarify some relevant issues. When I used a loop device on top of MFS using sparse files (For instance, mount -o loop /mnt/mfs/mfs_sparse_200g /mnt/mfs/loop0/), I didn't have any problems in using KVM on /mnt/mfs/loop0/). The problems that I am experiencing now are from the cases where I try to create KVM guest VMs directly on the native MFS. Here I hope to get any successful story as to how to **manually** setup a KVM guest OS on the native MFS without using virt-manager or virt-install because those tools didn't work on the native MFS. The following are the details of the problems. First I made an MFS mountpoint on a KVM node (and MFS client) as follows. # mfsmount -H $metadata_server_ip -o suid -o dev -o rw -o exec /mnt/mfs # mount ... mfs#$metadata_server_ip:9421 on /mnt/mfs type fuse (rw,allow_other,default_permissions) Then, create a disk image file for the guest storage. # qemu-img create -f qcow2 /mnt/mfs/mfskvm1.img 10G # virt-install --name CentOS5 --ram 1000 --disk path=/mnt/mfs/mfskvm1.img,size=10 --network network:default --accelerate --vnc --cdrom /mnt/mfs/CentOS-5.5-x86_64-bin-DVD-1of2.iso --os-type=linux Then, I got the following error: Starting install... ERROR internal error unable to start guest: qemu: could not open disk image /mnt/mfs/mfsvkm1.img So it seems that the given KVM management tools including virt-install and virt-manager are not working well directly on the native MFS because (I guess) there are some incompatible options used by those tools, which conflicts with MFS. Then, I tried to manually create a KVM on the naitve MFS as follows via qemu-kvm. # /usr/libexec/qemu-kvm -M rhel5.4.0 -m 1024 -smp 1 -name mfskvm1 -uuid af1be3ae-daab-d450-293b-dc03ff7c3c35 -no-kvm-pit-reinjection -monitor pty -pidfile /var/run/libvirt/qemu/mfskvm1.pid -no-reboot -boot d -drive file=/mnt/mfs/mfskvm1.img,if=virtio,index=0,format=qcow2,cache=writethrough -drive file=/mnt/mfs/CentOS-5.5-x86_64-bin-DVD-1of2.iso,if=ide,media=cdrom,index=2,format=raw,cache=writethrough -net nic,macaddr=12:01:77:99:00:20,vlan=0,model=e1000 -net tap,fd=33,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb -usbdevice tablet -vnc :10 & Then, it worked well! The differences from virt-manager or virt-install are here I use 'cache=writethrough' rather than 'none' and don't use '-S' as opposed to the options generated by virt-manager virt-install. So I could successfully installed CentOS.iso on mfskvm1.img, and then rebooted it, which resulted in the qemu-kvm process gone due to the option of -no-reboot. Then, I invoked the qemu-kvm process in order to boot up mfskvm1 using the installed image on mfskvm1.img as follows: # /usr/libexec/qemu-kvm -M rhel5.4.0 -m 1024 -smp 1 -name mfskvm1 -uuid af1be3ae-daab-d450-293b-dc03ff7c3c35 -no-kvm-pit-reinjection -monitor pty -pidfile /var/run/libvirt/qemu/mfskvm1.pid -boot c -drive file=/mnt/mfs/mfskvm1.img,if=virtio,index=0,boot=on,format=qcow2,cache=writethrough -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=12:01:77:99:00:20,vlan=0,model=e1000 -net tap,fd=34,script=,vlan=0,ifname=vnet0 -serial pty -parallel none -usb -usbdevice tablet -vnc :10 & Everything works fine except for the problem that the ip determination for eth0 fails. 
As I mentioned, when I used virt-install on the MFS **loop** device, the IP determination worked well, which means my dhcp server properly responded. I would like to get some information how to resolve this network setup issue on the native MFS. Thanks in advance, -Jun |
From: Jun C. P. <jun...@gm...> - 2010-12-13 17:13:46
|
Thank all of you for the comments and the information. I also had similar experiences on GlusterFS where failover was not properly working, resulting in inconsistency of distributed files. GlusterFS 3.1.1. resolved some failover issues, though. What I really like about MFS compared to GlusterFS is its performance. The results of MFS via iozone were significantly better than the ones of GlusterFS in most of the test cases. I have some questions for the KVM use on MFS. I think it would be suitable to post those questions in a different thread. Thanks again, -Jun On Fri, Dec 10, 2010 at 11:13 PM, Thomas S Hatch <tha...@gm...> wrote: > I am a Sr engineer at Beyond Oblivion, we are using Moose in a 140T and > growing setup. > We are using kvm for virtual machines, the performance is adequate. > We also tried a number of other distributed filesystems and moose was by far > the best option. For what it is worth, glusterfs was a disaster, we saw > rampant file corruption, and the distribution of files was inconsistent and > unfortunately ceph is not ready yet, I figure moose will be giving it a run > for its money when it is. > -Thomas S Hatch > > On Fri, Dec 10, 2010 at 2:57 PM, Jun Cheol Park <jun...@gm...> > wrote: >> >> One more question: >> I am planning how to use KVM on MFS. Is there any use example of this >> combination? >> >> Thanks, >> >> -Jun >> >> >> On Fri, Dec 10, 2010 at 2:33 PM, Jun Cheol Park >> <jun...@gm...> wrote: >> > Hi, >> > >> > I would like to know how many use examples of MFS are in real >> > production so far. And also wondering how big they are. >> > >> > Is there anyone who can give me comments on how substantially reliable >> > MFS is for production? >> > >> > Thanks in advance, >> > >> > -Jun >> > >> >> >> ------------------------------------------------------------------------------ >> Oracle to DB2 Conversion Guide: Learn learn about native support for >> PL/SQL, >> new data types, scalar functions, improved concurrency, built-in packages, >> OCI, SQL*Plus, data movement tools, best practices and more. >> http://p.sf.net/sfu/oracle-sfdev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
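As a point of reference for this kind of comparison, a typical iozone run against the two mounts might look like the sketch below (test selection, file size, record size and mount paths are only examples):

# iozone -i 0 -i 1 -s 1g -r 128k -f /mnt/mfs/iozone.tmp
# iozone -i 0 -i 1 -s 1g -r 128k -f /mnt/gluster/iozone.tmp

Running the identical command against both mounts from the same client keeps the numbers comparable.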
From: Josef <pe...@p-...> - 2010-12-13 13:18:10
|
Hello, I've been studying the moosefs source code to find the planning mechanism for where to place new chunks, but due to a lack of comments it's quite difficult. Could someone recommend where to search? The same interests me for reading: if there are multiple copies, which one does the client choose to read from? An ideal planning mechanism should take into account factors such as chunkserver load, disk usage, available network bandwidth and so on. The topic of my dissertation thesis at the CTU in Prague is process planning in distributed systems, so if there is a lack of such a mechanism, I would be interested in helping. Josef |