From: Michał B. <mic...@ge...> - 2011-12-14 08:44:35

Hi!

Unfortunately POSIX doesn't give any clear specification on this subject. MooseFS behaves in a way which is found in most other systems and, to be honest, is the safest one.

For example, on Mac OS X (HFS+) we have:

(acid: </tmp/aqq>) $ mkdir dir1
(acid: </tmp/aqq>) $ ls -ld dir1
drwxr-xr-x 2 acid wheel 68 Dec 13 21:15 dir1
(acid: </tmp/aqq>) $ chmod g+s dir1
(acid: </tmp/aqq>) $ chgrp staff dir1
(acid: </tmp/aqq>) $ ls -ld dir1
drwxr-xr-x 2 acid staff 68 Dec 13 21:15 dir1
(acid: </tmp/aqq>) $ cd dir1
(acid: </tmp/aqq/dir1>) $ mkdir dir2
(acid: </tmp/aqq/dir1>) $ ls -ld dir2
drwxr-xr-x 2 acid staff 68 Dec 13 21:15 dir2

And on FreeBSD 7.x (UFS) we have:

[acid@fbsd7 /tmp/aqq]$ mkdir dir1
[acid@fbsd7 /tmp/aqq]$ ls -ld dir1
drwxr-xr-x 2 acid wheel 512 Dec 13 21:18 dir1
[acid@fbsd7 /tmp/aqq]$ chmod g+s dir1
[acid@fbsd7 /tmp/aqq]$ chgrp users dir1
[acid@fbsd7 /tmp/aqq]$ ls -ld dir1
drwxr-xr-x 2 acid users 512 Dec 13 21:18 dir1
[acid@fbsd7 /tmp/aqq]$ cd dir1
[acid@fbsd7 /tmp/aqq/dir1]$ mkdir dir2
[acid@fbsd7 /tmp/aqq/dir1]$ ls -ld dir2
drwxr-xr-x 2 acid users 512 Dec 13 21:18 dir2

The behaviour of the sgid bit described in your email is probably Linux-specific. In the future we could consider a "LINUX SUGID COMPATIBILITY" config option.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Sébastien Morand [mailto:seb...@gm...]
Sent: Tuesday, December 13, 2011 7:42 PM
To: moo...@li...
Cc: Aleksandra Rudnitska
Subject:

Hi,

I'm currently using mfs-1.6.20-2 and noticed that the setgid bit is not correctly handled.

$ groups
toto test
$ cd $HOME
$ mkdir dir1
$ ls -ld dir1
drwxr-xr-x 2 toto toto 4096 Dec 13 18:36 dir1
$ chmod g+s dir1
$ chgrp test dir1
$ ls -ld dir1
drwxr-xr-x 2 toto test 4096 Dec 13 18:36 dir1
$ cd dir1
$ mkdir dir2
$ ls -ld dir2
drwxr-xr-x 2 toto test 4096 Dec 13 18:36 dir2

dir2 should have the setgid bit set; here is the expected result:

$ ls -ld dir2
drwxr-sr-x 2 toto test 4096 Dec 13 18:36 dir2

I'm attaching the patch for interested people. Only the mfsmaster is concerned. Sorry if this is already corrected in a later version.

Regards,
Sebastien

From: Michał B. <mic...@ge...> - 2011-12-14 08:38:16

Hi!

This is still too little information for us to be able to help you... What about the metaloggers? Don't you have metadata_ml.mfs.back and changelog_ml.*.mfs files? You could put them on FTP as a tar.gz and give us a link so that we can try to recover them.

In "emergency" situations MooseFS tries to write the metadata file on the master machine in these locations:

/metadata.mfs.emergency
/tmp/metadata.mfs.emergency
/var/metadata.mfs.emergency
/usr/metadata.mfs.emergency
/usr/share/metadata.mfs.emergency
/usr/local/metadata.mfs.emergency
/usr/local/var/metadata.mfs.emergency
/usr/local/share/metadata.mfs.emergency

You can try to find them there, but as the RAID went broken it is possible you won't see them. It is impossible to recover metadata from the chunks themselves. However, the file data are in the chunks, so if you need to find something specific it would be possible. Each chunk has a 5kB plain-text header, so a simple 'grep' would be enough to find what you are looking for.

BTW, how much data was kept on this MooseFS installation?

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: René Pfeiffer [mailto:ly...@lu...]
Sent: Tuesday, December 13, 2011 2:17 PM
To: moo...@li...
Subject: Re: [Moosefs-users] Question about MooseFS metadata distribution/recovery

On Dec 12, 2011 at 1554 -0800, wkmail appeared and said:
> On 12/12/2011 3:36 PM, René Pfeiffer wrote:
> > …
> > The biggest problem is that we cannot figure out what the RAID
> > controller exactly did to the file system of the master server, and
> > we haven't found any traces of a more recent metadata file. The
> > metalogger system had no problem, but can it be that the metalogger
> > was/is out of sync due to the silent file system corruption on the master system?
>
> That is a question for the devs, but early in our MFS testing with
> essentially throwaway kit, we had a master fail with a broken RAID. In
> that case the underlying disk system had been essentially read-only for
> a few days and no recent data was in /usr/local/var/mfs.
>
> However, the metalogger DID have accurate information and we simply
> recovered using that data via the restore process and then copied the
> metadata file over to the now fixed master. Except for the 'on the fly
> files' lost when the damn thing crashed, no other data was lost,
> including files that had been received and written to a chunkserver
> during the time the disk subsystem was out of order.
>
> So my guess is that the metaloggers get their info from the master's
> memory, not from a file on the master.

Ok, this might be the reason then, because the master went down hard two times (first time 21 October, second time 9 December) because the RAID controller totally locked the system. I assume this could explain some missing metadata.

> But that is something that should be confirmed by the devs.

Thanks,
René.

--
)\._.,--....,'``.  fL      Let GNU/Linux work for you while you take a nap.
/,   _.. \   _\  (`._ ,.   R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems? http://web.luchs.at/information/blockedmail.php

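A minimal sketch of the grep-over-chunks approach mentioned at the top of the message above, assuming chunk files live under the chunkserver directories listed in mfshdd.cfg; the path and search string are illustrative, not taken from the thread:

```sh
# Minimal sketch: search raw chunk files on a chunkserver for a known string.
# Assumption: chunks live under a directory listed in mfshdd.cfg (path is illustrative).
# Each chunk file starts with a plain-text header, and grep can scan the binary body too.
grep -rl --binary-files=text 'string-you-remember-from-the-lost-file' /var/lib/mfs/sdb1/
```
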
From: Alexander A. <akh...@ri...> - 2011-12-14 08:18:17

Hi Elliot!

My experience is to run something like this on the mfsmaster via cron:

# cat /etc/crontab
# /etc/crontab - root's crontab for FreeBSD
#minute hour    mday    month   wday    who     command
27      04      *       *       *       root    /root/do_dump

# cat /root/do_dump
/usr/local/bin/mfsmakesnapshot -o /moosefs_mnt/ftp /moosefs_mnt/backup/ftp/`date +%A`
/usr/local/bin/mfssetgoal -r 1 /moosefs_mnt/backup/ftp/`date +%A`

Works well :--)

wbr
Alexander

======================================================

Hello all,

Has anyone used snapshots extensively? I'm just curious how robust they are. I have 50+ VMs running with their virtual disks on MFS. For backing up the VM images, I'd like to take a snapshot, copy it to a remote location and then delete the snapshot. Has anyone tried this or similar? If so, would you mind sharing your experience?

Thanks,
Elliot

From: 蒋文佼 <j.w...@gm...> - 2011-12-14 03:55:16

MooseFS is not so good for small files, so I want to use a loop device on top of it. I want to know whether anyone has done the same thing, and whether MooseFS works well for a loop device mount.

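A minimal sketch of the loop-device setup being asked about, assuming an MFS mount at /mnt/mfs; the paths, sizes and filesystem choice are illustrative assumptions, not a recommendation from the thread:

```sh
# Minimal sketch: back a loop device with one large file on MooseFS
# (assumes an MFS mount at /mnt/mfs; paths and size are illustrative).
dd if=/dev/zero of=/mnt/mfs/smallfiles.img bs=1M count=10240   # 10 GiB backing file
mkfs.ext4 -F /mnt/mfs/smallfiles.img                           # local filesystem inside the image
mkdir -p /mnt/smallfiles
mount -o loop /mnt/mfs/smallfiles.img /mnt/smallfiles          # small files now live inside one big MFS file
```
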
From: Travis H. <tra...@tr...> - 2011-12-14 00:51:06

I discovered this today and found my solution, so I'm sharing it.

I'm using MooseFS 1.6.20-2, with only one chunk server in the system, where I had set goal = 1. The single chunk server VM had six 250 GB (virtual) disks allocated to it:

/dev/sdb1 on /var/lib/mfs/sdb1 type ext4 (rw,noatime)
/dev/sdc1 on /var/lib/mfs/sdc1 type ext4 (rw,noatime)
/dev/sdd1 on /var/lib/mfs/sdd1 type ext4 (rw,noatime)
/dev/sde1 on /var/lib/mfs/sde1 type ext4 (rw,noatime)
/dev/sdf1 on /var/lib/mfs/sdf1 type ext4 (rw,noatime)
/dev/sdg1 on /var/lib/mfs/sdg1 type ext4 (rw,noatime)

I was interested in reclaiming one of the disks because the file system was only 5% in use, so I edited mfshdd.cfg on the chunk server to mark one disk (sdg1) for removal:

/var/lib/mfs/sdb1
/var/lib/mfs/sdc1
/var/lib/mfs/sdd1
/var/lib/mfs/sde1
/var/lib/mfs/sdf1
*/var/lib/mfs/sdg1

I noticed that after several hours of idle time (no clients even mounting it), no chunk replications had occurred to move the chunks located on the disk marked for removal onto the other disks of the chunk server. Really this makes sense, because I have goal = 1 and only one chunk server to work with. Since chunks seem to be replicated to balance utilization among all disks in all chunk servers over time, I had assumed I could remove a disk on a single chunk server, whereas the process apparently assumes there are more chunk servers than the number of goals to work with.

What I did to get it to migrate the chunks off was to temporarily create a second chunk server process on the same VM, serving only the disk I had marked for removal (a condensed configuration sketch follows this message):

* copy mfschunkserver.cfg to mfschunkserver-2.cfg
  ** change DATA_PATH, e.g. to /var/lib/mfs-2
  ** change CSSERV_LISTEN_PORT
  ** edit iptables if required to allow the new listen port
  ** change HDD_CONF_FILENAME, e.g. to mfshdd-2
* copy mfshdd.cfg to mfshdd-2.cfg
  ** remove all but the one disk marked for removal from mfshdd-2.cfg
  ** remove only the disk marked for removal from mfshdd.cfg
* create an empty /var/lib/mfs-2 folder and chown it to the nobody user (or whichever user runs mfschunkserver)
* start the second chunk server with the command line "mfschunkserver -c /etc/mfschunkserver-2.cfg" or, as I did, copy the init script and modify it to specify the second config file
* monitor the CGI service; when the number of chunks under goal due to disks marked for removal drops to zero, turn off the second chunk server process, unmount the disk marked for removal, and clean up the temporary "-2" files created to launch the second chunk server process on the same machine as the regular chunk server process. (I guess I could have moved the virtual disk, like a physical disk, to a different machine and configured a new chunk server there; this was just more convenient for me at the time.)

In the CGI monitor I now see two chunk servers and all of the disks as before but, more important to my current interest, the number of chunks temporarily under goal because they live on a disk marked for removal is going down, as the chunk replication that redistributes chunks to disks not marked for removal is now working.

So, was it a reasonable design decision not to replicate chunks off a disk marked for removal within a single-chunk-server setup (as in this example), because a file's "goal" corresponds to its presence on chunk servers and not to the level of disks within a chunk server?

Also, what I found interesting, and possibly helpful to anyone else playing with chunk servers, is that (for this version of MooseFS anyway) a "disk" that belongs to a chunk server can be relocated to a new chunk server process (assuming the original chunk server no longer uses it, of course) and configured to point to the mfsmaster as before. Chunk servers with many disks can therefore be split apart into individual chunk servers. I expect it could be very bad to go the other way, though: combining separate chunk servers with a single disk each into a single chunk server with many disks?

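A condensed sketch of the temporary second chunk server described above; the config file names, data path, port and user are illustrative assumptions and should be adapted to the local installation:

```sh
# Condensed sketch of the disk-drain procedure described in the message above
# (paths, port and user are illustrative assumptions, not verbatim from the post).
cp /etc/mfschunkserver.cfg /etc/mfschunkserver-2.cfg
cp /etc/mfshdd.cfg /etc/mfshdd-2.cfg
# In /etc/mfschunkserver-2.cfg set, for example:
#   DATA_PATH = /var/lib/mfs-2
#   CSSERV_LISTEN_PORT = 9522            # any free port different from the first chunkserver
#   HDD_CONF_FILENAME = /etc/mfshdd-2.cfg
# /etc/mfshdd-2.cfg then lists only the disk being drained:
#   /var/lib/mfs/sdg1
# and that disk is removed from /etc/mfshdd.cfg entirely.
mkdir -p /var/lib/mfs-2 && chown nobody /var/lib/mfs-2
mfschunkserver -c /etc/mfschunkserver-2.cfg    # start the second chunkserver process
```
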
From: Elliot F. <efi...@gm...> - 2011-12-13 21:02:36

Hello all,

Has anyone used snapshots extensively? I'm just curious how robust they are. I have 50+ VMs running with their virtual disks on MFS. For backing up the VM images, I'd like to take a snapshot, copy it to a remote location and then delete the snapshot. Has anyone tried this or similar? If so, would you mind sharing your experience?

Thanks,
Elliot

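A minimal sketch of the snapshot, copy, delete cycle being asked about, using the mfsmakesnapshot command shown in Alexander's reply above; the mount point, directory names and backup host are illustrative assumptions:

```sh
# Minimal sketch of the snapshot -> copy -> delete backup cycle asked about above
# (mount point, directories and backup host are illustrative assumptions).
SNAP=/mnt/mfs/backup/vm-$(date +%F)
mfsmakesnapshot /mnt/mfs/vm-images "$SNAP"                   # snapshot inside the same MFS
rsync -a "$SNAP"/ backuphost:/srv/vm-backups/$(date +%F)/    # copy off-cluster
rm -rf "$SNAP"                                               # drop the snapshot afterwards
```
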
From: Laurent W. <lw...@hy...> - 2011-12-13 14:13:16

Hi,

I'm no MFS dev, but AFAIK each metadata change on the master is sent immediately to the metalogger and then gets written to disk on the master.

HTH,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: René P. <ly...@lu...> - 2011-12-13 13:17:39

On Dec 12, 2011 at 1554 -0800, wkmail appeared and said:
> On 12/12/2011 3:36 PM, René Pfeiffer wrote:
> > …
> > The biggest problem is that we cannot figure out what the RAID controller
> > exactly did to the file system of the master server, and we haven't found
> > any traces of a more recent metadata file. The metalogger system had no
> > problem, but can it be that the metalogger was/is out of sync due to the
> > silent file system corruption on the master system?
>
> That is a question for the devs, but early in our MFS testing with
> essentially throwaway kit, we had a master fail with a broken RAID. In
> that case the underlying disk system had been essentially read-only for a
> few days and no recent data was in /usr/local/var/mfs.
>
> However, the metalogger DID have accurate information and we simply
> recovered using that data via the restore process and then copied the
> metadata file over to the now fixed master. Except for the 'on the fly
> files' lost when the damn thing crashed, no other data was lost,
> including files that had been received and written to a chunkserver during
> the time the disk subsystem was out of order.
>
> So my guess is that the metaloggers get their info from the master's
> memory, not from a file on the master.

Ok, this might be the reason then, because the master went down hard two times (first time 21 October, second time 9 December) because the RAID controller totally locked the system. I assume this could explain some missing metadata.

> But that is something that should be confirmed by the devs.

Thanks,
René.

--
)\._.,--....,'``.  fL      Let GNU/Linux work for you while you take a nap.
/,   _.. \   _\  (`._ ,.   R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems? http://web.luchs.at/information/blockedmail.php

From: Kies L. <lil...@gm...> - 2011-12-13 08:05:59

I have encountered the same problem. Does anyone have any ideas about it?

On Thursday, December 1, 2011, liangzhenfang wrote:
> Dear mfsers,
>
> I have installed MFS, but it does not cache the lookup operations.
> Here is the mfsmount invocation and the result of cat .stats:
>
> mfsmount -o mfsattrcacheto=60000 -o mfsentrycacheto=60000 -o mfsentrycacheto=60000 -o mfswritecachesize=1024 /home/rrddata/ -H HOST
>
> fuse_ops.statfs: 24
> fuse_ops.access: 0
> fuse_ops.lookup-cached: 75
> fuse_ops.lookup: 12348564
> fuse_ops.getattr-cached: 170
> fuse_ops.getattr: 1578048
> fuse_ops.setattr: 35
> fuse_ops.mknod: 0
> fuse_ops.unlink: 1096
> fuse_ops.mkdir: 1
> fuse_ops.rmdir: 0
> fuse_ops.symlink: 0
> fuse_ops.readlink: 0
> fuse_ops.rename: 8
> fuse_ops.link: 0
> fuse_ops.opendir: 342
> fuse_ops.readdir: 1759
> fuse_ops.releasedir: 342
> fuse_ops.create: 1107
> fuse_ops.open: 4438617
> fuse_ops.release: 4439721
> fuse_ops.read: 3068682
> fuse_ops.write: 35827564
> fuse_ops.flush: 4439692
> fuse_ops.fsync: 13
> master.reconnects: 0
> master.bytes_sent: 929559594
> master.bytes_received: 1395627491
> master.packets_sent: 35603705
> master.packets_received: 35600113
>
> How can I improve the cache hit rate?
> Thank you very much in advance.
>
> Zhenfang, liang

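For anyone debugging the same thing, a minimal sketch of watching those counters on a live mount; the mount point is an illustrative assumption, and .stats is the per-mount statistics file the quoted message reads with cat:

```sh
# Minimal sketch: watch the lookup/getattr cache counters on a live mfsmount
# (mount point is an illustrative assumption; .stats is the special statistics
#  file exposed by mfsmount, as read in the quoted message).
watch -n 5 'grep -E "lookup|getattr" /home/rrddata/.stats'
```
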
From: wkmail <wk...@bn...> - 2011-12-12 23:55:57

On 12/12/2011 3:36 PM, René Pfeiffer wrote:
> Yes, it did, but I get the "D" state for processes accessing the mount,
> too. The logs show messages of the type "chunk xyz has only invalid copies
> (1) - please repair it manually", so I guess the metadata is still not
> correct (IP addresses and names of the chunk servers haven't changed).
>
> The biggest problem is that we cannot figure out what the RAID controller
> exactly did to the file system of the master server, and we haven't found
> any traces of a more recent metadata file. The metalogger system had no
> problem, but can it be that the metalogger was/is out of sync due to the
> silent file system corruption on the master system?

That is a question for the devs, but early in our MFS testing with essentially throwaway kit, we had a master fail with a broken RAID. In that case the underlying disk system had been essentially read-only for a few days and no recent data was in /usr/local/var/mfs.

However, the metalogger DID have accurate information and we simply recovered using that data via the restore process and then copied the metadata file over to the now fixed master. Except for the 'on the fly files' lost when the damn thing crashed, no other data was lost, including files that had been received and written to a chunkserver during the time the disk subsystem was out of order.

So my guess is that the metaloggers get their info from the master's memory, not from a file on the master.

But that is something that should be confirmed by the devs.

From: René P. <ly...@lu...> - 2011-12-12 23:37:13

On Dec 12, 2011 at 1520 -0800, wkmail appeared and said:
> On 12/12/2011 11:45 AM, René Pfeiffer wrote:
> > Recovery only yielded a metadata.mfs.back file which dates back to 29 June
> > 2011.
>
> What about your metalogger? Didn't that have a more up to date metadata
> file?

Yes, it did, but I get the "D" state for processes accessing the mount, too. The logs show messages of the type "chunk xyz has only invalid copies (1) - please repair it manually", so I guess the metadata is still not correct (IP addresses and names of the chunk servers haven't changed).

The biggest problem is that we cannot figure out what the RAID controller exactly did to the file system of the master server, and we haven't found any traces of a more recent metadata file. The metalogger system had no problem, but can it be that the metalogger was/is out of sync due to the silent file system corruption on the master system?

Best,
René.

--
)\._.,--....,'``.  fL      Let GNU/Linux work for you while you take a nap.
/,   _.. \   _\  (`._ ,.   R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems? http://web.luchs.at/information/blockedmail.php

From: wkmail <wk...@bn...> - 2011-12-12 23:21:48

On 12/12/2011 11:45 AM, René Pfeiffer wrote:
> Recovery only yielded a metadata.mfs.back file which dates back to 29 June
> 2011.

What about your metalogger? Didn't that have a more up to date metadata file?

-bill

From: René P. <ly...@lu...> - 2011-12-12 19:45:25

On Dec 11, 2011 at 1913 +0100, René Pfeiffer appeared and said:
> Hello!
>
> We have the following scenario with a MooseFS deployment.
>
> - 3 servers
> - server #3 : 1 master
> - server #2 : chunk server
> - server #1 : chunk server, one metalogger process running on this node

I forgot to include:

- All storage nodes run Debian 6.0 (x86_64).
- All storage nodes run MooseFS on /var, which is JFS.
- MooseFS 1.6.20 is used (compiled from source).

> …
> We're currently trying to recover the original master's directory
> containing the meta data.

Recovery only yielded a metadata.mfs.back file which dates back to 29 June 2011.

Best regards,
René Pfeiffer.

--
)\._.,--....,'``.  fL      Let GNU/Linux work for you while you take a nap.
/,   _.. \   _\  (`._ ,.   R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems? http://web.luchs.at/information/blockedmail.php

From: René P. <ly...@lu...> - 2011-12-11 18:13:41

Hello!

We have the following scenario with a MooseFS deployment:

- 3 servers
- server #3 : 1 master
- server #2 : chunk server
- server #1 : chunk server, one metalogger process running on this node

Server #3 suffered a hardware RAID failure, including a trashed JFS file system where the master logs were stored. We tried

- recovering the log/state data from server #1 and restarting the master process, and
- using the data from the metalogger process and starting a new master on server #1 (including changing the configs to use the new master as described in the documentation).

The MooseFS mount is accessible, but some file operations involving data end in processes hanging in the "D" state (uninterruptible sleep). Logs on the chunk servers show that the MooseFS is missing data, although the state of the chunk servers has not changed (no reboot, no file system damage, only a reconfigured master).

Is there a way to extract the data from the chunk directories? Is there a "fsck.mfs" or a similar tool? mfsfilerepair is not useful, because it only zeroed a file which was affected by the "D" state problem.

We're currently trying to recover the original master's directory containing the meta data. Is there any documentation regarding the meta data or chunk data besides the source code? Has anyone experienced a similar situation?

Best regards,
René Pfeiffer.

--
)\._.,--....,'``.  fL      Let GNU/Linux work for you while you take a nap.
/,   _.. \   _\  (`._ ,.   R. Pfeiffer <lynx at luchs.at> + http://web.luchs.at/
`._.-(,_..'--(,_..'`-.;.'  - System administration + Consulting + Teaching -
Got mail delivery problems? http://web.luchs.at/information/blockedmail.php

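A minimal sketch of rebuilding master metadata from metalogger files with mfsmetarestore, the restore path referred to elsewhere in this thread for MooseFS 1.6.x; the data directory and file names are illustrative assumptions about a default installation:

```sh
# Minimal sketch: rebuild master metadata from metalogger files (MooseFS 1.6.x).
# Assumption: the metalogger's data directory has been copied to the new master
# and lives in /usr/local/var/mfs (all paths are illustrative).
cd /usr/local/var/mfs
mfsmetarestore -m metadata_ml.mfs.back \
               -o metadata.mfs \
               changelog_ml.*.mfs        # replay the change logs onto the last dump
# Place the resulting metadata.mfs in the master's DATA_PATH before starting mfsmaster.
```
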
From: Ken <ken...@gm...> - 2011-12-08 04:35:49

Hello,

Thanks for the great work, I really love it.

After spending some time reading the source code, I noticed that no unit tests are published. IMHO, unit tests for independent modules like filesystem.c and chunks.c would bring great stability during fast changes. Without doubt MooseFS is stable enough, but this makes things very difficult for other users.

My questions are: Have any in-depth unit tests ever been published? Is there any script or anything else that performs input/output tests for mfsrestore or other procedures?

Maybe adding unit tests is hard, dirty work; tightly coupled code is normally difficult to unit test. In filesystem.c, fs_info calls matocsserv_getspace of matocuserv.c. Is it possible, or valuable enough, to decouple them?

Please don't hesitate to criticize me.

Thanks,
-Ken

From: 武文英 <wuw...@16...> - 2011-12-06 00:54:52

I tried to install the MFS client on my Ubuntu system. The version of MFS is mfs-1.6.20 and my file system is ext3. When I execute "/usr/bin/mfsmount /mnt/mfs -H mfsmaster", my Ubuntu system always reports this fault:

sudo /usr/local/mfs/bin/mfsmount /mnt/mfs -H mfsmaster
[sudo] password for autotest:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
/bin/mount: unrecognized option '--no-canonicalize'

I have tried several times and the fault is always the same. How can I solve it?

Expecting a reply, thanks.

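A hedged note for readers hitting the same message: the '--no-canonicalize' option is passed to /bin/mount by the FUSE userspace tools, and older util-linux releases do not understand it, so a reasonable first step is to compare the installed versions. The commands below are only a diagnostic sketch, not a fix from the thread:

```sh
# Diagnostic sketch (assumption: the FUSE userspace is newer than the installed
# util-linux mount, which does not understand --no-canonicalize).
mount --version         # util-linux version providing /bin/mount
fusermount --version    # FUSE userspace version used by mfsmount
```
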
From: matakura <mat...@gm...> - 2011-12-05 13:48:47

Does the '*' in line 2 of mfsexports.cfg mean that anyone has the privilege to read and write?

--
matakura
"Solid knowledge is acquired day after day, with tenacity, obstinacy, forcefulness, dedication and concentration, without losing focus and with great calm. Don't rush to absorb everything at once. Always remember how the tortoise won the race: because it did not divert its attention to anything outside its goal. That is the secret: always keep your main objective in focus and never lose sight of it."
Author: Soldado de peso

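A hedged illustration of the mfsexports.cfg format being asked about (each line is: client address(es), exported directory, options); the specific entries below are illustrative and are not the poster's actual file:

```
# Illustrative mfsexports.cfg entries (not the poster's actual file).
# Format per line: ADDRESS  DIRECTORY  OPTIONS
# '*' in the ADDRESS column matches any client; whether that client may read
# and write is decided by the OPTIONS column (ro / rw), not by the '*' itself.
*               /        rw,alldirs,maproot=0
192.168.1.0/24  /public  ro
```
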
From: 2. <272...@qq...> - 2011-12-05 05:38:48

Dear mfs,

I'm using MFS now and I have it running successfully, but I have some questions about "mount -m /meta":

1. Does "mount -m /meta" depend on the metalogger server running?
2. Can the metalogger server find files that have already been deleted?
3. If I want to find deleted files and must use "mount -m /meta", do I run it on one client or on more than one?

constant
12/05/2011

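A hedged sketch of what the meta mount looks like in practice on a client; the mount point and trash entry name are illustrative assumptions, and undeleting is done by moving a trash entry into the undel directory:

```sh
# Hedged sketch: inspect the meta filesystem and undelete a file
# (mount point and trash entry name are illustrative assumptions).
mkdir -p /mnt/mfsmeta
mfsmount -m /mnt/mfsmeta -H mfsmaster          # mount the meta filesystem on any client
ls /mnt/mfsmeta/trash                          # deleted files still within their trashtime
mv '/mnt/mfsmeta/trash/0000002A|dir|lostfile.txt' /mnt/mfsmeta/trash/undel/   # recover it
```
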
From: Mertès S. <seb...@u-...> - 2011-12-04 14:29:53

Hello,

I have a problem with the mfsmaster rc script when I reboot or halt the FreeBSD operating system. The timeout of the rc.shutdown script (30 s by default) kills the mfsmaster process if the mfsmaster rc script takes too long to stop, but I can't know in advance how much time the mfsmaster rc script needs to stop correctly.

I have set rcshutdown_timeout="" in the rc.conf file and set, for example, kern.init_shutdown_timeout=600 in the /etc/sysctl.conf file. Is this the right method?

Thank you,
Sébastien

From: Mario G. <mgi...@gm...> - 2011-12-04 12:53:18

Hello,

I would like to understand better how MooseFS works, to see whether it is fault tolerant. I have these questions:

1) If the mfsmaster burns, what really happens? OK, I have backups with two metaloggers, but are they updated synchronously? If I promote a metalogger to master, do I get the filesystem back in the last state of the master, or do I lose the last files created and/or updated?

2) Looking at the docs, it seems that if I have a replica, the replication work is done directly by the chunkservers. It seems to me this can cause a split brain. For example: chunkserver A is updating B. B goes offline, so it is not completely updated. Now A goes offline and B comes back online, but B now has stale data!

3) I am playing with MooseFS now. I powered off a chunkserver and then powered it on. Later, a VMware virtual machine with a chunkserver inside hung (not powered off, but a kernel oops). Now I cannot write to several directories of my MFS-mounted filesystem and cannot umount it. What happened?

Thanks,
Mario

From: yishi c. <hol...@gm...> - 2011-12-04 12:08:11

Hi MFS developers,

Recently I've been reading the source code of MooseFS, but I am confused by the fsedge data structure. Is anybody willing to answer the following questions?

1. This structure seems to link a child fsnode with its parent directory. Since there are no comments for this part of the code, can the developers explain the function of this structure in detail?

2. I think nextchild and nextparent point to the structure itself, and prevchild/prevparent seem to point to itself as well. Is that correct? If so, why does MooseFS need this?

Best regards,
yishi cheng

From: Giovanni T. <me...@gi...> - 2011-12-02 12:40:05

2011/12/2 matakura <mat...@gm...>:
> Does MooseFS work with any kind of cloud controller like Nimbus, Eucalyptus
> or OpenNebula?

You can find an OpenNebula transfer manager for MooseFS here:
http://opennebula.org/software:ecosystem:moosefs

Bye.
--
Giovanni Toraldo
http://gionn.net/

From: matakura <mat...@gm...> - 2011-12-02 12:02:43

Does MooseFS work with any kind of cloud controller, like Nimbus, Eucalyptus or OpenNebula?

--
matakura
"Solid knowledge is acquired day after day, with tenacity, obstinacy, forcefulness, dedication and concentration, without losing focus and with great calm. Don't rush to absorb everything at once. Always remember how the tortoise won the race: because it did not divert its attention to anything outside its goal. That is the secret: always keep your main objective in focus and never lose sight of it."
Author: Soldado de peso
