From: Michał B. <mic...@ge...> - 2012-02-27 20:50:29
|
Hi Jose! But we cannot see any "warnings" in the attached files...? Where exactly is the problem? Regards Michal -----Original Message----- From: jose maria [mailto:let...@us...] Sent: Saturday, February 25, 2012 10:19 AM To: moo...@li... Subject: [Moosefs-users] new warnings on make process. * Opensuse 12.1, systemd, not sys V, ext3 partitions. * Python --version Python 2.7.2 * gcc (SUSE Linux) 4.6.2 * mfs-1.6.20-2 * glibc 2.14.1 (release 14.18.1) * filesystem 12.1 (release 12.1) * kernel 3.1.9-1.4-default * Regards. |
From: Quenten G. <QG...@on...> - 2012-02-27 03:03:22
|
Hi All, Just wondering how you have handled re-exporting MFS, or somehow using native access, for virtual machine storage? I've been looking at iSCSI, but the CPU overhead is pretty horrendous; what other options do we have? Cheers, Quenten |
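One approach, sketched here rather than taken from a reply in this thread, is to skip the iSCSI layer entirely: keep the VM disk images as ordinary files on an mfsmount and point the hypervisor at them directly. The mount point, directory, and image names below are assumptions for illustration.

    # mount MooseFS (assuming the master resolves as "mfsmaster")
    mfsmount /mnt/mfs -H mfsmaster

    # keep extra replicas of the VM images (mfssetgoal also appears later in this archive)
    mfssetgoal -r 3 /mnt/mfs/vm-images

    # create a disk image on MooseFS and boot a KVM guest straight from it
    qemu-img create -f qcow2 /mnt/mfs/vm-images/guest1.qcow2 20G
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=/mnt/mfs/vm-images/guest1.qcow2,if=virtio,cache=none

Whether this performs acceptably depends heavily on the guests' write pattern; see the later discussion of small sequential writes in this same archive.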
From: Steffen Z. <me...@sa...> - 2012-02-26 22:33:28
|
On 02/24/2012 05:57 PM, Corin Langosch wrote: > Hi again, Hi, [...use Github...] > What do you think? +1 Contributing would be easier and you're able to track changes of others. > Corin Saz |
From: Stas O. <sta...@gm...> - 2012-02-25 22:12:27
|
Moreover, I noticed that running mount twice causes a double fuse mount, with unknown effects on system behavior. Meaning the MFS client is unable to detect that it is already mounted, and just mounts again. Will the additional mounting in rc.local cause such an effect? Regards. On Sat, Feb 25, 2012 at 10:56 AM, Stas Oskin <sta...@gm...> wrote: > Hi. > > So you advise both having mount -a -t fuse in rc.local, and chkconfig > netfs for Linux systems? > > Regards. > > > On Fri, Feb 24, 2012 at 1:18 AM, Ricardo J. Barberis < > ric...@da...> wrote: > >> Also, beware of gigabit networks: some NICs are very slow at establishing >> link >> and mfsmount (or any other network filesystem) will fail mounting anyway, >> even using _netdev on fstab as suggested by Michał. >> >> I have 'mount -a -t fuse' in /etc/rc.local just because of this. >> >> And, if you use _netdev you also have to enable mounting network >> filesystems on >> boot: 'chkconfig netfs on' in RedHat and derivatives should do the trick. >> >> >> PS: Michał, could you add that last bit about netfs to the Reference >> Guide? >> Thanks! >> >> On Tuesday 21/02/2012, Michał Borychowski wrote: >> > Hi Stas! >> > >> > What platform do you use? The solution with /etc/fstab works only on >> Linux >> > platforms (tested on Debian). On other platforms you need to prepare a >> > script in /usr/local/etc/rc.d which will run mfsmount with needed >> options. >> > >> > And on Linux you need _netdev option, as written on >> > http://www.moosefs.org/reference-guide.html in "Mounting the File >> System" >> > section. >> > >> > Kind regards >> > Michal >> > >> > From: Stas Oskin [mailto:sta...@gm...] >> > Sent: Sunday, January 15, 2012 10:01 AM >> > To: MooseFS >> > Subject: Connection timed out at mfsmount on boot >> > >> > Hi. >> > >> > We have the mfsmount set in fstab, but noticed it never properly works >> on >> > boot. >> >> >> -- >> Ricardo J. Barberis >> Senior SysAdmin / ITI >> Dattatec.com :: Soluciones de Web Hosting >> Tu Hosting hecho Simple! >> >> ------------------------------------------ >> > > |
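A simple guard against the double-mount behaviour described above is to test the mount point before invoking mfsmount. A minimal sketch for /etc/rc.local, assuming the mount point /mnt/mfs and a master reachable as "mfsmaster" (both are illustrative, not from the thread):

    # mount only if the target is not already a mount point
    mountpoint -q /mnt/mfs || mfsmount /mnt/mfs -H mfsmaster

    # portable alternative if mountpoint(1) is unavailable
    grep -qs ' /mnt/mfs fuse' /proc/mounts || mfsmount /mnt/mfs -H mfsmaster

This makes the rc.local fallback idempotent, so a second run cannot stack a second fuse mount on top of the first.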
From: Corin L. <cor...@ne...> - 2012-02-25 10:10:40
|
Hola Ricardo, On 24.02.2012 21:02, Ricardo J. Barberis wrote: > What mfs version are you using and on what OS/distro? It's the latest (1.6.20), self-compiled on a Debian squeeze system. ./configure --prefix=/opt/mfs --with-default-user=mfs --with-default-group=mfs MAKEFLAGS=-j4 make install I just managed to fix the issue: the problem was I didn't use any spaces between the key and the value, so "AAA=BBB" is silently ignored while "AAA = BBB" works fine. I'd suggest at least emitting a warning on the console/syslog when an invalid key is found. If the code were already hosted on github I'd even submit a patch/pull request ;-) Thanks and regards, Corin |
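For reference, the working form of the bind options from this thread, with the spaces the 1.6.20 parser evidently requires (the address is the one from Corin's report):

    # mfsmaster.cfg - note the spaces: "KEY = VALUE", not "KEY=VALUE"
    MATOML_LISTEN_HOST = 10.0.0.3
    MATOCS_LISTEN_HOST = 10.0.0.3
    MATOCU_LISTEN_HOST = 10.0.0.3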
From: jose m. <let...@us...> - 2012-02-25 09:45:26
|
* Opensuse 12.1, systemd, not sys V, ext3 partitions. * Python --version Python 2.7.2 * gcc (SUSE Linux) 4.6.2 * mfs-1.6.20-2 * glibc 2.14.1 (release 14.18.1) * filesystem 12.1 (release 12.1) * kernel 3.1.9-1.4-default * Regards. |
From: Ricardo J. B. <ric...@da...> - 2012-02-24 20:02:49
|
On Friday 24/02/2012, Corin Langosch wrote: > Hi all, > > I'm quite new to moosefs but so far it's working great :) > > For security reasons (besides firewall etc.) I'd like to bind the > moosefs daemons only to a specific ip address. According to the man > > pages this is what I have in my mfsmaster.cfg: > > # only listen on internal network for security purposes > > MATOML_LISTEN_HOST=10.0.0.3 > > MATOCS_LISTEN_HOST=10.0.0.3 > > MATOCU_LISTEN_HOST=10.0.0.3 > > I restarted the master several times, but running "netstat -lnp" still > > shows me that the master is bound to 0.0.0.0: > > Proto Recv-Q Send-Q Local Address Foreign Address > > State PID/Program name > > tcp 0 0 0.0.0.0:9419 0.0.0.0:* > > LISTEN 177788/mfsmaster > > tcp 0 0 0.0.0.0:9420 0.0.0.0:* > > LISTEN 177788/mfsmaster > > tcp 0 0 0.0.0.0:9421 0.0.0.0:* > > LISTEN 177788/mfsmaster > > Am I doing anything wrong or is this a bug? > > Thanks in advance for any suggestions :) > > Corin What mfs version are you using and on what OS/distro? It's working for me, CentOS 6.2 64 bits, mfs 1.6.20 installed from repoforge.org, no special patches at all. Cheers, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |
From: Allen, B. S <bs...@la...> - 2012-02-24 17:34:55
|
I like this idea. Ben On Feb 24, 2012, at 9:57 AM, Corin Langosch wrote: > Hi again, > > Yesterday I had a short chat in the irc channel with an admin. One of my > questions was if moosefs is still alive - the news section on the > homepage is quite old and I didn't see any commits in the repo. I was > informed that moosefs is still alive and a new version is about to be > released (great news btw :). But the git repo only gets updated after a > stable release. > > I'd like to suggest moving the repo to github and pushing whenever changes > are made. I think this'd give the project a great boost in terms of > collaboration like contributing patches (pull requests), reporting bugs > etc. Even the manual, faq, etc. could be moved to the wiki. I saw > someone already cloned the repo on github, so others seem to be > interested in this as well. > > What do you think? > > Corin |
From: Corin L. <cor...@ne...> - 2012-02-24 17:08:42
|
Hi all, I'm quite new to moosefs but so far it's working great :) For security reasons (besides firewall etc.) I'd like to bind the moosefs daemons only to a specific ip address. According to the man pages this is what I have in my mfsmaster.cfg: > # avoid swapping out process > LOCK_MEMORY=1 > > # reject mfsmounts older than 1.6.0 > REJECT_OLD_CLIENTS=1 > > # only listen on internal network for security purposes > MATOML_LISTEN_HOST=10.0.0.3 > MATOCS_LISTEN_HOST=10.0.0.3 > MATOCU_LISTEN_HOST=10.0.0.3 I restarted the master several times, but running "netstat -lnp" still shows me that the master is bound to 0.0.0.0: > Proto Recv-Q Send-Q Local Address Foreign Address > State PID/Program name > tcp 0 0 0.0.0.0:9419 0.0.0.0:* > LISTEN 177788/mfsmaster > tcp 0 0 0.0.0.0:9420 0.0.0.0:* > LISTEN 177788/mfsmaster > tcp 0 0 0.0.0.0:9421 0.0.0.0:* > LISTEN 177788/mfsmaster Am I doing anything wrong or is this a bug? Thanks in advance for any suggestions :) Corin |
From: Corin L. <in...@co...> - 2012-02-24 16:57:43
|
Hi again, Yesterday I had a short chat in the irc channel with an admin. One of my questions was if moosefs is still alive - the news section on the homepage is quite old and I didn't see any commits in the repo. I was informed that moosefs is still alive and a new version is about to be released (great news btw :). But the git repo only gets updated after a stable release. I'd like to suggest moving the repo to github and pushing whenever changes are made. I think this'd give the project a great boost in terms of collaboration like contributing patches (pull requests), reporting bugs etc. Even the manual, faq, etc. could be moved to the wiki. I saw someone already cloned the repo on github, so others seem to be interested in this as well. What do you think? Corin |
From: Elliot F. <efi...@gm...> - 2012-02-24 00:43:49
|
Hello, If I have several very active VMs with their virtual disks living on moose and I have a mfs-master failure, are the metaloggers guaranteed to have the same metadata as the master? Or is there the possibility that they'll be 1 update behind, thus rendering all the VM disks as unavailable because they will have a single block on the chunkservers that will be 1 revision newer than what the metalogger has recorded for it? Thanks in advance, Elliot |
From: Ricardo J. B. <ric...@da...> - 2012-02-23 23:18:19
|
Also, beware of gigabit networks: some NICs are very slow at establishing link and mfsmount (or any other network filesystem) will fail mounting anyway, even using _netdev on fstab as suggested by Michał. I have 'mount -a -t fuse' in /etc/rc.local just because of this. And, if you use _netdev you also have to enable mounting network filesystems on boot: 'chkconfig netfs on' in RedHat and derivatives should do the trick. PS: Michał, could you add that last bit about netfs to the Reference Guide? Thanks! On Tuesday 21/02/2012, Michał Borychowski wrote: > Hi Stas! > > What platform do you use? The solution with /etc/fstab works only on Linux > platforms (tested on Debian). On other platforms you need to prepare a > script in /usr/local/etc/rc.d which will run mfsmount with needed options. > > And on Linux you need _netdev option, as written on > http://www.moosefs.org/reference-guide.html in "Mounting the File System" > section. > > Kind regards > Michal > > From: Stas Oskin [mailto:sta...@gm...] > Sent: Sunday, January 15, 2012 10:01 AM > To: MooseFS > Subject: Connection timed out at mfsmount on boot > > Hi. > > We have the mfsmount set in fstab, but noticed it never properly works on > boot. -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |
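Putting Ricardo's and Michał's advice together, a boot-time setup on a RedHat-style system might look like the sketch below. The mount point, master hostname, and port are assumptions; check mfsmount(8) and the Reference Guide for the exact fstab syntax in your version.

    # /etc/fstab - mount MooseFS at boot, after the network is up (_netdev)
    mfsmount /mnt/mfs fuse mfsmaster=mfsmaster.example.com,mfsport=9421,_netdev 0 0

    # enable mounting of network filesystems at boot (RedHat and derivatives)
    chkconfig netfs on

    # late retry for NICs that are slow to establish link, per the message above
    echo 'mount -a -t fuse' >> /etc/rc.local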
From: Ricardo J. B. <ric...@da...> - 2012-02-23 22:48:14
|
Sorry, I just saw your email today, I'll try to patch my metalogger and get back with my results. Thanks for your efforts in this! On Monday 20/02/2012, Michał Borychowski wrote: > Hi Ricardo! > > Please try to implement these changes and give us feedback whether it > helped in your case. > > You need to open "masterconn.c" file in "mfsmetalogger" folder and in the > "masterconn_beforeclose" function you have at the end something like: > > if (eptr->logfd) { > fclose(eptr->logfd); > } > > Please add this line: "eptr->logfd = NULL;" so that the whole block looks > like: > > if (eptr->logfd) { > fclose(eptr->logfd); > eptr->logfd = NULL; > } > > Regards > Michał -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |
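The same change expressed as a unified diff, assuming the surrounding code matches the 1.6.20 masterconn.c quoted above; clearing the pointer presumably keeps the stale FILE handle from being closed or reused again after a reconnect:

    --- mfsmetalogger/masterconn.c
    +++ mfsmetalogger/masterconn.c
    @@ masterconn_beforeclose @@
         if (eptr->logfd) {
             fclose(eptr->logfd);
    +        eptr->logfd = NULL;
         }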
From: Michał B. <mic...@ge...> - 2012-02-23 21:22:27
|
Hi Elliot! Yes, I confirm, the new release will be really, really soon :) Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Elliot Finley [mailto:efi...@gm...] Sent: Thursday, February 23, 2012 9:26 PM To: moo...@li... Subject: [Moosefs-users] new code Hello, Last year there was mention of new code being released in January of this year. Is it still being readied for release or did someone change their mind about releasing it? :) Thanks for any info, Elliot |
From: Elliot F. <efi...@gm...> - 2012-02-23 20:26:11
|
Hello, Last year there was mention of new code being released in January of this year. Is it still being readied for release or did someone change their mind about releasing it? :) Thanks for any info, Elliot |
From: Michał B. <mic...@ge...> - 2012-02-23 15:42:44
|
Hi Davies! Here is our analysis of this situation. Different files are written simultaneously on the same CS - that's why pwrites go to different files. A block size of 64kB is not that small. Writes in the OS are sent through the write cache, so all writes that are multiples of 4096B should work equally fast. Our tests: dd on Linux 64k : 640k $ dd if=/dev/zero of=/tmp/test bs=64k count=10000 10000+0 records in 10000+0 records out 655360000 bytes (655 MB) copied, 22.1836 s, 29.5 MB/s $ dd if=/dev/zero of=/tmp/test bs=640k count=1000 1000+0 records in 1000+0 records out 655360000 bytes (655 MB) copied, 23.1311 s, 28.3 MB/s dd on Mac OS X 64k : 640k $ dd if=/dev/zero of=/tmp/test bs=64k count=10000 10000+0 records in 10000+0 records out 655360000 bytes transferred in 14.874652 secs (44058846 bytes/sec) $ dd if=/dev/zero of=/tmp/test bs=640k count=1000 1000+0 records in 1000+0 records out 655360000 bytes transferred in 14.578427 secs (44954096 bytes/sec) So the times are similar. Writes going to different files should not be a problem either, as the kernel scheduler takes care of this. If you have a specific idea of how to improve the writes, please share it with us. Kind regards Michał -----Original Message----- From: Davies Liu [mailto:dav...@gm...] Sent: Wednesday, February 22, 2012 8:24 AM To: moo...@li... Subject: [Moosefs-users] Bad write performance of mfschunkserver Hi, devs: Today, we found that some mfschunkservers were not responsive, which caused many timeouts in mfsmount; then all write operations were blocked. After some digging, we found that there was some small but continuous write bandwidth; strace showed many small pwrite() calls between several files: [pid 7087] 12:28:28 pwrite(19, "baggins3 60.210.18.235 sE7NtNQU7"..., 25995, 55684725 <unfinished ...> [pid 7078] 12:28:28 pwrite(17, "2012/02/22 12:28:28:root: WARNIN"..., 69, 21768909 <unfinished ...> [...strace trimmed; the full trace is in Davies' original message below...] In order to get better performance, the chunk server should merge continuous sequential write operations into larger ones. -- - Davies |
From: 周泰 <zho...@qq...> - 2012-02-23 14:59:10
|
Dear Sir, Thanks for your open-source DFS moosefs. I'm a student studying the source code of moosefs now. Facing so much code, I find it hard to go on, so I want to get some help from you. All I want is some of the documents from when you designed moosefs (if this is not a "secret"). Thanks a lot. |
From: James M. <ma...@3p...> - 2012-02-23 02:53:57
|
Cheers Dad, Did you know we are using/renting/selling an Australian UAV drone in our operations? -- Best Regards, James Moyle General Manager Remote TS Pte Ltd |
From: Davies L. <dav...@gm...> - 2012-02-22 07:24:00
|
Hi,devs: Today, We found that some mfschunkserver were not responsive, caused many timeout in mfsmount, then all the write operation were blocked. After some digging, we found that there were some small but continuous write bandwidth, strace show that many small pwrite() between several files: [pid 7087] 12:28:28 pwrite(19, "baggins3 60.210.18.235 sE7NtNQU7"..., 25995, 55684725 <unfinished ...> [pid 7078] 12:28:28 pwrite(17, "2012/02/22 12:28:28:root: WARNIN"..., 69, 21768909 <unfinished ...> [pid 7080] 12:28:28 pwrite(20, "gardner4 183.7.50.169 mr5vi+Z4H3"..., 47663, 34550257 <unfinished ...> [pid 7079] 12:28:28 pwrite(19, "\" \"Mozilla/5.0 (Windows NT 6.1) "..., 40377, 55710720 <unfinished ...> [pid 7086] 12:28:28 pwrite(23, "MATP; InfoPath.2; .NET4.0C; 360S"..., 65536, 6427648 <unfinished ...> [pid 7082] 12:28:28 pwrite(23, "; GTB7.2; SLCC2; .NET CLR 2.0.50"..., 65536, 6493184 <unfinished ...> [pid 7083] 12:28:28 pwrite(20, "\255BYU\355\237\347\226s\261\307N{A\355\203S\306\244\255\322[\322\rJ\32[z3\31\311\327"..., 4096, 1024 <unfinished ...> [pid 7078] 12:28:28 pwrite(23, "ovie/subject/4724373/reviews?sta"..., 65536, 6558720 <unfinished ...> [pid 7080] 12:28:28 pwrite(19, "[\"[\4\5\266v\324\366\245n\t\315\202\227\\\343=\336-\r k)\316\354\335\353\373\340\331;"..., 4096, 1024 <unfinished ...> [pid 7079] 12:28:28 pwrite(23, "ta-Python/2.0.15\" 0.016\n211.147."..., 65536, 6624256 <unfinished ...> [pid 7081] 12:28:28 pwrite(23, "4034093?apikey=0eb695f25995d7eb2"..., 65536, 6689792 <unfinished ...> [pid 7084] 12:28:28 pwrite(23, " y8G23n95BKY:43534427:wind8vssc4"..., 65536, 6755328) = 65536 <0.000108> [pid 7078] 12:28:28 pwrite(23, "TkVvKuXfug:3248233:5Yo9vFoOIuo \""..., 65536, 6820864 <unfinished ...> [pid 7086] 12:28:28 pwrite(23, ":s|1563396:s|1040897:s|1395290:s"..., 65536, 6886400 <unfinished ...> [pid 7085] 12:28:28 pwrite(23, "dows%3B%20U%3B%20Windows%20NT%20"..., 65536, 6951936 <unfinished ...> [pid 7087] 12:28:28 pwrite(23, "/533.17.9 (KHTML, like Gecko) Ve"..., 65536, 7017472 <unfinished ...> [pid 7079] 12:28:28 pwrite(23, " r1m+tFW1T5M:: \"22/Feb/2012:00:0"..., 65536, 7083008 <unfinished ...> [pid 7086] 12:28:28 pwrite(19, "baggins5 61.174.60.117 i6MSCBvE1"..., 25159, 55751097 <unfinished ...> [pid 7084] 12:28:28 pwrite(20, "gardner1 182.118.7.64 TjxzPKdqNU"..., 10208, 34597920 <unfinished ...> [pid 7080] 12:28:28 pwrite(23, "d7eb2c23c1d70cc187c1&alt=json HT"..., 65536, 7148544 <unfinished ...> [pid 7083] 12:28:28 pwrite(23, "5_Google&type=n&channel=-3&user_"..., 65536, 7214080 <unfinished ...> [pid 7085] 12:28:28 pwrite(19, "12-02-22 12:28:27 1861 \"GET /ser"..., 23179, 55776256 <unfinished ...> [pid 7082] 12:28:28 pwrite(23, "\"http://douban.fm/swf/53035/fmpl"..., 65536, 7279616 <unfinished ...> [pid 7078] 12:28:28 pwrite(20, "opic/27639291/add_comment HTTP/1"..., 18576, 34608128 <unfinished ...> [pid 7087] 12:28:28 pwrite(19, "[\"[\4\5\266v\324\366\245n\t\315\202\227\\\343=\336-\r k)\316\354\335\353\373\340\331;"..., 4096, 1024 <unfinished ...> [pid 7079] 12:28:28 pwrite(23, "ww.douban.com%2Fgroup%2Ftopic%2F"..., 65536, 7345152 <unfinished ...> [pid 7081] 12:28:28 pwrite(20, "\255BYU\355\237\347\226s\261\307N{A\355\203S\306\244\255\322[\322\rJ\32[z3\31\311\327"..., 4096, 1024 <unfinished ...> [pid 7086] 12:28:28 pwrite(23, "patible; MSIE 7.0; Windows NT 6."..., 65536, 7410688 <unfinished ...> [pid 7084] 12:28:28 pwrite(23, "fari/535.7 360EE\" 0.006\n211.147."..., 65536, 7476224 <unfinished ...> [pid 7080] 12:28:28 pwrite(23, "1:OUIVR8CIG5c \"22/Feb/2012:00:03"..., 65536, 7541760 
<unfinished ...> [pid 7085] 12:28:28 pwrite(23, "fm \"GET /j/mine/playlist?type=s&"..., 65536, 7607296 <unfinished ...> [pid 7083] 12:28:28 pwrite(23, "pe=n&channel=18&user_id=39266798"..., 65536, 7672832 <unfinished ...> [pid 7082] 12:28:28 pwrite(23, " 0.023\n125.34.190.128 :: \"22/Feb"..., 65536, 7738368 <unfinished ...> [pid 7078] 12:28:28 pwrite(23, "00 5859 \"http://www.douban.com/p"..., 65536, 7803904 <unfinished ...> [pid 7079] 12:28:28 pwrite(23, "03:08 +0800\" www.douban.com \"GET"..., 65536, 7869440 <unfinished ...> [pid 7086] 12:28:28 pwrite(23, "type=all HTTP/1.1\" 200 1492 \"-\" "..., 65536, 7934976 <unfinished ...> [pid 7084] 12:28:28 pwrite(23, "Hiapk&user_id=57982902&expire=13"..., 65536, 8000512 <unfinished ...> [pid 7080] 12:28:28 pwrite(23, "0.011\n116.253.89.216 rxASuWZf1wg"..., 65536, 8066048 <unfinished ...> [pid 7085] 12:28:28 pwrite(23, "9 +0800\" www.douban.com \"GET /ph"..., 65536, 8131584) = 65536 <0.000062> [pid 7083] 12:28:28 pwrite(23, " +0800\" www.douban.com \"GET /eve"..., 65536, 8197120 <unfinished ...> [pid 7082] 12:28:28 pwrite(23, " +0800\" www.douban.com \"POST /se"..., 65536, 8262656) = 65536 <0.000103> [pid 7087] 12:28:28 pwrite(23, "0 12971 \"http://www.douban.com/g"..., 65536, 8328192 <unfinished ...> [pid 7081] 12:28:28 pwrite(23, ".0 (compatible; MSIE 7.0; Window"..., 65536, 8393728) = 65536 <0.000065> In order to get better performance, the chunk server should merge the continuous sequential write operations into larger ones. -- - Davies |
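The merging Davies proposes could, in outline, buffer writes that land immediately after the previous one and flush them with a single large pwrite(). The sketch below is purely illustrative: the struct and function names are invented, and the real chunkserver I/O path is more involved.

    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define COALESCE_MAX (1024*1024)      /* flush after 1 MiB of merged data */

    typedef struct {
        int fd;                           /* chunk file descriptor */
        off_t start;                      /* file offset of buffered data */
        size_t len;                       /* bytes currently buffered */
        char buf[COALESCE_MAX];
    } wbuf;

    /* flush buffered bytes with one large pwrite() */
    static int wbuf_flush(wbuf *w) {
        if (w->len > 0) {
            if (pwrite(w->fd, w->buf, w->len, w->start) < 0) return -1;
            w->len = 0;
        }
        return 0;
    }

    /* buffer a write; merge it if it continues exactly where the last one ended */
    static int wbuf_write(wbuf *w, const void *data, size_t len, off_t off) {
        int contiguous = (w->len > 0 && off == w->start + (off_t)w->len);
        if (w->len > 0 && (!contiguous || w->len + len > COALESCE_MAX)) {
            if (wbuf_flush(w) < 0) return -1;   /* non-adjacent or full: flush first */
        }
        if (len >= COALESCE_MAX) {              /* too big to buffer: write through */
            return pwrite(w->fd, data, len, off) < 0 ? -1 : 0;
        }
        if (w->len == 0) w->start = off;
        memcpy(w->buf + w->len, data, len);
        w->len += len;
        return 0;
    }

A caller would keep one wbuf per open chunk and call wbuf_flush() on close or fsync; the trade-off is extra memory per descriptor and slightly delayed durability.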
From: li_ndows <li_...@12...> - 2012-02-22 06:54:51
|
By the way, what's your e-mail address? And do you have any other contact? At 2012-02-21 14:02:18, "Michał Borychowski" <mic...@ge...> wrote: Hi! hdrbuff is an auxiliary space for the header of an incoming packet. The first 8 bytes are read into this place - the system gets the length of the whole packet from the header. The system then allocates a new block of memory and reads the remaining part of the packet into it. The packet field in the packetstruct structure is a pointer to the memory of the packet data. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: li_ndows [mailto:li_...@12...] Sent: Tuesday, February 21, 2012 3:24 AM To: moo...@li... Subject: [Moosefs-users] ask Hello, I want to ask some questions about the struct matocsserventry: 1. What does the array hdrbuff[8] mean? 2. What does the struct packetstruct->packet contain? THX |
From: li_ndows <li_...@12...> - 2012-02-22 06:53:18
|
So what's the packet's structure? It contains too much information, and I'm not very clear about it. Thank you. At 2012-02-21 14:02:18, "Michał Borychowski" <mic...@ge...> wrote: Hi! hdrbuff is an auxiliary space for the header of an incoming packet. The first 8 bytes are read into this place - the system gets the length of the whole packet from the header. The system then allocates a new block of memory and reads the remaining part of the packet into it. The packet field in the packetstruct structure is a pointer to the memory of the packet data. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: li_ndows [mailto:li_...@12...] Sent: Tuesday, February 21, 2012 3:24 AM To: moo...@li... Subject: [Moosefs-users] ask Hello, I want to ask some questions about the struct matocsserventry: 1. What does the array hdrbuff[8] mean? 2. What does the struct packetstruct->packet contain? THX |
From: Reinis R. <r...@ro...> - 2012-02-21 14:41:51
|
> I’ve been asked to compare GlusterFS with MooseFS, please feel free to comment if you agree or have more info to add! - One of the main drawbacks (or features - depends how you look at it) in my opinion is the way each of the products achieves file system consistency. GlusterFS has a "lazy replication" approach - i.e. it doesn't actively (at least up to 3.2.x) check whether a file has enough copies/replicas until you access that particular file or directory. For rarely accessed files this can lead to missing data: if one (or more) storage nodes fail while the files on the "mirror" nodes are not accessed, no spare copies are made to other bricks, so if the node holding the single remaining copy fails, the file is gone. While the targeted self-heal "solution" exists ( http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/ ), it can be problematic if the filesystem is huge. MooseFS on the other hand keeps online data about each chunk and tries to ensure enough copies exist at all times. The only tricky thing (at least for me) was to adjust the replication speed so it doesn't take too long (replicating a million files over a month in the case of 2 copies can lead to the same result as with glusterfs) but isn't so aggressive that it cripples normal operations. - The second drawback (or feature again) is the centralised MooseFS meta server. While for file meta operations it gives superior speed compared to gluster (which needs to physically access each file on the backends for every uncached stat()), it doesn't scale very well (there is no way to split it) - if your file system starts to grow beyond 100 million files, the metadata file and the memory consumption can go beyond what a single server instance can handle. This approach also makes the MooseFS component startup quite slow (both meta and the storage nodes) - the chunkservers have to traverse and checksum each chunk, versus gluster where the cluster (nodes) can be onlined/downed pretty much instantly. My 2 cents. rr |
From: Michał B. <mic...@ge...> - 2012-02-21 06:26:57
|
Hi Stas! What platform do you use? The solution with /etc/fstab works only on Linux platforms (tested on Debian). On other platforms you need to prepare a script in /usr/local/etc/rc.d which will run mfsmount with needed options. And on Linux you need _netdev option, as written on http://www.moosefs.org/reference-guide.html in "Mounting the File System" section. Kind regards Michal From: Stas Oskin [mailto:sta...@gm...] Sent: Sunday, January 15, 2012 10:01 AM To: MooseFS Subject: Connection timed out at mfsmount on boot Hi. We have the mfsmount set in fstab, but noticed it never properly works on boot. When the system comes up, it just logs: Jan 15 06:55:51 ovsa1 mfsmount[6926]: file: 63934, index: 0, chunk: 24383640, version: 1 - writeworker: connection with (C0A80214:9422) was timed out (unfinished writes: 5; try counter: 1) Jan 15 06:55:52 ovsa1 mfsmount[6926]: writeworker: write error: 26 Jan 15 06:55:55 ovsa1 last message repeated 3 times Jan 15 06:55:56 ovsa1 mfschunkserver[6252]: testing chunk: /dfs1/75/chunk_00000000015C3675_00000001.mfs Jan 15 06:55:56 ovsa1 mfsmount[6926]: writeworker: write error: 26 Jan 15 06:56:06 ovsa1 last message repeated 8 times Jan 15 06:56:06 ovsa1 mfschunkserver[6252]: testing chunk: /dfs5/A7/chunk_00000000015C38A7_00000001.mfs Jan 15 06:56:07 ovsa1 mfsmount[6926]: writeworker: write error: 26 Jan 15 06:56:16 ovsa1 last message repeated 8 times Jan 15 06:56:17 ovsa1 mfschunkserver[6252]: testing chunk: /dfs4/AE/chunk_00000000015BDEAE_00000001.mfs Jan 15 06:56:17 ovsa1 mfsmount[6926]: writeworker: write error: 26 Jan 15 06:56:26 ovsa1 last message repeated 7 times Jan 15 06:56:27 ovsa1 mfschunkserver[6252]: testing chunk: /dfs3/DB/chunk_00000000015C42DB_00000001.mfs Jan 15 06:56:27 ovsa1 mfsmount[6926]: writeworker: write error: 26 Jan 15 06:56:37 ovsa1 last message repeated 10 times Jan 15 06:56:37 ovsa1 mfschunkserver[6252]: testing chunk: /dfs2/BB/chunk_00000000015C2EBB_00000001.mfs Jan 15 06:56:37 ovsa1 mfsmount[6926]: writeworker: write error: 26 Jan 15 06:56:46 ovsa1 last message repeated 9 times Jan 15 06:56:47 ovsa1 mfschunkserver[6252]: testing chunk: /dfs1/90/chunk_00000000015C3690_00000001.mfs Jan 15 06:56:48 ovsa1 mfsmount[6926]: writeworker: write error: 26 Also, the mfsmount never retries to connect after these errors. Any idea how to resolve this, perhaps by increasing the timeout, or making mfsmount reconnect until it succeeds? Thanks in advance. |
From: Michał B. <mic...@ge...> - 2012-02-21 06:02:20
|
Hi! hdrbuff is an auxiliary space for the header of an incoming packet. The first 8 bytes are read into this place - the system gets the length of the whole packet from the header. The system then allocates a new block of memory and reads the remaining part of the packet into it. The packet field in the packetstruct structure is a pointer to the memory of the packet data. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: li_ndows [mailto:li_...@12...] Sent: Tuesday, February 21, 2012 3:24 AM To: moo...@li... Subject: [Moosefs-users] ask Hello, I want to ask some questions about the struct matocsserventry: 1. What does the array hdrbuff[8] mean? 2. What does the struct packetstruct->packet contain? THX |
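In other words, the wire format starts with an 8-byte header: a 32-bit packet type followed by a 32-bit payload length. A sketch of the read pattern (function and variable names are invented, and I believe the fields are in network byte order - verify against the source):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>                      /* ntohl */

    /* read the 8-byte header, then allocate and read the remaining payload */
    static uint8_t *read_packet(int sock, uint32_t *type, uint32_t *length) {
        uint8_t hdrbuff[8];                     /* same role as matocsserventry's hdrbuff */
        uint32_t t, l;
        if (read(sock, hdrbuff, 8) != 8) return NULL;  /* sketch: no partial-read loop */
        memcpy(&t, hdrbuff, 4);
        memcpy(&l, hdrbuff + 4, 4);
        *type = ntohl(t);
        *length = ntohl(l);
        uint8_t *packet = malloc(*length ? *length : 1);  /* the packetstruct data pointer */
        if (packet == NULL) return NULL;
        if (*length > 0 && read(sock, packet, *length) != (ssize_t)*length) {
            free(packet);
            return NULL;
        }
        return packet;
    }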
From: Michał B. <mic...@ge...> - 2012-02-21 05:43:50
|
Hi Quenten! As no one else has replied I'd like to refer you to the previous discussion comparing MooseFS to GlusterFS at: http://sourceforge.net/mailarchive/message.php?msg_id=27728098 As far as support goes - there are also paid plans, please have a look at: http://www.coretechnology.pl/support.php Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Quenten Grasso [mailto:QG...@on...] Sent: Wednesday, February 15, 2012 1:48 AM To: 'moo...@li...' Subject: [Moosefs-users] MooseFS vs GlusterFS Hi Everyone, I've been asked to compare GlusterFS with MooseFS, please feel free to comment if you agree or have more info to add! So far some of the current advantages of MooseFS are, 1) Block-based storage with object-type storage resilience - what I mean by this is that since you're not using RAID controllers, UREs become a very minor issue. 2) Capacity is dynamically expandable/distributed when a new chunk server is added 3) Deleted files have a "trash bin" 4) Snapshots! 5) If I have 36 disks I gain a benefit from all 36 disks instead of just a 2 x mirror or 2 x server stripe (in gluster 3.3beta) 6) File/folder-level replication "goal" options Benefits of GlusterFS so far are, 1) Distributed hash tables (no metadata server) - I know this is being worked on with HA metadata servers 2) Paid support channels via Red Hat (paid support is a big benefit for our business) as we are looking to use this as our primary storage platform. Also, does MooseFS have an option for automatic snapshots, e.g. as a form of backups? And does MooseFS have an offsite replication feature to another MooseFS installation, e.g. a 2nd DC, via snapshots or otherwise? Does MooseFS use granular locking - if we were running VMs and we had a server failure while it's replicating, will everything continue to function? Regards, Quenten Grasso |
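To make the "goal" and snapshot items above concrete, the client-side tools look roughly like this; the paths are examples, so check the mfstools documentation for the exact options in your version:

    # set the replication goal (number of copies) recursively on a folder
    mfssetgoal -r 3 /mnt/mfs/important-data
    mfsgetgoal /mnt/mfs/important-data

    # keep deleted files in the trash bin for a week (604800 seconds)
    mfssettrashtime -r 604800 /mnt/mfs/important-data

    # create a copy-on-write snapshot of a directory
    mfsmakesnapshot /mnt/mfs/important-data /mnt/mfs/snapshots/important-20120215

Note that snapshots are taken on demand with mfsmakesnapshot; as of this thread there is no built-in scheduler or offsite replication, so "automatic" snapshots would have to come from cron or similar.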