From: Steve W. <st...@pu...> - 2012-04-13 16:41:21
Hi, About a week ago we started seeing poor performance on one of our MooseFS installations. We use MooseFS for user home directories and two users had problems with the gvfsd-metadata daemon rapidly creating/writing/deleting files in ~/.local/share/gvfs-metadata. A quick search on the Web shows this to be a common problem. The recommended fix is to kill the gvfsd-metadata daemon and then remove the ~/.local/share/gvfs-metadata directory which will be recreated later as needed. Hopefully this information will come in handy for anyone else encountering the same problem. Regards, Steve |
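For reference, the fix described above boils down to something like the following, run as the affected user (a rough sketch; gvfs recreates the directory on demand the next time it is needed):

    # stop the runaway daemon and remove its metadata store
    pkill -f gvfsd-metadata
    rm -rf ~/.local/share/gvfs-metadata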
From: Quenten G. <QG...@on...> - 2012-04-11 23:23:23
Hi Ricardo, Well, not quite what I meant; I was referring to how this could be adapted to the MooseFS model, since it also uses chunks to store data, and it could be very useful to a lot of users who are trying to find a reasonable way to store virtual machines on MFS without the woes of metadata snapshotting. Qemu-RBD is userspace vs. Ceph's RBD, which is kernel. Regards, Quenten Grasso -----Original Message----- From: Ricardo J. Barberis [mailto:ric...@da...] Sent: Thursday, 12 April 2012 7:02 AM To: moo...@li... Subject: Re: [Moosefs-users] RBD On Wednesday 11/04/2012, Quenten Grasso wrote: > Hey All, > > Has anyone tried using Ceph's Rados Block Device/QEMU-RBD on MooseFS? > > http://www.linux-kvm.com/content/cephrbd-block-driver-patches-qemu-kvm > > Regards, > Quenten Um, no but I assume rbd only works with Ceph? (from the website you linked: "rbd is described as a linux kernel driver that is part of the ceph file system module"). I mean, Ceph is a distributed fault tolerant filesystem, just like MooseFS, it's not a "regular" filesystem like ext3, ext4, xfs. Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ ------------------------------------------------------------------------------ Better than sec? Nothing is better than sec when it comes to monitoring Big Data applications. Try Boundary one-second resolution app monitoring today. Free. http://p.sf.net/sfu/Boundary-dev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Ricardo J. B. <ric...@da...> - 2012-04-11 21:02:38
On Wednesday 11/04/2012, Quenten Grasso wrote: > Hey All, > > Has anyone tried using Ceph's Rados Block Device/QEMU-RBD on MooseFS? > > http://www.linux-kvm.com/content/cephrbd-block-driver-patches-qemu-kvm > > Regards, > Quenten Um, no but I assume rbd only works with Ceph? (from the website you linked: "rbd is described as a linux kernel driver that is part of the ceph file system module"). I mean, Ceph is a distributed fault tolerant filesystem, just like MooseFS, it's not a "regular" filesystem like ext3, ext4, xfs. Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |
From: Steve W. <st...@pu...> - 2012-04-11 13:55:09
On 03/30/2012 09:45 PM, Quenten Grasso wrote: > Also as a side note I went though my small test cluster which is 6 machines with 2 disks each (1ru servers) and replaced all of the disks which seemed to have a higher then average fsync than the other disks and this increased my clusters performance considerably and I'm not currently running any raid. I guess this may go without saying however thought I might mention it :) Is this a common problem... having disks (all the same model from the same manufacturer) which exhibit very different fsync and read/write performance? I've had to replace a few disks (they're Hitachi 3TB HDS723030ALA640 drives) and the ones I've replaced have fairly dismal performance compared to the other disks in the server. For example, the daily average read transfer speeds range from 58MB/s to 72MB/s but on the replacement drive it is 29MB/s. Similarly, write speeds range from 3.4MB/s to 3.8MB/s (but one outlier is giving 2.8MB/s) and the replacement drive is showing 2.3MB/s. Average fsync times range between 50,675us and 58,882us while the replacement drive shows 76,516us. Since the numbers were so far off from the average, I replaced one drive two more times with the same results. Could it just be that different batches of the same disk drive have such widely varying characteristics? Or is there something in how MooseFS rebalances that could be causing this? When I replace a drive, I don't mark it for removal first; I just leave myself undergoal for a while until all necessary chunks are replicated to the replaced drive. Thanks, Steve > > Quenten Grasso |
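One quick way to compare drives in the same chassis outside of MooseFS (a rough sketch; it assumes hdparm and GNU dd are installed, the device and mount names are examples, and the dd test writes a small temporary file on the chunk directory):

    # sequential read timing, per physical device backing a chunk directory
    hdparm -t /dev/sdb
    # synchronous small-block writes, roughly comparable to the fsync numbers in the CGI
    dd if=/dev/zero of=/mfs/01/dd-test bs=4k count=1000 oflag=dsync
    rm /mfs/01/dd-test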
From: Quenten G. <QG...@on...> - 2012-04-11 03:39:56
Hey All, Has anyone tried using Ceph's Rados Block Device/QEMU-RBD on MooseFS? http://www.linux-kvm.com/content/cephrbd-block-driver-patches-qemu-kvm Regards, Quenten |
From: Steve W. <st...@pu...> - 2012-04-10 15:36:41
On 04/10/2012 11:32 AM, Steve Wilson wrote: > On 04/10/2012 11:24 AM, Steve Thompson wrote: >> On Tue, 10 Apr 2012, Steve Wilson wrote: >> >>> But I do make a nightly backup of this 24TB MooseFS volume just in case >>> something happens while we're running only one chunk server. >> You can back up 24 TB in one night? I'd like to hear how you do that. >> There's no way I can get anywhere close to that backup performance. >> >> Steve >> > And it usually only takes about 30 minutes. :-) Just to be very clear: this was a joke! Even our incremental backups each night take several hours to complete depending on how much data has changed since the previous night's backup. Steve |
From: Steve W. <st...@pu...> - 2012-04-10 15:32:48
On 04/10/2012 11:24 AM, Steve Thompson wrote: > On Tue, 10 Apr 2012, Steve Wilson wrote: > >> But I do make a nightly backup of this 24TB MooseFS volume just in case >> something happens while we're running only one chunk server. > You can back up 24 TB in one night? I'd like to hear how you do that. > There's no way I can get anywhere close to that backup performance. > > Steve > And it usually only takes about 30 minutes. :-) We don't run full backups so we're only moving files that have been modified over the network using rsync. Before each backup run we take a snapshot on the backup server so we end up with the logical equivalent of a full backup each night. It also helps that this MFS volume only has about 10TB of data on it even though it has capacity for 22TB. Steve |
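A minimal sketch of that nightly cycle, assuming the backup copy lives in a ZFS dataset called tank/mfs-backup and the MooseFS volume is mounted at /mnt/mfs on a client (both names are placeholders):

    # on the backup server, before each run: snapshot the previous night's state
    zfs snapshot tank/mfs-backup@$(date +%Y-%m-%d)
    # then pull only files changed since the last run from the MooseFS mount
    rsync -aH --delete mfsclient:/mnt/mfs/ /tank/mfs-backup/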
From: Steve T. <sm...@cb...> - 2012-04-10 15:24:54
On Tue, 10 Apr 2012, Steve Wilson wrote: > But I do make a nightly backup of this 24TB MooseFS volume just in case > something happens while we're running only one chunk server. You can back up 24 TB in one night? I'd like to hear how you do that. There's no way I can get anywhere close to that backup performance. Steve |
From: Steve W. <st...@pu...> - 2012-04-10 15:09:14
Thanks, Peter! I'll plan to take this chunk server offline one evening and run memtest on it. Unfortunately, I don't have a spare system that can take the load from this one so we won't have any redundancy while running memtest. But I do make a nightly backup of this 24TB MooseFS volume just in case something happens while we're running only one chunk server. Steve On 04/08/2012 09:10 AM, Peter aNeutrino wrote: > Hi Steve :) > It looks like memory or mainboard/controller issue. > > However there is some probability that this machine has all hard > drives broken. > (eg. by temperature or by some shaking/vibration) > > If I were you I would mark this machine for maintenance and make full > tests on it: > - first we need to make sure that all data are with desired level of > safety by marking all disks in /etc/mfshdd.cfg config file with > asterisk like this: > */mfs/01 > */mfs/02 > ... > - restart the chunk server service (eg. /etc/init.d/mfs-chunkserver > restart) > - wait for all chunks from this machine to be replicated > - stop the chunk server service > > ....and then make tests eg.: > - "memtest" for memory > -- if error occours replace RAM test it again > -- if error occurs again so it looks like mainboard issue. > > - "badblock" for harddrives you can test all disk together parallel > but I would run them after I moved disks into different machine. > (just move them before you run memtest so you can run memtest and > badblock in the same tame) > > if all test PASS (no errors) than I would try to replace controller > and mainboard. > and put tested memory and disks into this new mainbord/controller (or > even CPU) > > That is for one server case. With big installations like 100+ such > errors of hardware can occur every week/month and it is worth to have > better procedure, which our Technical Support would create for you :) > > Good luck with testing and please share with us when you fix it :) > aNeutrino :) > -- > Peter aNeutrino > http://pl.linkedin.com/in/aneutrino > +48 602 302 132 > Evangelist and Product Manager ofhttp://MooseFS.org > at Core Technology sp. z o.o. > > > > > On Thu, Apr 5, 2012 at 22:29, Steve Wilson <st...@pu... > <mailto:st...@pu...>> wrote: > > Hi, > > One of my chunk servers will log a CRC error from time to time > like the > following: > > Apr 4 17:29:10 massachusetts mfschunkserver[2224]: > write_block_to_chunk: > file:/mfs/08/27/chunk_00000000066B5D27_00000001.mfs - crc error > > Is the most likely cause faulty system memory? Or disk > controller? We > get an error about every two days or so and spread across most of the > drives: > > # IP path (switch to name) chunks last > error > 9 128.210.48.62:9422:/mfs/01/ 934123 2012-03-28 17:41 > 10 128.210.48.62:9422:/mfs/02/ 931903 2012-03-23 21:28 > 11 128.210.48.62:9422:/mfs/03/ 888712 2012-03-30 19:13 > 12 128.210.48.62:9422:/mfs/04/ 931661 2012-04-01 03:01 > 13 128.210.48.62:9422:/mfs/05/ 935681 no errors > 14 128.210.48.62:9422:/mfs/06/ 929248 2012-04-04 13:41 > 15 128.210.48.62:9422:/mfs/07/ 929592 2012-03-30 19:02 > 16 128.210.48.62:9422:/mfs/08/ 829446 2012-04-04 17:29 > > Thanks, > Steve > > ------------------------------------------------------------------------------ > Better than sec? Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free. > http://p.sf.net/sfu/Boundary-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... 
> <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- Steven M. Wilson, Systems and Network Manager Markey Center for Structural Biology Purdue University (765) 496-1946 |
From: Markus K. <mar...@tu...> - 2012-04-10 09:49:43
On Monday 09 April 2012, Steve Wilson wrote: > On 04/09/2012 02:51 PM, Steve Thompson wrote: > > mfs 1.6.20 > > > > I have marked a disk for removal with * in the mfshdd.cfg file. There are > > approximately 1.6 million chunks on this disk, and so far about 1.3 > > million chunks have been replicated elsewhere. All files have goal = 2. > > > > When viewing the CGI with Firefox on a Windows box, it shows 1.3 million > > in blue on the valid copies = 3 column of the goal = 2 line and nothing > > in the valid copies = 1 line. This number is increasing. > > > > When viewing the CGI with Firefox on a Linux box, it shows 300,000 in > > orange on the valid copies = 1 column of the goal = 2 line, and nothing > > in the valid copies = 3 line. This number is decreasing. > > > > The CGI is running on the mfsmaster in both cases. Why the difference? > > > > BTW, this has taken 5 days so far. A little slow, methinks. > > > > Steve > > Regarding the speed issue, have you modified the default > CHUNKS_WRITE_REP_LIMIT and CHUNKS_READ_REP_LIMIT in mfsmaster.cfg? > Following suggestions on the list, I have mine permanently set to: > CHUNKS_WRITE_REP_LIMIT = 5 > CHUNKS_READ_REP_LIMIT = 15 > instead of the default: > CHUNKS_WRITE_REP_LIMIT = 1 > CHUNKS_READ_REP_LIMIT = 5 I tried these settings today and they worked very well. Thanks for sharing your configuration. A related question came up for me today: how do I find out whether all chunks have been migrated off a chunk server? At the moment I can see: chunk server marked for removal: chunks=4545 All chunks state matrix 'regular': goal=2: 1=1, 2=8028 All chunks state matrix 'all': goal=2: 1=1, 2=3483, 3=4545 So I know that I can remove the chunk server now, because the chunks state matrix 'all' shows an overgoal of 4545, which is exactly the number of chunks on my chunk server. At the moment I am only running some tests with goal=2. In production we will have different goals for different types of data. My plan is to use all desktop and laboratory hosts in our environment as chunk servers. Once a year we need to reboot our laboratory hosts into Windows for about two weeks, which means I have to mark all partitions on these chunk servers for removal at the same time. I guess that will make it hard to say which one is ready to reboot, at least not before all chunks of all chunk servers are migrated. It would be very nice to see the status in the disk status table. At the moment I can only see the status 'ok' or 'marked for removal'. It would be nice to see something like 'marked for removal, migration in progress' and 'marked for removal, migration finished'. Markus -- Markus Köberl Graz University of Technology Signal Processing and Speech Communication Laboratory E-mail: mar...@tu... |
From: siddharth c. <sid...@gm...> - 2012-04-10 08:05:23
Hi, I have 4 chunkservers, 2 on each of two subnets, and I want replication to take place from one subnet to the other. Is there any way I can specify this in MooseFS so that the data is replicated across different networks and not just on the same network? Also, I would like to know the default replication policy of MooseFS and whether there is any way to modify it. Thanks, Sid |
From: Rujia L. <ruj...@gm...> - 2012-04-10 07:49:02
Hi all! The reference guide says "If it's not possible to create a separate disk partition, filesystems in files can be used", followed by some instructions. I can imagine that there will be a drop in performance (does anyone have statistical data about this?), but I don't know whether there are other drawbacks. I'm asking because I want to make use of some existing computers with partially empty partitions, and I think it's a bit risky to resize the partitions. Thanks! - Rujia |
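For readers unfamiliar with the approach mentioned in the guide, a hedged sketch of carving chunkserver space out of an existing partition without repartitioning (sizes and paths are examples, and the mount point must then be listed in /etc/mfshdd.cfg):

    # create a 100 GB container file on the existing partition
    dd if=/dev/zero of=/data/mfschunks.img bs=1M count=102400
    mkfs.ext4 /data/mfschunks.img      # mkfs asks for confirmation since this is not a block device
    mkdir -p /mnt/mfschunks
    mount -o loop /data/mfschunks.img /mnt/mfschunks
    chown mfs:mfs /mnt/mfschunks
    # then add /mnt/mfschunks to /etc/mfshdd.cfg and restart the chunkserver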
From: Steve T. <sm...@cb...> - 2012-04-09 19:10:57
On Mon, 9 Apr 2012, Steve Wilson wrote: > Regarding the speed issue, have you modified the default > CHUNKS_WRITE_REP_LIMIT and CHUNKS_READ_REP_LIMIT in mfsmaster.cfg? No, for now I have chosen not to change the defaults, because we are having issues with performance for regular application I/O, and in any event I don't really want to have to restart the master while we are in production for an unknown effect. It's just replicating at a much slower rate than I expected. Steve -- ---------------------------------------------------------------------------- Steve Thompson, Cornell School of Chemical and Biomolecular Engineering smt AT cbe DOT cornell DOT edu "186,282 miles per second: it's not just a good idea, it's the law" ---------------------------------------------------------------------------- |
From: Steve W. <st...@pu...> - 2012-04-09 18:58:49
On 04/09/2012 02:51 PM, Steve Thompson wrote: > mfs 1.6.20 > > I have marked a disk for removal with * in the mfshdd.cfg file. There are > approximately 1.6 million chunks on this disk, and so far about 1.3 > million chunks have been replicated elsewhere. All files have goal = 2. > > When viewing the CGI with Firefox on a Windows box, it shows 1.3 million > in blue on the valid copies = 3 column of the goal = 2 line and nothing in > the valid copies = 1 line. This number is increasing. > > When viewing the CGI with Firefox on a Linux box, it shows 300,000 in > orange on the valid copies = 1 column of the goal = 2 line, and nothing in > the valid copies = 3 line. This number is decreasing. > > The CGI is running on the mfsmaster in both cases. Why the difference? > > BTW, this has taken 5 days so far. A little slow, methinks. > > Steve Regarding the speed issue, have you modified the default CHUNKS_WRITE_REP_LIMIT and CHUNKS_READ_REP_LIMIT in mfsmaster.cfg? Following suggestions on the list, I have mine permanently set to: CHUNKS_WRITE_REP_LIMIT = 5 CHUNKS_READ_REP_LIMIT = 15 instead of the default: CHUNKS_WRITE_REP_LIMIT = 1 CHUNKS_READ_REP_LIMIT = 5 Steve |
From: Steve T. <sm...@cb...> - 2012-04-09 18:52:05
mfs 1.6.20 I have marked a disk for removal with * in the mfshdd.cfg file. There are approximately 1.6 million chunks on this disk, and so far about 1.3 million chunks have been replicated elsewhere. All files have goal = 2. When viewing the CGI with Firefox on a Windows box, it shows 1.3 million in blue on the valid copies = 3 column of the goal = 2 line and nothing in the valid copies = 1 line. This number is increasing. When viewing the CGI with Firefox on a Linux box, it shows 300,000 in orange on the valid copies = 1 column of the goal = 2 line, and nothing in the valid copies = 3 line. This number is decreasing. The CGI is running on the mfsmaster in both cases. Why the difference? BTW, this has taken 5 days so far. A little slow, methinks. Steve -- ---------------------------------------------------------------------------- Steve Thompson, Cornell School of Chemical and Biomolecular Engineering smt AT cbe DOT cornell DOT edu "186,282 miles per second: it's not just a good idea, it's the law" ---------------------------------------------------------------------------- |
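For reference, a sketch of the two pieces discussed in this thread as they would look in a stock 1.6.x install (exact file paths and init script names vary by distribution): the asterisk in mfshdd.cfg that marks a disk for removal, and the replication limits in mfsmaster.cfg that govern how fast chunks are copied off it.

    # /etc/mfshdd.cfg on the chunkserver: '*' marks the disk for removal
    */mfs/05
    /mfs/06

    # /etc/mfsmaster.cfg on the master: raise these to speed up re-replication
    CHUNKS_WRITE_REP_LIMIT = 5
    CHUNKS_READ_REP_LIMIT = 15

    # both daemons need to re-read their configuration, e.g.
    /etc/init.d/mfs-chunkserver restart
    /etc/init.d/mfs-master restart    # note: restarting the master in production, as discussed above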
From: Fabien G. <fab...@gm...> - 2012-04-09 13:17:30
Hello, On Mon, Apr 9, 2012 at 1:45 PM, Yu Yu <tw...@gm...> wrote: > Hi, all > I just wonder whether we could divide the metadata into parts and store them > in different masters. > Is that possible? As far as I know, it is not possible. But I'm curious: what would be the benefit of such an infrastructure? Metadata spread over n masters means robustness divided by n... Fabien |
From: Yu Yu <tw...@gm...> - 2012-04-09 11:45:43
Hi, all I just wonder whether we could divide the metadata into parts and store them in different masters. Is that possible? Any suggestion is appreciated! Thx~ |
From: Peter a. <pio...@co...> - 2012-04-08 13:16:05
Many thanks Steve :) we fixed it and we will have it in our next release this month :) cheers aNeutrino -- Peter aNeutrino http://pl.linkedin.com/in/aneutrino+48 602 302 132 Evangelist and Product Manager of http://MooseFS.org at Core Technology sp. z o.o. On Thu, Apr 5, 2012 at 16:31, Steve Wilson <st...@pu...> wrote: > Hi, > > The initialization of the DATADIR, MFSUSER, and MFSGROUP variables in > the Debian init scripts is incorrect in the latest release of MooseFS > (1.6.24). Here are the original lines: > > DATADIR=$(sed -e 's/^DATA_PATH[ ]*=[ ]*\([^ ]*\)[ > > ]*$/\1/' "$CFGFILE") > > MFSUSER=$(sed -e 's/^WORKING_USER[ ]*=[ ]*\([^ ]*\)[ > > ]*$/\1/' "$CFGFILE") > > MFSGROUP=$(sed -e 's/^WORKING_GROUP[ ]*=[ ]*\([^ ]*\)[ > > ]*$/\1/' "$CFGFILE") > This places the contents of the whole config file into each variable > with one or more lines modified by sed. > > These lines should be changed to something like the following: > > DATADIR=$(grep "^DATA_PATH" "$CFGFILE" | tail -1 | sed -e > > 's/^DATA_PATH[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/') > > MFSUSER=$(grep "^WORKING_USER" "$CFGFILE" | tail -1 | sed -e > > 's/^WORKING_USER[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/') > > MFSGROUP=$(grep "^WORKING_GROUP" "$CFGFILE" | tail -1 | sed -e > > 's/^WORKING_GROUP[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/') > > Thanks, > Steve > > > ------------------------------------------------------------------------------ > Better than sec? Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free. > http://p.sf.net/sfu/Boundary-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Peter a. <pio...@co...> - 2012-04-08 13:11:26
Hi Steve :) It looks like a memory or mainboard/controller issue. However, there is some probability that this machine has all of its hard drives broken (e.g. by temperature or by some shaking/vibration). If I were you I would mark this machine for maintenance and run full tests on it: - first we need to make sure that all data are at the desired level of safety by marking all disks in the /etc/mfshdd.cfg config file with an asterisk like this: */mfs/01 */mfs/02 ... - restart the chunk server service (e.g. /etc/init.d/mfs-chunkserver restart) - wait for all chunks from this machine to be replicated - stop the chunk server service ...and then run tests, e.g.: - "memtest" for memory -- if an error occurs, replace the RAM and test again -- if the error occurs again, it looks like a mainboard issue. - "badblocks" for hard drives; you can test all disks together in parallel, but I would run them after moving the disks into a different machine (just move them before you run memtest so you can run memtest and badblocks at the same time) if all tests PASS (no errors) then I would try replacing the controller and mainboard, and put the tested memory and disks into this new mainboard/controller (or even CPU). That is for the one-server case. With big installations of 100+ machines, such hardware errors can occur every week/month and it is worth having a better procedure, which our Technical Support would create for you :) Good luck with testing and please share with us when you fix it :) aNeutrino :) -- Peter aNeutrino http://pl.linkedin.com/in/aneutrino +48 602 302 132 Evangelist and Product Manager of http://MooseFS.org at Core Technology sp. z o.o. On Thu, Apr 5, 2012 at 22:29, Steve Wilson <st...@pu...> wrote: > Hi, > > One of my chunk servers will log a CRC error from time to time like the > following: > > Apr 4 17:29:10 massachusetts mfschunkserver[2224]: > write_block_to_chunk: > file:/mfs/08/27/chunk_00000000066B5D27_00000001.mfs - crc error > > Is the most likely cause faulty system memory? Or disk controller? We > get an error about every two days or so and spread across most of the > drives: > > # IP path (switch to name) chunks last error > 9 128.210.48.62:9422:/mfs/01/ 934123 2012-03-28 17:41 > 10 128.210.48.62:9422:/mfs/02/ 931903 2012-03-23 21:28 > 11 128.210.48.62:9422:/mfs/03/ 888712 2012-03-30 19:13 > 12 128.210.48.62:9422:/mfs/04/ 931661 2012-04-01 03:01 > 13 128.210.48.62:9422:/mfs/05/ 935681 no errors > 14 128.210.48.62:9422:/mfs/06/ 929248 2012-04-04 13:41 > 15 128.210.48.62:9422:/mfs/07/ 929592 2012-03-30 19:02 > 16 128.210.48.62:9422:/mfs/08/ 829446 2012-04-04 17:29 > > Thanks, > Steve > > > ------------------------------------------------------------------------------ > Better than sec? Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free. > http://p.sf.net/sfu/Boundary-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
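A hedged sketch of the disk-testing step above, once all chunks have been replicated off the machine (device names are examples; badblocks is destructive in write mode, so only the read-only and non-destructive forms are shown):

    # after marking the disks with '*' and letting replication finish:
    /etc/init.d/mfs-chunkserver stop
    # read-only surface scan of each drive (add -n for a non-destructive read-write test)
    badblocks -sv /dev/sdb
    badblocks -sv /dev/sdc
    # memory is best tested by booting memtest86+ from the boot menu or a live image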
From: Steve W. <st...@pu...> - 2012-04-05 20:29:45
Hi, One of my chunk servers will log a CRC error from time to time like the following: Apr 4 17:29:10 massachusetts mfschunkserver[2224]: write_block_to_chunk: file:/mfs/08/27/chunk_00000000066B5D27_00000001.mfs - crc error Is the most likely cause faulty system memory? Or disk controller? We get an error about every two days or so and spread across most of the drives: # IP path (switch to name) chunks last error 9 128.210.48.62:9422:/mfs/01/ 934123 2012-03-28 17:41 10 128.210.48.62:9422:/mfs/02/ 931903 2012-03-23 21:28 11 128.210.48.62:9422:/mfs/03/ 888712 2012-03-30 19:13 12 128.210.48.62:9422:/mfs/04/ 931661 2012-04-01 03:01 13 128.210.48.62:9422:/mfs/05/ 935681 no errors 14 128.210.48.62:9422:/mfs/06/ 929248 2012-04-04 13:41 15 128.210.48.62:9422:/mfs/07/ 929592 2012-03-30 19:02 16 128.210.48.62:9422:/mfs/08/ 829446 2012-04-04 17:29 Thanks, Steve |
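Based on the log line format shown above, a quick way to see whether the CRC errors cluster on one disk or are spread across all of them (the log path varies by distribution; this assumes the chunkserver logs to syslog):

    # count crc errors per chunk directory
    grep 'crc error' /var/log/syslog | grep -o 'file:/mfs/[0-9]*' | sort | uniq -c
    # errors spread evenly across all disks point at RAM or the controller rather than a single drive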
From: Steve W. <st...@pu...> - 2012-04-05 14:31:35
Hi, The initialization of the DATADIR, MFSUSER, and MFSGROUP variables in the Debian init scripts is incorrect in the latest release of MooseFS (1.6.24). Here are the original lines: > DATADIR=$(sed -e 's/^DATA_PATH[ ]*=[ ]*\([^ ]*\)[ > ]*$/\1/' "$CFGFILE") > MFSUSER=$(sed -e 's/^WORKING_USER[ ]*=[ ]*\([^ ]*\)[ > ]*$/\1/' "$CFGFILE") > MFSGROUP=$(sed -e 's/^WORKING_GROUP[ ]*=[ ]*\([^ ]*\)[ > ]*$/\1/' "$CFGFILE") This places the contents of the whole config file into each variable with one or more lines modified by sed. These lines should be changed to something like the following: > DATADIR=$(grep "^DATA_PATH" "$CFGFILE" | tail -1 | sed -e > 's/^DATA_PATH[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/') > MFSUSER=$(grep "^WORKING_USER" "$CFGFILE" | tail -1 | sed -e > 's/^WORKING_USER[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/') > MFSGROUP=$(grep "^WORKING_GROUP" "$CFGFILE" | tail -1 | sed -e > 's/^WORKING_GROUP[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/') Thanks, Steve |
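For context, the original lines misbehave because sed without -n echoes every line of the config file whether or not the substitution matched. An equivalent single-command fix (an untested sketch mirroring the pattern in the message above) would be:

    # -n suppresses automatic printing; the trailing 'p' prints only lines
    # where the substitution actually matched, so only the value ends up in the variable
    DATADIR=$(sed -n -e 's/^DATA_PATH[ \t]*=[ \t]*\([^ \t]*\)[ \t]*$/\1/p' "$CFGFILE" | tail -1)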
From: Steve W. <st...@pu...> - 2012-04-04 17:29:50
On 04/03/2012 03:56 PM, Steve Thompson wrote: > OK, so now you have a nice and shiny and absolutely massive MooseFS file > system. How do you back it up? > > I am using Bacula and divide the MFS file system into separate areas (eg > directories beginning with a, those beginning with b, and so on) and use > several different chunkservers to run the backup jobs, on the theory that > at least some of the data is local to the backup process. But this still > leaves the vast majority of data to travel the network twice (a planned > dedicated storage network has not yet been implemented). This results in > pretty bad backup performance and high network load. Any clever ideas? > > Steve We have four 22TB and one 14TB MooseFS volumes that we backup onto disk-based backup servers. We used to use rsnapshot but now we use rsync in combination with ZFS snapshots. Each evening before our backup run, we take a snapshot on the backup filesystem and label it with the date. Then we run rsync on the volumes being backed up and only what has been modified since the previous backup is transfered over the network. The result is the equivalent of taking a full backup each night and it's very easy to recover data. I also use ZFS compression and dedup to help conserve space on our backup servers. The dedup option is especially helpful when a user decides to rename a large directory; rsync may have to bring it across the network and write it to the filesystem but ZFS will recognize the data as duplicates of already stored data. Steve |
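A sketch of the ZFS properties mentioned above, set on a hypothetical backup dataset (dedup in particular needs a lot of RAM for its table, so treat this as an illustration rather than a recommendation):

    zfs set compression=on tank/mfs-backup
    zfs set dedup=on tank/mfs-backup
    # inspect the dated snapshots and the space actually saved
    zfs list -t snapshot -r tank/mfs-backup
    zfs get compressratio tank/mfs-backup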
From: Quenten G. <QG...@on...> - 2012-04-04 11:00:33
Very interesting, Ken ☺ Just thinking out loud: it'd be great to see some kind of SSD integration so it was possible to have multiple tiers of storage as well. The SSDs would be useful for gold virtual machine images / data that needs to be read or written fast, with maybe a replica kept on standard SATA disks. Instead of using ZFS's caching we could use a couple of cheaper SSDs in each node to act as caches, similarly to how the high-end gear (NetApp etc.) works these days. I agree the metadata server is the likely bottleneck, but we are also planning to try storing our data inside large files (virtual machine images). I guess I'm not quite sure whether the performance of any distributed file system in general is where I expect yet, even using 10GbE or 20Gbps Infiniband/IPoIB. I found this an interesting read: http://forums.gentoo.org/viewtopic-p-6875454.html?sid=d4299cd9365550ac3940a0f8a5beff46 imo. Regards, Quenten Grasso From: Ken [mailto:ken...@gm...] Sent: Wednesday, 4 April 2012 5:21 PM To: Wang Jian Cc: moo...@li...<mailto:moo...@li...>; Quenten Grasso Subject: Re: [Moosefs-users] Backup strategies more detail here: http://sourceforge.net/mailarchive/message.php?msg_id=28664530 -Ken |
From: Quenten G. <QG...@on...> - 2012-04-04 07:34:42
Hi Ben, Moose users, Thanks for your reply. I've been thinking about using ZFS; however, as I understand it, the ZFS benefits worth leveraging are data corruption prevention (checksumming of data via scrubs) and compression. As I understand it, MFS has had checksumming built in for a while now, from the MFS FUSE mount (across the network) all the way down to the disk level, so whenever we access data it is checksummed, which in itself is great. This means we don't "need" to use RAID controllers for data protection, and if we use a goal of 2 or more we get redundancy and data protection for little extra space. I've done some basic math on using ZFS, for example 4 chunk servers with 8 drives each, using 2TB drives: ZFS raidz2 with a goal of 2 vs. single disks and a goal of 3 gives ZFS/GOAL2 = 24 usable TB and GOAL3 = 21 usable TB. So clearly there is a space saving of around 3 TB using ZFS... Reliability: with the ZFS configuration, if more than 1 physical server fails, or more than 5 disks fail at the same time (before being replaced) within 2 chassis, our cluster is offline. Versus goal: with a goal of 3, if more than 2 disks holding the same data set across the total number of servers fail, or 2 physical servers fail at any one time, our cluster is effectively offline; also keeping in mind, the chances of this happening should get pretty low as you increase your number of servers and drives. Speed: raw speed of a single SATA disk is around 75 IOPS and around 100MB/s throughput. With RAIDZ2 I imagine we would achieve the speed of 6 of the 8 disks, i.e. 450 IOPS or 600MB/s per server; with a goal of 3, we would achieve a write of 75 IOPS and ~100MB/s per server. For single threads I think the ZFS system should certainly have higher throughput; however, for multiple threads I think the multiple paths in and out with a goal of 3 would win. At this stage it always seems like a trade-off of either reliability or performance: pick one? Reviewing these examples, the middle solution would be RAIDZ1 with a goal of 3; that would be the closest we could get to both performance and redundancy... This changes again when we look at scale. Now let's expand to 40-80 servers using RAIDZ2: with 40 servers in a single volume and a goal of 3, which 2 of the 40 servers could fail at any one time without me losing access to any data? The chunks are effectively "randomly" placed among the cluster, so I guess we would need to increase the overall goal, once again trading usable space for reliability..... For a non-RAID/ZFS setup with 40 servers / 320 hard disks, 3 of which have my data on them, which 2 can fail without me losing access to my data? :) So I guess this raises a few more questions about which solution is the most effective... In the case of ZFS raidz2/1 solutions, what becomes the acceptable ratio of servers to goal from a reliability point of view? Or, using individual disks plus goal, does scaling the number of servers give us an increase in performance at the cost of reliability? Also, from a performance point of view, the higher the goal the more throughput, although this may work against us if the cluster is "very busy" across all of the servers. So I guess we are back to where we started: we still have to pick one, performance or reliability? Any thoughts? Also thanks for reading, if you made it :) Regards, Quenten Grasso -----Original Message----- From: Allen, Benjamin S [mailto:bs...@la...]
Sent: Wednesday, 4 April 2012 9:13 AM To: Quenten Grasso Cc: moo...@li... Subject: Re: [Moosefs-users] Backup strategies Quenten, I'm using MFS with ZFS. I use ZFS for RAIDZ2 (RAID6) and hot sparing on each chunkserver. I then only set a goal of 2 in MFS. I also have a "scratch" directory within MFS that is set to goal 1 and not backed up to tape. I attempt to get my users to organize their data between their data directory and scratch to minimize goal overhead for data that doesn't require it. Overhead of my particular ZFS setup is ~15% lost to parity and hot spares. Although I was a bit bold with my RAIDZ2 configuration, which will cause rebuild time to be quite long in trade off for lower overhead. This was done with the knowledge that RAIDZ2 can withstand two drive failures, and MFS would have another copy of the data on another chunk server. I have not however tested how well MFS handles a ZFS pool degraded with data loss. I'm guessing I would take the chunkserver daemon offline, get the ZFS pool into a rebuilding state, and restart the CS. I'm guessing the CS will see missing chunks, mark them undergoal, and re-replicate them. A more cautious RAID set would be closer to 30% overhead. Then of course with goal 2 you lose another 50%. A side benefit of using ZFS is on-the-fly compression and de-dup of your chunkserver, L2ARC SSD read cache (although it turns out most of my cache hits are from L1ARC, i.e. memory), and to speed up writes you can add a pair of ZIL SSDs. For disaster recovery you always need to be extra careful when relying on a single system todo your live and DR sites. In this case you're asking for MFS to push data to another site. You'd then be relying on a single piece of software that could equally corrupt your live site and your DR site. Ben On Apr 3, 2012, at 3:36 PM, Quenten Grasso wrote: > Hi All, > > How large is your metadata & logs at this stage? Just trying to mitigate this exact issue myself. > > I was planning to create hourly snapshots (as I understand the way they are implemented they don't affect performance unlike a vmware snapshot please correct me if I'm wrong) and copy these offsite to another mfs/cluster using rsync w/ snapshots on the other site with maybe a goal of 2 at most and using a goal of 3 on site. > > I guess the big issue here is storing our data 5 times in total vs. tapes however I guess it would be "quicker" to recover from a "failure" having a running cluster on site b vs a tape backup and dare i say it (possibly) more reliable then a singular tape and tape library. > > Also I've been tossing up the idea of using ZFS for storage, reason I say this is because I know mfs has built in check-summing/aka zfs and all that good stuff, however having to store our data 3 times + 2 times is expensive maybe storing it 2+1 instead would work out at scale by using the likes of ZFS for reliability then using mfs for purely for availability instead of reliability & availability as well... > > Would be great if there was away to use some kind of rack awareness to say at all times keep goal of 1 or 2 of the data offsite on our 2nd mfs cluster. When I was speaking to one of the staff of the mfs support team they mentioned this was kind of being developed for another customer, So we may see some kind of solution? > > Quenten > > -----Original Message----- > From: Allen, Benjamin S [mailto:bs...@la...] > Sent: Wednesday, 4 April 2012 7:17 AM > To: moo...@li... > Subject: Re: [Moosefs-users] Backup strategies > > Similar plan here. 
> > I have a dedicated server for MFS backup purposes. We're using IBM's Tivoli to push to a large GPFS archive system backed with a SpectraLogic tape library. I have the standard Linux Tivoli client running on this host. One key with Tivoli is to use the DiskCacheMethod, and set the disk cache to be somewhere on local disk instead of the root of the mfs mount. > > Also I backup mfsmaster's files every hour and retain at least a week of these backups. From the various horror stories we've heard on this mailing list, all have been from corrupt metadata files from mfsmaster. It's a really good idea to limit your exposure to this. > > For good measure I also backup metalogger's files every night. > > One dream for backup of MFS is to somehow utilize the metadata files dumped by mfsmaster or metalogger, to be able to do a metadata "diff". The goal of this process would be to produce a list of all objects in the filesystem that have changed between two metadata.mfs.back files. Thus you could feed your backup client a list of files, without having the need for the client to inspect the filesystem itself. This idea is inspired by ZFS' diff functionality. Where ZFS can show the changes between a snapshot and the live filesystem. > > Ben > > On Apr 3, 2012, at 2:18 PM, Atom Powers wrote: > >> I've been thinking about this for a while and I think occam's razor (the >> simplest ideas is the best) might provide some guidance. >> >> MooseFS is fault-tolerant; so you can mitigate "hardware failure". >> MooseFS provides a trash space, so you can mitigate "accidental >> deletion" events. >> MooseFS provides snapshots, so you can mitigate "corruption" events. >> >> The remaining scenario, "somebody stashes a nuclear warhead in the >> locker room", requires off-site backup. If "rack awareness" was able to >> guarantee chucks in multiple locations, then that would mitigate this >> event. Since it can't I'm going to be sending data off-site using a >> large LTO5 tape library managed by Bacula on a server that also runs >> mfsmount of the entire system. >> >> On 04/03/2012 12:56 PM, Steve Thompson wrote: >>> OK, so now you have a nice and shiny and absolutely massive MooseFS file >>> system. How do you back it up? >>> >>> I am using Bacula and divide the MFS file system into separate areas (eg >>> directories beginning with a, those beginning with b, and so on) and use >>> several different chunkservers to run the backup jobs, on the theory that >>> at least some of the data is local to the backup process. But this still >>> leaves the vast majority of data to travel the network twice (a planned >>> dedicated storage network has not yet been implemented). This results in >>> pretty bad backup performance and high network load. Any clever ideas? >>> >>> Steve >> >> -- >> -- >> Perfection is just a word I use occasionally with mustard. >> --Atom Powers-- >> Director of IT >> DigiPen Institute of Technology >> +1 (425) 895-4443 >> >> ------------------------------------------------------------------------------ >> Better than sec? Nothing is better than sec when it comes to >> monitoring Big Data applications. Try Boundary one-second >> resolution app monitoring today. Free. >> http://p.sf.net/sfu/Boundary-dev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > ------------------------------------------------------------------------------ > Better than sec? 
Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free. > http://p.sf.net/sfu/Boundary-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > ------------------------------------------------------------------------------ > Better than sec? Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free. > http://p.sf.net/sfu/Boundary-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
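For readers following the space math in Quenten's message above, the numbers work out roughly as follows (assuming 4 servers with 8 x 2 TB drives each, an 8-disk raidz2 being 6 data + 2 parity disks, and ignoring filesystem overhead):

    # Raw capacity:  4 servers x 8 drives x 2 TB          = 64 TB
    # RAIDZ2, goal 2: 4 x 6 x 2 TB = 48 TB, 48 / 2        = 24 TB usable
    # Plain disks, goal 3: 64 / 3                        ~= 21 TB usable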
From: Ken <ken...@gm...> - 2012-04-04 07:21:27
more detail here: http://sourceforge.net/mailarchive/message.php?msg_id=28664530 -Ken On Wed, Apr 4, 2012 at 1:24 PM, Wang Jian <jia...@re...> wrote: > For desasters such as earthquake, fire, and flood, off-site backup is > must-have, and any RAID level solution is sheer futile. > > As Atom Powers said, Moosefs should provide off-site backup mechanism. > > Months before, my colleague Ken Shao sent in some patches to provide > "class" based goal mechanism, which enables us to define different "class" > to differentiate physical location and backup data in other physical > locations (i.e, 500km - 1000km away). > > The design principles are: > > 1. We can afford to lose some data during the backup point and disaster > point. In this case, old data or old version of data are intact, new data > or new version of data are lost. > 2. Because cluster-to-cluster backup has many drawbacks (performance, > consistency, etc), the duplication from one location to another location > should be within a single cluster. > 3. Location-to-location duplication should not happen when writing, or the > performance/latency is hurt badly. So, the goal recovery mechanism can be > and should be used (CS to CS duplication). And to improve bandwidth > efficiency and avoid peek load time, duplication can be controlled in > timely manner, and dirty/delta algorithm should be used. > 4. Meta data should be logger to the backup site. When disaster happens, > the backup site can be promoted to master site. > > The current rack awareness implementation is not the very thing we are > looking forward to. > > Seriously speaking, as 10gb ether connection is getting cheaper and > cheaper, the traditional rack awareness is rendered useless. > > > 于 2012/4/4 7:13, Allen, Benjamin S 写道: > >> Quenten, >> >> I'm using MFS with ZFS. I use ZFS for RAIDZ2 (RAID6) and hot sparing on >> each chunkserver. I then only set a goal of 2 in MFS. I also have a >> "scratch" directory within MFS that is set to goal 1 and not backed up to >> tape. I attempt to get my users to organize their data between their data >> directory and scratch to minimize goal overhead for data that doesn't >> require it. >> >> Overhead of my particular ZFS setup is ~15% lost to parity and hot >> spares. Although I was a bit bold with my RAIDZ2 configuration, which will >> cause rebuild time to be quite long in trade off for lower overhead. This >> was done with the knowledge that RAIDZ2 can withstand two drive failures, >> and MFS would have another copy of the data on another chunk server. I have >> not however tested how well MFS handles a ZFS pool degraded with data loss. >> I'm guessing I would take the chunkserver daemon offline, get the ZFS pool >> into a rebuilding state, and restart the CS. I'm guessing the CS will see >> missing chunks, mark them undergoal, and re-replicate them. >> >> A more cautious RAID set would be closer to 30% overhead. >> >> Then of course with goal 2 you lose another 50%. >> >> A side benefit of using ZFS is on-the-fly compression and de-dup of your >> chunkserver, L2ARC SSD read cache (although it turns out most of my cache >> hits are from L1ARC, i.e. memory), and to speed up writes you can add a >> pair of ZIL SSDs. >> >> For disaster recovery you always need to be extra careful when relying on >> a single system todo your live and DR sites. In this case you're asking for >> MFS to push data to another site. You'd then be relying on a single piece >> of software that could equally corrupt your live site and your DR site. 
>> >> Ben >> >> On Apr 3, 2012, at 3:36 PM, Quenten Grasso wrote: >> >> Hi All, >>> >>> How large is your metadata& logs at this stage? Just trying to mitigate >>> this exact issue myself. >>> >>> >>> I was planning to create hourly snapshots (as I understand the way they >>> are implemented they don't affect performance unlike a vmware snapshot >>> please correct me if I'm wrong) and copy these offsite to another >>> mfs/cluster using rsync w/ snapshots on the other site with maybe a goal of >>> 2 at most and using a goal of 3 on site. >>> >>> I guess the big issue here is storing our data 5 times in total vs. >>> tapes however I guess it would be "quicker" to recover from a "failure" >>> having a running cluster on site b vs a tape backup and dare i say it >>> (possibly) more reliable then a singular tape and tape library. >>> >>> Also I've been tossing up the idea of using ZFS for storage, reason I >>> say this is because I know mfs has built in check-summing/aka zfs and all >>> that good stuff, however having to store our data 3 times + 2 times is >>> expensive maybe storing it 2+1 instead would work out at scale by using the >>> likes of ZFS for reliability then using mfs for purely for availability >>> instead of reliability& availability as well... >>> >>> >>> Would be great if there was away to use some kind of rack awareness to >>> say at all times keep goal of 1 or 2 of the data offsite on our 2nd mfs >>> cluster. When I was speaking to one of the staff of the mfs support team >>> they mentioned this was kind of being developed for another customer, So we >>> may see some kind of solution? >>> >>> Quenten >>> >>> -----Original Message----- >>> From: Allen, Benjamin S [mailto:bs...@la...] >>> Sent: Wednesday, 4 April 2012 7:17 AM >>> To: moosefs-users@lists.**sourceforge.net<moo...@li...> >>> Subject: Re: [Moosefs-users] Backup strategies >>> >>> Similar plan here. >>> >>> I have a dedicated server for MFS backup purposes. We're using IBM's >>> Tivoli to push to a large GPFS archive system backed with a SpectraLogic >>> tape library. I have the standard Linux Tivoli client running on this host. >>> One key with Tivoli is to use the DiskCacheMethod, and set the disk cache >>> to be somewhere on local disk instead of the root of the mfs mount. >>> >>> Also I backup mfsmaster's files every hour and retain at least a week of >>> these backups. From the various horror stories we've heard on this mailing >>> list, all have been from corrupt metadata files from mfsmaster. It's a >>> really good idea to limit your exposure to this. >>> >>> For good measure I also backup metalogger's files every night. >>> >>> One dream for backup of MFS is to somehow utilize the metadata files >>> dumped by mfsmaster or metalogger, to be able to do a metadata "diff". The >>> goal of this process would be to produce a list of all objects in the >>> filesystem that have changed between two metadata.mfs.back files. Thus you >>> could feed your backup client a list of files, without having the need for >>> the client to inspect the filesystem itself. This idea is inspired by ZFS' >>> diff functionality. Where ZFS can show the changes between a snapshot and >>> the live filesystem. >>> >>> Ben >>> >>> On Apr 3, 2012, at 2:18 PM, Atom Powers wrote: >>> >>> I've been thinking about this for a while and I think occam's razor (the >>>> simplest ideas is the best) might provide some guidance. >>>> >>>> MooseFS is fault-tolerant; so you can mitigate "hardware failure". 
>>>> MooseFS provides a trash space, so you can mitigate "accidental >>>> deletion" events. >>>> MooseFS provides snapshots, so you can mitigate "corruption" events. >>>> >>>> The remaining scenario, "somebody stashes a nuclear warhead in the >>>> locker room", requires off-site backup. If "rack awareness" was able to >>>> guarantee chucks in multiple locations, then that would mitigate this >>>> event. Since it can't I'm going to be sending data off-site using a >>>> large LTO5 tape library managed by Bacula on a server that also runs >>>> mfsmount of the entire system. >>>> >>>> On 04/03/2012 12:56 PM, Steve Thompson wrote: >>>> >>>>> OK, so now you have a nice and shiny and absolutely massive MooseFS >>>>> file >>>>> system. How do you back it up? >>>>> >>>>> I am using Bacula and divide the MFS file system into separate areas >>>>> (eg >>>>> directories beginning with a, those beginning with b, and so on) and >>>>> use >>>>> several different chunkservers to run the backup jobs, on the theory >>>>> that >>>>> at least some of the data is local to the backup process. But this >>>>> still >>>>> leaves the vast majority of data to travel the network twice (a planned >>>>> dedicated storage network has not yet been implemented). This results >>>>> in >>>>> pretty bad backup performance and high network load. Any clever ideas? >>>>> >>>>> Steve >>>>> >>>> -- >>>> -- >>>> Perfection is just a word I use occasionally with mustard. >>>> --Atom Powers-- >>>> Director of IT >>>> DigiPen Institute of Technology >>>> +1 (425) 895-4443 >>>> >>>> ------------------------------**------------------------------** >>>> ------------------ >>>> Better than sec? Nothing is better than sec when it comes to >>>> monitoring Big Data applications. Try Boundary one-second >>>> resolution app monitoring today. Free. >>>> http://p.sf.net/sfu/Boundary-**dev2dev<http://p.sf.net/sfu/Boundary-dev2dev> >>>> ______________________________**_________________ >>>> moosefs-users mailing list >>>> moosefs-users@lists.**sourceforge.net<moo...@li...> >>>> https://lists.sourceforge.net/**lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>> >>> >>> ------------------------------**------------------------------** >>> ------------------ >>> Better than sec? Nothing is better than sec when it comes to >>> monitoring Big Data applications. Try Boundary one-second >>> resolution app monitoring today. Free. >>> http://p.sf.net/sfu/Boundary-**dev2dev<http://p.sf.net/sfu/Boundary-dev2dev> >>> ______________________________**_________________ >>> moosefs-users mailing list >>> moosefs-users@lists.**sourceforge.net<moo...@li...> >>> https://lists.sourceforge.net/**lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>> >>> ------------------------------**------------------------------** >>> ------------------ >>> Better than sec? Nothing is better than sec when it comes to >>> monitoring Big Data applications. Try Boundary one-second >>> resolution app monitoring today. Free. >>> http://p.sf.net/sfu/Boundary-**dev2dev<http://p.sf.net/sfu/Boundary-dev2dev> >>> ______________________________**_________________ >>> moosefs-users mailing list >>> moosefs-users@lists.**sourceforge.net<moo...@li...> >>> https://lists.sourceforge.net/**lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>> >> >> ------------------------------**------------------------------** >> ------------------ >> Better than sec? 
Nothing is better than sec when it comes to >> monitoring Big Data applications. Try Boundary one-second >> resolution app monitoring today. Free. >> http://p.sf.net/sfu/Boundary-**dev2dev<http://p.sf.net/sfu/Boundary-dev2dev> >> ______________________________**_________________ >> moosefs-users mailing list >> moosefs-users@lists.**sourceforge.net<moo...@li...> >> https://lists.sourceforge.net/**lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >> >> > |