From: Steve T. <sm...@cb...> - 2012-04-19 22:50:55
|
On Tue, 17 Apr 2012, Michał Borychowski wrote:
> Please observe that there are two "view modes" in the Info tab:
> - 'all' - All chunks state matrix (counts 'regular' hdd space and 'marked for removal' hdd space)
> - 'regular' - Regular chunks state matrix (counts only 'regular' hdd space)

Yes, you are of course correct. I screwed up; how embarrassing!

Steve |
From: Ricardo J. B. <ric...@da...> - 2012-04-19 17:31:13
|
El Jueves 19/04/2012, Michał Borychowski escribió: > Hi Ricardo! > > As we can see the problem lies between fuse (mount.fuse) and plain mount > command: > > # strace -f -s512 -e trace=execve mount -t fuse mfsmount /mnt/mfs -o # > mfsmaster=mfsmaster,ro,suid,nodev,[...] > execve("/bin/mount", ["mount", "-t", "fuse", "mfsmount", "/mnt/mfs", "-o", > "mfsmaster=mfsmaster,ro,suid,nodev,[...]"], [/* 12 vars */]) = 0 [pid > 2218] execve("/sbin/mount.fuse", ["/sbin/mount.fuse", "mfsmount", > "/mnt/mfs", "-o", "ro,nodev,mfsmaster=mfsmaster,[...]"], [/* 8 vars */]) = > 0 [pid 2218] execve("/bin/sh", ["/bin/sh", "-c", "'mfsmount' '/mnt/mfs' > '-o' > 'ro,nodev,mfsmaster=mfsmaster,[...]'"], [/* 9 vars */]) = 0 [pid 2218] > execve("/usr/bin/mfsmount", ["mfsmount", "/mnt/mfs", "-o", > "ro,nodev,mfsmaster=mfsmaster,[...]"], [/* 8 vars */]) = 0 Nice stracing, I'll save it for future use :) > As you can see mount strips "suid" option from flag before calling > mount.fuse (assuming that it's the default). > But it's not the default for mount.fuse. > > I think it should be reported both to fuse and util-linux projects (the > latter maintains the mount utility) and let them cooperate. OK, I'll see if I can do it. > Probably the best solution would be to always pass user-specified "suid" > and "dev" flags to mount.XXX (on mount side). Agreed, that seems to be the behaviour for other mount options also, e.g. I was experimenting with barrier=1/barrier=0, barrier=1 is the defaul for ext4 but it seems it also gets passed by mount to mount.ext4. > Kind regards > Michal Thank you! > -----Original Message----- > From: Ricardo J. Barberis [mailto:ric...@da...] > Sent: Friday, April 13, 2012 10:36 PM > To: moo...@li... > Subject: [Moosefs-users] Question about mounting suid,dev > > Hello, list. > > I need to mount a filesystem with suid enabled and found I can't do it from > fstab: mo matter what options I set there, the mount always uses > nosuid,nodev which are fuse's default. > > This is in my /etc/fstab: > > mfsmount /opt fuse > defaults,noatime,suid,nodev,_netdev,mfsmaster=master,mfssubfolder=/ 0 0 > > And this is in /proc/mounts: > > mfs#master:9421 /opt fuse > rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_othe >r 0 0 > > However, if I mount it with: > > mfsmount -o noatime,suid,nodev /opt -H master -S / > > I got this in /proc/mounts: > > mfs#master:9421 /opt fuse > rw,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0 > > I have tried putting "suid" in /etc/mfs/mfsmount.cfg on the client but it > makes no difference when using fstab. > > All this is with CentOS 5.8 64 bits servers and a CentOS 5.8 32 bits client > but it also happens in a CentOS 5.8 64 bits client, all of them running > mfs-1.6.24 installed from RepoForge RPMs and fuse 2.7.4 from standard > CentOS repositories. > > > According to fuse's README: > > "Filesystems are mounted with '-onodev,nosuid' by default, which can only > be overridden by a privileged user." > > But I'm mounting MFS as root from command line. > > Is this a bug in MFS? fuse? me? > > Thank you, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |
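A side note on a workaround implicit in the thread above: /bin/mount strips the "suid" flag before handing off to mount.fuse, while a direct mfsmount invocation keeps it, so one option until mount/util-linux pass the flag through is to skip fstab and mount at boot time with mfsmount itself. A minimal sketch, assuming a CentOS-style rc.local and the same mount point and master name used in this thread:

# /etc/rc.d/rc.local (hypothetical) - mount MooseFS directly so that
# /bin/mount never gets a chance to drop the 'suid' option.
if ! grep -q ' /opt fuse ' /proc/mounts; then
    /usr/bin/mfsmount /opt -H master -S / -o noatime,suid,nodev
fi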
From: Schmurfy <sch...@gm...> - 2012-04-19 14:24:14
|
Hi,

I did some tests with MooseFS on VMs and "standard" machines (and I really love this project!), but now I need to decide on some rackable hardware to install in a datacenter, and that's where things become annoying since I have no idea what to choose :(

I was thinking of starting with two nodes: one of them being the master as well as a chunk server, and the other a backup master and a chunk server too. So I suppose both servers will need a decent amount of memory and a not-too-slow CPU, but they don't require a high-end multi-core processor, and of course they will need some disks. I think we will start with less than 10 TB on the cluster and replication goals set to 2.

We are currently using some ProLiant DL160 G6 servers, which have 4 cores running at 2.0 GHz, but judging by the theoretical needs of our MooseFS machines I think these servers are way too powerful for this usage.

Can anyone give me some advice on what would be a good start?

Julien Ammous |
From: Michał B. <mic...@co...> - 2012-04-19 08:38:58
|
Hi Ricardo! As we can see the problem lies between fuse (mount.fuse) and plain mount command: # strace -f -s512 -e trace=execve mount -t fuse mfsmount /mnt/mfs -o # mfsmaster=mfsmaster,ro,suid,nodev,[...] execve("/bin/mount", ["mount", "-t", "fuse", "mfsmount", "/mnt/mfs", "-o", "mfsmaster=mfsmaster,ro,suid,nodev,[...]"], [/* 12 vars */]) = 0 [pid 2218] execve("/sbin/mount.fuse", ["/sbin/mount.fuse", "mfsmount", "/mnt/mfs", "-o", "ro,nodev,mfsmaster=mfsmaster,[...]"], [/* 8 vars */]) = 0 [pid 2218] execve("/bin/sh", ["/bin/sh", "-c", "'mfsmount' '/mnt/mfs' '-o' 'ro,nodev,mfsmaster=mfsmaster,[...]'"], [/* 9 vars */]) = 0 [pid 2218] execve("/usr/bin/mfsmount", ["mfsmount", "/mnt/mfs", "-o", "ro,nodev,mfsmaster=mfsmaster,[...]"], [/* 8 vars */]) = 0 As you can see mount strips "suid" option from flag before calling mount.fuse (assuming that it's the default). But it's not the default for mount.fuse. I think it should be reported both to fuse and util-linux projects (the latter maintains the mount utility) and let them cooperate. Probably the best solution would be to always pass user-specified "suid" and "dev" flags to mount.XXX (on mount side). Kind regards Michal -----Original Message----- From: Ricardo J. Barberis [mailto:ric...@da...] Sent: Friday, April 13, 2012 10:36 PM To: moo...@li... Subject: [Moosefs-users] Question about mounting suid,dev Hello, list. I need to mount a filesystem with suid enabled and found I can't do it from fstab: mo matter what options I set there, the mount always uses nosuid,nodev which are fuse's default. This is in my /etc/fstab: mfsmount /opt fuse defaults,noatime,suid,nodev,_netdev,mfsmaster=master,mfssubfolder=/ 0 0 And this is in /proc/mounts: mfs#master:9421 /opt fuse rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0 However, if I mount it with: mfsmount -o noatime,suid,nodev /opt -H master -S / I got this in /proc/mounts: mfs#master:9421 /opt fuse rw,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0 I have tried putting "suid" in /etc/mfs/mfsmount.cfg on the client but it makes no difference when using fstab. All this is with CentOS 5.8 64 bits servers and a CentOS 5.8 32 bits client but it also happens in a CentOS 5.8 64 bits client, all of them running mfs-1.6.24 installed from RepoForge RPMs and fuse 2.7.4 from standard CentOS repositories. According to fuse's README: "Filesystems are mounted with '-onodev,nosuid' by default, which can only be overridden by a privileged user." But I'm mounting MFS as root from command line. Is this a bug in MFS? fuse? me? Thank you, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ ---------------------------------------------------------------------------- -- For Developers, A Lot Can Happen In A Second. Boundary is the first to Know...and Tell You. Monitor Your Applications in Ultra-Fine Resolution. Try it FREE! http://p.sf.net/sfu/Boundary-d2dvs2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Ken <ken...@gm...> - 2012-04-19 01:01:44
|
hi, list

We found some crashes in mfschunkserver (1.6.24) while stopping it. The test script may look weird:

while true:
    select a chunkserver
    stop it
    start it
    sleep 1 second

Almost 20 MiB/s is being written to the system while the script runs. A little crazy, perhaps.

The crash stack:

#0 0x00000000004139e7 in masterconn_replicationfinished (status=0 '\0', packet=0x269b170) at masterconn.c:351
351     if (eptr->mode==DATA || eptr->mode==HEADER) {
#0 0x00000000004139e7 in masterconn_replicationfinished (status=0 '\0', packet=0x269b170) at masterconn.c:351
#1 0x0000000000403b6e in job_pool_check_jobs (jpool=0x7f39b43ddea0) at bgjobs.c:338
#2 0x0000000000403f17 in job_pool_delete (jpool=0x7f39b43ddea0) at bgjobs.c:365
#3 0x0000000000414b31 in masterconn_term () at masterconn.c:864
#4 0x0000000000419173 in destruct () at ../mfscommon/main.c:312
#5 0x000000000041b60f in main (argc=1, argv=0x7fffc810dda0) at ../mfscommon/main.c:1162

# mfschunkserver -v
version: 1.6.24

I think masterconn_term() causes the crash:

void masterconn_term(void) {
    packetstruct *pptr,*paptr;
    // syslog(LOG_INFO,"closing %s:%s",MasterHost,MasterPort);
    masterconn *eptr = masterconnsingleton;

    if (eptr->mode!=FREE && eptr->mode!=CONNECTING) {
        tcpclose(eptr->sock);
        if (eptr->inputpacket.packet) {
            free(eptr->inputpacket.packet);
        }
        pptr = eptr->outputhead;
        while (pptr) {
            if (pptr->packet) {
                free(pptr->packet);
            }
            paptr = pptr;
            pptr = pptr->next;
            free(paptr);
        }
    }
    free(eptr);
    masterconnsingleton = NULL;
    job_pool_delete(jpool);   // <-- this is too late
    free(MasterHost);
    free(MasterPort);
    free(BindHost);
}

So we move that line to the start. Patch below:

--- a/mfschunkserver/masterconn.c
+++ b/mfschunkserver/masterconn.c
@@ -842,6 +842,8 @@ void masterconn_term(void) {
 //	syslog(LOG_INFO,"closing %s:%s",MasterHost,MasterPort);
 	masterconn *eptr = masterconnsingleton;
 
+	job_pool_delete(jpool);
+
 	if (eptr->mode!=FREE && eptr->mode!=CONNECTING) {
 		tcpclose(eptr->sock);
 
@@ -861,7 +863,7 @@ void masterconn_term(void) {
 
 	free(eptr);
 	masterconnsingleton = NULL;
-	job_pool_delete(jpool);
+
 	free(MasterHost);
 	free(MasterPort);
 	free(BindHost);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ patch end

The crash did not happen again with the patch, and the test ran for almost 12 hours.

HTH
-Ken |
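Ken's test loop is given only as pseudocode; a rough bash equivalent is sketched below. The host names, SSH access and init script path are assumptions for illustration, not details taken from Ken's setup:

#!/bin/bash
# Repeatedly stop and start a randomly chosen chunkserver while the
# cluster is under write load, as in the stress test described above.
CHUNKSERVERS=(cs1 cs2 cs3)

while true; do
    cs=${CHUNKSERVERS[$RANDOM % ${#CHUNKSERVERS[@]}]}
    ssh "$cs" '/etc/init.d/mfs-chunkserver stop'
    ssh "$cs" '/etc/init.d/mfs-chunkserver start'
    sleep 1
done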
From: Elliot F. <efi...@gm...> - 2012-04-18 22:57:35
|
I get a steady stream of: 'file: NNN, index: NNN, chunk: NNN, version: NNN - writeworker: connection with (XXXXXXXX:PPPP) was timed out (unfinished writes: Y; try counter: Z)' Where Y is usually 1 or 2 and Z is almost always 1. From the FAQ, I know this is informational and not an error. My question is: If I'm getting a steady stream of these, is it because I've got too much load on my chunkservers (i.e. too many network connections, too many pending writes?) Would adding more chunkservers alleviate this situation? MFS seems to only use 10 write threads on a chunkserver. Would it make sense to increase the number of write threads to equal the number of disks on the chunkserver? Most of my chunkservers have 16 disks. One of them has 36 disks. TIA, Elliot |
From: Peter a. <pio...@co...> - 2012-04-18 18:09:46
|
Well done Steve :) Several days ago I have detected broken ethernet port on switch (thanks to MooseFS's CRC checking on client site) - looks like one could use MooseFS for testing the whole hardware infrastructure in datacenter :p cheers aNeutrino :) -- Peter aNeutrino http://pl.linkedin.com/in/aneutrino+48 602 302 132 Evangelist and Product Manager of http://MooseFS.org at Core Technology sp. z o.o. On Wed, Apr 18, 2012 at 17:13, Steve Wilson <st...@pu...> wrote: > On 04/10/2012 11:09 AM, Steve Wilson wrote: > > Thanks, Peter! > > I'll plan to take this chunk server offline one evening and run memtest on > it. Unfortunately, I don't have a spare system that can take the load from > this one so we won't have any redundancy while running memtest. But I do > make a nightly backup of this 24TB MooseFS volume just in case something > happens while we're running only one chunk server. > > Steve > > Just a follow up on this... I finally was able to take the chunk server > out of service last night and run memtest on it. Within a few minutes, > memtest detected a memory error. So the CRC errors reported by MFS turned > out to be caused by faulty memory in this case. > > > Steve > > On 04/08/2012 09:10 AM, Peter aNeutrino wrote: > > Hi Steve :) > It looks like memory or mainboard/controller issue. > > However there is some probability that this machine has all hard drives > broken. > (eg. by temperature or by some shaking/vibration) > > If I were you I would mark this machine for maintenance and make full > tests on it: > - first we need to make sure that all data are with desired level of > safety by marking all disks in /etc/mfshdd.cfg config file with asterisk > like this: > */mfs/01 > */mfs/02 > ... > - restart the chunk server service (eg. /etc/init.d/mfs-chunkserver > restart) > - wait for all chunks from this machine to be replicated > - stop the chunk server service > > ....and then make tests eg.: > - "memtest" for memory > -- if error occours replace RAM test it again > -- if error occurs again so it looks like mainboard issue. > > - "badblock" for harddrives you can test all disk together parallel but > I would run them after I moved disks into different machine. > (just move them before you run memtest so you can run memtest and badblock > in the same tame) > > if all test PASS (no errors) than I would try to replace controller and > mainboard. > and put tested memory and disks into this new mainbord/controller (or even > CPU) > > That is for one server case. With big installations like 100+ such > errors of hardware can occur every week/month and it is worth to have > better procedure, which our Technical Support would create for you :) > > Good luck with testing and please share with us when you fix it :) > aNeutrino :) > > > -- > Peter aNeutrino http://pl.linkedin.com/in/aneutrino+48 602 302 132 > > Evangelist and Product Manager of http://MooseFS.org > at Core Technology sp. z o.o. > > > > > On Thu, Apr 5, 2012 at 22:29, Steve Wilson <st...@pu...> wrote: > >> Hi, >> >> One of my chunk servers will log a CRC error from time to time like the >> following: >> >> Apr 4 17:29:10 massachusetts mfschunkserver[2224]: >> write_block_to_chunk: >> file:/mfs/08/27/chunk_00000000066B5D27_00000001.mfs - crc error >> >> Is the most likely cause faulty system memory? Or disk controller? 
We >> get an error about every two days or so and spread across most of the >> drives: >> >> # IP path (switch to name) chunks last error >> 9 128.210.48.62:9422:/mfs/01/ 934123 2012-03-28 17:41 >> 10 128.210.48.62:9422:/mfs/02/ 931903 2012-03-23 21:28 >> 11 128.210.48.62:9422:/mfs/03/ 888712 2012-03-30 19:13 >> 12 128.210.48.62:9422:/mfs/04/ 931661 2012-04-01 03:01 >> 13 128.210.48.62:9422:/mfs/05/ 935681 no errors >> 14 128.210.48.62:9422:/mfs/06/ 929248 2012-04-04 13:41 >> 15 128.210.48.62:9422:/mfs/07/ 929592 2012-03-30 19:02 >> 16 128.210.48.62:9422:/mfs/08/ 829446 2012-04-04 17:29 >> >> Thanks, >> Steve >> >> >> ------------------------------------------------------------------------------ >> Better than sec? Nothing is better than sec when it comes to >> monitoring Big Data applications. Try Boundary one-second >> resolution app monitoring today. Free. >> http://p.sf.net/sfu/Boundary-dev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > > -- > Steven M. Wilson, Systems and Network Manager > Markey Center for Structural Biology > Purdue University > (765) 496-1946 > > > > ------------------------------------------------------------------------------ > Better than sec? Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free.http://p.sf.net/sfu/Boundary-dev2dev > > > > _______________________________________________ > moosefs-users mailing lis...@li...https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > -- > Steven M. Wilson, Systems and Network Manager > Markey Center for Structural Biology > Purdue University > (765) 496-1946 > > |
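The maintenance procedure quoted above (mark every disk in mfshdd.cfg with an asterisk, restart the chunkserver, wait until its chunks have been replicated elsewhere, then stop the service) can be captured in a short script. This is only a sketch, assuming mfshdd.cfg lives at /etc/mfshdd.cfg and the stock init script is used; replication progress still has to be checked in the CGI monitor before the final stop:

#!/bin/bash
# Mark all data directories of this chunkserver "for removal" and restart it.
CFG=/etc/mfshdd.cfg

# Prefix every path that is not already marked (and is not a comment) with '*'.
sed -i 's/^\([^*#]\)/*\1/' "$CFG"

# Restart so the chunkserver re-reads mfshdd.cfg and starts draining.
/etc/init.d/mfs-chunkserver restart

# When the CGI shows no undergoal chunks in the 'regular' matrix, it is safe to run:
# /etc/init.d/mfs-chunkserver stop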
From: Steve W. <st...@pu...> - 2012-04-18 15:13:29
|
On 04/10/2012 11:09 AM, Steve Wilson wrote: > Thanks, Peter! > > I'll plan to take this chunk server offline one evening and run > memtest on it. Unfortunately, I don't have a spare system that can > take the load from this one so we won't have any redundancy while > running memtest. But I do make a nightly backup of this 24TB MooseFS > volume just in case something happens while we're running only one > chunk server. > > Steve > Just a follow up on this... I finally was able to take the chunk server out of service last night and run memtest on it. Within a few minutes, memtest detected a memory error. So the CRC errors reported by MFS turned out to be caused by faulty memory in this case. Steve > On 04/08/2012 09:10 AM, Peter aNeutrino wrote: >> Hi Steve :) >> It looks like memory or mainboard/controller issue. >> >> However there is some probability that this machine has all hard >> drives broken. >> (eg. by temperature or by some shaking/vibration) >> >> If I were you I would mark this machine for maintenance and make full >> tests on it: >> - first we need to make sure that all data are with desired level of >> safety by marking all disks in /etc/mfshdd.cfg config file with >> asterisk like this: >> */mfs/01 >> */mfs/02 >> ... >> - restart the chunk server service (eg. /etc/init.d/mfs-chunkserver >> restart) >> - wait for all chunks from this machine to be replicated >> - stop the chunk server service >> >> ....and then make tests eg.: >> - "memtest" for memory >> -- if error occours replace RAM test it again >> -- if error occurs again so it looks like mainboard issue. >> >> - "badblock" for harddrives you can test all disk together parallel >> but I would run them after I moved disks into different machine. >> (just move them before you run memtest so you can run memtest and >> badblock in the same tame) >> >> if all test PASS (no errors) than I would try to replace controller >> and mainboard. >> and put tested memory and disks into this new mainbord/controller (or >> even CPU) >> >> That is for one server case. With big installations like 100+ such >> errors of hardware can occur every week/month and it is worth to have >> better procedure, which our Technical Support would create for you :) >> >> Good luck with testing and please share with us when you fix it :) >> aNeutrino :) >> -- >> Peter aNeutrino >> http://pl.linkedin.com/in/aneutrino >> +48 602 302 132 >> Evangelist and Product Manager ofhttp://MooseFS.org >> at Core Technology sp. z o.o. >> >> >> >> >> On Thu, Apr 5, 2012 at 22:29, Steve Wilson <st...@pu... >> <mailto:st...@pu...>> wrote: >> >> Hi, >> >> One of my chunk servers will log a CRC error from time to time >> like the >> following: >> >> Apr 4 17:29:10 massachusetts mfschunkserver[2224]: >> write_block_to_chunk: >> file:/mfs/08/27/chunk_00000000066B5D27_00000001.mfs - crc error >> >> Is the most likely cause faulty system memory? Or disk >> controller? 
We >> get an error about every two days or so and spread across most of the >> drives: >> >> # IP path (switch to name) chunks >> last error >> 9 128.210.48.62:9422:/mfs/01/ 934123 2012-03-28 17:41 >> 10 128.210.48.62:9422:/mfs/02/ 931903 2012-03-23 21:28 >> 11 128.210.48.62:9422:/mfs/03/ 888712 2012-03-30 19:13 >> 12 128.210.48.62:9422:/mfs/04/ 931661 2012-04-01 03:01 >> 13 128.210.48.62:9422:/mfs/05/ 935681 no errors >> 14 128.210.48.62:9422:/mfs/06/ 929248 2012-04-04 13:41 >> 15 128.210.48.62:9422:/mfs/07/ 929592 2012-03-30 19:02 >> 16 128.210.48.62:9422:/mfs/08/ 829446 2012-04-04 17:29 >> >> Thanks, >> Steve >> >> ------------------------------------------------------------------------------ >> Better than sec? Nothing is better than sec when it comes to >> monitoring Big Data applications. Try Boundary one-second >> resolution app monitoring today. Free. >> http://p.sf.net/sfu/Boundary-dev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> >> > > -- > Steven M. Wilson, Systems and Network Manager > Markey Center for Structural Biology > Purdue University > (765) 496-1946 > > > ------------------------------------------------------------------------------ > Better than sec? Nothing is better than sec when it comes to > monitoring Big Data applications. Try Boundary one-second > resolution app monitoring today. Free. > http://p.sf.net/sfu/Boundary-dev2dev > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users -- Steven M. Wilson, Systems and Network Manager Markey Center for Structural Biology Purdue University (765) 496-1946 |
From: Rujia L. <ruj...@gm...> - 2012-04-18 02:25:43
|
I've already tried such a configuration in our company's LAN shortly after my post. We had only 3 chunkservers, but all of them used "filesystem in file". It's used primarily as the storage of Bacula (one backup per day, currently 50+GB written on mfs). 8 days have passed and so far so good :) 2012/4/18 Michał Borychowski <mic...@co...>: > Hi! > > We have not tried this in a production environment. In theory everything > should work fine. Please share your experience with our group after you run > this configuration. > > > Kind regards > Michał Borychowski > MooseFS Support Manager > > > -----Original Message----- > From: Rujia Liu [mailto:ruj...@gm...] > Sent: Tuesday, April 10, 2012 9:49 AM > To: moo...@li... > Subject: [Moosefs-users] Drawbacks of using "filesystems in files" in chunk > servers? > > Hi all! > > In the reference guide, it is said "If it's not possible to create a > separate disk partition, filesystems in files can be used" and some > instrutions followed. I can imagine that there will be a drop down in the > performance (anyone has some statistical data about this?), but I don't know > whether there are other drawbacks. I'm asking this because I wanna make use > of some existing computers with partially empty partitions. I think It's a > bit risky to resize the partitions. > > Thanks! > > - Rujia > > ---------------------------------------------------------------------------- > -- > Better than sec? Nothing is better than sec when it comes to monitoring Big > Data applications. Try Boundary one-second resolution app monitoring today. > Free. > http://p.sf.net/sfu/Boundary-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
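For anyone wanting to reproduce the "filesystem in a file" layout Rujia describes, the usual recipe is to create a large file, put a filesystem on it, loop-mount it and point the chunkserver at the mount point. The sizes, paths, filesystem type and the 'mfs' user below are illustrative assumptions, not taken from Rujia's configuration:

# Create a ~100 GB sparse file and format it.
dd if=/dev/zero of=/data/mfschunks.img bs=1M count=0 seek=102400
mkfs.ext4 -F /data/mfschunks.img

# Loop-mount it and hand the mount point to the chunkserver.
mkdir -p /mnt/mfschunks
mount -o loop /data/mfschunks.img /mnt/mfschunks
chown mfs:mfs /mnt/mfschunks
echo "/mnt/mfschunks" >> /etc/mfshdd.cfg
/etc/init.d/mfs-chunkserver restart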
From: Ricardo J. B. <ric...@da...> - 2012-04-17 21:53:31
|
On Wednesday 11/04/2012, Quenten Grasso wrote:
> Hi Ricardo,
>
> Well not quite what I meant, however I was referring to this "could" be
> adapted to the moosefs model as it also uses chunks to store data, and could
> be very useful to a lot of users who are trying to find a reasonable way to
> store virtual machines on mfs without the woes of metadata snapshotting?

Oh, yes, that would be very useful.

I wanted to use a MooseFS cluster to host virtual machine images to replace our current setup (DRBD + LVM), as we are currently moving /home to a separate MooseFS cluster and the VM images are now much smaller.

But MooseFS didn't work for me, as we use virtio for better performance and there is/was some problem combining fuse and directio. Mind you, this was several months ago with CentOS 5.x; maybe things are better with CentOS 6.x (or with other distros). I'll have to give it a try soon.

> Qemu-RBD is userspace vs ceph's RBD which is kernel.

I didn't know about qemu-rbd, it might be worth a look, thanks!

> Regards,
> Quenten Grasso
>
> -----Original Message-----
> From: Ricardo J. Barberis [mailto:ric...@da...]
> Sent: Thursday, 12 April 2012 7:02 AM
> To: moo...@li...
> Subject: Re: [Moosefs-users] RBD
>
> On Wednesday 11/04/2012, Quenten Grasso wrote:
> > Hey All,
> >
> > Has anyone tried using Ceph's Rados Block Device/QEMU-RBD on MooseFS?
> >
> > http://www.linux-kvm.com/content/cephrbd-block-driver-patches-qemu-kvm
> >
> > Regards,
> > Quenten
>
> Um, no, but I assume rbd only works with Ceph? (From the website you
> linked: "rbd is described as a linux kernel driver that is part of the ceph
> file system module".)
>
> I mean, Ceph is a distributed fault-tolerant filesystem, just like MooseFS;
> it's not a "regular" filesystem like ext3, ext4, xfs.
>
> Regards,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
------------------------------------------ |
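If the fuse/directio trouble Ricardo mentions is the common one (QEMU opens the image with O_DIRECT when cache=none is used, and FUSE mounts of that era did not support O_DIRECT), then choosing a cache mode that avoids O_DIRECT is the usual way around it. This is only a guess at the cause and an illustrative command line, not something confirmed in this thread:

# Hypothetical: run a VM whose disk image lives on a MooseFS mount, keeping
# virtio but avoiding O_DIRECT by not using cache=none.
qemu-kvm -m 2048 \
    -drive file=/mnt/mfs/vm01.img,if=virtio,cache=writeback \
    -net nic,model=virtio -net user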
From: Michał B. <mic...@co...> - 2012-04-17 20:12:54
|
Hi! We have not tried this in a production environment. In theory everything should work fine. Please share your experience with our group after you run this configuration. Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Rujia Liu [mailto:ruj...@gm...] Sent: Tuesday, April 10, 2012 9:49 AM To: moo...@li... Subject: [Moosefs-users] Drawbacks of using "filesystems in files" in chunk servers? Hi all! In the reference guide, it is said "If it's not possible to create a separate disk partition, filesystems in files can be used" and some instrutions followed. I can imagine that there will be a drop down in the performance (anyone has some statistical data about this?), but I don't know whether there are other drawbacks. I'm asking this because I wanna make use of some existing computers with partially empty partitions. I think It's a bit risky to resize the partitions. Thanks! - Rujia ---------------------------------------------------------------------------- -- Better than sec? Nothing is better than sec when it comes to monitoring Big Data applications. Try Boundary one-second resolution app monitoring today. Free. http://p.sf.net/sfu/Boundary-dev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michał B. <mic...@co...> - 2012-04-17 20:10:26
|
For the moment you need to see the 'regular' view mode and check if none of the chunks is in undergoal. If not you are ready to switch the machines off. And from 1.6.26 on it would be possible to reload config files on the fly (including rep speeds) without stopping the master process. Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Markus Köberl [mailto:mar...@tu...] Sent: Tuesday, April 10, 2012 11:49 AM To: moo...@li... Subject: Re: [Moosefs-users] CGI curiosity On Monday 09 April 2012, Steve Wilson wrote: > On 04/09/2012 02:51 PM, Steve Thompson wrote: > > mfs 1.6.20 > > > > I have marked a disk for removal with * in the mfshdd.cfg file. > > There are approximately 1.6 million chunks on this disk, and so far > > about 1.3 million chunks have been replicated elsewhere. All files have goal = 2. > > > > When viewing the CGI with Firefox on a Windows box, it shows 1.3 > > million in blue on the valid copies = 3 column of the goal = 2 line > > and nothing in the valid copies = 1 line. This number is increasing. > > > > When viewing the CGI with Firefox on a Linux box, it shows 300,000 > > in orange on the valid copies = 1 column of the goal = 2 line, and > > nothing in the valid copies = 3 line. This number is decreasing. > > > > The CGI is running on the mfsmaster in both cases. Why the difference? > > > > BTW, this has taken 5 days so far. A little slow, methinks. > > > > Steve > > Regarding the speed issue, have you modified the default > CHUNKS_WRITE_REP_LIMIT and CHUNKS_READ_REP_LIMIT in mfsmaster.cfg? > Following suggestions on the list, I have mine permanently set to: > CHUNKS_WRITE_REP_LIMIT = 5 > CHUNKS_READ_REP_LIMIT = 15 > instead of the default: > CHUNKS_WRITE_REP_LIMIT = 1 > CHUNKS_READ_REP_LIMIT = 5 I tried this settings today and it worked very good. Thanks for sharing your configuration. For me today a similar question followed up: How to find out if all chunks are migrated from a chunk server? At the moment I can see: chunk server marked for removal: chunks=4545 All chunks state matrix 'regular': goal=2: 1=1, 2=8028 All chunks state matrix 'all': goal=2: 1=1, 2=3483, 3=4545 So I know that I can remove the chunk server now because chunks state matrix 'all' says I have a overgoal of 4545 which is exactly the same number of chunks on my chunk server. At the moment I only run some test with goal=2. In production we will have different goals for different types of data. My plan is to use all desktop and laboratory hosts in our environment as a chunk server. Ones a year we need to reboot our laboratory hosts into windows for about two weeks. Which means I have to mark all partitions on this chunk servers for removal at the same time. Which I guess will make it hard to say which one is ready to reboot. At least not before all chunks of all chunk server are migrated. It would be very nice to see the status at the disk status table. At the moment I only can see status 'ok' or 'marked for removal'. Nice would be see something like: 'marked for removal, migration in progress' and 'marked for removal, migration finished' Markus -- Markus Köberl Graz University of Technology Signal Processing and Speech Communication Laboratory E-mail: mar...@tu... ------------------------------------------------------------------------------ Better than sec? Nothing is better than sec when it comes to monitoring Big Data applications. Try Boundary one-second resolution app monitoring today. Free. 
http://p.sf.net/sfu/Boundary-dev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michał B. <mic...@co...> - 2012-04-17 20:03:03
|
Hi Steve! Please observe that there are two "view modes" in the Info tab: - 'all' - All chunks state matrix (counts 'regular' hdd space and 'marked for removal' hdd space) - 'regular' - Regular chunks state matrix (counts only 'regular' hdd space) So in case you are in the 'all' mode you'd have lots of chunks in overgoal (the disk marked for removal still keeps the chunks - they are copied not deleted from this drive) and when in 'regular' you'd see lots of chunks in undergoal - still waiting to be copied to the target goal. Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Steve Thompson [mailto:sm...@cb...] Sent: Monday, April 09, 2012 8:52 PM To: moo...@li... Subject: [Moosefs-users] CGI curiosity mfs 1.6.20 I have marked a disk for removal with * in the mfshdd.cfg file. There are approximately 1.6 million chunks on this disk, and so far about 1.3 million chunks have been replicated elsewhere. All files have goal = 2. When viewing the CGI with Firefox on a Windows box, it shows 1.3 million in blue on the valid copies = 3 column of the goal = 2 line and nothing in the valid copies = 1 line. This number is increasing. When viewing the CGI with Firefox on a Linux box, it shows 300,000 in orange on the valid copies = 1 column of the goal = 2 line, and nothing in the valid copies = 3 line. This number is decreasing. The CGI is running on the mfsmaster in both cases. Why the difference? BTW, this has taken 5 days so far. A little slow, methinks. Steve -- ---------------------------------------------------------------------------- Steve Thompson, Cornell School of Chemical and Biomolecular Engineering smt AT cbe DOT cornell DOT edu "186,282 miles per second: it's not just a good idea, it's the law" ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- -- For Developers, A Lot Can Happen In A Second. Boundary is the first to Know...and Tell You. Monitor Your Applications in Ultra-Fine Resolution. Try it FREE! http://p.sf.net/sfu/Boundary-d2dvs2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michał B. <mic...@co...> - 2012-04-17 19:55:29
|
Hi The new metadata format is something like an "extension" to the 1.6 branch metadata. Upgrading to 1.7 would be easy - you won't have to think about these issues. Kind regards Michał Borychowski MooseFS Support Manager From: Scott Duckworth [mailto:sd...@cl...] Sent: Monday, April 16, 2012 7:44 PM To: moo...@li... Subject: [Moosefs-users] upcoming quotas and metadata format change We have multiple research groups that might be able to make use of MooseFS, and I'm trying to avoid having multiple installations of it. Per-directory quotas would make this easier to reliably implement. I note that in the MooseFS roadmap, http://www.moosefs.org/roadmap.html, that per-directory quota support is planned to be added in version 1.7, and that a new metadata file format will be introduced. Will version 1.7 be compatible with files written under version 1.6, and if so will they be compatible with the quota implementation? If there is some sort of metadata conversion that will be necessary, what will this process entail? Scott Duckworth, Systems Programmer II Clemson University School of Computing AIM/Skype: ClemsonUnixDuck |
From: Michał B. <mic...@co...> - 2012-04-17 09:33:25
|
Not exactly. There will soon be a blog post about the already implemented "rack awareness" functionality.

Kind regards
Michał Borychowski

From: siddharth chandra [mailto:sid...@gm...]
Sent: Tuesday, April 17, 2012 11:09 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Moosefs Replication logic/policy

Yes, I have already looked at the link you've mentioned. I actually want to know the replication algorithm that MooseFS uses, such as: is it based on locality of the chunk servers and the client? E.g. if there are 3 chunk servers and 1 client, two of the chunkservers and the client are in the US, and one chunk server is in Japan, with mfsgoal set to 2, will the chunk be stored on the chunkserver in Japan? Does MooseFS handle this so that chunks are not stored far away, so as to avoid network delays?

Thanks,
Siddharth

2012/4/17 Michał Borychowski <mic...@co...>
Did you have a look at http://www.moosefs.org/? The process is described there with two diagrams. If this is not enough for you please ask more specific questions.

Kind regards
Michał Borychowski
MooseFS Support Manager

From: siddharth chandra [mailto:sid...@gm...]
Sent: Tuesday, April 17, 2012 7:37 AM
To: moo...@li...
Subject: [Moosefs-users] Moosefs Replication logic/policy

Hi,

Can anyone please tell me the replication policy/logic that MooseFS uses?

Thanks,
Sid |
From: siddharth c. <sid...@gm...> - 2012-04-17 09:09:40
|
Yes, I have already looked at the link you've mentioned. I actually want to know the replication algorithm that MooseFS uses, such as: is it based on locality of the chunk servers and the client? E.g. if there are 3 chunk servers and 1 client, two of the chunkservers and the client are in the US, and one chunk server is in Japan, with mfsgoal set to 2, will the chunk be stored on the chunkserver in Japan? Does MooseFS handle this so that chunks are not stored far away, so as to avoid network delays?

Thanks,
Siddharth

2012/4/17 Michał Borychowski <mic...@co...>
> Did you have a look at http://www.moosefs.org/? The process is described
> there with two diagrams. If this is not enough for you please ask more
> specific questions.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
>
> From: siddharth chandra [mailto:sid...@gm...]
> Sent: Tuesday, April 17, 2012 7:37 AM
> To: moo...@li...
> Subject: [Moosefs-users] Moosefs Replication logic/policy
>
> Hi,
>
> Can anyone please tell me the replication policy/logic that moosefs
> uses?
>
> Thanks,
> Sid |
From: Alexander A. <akh...@ri...> - 2012-04-17 06:22:09
|
Hi ! For purpose using metalogger and chunkservers on Windows we use VMWare GSX on Windows with guest FreeBSD (or Linux :--). This solution is not for hi load but works well ! And I heard something about colinux (cooperative Linux under windows) but have not tested. May be it will be a good case. wbr Alexander ====================================================== Hi! We did test mfs chunkserver under Windows with cygwin, but it was just confirming that it works - not even running it in any test environment for longer time. If you decide on running it under windows in a bigger environment, please share your experience with us. Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Rujia Liu [mailto:ruj...@gm...] Sent: Monday, April 16, 2012 11:39 AM To: Michał Borychowski Subject: Re: [Moosefs-users] 你好,我想请教一下分布式存储系统 2012/4/16 Michał Borychowski <mic...@co...>: > Hi! > > The master, metalogger and chunkservers should work under Windows with Cygwin installed. > But as there is no FUSE implementation for Windows, it won't be possible to mount the volume directly. Is chunkserver tested seriously under Cygwin? I have already a linux mfs cluster running perfectly, but I still hope to utilize a plenty of windows machines with free spaces. Thanks in advance! - Rujia ------------------------------------------------------------------------------ For Developers, A Lot Can Happen In A Second. Boundary is the first to Know...and Tell You. Monitor Your Applications in Ultra-Fine Resolution. Try it FREE! http://p.sf.net/sfu/Boundary-d2dvs2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michał B. <mic...@co...> - 2012-04-17 05:54:37
|
Did you have a look at http://www.moosefs.org/? The process is described there with two diagrams. If this is not enough for you please ask more specific questions. Kind regards Michał Borychowski MooseFS Support Manager From: siddharth chandra [mailto:sid...@gm...] Sent: Tuesday, April 17, 2012 7:37 AM To: moo...@li... Subject: [Moosefs-users] Moosefs Replication logic/policy Hi, Can anyone please tell me the replication policy/logic that moosefs uses? Thanks, Sid |
From: siddharth c. <sid...@gm...> - 2012-04-17 05:37:31
|
Hi, Can anyone please tell me the replication policy/logic that moosefs uses? Thanks, Sid |
From: Scott D. <sd...@cl...> - 2012-04-16 18:14:27
|
We have multiple research groups that might be able to make use of MooseFS, and I'm trying to avoid having multiple installations of it. Per-directory quotas would make this easier to reliably implement. I note that in the MooseFS roadmap, http://www.moosefs.org/roadmap.html, that per-directory quota support is planned to be added in version 1.7, and that a new metadata file format will be introduced. Will version 1.7 be compatible with files written under version 1.6, and if so will they be compatible with the quota implementation? If there is some sort of metadata conversion that will be necessary, what will this process entail? Scott Duckworth, Systems Programmer II Clemson University School of Computing AIM/Skype: ClemsonUnixDuck |
From: Michał B. <mic...@co...> - 2012-04-16 15:29:16
|
Hi! We did test mfs chunkserver under Windows with cygwin, but it was just confirming that it works - not even running it in any test environment for longer time. If you decide on running it under windows in a bigger environment, please share your experience with us. Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Rujia Liu [mailto:ruj...@gm...] Sent: Monday, April 16, 2012 11:39 AM To: Michał Borychowski Subject: Re: [Moosefs-users] 你好,我想请教一下分布式存储系统 2012/4/16 Michał Borychowski <mic...@co...>: > Hi! > > The master, metalogger and chunkservers should work under Windows with Cygwin installed. But as there is no FUSE implementation for Windows, it won't be possible to mount the volume directly. Is chunkserver tested seriously under Cygwin? I have already a linux mfs cluster running perfectly, but I still hope to utilize a plenty of windows machines with free spaces. Thanks in advance! - Rujia |
From: Michał B. <mic...@co...> - 2012-04-16 09:24:49
|
Hi! The master, metalogger and chunkservers should work under Windows with Cygwin installed. But as there is no FUSE implementation for Windows, it won't be possible to mount the volume directly. Kind regards Michał Borychowski MooseFS Support Manager -----Original Message----- From: Florent Bautista [mailto:fl...@co...] Sent: Monday, April 16, 2012 10:27 AM To: moo...@li... Subject: Re: [Moosefs-users] 你好,我想请教一下分布式存储系统 Hi, No, MooseFS only works on Unix systems. Maybe you can find a way to compile and run it under Windows, but it won't be a native running. Found this in README file: Compatibility matrix ==================== (tested Operating Systems only): Client Master Chunkserver Linux 2.6.x (i386): YES YES YES FreeBSD 5.x (i386+amd64): NO YES YES FreeBSD 6.x (i386+amd64): YES YES YES FreeBSD 7.x (i386+amd64): YES YES YES FreeBSD 8.x (i386+amd64): YES YES YES MacOS X 10.3 (Panther, ppc): NO YES YES MacOS X 10.4 (Tiger, ppc+i386): YES YES YES MacOS X 10.5 (Leopard, ppc+i386): YES YES YES MacOS X 10.6 (Snow Leopard): YES YES YES Solaris 10 (sparc): NO YES YES OpenSolaris (i386): YES YES YES Le 16/04/2012 03:52, 吴梦君 a écrit : > hi, > I would like to understand abou t moosefs, he can run on a windows server? > > > > ---------------------------------------------------------------------- > -------- For Developers, A Lot Can Happen In A Second. > Boundary is the first to Know...and Tell You. > Monitor Your Applications in Ultra-Fine Resolution. Try it FREE! > http://p.sf.net/sfu/Boundary-d2dvs2 > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users ------------------------------------------------------------------------------ For Developers, A Lot Can Happen In A Second. Boundary is the first to Know...and Tell You. Monitor Your Applications in Ultra-Fine Resolution. Try it FREE! http://p.sf.net/sfu/Boundary-d2dvs2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Florent B. <fl...@co...> - 2012-04-16 08:44:23
|
Hi,

No, MooseFS only works on Unix systems. Maybe you can find a way to compile and run it under Windows, but it won't run natively.

Found this in the README file:

Compatibility matrix
====================
(tested Operating Systems only):

                                    Client  Master  Chunkserver
Linux 2.6.x (i386):                  YES     YES     YES
FreeBSD 5.x (i386+amd64):            NO      YES     YES
FreeBSD 6.x (i386+amd64):            YES     YES     YES
FreeBSD 7.x (i386+amd64):            YES     YES     YES
FreeBSD 8.x (i386+amd64):            YES     YES     YES
MacOS X 10.3 (Panther, ppc):         NO      YES     YES
MacOS X 10.4 (Tiger, ppc+i386):      YES     YES     YES
MacOS X 10.5 (Leopard, ppc+i386):    YES     YES     YES
MacOS X 10.6 (Snow Leopard):         YES     YES     YES
Solaris 10 (sparc):                  NO      YES     YES
OpenSolaris (i386):                  YES     YES     YES

On 16/04/2012 03:52, 吴梦君 wrote:
> hi,
> I would like to understand about moosefs, he can run on a windows server?
>
> ------------------------------------------------------------------------------
> For Developers, A Lot Can Happen In A Second.
> Boundary is the first to Know...and Tell You.
> Monitor Your Applications in Ultra-Fine Resolution. Try it FREE!
> http://p.sf.net/sfu/Boundary-d2dvs2
>
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: 吴梦君 <wum...@ho...> - 2012-04-16 01:52:38
|
Hi, I would like to understand more about MooseFS: can it run on a Windows server? |
From: Ricardo J. B. <ric...@da...> - 2012-04-13 20:36:31
|
Hello, list.

I need to mount a filesystem with suid enabled and found I can't do it from fstab: no matter what options I set there, the mount always uses nosuid,nodev, which are fuse's defaults.

This is in my /etc/fstab:

mfsmount /opt fuse defaults,noatime,suid,nodev,_netdev,mfsmaster=master,mfssubfolder=/ 0 0

And this is in /proc/mounts:

mfs#master:9421 /opt fuse rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0

However, if I mount it with:

mfsmount -o noatime,suid,nodev /opt -H master -S /

I get this in /proc/mounts:

mfs#master:9421 /opt fuse rw,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0

I have tried putting "suid" in /etc/mfs/mfsmount.cfg on the client but it makes no difference when using fstab.

All this is with CentOS 5.8 64-bit servers and a CentOS 5.8 32-bit client, but it also happens with a CentOS 5.8 64-bit client, all of them running mfs-1.6.24 installed from RepoForge RPMs and fuse 2.7.4 from the standard CentOS repositories.

According to fuse's README:

"Filesystems are mounted with '-onodev,nosuid' by default, which can only be overridden by a privileged user."

But I'm mounting MFS as root from the command line.

Is this a bug in MFS? fuse? me?

Thank you,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
------------------------------------------ |