From: Michał B. <mic...@ge...> - 2010-08-09 12:53:57
Yes, we plan to implement quota support in version 1.7; the code still lacks saving quota state to the metadata file. POSIX locks should also be implemented in version 1.7. If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.

-----Original Message-----
From: SaVaGe [mailto:sa...@ba...]
Sent: Wednesday, August 04, 2010 4:49 PM
To: moo...@li...
Subject: [Moosefs-users] quota and locks

Hello,

I just noticed that there are per-directory quota support functions which are set to be active on VERSMID >= 7. I just wanted to ask what the state of development of these functions is: what needs to be done, or is it just that they are not tested?

Also, regarding .getlk and .setlk: is that something that will be stable in the foreseeable future?

Best Regards,
Iliya
From: Michał B. <mic...@ge...> - 2010-08-09 12:49:44
These messages (143616569: 1280322540|EMPTYRESERVED():0) are normal ones. Functions run periodically that delete freed i-nodes and deleted and "reserved" files; 0 means there was nothing to do. It may happen that there are still some operations pending on the clients' side. If nothing else happens you can always kill the master server process (in this case it would be "kill -9 23172"), wait till the process ends, run "mfsmetarestore -a" and start the master again with "mfsmaster start". Are there any other messages in syslog close to these operations?

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.

From: kuer ku [mailto:ku...@gm...]
Sent: Wednesday, July 28, 2010 3:22 PM
To: moo...@li...
Subject: Re: [Moosefs-users] what should I do IF cannot shutdown master normally ??

hi, all,

I find some strange things in the changelog:

  $ tail -f changelog.0.mfs
  143616569: 1280322540|EMPTYRESERVED():0
  143616570: 1280322600|FREEINODES():0
  143616571: 1280322600|EMPTYRESERVED():0
  143616572: 1280322600|EMPTYTRASH():0,0
  143616573: 1280322660|FREEINODES():0
  143616574: 1280322660|EMPTYRESERVED():0
  143616575: 1280322720|FREEINODES():0
  143616576: 1280322720|EMPTYRESERVED():0
  143616577: 1280322780|FREEINODES():0
  143616578: 1280322780|EMPTYRESERVED():0

what does this mean, and how do I fix it?

  Jul 28 21:19:00 meta1 mfsmaster[23172]: chunkservers status:
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 1 (ip: 221.194.134.189, port: 19322): usedspace: 943864180736 (879.04 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.92%
  [...]
  Jul 28 21:19:00 meta1 mfsmaster[23172]: total: usedspace: 11406813614080 (10623.42 GiB), totalspace: 18001969545216 (16765.64 GiB), usage: 63.36%

there is still free space on the chunkservers (almost 40% free). what resource ran out? how do I fix it?

thanks -- kuer

On Wed, Jul 28, 2010 at 8:59 PM, kuer ku <ku...@gm...> wrote:
> hi,
> I just want to know why the master got stuck, but I find nothing wrong in /var/log/messages. From /var/log/messages, it seems that the master still works, but why does it NOT exit when it gets SIGTERM? Are there any other ways I can find some useful messages?
> thanks -- kuer
>
> On Wed, Jul 28, 2010 at 8:52 PM, kuer ku <ku...@gm...> wrote:
>> hi, all
>> I cannot shutdown the moosefs master normally. When shutting down, it shows:
>> working directory: /usr/local/moosefs/bin/master
>> sending SIGTERM to lock owner (pid:23172)
>> waiting for termination ... 10s 20s 30s 40s 50s give up
>> Something must be wrong, what should I do?
>> thanks -- kuer
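[In concrete terms, the forced-shutdown and restore sequence Michał describes looks like this; the PID comes from the "lock owner" message in the thread, and the install paths follow the /usr/local/mfs layout used elsewhere in this thread, so adjust for your system:]

  # Force the stuck master down (PID taken from the lock owner message).
  kill -9 23172
  # Once the process is really gone, rebuild metadata.mfs from
  # metadata.mfs.back plus the changelogs, in automatic mode:
  /usr/local/mfs/sbin/mfsmetarestore -a
  # Start the master again.
  /usr/local/mfs/sbin/mfsmaster start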
From: <li...@ci...> - 2010-08-09 05:50:28
Hi All experts:

Thanks for developing this great file system. There is one thing I want to know: can MooseFS offer the same access control as NFS? As far as I know, NFS can map local users' rights when they match the NFS server's users, but when I mount MooseFS, what I see is that all users have read/write access to everyone's files. Is there any way to achieve that each user can only read and write their own files?

Best regards!

Alex Li 李 杰
Firm Management
中国国际金融有限公司 China International Capital Corporation Limited
Tel: (86 10) 6505-1166 ext. 3485
Fax: (86 10) 6505-9539
E-mail: li...@ci...
Mobile: 13651054608
From: Shen G. <sh...@ui...> - 2010-08-09 02:57:23
Don't worry! This happens because some of your chunkservers are currently unreachable. The master server notices this and modifies the metadata of the files on those chunkservers, setting "allvalidcopies" to 0 in "struct chunk". When the master rescans the files (fs_test_files() in filesystem.c) and finds that the valid-copy count is 0, it prints information to syslog, just as listed below. However, the printing process is quite time-consuming, especially when the amount of files is large. During this period the master ignores the chunkservers' connections (because it is in a big loop testing files, and a single thread does this; maybe this is a pitfall). So although you made sure the chunkservers are working correctly, it does not help (you can see the reconnection attempts in the chunkservers' syslog). You could let the master finish printing; it will then reconnect with the chunkservers, notice the files are there, set "allvalidcopies" to the correct value, and work normally.

Or you can recompile the program with lines 5512 and 5482 in filesystem.c commented out (mfs-1.6.15). That will skip the printed messages and, of course, reduce the fs test time.

Below is from Michal:

-----------------------------------------------------------------------
We give you here some quick patches you can apply to the master server to improve its performance for that amount of files.

In matocsserv.c in mfsmaster you need to change this line:

  #define MaxPacketSize 50000000

into this:

  #define MaxPacketSize 500000000

We also suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:

  if ((uint32_t)(main_time())<=starttime+150) {

into:

  if ((uint32_t)(main_time())<=starttime+900) {

And also change this line:

  for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {

into this:

  for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {

You need to recompile the master server and start it again. The above changes should make the master server work more stably with a large amount of files.

Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be a metalogger for the other system and vice versa.

Kind regards
Michał
-----------------------------------------------------------------------

--
Guowen Shen

On Sun, 2010-08-08 at 22:51 +0800, TianYuchuan(田玉川) wrote:
> hello, everyone!
> I have a big question, please help me, thank you very much.
> We intend to use MooseFS in our production environment as the storage for our online photo service. We will store about 200 million photo files. I have built one master server (48G mem), one metalogger server, and eight chunkservers (8*1T SATA). When I copy photo files to the MooseFS system, at the start everything is good. But once I had copied 57 million files, the master machine's CPU was at 100%. I stopped the master with "/user/local/mfs/sbin/mfsmasterserver -s", then started it again, but there was a big problem: the master had not read my files. These documents are important to me and I am very anxious; please help me recover these files, thanks.
> [...]
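[For illustration, the three edits Michał suggests could be applied to an unmodified mfs-1.6.15 source tree with a shell one-liner each; this is a sketch based purely on the code lines quoted above, not an official patch, and it assumes GNU sed and the mfsmaster/ subdirectory layout:]

  cd mfs-1.6.15
  # Raise the master's maximum accepted packet size (matocsserv.c).
  sed -i 's/#define MaxPacketSize 50000000$/#define MaxPacketSize 500000000/' mfsmaster/matocsserv.c
  # Delay the start of fs_test_files() after startup from 150s to 900s.
  sed -i 's/starttime+150/starttime+900/' mfsmaster/filesystem.c
  # Scan fewer node-hash buckets per pass, so a full sweep takes about
  # four times longer and loads the master less.
  sed -i 's|NODEHASHSIZE/3600|NODEHASHSIZE/14400|' mfsmaster/filesystem.c
  # Rebuild and reinstall the master, then restart it.
  ./configure && make && make install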
From: TianYuchuan(田玉川) <ti...@fo...> - 2010-08-08 14:52:21
hello, everyone!

I have a big question, please help me, thank you very much.

We intend to use MooseFS in our production environment as the storage for our online photo service. We will store about 200 million photo files. I have built one master server (48G mem), one metalogger server, and eight chunkservers (8*1T SATA). When I copy photo files to the MooseFS system, at the start everything is good. But once I had copied 57 million files, the master machine's CPU was at 100%. I stopped the master with "/user/local/mfs/sbin/mfsmasterserver -s", then started it again, but there was a big problem: the master had not read my files. These documents are important to me and I am very anxious; please help me recover these files, thanks.

I got many error messages in syslog from the master server:

  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 41991323: 2668/2526212449954462668/176s.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 00000000043CD358 (inode: 50379931 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 50379931: 2926/4294909215566102926/163b.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 00000000002966C3 (inode: 48284 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 48284: bookdata/178/8533354296639220178/180b.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000000594726 (inode: 4242588 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 4242588: bookdata/6631/4300989258725036631/85s.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000000993541 (inode: 8436892 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 8436892: bookdata/7534/3147352338521267534/122b.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000000D906E6 (inode: 12631196 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 12631196: bookdata/8691/11879047433161548691/164s.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 000000000118DC1E (inode: 16825500 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 16825500: bookdata/1232/17850056326363351232/166b.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000001681BC7 (inode: 21019804 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 21019804: bookdata/26/12779298489336140026/246s.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000001A804E1 (inode: 25214108 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 25214108: bookdata/3886/8729781571075193886/30s.jpg
  Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000001E7E826 (inode: 29408412 ; index: 0)
  Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 29408412: bookdata/4757/142868991575144757/316b.jpg

  Aug 7 23:56:36 localhost mfsmaster[10546]: CS(192.168.0.124) packet too long (115289537/50000000)
  Aug 7 23:56:36 localhost mfsmaster[10546]: chunkserver disconnected - ip: 192.168.0.124, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
  Aug 8 00:08:14 localhost mfsmaster[10546]: CS(192.168.0.127) packet too long (104113889/50000000)
  Aug 8 00:08:14 localhost mfsmaster[10546]: chunkserver disconnected - ip: 192.168.0.127, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
  Aug 8 00:21:03 localhost mfsmaster[10546]: CS(192.168.0.120) packet too long (117046565/50000000)
  Aug 8 00:21:03 localhost mfsmaster[10546]: chunkserver disconnected - ip: 192.168.0.120, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)

When I visited the mfscgi, the error was "Can't connect to MFS master (IP:127.0.0.1 ; PORT:9421)".

Thanks all!
From: Ólafur Ó. <osv...@ne...> - 2010-08-06 13:31:55
Hi,

I'm looking at the pros and cons of using MFS in our environment, and there are a few things I have not found an answer to yet.

How much CPU and RAM is required for chunkservers, apart from the requirements of the OS itself?

How much info does the chunkserver need after a reboot, other than the chunks themselves? I'm thinking about running the OS from RAM and having the system disks only for MFS; that would result in the chunkserver having no data available at startup except for the chunks written before the reboot and the initial config. Would this result in the chunkserver losing all its chunks, or would it recover and just notify the master of the chunks that it finds on its drives?

Hopefully my questions make sense.

/Oli

--
Ólafur Osvaldsson
System Administrator
e-mail: osv...@ne...
phone: +354 517 3418
From: Michał B. <mic...@ge...> - 2010-08-06 12:17:29
Hi!

---------- Forwarded message ----------
From: Stas Oskin <sta...@gm...>
Date: 2010/7/30
Subject: Re: [Moosefs-users] Backing up MFS metadata
To: Michał Borychowski <mic...@ge...>

Question.

>> Yes, you can fully restore metadata from files saved by metalogger.

> How far behind the master is the metalogger?

[MB] What do you mean by this question?

> Where is it better to back up, on the master or on the metalogger?

[MB] The master on its own creates the binary metadata.mfs.back and text changelogs, and you cannot disable this. The metalogger is optional. If you ask about recovering from the backup: if the master server had only a power outage and has no HDD problems, you can restore the metadata from the master itself. With more serious damage to the master you would need to restore the metadata from the metalogger. Please read this blog entry: http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html

Regards
Michał

> Thanks.
From: Michał B. <mic...@ge...> - 2010-08-06 12:13:18
From: Stas Oskin [mailto:sta...@gm...]
Sent: Friday, July 30, 2010 12:44 AM
To: Roast
Cc: moosefs-users; Michał Borychowski
Subject: Re: [Moosefs-users] Backing up MFS metadata

Hi. Getting back to this to clarify the matter:

>> You can back up the changelog files (you would need just the two newest ones, "0" and "1"). You can make these backups even every minute. So you can potentially lose information for about 1-2 minutes.

> According to the article, only the "0" file contains the ongoing info, so why is "1" needed? How often are the checkpoint files updated?

[MB] In most cases 0 would be enough. But some "border" cases may happen, and it is more secure to also use file 1. But the question is: why back up changelogs manually? Metalogger machines are dedicated to this. You can have as many metalogger machines on the network as you like, and the metalogger process can be run on any computer, even an older one.

> Perhaps I can put metaloggers on the chunkservers? How many resources does the metalogger take?

[MB] You can easily put a metalogger process on a chunkserver. The resources needed for the metalogger are not very high, and I am sure your chunkservers have quite a lot of spare processor power.

Regards
Michał
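[For illustration, the manual every-minute changelog backup discussed above could be a one-minute cron job along these lines; a minimal sketch where the data path and destination are assumptions, not from the thread (/usr/local/var/mfs is the DATA_PATH of a default source build):]

  #!/bin/sh
  # Copy the two newest master changelogs ("0" and "1") to a backup location.
  SRC=/usr/local/var/mfs
  DST=/backup/mfs-changelogs
  mkdir -p "$DST"
  cp "$SRC/changelog.0.mfs" "$SRC/changelog.1.mfs" "$DST/"

[Run from cron with a line such as "* * * * * /usr/local/sbin/backup-changelogs.sh", the script path being hypothetical. As Michał notes, a metalogger does this job better; the script only mirrors the manual approach asked about.]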
From: Scoleri, S. <Sco...@gs...> - 2010-08-06 11:29:05
It works. I'm doing it. Performance isn't great because you can't mount with direct I/O (I think this is a FUSE thing), so you cannot use type disk:aio for your guest disk. You must use file:/<BLAH>/<BLAH>/disk.img. I have many guests on MFS, and migration and other features all work. I think it's completely viable as long as your guest doesn't do a lot of I/O to its image.

-Steve Scoleri

-----Original Message-----
From: Ólafur Ósvaldsson [mailto:osv...@ne...]
Sent: Friday, August 06, 2010 6:50 AM
To: moo...@li...
Subject: [Moosefs-users] Using MooseFS as a VM diskimage storage

Hi,
I was wondering if anyone has experience with using MFS to store disk images used by Xen or KVM virtual machines?
If yes, are there any known issues with this setup, be it performance or other?

/Oli

--
Ólafur Osvaldsson
System Administrator
e-mail: osv...@ne...
phone: +354 517 3418
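[As an illustration of the file-backed setup Steve describes, a classic Xen guest configuration would point its disk at an image file on the MooseFS mount instead of using an aio-backed block device; the guest name and paths here are hypothetical:]

  # /etc/xen/guest.cfg (excerpt) - the image file lives on a MooseFS mount
  disk = [ 'file:/mnt/mfs/vms/guest/disk.img,xvda,w' ]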
From: Ólafur Ó. <osv...@ne...> - 2010-08-06 11:08:47
Hi,
I was wondering if anyone has experience with using MFS to store disk images used by Xen or KVM virtual machines?
If yes, are there any known issues with this setup, be it performance or other?

/Oli

--
Ólafur Osvaldsson
System Administrator
e-mail: osv...@ne...
phone: +354 517 3418
From: 夏亮 <xia...@zh...> - 2010-08-06 03:00:28
Hi:

I am a programmer working in C. I want to add a C interface to MooseFS, to be used like Hadoop's libhdfs. Could you give me some advice?

thanks
From: Ricardo J. B. <ric...@da...> - 2010-08-05 17:41:56
On Thursday 05 August 2010, Laurent Wandrebeck wrote:
> Hi,

Hi Laurent!

> I'm in touch with the RPM maintainer about adding an init script for
> mfscgiserv. He has concerns about its behaviour in a loaded env, and
> he's wondering if it's a good idea to let it run for a long time as a
> standalone, arguing that if it has to run all the time, you'd better
> run it with apache (he provides a package especially for that).
> What are your experiences with mfscgiserv ?

FWIW, I have had mfscgiserv running for almost four months (since April 20) without any issue on CentOS 5.5 x86_64. This is of course on an internal network, only accessible from our NOC, but it might be nice to have it on the public internet. For that you'd need some authorization mechanism, and IMHO the RPM does the right thing.

That said, the init script could be chkconfig'ed off by default, and the admin would choose how to run the CGI, either through Apache or standalone.

Regards,
--
Ricardo J. Barberis
Senior SysAdmin - I+D
Dattatec.com :: Soluciones de Web Hosting
Su Hosting hecho Simple..!
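[Ricardo's "disabled by default" suggestion would look something like this on a CentOS 5-era system; a sketch that assumes the init script is installed under the name mfscgiserv:]

  # Register the init script but leave it off; the admin opts in explicitly.
  chkconfig --add mfscgiserv
  chkconfig mfscgiserv off
  # To run the CGI server standalone instead of under Apache:
  chkconfig mfscgiserv on && service mfscgiserv start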
From: Laurent W. <lw...@hy...> - 2010-08-05 08:18:05
Hi,

I'm in touch with the RPM maintainer about adding an init script for mfscgiserv. He has concerns about its behaviour in a loaded env, and he's wondering if it's a good idea to let it run for a long time as a standalone, arguing that if it has to run all the time, you'd better run it with apache (he provides a package especially for that).

What are your experiences with mfscgiserv?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: SaVaGe <sa...@ba...> - 2010-08-04 15:20:47
Hello,

I just noticed that there are per-directory quota support functions which are set to be active on VERSMID >= 7. I just wanted to ask what the state of development of these functions is: what needs to be done, or is it just that they are not tested?

Also, regarding .getlk and .setlk: is that something that will be stable in the foreseeable future?

Best Regards,
Iliya
From: Laurent W. <lw...@hy...> - 2010-07-30 09:38:05
On Sun, 25 Jul 2010 12:54:23 +0300, Stas Oskin <sta...@gm...> wrote:
> Hi.
>
> I checked the latest rpmforge mfs and it worked great.
>
> I did notice the mfs-cgi package requires a full-fledged httpd server.
>
> As mfs-cgi can work on its own, I wondered if you, Steve, could create an
> additional init file which will allow launching the CGI server on
> start-up.
>
> Thanks in advance!

The mfs-cgi package is only for use with a webserver, so no init script in this one. The standalone cgi server is still available in the mfs package, AFAIK. Just checked: no init script for this one, will ask Steve to add it.

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Stas O. <sta...@gm...> - 2010-07-29 22:53:33
Sorry, forgot to forward to the list.

---------- Forwarded message ----------
From: Stas Oskin <sta...@gm...>
Date: 2010/7/30
Subject: Re: [Moosefs-users] Backing up MFS metadata
To: Michał Borychowski <mic...@ge...>

Question.

> Yes, you can fully restore metadata from files saved by metalogger.

How far behind the master is the metalogger? Where is it better to back up, on the master or on the metalogger?

Thanks.
From: Stas O. <sta...@gm...> - 2010-07-29 22:44:49
Hi. Getting back to this to clarify the matter:

>> You can back up the changelog files (you would need just the two newest ones – "0" and "1"). You can make these backups even every minute. So you can potentially lose information for about 1-2 minutes.

According to the article, only the "0" file contains the ongoing info, so why is the "1" needed? How often are the checkpoint files updated?

>> But the question is – why back up changelogs manually? Metalogger machines are dedicated to this. You can have as many metalogger machines on the network as you like. And the metalogger process can be run on any computer, even an older one.

Perhaps I can put metaloggers on the chunkservers? How many resources does the metalogger take?

Regards.
From: Stas O. <sta...@gm...> - 2010-07-29 22:30:22
Hi.

Following this article: http://www.moosefs.org/mini-howtos.html#redundant-master

Has someone tried to set up Heartbeat to both move the virtual IP and switch the services from metalogger to master?

Regards.
From: Chen, A. <alv...@in...> - 2010-07-29 02:32:10
I mean setting goal to 3. With goal set to 1, the writing speed is 9 MBytes/sec on 100 Mbps networking, but with goal set to 3 the speed is just 500 KBytes/sec. On 100 Mbps networking the speed should reach 12.5 MBytes/sec, so with goal 1, 9 MBytes/sec is reasonable, but with goal 3 the speed should be around 3 MBytes/sec. By the way, I just use scp to copy a 4GB data file to the mount folder.

Best regards,
Alvin Chen
ICFS Platform Engineering Solution
Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region
Intel Information Technology
Tel. 010-82171960 inet.8-7581960
Email. alv...@in...

From: mic...@ge... [mailto:mic...@ge...]
Sent: Wednesday, July 28, 2010 7:45 PM
To: Chen, Alvin
Cc: moo...@li...
Subject: RE: [Moosefs-users] How fast can you copy files to your Moosefs ?

First of all, if you want better performance you should use a gigabit network. We have writes of about 20-30 MiB/s (have a look here: http://www.moosefs.org/moosefs-faq.html#average). You can also have a look here: http://www.moosefs.org/moosefs-faq.html#mtu for some network tips.

PS. Talking about 3 copies, do you mean setting goal=3 or copying 3 files simultaneously?

Kind regards
Michal Borychowski
MooseFS Support Manager
Gemius S.A.

From: Chen, Alvin [mailto:alv...@in...]
Sent: Tuesday, July 27, 2010 10:52 AM
To: moo...@li...
Subject: [Moosefs-users] How fast can you copy files to your Moosefs ?

Hi guys,

I am a new user of MooseFS. I have 3 chunkservers and one master server on a 100 Mbps network. I just copied a 4GB file from one client machine to MooseFS, and the copying speed can reach 9 MB/s with just one copy, but the copying speed is just 500 KB/s with 3 copies. How fast can your MooseFS go? Does anybody get better performance?

Best regards,
Alvin Chen
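[For reference, the arithmetic behind Alvin's expectation, noting that the replication model is his assumption and is not confirmed in the thread: 100 Mbit/s is 12.5 MByte/s of raw bandwidth, so the goal=1 result of 9 MByte/s is roughly 70% of line rate, plausible after TCP and FUSE overhead. If a single 100 Mbit client uplink had to carry all three copies, the ceiling would be about 12.5/3, i.e. roughly 4 MByte/s, hence his estimate of about 3 MByte/s; the measured 500 KByte/s is nearly an order of magnitude below that, which is why it looks anomalous.]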
From: Roast <zha...@gm...> - 2010-07-29 02:11:44
Hi, all.

I want to set up the metalogger server as the backup for the master, so I used mfsmetarestore to generate the metadata.mfs needed to start the master on the metalogger server. But it doesn't work. Here are some logs:

-----------------------------------------------------------------------
  [root@localhost ~]# /usr/local/mfs/sbin/mfsmetarestore -m /data/mfs/meta/metadata_ml.mfs.back -o /data/mfs/meta/metadata.mfs /data/mfs/meta/changelog_ml.*.mfs
  loading objects (files,directories,etc.) ... ok
  loading names ... ok
  loading deletion timestamps ... ok
  checking filesystem consistency ... ok
  loading chunks data ... ok
  connecting files and chunks ... ok
  applying changes from file: /data/mfs/meta/changelog_ml.0.mfs
  meta data version: 961564
  version after applying changelog: 961564
  applying changes from file: /data/mfs/meta/changelog_ml.10.mfs
  meta data version: 961564
  version after applying changelog: 961564
  (identical output for changelog_ml.11.mfs through changelog_ml.19.mfs)
  applying changes from file: /data/mfs/meta/changelog_ml.1.mfs
  meta data version: 961564
  963911: version mismatch
  [root@localhost ~]#
-----------------------------------------------------------------------

version mismatch? How do I fix this problem?

Thanks.

--
The time you enjoy wasting is not wasted time!
From: Scoleri, S. <Sco...@gs...> - 2010-07-28 13:53:32
Any chance we can get the mfsmaster version to show up in the CGI? The chunkserver versions show up, but it would be cool for the metaserver version to be there as well.

BTW, updated my mfsmaster this morning (flawless).

Thanks,
-Scoleri
From: kuer ku <ku...@gm...> - 2010-07-28 13:50:33
hi, all,

I find some strange things in the changelog:

  $ tail -f changelog.0.mfs
  143616569: 1280322540|EMPTYRESERVED():0
  143616570: 1280322600|FREEINODES():0
  143616571: 1280322600|EMPTYRESERVED():0
  143616572: 1280322600|EMPTYTRASH():0,0
  143616573: 1280322660|FREEINODES():0
  143616574: 1280322660|EMPTYRESERVED():0
  143616575: 1280322720|FREEINODES():0
  143616576: 1280322720|EMPTYRESERVED():0
  143616577: 1280322780|FREEINODES():0
  143616578: 1280322780|EMPTYRESERVED():0

what does this mean, and how do I fix it?

  Jul 28 21:19:00 meta1 mfsmaster[23172]: chunkservers status:
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 1 (ip: 221.194.134.189, port: 19322): usedspace: 943864180736 (879.04 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.92%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 2 (ip: 221.194.134.187, port: 19322): usedspace: 957016182784 (891.29 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.79%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 3 (ip: 221.194.134.181, port: 19322): usedspace: 1898559021056 (1768.17 GiB), totalspace: 3000328257536 (2794.27 GiB), usage: 63.28%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 4 (ip: 221.194.134.186, port: 19322): usedspace: 940963352576 (876.34 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.72%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 5 (ip: 221.194.134.184, port: 19322): usedspace: 944276942848 (879.43 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 62.94%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 6 (ip: 221.194.134.190, port: 19322): usedspace: 1893327695872 (1763.30 GiB), totalspace: 3000328257536 (2794.27 GiB), usage: 63.10%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 7 (ip: 221.194.134.188, port: 19322): usedspace: 957261549568 (891.52 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.81%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 8 (ip: 221.194.134.185, port: 19322): usedspace: 957269495808 (891.53 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.81%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 9 (ip: 221.194.134.183, port: 19322): usedspace: 957314211840 (891.57 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.81%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: server 10 (ip: 221.194.134.182, port: 19322): usedspace: 956960980992 (891.24 GiB), totalspace: 1500164128768 (1397.14 GiB), usage: 63.79%
  Jul 28 21:19:00 meta1 mfsmaster[23172]: total: usedspace: 11406813614080 (10623.42 GiB), totalspace: 18001969545216 (16765.64 GiB), usage: 63.36%

there is still free space on the chunkservers (almost 40% free). what resource ran out? how do I fix it?

thanks

-- kuer

On Wed, Jul 28, 2010 at 8:59 PM, kuer ku <ku...@gm...> wrote:
> hi,
> I just want to know why the master got stuck, but I find nothing wrong in /var/log/messages.
> From /var/log/messages, it seems that the master still works, but why does it NOT exit when it gets SIGTERM?
> Are there any other ways I can find some useful messages?
> thanks
> -- kuer
>
> On Wed, Jul 28, 2010 at 8:52 PM, kuer ku <ku...@gm...> wrote:
>> hi, all
>> I cannot shutdown the moosefs master normally. When shutting down, it shows:
>> working directory: /usr/local/moosefs/bin/master
>> sending SIGTERM to lock owner (pid:23172)
>> waiting for termination ... 10s 20s 30s 40s 50s give up
>> Something must be wrong, what should I do?
>> thanks
>> -- kuer
From: kuer ku <ku...@gm...> - 2010-07-28 12:59:59
hi,

I just want to know why the master got stuck, but I find nothing wrong in /var/log/messages.

From /var/log/messages, it seems that the master still works, but why does it NOT exit when it gets SIGTERM?

Are there any other ways I can find some useful messages?

thanks

-- kuer

On Wed, Jul 28, 2010 at 8:52 PM, kuer ku <ku...@gm...> wrote:
> hi, all
> I cannot shutdown the moosefs master normally. When shutting down, it shows:
> working directory: /usr/local/moosefs/bin/master
> sending SIGTERM to lock owner (pid:23172)
> waiting for termination ... 10s 20s 30s 40s 50s give up
> Something must be wrong, what should I do?
> thanks
> -- kuer
From: kuer ku <ku...@gm...> - 2010-07-28 12:52:39
hi, all

I cannot shutdown the moosefs master normally. When shutting down, it shows:

  working directory: /usr/local/moosefs/bin/master
  sending SIGTERM to lock owner (pid:23172)
  waiting for termination ... 10s 20s 30s 40s 50s give up

Something must be wrong, what should I do?

thanks

-- kuer