From: Michał B. <mic...@ge...> - 2010-06-22 10:47:35
Mfscgiserv was not touched by the patches; we tested with the exact patches and it worked properly. You can also try to run mfscgiserv with the options -f and -v:

    /usr/local/sbin/mfscgiserv -f -v

This way mfscgiserv works in the foreground and writes the requests it serves, like:

    # /usr/local/sbin/mfscgiserv -f -v
    starting simple cgi server (host: any , port: 9425 , rootpath: /usr/local/share/mfscgi)
    Asynchronous HTTP server running on port 9425
    localhost - - [22/Jun/2010 11:14:11] "GET / HTTP/1.1" 301
    localhost - - [22/Jun/2010 11:14:11] "GET /index.html HTTP/1.1" 200
    localhost - - [22/Jun/2010 11:14:12] "GET /mfs.cgi HTTP/1.1" 200
    localhost - - [22/Jun/2010 11:14:12] "GET /mfs.css HTTP/1.1" 200
    localhost - - [22/Jun/2010 11:14:12] "GET /logomini.png HTTP/1.1" 200

This should give us more interesting information. We were also wondering if you could test your environment of 400 million files with the master swap file on an SSD drive?

Regards
Michał

From: marco lu [mailto:mar...@gm...]
Sent: Tuesday, June 22, 2010 10:17 AM
To: Michał Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

Thank you, Michał Borychowski! This problem is resolved and the mfs system is restored too. Another question: when I recompile mfsmaster as you said, the mfscgiserv process cannot work normally. The process disappears when I visit the URL, without any message (in syslog or dmesg) to debug the problem. Thanks again.

Mumonitor

2010/6/21 Michał Borychowski <mic...@ge...>

We give you here some quick patches you can apply to the master server to improve its performance for that amount of files:

In matocsserv.c in mfsmaster you need to change this line:

    #define MaxPacketSize 50000000

into this:

    #define MaxPacketSize 500000000

Also we suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:

    if ((uint32_t)(main_time())<=starttime+150) {

into:

    if ((uint32_t)(main_time())<=starttime+900) {

And also change this line:

    for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {

into this:

    for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {

You need to recompile the master server and start it again. The above changes should make the master server more stable with a large number of files.

Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be the metalogger for the other system and vice versa.

Kind regards
Michał

From: marco lu [mailto:mar...@gm...]
Sent: Monday, June 21, 2010 6:04 AM
To: moo...@li...
Subject: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

hi, everyone

We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big problem.

I've built one master server (64 GB RAM), one metalogger server, and three chunk servers (10 x 1 TB SATA each). When I copy photo files to the MooseFS system, at first everything is good, but when the master server exhausts its memory I get many errors in syslog from the master server:

    Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
    Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
    Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
    Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg

and some error messages like this:

    Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
    Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
    Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
    Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
    Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)

Is it a memory problem or a kernel tuning problem? Can anyone give me some information?

Thanks, all.

Mumonitor
From: marco lu <mar...@gm...> - 2010-06-22 08:17:18
Thank you, Michał Borychowski! This problem is resolved and the mfs system is restored too. Another question: when I recompile mfsmaster as you said, the mfscgiserv process cannot work normally. The process disappears when I visit the URL, without any message (in syslog or dmesg) to debug the problem. Thanks again.

Mumonitor

2010/6/21 Michał Borychowski <mic...@ge...>

> We give you here some quick patches you can apply to the master server to improve its performance for that amount of files:
>
> In matocsserv.c in mfsmaster you need to change this line:
>
>     #define MaxPacketSize 50000000
>
> into this:
>
>     #define MaxPacketSize 500000000
>
> Also we suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:
>
>     if ((uint32_t)(main_time())<=starttime+150) {
>
> into:
>
>     if ((uint32_t)(main_time())<=starttime+900) {
>
> And also change this line:
>
>     for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {
>
> into this:
>
>     for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {
>
> You need to recompile the master server and start it again. The above changes should make the master server more stable with a large number of files.
>
> Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be the metalogger for the other system and vice versa.
>
> Kind regards
> Michał
>
> From: marco lu [mailto:mar...@gm...]
> Sent: Monday, June 21, 2010 6:04 AM
> To: moo...@li...
> Subject: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)
>
> hi, everyone
>
> We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big problem.
>
> I've built one master server (64 GB RAM), one metalogger server, and three chunk servers (10 x 1 TB SATA each). When I copy photo files to the MooseFS system, at first everything is good, but when the master server exhausts its memory I get many errors in syslog from the master server:
>
> Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
> Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
> Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
> Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg
>
> and some error messages like this:
>
> Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
> Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
> Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
>
> Is it a memory problem or a kernel tuning problem? Can anyone give me some information?
>
> Thanks, all.
>
> Mumonitor
From: Ruan C. <rua...@gm...> - 2010-06-21 16:05:49
Thank you for your reply! I agree with you. I tested on FreeBSD 8.0 and FreeBSD 8.0-p3, and got the same problem. My guess is that when I copy files, Samba makes some operations that FUSE does not support, but I'm not sure :)

2010/6/21 Michał Borychowski <mic...@ge...>:
> Thank you for your submission. We will look into this situation, but when the kernel crashes it is more probably caused by FUSE for FreeBSD than by MooseFS itself.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
>> -----Original Message-----
>> From: Ruan Chunping [mailto:rua...@gm...]
>> Sent: Monday, June 21, 2010 6:53 AM
>> To: moo...@li...
>> Subject: Re: [Moosefs-users] BUG report, mfs+samba+freebsd8amd64 crash
>>
>> and the samba version:
>>
>> pkg_info|grep samba
>> samba-3.0.32_2,1           A free SMB and CIFS client and server for UNIX
>> samba-libsmbclient-3.0.37  Shared libs from the samba package
>>
>> On Mon, Jun 21, 2010 at 12:50 PM, Ruan Chunping <rua...@gm...> wrote:
>> > OS: FreeBSD dev.xxxx.com 8.0-RELEASE-p3 FreeBSD 8.0-RELEASE-p3 #0: Sun
>> > Jun 20 13:06:16 CST 2010
>> > ro...@de...:/usr/obj/usr/src/sys/GENERIC amd64
>> > MFS: mfs-1.6.15.tar.gz from FreeBSD ports
>> > FUSE: fusefs-kmod-0.3.9.p1.20080208_6
>> > fusefs-libs-2.7.4
>> > dmesg|grep fuse
>> > fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
>> >
>> > install and config MFS ( http://www.moosefs.org/reference-guide.html )
>> > ip: 192.168.1.77
>> > master, metalogger, chunkserver, client and cgiserv are installed on one machine.
>> >
>> > /etc/hosts
>> > 192.168.1.77 mfsmaster
>> >
>> > ...
>> >
>> > dd if=/dev/zero of=/chunk1 bs=4m count=1024
>> > dd if=/dev/zero of=/chunk2 bs=4m count=1024
>> > mdconfig -a -t vnode -f /chunk1 -u 0
>> > mdconfig -a -t vnode -f /chunk2 -u 1
>> > newfs -m0 -O2 /dev/md0
>> > newfs -m0 -O2 /dev/md1
>> > mount /dev/md0 /mnt/mfschunk1
>> > mount /dev/md2 /mnt/mfschunk2
>> >
>> > .. ok
>> >
>> > config and start mfschunkserver
>> >
>> > .. ok
>> >
>> > mfsmount /mnt/mfs -H mfsmaster
>> >
>> > .. ok
>> >
>> > cd /mnt/mfs/
>> > mkdir test
>> >
>> > .. ok
>> >
>> > echo "test" > test.txt
>> >
>> > .. ok
>> >
>> > cat test.txt
>> >
>> > .. ok (I noticed read/readdir operations increasing, from mfscgiserv)
>> >
>> > mkdir MFS (/mnt/mfs/MFS)
>> >
>> > .. ok
>> >
>> > ln -s /mnt/mfs/MFS /mnt/SMB/
>> > ls -l /mnt/SMB/MFS
>> >
>> > .. ok
>> >
>> > now, on my work PC (OS: Win7, ip: 192.168.1.10) I can see the MFS folder
>> > ( \\192.168.1.77\SMB )
>> > 1. copy a file (<1M) to \\192.168.1.77\SMB
>> > .. ok
>> > 2. copy the same file to \\192.168.1.77\SMB\MFS\
>> > FreeBSD crashes and auto-reboots, no kernel dump :(
>> >
>> > * I can reproduce this bug
>>
>> --
>> 米胖
>> www.mipang.com
>>
>> ------------------------------------------------------------------------------
>> ThinkGeek and WIRED's GeekDad team up for the Ultimate
>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the
>> lucky parental unit. See the prize list and enter to win:
>> http://p.sf.net/sfu/thinkgeek-promo
>> _______________________________________________
>> moosefs-users mailing list
>> moo...@li...
>> https://lists.sourceforge.net/lists/listinfo/moosefs-users

--
米胖
www.mipang.com
From: Fabien G. <fab...@gm...> - 2010-06-21 14:41:19
Hello,

We had exactly the same issue as Marco this morning (while copying lots of files, it suddenly stopped working with the same error messages). The three modifications in the source code provided by Michał, plus recompilation of the mfsmaster binary, solved the problem; it's back to life :-)

Notice that we "only" have 11'480'000 chunks (whereas Gemius seems to run a 26'000'000-chunk MFS cluster). Do you have any clue why it can happen, given that our current cluster is quite small?

Our configuration: one master server (8 GB of RAM), one master backup server, and 5 chunk servers (1 GB of RAM and 2 x 4 TB HDD on each chunkserver, with about 2'200'000 chunks on each HDD, which means about 4'500'000 chunks stored on each chunk server).

Regards,
Fabien

2010/6/21 Michał Borychowski <mic...@ge...>

> We give you here some quick patches you can apply to the master server to improve its performance for that amount of files:
>
> In matocsserv.c in mfsmaster you need to change this line:
>
>     #define MaxPacketSize 50000000
>
> into this:
>
>     #define MaxPacketSize 500000000
>
> Also we suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:
>
>     if ((uint32_t)(main_time())<=starttime+150) {
>
> into:
>
>     if ((uint32_t)(main_time())<=starttime+900) {
>
> And also change this line:
>
>     for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {
>
> into this:
>
>     for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {
>
> You need to recompile the master server and start it again. The above changes should make the master server more stable with a large number of files.
>
> Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be the metalogger for the other system and vice versa.
>
> Kind regards
> Michał
>
> From: marco lu [mailto:mar...@gm...]
> Sent: Monday, June 21, 2010 6:04 AM
> To: moo...@li...
> Subject: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)
>
> hi, everyone
>
> We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big problem.
>
> I've built one master server (64 GB RAM), one metalogger server, and three chunk servers (10 x 1 TB SATA each). When I copy photo files to the MooseFS system, at first everything is good, but when the master server exhausts its memory I get many errors in syslog from the master server:
>
> Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
> Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
> Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
> Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg
>
> and some error messages like this:
>
> Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
> Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
> Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
>
> Is it a memory problem or a kernel tuning problem? Can anyone give me some information?
>
> Thanks, all.
>
> Mumonitor
From: Laurent W. <lw...@hy...> - 2010-06-21 13:54:34
On Mon, 21 Jun 2010 12:12:03 +0200 Michał Borychowski <mic...@ge...> wrote:
> Thanks for "doxygenizing" the project. I hope it is the first step towards better documentation of the MooseFS source code. But I know a lot still depends on the time of our dev guys.

You're welcome. I've begun to dive into the code to comment it, and it's quite far from easy :) Is MooseFS the result of a thesis or something similar, so I could read a bit about the algorithms used, etc.? I only know filesystem internals very superficially, and MooseFS isn't exactly as « simple » as ext2 :)

I'm still fighting with doxygen, by the way; I may find a better config one of these days. Will keep you informed.

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Michał B. <mic...@ge...> - 2010-06-21 11:51:08
We give you here some quick patches you can apply to the master server to improve its performance for that amount of files:

In matocsserv.c in mfsmaster you need to change this line:

    #define MaxPacketSize 50000000

into this:

    #define MaxPacketSize 500000000

Also we suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:

    if ((uint32_t)(main_time())<=starttime+150) {

into:

    if ((uint32_t)(main_time())<=starttime+900) {

And also change this line:

    for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {

into this:

    for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {

You need to recompile the master server and start it again. The above changes should make the master server more stable with a large number of files.

Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be the metalogger for the other system and vice versa.

Kind regards
Michał

From: marco lu [mailto:mar...@gm...]
Sent: Monday, June 21, 2010 6:04 AM
To: moo...@li...
Subject: [Moosefs-users] mfs-master[4166]: CS(10.10.10.10) packet too long (226064141/50000000)

hi, everyone

We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big problem.

I've built one master server (64 GB RAM), one metalogger server, and three chunk servers (10 x 1 TB SATA each). When I copy photo files to the MooseFS system, at first everything is good, but when the master server exhausts its memory I get many errors in syslog from the master server:

    Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
    Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
    Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
    Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg

and some error messages like this:

    Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
    Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
    Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
    Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
    Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)

Is it a memory problem or a kernel tuning problem? Can anyone give me some information?

Thanks, all.

Mumonitor
From: Roast <zha...@gm...> - 2010-06-21 10:58:03
Master server clustering, or the ability to store metadata on disk, would be a great feature for us.

On Mon, Jun 21, 2010 at 12:03 PM, marco lu <mar...@gm...> wrote:
> hi, everyone
>
> We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big problem.
>
> I've built one master server (64 GB RAM), one metalogger server, and three chunk servers (10 x 1 TB SATA each). When I copy photo files to the MooseFS system, at first everything is good, but when the master server exhausts its memory I get many errors in syslog from the master server:
>
> Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
> Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
> Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
> Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg
>
> and some error messages like this:
>
> Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
> Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
> Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
>
> Is it a memory problem or a kernel tuning problem? Can anyone give me some information?
>
> Thanks, all.
>
> Mumonitor

--
The time you enjoy wasting is not wasted time!
From: Michał B. <mic...@ge...> - 2010-06-21 10:57:06
Thank you for your submission. We will look into this situation, but when the kernel crashes it is more probably caused by FUSE for FreeBSD than by MooseFS itself.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

> -----Original Message-----
> From: Ruan Chunping [mailto:rua...@gm...]
> Sent: Monday, June 21, 2010 6:53 AM
> To: moo...@li...
> Subject: Re: [Moosefs-users] BUG report, mfs+samba+freebsd8amd64 crash
>
> and the samba version:
>
> pkg_info|grep samba
> samba-3.0.32_2,1           A free SMB and CIFS client and server for UNIX
> samba-libsmbclient-3.0.37  Shared libs from the samba package
>
> On Mon, Jun 21, 2010 at 12:50 PM, Ruan Chunping <rua...@gm...> wrote:
> > OS: FreeBSD dev.xxxx.com 8.0-RELEASE-p3 FreeBSD 8.0-RELEASE-p3 #0: Sun
> > Jun 20 13:06:16 CST 2010
> > ro...@de...:/usr/obj/usr/src/sys/GENERIC amd64
> > MFS: mfs-1.6.15.tar.gz from FreeBSD ports
> > FUSE: fusefs-kmod-0.3.9.p1.20080208_6
> > fusefs-libs-2.7.4
> > dmesg|grep fuse
> > fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
> >
> > install and config MFS ( http://www.moosefs.org/reference-guide.html )
> > ip: 192.168.1.77
> > master, metalogger, chunkserver, client and cgiserv are installed on one machine.
> >
> > /etc/hosts
> > 192.168.1.77 mfsmaster
> >
> > ...
> >
> > dd if=/dev/zero of=/chunk1 bs=4m count=1024
> > dd if=/dev/zero of=/chunk2 bs=4m count=1024
> > mdconfig -a -t vnode -f /chunk1 -u 0
> > mdconfig -a -t vnode -f /chunk2 -u 1
> > newfs -m0 -O2 /dev/md0
> > newfs -m0 -O2 /dev/md1
> > mount /dev/md0 /mnt/mfschunk1
> > mount /dev/md2 /mnt/mfschunk2
> >
> > .. ok
> >
> > config and start mfschunkserver
> >
> > .. ok
> >
> > mfsmount /mnt/mfs -H mfsmaster
> >
> > .. ok
> >
> > cd /mnt/mfs/
> > mkdir test
> >
> > .. ok
> >
> > echo "test" > test.txt
> >
> > .. ok
> >
> > cat test.txt
> >
> > .. ok (I noticed read/readdir operations increasing, from mfscgiserv)
> >
> > mkdir MFS (/mnt/mfs/MFS)
> >
> > .. ok
> >
> > ln -s /mnt/mfs/MFS /mnt/SMB/
> > ls -l /mnt/SMB/MFS
> >
> > .. ok
> >
> > now, on my work PC (OS: Win7, ip: 192.168.1.10) I can see the MFS folder
> > ( \\192.168.1.77\SMB )
> > 1. copy a file (<1M) to \\192.168.1.77\SMB
> > .. ok
> > 2. copy the same file to \\192.168.1.77\SMB\MFS\
> > FreeBSD crashes and auto-reboots, no kernel dump :(
> >
> > * I can reproduce this bug
>
> --
> 米胖
> www.mipang.com
From: Ruan C. <rua...@gm...> - 2010-06-21 04:53:39
and the samba version:

    pkg_info|grep samba
    samba-3.0.32_2,1           A free SMB and CIFS client and server for UNIX
    samba-libsmbclient-3.0.37  Shared libs from the samba package

On Mon, Jun 21, 2010 at 12:50 PM, Ruan Chunping <rua...@gm...> wrote:
> OS: FreeBSD dev.xxxx.com 8.0-RELEASE-p3 FreeBSD 8.0-RELEASE-p3 #0: Sun
> Jun 20 13:06:16 CST 2010
> ro...@de...:/usr/obj/usr/src/sys/GENERIC amd64
> MFS: mfs-1.6.15.tar.gz from FreeBSD ports
> FUSE: fusefs-kmod-0.3.9.p1.20080208_6
> fusefs-libs-2.7.4
> dmesg|grep fuse
> fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8
>
> install and config MFS ( http://www.moosefs.org/reference-guide.html )
> ip: 192.168.1.77
> master, metalogger, chunkserver, client and cgiserv are installed on one machine.
>
> /etc/hosts
> 192.168.1.77 mfsmaster
>
> ...
>
> dd if=/dev/zero of=/chunk1 bs=4m count=1024
> dd if=/dev/zero of=/chunk2 bs=4m count=1024
> mdconfig -a -t vnode -f /chunk1 -u 0
> mdconfig -a -t vnode -f /chunk2 -u 1
> newfs -m0 -O2 /dev/md0
> newfs -m0 -O2 /dev/md1
> mount /dev/md0 /mnt/mfschunk1
> mount /dev/md2 /mnt/mfschunk2
>
> .. ok
>
> config and start mfschunkserver
>
> .. ok
>
> mfsmount /mnt/mfs -H mfsmaster
>
> .. ok
>
> cd /mnt/mfs/
> mkdir test
>
> .. ok
>
> echo "test" > test.txt
>
> .. ok
>
> cat test.txt
>
> .. ok (I noticed read/readdir operations increasing, from mfscgiserv)
>
> mkdir MFS (/mnt/mfs/MFS)
>
> .. ok
>
> ln -s /mnt/mfs/MFS /mnt/SMB/
> ls -l /mnt/SMB/MFS
>
> .. ok
>
> now, on my work PC (OS: Win7, ip: 192.168.1.10) I can see the MFS folder
> ( \\192.168.1.77\SMB )
> 1. copy a file (<1M) to \\192.168.1.77\SMB
> .. ok
> 2. copy the same file to \\192.168.1.77\SMB\MFS\
> FreeBSD crashes and auto-reboots, no kernel dump :(
>
> * I can reproduce this bug

--
米胖
www.mipang.com
From: Ruan C. <rua...@gm...> - 2010-06-21 04:51:18
OS: FreeBSD dev.xxxx.com 8.0-RELEASE-p3 FreeBSD 8.0-RELEASE-p3 #0: Sun Jun 20 13:06:16 CST 2010
ro...@de...:/usr/obj/usr/src/sys/GENERIC amd64
MFS: mfs-1.6.15.tar.gz from FreeBSD ports
FUSE: fusefs-kmod-0.3.9.p1.20080208_6
      fusefs-libs-2.7.4

    dmesg|grep fuse
    fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

Install and configure MFS ( http://www.moosefs.org/reference-guide.html )
ip: 192.168.1.77; master, metalogger, chunkserver, client and cgiserv are all installed on one machine.

/etc/hosts:

    192.168.1.77 mfsmaster

...

    dd if=/dev/zero of=/chunk1 bs=4m count=1024
    dd if=/dev/zero of=/chunk2 bs=4m count=1024
    mdconfig -a -t vnode -f /chunk1 -u 0
    mdconfig -a -t vnode -f /chunk2 -u 1
    newfs -m0 -O2 /dev/md0
    newfs -m0 -O2 /dev/md1
    mount /dev/md0 /mnt/mfschunk1
    mount /dev/md2 /mnt/mfschunk2

.. ok

Configure and start mfschunkserver

.. ok

    mfsmount /mnt/mfs -H mfsmaster

.. ok

    cd /mnt/mfs/
    mkdir test

.. ok

    echo "test" > test.txt

.. ok

    cat test.txt

.. ok (I noticed read/readdir operations increasing, from mfscgiserv)

    mkdir MFS    (/mnt/mfs/MFS)

.. ok

    ln -s /mnt/mfs/MFS /mnt/SMB/
    ls -l /mnt/SMB/MFS

.. ok

Now, on my work PC (OS: Win7, ip: 192.168.1.10) I can see the MFS folder ( \\192.168.1.77\SMB ):
1. copy a file (<1M) to \\192.168.1.77\SMB
.. ok
2. copy the same file to \\192.168.1.77\SMB\MFS\
FreeBSD crashes and auto-reboots, no kernel dump :(

* I can reproduce this bug
From: marco lu <mar...@gm...> - 2010-06-21 04:03:55
hi, everyone

We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 400 million photo files, so the master server's memory is a big problem.

I've built one master server (64 GB RAM), one metalogger server, and three chunk servers (10 x 1 TB SATA each). When I copy photo files to the MooseFS system, at first everything is good, but when the master server exhausts its memory I get many errors in syslog from the master server:

    Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 00000000018140FF (inode: 26710547 ; index: 0)
    Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 26710547: img.xxx.com/003/810/560/b.jpg
    Jun 21 11:48:58 mfs-master[4166]: currently unavailable chunk 000000000144B907 (inode: 22516243 ; index: 0)
    Jun 21 11:48:58 mfs-master[4166]: * currently unavailable file 22516243: img.xxx.com/051/383/419/a.jpg

and some error messages like this:

    Jun 21 11:49:31 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.11, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
    Jun 21 11:50:03 mfs-master[4166]: CS(10.25.40.111) packet too long (226064141/50000000)
    Jun 21 11:50:03 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.12, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
    Jun 21 11:50:34 mfs-master[4166]: CS(10.25.40.113) packet too long (217185941/50000000)
    Jun 21 11:50:34 mfs-master[4166]: chunkserver disconnected - ip: 10.10.10.13, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)

Is it a memory problem or a kernel tuning problem? Can anyone give me some information?

Thanks, all.

Mumonitor
From: Laurent W. <lw...@hy...> - 2010-06-17 14:04:55
Hi,

I've used doxywizard on the git repo. Of course, I've not yet begun to doxy-annotate the code. I bet the devs would be much faster than me at commenting the whole beast :) I'll try to do that, but it will take quite a bunch of time, and free time is seldom. The Doxyfile is provided.

Please find everything here: http://centos.kodros.fr/doxygen.tar.gz

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-06-17 12:40:25
|
Hi, Please find attached the .spec used for rpms creation. Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2010-06-17 12:30:19
|
On Thu, 17 Jun 2010 14:06:33 +0200 Michał Borychowski <mic...@ge...> wrote: > [ ... snip ... ] > [MB] Maybe someone from the community would like to take care of code documentation? For a start Doxygen would help to prepare the on-line html files. OK, I'll try to take care of it. Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-06-17 12:06:53
|
[ ... snip ... ] > > > OK. I haven't been able to find a place explaining code structure > > > (call graphs, etc.). Does it already exist ? > > [MB] No, there are no call graphs, no code structure overview prepared by > us. Possibly Doxygen could make most of it? > I think so. It would greatly help for maintainability, and for new coders to > dive into the beast. > Is it on the roadmap ? [MB] Maybe someone from the community would like to take care of code documentation? For a start Doxygen would help to prepare the on-line html files. [ ... snip ... ] Kind regards Michał |
From: Laurent W. <lw...@hy...> - 2010-06-17 12:00:13
|
On Thu, 17 Jun 2010 14:06:33 +0200 Michał Borychowski <mic...@ge...> wrote: > [ ... snip ... ] > > > [MB] We plan to make it more automatic but I'm afraid it won't be in the > > next release. > > Do you mean 1.6.16 or 1.7 ? :) > [MB] For sure not 1.6.16 and maybe in some version in 1.7.x I guessed so, but at least it's clear now. > > > [ ... snip ... ] > > OK. I haven't been able to find a place explaining code structure (call > > graphs, etc.). Does it already exist ? > [MB] No, there are no call graphs, no code structure overview prepared by us. Possibly Doxygen could make most of it? I think so. It would greatly help maintainability, and help new coders dive into the beast. Is it on the roadmap ? > <snip> > [MB] Our technical team will look into these files and probably would incorporate them in one of the next releases. OK, I'll send the latest .spec file version in a couple of minutes. Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-06-17 11:51:08
|
[ ... snip ... ] > > [MB] We plan to make it more automatic but I'm afraid it won't be in the > next release. > Do you mean 1.6.16 or 1.7 ? :) [MB] For sure not 1.6.16 and maybe in some version in 1.7.x [ ... snip ... ] > > [MB] MooseFS is still developed mainly by Gemius as it is widely used in the > company and the development is backed up by the community. > OK. I haven't been able to find a place explaining code structure (call > graphs, etc.). Does it already exist ? [MB] No, there are no call graphs, no code structure overview prepared by us. Possibly Doxygen could make most of it? > > And also thank you for the CentOS repo! > You're welcome. Some people on the CentOS mailing-list expressed concerns about > the security of the RPMS. Quite normal: as an anonymous, recent user of MooseFS, > I could have put anything in the source before building the repo (of course I > did NOT do that:). > Do you have a bit of workforce to verify* they are OK so you can « officially » > back them (I'll maintain them happily if you want), or to host them so such > doubt would not arise anymore ? > Regards, > * > rpm -Uvh http://centos.kodros.fr/5/SRPMS/mfs-1.6.15-2.src.rpm > wget > http://sourceforge.net/projects/moosefs/files/moosefs/1.6.15/mfs- > 1.6.15.tar.gz/download > md5sum mfs-1.6.15.tar.gz /usr/src/redhat/SOURCES/mfs-1.6.15.tar.gz > result is 90749a0fb0e55c0013fae6e23a990bd9 for both here. > and don't forget to take a look at /usr/src/redhat/SPEC/mfs.spec to check that > nothing is applied to the sources :) Thanks, [MB] Our technical team will look into these files and probably will incorporate them in one of the next releases. Regards Michał |
From: Roast <zha...@gm...> - 2010-06-17 01:21:07
|
Thank you very much, Michał Borychowski and Laurent Wandrebeck. And I think a users list at the official site would be very helpful for us. :) 2010/6/16 Michał Borychowski <mic...@ge...> > > > We are plan to use moosefs at our product environment as the storage > > > of our online photo service. > > > > > > But since we got that master server store all the metadata at memory, > > > but we will store for about a hundred million photo files, so I wonder > > > how much memory should prepare for the master server? And how to > > > calculate this number? > > According to FAQ, http://www.moosefs.org/moosefs-faq.html#cpu , at > gemius, > 8GB > > ram is used by master for 25 millions files. So, for a hundred millions, > you'd > > need 32GB. > [MB] Yes, that's right 32GB would be enough to keep metadata in RAM but in > order that the whole system works smoothly you would need preferably 40-48 > GB of RAM in the master server. > > > > > If the memory is not enough, what will happened with master server's? > > I guess it'll swap. > [MB] The performance of the whole system would be substantially lower. > > > > > And I still wonder the performance about master server when use > > > moosefs to store a hundred million photo files? Anyone can give me > > > some more information? > > I've no experience with such a large setup. I guess memory caching used > to > > prevent bottleneck on master will still do the trick. > [MB] When you would have 40-48GB of RAM in the master server the system > would have no problems with performance or stability. > > If you need any further assistance please let us know. > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul.
Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > > > > > regards, > > -- > > Laurent Wandrebeck > > HYGEOS, Earth Observation Department / Observation de la Terre > > Euratechnologies > > 165 Avenue de Bretagne > > 59000 Lille, France > > tel: +33 3 20 08 24 98 > > http://www.hygeos.com > > GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C > D17C > > F64C > > -- The time you enjoy wasting is not wasted time! |
From: Michał B. <mic...@ge...> - 2010-06-16 11:50:21
|
> > We are plan to use moosefs at our product environment as the storage > > of our online photo service. > > > > But since we got that master server store all the metadata at memory, > > but we will store for about a hundred million photo files, so I wonder > > how much memory should prepare for the master server? And how to > > calculate this number? > According to FAQ, http://www.moosefs.org/moosefs-faq.html#cpu , at gemius, 8GB > ram is used by master for 25 millions files. So, for a hundred millions, you'd > need 32GB. [MB] Yes, that's right 32GB would be enough to keep metadata in RAM but in order that the whole system works smoothly you would need preferably 40-48 GB of RAM in the master server. > > If the memory is not enough, what will happened with master server's? > I guess it'll swap. [MB] The performance of the whole system would be substantially lower. > > And I still wonder the performance about master server when use > > moosefs to store a hundred million photo files? Anyone can give me > > some more information? > I've no experience with such a large setup. I guess memory caching used to > prevent bottleneck on master will still do the trick. [MB] When you would have 40-48GB of RAM in the master server the system would have no problems with performance or stability. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 > regards, > -- > Laurent Wandrebeck > HYGEOS, Earth Observation Department / Observation de la Terre > Euratechnologies > 165 Avenue de Bretagne > 59000 Lille, France > tel: +33 3 20 08 24 98 > http://www.hygeos.com > GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C > F64C |
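The sizing rule discussed in this thread (per the FAQ, 8 GB of RAM in the master for 25 million files, with 40-48 GB recommended for 100 million files so the whole system runs smoothly) can be sketched as a quick back-of-the-envelope calculation. This is not an official formula; the 1.5x headroom factor is our own assumption, chosen so the result matches the 40-48 GB advice above:

```python
# Rough master-server RAM estimate from the FAQ ratio of
# ~8 GiB per 25 million files (~343 bytes of metadata per file).
# The 1.5x headroom factor is an assumption, not an official figure.

BYTES_PER_FILE = 8 * 1024**3 / 25_000_000  # ~343 bytes/file (FAQ ratio)
HEADROOM = 1.5  # extra RAM so the whole system "works smoothly"

def estimated_master_ram_gib(num_files: int) -> float:
    """Return a rough RAM recommendation in GiB for the master server."""
    return num_files * BYTES_PER_FILE * HEADROOM / 1024**3

# 100 million files -> roughly 48 GiB, matching the 40-48 GB advice above.
print(round(estimated_master_ram_gib(100_000_000)))  # -> 48
```

By the same ratio, the 400-million-file setup described elsewhere in this thread needs roughly 128 GiB just for metadata, which would explain why a 64 GB master runs out of memory and starts swapping.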
From: Laurent W. <lw...@hy...> - 2010-06-16 11:10:00
|
On Wed, 16 Jun 2010 12:39:31 +0200 Michał Borychowski <mic...@ge...> wrote: > Hi Laurent! Hi Michał ! > > Thank you very much for registration of moosefs channel. If you could also prepare a short description on how to use the channel which we would post on our Contact page. Here it is: How to join us on IRC: - webchat: Go to http://webchat.freenode.net/ , choose a nickname, put #moosefs in the Channels box. Click on connect. If you already have a registered nickname, don't forget to check « Auth to services ». - IRC client (xchat, weechat, epic, mirc, you name it): First properly configure your client (nickname…). Join a freenode server: /server chat.freenode.net if using IPv4, /server ipv6.chat.freenode.net for IPv6. Join the channel: /join #moosefs For more detailed help, please see http://freenode.net/faq.shtml and your IRC client documentation. -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-06-16 10:39:54
|
Hi Laurent! Thank you very much for registration of moosefs channel. If you could also prepare a short description on how to use the channel which we would post on our Contact page. Regards Michal > -----Original Message----- > From: Laurent Wandrebeck [mailto:lw...@hy...] > Sent: Wednesday, June 16, 2010 11:19 AM > To: moo...@li... > Subject: Re: [Moosefs-users] IRC channel > > On Mon, 14 Jun 2010 15:36:24 +0200 > Laurent Wandrebeck <lw...@hy...> wrote: > > > Hi, > > > > Is the registration of a moosefs channel on an IRC network planned ? > > It'd be nice to keep in touch devs and users IMHO. I checked freenode > > and OFTC networks, #moosefs and #mfs are available. > As silence is consent, I've registered #moosefs on freenode. > Of course I'll give rights to Michał and Jakub. > Waiting to meet you there, > regards, > -- > Laurent Wandrebeck > HYGEOS, Earth Observation Department / Observation de la Terre > Euratechnologies > 165 Avenue de Bretagne > 59000 Lille, France > tel: +33 3 20 08 24 98 > http://www.hygeos.com > GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C > F64C |
From: Michał B. <mic...@ge...> - 2010-06-16 10:34:31
|
You can download screens from our CGI monitor and have a look at loads on the master server and on one of chunks: http://www.moosefs.org/tl_files/monitor_screens_200912.tar Kind regards Michał Borychowski From: Stas Oskin [mailto:sta...@gm...] Sent: Monday, June 14, 2010 2:48 PM To: Laurent Wandrebeck Cc: moo...@li... Subject: Re: [Moosefs-users] updated spec file Hi. I'm using the initial rpm I built two weeks ago without problem up to now. 6 machines involved (1 master, 1 metalogger, 4 chunks). Can you share your typical loads? PS: please don't forget to post to the list. Sure. |
From: Laurent W. <lw...@hy...> - 2010-06-16 10:18:39
|
Hi, Please find attached a small patch that adds an entry for BIND_HOST (which *should* be correct if I understood the code correctly). That is, BIND_HOST allows one to specify the IP address the chunkserver will bind to. Thanks, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-06-16 10:04:30
|
From: Stas Oskin [mailto:sta...@gm...] Sent: Monday, June 14, 2010 1:27 PM To: moo...@li... Subject: [Moosefs-users] Fwd: Append and seek while writing functionality Also, can any process / machine read a file being written by another process / machine? (I presume it can, from the ability to write to the same file from multiple machines). [MB] Yes, it can. So this covers appending, but can a write operation to a file pause, jump to the file start for example, and update some data? [MB] Yes, seeking (jumping) is fully supported by MooseFS (and writing in that place). The only thing you cannot do is have two clients simultaneously append (write at the end) to the same file. The FUSE library only performs operations similar to "pwrite" - saving X bytes of data starting at position Y. The starting position depends on the operating system. An "append" operation would probably consist of several operations: open, getsize, N*pwrite, close. So if two clients' computers performed these operations in parallel, "getsize" would initially return the same value on both clients, both would start writing data at the same position, they would overwrite each other's data, and the finished file would have doubled length. The only possible solution would be to allow only one computer to write from "open" till "close". Thanks to this, only one computer at a given moment could have the file opened for writing. For the moment we are not going to implement this limitation because appending to the same file from several computers is quite an unusual thing. (Do you have a real-life example where it would be necessary and helpful?) The other option is simply to use lockfiles yourself. Or it would also be possible to introduce parallel appending to the same file in the form of "booking" space - so every client which wants to append declares how many bytes it wants to append.
The scenario of writing X bytes to file Y would look something like this: lock(Y+".lock") FD=open(Y) S=FD.getsize() FD.setsize(S+X) unlock(Y+".lock") FD.write(data,S,X) FD.close() Regards Michał 2010/6/14 Stas Oskin <sta...@gm...> Hi. Thanks for the explanation. So this covers appending, but can a write operation to a file pause, jump to the file start for example, and update some data? Regards. 2010/6/14 Michał Borychowski <mic...@ge...> MooseFS fully supports appending to a file and writing to any position of a file. It also supports creating sparse files. Two independent processes / machines may write in parallel to the same file at different positions. If two positions are located in different chunks (pos1 / 64Mi != pos2 / 64Mi) the writing process would run at normal speed; if two positions are in the same chunk you can expect a substantial loss of writing speed (due to the chunk lock for writing). Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Stas Oskin [mailto:sta...@gm...] Sent: Thursday, June 10, 2010 2:13 PM To: moo...@li... Cc: MooseFS Subject: [Moosefs-users] Append and seek while writing functionality Hi. Does MooseFS support append functionality? Also, does it support the ability to seek while a file is being written and write data in another place (like a regular file system)? Thanks! |
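The "booking" scheme sketched in the message above can be illustrated in Python. This is a hypothetical sketch of the described protocol, not MooseFS code: the function name `booked_append` and the busy-wait loop are our own simplifications. The key idea is that the lockfile guards only the size reservation (getsize + setsize), so the slow data write happens outside the critical section:

```python
import os

def booked_append(path: str, data: bytes) -> int:
    """Append `data` to `path` by booking space under a lockfile.

    Returns the offset at which the data was written.
    """
    lock = path + ".lock"
    # O_EXCL makes creation atomic: exactly one writer gets the lock.
    while True:
        try:
            lock_fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            pass  # another writer holds the lock; busy-wait (sketch only)
    try:
        fd = os.open(path, os.O_RDWR | os.O_CREAT)
        size = os.fstat(fd).st_size         # S = FD.getsize()
        os.ftruncate(fd, size + len(data))  # FD.setsize(S+X): book the space
    finally:
        os.close(lock_fd)
        os.remove(lock)                     # unlock(Y+".lock")
    os.pwrite(fd, data, size)               # FD.write(data, S, X), lock-free
    os.close(fd)
    return size
```

Note that a real implementation would need stale-lock handling (a timeout or cleanup step): if a writer crashed between taking and releasing the lock, this sketch would spin forever.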
From: Laurent W. <lw...@hy...> - 2010-06-16 09:19:05
|
On Mon, 14 Jun 2010 15:36:24 +0200 Laurent Wandrebeck <lw...@hy...> wrote: > Hi, > > Is the registration of a moosefs channel on an IRC network planned ? > It'd be nice to keep in touch devs and users IMHO. I checked freenode > and OFTC networks, #moosefs and #mfs are available. As silence is consent, I've registered #moosefs on freenode. Of course I'll give rights to Michał and Jakub. Waiting to meet you there, regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |