From: Laurent W. <lw...@hy...> - 2010-11-25 09:51:16

On Thu, 25 Nov 2010 10:22:42 +0100 Laurent Wandrebeck <lw...@hy...> wrote:

Hmmm, even funnier. When the problem with the failing disk happened, replication/rebalance stopped briefly during the card reset. It continued for several hours after that, then stopped. There is nothing particular in the mfschunkserver logs, nor in the mfsmaster ones. The global space used on that box is on par with the others; only the space used on the individual local disks of that box isn't. Hope I'm clear.

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98 - http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Michał B. <mic...@ge...> - 2010-11-25 09:50:41

Hi!

In version 1.6.18 mfsmetarestore got substantial improvements; the problem you describe would disappear. Right now we are testing this version in our environment.

Regards
Michał

From: flex [mailto:fro...@gm...]
Sent: Tuesday, November 23, 2010 5:32 AM
To: moo...@li...
Subject: [Moosefs-users] Can not mfsmetarestore twice in a short time on a metalog server?

Hi, everyone:

I want to change my mfs master, so I made a test on a metalogger server:

    1. mfsmetalogger -s
    2. mfsmetarestore -a

It did create a metadata.mfs. After this:

    3. mfsmetalogger    # start the metalogger again and let it run a few minutes
    4. mfsmetalogger -s
    5. mfsmetarestore -a

This gave me an error: 629860086: error: 32 (Data mismatch), and it will work again only after about an hour. Can anyone help me? Thanks.

--
System Administrator, Focus on System Management and Basic Development

From: Laurent W. <lw...@hy...> - 2010-11-25 09:46:33

On Thu, 25 Nov 2010 10:22:42 +0100 Laurent Wandrebeck <lw...@hy...> wrote:
> Hi,
>
> I got a chunkserver tower box, with 12 sata disks plugged on a 3ware
> 9650 (jbod).

Forgot to add: I run 1.6.17 x86_64 everywhere, rpm packages from the rpmforge repository, on CentOS 5.5 x86_64.

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98 - http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Michał B. <mic...@ge...> - 2010-11-25 09:35:25

Hi!

This is normal behaviour. The master server upon startup builds lots of complicated data structures, like hash arrays for directories, etc. Possibly this data could be built later "on demand", during normal work, but for now we have no plans to change it. If your file structure is not secret, you can send your gzipped metadata.mfs.back file to my email address so that we can check whether there is anything unusual in it.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

From: Cheng Yaodong [mailto:ch...@ih...]
Sent: Monday, November 22, 2010 8:30 AM
To: moosefs-users
Subject: [Moosefs-users] mfsmaster start very slow

Hi,

I am using MFS 1.6.17, but when I start mfsmaster it is very slow - about 10 minutes. About 5 million files, totalling 387GB, are currently stored in MFS; the size of metadata.mfs is 622MB. I use the default mfsmaster.cfg except for setting BACK_LOGS = 2. The metadata is stored on a dedicated ext3 RAID1 with two 15k RPM SAS disks, and its performance is good. The mfsmaster machine has 24GB of memory and two Xeon E5506 @ 2.13GHz CPUs. Can you help me? Thank you very much in advance.

    # time mfsmaster start
    working directory: /data/mfsmeta
    lockfile created and locked
    initializing mfsmaster modules ...
    loading sessions ... ok
    sessions file has been loaded
    exports file has been loaded
    loading metadata ...
    loading objects (files,directories,etc.) ... ok
    loading names ... ok
    loading deletion timestamps ... ok
    checking filesystem consistency ... ok
    loading chunks data ... ok
    connecting files and chunks ... ok
    all inodes: 6385024
    directory inodes: 410854
    file inodes: 5882087
    chunks: 5702997
    metadata file has been loaded
    stats file has been loaded
    master <-> metaloggers module: listen on *:9419
    master <-> chunkservers module: listen on *:9420
    main master server module: listen on *:9421
    mfsmaster daemon initialized properly

    real    10m35.844s
    user    0m0.000s
    sys     0m0.005s

    # ls -l /data/mfsmeta
    total 647904
    -rw-r----- 1 mfs mfs      1174 Nov 22 11:41 changelog.1.mfs
    -rw-r----- 1 mfs mfs      1840 Nov 22 11:25 changelog.2.mfs
    -rw-r----- 1 mfs mfs 662172068 Nov 22 11:42 metadata.mfs
    -rw-r----- 1 mfs mfs      1521 Nov 22 11:03 sessions.mfs
    -rw-r----- 1 mfs mfs    610016 Nov 22 11:41 stats.mfs

    # mfsdirinfo /mnt/mfs
    /mnt/mfs:
     inodes:        6382747
      directories:   410854
      files:        5879810
     chunks:        5701098
     length:   387902712498
     size:     745191196672
     realsize: 745191196672

2010-11-22
------------------------------------------------------------
Computing center, the Institute of High Energy Physics, China
Yaodong CHENG     Tel: (+86) 10 8823 6008
P.O. Box 918-7    Fax: (+86) 10 8823 6839
Beijing 100049    Email: Yao...@ih...
P.R. China        Yao...@ce...

From: Michał B. <mic...@ge...> - 2010-11-25 09:30:08

Hi!

Usually a client process will read from the chunkserver process running on the same machine. But sometimes, when that chunkserver is already under heavy load, the master may decide that this client should get the files from another chunkserver.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

From: Abuse [mailto:Abu...@go...]
Sent: Wednesday, November 10, 2010 10:00 PM
To: moo...@li...
Subject: [Moosefs-users] copy from localhost

I have a question. I have master server A and chunkservers A, B, C, D, E, F. If chunkserver C, as a client, mounts the mfs partition and copies a file to a local drive, will it access any of the chunks from its own chunkserver? I see the copy going at 11.6MB/s on a 100M switch; however, I assumed that any local chunks would copy faster. That is, will the chunkserver, when acting as an mfs client, copy from its own chunkserver?

regards
Gov.gd sysadmin

From: Laurent W. <lw...@hy...> - 2010-11-25 09:22:59

Hi,

I got a chunkserver tower box, with 12 sata disks plugged on a 3ware 9650 (jbod). A disk failed, so I halted the box, changed the disk, and started mfschunkserver without the new one. I then formatted it, added it back in mfshdd.cfg, and restarted mfschunkserver. Replication/rebalance went on normally.

At night, another disk failed on that box:

    Nov 25 01:43:24 msg kernel: 3w-9xxx: scsi0: AEN: ERROR (0x04:0x0009): Drive timeout detected:port=10.
    Nov 25 01:43:24 msg kernel: sd 0:0:9:0: WARNING: (0x06:0x002C): Command (0x35) timed out, resetting card.
    Nov 25 01:44:08 msg kernel: 3w-9xxx: scsi0: AEN: WARNING (0x04:0x0043): Backup DCB read error detected:port=10, error=0x204.
    Nov 25 01:44:09 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=0.
    Nov 25 01:44:09 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=4.
    Nov 25 01:44:09 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=8.

then "testing chunks" messages on several lines, then:

    Nov 25 01:50:33 msg kernel: sd 0:0:9:0: WARNING: (0x06:0x002C): Command (0x35) timed out, resetting card.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: ERROR (0x04:0x000A): Drive error detected:unit=9, port=10.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=0.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=1.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=2.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=3.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=4.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=6.
    Nov 25 01:51:03 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=7.
    Nov 25 01:51:04 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=8.
    Nov 25 01:51:04 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=9.
    Nov 25 01:51:04 msg kernel: 3w-9xxx: scsi0: AEN: INFO (0x04:0x005E): Cache synchronization completed:unit=10.
    Nov 25 01:51:14 msg mfschunkserver[3386]: replicator: connection lost

and replication/rebalance more or less stopped. The new disk is filled at 46%, while the others on that box are around 90%. I restarted mfschunkserver, but replication/rebalance isn't starting over. Any clue before I change the second failing disk (which seems to behave normally again)?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98 - http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

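For reference, the disk swap described above maps onto a short sequence of chunkserver-side steps. A minimal sketch, assuming the chunkserver lists its data directories one per line in mfshdd.cfg and that the failed disk was mounted at /mnt/disk10 (both the mount path and the config location are assumptions, not taken from the report):

    # Stop the chunkserver before pulling the failed disk.
    mfschunkserver stop

    # Edit mfshdd.cfg and comment out the failed disk's entry, e.g.:
    #   #/mnt/disk10
    # then bring the chunkserver back up on the remaining disks.
    mfschunkserver start

    # Once the replacement disk is formatted and mounted at the same path,
    # uncomment the entry in mfshdd.cfg and restart so the new disk is used
    # and rebalancing can begin.
    mfschunkserver restart
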
From: Laurent W. <lw...@hy...> - 2010-11-25 09:16:41

On Wed, 24 Nov 2010 14:42:01 -0700 Thomas S Hatch <tha...@gm...> wrote:
> I am running moosefs 1.6.18 and I have run into some issues, twice now we
> have had our mfsmaster lose connection to almost all of our chunkservers,
> when I attempted to restart the mfsmaster it failed on loading chunk data.
>
> it was strange, just out of the blue it went haywire, I had to fail it over
> to a metalogger (which worked quite well).
>
> What information do you need? I will package it all up and get it right to
> you.
>
> -Thomas S Hatch

Where did you get 1.6.18, which, AFAIK, is unfortunately not available to the public, and not even released?

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98 - http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Thomas S H. <tha...@gm...> - 2010-11-24 21:42:08

I am running moosefs 1.6.18 and I have run into some issues. Twice now we have had our mfsmaster lose its connection to almost all of our chunkservers, and when I attempted to restart the mfsmaster it failed on loading chunk data.

It was strange: just out of the blue it went haywire, and I had to fail it over to a metalogger (which worked quite well).

What information do you need? I will package it all up and get it right to you.

-Thomas S Hatch

From: Laurent W. <lw...@hy...> - 2010-11-24 14:15:42

Hi,

This line is IMHO really polluting logs for nothing really valuable. Find attached a simple patch commenting out that line.

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98 - http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

From: Thomas S H. <tha...@gm...> - 2010-11-23 19:09:56

I have a small request for the moosefs team: it seems that the moosefs daemon commands return 0 even if they fail to start, and this is wreaking havoc on my init scripts. Is there any way they could return a non-zero value when the commands fail?

Thanks
-Thomas S Hatch

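Until the daemons report failures through their exit status, an init script can verify startup itself. A minimal sketch, assuming the master listens on its default port 9421 and that "something is listening on that port" is an acceptable health test; the sleep, port and check are assumptions, not part of the original request:

    #!/bin/bash
    # Hypothetical wrapper: start mfsmaster and fail loudly if it did not come up,
    # since the daemon itself may exit 0 even on failure.
    mfsmaster start
    sleep 2
    # Consider startup successful only if something is listening on the master port.
    if netstat -ltn 2>/dev/null | grep -q ':9421 '; then
        echo "mfsmaster is up"
        exit 0
    else
        echo "mfsmaster failed to start" >&2
        exit 1
    fi
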
From: flex <fro...@gm...> - 2010-11-23 04:31:53

Hi, everyone:

I want to change my mfs master, so I made a test on a metalogger server:

    1. mfsmetalogger -s
    2. mfsmetarestore -a

It did create a metadata.mfs. After this:

    3. mfsmetalogger    # start the metalogger again and let it run a few minutes
    4. mfsmetalogger -s
    5. mfsmetarestore -a

This gave me an error: 629860086: error: 32 (Data mismatch), and it will work again only after about an hour. Can anyone help me? Thanks.

--
System Administrator, Focus on System Management and Basic Development

From: Steve <st...@bo...> - 2010-11-22 09:12:45

Hi,

I use moosefs for movies/music in addition to personal data. My hardware is mainly second-hand Dells of 533 to 866 MHz with minimal ram for the majority of chunkservers; one is a newer Atom-based box. My master (256MB ram) and samba server also reside in Dells, on a 100 Mbit LAN. This 6.2TB feeds my Netgear media player and 3 PCs via samba shares. While I've never done any analysis or deliberately tried to hammer it, in real life we have no problems. Typical movies are about 1.5GB in size - are you trying to move something less compressed? I use ext4 and no raid. My goal for movies isn't as high as 3.

Steve

-------Original Message-------
From: 陈轶博
Date: 22/11/2010 08:05:22
To: moosefs-users
Subject: [Moosefs-users] A problem of reading the same file at the same moment

Hi, I am Jacky. I'm using MFS to build a movie center for my own video streaming server, and when I do stress testing there is a reading problem.

First, the requirement in my application: it happens all the time that many processes read a file at the same moment, so my movie center must support as many processes as possible reading a file at the same moment.

Second, please allow me to introduce my test environment.

Hardware:
    master: IBM 3550
    chunkservers and clients are the same servers:
        CPU: Intel Xeon 5520 * 2, 2.6GHz (quad-core)
        mem: 16G
        RAID card: Adaptec 52445
        disk: 450G * 24, SAS
        nic: 3 * PCI-E GE
    switch: Gigabit Switch H3 S9306

Software:
    MFS version: 1.6.17 (http://www.moosefs.org/download.html)
    OS: CentOS 5.4, 64bit, on one disk
    FS: XFS for CentOS 5.4
    nic: bonding 4GE, mode=6
    RAID: the other 23 disks in RAID6
    mfs goal = 1

Network structure: (diagram not preserved in the archive)

Third, the results of my testing.

Sequential read testing:

    # cat /dev/sda1 > /dev/null              189MB/s   (sda is the single disk for the OS)
    # cat /dev/sdb1 > /dev/null              383MB/s   (sdb1 is the RAID6)
    # dd if=/dev/sdb1 of=/dev/null bs=4M     413MB/s

Random read testing on one client (carbon is my testing program written in C, multi-threaded; each thread reads one random file into a buffer, then drops the buffer):

    # ./carbon fp=/mnt/fent fn=1000 TN=8  BS=8M      250MB/s
    # ./carbon fp=/mnt/fent fn=1000 TN=16 BS=8M      260MB/s
    # ./carbon fp=/mnt/fent fn=1000 TN=32 BS=8M      240MB/s
    # ./carbon fp=/mnt/fent fn=1000 TN=64 BS=8M      260MB/s

    fp=path of files to read, fn=number of files, TN=number of threads, BS=blocksize(KB)

Fourth, the problem: there are 3 clients. When I ran {# ./carbon fp=/mnt/fent fn=1000 TN=8 BS=8M} on each client, I found that the third client (it may be any one of the clients) may keep waiting to read; only when clients 1 and 2 have finished reading some files does the third begin to read.

Then I confirmed this problem in another way. I rebuilt the environment with PCs and otherwise the same configuration. On each client I ran, for 1 to 8: {# dd if=/mnt/fent?(1 to 8).ts of=/dev/null bs=4M}, and I found:

    run on the first client:               the read speed is 70MB/s
    run on the first and second clients:   the read speed is 30~40MB/s
    run on 3 clients:                      the read speed is < 10MB/s

In my opinion, this result means: the more processes (either on one client or on different clients) read the same file at the same moment, the worse the reading performance. I could also set the goal to a bigger value to improve the performance, but in my application the size of each movie file is about 3GB, and a bigger goal means more storage. The biggest goal value I can afford is 3; I'm afraid this can't solve the reading problem for me.

Finally, is there anything I can do, except setting the goal value?

2010-11-15
陈轶博

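For anyone wanting to reproduce this kind of concurrent-read measurement without the custom carbon program, a rough equivalent can be scripted with dd alone. A minimal sketch, assuming files named 1.ts to 8.ts exist under a MooseFS mount at /mnt/mfs (the mount point and file names are assumptions for illustration):

    #!/bin/bash
    # Read 8 files in parallel from the MooseFS mount and report per-file throughput.
    MNT=/mnt/mfs    # assumed mount point
    for n in 1 2 3 4 5 6 7 8; do
        # Each dd runs in the background; its summary line (including MB/s) goes to a log.
        dd if="$MNT/$n.ts" of=/dev/null bs=4M 2> "/tmp/read-$n.log" &
    done
    wait
    # GNU dd prints "... bytes ... copied, N seconds, X MB/s" on stderr.
    grep -h 'copied' /tmp/read-*.log

Running the same script simultaneously on one, two and three clients gives the kind of per-client comparison described above.
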
From: Cheng Y. <ch...@ih...> - 2010-11-22 07:47:40

Hi,

I am using MFS 1.6.17, but when I start mfsmaster it is very slow - about 10 minutes. About 5 million files, totalling 387GB, are currently stored in MFS; the size of metadata.mfs is 622MB. I use the default mfsmaster.cfg except for setting BACK_LOGS = 2. The metadata is stored on a dedicated ext3 RAID1 with two 15k RPM SAS disks, and its performance is good. The mfsmaster machine has 24GB of memory and two Xeon E5506 @ 2.13GHz CPUs. Can you help me? Thank you very much in advance.

    # time mfsmaster start
    working directory: /data/mfsmeta
    lockfile created and locked
    initializing mfsmaster modules ...
    loading sessions ... ok
    sessions file has been loaded
    exports file has been loaded
    loading metadata ...
    loading objects (files,directories,etc.) ... ok
    loading names ... ok
    loading deletion timestamps ... ok
    checking filesystem consistency ... ok
    loading chunks data ... ok
    connecting files and chunks ... ok
    all inodes: 6385024
    directory inodes: 410854
    file inodes: 5882087
    chunks: 5702997
    metadata file has been loaded
    stats file has been loaded
    master <-> metaloggers module: listen on *:9419
    master <-> chunkservers module: listen on *:9420
    main master server module: listen on *:9421
    mfsmaster daemon initialized properly

    real    10m35.844s
    user    0m0.000s
    sys     0m0.005s

    # ls -l /data/mfsmeta
    total 647904
    -rw-r----- 1 mfs mfs      1174 Nov 22 11:41 changelog.1.mfs
    -rw-r----- 1 mfs mfs      1840 Nov 22 11:25 changelog.2.mfs
    -rw-r----- 1 mfs mfs 662172068 Nov 22 11:42 metadata.mfs
    -rw-r----- 1 mfs mfs      1521 Nov 22 11:03 sessions.mfs
    -rw-r----- 1 mfs mfs    610016 Nov 22 11:41 stats.mfs

    # mfsdirinfo /mnt/mfs
    /mnt/mfs:
     inodes:        6382747
      directories:   410854
      files:        5879810
     chunks:        5701098
     length:   387902712498
     size:     745191196672
     realsize: 745191196672

2010-11-22
------------------------------------------------------------
Computing center, the Institute of High Energy Physics, China
Yaodong CHENG     Tel: (+86) 10 8823 6008
P.O. Box 918-7    Fax: (+86) 10 8823 6839
Beijing 100049    Email: Yao...@ih...
P.R. China        Yao...@ce...

From: Anh K. H. <ky...@vi...> - 2010-11-22 07:35:03

On Mon, 22 Nov 2010 08:24:36 +0100 Michał Borychowski <mic...@ge...> wrote:
> First, you should start to recover the metadata from the master
> server. So backup "metadata.mfs.back" on the master server and
> later run "mfsmetarestore -a" on the master server.

I tried all these methods but none succeeded :(

> On the other hand please send to me the metadata_ml.mfs.back file -
> we would like to have a look at it.

The meta file is quite big (190MB). Is it a problem if this file is made public?

Regards,

> Thank you
> Michał Borychowski
>
> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Sunday, November 21, 2010 12:14 PM
> To: moosefs-users
> Subject: [Moosefs-users] cannot read the file ./metadata_ml.mfs.back
>
> Hi,
>
> My master crashed. I am trying to restore the file system from the
> meta logger, but the file metadata.mfs.back is unreadable:
>
> /------------------------------------------------------------
> | $ mfsmetarestore -m ./metadata_ml.mfs.back -o u.mfs changelog_ml.*.mfs
> | loading objects (files,directories,etc.) ... ok
> | loading names ... loading edge: read error: Success
> | error
> | can't read metadata from file: ./metadata_ml.mfs.back
> \------------------------------------------------------------
>
> What's wrong? There are many changelog* files, but there's only one
> metadata file and it's unreadable now. How can I recover from this
> error?
>
> Regards,

--
Anh Ky Huynh at UTC+7

From: Darren X. <dj...@gm...> - 2010-11-22 07:31:56

Hi, all,

Has anyone set up a virtualization platform using mfs as the backend datastore? I set up an mfs cluster (one master server, two chunk servers) and put a kvm virtual machine's image on the cluster. I started the virtual machine, but found that it runs slowly. The virtual machine's disk image is about 5G and the network is 1Gbps. How and where can I tune mfs for this situation? I thought a larger chunk size might help, but I read in the FAQ on the mfs site that the chunk size cannot be changed. Any ideas?

Thanks
--Darren

From: Michał B. <mic...@ge...> - 2010-11-22 07:24:58

First, you should start by recovering the metadata from the master server. So back up "metadata.mfs.back" on the master server and then run "mfsmetarestore -a" on the master server.

On the other hand, please send me the metadata_ml.mfs.back file - we would like to have a look at it.

Thank you
Michał Borychowski

-----Original Message-----
From: Anh K. Huynh [mailto:ky...@vi...]
Sent: Sunday, November 21, 2010 12:14 PM
To: moosefs-users
Subject: [Moosefs-users] cannot read the file ./metadata_ml.mfs.back

Hi,

My master crashed. I am trying to restore the file system from the meta logger, but the file metadata.mfs.back is unreadable:

    /------------------------------------------------------------
    | $ mfsmetarestore -m ./metadata_ml.mfs.back -o u.mfs changelog_ml.*.mfs
    | loading objects (files,directories,etc.) ... ok
    | loading names ... loading edge: read error: Success
    | error
    | can't read metadata from file: ./metadata_ml.mfs.back
    \------------------------------------------------------------

What's wrong? There are many changelog* files, but there's only one metadata file and it's unreadable now. How can I recover from this error?

Regards,
--
Anh Ky Huynh at UTC+7

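The two recovery paths discussed in this thread, side by side. A minimal sketch, assuming the master's data directory is /var/lib/mfs (directory locations vary by build and are an assumption here), run from that directory:

    # Recovery on the master server itself: replay the changelogs against the
    # last metadata.mfs.back automatically.
    cd /var/lib/mfs                               # assumed master data directory
    cp metadata.mfs.back metadata.mfs.back.bak    # keep a copy before touching anything
    mfsmetarestore -a                             # writes a fresh metadata.mfs

    # Recovery from a metalogger's copies, when the master's files are gone:
    # feed the metalogger's backup plus its changelogs explicitly.
    mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml.*.mfs

The resulting metadata.mfs is what a (new) mfsmaster loads on its next start.
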
From: Anh K. H. <ky...@vi...> - 2010-11-21 11:13:26

Hi,

My master crashed. I am trying to restore the file system from the meta logger, but the file metadata.mfs.back is unreadable:

    /------------------------------------------------------------
    | $ mfsmetarestore -m ./metadata_ml.mfs.back -o u.mfs changelog_ml.*.mfs
    | loading objects (files,directories,etc.) ... ok
    | loading names ... loading edge: read error: Success
    | error
    | can't read metadata from file: ./metadata_ml.mfs.back
    \------------------------------------------------------------

What's wrong? There are many changelog* files, but there's only one metadata file and it's unreadable now. How can I recover from this error?

Regards,
--
Anh Ky Huynh at UTC+7

From: Michał B. <mic...@ge...> - 2010-11-15 10:16:44

Hi!

Before you start the recovery process you have to make sure that all chunkservers and hdds are connected and that none of the hdds has the status "damaged". So first check the CGI monitor thoroughly. If one of the disks is damaged, you can probably find the missing chunks manually and copy them onto any other working hdd. Real filenames can also be found in the CGI monitor.

"Invalid copy" means a chunk version incompatibility. You should not be bothered by the other messages until the "try counter" exceeds 1. And 0xD5AF4BDA is an IP number written in hex: 0xD5 . 0xAF . 0x4B . 0xDA = 213.175.75.218.

To recover you should run "mfsfilerepair path_to_the_file" on any client machine. Chunks with a wrong version will be fixed, and the content of the file will be filled with the best (latest) version available. Totally lost chunks will be replaced by new empty chunks (damaged blocks will be read as 64MB of zeros).

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

-----Original Message-----
From: Reinis Rozitis [mailto:r...@ro...]
Sent: Friday, November 12, 2010 3:42 PM
To: moo...@li...
Subject: [Moosefs-users] Recovery and logs

Hello,

I have noticed log lines like these:

    Nov 12 16:31:05 mfmaster215 mfsmaster[3925]: chunk 0000000002428757 has only invalid copies (1) - please repair it manually
    Nov 12 16:31:05 mfmaster215 mfsmaster[3925]: chunk 0000000002428757_00000002 - invalid copy on (192.168.0.221 - ver:00000001)
    Nov 12 16:31:05 mfmaster215 mfsmaster[3925]: chunk 0000000002428757_00000002 - invalid copy on (192.168.0.221 - ver:00000001)

What does 'please repair it manually' mean in this case? I couldn't find any direct tools to interact with the chunks themselves. The mfscgiserv shows that there are some 8 files with no copies - is there a way to find out which files these are?

Second, what do these log lines mean?

    Nov 12 16:32:46 mfmaster215 mfsmount[15074]: file: 41248800, index: 0, chunk: 37952708, version: 1 - writeworker: connection with (D5AF4BDB:9422) was timed out (unfinished writes: 1; try counter: 1)
    Nov 12 16:32:48 mfmaster215 mfsmount[15074]: file: 41248800, index: 0, chunk: 37952708, version: 1 - writeworker: connection with (D5AF4BDA:9422) was timed out (unfinished writes: 2; try counter: 1)
    Nov 12 16:32:54 mfmaster215 mfsmount[15074]: file: 41248959, index: 0, chunk: 37952867, version: 1 - writeworker: connection with (D5AF4BDB:9422) was timed out (unfinished writes: 1; try counter: 1)

I assume D5AF4BDB and D5AF4BDA are names for chunkservers - is there a way to find out which physical nodes/ips they are?

rr

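The hex-to-IP conversion described above is easy to script when many such log lines need decoding. A small sketch; the example values are the ones from the log lines quoted above, and the repair command at the end is the one named in the reply:

    #!/bin/bash
    # Decode a hex-encoded IPv4 address such as D5AF4BDA into dotted-quad form.
    hex2ip() {
        h=$1
        printf '%d.%d.%d.%d\n' "0x${h:0:2}" "0x${h:2:2}" "0x${h:4:2}" "0x${h:6:2}"
    }

    hex2ip D5AF4BDA    # prints 213.175.75.218
    hex2ip D5AF4BDB    # prints 213.175.75.219

    # Once the affected file is known, repair it from any client machine:
    #   mfsfilerepair /mnt/mfs/path/to/file
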
From: Michał B. <mic...@ge...> - 2010-11-15 08:19:11

Hi!

You could browse through the source code and try to find the messages (we don't know them by heart ;)). But on the other hand the messages may change at any time without notice, so whenever we release a new version you should check the messages again.

Kind regards
Michał

-----Original Message-----
From: jose maria [mailto:let...@us...]
Sent: Monday, November 08, 2010 11:25 PM
To: moosefs-users
Subject: [Moosefs-users] syslog messages

* I've implemented an alerting system, via google calendar and a movistar gateway, to receive the monitored mfs processes on my mobile phone, and to apply automatic actions: reduce/increase goals in the filesystem or trash, reduce trashtime, delete from trash, and so on. Now I need to know exactly which syslog messages mfs emits when errors occur on a hard disk or when it gets "damaged" status, so that I can receive an alert message.

* Which machines would be suitable for monitoring, the mfsmaster or the chunkservers?

* Sorry for my poor english.

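A rough sketch of how such an alert hook might watch syslog on the chunkservers and on the master. The matched strings below are guesses at plausible patterns ("damaged", "error", the daemon names), not an authoritative list of MooseFS messages, which, as noted above, can change between releases; the alert command is a placeholder:

    #!/bin/bash
    # Hypothetical watcher: follow syslog and call an alert command whenever an
    # mfs daemon logs something that looks like a disk problem.
    ALERT_CMD=${ALERT_CMD:-"logger -t mfs-alert"}   # replace with a mail/SMS gateway call

    tail -F /var/log/messages | \
    grep --line-buffered -E 'mfs(chunkserver|master)\[[0-9]+\].*(damaged|error)' | \
    while read -r line; do
        $ALERT_CMD "$line"
    done

Since the chunkservers are the processes that talk to the disks, they are the natural place to run such a watcher, with a second copy on the master to catch chunk-level messages.
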
From: Michał B. <mic...@ge...> - 2010-11-15 08:04:07

Hi!

We already know about this error - it will be fixed in 1.6.18 (published this week). If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

-----Original Message-----
From: Josef [mailto:pe...@p-...]
Sent: Thursday, November 11, 2010 8:09 PM
To: moo...@li...
Subject: [Moosefs-users] mfscgiserv not working

Hello,

I have managed to capture the log output of mfscgiserv. This is what happens when I try to access the server:

    telex:/opt/mfs# /opt/mfs/sbin/mfscgiserv -f -v
    starting simple cgi server (host: any , port: 9425 , rootpath: /opt/share/mfscgi)
    Asynchronous HTTP server running on port 9425
    Traceback (most recent call last):
      File "/opt/mfs/sbin/mfscgiserv", line 419, in <module>
        loop(server,HTTP)
      File "/opt/mfs/sbin/mfscgiserv", line 152, in loop
        client_handlers[r_socket].handle_read()
      File "/opt/mfs/sbin/mfscgiserv", line 65, in handle_read
        self.process_incoming()
      File "/opt/mfs/sbin/mfscgiserv", line 74, in process_incoming
        self.response = self.make_response()
      File "/opt/mfs/sbin/mfscgiserv", line 257, in make_response
        return self.err_resp(500,'Internal Server Error')
      File "/opt/mfs/sbin/mfscgiserv", line 346, in err_resp
        resp_line = "%s %s %s\r\n" %(self.protocol,code,msg)
    AttributeError: HTTP instance has no attribute 'protocol'

From: Michał B. <mic...@ge...> - 2010-11-15 08:02:02

Hi!

Yes, you have run out of memory on the master server. You are probably running a 32-bit system? Please switch to 64-bit.

Kind regards
Michał

-----Original Message-----
From: Laurent Wandrebeck [mailto:lw...@hy...]
Sent: Wednesday, November 10, 2010 11:20 AM
To: moo...@li...
Subject: Re: [Moosefs-users] chunkserver can't connect to masterserver

On Wed, 10 Nov 2010 18:07:54 +0800 "lwxian_aha" <lwx...@16...> wrote:
> hello:
>
> today I have new trouble with my new MFS system, a chunkserver can't
> connect to the masterserver;
>
> my MFS system consists of one masterserver and three chunkservers, each
> chunkserver with 7.2T of diskspace; about 1.4T of data and about 14 million
> files, every file with 2 copies; MFS version is 1.6.17, OS version is
> CENTOS 5.0, FS is ext3;
>
> Following is the error message on the master server:
>
> [root@localhost mfs]# tail /var/log/messages
> Nov 10 17:26:49 localhost mfsmaster[23530]: CS(192.168.10.21) packet: out of memory
> Nov 10 17:26:49 localhost mfsmaster[23530]: chunkserver disconnected - ip: 192.168.10.21, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
>
> what's happening? need your help!

Looks like the answer is in the first line… Is your mfsmaster ram+swap full? See dmesg to find out whether the OOM killer triggered. Do you run x86_64? Oh, and update your centos to 5.5!

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98 - http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

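A few quick checks on the master that cover the points raised above (architecture, memory pressure, OOM killer); nothing here is MooseFS-specific:

    uname -m                                 # x86_64 means a 64-bit kernel; i686/i386 means 32-bit
    free -m                                  # how much RAM and swap is actually left
    dmesg | grep -i 'out of memory\|oom'     # did the kernel OOM killer fire?
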
From: Michał B. <mic...@ge...> - 2010-11-15 07:59:20

Hi!

There is still a limit of 2TiB. If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

From: michael.tangzj [mailto:mic...@gm...]
Sent: Tuesday, November 09, 2010 10:22 AM
To: moo...@li...
Subject: [Moosefs-users] Does MooseFS (Version 1.6.17) remove the file size limit?

Hi,

As the FAQ entry quoted below shows, I want to confirm whether the current MooseFS version (1.6.17) has removed the file size limit (as promised "in the near future"). If not, how can I modify the source code in order to remove this limit by hand? Thanks for the reply.

Best wishes.

--------------------------------------------
Does MooseFS have a file size limit?
Currently MooseFS imposes a maximum file size limit of 2 TiB (2,199,023,255,552 bytes). However we are considering removing this limitation in the near future, at which point the maximum file size will reach the limits of the operating system, which is currently 16EiB (18,446,744,073,709,551,616 bytes).

--
regards,
michael.tang

From: Michał B. <mic...@ge...> - 2010-11-15 07:58:10

We talk about the file size limit here: http://www.moosefs.org/moosefs-faq.html#size_limit

The limit for the name of a file is the standard one, as in other *nix systems: 255 characters. If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

From: tjing.tech [mailto:tji...@gm...]
Sent: Thursday, November 04, 2010 9:27 AM
To: moosefs-users
Subject: [Moosefs-users] About mooseFS size limit

Hi!

What is the mooseFS file system's size limit? What is the mooseFS name space size limit?

Thanks!

tjing.tech
2010-11-04

From: Michał B. <mic...@ge...> - 2010-11-15 07:32:32

Hi!

Please have a look at these FAQ entries:

http://www.moosefs.org/moosefs-faq.html#source_code (here are some examples of how to calculate the unused space)
http://www.moosefs.org/moosefs-faq.html#modify_chunk (nope, it's hardcoded)

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00   Fax: +4822 874-41-01

-----Original Message-----
From: Ioannis Aslanidis [mailto:ias...@fl...]
Sent: Wednesday, November 10, 2010 10:44 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Chunking in MooseFS

Hello,

From what you say, by default we have blocks of 64KB and chunks of 64MB. Correct? This means that small files use 64KB blocks while big files use 64MB chunks. Is this the case? So in the end, the difference is that many small files can fit in a chunk, while big files take up the whole chunk. Are there any performance indicators that show how much space gets lost?

Regards.

On Wed, Nov 10, 2010 at 10:10 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> On Tue, 9 Nov 2010 17:13:34 +0100 Ioannis Aslanidis <ias...@fl...> wrote:
>> Hello,
>>
>> I have some pretty important questions regarding chunking.
>>
>> The first question is with respect to the default chunk size, and
>> whether it can easily be modified, perhaps in a configuration file.
>
> Chunk size is hardcoded, for performance reasons. Having it modified is
> possible, though you'll have to crawl through the code and modify the
> config file parser. You may have to change the chunk header size too, and
> some other things. So it's not really trivial.
>
>> The second question is how exactly does the chunking work with small files.
>>
>> In our case we have four types of files:
>> - several hundred thousand small files of less than 10KB
>
> A file is stored in a 64KB block. So here you'll lose quite a lot of space.
>
>> - several million medium files of around 10MB
>
> I'm wondering if the file would use 64KB blocks or a complete 64MB
> chunk in that case. Probably 64KB blocks, but I'm really unsure.
> Michal?
>
>> - several tens of thousands of large files of around 200MB
>> - several thousand extra large files, larger than 500MB
>
> Here the answer is quite clear.
>
>> What would be a good chunk size in this case to prevent space loss?
>
> Maybe 16 MB. But I'm afraid of the performance hit. Keep in mind
> MooseFS wasn't designed to store small files. Is lost space so
> important, given your volume, compared to data security?
>
> HTH,
> --
> Laurent Wandrebeck
> HYGEOS, Earth Observation Department / Observation de la Terre
> Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
> tel: +33 3 20 08 24 98 - http://www.hygeos.com
> GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A.
E-Mail: iaslanidis at flumotion dot com
Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75

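A back-of-the-envelope way to estimate the per-file overhead discussed in this thread, assuming that each small file's on-disk footprint is its length rounded up to a 64KB block (this rounding model is an assumption based on the discussion above, not an official formula, and it ignores chunk headers and replication):

    #!/bin/bash
    # Estimate space lost to 64KB block rounding for a set of small files.
    BLOCK=65536
    count=300000        # e.g. several hundred thousand files...
    avg_size=10240      # ...of roughly 10KB each (both figures from the thread)

    per_file_waste=$(( BLOCK - avg_size ))                      # bytes lost per file
    total_waste_gib=$(( per_file_waste * count / 1024 / 1024 / 1024 ))
    echo "~${total_waste_gib} GiB lost to block rounding for ${count} files"
    # With these numbers: (65536 - 10240) * 300000 is roughly 15.4 GiB,
    # before multiplying by the goal (number of copies).
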
From: jose m. <let...@us...> - 2010-11-13 21:05:46

On Fri, 12 Nov 2010 at 16:42 +0200, Reinis Rozitis wrote:

> Second - what do these log lines mean?
>
> Nov 12 16:32:46 mfmaster215 mfsmount[15074]: file: 41248800, index: 0, chunk: 37952708, version: 1 - writeworker: connection with (D5AF4BDB:9422) was timed out (unfinished writes: 1; try counter: 1)
> Nov 12 16:32:48 mfmaster215 mfsmount[15074]: file: 41248800, index: 0, chunk: 37952708, version: 1 - writeworker: connection with (D5AF4BDA:9422) was timed out (unfinished writes: 2; try counter: 1)
> Nov 12 16:32:54 mfmaster215 mfsmount[15074]: file: 41248959, index: 0, chunk: 37952867, version: 1 - writeworker: connection with (D5AF4BDB:9422) was timed out (unfinished writes: 1; try counter: 1)
>
> I assume D5AF4BDB and D5AF4BDA are names for chunkservers - is there a way
> to find out which are the physical nodes/ips?
>
> rr

* http://www.moosefs.org/user-contributed-files.html
* script mfs_get_path.sh