From: Thomas S H. <tha...@gm...> - 2010-11-03 22:41:17

Ok, I found the metadata blog post and that answered my questions: http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html Great article, cleared up a lot!

But I have another question. I am running mfsmetarestore on one of my loggers, and I keep getting errors; it is either:

    174660: error: 3 (No such file or directory)

or

    873: error: 32 (Data mismatch)

and I remember one being an error 7. Can anyone point me in the right direction to figure out how to solve these errors? My guess is that since I have been failing over masters using ucarp I got some inconsistent data, but I am not sure. Thanks!

On Tue, Nov 2, 2010 at 5:50 PM, Thomas S Hatch <tha...@gm...> wrote:
> I am running an automatic metalogger failover using ucarp and running into
> problems. After I failover to a new mfsmaster, the new mfsmaster shows that
> many files are missing. I have a goal set to 2; many files are down to goal
> 1, as one might expect, but the files that are missing never seem to come
> back, even with all chunkservers connected.
>
> So my question is, what EXACTLY does the metalogger pull from the master,
> and what files need to be copied over to turn a metalogger into an
> mfsmaster?
>
> Right now I am running mfsmetarestore -a and then copying
> /var/lib/mfs/metadata_ml.mfs.back to /var/lib/mfs/metadata.mfs and
> then /var/lib/mfs/sessions_ml.mfs to /var/lib/mfs/sessions.mfs
>
> The more I look at it the more I think I am missing something or doing it
> in the wrong order.
>
> And what exactly is in the changelog files?
>
> -Tom Hatch
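A minimal sketch of the promotion sequence Tom describes, assuming the default /var/lib/mfs data directory and the stock file names from his message; exact paths vary by build, so treat this as illustrative rather than authoritative:

    # Hypothetical metalogger-to-master promotion (paths assumed; verify
    # against your install). Run on the metalogger being promoted.

    # 1. Replay the downloaded changelogs against the last metadata snapshot.
    mfsmetarestore -a -d /var/lib/mfs

    # 2. Put the restored files where mfsmaster expects to find them.
    cp /var/lib/mfs/metadata_ml.mfs.back /var/lib/mfs/metadata.mfs
    cp /var/lib/mfs/sessions_ml.mfs      /var/lib/mfs/sessions.mfs

    # 3. Start the master (after moving the floating IP, e.g. via ucarp).
    mfsmaster start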
From: Thomas S H. <tha...@gm...> - 2010-11-02 23:50:38

I am running an automatic metalogger failover using ucarp and running into problems. After I failover to a new mfsmaster, the new mfsmaster shows that many files are missing. I have a goal set to 2; many files are down to goal 1, as one might expect, but the files that are missing never seem to come back, even with all chunkservers connected.

So my question is, what EXACTLY does the metalogger pull from the master, and what files need to be copied over to turn a metalogger into an mfsmaster?

Right now I am running mfsmetarestore -a and then copying /var/lib/mfs/metadata_ml.mfs.back to /var/lib/mfs/metadata.mfs and then /var/lib/mfs/sessions_ml.mfs to /var/lib/mfs/sessions.mfs.

The more I look at it the more I think I am missing something or doing it in the wrong order.

And what exactly is in the changelog files?

-Tom Hatch
From: Ken Z. <ken...@et...> - 2010-11-02 09:21:01

I've got a problem when I start a KVM virtual machine whose image is stored in an MFS directory. I type:

    virsh start xx.xx.xx.xx (IP)

and I get:

    error: Failed to start domain xx.xx.xx.xx
    error: internal error process exited while connecting to monitor: char device redirected to /dev/pts/5
    qemu: could not open disk image /share/xx.xx.xx.xx dsk: Permission denied

What should I do to get the virtual machine started?

Ken from Guangzhou, China
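One common cause of this symptom (an assumption here, not a confirmed diagnosis from the thread): a FUSE mount is only accessible to the user who mounted it unless allow_other is set, so qemu running under its own uid cannot open the image. A sketch of the workaround:

    # Assumption: the mount was made by root and qemu runs as another user.
    # FUSE denies access to other users by default; allow_other lifts that
    # (user_allow_other must also be enabled in /etc/fuse.conf).
    mfsmount /share -H mfsmaster -o allow_other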
From: Josef <pe...@p-...> - 2010-10-31 14:22:13

Hello, I'm playing with MooseFS, but I'm unable to get any running info about the system. The CGI monitor returns just a blank page; I'm running it on the master, using /opt/mfs/sbin/mfscgiserv (mfs is installed into /opt/mfs). Do I have to set any extra parameters or something? My mfs version is 1.6.17. Thanks, Josef
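No resolution appears in the thread; for anyone chasing the same symptom, a quick troubleshooting sketch (port 9425 is the mfscgiserv default, but the script location below is an assumption; check where your build installed mfs.cgi):

    # Start the CGI server, then fetch the page directly to rule out the browser.
    /opt/mfs/sbin/mfscgiserv
    curl -v http://localhost:9425/mfs.cgi

    # A 200 response with an empty body usually means the CGI script itself
    # failed silently; running it by hand should surface the Python error.
    python /opt/mfs/share/mfscgi/mfs.cgi | head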
From: Stas O. <sta...@gm...> - 2010-10-31 14:07:58

Also, if it happens on the MFS mount side, where is the information kept during these retries? In RAM? On disk?

Regards.

2010/10/31 Stas Oskin <sta...@gm...>
> Hi.
>
> Thanks for the explanation.
>
> Does this error appear on replications between the MFS servers themselves, or on
> an application writing to the MFS mount (and as a result, writing to a single MFS
> server)?
>
> Also, if this happens on writing to the MFS mount, is this error reported to
> the application as an error in writing the file?
>
> Thanks again!
>
> 2010/10/29 Michał Borychowski <mic...@ge...>
>
>> There is a small timeout set for the write operation (several seconds).
>> It may happen that a single write operation takes several or more seconds.
>> If these messages are sent by different servers, there is nothing to worry
>> about.
>>
>> But if the message is sent mainly by one server (IP in hex C0A8020F =
>> 192.168.2.15) you should investigate it more. In the CGI monitor go to the Disks
>> tab and click “hour” in the “I/O stats last min (switch to hour,day)” row and
>> sort by “write” in the “max time (switch to avg)” column. Now look whether there
>> are disks which obviously stand out from the others. You can also look at the
>> “fsync” column and sort the results. Maximum times should not exceed 2 seconds
>> (2 million microseconds). You should look for individual disks which may be a
>> bottleneck of the system.
>>
>> "try counter: 1" alone is not a problem – the number of trials is set as an
>> option to mfsmount (by default 30). Until mfsmount reaches this limit, write
>> operations are repeated and the application gets the OK status.
>>
>> Regards
>> Michal
>>
>> *From:* Stas Oskin [mailto:sta...@gm...]
>> *Sent:* Wednesday, October 20, 2010 1:04 PM
>> *To:* moosefs-users
>> *Subject:* [Moosefs-users] writeWorker time out
>>
>> Hi.
>>
>> We noticed the following message in logs:
>> file: 28, index: 7, chunk: 992, version: 1 - writeworker: connection with
>> (C0A8020F:9422) was timed out (unfinished writes: 5; try counter: 1)
>>
>> MFS seems to be working and functioning normally.
>> It seems to be related to the write process timing out, but the connection is
>> normal.
>>
>> Can it be caused by slow disks? Also, what does try counter 1 mean, and
>> where can it be changed?
>> Finally, what will the operating system return to the application - that the
>> write operation has failed?
>>
>> Thanks in advance!
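Michał's hex-to-dotted-quad conversion is easy to reproduce; a worked example for the address in the log line above:

    # C0A8020F splits into four bytes: C0 A8 02 0F
    printf '%d.%d.%d.%d\n' 0xC0 0xA8 0x02 0x0F
    # prints 192.168.2.15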
From: Kai K. <kk...@we...> - 2010-10-29 13:42:58

Hi,

we've been able to reduce getattr and read operations from a constant 1.5 million per minute to 100000 per minute by using 'mfsmount -o mfsdirentrycacheto=30 -s -f /mnt/mfs'. lookup operations were in the 20000/minute range and write operations vary widely between zero and 1500 per minute. Everything else is barely noticeable.

But in the end it didn't help to reduce CPU usage; dtrace helped us confirm that the high system call rate of 250000 per second is attributable to a high select() count. Context switches were at a rate of 25000 per second. Every box is using version 1.6.17.

Unfortunately we're unable to simulate the situation exactly offline, but running "find /mnt/mfs >/dev/null" on 20 clients easily gets CPU usage up to 90% at 10MiB/s network traffic, whereas running "rsync -a /usr/ports /mnt/mfs" on 20 clients runs pretty slowly at 5 to 15% CPU usage.

Our mfs.cgi website is not generally accessible from the internet, but I guess we're happy to enable access if you are so kind as to tell us your static client IP address :-)

Thanks,
Tino

On Friday 29 October 2010 14:16:21 Michał Borychowski wrote:
> Hi!
>
> Could you send some screens / charts of the master server usage? We would
> be mainly interested in what operations take place and how many of them.
>
> In our environment the main cause of CPU increase is "getattr", "lookup"
> and "readdir" operations. The options used in mfsmount are correct and
> should decrease the number of these operations. But in your case probably
> something else slows down the server.
>
> We have two master processes on a FreeBSD 7.1 machine:
>
> 791 nobody 1 80 -19 10665M 10047M CPU0 0 659.8H 16.70% mfsmaster
> 786 nobody 1 69 -19 238M 225M select 1 127.0H 11.28% mfsmaster
>
> The first has 23 000 000 files and the second 5 000.
>
> systat gives:
>
> em0 in 808.354 KB/s 1010.191 KB/s 3.687 TB
>     out 1.328 MB/s 9.395 MB/s 11.022 TB
>
> so the transfer is similar.
>
> You can have a look at our charts and compare: http://80.48.16.122/
>
> If you need any further assistance please let us know.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
> -----Original Message-----
> From: Kai Kockro [mailto:kk...@we...]
> Sent: Thursday, October 28, 2010 10:32 AM
> To: moo...@li...
> Subject: [Moosefs-users] Performance with 40 Clients
>
> Hi,
>
> we have one mfsmaster running on FreeBSD 8.1-STABLE. The filesystem is
> completely ZFS. The server has 16G RAM. The mfsmaster process has a
> resident size of 173M. CPU is a Xeon L3426 @ 1.87GHz.
>
> Our 40 clients are running on FreeBSD 7.2 and do massive file uploading
> (file sizes from 10kb - 10mb).
>
> The args for mfsmount are "mfsmount -o mfscachefiles -o mfsentrycacheto=30
> -o mfsattrcacheto=30 -o mfsdirentrycacheto=30 -s -f /mnt/". Without these
> settings we had massive getattr requests.
>
> If we use these mounts for the live system (massive traffic, systat -if 1
> shows 1-10MB/sec on the mfsmaster server), the mfsmaster runs at 100% CPU.
> The php / httpd processes on the clients get stuck in the fu_ans state.
> Are there any tips or tricks we can try? If you need more infos, please
> let us know.
>
> Thanks,
> Kai
From: Michał B. <mic...@ge...> - 2010-10-29 12:33:51

Hi!

When talking about different disks in the same chunkserver: slower disks will not slow down I/O operations on faster disks.

When talking about saving a file whose copies are kept on chunkservers with disks of different speeds: the overall time is bounded by the slowest disk, because the data has to be written to all the disks (we wait for a success status from all of the writes).

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: 3piece [mailto:ma...@3p...]
Sent: Wednesday, October 27, 2010 1:11 AM
To: moo...@li...
Subject: [Moosefs-users] Does moosefs have Dynamic Tiered Storage?

How does moosefs handle different types of media in the same server, i.e. ssd, sata and ide?
From: Michał B. <mic...@ge...> - 2010-10-29 12:16:39

Hi!

Could you send some screens / charts of the master server usage? We would be mainly interested in what operations take place and how many of them.

In our environment the main cause of CPU increase is "getattr", "lookup" and "readdir" operations. The options used in mfsmount are correct and should decrease the number of these operations. But in your case probably something else slows down the server.

We have two master processes on a FreeBSD 7.1 machine:

    791 nobody 1 80 -19 10665M 10047M CPU0 0 659.8H 16.70% mfsmaster
    786 nobody 1 69 -19 238M 225M select 1 127.0H 11.28% mfsmaster

The first has 23 000 000 files and the second 5 000.

systat gives:

    em0 in 808.354 KB/s 1010.191 KB/s 3.687 TB
        out 1.328 MB/s 9.395 MB/s 11.022 TB

so the transfer is similar.

You can have a look at our charts and compare: http://80.48.16.122/

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Kai Kockro [mailto:kk...@we...]
Sent: Thursday, October 28, 2010 10:32 AM
To: moo...@li...
Subject: [Moosefs-users] Performance with 40 Clients

Hi,

we have one mfsmaster running on FreeBSD 8.1-STABLE. The filesystem is completely ZFS. The server has 16G RAM. The mfsmaster process has a resident size of 173M. CPU is a Xeon L3426 @ 1.87GHz.

Our 40 clients are running on FreeBSD 7.2 and do massive file uploading (file sizes from 10kb - 10mb).

The args for mfsmount are "mfsmount -o mfscachefiles -o mfsentrycacheto=30 -o mfsattrcacheto=30 -o mfsdirentrycacheto=30 -s -f /mnt/". Without these settings we had massive getattr requests.

If we use these mounts for the live system (massive traffic, systat -if 1 shows 1-10MB/sec on the mfsmaster server), the mfsmaster runs at 100% CPU. The php / httpd processes on the clients get stuck in the fu_ans state. Are there any tips or tricks we can try? If you need more infos, please let us know.

Thanks,
Kai
From: Michał B. <mic...@ge...> - 2010-10-29 10:16:05

Hi!

This is information about deleting individual chunk copies, not whole chunks. Such operations (apart from deleting files from the system) also happen when you decrease the goal for a given file or while rebalancing replications. If any chunks on the Info tab are marked in blue, this means they have more copies than planned and are to be deleted. While rebalancing, first a copy on a server with more free space is created, and later (after several minutes) the copy on the more occupied server is deleted.

Kind regards
Michał

From: Roast [mailto:zha...@gm...]
Sent: Thursday, October 21, 2010 1:08 PM
To: moosefs-users
Subject: [Moosefs-users] something about mfs.cgi

Hi, all.

The "Master Charts" in the cgi script show that "chunk deletions (per minute)" is very high, about 70, and it continued for about 3 hours.

But in the trash directory mounted by "/opt/app/mfs/bin/mfsmount -m /mnt -H 192.150.26.100", there are only about 10 files from the past 3 hours. We also checked the log of our application; it shows no file deletions at that time.

So I wonder why "chunk deletions (per minute)" in the mfs.cgi script reports so high. Has anybody else met the same situation? And what happened with MFS?

Thanks all.

--
The time you enjoy wasting is not wasted time!
From: Michał B. <mic...@ge...> - 2010-10-29 10:03:04

There is a small timeout set for the write operation (several seconds). It may happen that a single write operation takes several or more seconds. If these messages are sent by different servers, there is nothing to worry about.

But if the message is sent mainly by one server (IP in hex C0A8020F = 192.168.2.15) you should investigate it more. In the CGI monitor go to the Disks tab and click "hour" in the "I/O stats last min (switch to hour,day)" row and sort by "write" in the "max time (switch to avg)" column. Now look whether there are disks which obviously stand out from the others. You can also look at the "fsync" column and sort the results. Maximum times should not exceed 2 seconds (2 million microseconds). You should look for individual disks which may be a bottleneck of the system.

"try counter: 1" alone is not a problem - the number of trials is set as an option to mfsmount (by default 30). Until mfsmount reaches this limit, write operations are repeated and the application gets the OK status.

Regards
Michal

From: Stas Oskin [mailto:sta...@gm...]
Sent: Wednesday, October 20, 2010 1:04 PM
To: moosefs-users
Subject: [Moosefs-users] writeWorker time out

Hi.

We noticed the following message in logs:

    file: 28, index: 7, chunk: 992, version: 1 - writeworker: connection with (C0A8020F:9422) was timed out (unfinished writes: 5; try counter: 1)

MFS seems to be working and functioning normally. It seems to be related to the write process timing out, but the connection is normal.

Can it be caused by slow disks? Also, what does try counter 1 mean, and where can it be changed? Finally, what will the operating system return to the application - that the write operation has failed?

Thanks in advance!
From: Laurent W. <lw...@hy...> - 2010-10-29 09:34:55

On Wed, 27 Oct 2010 13:09:31 -0600 Thomas S Hatch <tha...@gm...> wrote:
> I was wondering if anyone had a working automatic failover setup with ucarp.
> I can get the interfaces to fail over properly and the new mfsmaster starts
> up, but all my mounts go stale and I get IO errors until I remount them.
>
> Just wondering if anyone else had one of these set up and could give me a
> few pointers.

Unfortunately no. I haven't had time yet (nor money for the server :() to set up automatic failover. I was planning to use ucarp too. I wish you the best of luck, and please share your recipes :)

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-10-29 09:28:35

On Fri, 22 Oct 2010 12:26:46 -0600 Thomas S Hatch <tha...@gm...> wrote:
> I am using a moosefs mount for kvm virtual machine images. These images are
> showing somewhat erratic performance profiles: some of them operate at full
> speed, around 40-60MB/s, but some of them are moving along at only 4-8 MB/s.
>
> I am running tests on multiple levels of my system to try and find where the
> problems are.
>
> My question for the list is simply, do you all have any suggestions as to
> what I can check?
>
> I am running Ubuntu 10.04 on all systems (I am planning on trying an
> Archlinux vm eventually) and moosefs 1.6.17.

Your setup description is too light. Can you expand? Without that it's difficult to know where the culprit might be.

Regards,

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-10-29 09:26:09

On Thu, 21 Oct 2010 19:08:20 +0800 Roast <zha...@gm...> wrote:
> Hi, all.

Hi,

> The "Master Charts" in the cgi script show that "chunk deletions (per
> minute)" is very high, about 70, and it continued for about 3 hours.
>
> But in the trash directory mounted by "/opt/app/mfs/bin/mfsmount -m
> /mnt -H 192.150.26.100", there are only about 10 files from the past 3
> hours. We also checked the log of our application; it shows no file
> deletions at that time.
>
> So I wonder why "chunk deletions (per minute)" in the mfs.cgi script
> reports so high.
>
> Has anybody else met the same situation? And what happened with MFS?

My first guess is you added a chunkserver, and mfs is rebalancing chunks so that they're equally distributed among chunkservers to spread the I/O load at best. Does it ring any bell?

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-10-29 09:23:35

On Thu, 21 Oct 2010 10:58:12 +0800 leon hong <cod...@gm...> wrote:
> hello, Michal

Hi Leon,

> I tried deleting some data in mfsclient, but the mfsmaster's used RAM is
> not reduced. I had set the trash time to 0 (mfssettrashtime 0 /usr/mfs/).
> What's the reason?

I'm pretty sure that mfs doesn't have any memleak. It would have killed a lot of setups that run for months, or even years. I don't have time to dive into the code to see how it works, but don't forget that the master only caches metadata in RAM, not file contents. So deleting a couple of files isn't enough to get back a large bunch of RAM. Definitely, if your master hardware config isn't strong enough to manage the volume, add RAM! Or, at worst, you could hack some metadata compression with zlib, but I'm really unsure about the efficiency (metadata are quite small) and the performance impact may be rough.

Hope it helps,

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Laurent W. <lw...@hy...> - 2010-10-29 09:13:55

On Fri, 29 Oct 2010 10:15:28 +0200 Michał Borychowski <mic...@ge...> wrote:
> Hi!

Hi,

I've tried your suggestion here, as our volume is lightly loaded (newly born). 1 master, 1 metalogger, 5 chunkservers, 29TB total, 15TB used for now, Gbps network, 7200-byte jumbo frames. The volume had been rebalancing for a while at a rate of 60 chunks replicated/deleted per minute. With 5 and 15 as you said, we're close to 300 :-)

Thanks for the tip!

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Michał B. <mic...@ge...> - 2010-10-29 08:15:44

Hi!

Experience shows that while working in a production environment aggressive replication is not desirable, as it can substantially slow down the whole system. So by default replication is non-aggressive.

Replication speed may be altered upon startup by using these two options:

    CHUNKS_WRITE_REP_LIMIT
    CHUNKS_READ_REP_LIMIT

The first tells how many chunks may be saved in parallel on one chunkserver while replicating. The other tells how many chunks may be read in parallel on one chunkserver while replicating. So you can experiment: set the first option to 5 and the second to 15 and restart the master server. After replication finishes, go back to the default values (1 and 5) and restart the master server machine. We hope that helps.

Probably in the future replication speed will be changeable "live" with the "mastertools" package.

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Reinis Rozitis [mailto:r...@ro...]
Sent: Wednesday, October 20, 2010 4:07 PM
To: moosefs-users
Subject: [Moosefs-users] Replication mechanism/speed

Hello, I want to clarify a few things regarding MooseFS (didn't find clear answers in the FAQ section):

On what principle (speed/rate) does the replication of chunks work? While actively using an MFS attached filesystem (rsyncing to it) I switched off one of the chunkservers, and of course when bringing it up again a bunch of files were undergoal - atm I'm seeing something like 1 file/chunk replicated per second.

While non-aggressive replication is fine (doesn't degrade the rest of the system), is there a way to adjust / finetune it (or maybe it's done automatically)? Is it somewhat connected to chunkserver count - like more chunkservers = higher replication rates (since the replication is distributed then), or to the goal size (the more a file is undergoal (for example 1 copy out of 6) the faster it gets replicated), or is it just some preset/hardcoded rate?

rr
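A minimal sketch of the tuning Michał describes; the two option names and the 5/15 experiment come straight from his message, while the config file path is an assumption (adjust to your install):

    # /etc/mfsmaster.cfg - temporary values for faster replication
    # (defaults are 1 and 5; restore them once replication catches up)
    CHUNKS_WRITE_REP_LIMIT = 5
    CHUNKS_READ_REP_LIMIT = 15

Then restart the master so the new limits take effect, e.g. with "mfsmaster restart" (the exact service command varies by platform).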
From: Kai K. <kk...@we...> - 2010-10-28 08:58:39

Hi,

we have one mfsmaster running on FreeBSD 8.1-STABLE. The filesystem is completely ZFS. The server has 16G RAM. The mfsmaster process has a resident size of 173M. CPU is a Xeon L3426 @ 1.87GHz.

Our 40 clients are running on FreeBSD 7.2 and do massive file uploading (file sizes from 10kb - 10mb).

The args for mfsmount are "mfsmount -o mfscachefiles -o mfsentrycacheto=30 -o mfsattrcacheto=30 -o mfsdirentrycacheto=30 -s -f /mnt/". Without these settings we had massive getattr requests.

If we use these mounts for the live system (massive traffic, systat -if 1 shows 1-10MB/sec on the mfsmaster server), the mfsmaster runs at 100% CPU. The php / httpd processes on the clients get stuck in the fu_ans state. Are there any tips or tricks we can try? If you need more infos, please let us know.

Thanks,
Kai
From: Thomas S H. <tha...@gm...> - 2010-10-27 19:09:39

I was wondering if anyone had a working automatic failover setup with ucarp. I can get the interfaces to fail over properly and the new mfsmaster starts up, but all my mounts go stale and I get IO errors until I remount them.

Just wondering if anyone else had one of these set up and could give me a few pointers.

-Tom Hatch
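For anyone trying to reproduce the setup Tom describes, a rough ucarp sketch; the addresses, interface name, and script paths are made up, and the up-script would promote the metalogger and start mfsmaster as discussed earlier in this thread:

    # Hypothetical ucarp invocation on each master candidate.
    # -v vhid, -p password, -a shared virtual IP that clients mount against.
    ucarp -i eth0 -s 192.168.2.10 -v 42 -p secret -a 192.168.2.100 \
          --upscript=/usr/local/bin/mfs-up.sh \
          --downscript=/usr/local/bin/mfs-down.sh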
From: Laurent W. <lw...@hy...> - 2010-10-27 06:47:09

On Wed, 27 Oct 2010 01:11:01 +0200 3piece <ma...@3p...> wrote:
> How does moosefs handle different types of media in the same server,
> i.e. ssd, sata and ide?

moosefs isn't aware of that level, as chunkservers read/write data to mount points, not devices. So whatever the underlying hardware, moosefs behaves the same…but at a different speed ;)

HTH,

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies, 165 Avenue de Bretagne, 59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: 3piece <ma...@3p...> - 2010-10-27 06:07:07

How does moosefs handle different types of media in the same server, i.e. ssd, sata and ide?
From: jose m. <let...@us...> - 2010-10-26 16:17:26

On Tue, 2010-10-26 at 17:00 +0200, Germán Latorre wrote:
> Wouldn't this affect performance? I believe that building such an
> architecture would make the FTP server a single point of failure. Am I
> right?

* Exporting with samba, nfs, sshfs, or any other Windows drive-letter method does affect performance; it is no longer a native mfsmount but a distributed network filesystem re-exported.

* Each client or chunkserver can be an FTP (mfsclient) or application server: a native mfsclient whose filesystem is not re-exported, so performance is not affected.

* We run five permanent vsftpd FTP servers, and two on-demand at a backup site.

http://control.seycob.es:9425/mfs.cgi?sections=MS
http://control.seycob.es:9426

rsync replication.

* Sorry, Google Translator....
From: Germán L. <ger...@tw...> - 2010-10-26 15:04:15

Wouldn't this affect performance? I believe that building such an architecture would make the FTP server a single point of failure. Am I right?

On 26/10/2010 16:05, jose maria wrote:
> On Tue, 2010-10-26 at 10:26 +0200, Germán Latorre wrote:
>> Hello,
>>
>> We are choosing a distributed file system to create a scalable HA
>> storage architecture for our solution. We are interested in MooseFS,
>> as it definitely suits our needs. However our applications are
>> Windows-based and will need to access MooseFS to store and retrieve
>> files.
>>
>> How can that be achieved?
>
> * Drive letter on Windows: ExpanDrive + ssh on the server, WebDrive +
> ftp, ssh or webdav on the server.
>
> * Store and retrieve files: ftp server, web server, DMS etc.; on
> Windows, FileZilla or any other client.

--
Germán Latorre Antín
Principal Software Engineer
TwinDocs - https://www.twindocs.com
(+34) 902 523 232
From: jose m. <let...@us...> - 2010-10-26 14:05:54

On Tue, 2010-10-26 at 10:26 +0200, Germán Latorre wrote:
> Hello,
>
> We are choosing a distributed file system to create a scalable HA
> storage architecture for our solution. We are interested in MooseFS,
> as it definitely suits our needs. However our applications are
> Windows-based and will need to access MooseFS to store and retrieve
> files.
>
> How can that be achieved?

* Drive letter on Windows: ExpanDrive + ssh on the server, WebDrive + ftp, ssh or webdav on the server.

* Store and retrieve files: ftp server, web server, DMS etc.; on Windows, FileZilla or any other client.
From: Steve <st...@bo...> - 2010-10-26 11:48:51

samba server

-------Original Message-------
From: Germán Latorre
Date: 26/10/2010 09:47:37
To: moo...@li...
Subject: [Moosefs-users] Access MooseFS from Windows?

Hello,

We are choosing a distributed file system to create a scalable HA storage architecture for our solution. We are interested in MooseFS, as it definitely suits our needs. However our applications are Windows-based and will need to access MooseFS to store and retrieve files.

How can that be achieved?

Thanks very much in advance,
Germán.

--
Germán Latorre Antín
Principal Software Engineer
TwinDocs - https://www.twindocs.com
(+34) 902 523 232
From: Michał B. <mic...@ge...> - 2010-10-26 09:56:33

Unfortunately there is no implementation of FUSE for Windows. So one option is to have clients on *nix machines and let Windows machines connect to them through Samba. Another option would be to implement some NFS/MFS interface. On the other hand, the master server and chunkservers would work on Windows using cygwin.

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Germán Latorre [mailto:ger...@tw...]
Sent: Tuesday, October 26, 2010 10:26 AM
To: moo...@li...
Subject: [Moosefs-users] Access MooseFS from Windows?

Hello,

We are choosing a distributed file system to create a scalable HA storage architecture for our solution. We are interested in MooseFS, as it definitely suits our needs. However our applications are Windows-based and will need to access MooseFS to store and retrieve files.

How can that be achieved?

Thanks very much in advance,
Germán.

--
Germán Latorre Antín
Principal Software Engineer
TwinDocs - https://www.twindocs.com
(+34) 902 523 232
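A minimal sketch of the Samba re-export Michał suggests, assuming a *nix client already has MooseFS mounted at /mnt/mfs (share name, paths, and reload command are illustrative; the mount may also need -o allow_other so smbd can read it):

    # Hypothetical smb.conf share on a Linux/BSD MFS client; Windows
    # machines then map it as a network drive (\\server\moosefs).
    cat >> /etc/samba/smb.conf <<'EOF'
    [moosefs]
       path = /mnt/mfs
       browseable = yes
       read only = no
    EOF

    # Tell the running smbd to pick up the new share.
    smbcontrol smbd reload-config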