From: Fyodor U. <uf...@uf...> - 2011-08-06 12:29:50
Hi!

    root@amanda:~# mfsfileinfo /bacula/bacula-client-gen.sh
    /bacula/bacula-client-gen.sh:
        chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
            copy 1: 10.5.51.141:9422
            copy 2: 10.5.51.145:9422
            copy 3: 10.5.51.147:9422

I see 3 copies. But:

    root@amanda:~# mfsgetgoal /bacula
    /bacula: 2
    root@amanda:~# mfsgetgoal /bacula/bacula-client-gen.sh
    /bacula/bacula-client-gen.sh: 2

Why?

WBR, Fyodor.
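For anyone scripting this kind of check, below is a minimal Python sketch that compares a file's configured goal with the number of copies mfsfileinfo actually reports. It assumes the output format shown in the transcript above, which may differ between MooseFS versions; extra copies above the goal are typically transient, since the master removes over-goal copies in the background.

```python
# Sketch only: compares mfsgetgoal against the copy count printed by
# mfsfileinfo. Output parsing is based on the transcript above and may
# need adjusting for other MooseFS versions.
import re
import subprocess
import sys

def copies_per_chunk(path):
    out = subprocess.check_output(["mfsfileinfo", path], text=True)
    counts = []
    for line in out.splitlines():
        if re.match(r"\s*chunk \d+:", line):
            counts.append(0)                 # start counting a new chunk
        elif re.match(r"\s*copy \d+:", line) and counts:
            counts[-1] += 1
    return counts

def goal(path):
    out = subprocess.check_output(["mfsgetgoal", path], text=True)
    return int(out.rsplit(":", 1)[1])        # format: "<path>: <goal>"

if __name__ == "__main__":
    path = sys.argv[1]
    g = goal(path)
    for idx, n in enumerate(copies_per_chunk(path)):
        state = "ok" if n == g else ("overgoal" if n > g else "UNDERGOAL")
        print(f"chunk {idx}: {n} copies, goal {g} -> {state}")
```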
From: Steve <st...@bo...> - 2011-08-05 13:38:34
Hopefully that means we are waiting for a major revision?

-------Original Message-------
From: Fyodor Ustinov
Date: 05/08/2011 12:40:14
To: moo...@li...
Subject: [Moosefs-users] Stagnation?
From: Fyodor U. <uf...@uf...> - 2011-08-05 11:38:58
Hi!

In 2010, seven versions were released. In 2011 - only one, seven months ago. No changes in the public git.

WBR, Fyodor.
From: Robert S. <rsa...@ne...> - 2011-08-04 00:42:26
We have been spending a lot of time trying to get MooseFS stable and optimized. Something I have noticed is that mfsmaster seems to be a bottleneck in our setup. What I also noticed is that mfsmaster is single threaded. From reading the source code it seems to use a very interesting polling loop to handle all communications and actions. So a question: is there anything on the roadmap to make mfsmaster multithreaded?

It also seems that the performance of MooseFS is very dependent on the performance of mfsmaster. If the machine running mfsmaster is slow or busy, it can slow everything down significantly or even cause instability in the file system. This also implies that if you want to buy a dedicated machine for mfsmaster, you should buy the fastest possible CPU and as much RAM as you need; local disk space and multiple CPUs or cores are not important. Is this correct? What would the recommendation be for an optimal machine to run mfsmaster?

Robert
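The single-threaded polling loop described above is the classic event-loop pattern: one thread multiplexes every client and chunkserver socket. The sketch below only illustrates that pattern in Python; it is not MooseFS's actual C implementation, and the handler is a placeholder.

```python
# Illustration of a single-threaded poll-style event loop (the pattern
# described above), NOT MooseFS's actual implementation. One thread
# services every connection, so any long-running task stalls all of them.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(listener):
    conn, _ = listener.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)        # one request at a time, no worker threads
    if not data:
        sel.unregister(conn)
        conn.close()
        return
    conn.sendall(data)            # placeholder for real request handling

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 9421))  # 9421: the conventional mfsmaster client port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept)

while True:                       # anything slow inside this loop (e.g. a
    for key, _ in sel.select():   # metadata dump) blocks every socket
        key.data(key.fileobj)
```

The upside of this design is that the metadata structures need no locking; the downside, as discussed later in this thread, is that anything slow inside the loop makes the whole master appear unresponsive.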
From: Robert S. <rsa...@ne...> - 2011-08-03 11:57:36
I think the original question was about single-disk IOPS. The moment you add multiple disks, effective IOPS will increase. But there is always a penalty: for MooseFS it is the network I/O; for RAID5/6 it is the calculation of the error-correcting codes and the extra data that has to be read and written for every write.

But even assuming you have 100 x 7200 RPM spindles and the files are perfectly distributed with no overhead, with 10 kB files you will only get about 100 MB/s write speed under perfect conditions. In reality you will see much lower numbers. Adding more spindles and/or faster spindles is the only way to improve the speed of dealing with many small files, but there are other limits and overhead which cap the gain you will get.

Robert

On 8/3/11 6:11 AM, Dennis Jacobfeuerborn wrote:
> I would think that the iops issue is somewhat mitigated by the parallelism of MooseFS since iops don't have to be processed in strictly sequential order. On the negative side there are a lot of round-trips between the servers involved for writes so if you have a lot of them they are going to be slower than on local storage.
>
> Regards,
> Dennis
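The figures above can be reproduced with a quick back-of-the-envelope calculation. The numbers below are the same illustrative assumptions used in the post (about 100 IOPS for a 7200 RPM disk, roughly one I/O operation per small-file create at best), not measurements.

```python
# Back-of-the-envelope small-file write throughput, using the assumptions
# from the post above. Illustrative only.
def small_file_write_mb_s(spindles, iops_per_disk, file_kb, efficiency=1.0):
    files_per_sec = spindles * iops_per_disk * efficiency
    return files_per_sec * file_kb / 1024.0

print(small_file_write_mb_s(1, 100, 10))                  # ~1 MB/s, one 7200 RPM disk
print(small_file_write_mb_s(1, 100, 10, efficiency=0.5))  # ~0.5 MB/s, "real world"
print(small_file_write_mb_s(100, 100, 10))                # ~100 MB/s, 100 perfect spindles
print(small_file_write_mb_s(1, 210, 10))                  # ~2 MB/s, one 15 kRPM disk
```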
From: Dennis J. <den...@co...> - 2011-08-03 10:11:56
I would think that the IOPS issue is somewhat mitigated by the parallelism of MooseFS, since IOPS don't have to be processed in strictly sequential order. On the negative side there are a lot of round-trips between the servers involved for writes, so if you have a lot of them they are going to be slower than on local storage.

Regards,
Dennis
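A rough latency model shows why the round-trips mentioned above hurt a single sequential writer much more than many parallel ones. Everything below is an assumption for illustration: the chain length, round-trip times and disk time are made-up figures, and real write pipelines overlap some of these steps.

```python
# Rough, illustrative model: each small write pays a master round-trip,
# then hops through one chunkserver per copy, plus disk time. All numbers
# are assumptions; real writes overlap some of these steps.
def write_latency_ms(goal, master_rtt_ms=0.3, hop_rtt_ms=0.3, disk_ms=8.0):
    return master_rtt_ms + goal * hop_rtt_ms + disk_ms

for goal in (1, 2, 3):
    lat = write_latency_ms(goal)
    print(f"goal={goal}: ~{lat:.1f} ms/write, "
          f"~{1000 / lat:.0f} sequential writes/s per writer")
```

A single writer is therefore latency-bound at a few hundred small files per second, while enough concurrent writers can still keep all spindles busy, which is the parallelism point above.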
From: Robert S. <rsa...@ne...> - 2011-08-02 23:39:26
It is not really an issue of MFS or not. You will find the same performance issues on EXT2/3/4 and XFS.

Hard drives and most file systems are made for streaming large files, not for dealing with large numbers of small files. It helps to have enough memory on the machines mounting the file system to keep the folder structure cached in memory. In that way MFS is good, as it forces the whole folder structure to be in memory through mfsmaster.

But there seem to be other limitations that make it hard to deal with large numbers of small files.

One of the largest limitations of working with small files is the IOPS (Input/Output Operations Per Second) limit of the individual drives and the controllers. A 7200 RPM drive can do at most about 100 IOPS. An I/O operation is a seek and read, or a write. To open a file and read/write it can take several operations to follow the folder structure, to identify the location on the disk and then to actually read/write it. This implies that if you are very optimistic you can create 100 files per disk per second. If a single file is only 10 kB, then it means you can write at around 1 MB/s to a single disk as a theoretical maximum. In the real world it will probably be less than 50% of that. RAID5 and RAID6 will have a significantly negative effect on IOPS for an array of disks.

A 15 kRPM drive is limited to around 210 IOPS.

This obviously ignores the effect of caching (OS, controller, disk) and reordering of operations. But if the total size of the data set you are working with is larger than the amount of RAM available for caching, then caching will not have much of an effect. Reordering of reads and writes can have a positive influence on the effective IOPS, so it may be useful to invest in a better quality controller card.

This all assumes that you avoid some of the obvious pitfalls when dealing with large numbers of files. One of them is not to store too many files per folder. A lot of operating systems/file systems slow down significantly once they find more than a few thousand files in a single folder.

If you find a way around these types of limits I would love to know about it. I have looked at using SSD drives due to their much higher IOPS, but then you have to be careful of the IOPS limits on the controller cards. That and the per-GB cost of the SSD drives is a bit problematic at the moment. I have also looked at using MySQL for storing files of this size and that has shown some potential, but it did not really outperform anything else I looked at. It was also somewhat problematic to scale.

Robert

On 8/2/11 2:41 PM, Kristofer Pettijohn wrote:
> MFS is designed for large files, as it was written as a clone of the Google File System. It will perform very poorly with small files. I don't think there is much you can do.
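The "too many files per folder" pitfall mentioned above is usually avoided by fanning files out into hashed subdirectories. A minimal sketch follows; the two-level, 256-way layout and the example paths are arbitrary choices, not anything MooseFS requires.

```python
# Sketch of hashed subdirectory fan-out to avoid thousands of files per
# folder. The 2-level / 256-way layout and example paths are arbitrary.
import hashlib
import os

def sharded_path(root, name):
    digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], name)

def store(root, name, data):
    path = sharded_path(root, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as fh:
        fh.write(data)
    return path

# store("/mnt/mfs/objects", "invoice-000123.pdf", b"...") spreads files over
# at most 256 * 256 directories, keeping each directory listing small.
```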
From: Ricardo J. B. <ric...@da...> - 2011-08-02 22:18:09
On Monday 01 August 2011, Darren Webb wrote:
> Hello,

Hi!

> I understand that files are split into chunks and each chunk is a file on a drive specified in mfshdd.cfg on one of the file servers. Two questions:
>
> 1) For any given file, are all chunks for a single copy kept on the same

I assume you mean all copies of a single chunk :)

> chunkserver, or can they be spread across different servers? If so, is each chunk copy guaranteed* to be stored on different chunk servers? And if not, would this imply that setting a higher goal means improved performance?

Every copy of a chunk is always stored on a different chunkserver, so if you lose one chunkserver you can still use the copy from another chunkserver, as long as you have set a goal > 1. A higher goal doesn't necessarily mean increased performance; at least that's not MFS's goal (I think this is in the FAQ if you want more about it).

> 2) Can chunks from two copies of the same file be stored on the same chunkserver?

Not in the normal case. You might have two chunkserver processes running on the same server and defeat both my answer and your high availability :)

> 3) If my goal for all files is N, is it guaranteed* to withstand N-1 simultaneous chunkserver failures?

Hey, you said *two* questions! :)

A file with goal = N is guaranteed to withstand N-1 chunkserver failures, yes. Also, when a chunkserver goes down, MFS replicates undergoal chunks to other chunkservers to achieve goal = N again.

Cheers,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
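The answers above boil down to one invariant: every copy of a chunk sits on a different chunkserver, so goal N survives any N-1 chunkserver failures. The toy model below just restates that invariant; the random placement is a simplification for illustration, not the real balancing algorithm.

```python
# Toy restatement of the invariant above: copies of one chunk go to
# distinct chunkservers, so goal N tolerates N-1 simultaneous failures.
# Random placement is a simplification of the real balancing logic.
import itertools
import random

def place_chunk(chunkservers, goal):
    return random.sample(chunkservers, goal)   # always distinct servers

def survives(copies, chunkservers, failures):
    # true only if at least one copy remains for every possible failure set
    return all(set(copies) - set(dead)
               for dead in itertools.combinations(chunkservers, failures))

servers = ["cs1", "cs2", "cs3", "cs4"]
copies = place_chunk(servers, goal=3)
print(copies, "-> survives 2 failures:", survives(copies, servers, 2))  # True
print(copies, "-> survives 3 failures:", survives(copies, servers, 3))  # False
```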
From: Kristofer P. <kri...@cy...> - 2011-08-02 18:41:59
MFS is designed for large files, as it was written as a clone of the Google File System. It will perform very poorly with small files. I don't think there is much you can do.

----- Original Message -----
From: "Vineet Jain" <vin...@gm...>
To: moo...@li...
Sent: Tuesday, August 2, 2011 10:03:26 AM
Subject: [Moosefs-users] Small file performance is an order of magnitude worse than large files
From: Laurent W. <lw...@hy...> - 2011-08-02 17:54:30
Hi,

A chunkserver in our MFS volume is being retired so we can put in newer and bigger disks (*/path in mfshdd.cfg), so chunks are being replicated onto the other chunkservers. One of the chunkservers is really close to being full, but chunks continue to get written to it. The other CSs are not as full (disks are being changed box by box, so the fill % is not the same on every CS). The mfschunkserver process has disappeared at least 10 times without any error message - nothing in dmesg, nothing in /var/log/messages. Some corner case badly handled by the algorithm? The only thing I get on the master is "chunkserver disconnected", and on the chunkserver: (write) write error: ECONNRESET (Connection reset by peer). The network is (apparently) fine, as it has been running fine for months.

Reserved disks for MFS on the chunkserver:

    /dev/sdh1   670487   669993    494  100%  /dataj
    /dev/sdi1  1408053  1405512   2542  100%  /datak
    /dev/sdb1   632095   631663    432  100%  /datab
    /dev/sdc1   670487   669804    684  100%  /datad
    /dev/sdf1  1408053  1405889   2165  100%  /datag
    /dev/sdk1  1408053  1405532   2522  100%  /dataf
    /dev/sdl1   670487   669907    580  100%  /datal
    /dev/sdd1  1408053  1405454   2600  100%  /datac
    /dev/sde1   670487   669918    570  100%  /datae
    /dev/sdg1   670487   670047    440  100%  /datai

Any idea?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Vineet J. <vin...@gm...> - 2011-08-02 15:03:33
I have a test instance up with 5 drives, one master and one chunk server. Large file performance is fine. However, when testing with 100,000s of small files (10 kB - 100 kB) I'm getting about 200-300 kB/s write performance per drive, judging from the output of iostat. Does that sound right? Is there any way to speed up the writes? It is painfully slow.
From: Michal B. <mic...@ge...> - 2011-08-02 13:58:58
You can try 10 GB or even 5 GB, but this is the absolute minimum. Yes, we'll add some information to the chunkserver process.

Regards
-Michal

-----Original Message-----
From: Dennis Jacobfeuerborn [mailto:den...@co...]
Sent: Tuesday, August 02, 2011 1:53 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Cannot write to mounted filesystem
From: Dennis J. <den...@co...> - 2011-08-02 11:53:24
Hi Michal,

Thanks for the pointer, I was already beginning to think that this might be the problem. You said I *should* have 50 GB for chunkservers, but can I get away with less? I've only started with MooseFS and right now I'm trying to use 4 virtual machines to get a feel for it; I do not intend to do any real work with it. The documentation talks about "several gigabytes" - would 5 or 10 GB be sufficient as well?

If there is an absolute minimum, then the chunkserver daemon should probably exit with an error if there is not enough space available to allow it to operate properly.

Regards,
Dennis
From: Robert S. <rsa...@ne...> - 2011-08-02 11:31:20
Hi Michal,

Increasing the timeout seems to have resolved the issue for me. I still get some times around the hour where mfsmaster is unresponsive, but it does recover.

There is no swapping on the master. The master has 64 GB of RAM and the mfsmaster process is using 33.5 GB of that.

Robert
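One way to confirm the "unresponsive around the top of the hour" pattern is to time requests to the CGI monitor, which queries mfsmaster for its data, and check whether the slow samples line up with the hourly metadata dump. The sketch below assumes mfscgiserv is running on its default port 9425; the host, interval and threshold are placeholders.

```python
# Tiny probe for the "master goes quiet on the hour" pattern discussed in
# this thread: it times a request to the MooseFS CGI monitor (which in turn
# queries mfsmaster) and logs slow or failed attempts. The URL assumes
# mfscgiserv on its default port 9425; interval and threshold are arbitrary.
import time
import urllib.request

URL = "http://mfsmaster:9425/mfs.cgi"   # example host, adjust to your setup
INTERVAL = 5        # seconds between probes
THRESHOLD = 1.0     # log anything slower than this

while True:
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=60) as resp:
            resp.read()
        elapsed = time.time() - start
        if elapsed > THRESHOLD:
            print(time.strftime("%H:%M:%S"), f"slow response: {elapsed:.1f}s")
    except OSError as exc:
        print(time.strftime("%H:%M:%S"), f"request failed: {exc}")
    time.sleep(INTERVAL)
```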
From: Michal B. <mic...@ge...> - 2011-08-02 06:46:47
Hi Robert!

If increasing the timeout really helped in your case, probably the chunkserver registration process was slowing down the master. Again - with this number of files/chunks it should not happen. Please also check your RAM and whether the master is not swapping constantly.

Kind regards
-Michal

-----Original Message-----
From: Robert Sandilands [mailto:rsa...@ne...]
Sent: Monday, July 18, 2011 8:37 PM
To: Mike
Cc: moo...@li...
Subject: Re: [Moosefs-users] mfsmaster hanging at 100% cpu?

I have had it running without a crash for more than 12 hours, which is a new record here.

I changed one setting:

    MASTER_TIMEOUT = 120

in mfschunkserver.cfg.

My guess at the moment is that on the hour the master blocks connections and dumps the metadata to disk and to the mfsmetalogger servers. Due to existing load and the number of files/objects/chunks in our system this takes longer than the chunkserver timeout. This then leads to a process where the chunkserver goes into a disconnect/reconnect loop until the master gets confused.

What also seems to contribute is that once mfsmaster starts blocking connections, mfsmount and mfschunkserver may start using more CPU, which tends to aggravate the situation.

It may help the situation to move mfsmaster to an unloaded and dedicated machine, but I can't help but think that this behavior limits scalability. Given enough files/folders/chunks, any timeout will be exceeded even if the master machine is completely unloaded.

Robert

On 7/18/11 10:55 AM, Mike wrote:
> > Every time it gets into this state one or two chunks gets damaged and I have to manually repair them. Sometimes losing a file.
> > At this stage I can't even get to repairing the chunks as mfsmaster does not stay up for long enough to show me which files to repair.
> > What is also strange is how predictable it is. It always happens on the hour. Not 2 minutes past the hour, but precisely on the hour. It is as if there is some job/process/thread that does something every hour that causes it to go into this state.
>
> I can reproduce this on our install fairly easily (well, I could last time I looked!) Given that I'm running a completely stock config with 2 chunkservers, it shouldn't be TOO hard to figure out what's going on. I can recompile/reinstall/change values as needed, someone just needs to point me in the right direction.
From: Michal B. <mic...@ge...> - 2011-08-02 06:42:42
Hi Mike!

The situation you describe is quite strange. We would like to connect to your master with gdb to check this, if that were possible. Or maybe your master constantly swaps? How much RAM do you have? What is the RAM usage?

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Mike [mailto:isp...@gm...]
Sent: Tuesday, July 12, 2011 4:14 PM
To: moo...@li...
Subject: [Moosefs-users] mfsmaster hanging at 100% cpu?

I have a fairly small MFS installation - 14T of storage across 2 servers, a master node and a metalogger. I'm seeing the mfsmaster jump to 100% cpu and just sit there... rendering the filesystem dead. strace shows it's not doing any IO. Any thoughts or ideas where to look next?
From: Michal B. <mic...@ge...> - 2011-08-02 06:38:10
Hi Dennis!

Your disks are too small. MooseFS reserves some space for itself - it is not a percentage of the disk size, but a fixed size. You should have at least 50 GB for the chunkservers.

Kind regards
-Michal

-----Original Message-----
From: Dennis Jacobfeuerborn [mailto:den...@co...]
Sent: Friday, July 29, 2011 10:21 PM
To: moo...@li...
Subject: [Moosefs-users] Cannot write to mounted filesystem

Hi,

I'm starting to play with MooseFS and while the setup seemed to work fine, I ran into trouble after mounting it.

My setup is a CentOS 6 host system and four CentOS 6 virtual machines installed as master, metalogger and two chunk servers. Installing everything went smoothly and I can see the two chunk servers in the CGI stats interface. Both chunk servers have a 1 GB partition formatted as ext4 mounted under /mnt/data for the storage. The CGI interface shows 2 GiB total space and 1.3 GiB available.

I can mount the filesystem on a client and df -h shows 1.4G of free space, which looks ok. However, the moment I try to copy a file, no matter how small, I immediately get a "No space left on device" error. Doing an "ls" after that shows the file on the filesystem with a size of 0 bytes. A "mkdir /mnt/mfs/test" for example works fine though.

I see the following in the syslog on the client:

    Jul 29 21:47:11 centos6 mfsmount[9930]: file: 6, index: 0 - fs_writechunk returns status 21
    Jul 29 21:47:11 centos6 mfsmount[9930]: error writing file number 6: ENOSPC (No space left on device)

Any ideas what the problem could be?

Regards,
Dennis
From: Michal B. <mic...@ge...> - 2011-08-02 06:35:55
Hi!

Internal error no. 26 means "can't connect". That means that one of the chunkservers in the chain of a write operation could not connect to another one.

Regards
-Michal

-----Original Message-----
From: Samuel Hassine, Olympe Network [mailto:sam...@ol...]
Sent: Sunday, July 31, 2011 11:52 PM
To: moo...@li...
Subject: [Moosefs-users] MFSMount error
From: Robert S. <rsa...@ne...> - 2011-08-02 03:09:03
Has anybody looked at the effect on the performance and stability of MooseFS when you modify FUSE_MAX_BACKGROUND in the fuse kernel module? I notice that newer Linux kernels make this a tunable parameter, but I don't want to go to a new kernel due to the khugepaged problems reported with newer kernels in CentOS 6 and Debian with fuse.

This may contribute to the write starvation I have been seeing on our system. Adding more chunkservers does not seem to resolve the write starvation problem.

Robert

On 7/11/11 6:18 PM, Robert Sandilands wrote:
> Hi Michal,
>
> Caching has not been a very successful path for us. With a 2 TB Squid cache we got < 35% cache hit rates. One of the things we need to do is to re-organize some of our processes to distribute the load a bit better. Our load does come in bursts and it would lead to other improvements if we could distribute the load better through time.
>
> But do I understand correctly that if I could distribute the opens to more mounts on more machines, I could expect to open more files simultaneously? That assumes the same mfsmaster and the same number of chunk servers. Is the limit/slowdown in mfsmount or mfsmaster? I know there is a limit on the disk speed and the number of spindles etc. But our disk utilization is currently < 50% according to "iostat -x".
>
> Even if I could distribute the load better, the total load will just increase with time, and if there is a hard limit on the scalability of MooseFS then it may become a bit problematic in a year or two.
>
> Robert
>
> On 7/11/11 9:29 AM, Michal Borychowski wrote:
>> Hi Robert!
>>
>> We are not sure that switching to a 10 Gb/s network can help much. Still, all open files will be open network connections in mfsmount...
>>
>> I would recommend checking the need to open so many files at one moment... Is it really necessary? Or maybe you can use some mechanisms like memcache, etc.?
>>
>> We don't see any other way to improve the situation of opening so many files.
>>
>> Regards
>> -Michał
>>
>> -----Original Message-----
>> From: Robert Sandilands [mailto:rsa...@ne...]
>> Sent: Tuesday, July 05, 2011 2:32 PM
>> To: Michal Borychowski
>> Cc: moo...@li...
>> Subject: Re: [Moosefs-users] Write starvation
>>
>> Hi Michal,
>>
>> I need this to see if there is a way I can optimize the system to open more files per minute. At this stage our systems can open a few hundred files in parallel. I am not yet at the point where I can do thousands. What I think I am seeing is that the writes are starved because there are too many pending opens and most of them are for reading files. There seems to be a limit of around 2,400 opens per minute on the hardware I have and I am looking at what needs to be done to improve that. Based on your answer it sounds like the network traffic from the machine running mfsmount to the master may be the biggest delay? Short of converting to 10 Gb/s or trying to get all the servers on the same switch, I don't know if there is much to be done about it?
>>
>> Robert
>>
>> On 7/5/11 3:15 AM, Michal Borychowski wrote:
>>> Hi Robert!
>>>
>>> Ad. 1. There is no limit in mfsmount itself, but there are some limits in the operating system. Generally speaking it is wise not to open more than several thousand files in parallel.
>>>
>>> Ad. 2. fopen invokes open, and open invokes (through the kernel and FUSE) the functions mfs_lookup and mfs_open. The mfs_lookup function changes consecutive path elements into an i-node number, while mfs_open does the actual file opening. It sends a packet to the master in order to receive information about the possibility of keeping the file in the cache. It also marks the file in the master as opened - in case it is deleted, it is sustained until the moment of closing.
>>>
>>> BTW. Why do you need this?
>>>
>>> Kind regards
>>> Michał Borychowski
>>> MooseFS Support Manager
>>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>>> Gemius S.A.
>>> ul. Wołoska 7, 02-672 Warszawa
>>> Budynek MARS, klatka D
>>> Tel.: +4822 874-41-00
>>> Fax : +4822 874-41-01
>>>
>>> -----Original Message-----
>>> From: Robert Sandilands [mailto:rsa...@ne...]
>>> Sent: Saturday, July 02, 2011 2:54 AM
>>> To: moo...@li...
>>> Subject: Re: [Moosefs-users] Write starvation
>>>
>>> Based on some tests I think the limit in this case is the number of opens per minute. I think I need to understand what happens with an open before I can make guesses about what can be done to get the number higher.
>>>
>>> But then it still does not quite explain the write starvation, except if the number of pending reads is just so much higher than the number of pending writes that it seems to starve the writes. Maybe this will resolve itself as I add more chunk servers.
>>>
>>> Some questions:
>>>
>>> 1. Is there a limit to the number of handles that client applications can open per mount, per chunk server, per disk?
>>> 2. What happens when an application does fopen() on a mount? Can somebody give a quick overview or do I have to read some code?
>>>
>>> Robert
>>>
>>> On 6/30/11 11:32 AM, Ricardo J. Barberis wrote:
>>>> On Wednesday 29 June 2011, Robert wrote:
>>>>> Yes, we use CentOS, but installing and using the ktune package generally resolves most of the performance issues and differences I have seen with Ubuntu/Debian.
>>>> Nice to know about ktune and thank you for bringing it up, I'll take a look at it.
>>>>
>>>>> I don't understand the comment on hitting metadata a lot? What is a lot?
>>>> A lot = reading / (re)writing / ls -l'ing / stat'ing too often.
>>>>
>>>> If the client can't cache the metadata but uses it often, that means it has to query the master every time.
>>>>
>>>> Network latencies might also play a role in the performance degradation.
>>>>
>>>>> Why would it make a difference? All the metadata is in RAM anyway? The biggest limit to speed seems to be the number of IOPS that you can get out of the disks you have available to you. Looking up the metadata from RAM should be several orders of magnitude faster than that.
>>>> Yep, and you have plenty of RAM, so that shouldn't be an issue in your case.
>>>>
>>>>> The activity reported through the CGI interface on the master is around 2,400 opens per minute on average. Reads and writes are also around 2,400 per minute, alternating with each other. mknod has some peaks around 2,800 per minute but is generally much lower. Lookups are around 8,000 per minute and getattr is around 700 per minute. Chunk replication and deletion is around 50 per minute. The other numbers are generally very low.
>>>> Mmm, maybe 2 chunkservers are just too little to handle that activity, but I would also check the network latencies.
>>>>
>>>> I'm also not really confident about having master and chunkserver on the same server, but I don't have any hard evidence to support my feelings ;)
>>>>
>>>>> Is there a guide/hints specific to MooseFS on what IO/Net/Process parameters would be good to investigate for mfsmaster?
>>>> I'd like to know that too!
>>>>
>>>> Cheers,
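For anyone who wants to reproduce the opens-per-minute numbers discussed in this thread, a crude measurement can be taken directly against a mounted path. The sketch below is only an illustration; the directory, thread count and duration are placeholders, and it should be run against a directory of existing small files.

```python
# Quick-and-dirty measurement of open() throughput on a mounted path, to
# compare against the ~2,400 opens/minute figure discussed above.
import concurrent.futures
import os
import random
import time

MOUNT_DIR = "/mnt/mfs/testdir"   # example path, adjust to your mount
THREADS = 16
DURATION = 60                    # seconds

files = [os.path.join(MOUNT_DIR, f) for f in os.listdir(MOUNT_DIR)]

def worker(deadline):
    count = 0
    while time.time() < deadline:
        with open(random.choice(files), "rb") as fh:
            fh.read(1)           # force the open to actually hit the FS
        count += 1
    return count

deadline = time.time() + DURATION
with concurrent.futures.ThreadPoolExecutor(THREADS) as pool:
    totals = list(pool.map(worker, [deadline] * THREADS))

print(f"{sum(totals)} opens in {DURATION}s "
      f"({sum(totals) * 60 // DURATION} per minute) with {THREADS} threads")
```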
From: Darren W. <dt...@po...> - 2011-08-02 02:46:35
Hello,

I understand that files are split into chunks and each chunk is a file on a drive specified in mfshdd.cfg on one of the file servers. Two questions:

1) For any given file, are all chunks for a single copy kept on the same chunkserver, or can they be spread across different servers? If so, is each chunk copy guaranteed* to be stored on different chunk servers? And if not, would this imply that setting a higher goal means improved performance?

2) Can chunks from two copies of the same file be stored on the same chunkserver?

3) If my goal for all files is N, is it guaranteed* to withstand N-1 simultaneous chunkserver failures?

Thanks,
Darren Webb

* guaranteed in the theoretical sense, not legal, obviously :-)
From: Samuel H. O. N. <sam...@ol...> - 2011-07-31 21:52:33
Hi all,

I have the following errors on a new chunkserver I just installed:

    Jul 31 21:47:38 on-004 mfsmount[2914]: writeworker: write error: 26
    Jul 31 21:47:38 on-004 mfsmount[2914]: writeworker: write error: 26
    Jul 31 21:47:38 on-004 mfsmount[2914]: writeworker: write error: 26
    Jul 31 21:47:39 on-004 mfsmount[2914]: writeworker: write error: 26
    (the same line repeated through 21:47:40)

What does it mean?

Thanks for your answer.

Best regards,
Samuel
From: Thomas S H. <tha...@gm...> - 2011-07-30 17:47:36
This looks like great stuff, Joseph! It will help my deployment a lot, since we will be able to gather stats directly from the mfsmaster and merge them back into our main stats database. I also think that this will greatly enhance Salt's MooseFS support as well. Thanks, Joseph!

-Thomas S Hatch
From: Ólafur Ó. <osv...@ne...> - 2011-07-30 04:21:49
You should be able to find my packages on Launchpad if you don't want to compile yourself.

/Oli

On 30.7.2011, at 00:10, "Atom Powers" <ap...@di...> wrote:
> I've Googled up and down the planet and haven't found any recent packages for Ubuntu, or even anybody discussing them. This seems so strange that I'm almost convinced that I'm missing something.
>
> Am I missing something, or is there really nobody maintaining packages for Ubuntu/Debian?
>
> --
> Perfection is just a word I use occasionally with mustard.
> --Atom Powers--
> Director of IT
> DigiPen Institute of Technology
> (425) 895-4443
From: Joseph H. <per...@gm...> - 2011-07-30 02:37:15
Hey All,

I hope I'm not stepping on any toes with this project. A few weeks ago I was trying to set up monitoring for MooseFS, and I realized that the closest thing to an API that seemed to be available was the web interface. I didn't want to have to write a web scraper for it, so I decided to take a peek inside and see if I could just do what it was doing.

Long story short, I ripped out the parts that query the Moose and tossed them into a class. I'll be the first to admit that most of the code is stolen^Wborrowed from the original mfs.cgi, but it's certainly much easier to integrate into monitoring apps than the web interface. I have a project set up at:

https://github.com/techhat/python-moosefs

Hopefully it'll come in handy for somebody else too.

--
Joseph
From: Thomas S H. <tha...@gm...> - 2011-07-30 02:18:56
There is a debian directory that can be used to build packages for Ubuntu and Debian:

http://moosefs.git.sourceforge.net/git/gitweb.cgi?p=moosefs/moosefs;a=tree