From: Alexey S. <al...@si...> - 2014-07-02 04:36:12
I'm working on a closed government project. We use 2 servers with 10 TB of data at most. Please tell me the price of MooseFS. Also, do I have to pay once for the license and then each year for support?

-----Original message-----
From: "Krzysztof Kielak" <krz...@co...>
Sent: 01.07.2014 22:35
To: "moo...@li..." <moo...@li...>
Subject: [Moosefs-users] MooseFS 2.0 released today!

[...]
From: WK <wk...@bn...> - 2014-07-01 19:26:44
Congratulations. I went to the website to check on Pro pricing and didn't find anything; instead, I'm told to contact the developers.

Before I waste anybody's time, could someone give a general range of Pro costs? Since we run lots of small MooseFS (10-40 TB) clusters, I suspect that the cost is prohibitive for us.

-wk

On 7/1/14, 7:31 AM, Krzysztof Kielak wrote:
> [...]
From: Krzysztof K. <krz...@co...> - 2014-07-01 14:30:53
Dear MooseFS Users,

We are pleased to announce that the new version of MooseFS is publicly available starting today!

The system will be distributed in two flavors:

- Community Edition (CE), which is the continuation of the previous line of open-source products available at moosefs.org,
- Professional Edition (Pro), targeting the enterprise storage market with features like the long-awaited High Availability for master servers, which puts the system in a new class of storage solutions.

Binary packages from officially supported repositories for multiple platforms are now available through the most commonly used package managers. For detailed instructions please go to http://get.moosefs.com and follow the instructions for your environment.

Core Technology, the independent entity responsible for MooseFS development, is a former member of the Gemius Group (http://gemius.com), where MooseFS is the central component driving a variety of products in the Internet monitoring/research area, with 300 thousand events collected every second and stored on a 2.5 PB MooseFS instance. Our mission is to make this technology available to others, as we think it is extremely effective and easy to use, as it was in our case.

The newest version has a lot of updates and new, exciting features. The MooseFS Pro license is sold together with technical support. The price depends on the size of your installation (number of physical disks, number of physical servers, size of raw data).

In case of any questions don't hesitate to contact us at co...@mo...

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440
From: Alexander A. <akh...@ri...> - 2014-06-30 06:54:42
Hi!

I use Linux as a MooseFS client to share MooseFS files via Samba 3. Performance is up to 2200 Kb/s on a Windows client connected via Fast Ethernet.

I found this via Google: http://prog.olsztyn.pl/mfs4win/en/# but didn't understand what to download from that web page to test it :-(

wbr Alexander

======================================================

[...]
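A minimal sketch of the Linux-gateway approach discussed in this thread: mount MooseFS on a Linux box and export the mount point through Samba for the Windows machines. The mount point, share name, master host name and smb.conf path are illustrative assumptions, not details from the thread:

  # on the Linux gateway: mount MooseFS (assumes the master resolves as "mfsmaster")
  mkdir -p /mnt/mfs
  mfsmount /mnt/mfs -H mfsmaster

  # add a share pointing at the MooseFS mount (smb.conf location varies by distro)
  cat >> /etc/samba/smb.conf <<'EOF'
  [mfs]
     path = /mnt/mfs
     read only = no
     browseable = yes
  EOF

  # reload Samba; Windows clients can then map \\gateway\mfs
  smbcontrol smbd reload-config

Throughput over such a gateway is bounded by the gateway's network links and the SMB/NFS overhead, which is consistent with the slower figures reported above.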
From: baalchina <baa...@gm...> - 2014-06-28 13:52:22
Hi all,

I am planning to deploy a MooseFS system on my intranet. Because I have some Windows apps, such as Gene6 FTP, some syslog servers and backup servers, I cannot drop Windows in my LAN. So far I haven't found a native MooseFS Windows client, so how can I use MooseFS space from my Windows servers?

I think one way is a Linux client that shares the MooseFS space as NFS/Samba/iSCSI, but won't this slow down MooseFS performance?

Thanks for your suggestions.

--
Sorry for my poor English, hope you can understand.
From: Timothy M. C. <tc...@um...> - 2014-06-25 14:29:52
Hi all,

I just wanted to follow up on what I found out about this. (Unfortunately it was not a thorough investigation.)

It seems that the osxfuse and FreeBSD FUSE implementations don't really use the fuse_file_info struct which mfs uses to indicate "keep_cache=0". As a result, they don't invalidate cached data as expected. (MFS's mfscachemode option uses "keep_cache" and so it has no effect on osxfuse.)

The quick workaround, which may be acceptable to some, is to simply disable FUSE's caching on osxfuse with some osxfuse-specific options, e.g.:

  sudo mfsmount -o novncache,noubc

I'm not sure what the FreeBSD 10.0 equivalent might be; the new FUSE implementation seems uncharacteristically poorly documented. There are some promising-looking sysctls in vfs.fuse, but I didn't have any luck with them.

On Jun 18, 2014, at 11:09 AM, Tim Creech <tc...@um...> wrote:
> [...]
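A short sketch of the osxfuse workaround described above, reusing the tux/beastie-style check from earlier in the thread; the mount point and the -H master name are assumptions:

  # OS X client: disable osxfuse's own caching, since it ignores the
  # keep_cache hint that mfsmount relies on
  sudo mfsmount /mfs -H mfsmaster -o novncache,noubc

  # re-run the earlier consistency test from another (Linux) client:
  #   tux%  echo 3 >> /mfs/numbers
  #   mac%  cat /mfs/numbers     # with caching disabled, the new line should appear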
From: Tim C. <tc...@um...> - 2014-06-18 15:42:58
Thanks for this suggestion. I have actually tried mfscachemode=NEVER, and the problematic behavior doesn't change (!). This is the case for both FreeBSD 10.0 and OS X.

It's as though mfs is generally unable to direct these FUSE implementations to invalidate cached data. (Or to disable some built-in caching so that mfsmount can do it? I'm not sure how things actually work.)

On Wed, Jun 18, 2014 at 05:14:00PM +0400, Alexander Akhobadze wrote:
> [...]
From: Markus K. <mar...@tu...> - 2014-06-18 15:14:30
On Wednesday 18 June 2014 16:08:57, baalchina wrote:
> Hi everyone,
>
> I want to deploy moosefs in my lan, before buy servers, I have some questions.
>
> First, my server needs raid card or not? If I have a server with 12 harddisk,
> I can using all of them to make a virtual disk, such as raid5, with my raid
> card. Or make 12 virtual disk, each one vd is a real hdd. In moosefs, which
> one is suggested? A large virtual disk or many small vd? I think maybe many
> small vd is good, but I don't sure.

Both are possible. I think the best choice depends on the size of your storage and the type of data. If you plan to use huge disks (e.g. 6 TB) it will take a long time to rebalance your data if you have to replace a faulty disk, especially if you have lots of small files stored. So using a RAID may be a good idea in this case.

> Second, is cpu and memory important for moosefs? I want to buy some 1 cpu
> and 8 or 16gb memory server, is this ok? All of my servers using to store
> media files for ftp, so speed is not so important.

I think that this will be ok, but I do not have experience using 12 disks in one chunk server. For the master server you will need more RAM and fast storage. There have been several posts on this list with reports of memory requirements on the master server.

regards
Markus Köberl
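For the "many small VDs" layout discussed above (one filesystem per physical disk, no RAID), each disk simply gets its own line in the chunkserver's mfshdd.cfg. The config path and mount points below are illustrative:

  # /etc/mfs/mfshdd.cfg -- one mounted filesystem per line (path may differ by platform)
  /mnt/chunk01
  /mnt/chunk02
  /mnt/chunk03
  # ...one entry per physical disk, up to /mnt/chunk12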
From: baalchina <baa...@gm...> - 2014-06-18 14:09:05
Hi everyone,

I want to deploy MooseFS in my LAN; before buying servers, I have some questions.

First, does my server need a RAID card or not? If I have a server with 12 hard disks, I can use all of them to make one virtual disk, such as RAID 5, with my RAID card, or make 12 virtual disks where each VD is a real HDD. In MooseFS, which one is suggested: one large virtual disk or many small VDs? I think maybe many small VDs is good, but I'm not sure.

Second, are CPU and memory important for MooseFS? I want to buy some servers with 1 CPU and 8 or 16 GB of memory; is this ok? All of my servers are used to store media files for FTP, so speed is not so important.

Thanks for your suggestions, and sorry for my poor English.

--
from:baalchina
From: Alexander A. <akh...@ri...> - 2014-06-18 13:14:18
Hi!

Maybe it is a good idea to use mfsmount with the mfscachemode=NEVER option?

======================================================

[...]
From: Tim C. <tc...@um...> - 2014-06-17 17:23:31
Hi all,

Is anyone having any luck using OS X and/or FreeBSD machines as clients?

I am able to build mfsmount easily for FreeBSD 10.0 and Mavericks, and mfsmount appears to work correctly at first glance. However, I have noticed that writes to files from other machines (say, a Linux machine) are never observed by the OS X or FreeBSD clients.

For example, say I have a file /mfs/numbers, a FreeBSD machine named "beastie", and a Linux machine "tux". This is the kind of thing I see:

  tux% echo 1 > /mfs/numbers
  beastie% cat /mfs/numbers
  1
  tux% echo 2 >> /mfs/numbers
  beastie% cat /mfs/numbers
  1
  beastie% sleep 600; cat /mfs/numbers
  1

Even hours later, the change to /mfs/numbers will not be seen. It seems beastie's cache is never invalidated. Other Linux clients see the change within seconds, and if I unmount/remount mfs on the FreeBSD machine it then sees the change.

I'm a little suspicious of the mfsmount implementation because I see the same behavior on FreeBSD and OS X, which AFAIK have mostly unrelated FUSE implementations.

Has anyone had any relevant experiences? Any tips or ideas are greatly appreciated.
From: Aleksander W. <ale...@co...> - 2014-06-12 12:04:04
Hi.

MooseFS preserves file attributes when restoring from trash. To restore a file you need to move it to the undel folder. If the full path to the file still exists, the file will be restored with all attributes. If the path to the file does not exist, MooseFS will create it with default permissions (drwxr-xr-x root root) for the folders, but the file itself will have all its attributes preserved.

Best Regards
Aleksander Wieliczko
Technical Support Engineer
moosefs.com

On 06/12/2014 09:45 AM, Florent B wrote:
> [...]
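A sketch of the restore procedure described above, using the MooseFS meta mount (which exposes the trash and trash/undel directories); the mount point and master host name are illustrative:

  # mount the meta filesystem on a client ("-m" mounts MFSMETA instead of the data tree)
  mkdir -p /mnt/mfsmeta
  mfsmount -m /mnt/mfsmeta -H mfsmaster

  # deleted files appear under trash/; moving an entry into trash/undel restores it
  ls /mnt/mfsmeta/trash
  mv "/mnt/mfsmeta/trash/<trash entry for the file>" /mnt/mfsmeta/trash/undel/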
From: Florent B <fl...@co...> - 2014-06-12 07:45:59
Hi all,

MooseFS trash seems to not preserve file attributes (ownership & rights) of files. When a file is restored, it takes your user default attributes.

Is it expected?

Thank you :)
From: Hakan A. <hak...@gm...> - 2014-06-11 13:00:50
Hi,

Davies and me have been implementing an "offline snapshot" feature, which:

1. Provides a clean way to roll back to an old state in case of lost current metadata.
2. Allows accidentally overwritten data to be recovered.
3. Should not use any extra memory on the master server.

This is achieved by providing a new tool, mfsarchive, that takes a directory parameter and produces an archive file on the server with the metadata of the content of that directory. It also increases the refcount of the chunks referred to by the saved metadata to make sure they will be available as long as the archive lives on the server. It can be used like this (with /mfs being a mounted moosefs):

  mfsarchive /mfs/path/to/some/dir /path/to/save/archive/id

This saves the metadata to disk on the master server without increasing the permanent memory usage. The archive can later be loaded into the master server memory using:

  mfsrestore /path/to/saved/archive/id /mfs/restored

This combination of mfsarchive and mfsrestore will have the same effect as a single call to mfsmakesnapshot. To later remove an archive, there is:

  mfsunarchive /path/to/saved/archive/id /mfs

and to list all the archives on the server:

  mfslistarchives /mfs

Both mfsarchive and mfslistarchives also print archive ids that can be used directly, instead of the /path/to/saved/archive/id files in the examples above, by using the -i option.

The archives are stored in /var/lib/mfs/archives on the master server, which needs to be backed up separately; they are currently not transferred to the metadata loggers.

The implementation is available in the offline-snapshot branch here: https://github.com/hakanardo/moosefs/tree/offline-snapshot

It still requires more testing...

--
Håkan Ardö
From: Steve W. <st...@pu...> - 2014-06-10 12:41:50
On 06/10/2014 04:20 AM, Richard Morris wrote:
> Good morning, I hope someone can advise me.
>
> We have a chunkserver where we have recently replaced a faulty disk.
> Before removing it, we marked the disk as "for replacement" and all the
> chunks were moved to the other disks by the software.
>
> Having replaced the disk and removed the marker for replacement, I was
> surprised to see that chunks were not being moved back to this disk. After
> 2 days the number of chunks on the new disk was still zero according to the
> CGI monitor -- all other disks within this chunkserver had around 380,000
> chunks each.
>
> As a test, we added several large files to the share (it's an archive, so
> new items are not added regularly). On refresh of the CGI monitor the number
> of chunks on the new disk started to rise, so now there are around 3,600
> chunks on the new disk, so I'm happy the disk is fine and that the
> chunkserver is working correctly -- my question is this: is there an easy
> way of basically telling the chunkserver to level out the number of chunks
> so they are roughly equal on all the disks? I can see references to a disk
> usage balancer but it appears to be automated and doesn't appear to be
> working on this server -- is it balancing at the server level?
>
> Perhaps I should note that we have 3 chunkservers, one mfsmaster server and
> a metalogger/file share server in this cluster.
>
> Any guidance would be appreciated.
>
> Cheers,
> Richard Morris

I will normally copy the chunks from the failing disk (if it's still possible to read the disk) to a replacement disk, stop the chunk server, install the replacement disk, and then let MooseFS replicate any under-goal files that were missed while the chunk server was down. This leaves the disks fairly well balanced after you're done, but there is a bit more exposure due to being under-goal for a short period of time.

Steve
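A rough outline of the manual copy approach Steve describes, assuming the failing disk is still readable; the mount points and exact sequence are assumptions for illustration:

  # stop the chunkserver so chunk files are not written during the copy
  mfschunkserver stop

  # copy the chunk tree from the failing disk to the already-mounted new disk
  rsync -a /mnt/old_disk/ /mnt/new_disk/

  # point mfshdd.cfg at the new disk's mount point instead of the old one, then restart;
  # MooseFS re-replicates any under-goal chunks that changed while the server was down
  mfschunkserver start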
From: Davies L. <dav...@gm...> - 2014-05-26 04:58:57
One trick for this: start a chunk server for each disk.

On Sun, May 25, 2014 at 2:27 PM, Leszek A. Szczepanowski <twi...@gm...> wrote:
> [...]

--
- Davies
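A hedged sketch of the "one chunk server per disk" trick: run several mfschunkserver instances on the same host, each with its own data path, listen port and a single-disk mfshdd.cfg. The file names, paths and port numbers below are illustrative:

  # /etc/mfs/mfschunkserver_disk1.cfg (repeat per disk, changing paths and port)
  #   DATA_PATH = /var/lib/mfs-disk1
  #   CSSERV_LISTEN_PORT = 9522
  #   HDD_CONF_FILENAME = /etc/mfs/mfshdd_disk1.cfg   # that file lists only /mnt/disk1

  # start each instance with its own config file
  mfschunkserver -c /etc/mfs/mfschunkserver_disk1.cfg start
  mfschunkserver -c /etc/mfs/mfschunkserver_disk2.cfg start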
From: Deon C. <deo...@gm...> - 2014-05-25 21:47:16
Marking a disk for removal requires another chunk server to send its chunks to. MooseFS is a system that works with multiple chunk servers; running everything on just one server is more or less just a demo.

Deon

On 26/05/2014, at 9:27 am, Leszek A. Szczepanowski <twi...@gm...> wrote:
> [...]
From: Leszek A. S. <twi...@gm...> - 2014-05-25 21:28:16
I prepared a MooseFS "cluster" on one machine for testing purposes. I use one master server, one chunk server and no meta replication server for now. 3 HDDs are for data and are defined in mfshdd.cfg.

But one of these disks got some bad clusters. I marked it for removal in mfshdd.cfg, restarted the chunk server processes and... nothing. Why isn't the data flowing to the free space on the rest of the HDDs? I even set:

  CHUNKS_WRITE_REP_LIMIT = 5 # default value 1
  CHUNKS_READ_REP_LIMIT = 25 # default value 5

But still nothing is happening. Should I wait, or what? When will the chunk server rebalance itself?

--
Leszek A. Szczepanowski
twi...@gm...
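For reference, the usual way to mark a disk for removal is a leading '*' on its line in mfshdd.cfg, followed by a reload or restart of the chunkserver; as the replies above note, the chunks only migrate if there is somewhere else (another chunkserver) allowed to hold the extra copies. Paths are illustrative:

  # /etc/mfs/mfshdd.cfg
  /mnt/disk1
  /mnt/disk2
  */mnt/disk3    # leading '*' marks this disk "for removal"

  # apply the change (a restart, as in the original post, also works)
  mfschunkserver reload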
From: Deon C. <deo...@gm...> - 2014-05-18 23:14:49
I run the master server on CentOS, and my chunk servers on FreeBSD. I don't think one platform is necessarily better than another, although I prefer BSD for chunk servers due to native ZFS support and a very low footprint, since I run my chunk servers off USB keys.

On 17/05/2014, at 2:37 am, Warren Myers <wa...@an...> wrote:
> [...]
From: Warren M. <wa...@an...> - 2014-05-16 14:37:27
Based on the answer to this FAQ (http://www.moosefs.org/moosefs-faq.html#average), would it make sense to use a multi-OS infrastructure for MooseFS (eg CentOS & FreeBSD)?

Does a Master or Metalogger run better / faster / more stably on one platform over another? Do chunkservers?

Warren Myers
http://antipaucity.com
https://www.digitalocean.com/?refcode=d197a961987a
From: Alexander A. <akh...@ri...> - 2014-05-14 10:55:06
Hi All,

Does anybody know what the following log messages mean:

  May 14 13:00:59 gw1 mfschunkserver[2051]: main server module: can't set accept filter: ENOENT (No such file or directory)
  May 14 13:59:47 gw1 mfschunkserver[2610]: write to Master error: EPERM (Operation not permitted)

While they happen, my chunk server can't completely connect to the master. mfscgiserv shows the chunk server as connected but with a ZERO chunk count, and the chunk server continuously attempts to reconnect to the master. The master and this chunk server are located on the same host.

This behavior happens when I try to use a master IP which is a CARP address (I use FreeBSD). If I put a normal (not CARP-controlled) master IP address in mfschunkserver.cfg, everything works fine. The other chunk server and the clients work with the master's CARP IP address without any errors.

Thanks!

wbr Alexander Akhobadze
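One hedged guess, not confirmed in this thread: on FreeBSD, a "can't set accept filter: ENOENT" message usually just means the requested accept-filter kernel module is not loaded, which is generally harmless for operation; loading it would look like this:

  # load the "dataready" accept filter now and at boot (FreeBSD)
  kldload accf_data
  echo 'accf_data_load="YES"' >> /boot/loader.conf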
From: Anh K. H. <ky...@th...> - 2014-05-13 11:10:17
Hi the team,

A long time ago I created an unofficial mirror of "moosefs" on GitHub to deploy on my platform. Recently some developers have sent pull requests on GitHub, and that isn't very convenient; you know I am not on the MooseFS team ;)

See the pull requests here: https://github.com/xmirror/moosefs/pulls

Can someone take a look and discuss the requests with them?

Thanks a lot!

--
I am ... 5.5 dog years old.
From: Matt B. <ma...@va...> - 2014-05-08 02:53:39
SF here.

m

On Wed, May 07, 2014 at 05:46:29PM -0700, Davies Liu wrote:
> Hi guys,
>
> MooseFS is so unknown in SF Bay Area, but there should be some users
> here. We can arrange a meetup or just fire-side chat.
>
> --
> - Davies

--
Matt Billenstein
ma...@va...
http://www.vazor.com/
From: WK <wk...@bn...> - 2014-05-08 01:23:00
On 5/7/14, 5:46 PM, Davies Liu wrote:
> Hi guys,
>
> MooseFS is so unknown in SF Bay Area, but there should be some users
> here. We can arrange a meetup or just fire-side chat.

I'm in Southern California and I get up to San Jose quite a bit.

-bill
From: Davies L. <dav...@gm...> - 2014-05-08 00:46:57
Hi guys,

MooseFS is so unknown in SF Bay Area, but there should be some users here. We can arrange a meetup or just fire-side chat.

--
- Davies