From: David M. <dav...@pr...> - 2017-05-17 18:03:02
|
Hi MooseFS users,

I'm trying to reduce the impact of chunk server rebalancing after new partitions were added. I have set some very conservative master and chunkserver cfg parameters but I can only seem to get the linux server loads down to around 2. http://i.imgur.com/cnomEaU.png

Here are some params I have set to try to slow the master down:

ACCEPTABLE_PERCENTAGE_DIFFERENCE = 10.0
NICE_LEVEL = 19
REPLICATIONS_DELAY_INIT = 600
CHUNKS_LOOP_MAX_CPS = 1
CHUNKS_LOOP_MIN_TIME = 3600
CHUNKS_SOFT_DEL_LIMIT = 1
CHUNKS_HARD_DEL_LIMIT = 2
CHUNKS_WRITE_REP_LIMIT = 2,1,1,1
CHUNKS_READ_REP_LIMIT = 10,1,1,1
ATIME_MODE = 4
CS_HEAVY_LOAD_THRESHOLD = 5
CS_HEAVY_LOAD_RATIO_THRESHOLD = 1.0

Here are some params I have set to try to slow the chunkservers down:

HDD_REBALANCE_UTILIZATION = 1
NICE_LEVEL = 19
WORKERS_MAX = 1
WORKERS_MAX_IDLE = 1

The parameter changes have definitely helped, reducing the linux server loads from 30 down to ~2. It is hard to say which parameters have the most effect or if some parameters are even working at all (such as CS_HEAVY_LOAD_*, which doesn't seem to be putting the chunk servers into grace mode).

From watching IO via iotop I think the main problem is how much IO mfschunkserver still uses even when greatly restricted by the aforementioned parameters. It would be great to throttle its read/write speeds somehow. Any general ideas on mitigating the effect of big chunkserver work while in production and under heavy use by mfsmount? I have also moved all other services off the chunkserver machines. Could mfsmount be turned off on the chunkservers and only used on the master?

Any help would be greatly appreciated.

Thank you,
Dave |
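For reference, the two groups of parameters above live in different files: the first set belongs in mfsmaster.cfg, the second in mfschunkserver.cfg. Below is a minimal sketch of the chunkserver side, assuming the default /etc/mfs config location; the per-line comments are informal paraphrases of the settings quoted in the message, not the official descriptions.

# /etc/mfs/mfschunkserver.cfg -- throttling-related entries only
HDD_REBALANCE_UTILIZATION = 1   # limit the share of disk time spent on internal rebalance
NICE_LEVEL = 19                 # run the daemon at the lowest CPU priority
WORKERS_MAX = 1                 # cap concurrent worker threads
WORKERS_MAX_IDLE = 1            # cap idle worker threads kept around

# apply without a full restart (assuming the reload action is available in your install):
#   mfschunkserver reload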
From: David M. <dav...@pr...> - 2017-05-16 18:43:27
|
Hi, What parameters on the master and chunkservers could be tweaked to reduce impact on I/O? MFS is serving files to heavily accessed nginx and varnish processes and server load is now considerably higher after adding a new partition to one of the chunkservers. I have reduced HDD_REBALANCE_UTILIZATION to 5. Are there other parameters I could change? Thank you, Dave |
From: Piotr R. K. <pio...@mo...> - 2017-05-13 21:08:18
|
Hi David, > Could they also be files that were deleted during rebalancing operations? No. Rebalancing operations are transparent for users and are done on a lower level. Rebalancing takes into consideration chunks, not files. Best regards, Peter -- Piotr Robert Konopelko MooseFS Client Support Team | moosefs.com <https://moosefs.com/> > On 13 May 2017, at 7:27 PM, David Myer <dav...@pr...> wrote: > > Hi, > > I have looked in the mfsmeta trash and the trashed files also seem to exist in the mfs mount, so they may be temporary files created by rsync as Ben suggested. > > Could they also be files that were deleted during rebalancing operations? > > Thanks, > Dave > > >> -------- Original Message -------- >> Subject: Re: [MooseFS-Users] Large trash space without performing deletion operations >> Local Time: May 12, 2017 6:16 PM >> UTC Time: May 12, 2017 10:16 PM >> From: pio...@mo... >> To: David Myer <dav...@pr...> >> moo...@li... <moo...@li...> >> >> Maybe rsync deleted some files during syncing if it was called several times? >> >> MooseFS has a system-wide trash with a per-file configurable trashtime period (see https://moosefs.com/manpages/mfstrashtime.html <https://moosefs.com/manpages/mfstrashtime.html>), by default it is 86400 seconds, so 24 hours. You can configure trashtime before file is deleted. >> >> >> You can access the trash by mounting it: >> >> mkdir /mnt/mfsmeta >> mfsmount -H mfsmaster.host.name -o mfsmeta /mnt/mfsmeta >> >> in /mnt/mfsmeta/trash you have 4096 sub-trashes, just navigate to one and list files - their names will be pseudo-paths, so you can check what actually is in trash (ls -l). You can move these files to "undel" dir, which will cause undelete or remove them from (sub)trash - just issue rm * >> >> You can also mount a trash with no subtrashes (-o mfsmeta,mfsflattrash), but in case of many objects in trash, listing them may be not a good idea. >> >> >> Best regards, >> Peter >> >> -- >> Piotr Robert Konopelko >> MooseFS Client Support Team | moosefs.com <https://moosefs.com/> >> >>> On 12 May 2017, at 11:59 PM, David Myer <dav...@pr... <mailto:dav...@pr...>> wrote: >>> >>> Hi, >>> >>> I have noticed a large amount of trashed files despite performing no deletion operations. >>> >>> So far I have only been filling up my mfs mount using rsync (without any rsync destination deletion flags set). >>> >>> What would be causing this large trash space? >>> >>> total space: 8.0 TiB >>> avail space: 4.2 TiB >>> trash space: 387 GiB >>> trash files: 915451 >>> files: 3928894 >>> chunks: 3609973 >>> all chunk copies: 7219946 >>> >>> Thanks, >>> Dave >>> ------------------------------------------------------------------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, Slashdot.org <http://slashdot.org/>! http://sdm.link/slashdot_________________________________________ <http://sdm.link/slashdot_________________________________________> >>> moosefs-users mailing list >>> moo...@li... <mailto:moo...@li...> >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Ben H. <bj...@ba...> - 2017-05-12 22:41:10
|
David, it might be that rsync, when called without the --inplace flag, will create temporary files on the receiving side that are the same size as the files being moved; they'll of course be removed once the transfer is complete. That might account for the trash space.

On May 12, 2017 23:01, David Myer <dav...@pr...> wrote:
Hi, I have noticed a large amount of trashed files despite performing no deletion operations. So far I have only been filling up my mfs mount using rsync (without any rsync destination deletion flags set). What would be causing this large trash space?
total space: 8.0 TiB
avail space: 4.2 TiB
trash space: 387 GiB
trash files: 915451
files: 3928894
chunks: 3609973
all chunk copies: 7219946
Thanks, Dave |
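If that is the cause, one mitigation is to have rsync write directly into the destination files rather than going through temporary files. A sketch, with an illustrative path; note that --inplace changes failure semantics, since an interrupted transfer leaves a partially updated destination file behind:

rsync -a --inplace /local/data/ /mnt/mfs/data/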
From: Piotr R. K. <pio...@mo...> - 2017-05-12 22:16:35
|
Maybe rsync deleted some files during syncing if it was called several times?

MooseFS has a system-wide trash with a per-file configurable trashtime period (see https://moosefs.com/manpages/mfstrashtime.html); by default it is 86400 seconds, i.e. 24 hours. You can configure the trashtime before a file is deleted.

You can access the trash by mounting it:

mkdir /mnt/mfsmeta
mfsmount -H mfsmaster.host.name -o mfsmeta /mnt/mfsmeta

In /mnt/mfsmeta/trash you have 4096 sub-trashes; just navigate to one and list the files - their names are pseudo-paths, so you can check what actually is in the trash (ls -l). You can move these files to the "undel" dir, which will undelete them, or remove them from the (sub)trash - just issue rm *.

You can also mount the trash with no sub-trashes (-o mfsmeta,mfsflattrash), but with many objects in the trash, listing them may not be a good idea.

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Client Support Team | moosefs.com

> On 12 May 2017, at 11:59 PM, David Myer <dav...@pr...> wrote:
> Hi,
> I have noticed a large amount of trashed files despite performing no deletion operations.
> So far I have only been filling up my mfs mount using rsync (without any rsync destination deletion flags set).
> What would be causing this large trash space?
> total space: 8.0 TiB
> avail space: 4.2 TiB
> trash space: 387 GiB
> trash files: 915451
> files: 3928894
> chunks: 3609973
> all chunk copies: 7219946
> Thanks,
> Dave
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
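Putting that undelete procedure into concrete commands - the sub-trash name and the pipe-separated entry name below are made up for illustration, since real entries are named after the deleted file's inode and original path, and the exact layout follows the description above:

mkdir -p /mnt/mfsmeta
mfsmount -H mfsmaster.host.name -o mfsmeta /mnt/mfsmeta
cd /mnt/mfsmeta/trash/00A
ls -l                                          # entries are pseudo-paths of deleted files
mv "0000002A|home|dave|file.txt" undel/        # undelete the file back to its original path
rm "0000002A|home|dave|doomed.txt"             # or drop it from the trash for good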
From: David M. <dav...@pr...> - 2017-05-12 21:59:19
|
Hi,

I have noticed a large amount of trashed files despite performing no deletion operations. So far I have only been filling up my mfs mount using rsync (without any rsync destination deletion flags set). What would be causing this large trash space?

total space: 8.0 TiB
avail space: 4.2 TiB
trash space: 387 GiB
trash files: 915451
files: 3928894
chunks: 3609973
all chunk copies: 7219946

Thanks,
Dave |
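A related thing worth checking is the trashtime actually in effect on the tree being filled: with the default of 86400 s, everything rsync replaces or removes lingers for a day. A sketch using the tools from the mfstrashtime manpage referenced earlier; the mount path is illustrative:

mfsgettrashtime -r /mnt/mfs          # summarise trashtime settings under the mount
mfssettrashtime -r 3600 /mnt/mfs     # keep deleted files for 1 hour instead of 24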
From: Piotr R. K. <pio...@mo...> - 2017-05-12 15:20:00
|
One chunk minimum size is 64 kB + 5kB MFS chunk header (so 69kB) maximum chunk size is 64MB + 5kB header If file size exceeds 64MB (and does not exceed 128MB), the file has 2 chunks, if >128MB and <192MB: 3 chunks and so on. BR Peter -- Piotr Robert Konopelko MooseFS Client Support Team | moosefs.com <https://moosefs.com/> > On 12 May 2017, at 5:12 PM, Alex Crow <ac...@in...> wrote: > > Hi, > > If you have a lot of very small files there will be some overhead as there is a minimum chunk size. Used space also will vary depending on your underlying FS. I use ZFS with compression so I see relatively less overhead. > Alex > > On 12/05/17 16:02, David Myer wrote: >> Thanks Piotr and Wolfgang. >> >> I have interpreted the term "copy" incorrectly. I thought a copy was an additional instance of the file, so a goal of 2 would result in the file plus 2 copies of it, thus 3 instances of the file. >> >> I also thought this because it explained the size of my mfs mount - I have rsync'd ~500GB of data into my mfs mount, and the CGI stats show ~1.5TB of disk used at a goal of 2. Based off this I assumed each file had 3 instances. What would account for 1.5TB of storage usage if 500GB of files only exist twice at a goal of 2? >> >> Thanks, >> Dave >> >> >>> -------- Original Message -------- >>> Subject: Re: [MooseFS-Users] Min goal to prevent data loss >>> Local Time: May 12, 2017 10:31 AM >>> UTC Time: May 12, 2017 2:31 PM >>> From: pio...@mo... <mailto:pio...@mo...> >>> To: David Myer <dav...@pr...> <mailto:dav...@pr...> >>> moo...@li... <mailto:moo...@li...> <moo...@li...> <mailto:moo...@li...> >>> >>> Hi David, >>> >>> > Wouldn't a file exist in 2 places if the goal was 1? >>> >>> No. If file had goal = 1, it had only one copy. >>> >>> Goal means number of copies, so the best practice is to set the goal to 2 minimum for whole filesystem, e.g.: >>> mfssetgoal -r 2 /mnt/mfs >>> >>> MooseFS will then automatically take care of replication of files to other servers to fulfil the goal. >>> >>> >>> Best regards, >>> Peter >>> >>> -- >>> Piotr Robert Konopelko >>> MooseFS Client Support Team | moosefs.com >>> >>> > On 12 May 2017, at 3:54 PM, David Myer <dav...@pr...> <mailto:dav...@pr...> wrote: >>> > >>> > Hi, >>> > >>> > The MooseFS best practices recommend a min goal of 2 to prevent eventual data loss. Wouldn't a file exist in 2 places if the goal was 1? >>> > >>> > Thanks, >>> > Dave >>> > >>> > ------------------------------------------------------------------------------ >>> > Check out the vibrant tech community on one of the world's most >>> > engaging tech sites, Slashdot.org! http://sdm.link/slashdot_________________________________________ <http://sdm.link/slashdot_________________________________________> >>> > moosefs-users mailing list >>> > moo...@li... <mailto:moo...@li...> >>> > https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>> >> >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot <http://sdm.link/slashdot> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> > > > -- > This message is intended only for the addressee and may contain > confidential information. 
Unless you are that person, you may not > disclose its contents or use it in any way and are requested to delete > the message along with any attachments and notify us immediately. > This email is not intended to, nor should it be taken to, constitute advice. > The information provided is correct to our knowledge & belief and must not > be used as a substitute for obtaining tax, regulatory, investment, legal or > any other appropriate advice. > > "Transact" is operated by Integrated Financial Arrangements Ltd. > 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300. > (Registered office: as above; Registered in England and Wales under > number: 3727592). Authorised and regulated by the Financial Conduct > Authority (entered on the Financial Services Register; no. 190856). > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot_________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
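To make the arithmetic above concrete, using the figures quoted in this message: a file occupies ceil(length / 64 MB) chunks, and every stored copy of a chunk carries the ~5 kB header, with a 64 kB minimum of data. So a 1 kB file with goal 2 takes roughly 2 x 69 kB = 138 kB on disk, while a 200 MB file spans ceil(200 / 64) = 4 chunks and pays only 4 x 5 kB of header per copy, which is negligible. Large trees of very small files are therefore where the "used space" figure grows well beyond length x goal.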
From: Alex C. <ac...@in...> - 2017-05-12 15:12:49
|
Hi, If you have a lot of very small files there will be some overhead as there is a minimum chunk size. Used space also will vary depending on your underlying FS. I use ZFS with compression so I see relatively less overhead. Alex On 12/05/17 16:02, David Myer wrote: > Thanks Piotr and Wolfgang. > > I have interpreted the term "copy" incorrectly. I thought a copy was > an additional instance of the file, so a goal of 2 would result in the > file plus 2 copies of it, thus 3 instances of the file. > > I also thought this because it explained the size of my mfs mount - I > have rsync'd ~500GB of data into my mfs mount, and the CGI stats show > ~1.5TB of disk used at a goal of 2. Based off this I assumed each file > had 3 instances. What would account for 1.5TB of storage usage if > 500GB of files only exist twice at a goal of 2? > > Thanks, > Dave > > >> -------- Original Message -------- >> Subject: Re: [MooseFS-Users] Min goal to prevent data loss >> Local Time: May 12, 2017 10:31 AM >> UTC Time: May 12, 2017 2:31 PM >> From: pio...@mo... >> To: David Myer <dav...@pr...> >> moo...@li... <moo...@li...> >> >> Hi David, >> >> > Wouldn't a file exist in 2 places if the goal was 1? >> >> No. If file had goal = 1, it had only one copy. >> >> Goal means number of copies, so the best practice is to set the goal >> to 2 minimum for whole filesystem, e.g.: >> mfssetgoal -r 2 /mnt/mfs >> >> MooseFS will then automatically take care of replication of files to >> other servers to fulfil the goal. >> >> >> Best regards, >> Peter >> >> -- >> Piotr Robert Konopelko >> MooseFS Client Support Team | moosefs.com >> >> > On 12 May 2017, at 3:54 PM, David Myer <dav...@pr...> >> wrote: >> > >> > Hi, >> > >> > The MooseFS best practices recommend a min goal of 2 to prevent >> eventual data loss. Wouldn't a file exist in 2 places if the goal was 1? >> > >> > Thanks, >> > Dave >> > >> > >> ------------------------------------------------------------------------------ >> > Check out the vibrant tech community on one of the world's most >> > engaging tech sites, Slashdot.org! >> http://sdm.link/slashdot_________________________________________ >> > moosefs-users mailing list >> > moo...@li... >> > https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users -- This message is intended only for the addressee and may contain confidential information. Unless you are that person, you may not disclose its contents or use it in any way and are requested to delete the message along with any attachments and notify us immediately. This email is not intended to, nor should it be taken to, constitute advice. The information provided is correct to our knowledge & belief and must not be used as a substitute for obtaining tax, regulatory, investment, legal or any other appropriate advice. "Transact" is operated by Integrated Financial Arrangements Ltd. 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300. (Registered office: as above; Registered in England and Wales under number: 3727592). Authorised and regulated by the Financial Conduct Authority (entered on the Financial Services Register; no. 190856). |
From: David M. <dav...@pr...> - 2017-05-12 15:02:32
|
Thanks Piotr and Wolfgang. I have interpreted the term "copy" incorrectly. I thought a copy was an additional instance of the file, so a goal of 2 would result in the file plus 2 copies of it, thus 3 instances of the file. I also thought this because it explained the size of my mfs mount - I have rsync'd ~500GB of data into my mfs mount, and the CGI stats show ~1.5TB of disk used at a goal of 2. Based off this I assumed each file had 3 instances. What would account for 1.5TB of storage usage if 500GB of files only exist twice at a goal of 2? Thanks, Dave -------- Original Message -------- Subject: Re: [MooseFS-Users] Min goal to prevent data loss Local Time: May 12, 2017 10:31 AM UTC Time: May 12, 2017 2:31 PM From: pio...@mo... To: David Myer <dav...@pr...> moo...@li... <moo...@li...> Hi David, > Wouldn't a file exist in 2 places if the goal was 1? No. If file had goal = 1, it had only one copy. Goal means number of copies, so the best practice is to set the goal to 2 minimum for whole filesystem, e.g.: mfssetgoal -r 2 /mnt/mfs MooseFS will then automatically take care of replication of files to other servers to fulfil the goal. Best regards, Peter -- Piotr Robert Konopelko MooseFS Client Support Team | moosefs.com > On 12 May 2017, at 3:54 PM, David Myer <dav...@pr...> wrote: > > Hi, > > The MooseFS best practices recommend a min goal of 2 to prevent eventual data loss. Wouldn't a file exist in 2 places if the goal was 1? > > Thanks, > Dave > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot_________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Piotr R. K. <pio...@mo...> - 2017-05-12 14:31:30
|
Hi David,

> Wouldn't a file exist in 2 places if the goal was 1?

No. If a file has goal = 1, it has only one copy. Goal means the number of copies, so the best practice is to set the goal to a minimum of 2 for the whole filesystem, e.g.:

mfssetgoal -r 2 /mnt/mfs

MooseFS will then automatically take care of replicating files to other servers to fulfil the goal.

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Client Support Team | moosefs.com

> On 12 May 2017, at 3:54 PM, David Myer <dav...@pr...> wrote:
> Hi,
> The MooseFS best practices recommend a min goal of 2 to prevent eventual data loss. Wouldn't a file exist in 2 places if the goal was 1?
> Thanks,
> Dave
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
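A few companion commands to see the effect of the goal setting in practice; paths are illustrative:

mfssetgoal -r 2 /mnt/mfs            # request two copies of every chunk under the mount
mfsgetgoal /mnt/mfs/some/file       # show the goal set on a single file
mfscheckfile /mnt/mfs/some/file     # show how many chunk copies currently exist
mfsfileinfo /mnt/mfs/some/file      # list each chunk and the chunkservers holding its copies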
From: Wolfgang <moo...@wo...> - 2017-05-12 14:30:33
|
Hi!

No - a goal of 1 means that each chunk of a file is stored only once on the chunkservers. All chunks of a file are still distributed across all your chunkservers - but there is only one copy of each chunk. So if one of your chunkservers crashes, your files with goal=1 will be corrupted. If you have goal=2 and one of your chunkservers crashes, MooseFS will immediately start duplicating the undergoal chunks to reach the goal of 2 for all chunks again.

Greetings
Wolfgang

On 2017-05-12 15:54, David Myer wrote:
> Hi,
> The MooseFS best practices recommend a min goal of 2 to prevent
> eventual data loss. Wouldn't a file exist in 2 places if the goal was 1?
> Thanks,
> Dave
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: David M. <dav...@pr...> - 2017-05-12 13:54:25
|
Hi, The MooseFS best practices recommend a min goal of 2 to prevent eventual data loss. Wouldn't a file exist in 2 places if the goal was 1? Thanks, Dave |
From: Tom I. H. <ti...@ha...> - 2017-04-20 04:43:07
|
Alex Crow <ac...@in...> writes: > The chunkserver is a userland process and has no direct disk access. And > if you're looking for performance bottlenecks, that's the last place to > look (and this pretty much applies to all distributed FSs.) This sounds very reasonable. Do you know what the state is for other applications, particularly database systems? Back when I was a DBA, storage in raw partitions was pretty much assumed to always be better, but these days it seems your typical RDBMS wants xfs. The exception is Oracle, but they have their own fancy storage subsystem (ASM) that does caching, data migration among disks, and so forth. I'd assume that if you don't have some very advanced application specific software handling the storage, it'd be really hard to beat the combination of xfs, OS level buffer caching, and application caching of specific information. -tih -- Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. --Alan Kay |
From: Alex C. <ac...@in...> - 2017-04-19 20:45:08
|
MooseFS stores its "chunks" as files on an underlying filesystem. It doesn't have to be XFS. I use it with ZFS, set up as a pool of devices on each node. But unless the devs decide to rewrite the chunkserver as a kernel-level process, which can manage direct block access to drives (a huge job), you have to use some FS for your chunkservers to use. I can't see why that would be a problem. The chunkserver is a userland process and has no direct disk access. And if you're looking for performance bottlenecks, that's the last place to look (and this pretty much applies to all distributed FSs.) Cheers Alex On 19/04/17 21:04, Marco Milano wrote: > On 04/19/2017 02:11 PM, Rupert Chandler wrote: >> What a very curious email.. >> > Which one? My original email or his response ? > > -- Marco > > >> Rup >> >> ___ >> Rupert Chandler >> Tel: +44(0)1672600139 >> ru...@bt... >> >> >> >> >> >> On Wed, Apr 19, 2017 at 4:08 PM +0100, "Ali Moeinvaziri" <moe...@gm... <mailto:moe...@gm...>> wrote: >> >> Marco, >> >> I appreciate the response. As to your question, well, it does not sound >> like a question, rather telling us no direct i/o is needed. >> >> I don't mean to sound inappropriate. But, I'll let you decide what you need for your >> own system, not telling us what we should need for ours. However if you have a compelling >> reason behind your e-mail, please send me a link, document, or simple e-mail. I'll be happy to >> read it and try to understand your solid reasoning. >> >> Having said this, if you ever stop by Utah (you might already be here), I'll be happy to buy >> you a cup of coffee and have a quick discussion, even a tour of our facility would be in order. >> >> Cheers, >> -AM >> >> >> >> >> On Tue, Apr 18, 2017 at 7:28 PM, Marco Milano <mar...@gm... <mailto:mar...@gm...>> wrote: >> >> On 04/18/2017 05:54 PM, Ali Moeinvaziri wrote: >> > >> > Hi, >> > >> > I see chunckserver partitions need to be formatted under xfs. Is it possible to by-pass >> > xfs and format the partitions under moosefs itself? >> > >> > Thanks, >> > -AM >> >> >> Ali, >> >> Is there a reason for you to have MooseFS handle the I/O without a filesystem? >> (It can't do it now.) Why do you think direct I/O will be better? >> >> Thanks, >> -- Marco >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> <https://u2661533.ct.sendgrid.net/wf/click?upn=aSVgumtVgpGQU3uYU8sWoVpneYg1QJbOcA9KD7cB3B0-3D_l6HG3FW8n50aQtA4oQ21Qbtjs6qeCyu-2BLzdMFIJc-2F4eRy0v3sy6xyrjOMFMLGp4T5L9P2Po95V8HmBuupV-2Fcz1T7miNl6LN45Trud7d8EgcE0Qzp5BUId6WWPZLaCx4FJnzpfStwHTWbEX2oowfYU5jm071H22yV9OddyhdO0-2FvvrQvMRJDoNs-2FCXtbVfLFddQXyufFq-2BgSwNhgWNaonZxJ3AesRUznSB5cBFjT58hc-3D> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... 
<mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> <https://u2661533.ct.sendgrid.net/wf/click?upn=Icg0l1BNdpBnYuWGoApbn0IXdEaT5qnUC7gIcS2XNdrgBiT-2B-2BuagrhyyC-2BAQOpyG-2BmG4gKLe4qAingLKnHCnlKCkpKKQ4sX3lGnNh63lOzI-3D_l6HG3FW8n50aQtA4oQ21Qbtjs6qeCyu-2BLzdMFIJc-2F4eRy0v3sy6xyrjOMFMLGp4THbF3G7SCzhVgdUnhLPkYN1xf-2BXIh7Agv0XcQnLdamBgpNdCeVeWnaX0YD45Xd27lTC9Kf1dBoN8MaXm6tN9g9KIFsVOxpGIia1prgdM1uMrfn8LBwisLiTN96Of0iJaySwKkPo4fW9EgwrS6nylrNCXk5xCg1UtTNtwt6KR8bDo-3D> >> >> > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users -- This message is intended only for the addressee and may contain confidential information. Unless you are that person, you may not disclose its contents or use it in any way and are requested to delete the message along with any attachments and notify us immediately. This email is not intended to, nor should it be taken to, constitute advice. The information provided is correct to our knowledge & belief and must not be used as a substitute for obtaining tax, regulatory, investment, legal or any other appropriate advice. "Transact" is operated by Integrated Financial Arrangements Ltd. 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300. (Registered office: as above; Registered in England and Wales under number: 3727592). Authorised and regulated by the Financial Conduct Authority (entered on the Financial Services Register; no. 190856). |
From: Marco M. <mar...@gm...> - 2017-04-19 20:05:19
|
On 04/19/2017 02:11 PM, Rupert Chandler wrote: > What a very curious email.. > Which one? My original email or his response ? -- Marco > Rup > > ___ > Rupert Chandler > Tel: +44(0)1672600139 > ru...@bt... > > > > > > On Wed, Apr 19, 2017 at 4:08 PM +0100, "Ali Moeinvaziri" <moe...@gm... <mailto:moe...@gm...>> wrote: > > Marco, > > I appreciate the response. As to your question, well, it does not sound > like a question, rather telling us no direct i/o is needed. > > I don't mean to sound inappropriate. But, I'll let you decide what you need for your > own system, not telling us what we should need for ours. However if you have a compelling > reason behind your e-mail, please send me a link, document, or simple e-mail. I'll be happy to > read it and try to understand your solid reasoning. > > Having said this, if you ever stop by Utah (you might already be here), I'll be happy to buy > you a cup of coffee and have a quick discussion, even a tour of our facility would be in order. > > Cheers, > -AM > > > > > On Tue, Apr 18, 2017 at 7:28 PM, Marco Milano <mar...@gm... <mailto:mar...@gm...>> wrote: > > On 04/18/2017 05:54 PM, Ali Moeinvaziri wrote: > > > > Hi, > > > > I see chunckserver partitions need to be formatted under xfs. Is it possible to by-pass > > xfs and format the partitions under moosefs itself? > > > > Thanks, > > -AM > > > Ali, > > Is there a reason for you to have MooseFS handle the I/O without a filesystem? > (It can't do it now.) Why do you think direct I/O will be better? > > Thanks, > -- Marco > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > <https://u2661533.ct.sendgrid.net/wf/click?upn=aSVgumtVgpGQU3uYU8sWoVpneYg1QJbOcA9KD7cB3B0-3D_l6HG3FW8n50aQtA4oQ21Qbtjs6qeCyu-2BLzdMFIJc-2F4eRy0v3sy6xyrjOMFMLGp4T5L9P2Po95V8HmBuupV-2Fcz1T7miNl6LN45Trud7d8EgcE0Qzp5BUId6WWPZLaCx4FJnzpfStwHTWbEX2oowfYU5jm071H22yV9OddyhdO0-2FvvrQvMRJDoNs-2FCXtbVfLFddQXyufFq-2BgSwNhgWNaonZxJ3AesRUznSB5cBFjT58hc-3D> > _________________________________________ > moosefs-users mailing list > moo...@li... <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users > <https://u2661533.ct.sendgrid.net/wf/click?upn=Icg0l1BNdpBnYuWGoApbn0IXdEaT5qnUC7gIcS2XNdrgBiT-2B-2BuagrhyyC-2BAQOpyG-2BmG4gKLe4qAingLKnHCnlKCkpKKQ4sX3lGnNh63lOzI-3D_l6HG3FW8n50aQtA4oQ21Qbtjs6qeCyu-2BLzdMFIJc-2F4eRy0v3sy6xyrjOMFMLGp4THbF3G7SCzhVgdUnhLPkYN1xf-2BXIh7Agv0XcQnLdamBgpNdCeVeWnaX0YD45Xd27lTC9Kf1dBoN8MaXm6tN9g9KIFsVOxpGIia1prgdM1uMrfn8LBwisLiTN96Of0iJaySwKkPo4fW9EgwrS6nylrNCXk5xCg1UtTNtwt6KR8bDo-3D> > > |
From: Rupert C. <ru...@bt...> - 2017-04-19 18:30:00
|
What a very curious email.. Rup ___ Rupert Chandler Tel: +44(0)1672600139 ru...@bt... On Wed, Apr 19, 2017 at 4:08 PM +0100, "Ali Moeinvaziri" <moe...@gm...> wrote: Marco, I appreciate the response. As to your question, well, it does not sound like a question, rather telling us no direct i/o is needed. I don't mean to sound inappropriate. But, I'll let you decide what you need for your own system, not telling us what we should need for ours. However if you have a compelling reason behind your e-mail, please send me a link, document, or simple e-mail. I'll be happy to read it and try to understand your solid reasoning. Having said this, if you ever stop by Utah (you might already be here), I'll be happy to buy you a cup of coffee and have a quick discussion, even a tour of our facility would be in order. Cheers, -AM On Tue, Apr 18, 2017 at 7:28 PM, Marco Milano <mar...@gm...> wrote: On 04/18/2017 05:54 PM, Ali Moeinvaziri wrote: > > Hi, > > I see chunckserver partitions need to be formatted under xfs. Is it possible to by-pass > xfs and format the partitions under moosefs itself? > > Thanks, > -AM Ali, Is there a reason for you to have MooseFS handle the I/O without a filesystem? (It can't do it now.) Why do you think direct I/O will be better? Thanks, -- Marco ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org! https://u2661533.ct.sendgrid.net/wf/click?upn=aSVgumtVgpGQU3uYU8sWoVpneYg1QJbOcA9KD7cB3B0-3D_k7tQZTCNs8PUWgPd-2BsoXulQ9g80pICXXfySmZG5L7Bu7j7CVxFQed6XW02rNF-2BLOuB-2Bi3YIWc3kqSzJEBw-2BfuCl-2BmpmAoPrUex9ETnMRNaorxFBl4UedJEtHY4ExHVHq-2BdqvVCwhjv-2FZah5QRmNqbbq7x1aIUH43hx8Dnx2W8TNEWqKjNG2cFMCb0ZVt5L8Nmsglz66d76b-2Fufpeg9urYlFSCwYjVWx1YibNmO8MPSc-3D _________________________________________ moosefs-users mailing list moo...@li... https://u2661533.ct.sendgrid.net/wf/click?upn=Icg0l1BNdpBnYuWGoApbn0IXdEaT5qnUC7gIcS2XNdrgBiT-2B-2BuagrhyyC-2BAQOpyG-2BmG4gKLe4qAingLKnHCnlKCkpKKQ4sX3lGnNh63lOzI-3D_k7tQZTCNs8PUWgPd-2BsoXulQ9g80pICXXfySmZG5L7Bu7j7CVxFQed6XW02rNF-2BLOuB-2Bi3YIWc3kqSzJEBw-2BfuKI7OnKlh6rgm7b7HjwVqJ3lZMwhI7TO4PY6Bhza0DHCibSpoVHKgJA4JPWboEQWtm7BjJ98ys6IHrZVN-2Bdl7h4Wz9eLXmD2-2FLxxrlAoTzM3dP7ThXPGDTxWyKzbVEqQ74VxMk5BzI7G08O1Zly8roY-3D |
From: Ali M. <moe...@gm...> - 2017-04-19 15:08:51
|
Marco, I appreciate the response. As to your question, well, it does not sound like a question, rather telling us no direct i/o is needed. I don't mean to sound inappropriate. But, I'll let you decide what you need for your own system, not telling us what we should need for ours. However if you have a compelling reason behind your e-mail, please send me a link, document, or simple e-mail. I'll be happy to read it and try to understand your solid reasoning. Having said this, if you ever stop by Utah (you might already be here), I'll be happy to buy you a cup of coffee and have a quick discussion, even a tour of our facility would be in order. Cheers, -AM On Tue, Apr 18, 2017 at 7:28 PM, Marco Milano <mar...@gm...> wrote: > On 04/18/2017 05:54 PM, Ali Moeinvaziri wrote: > > > > Hi, > > > > I see chunckserver partitions need to be formatted under xfs. Is it > possible to by-pass > > xfs and format the partitions under moosefs itself? > > > > Thanks, > > -AM > > > Ali, > > Is there a reason for you to have MooseFS handle the I/O without a > filesystem? > (It can't do it now.) Why do you think direct I/O will be better? > > Thanks, > -- Marco > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Marco M. <mar...@gm...> - 2017-04-19 01:28:50
|
On 04/18/2017 05:54 PM, Ali Moeinvaziri wrote: > > Hi, > > I see chunckserver partitions need to be formatted under xfs. Is it possible to by-pass > xfs and format the partitions under moosefs itself? > > Thanks, > -AM Ali, Is there a reason for you to have MooseFS handle the I/O without a filesystem? (It can't do it now.) Why do you think direct I/O will be better? Thanks, -- Marco |
From: Piotr R. K. <pio...@mo...> - 2017-04-18 22:08:28
|
Hi Ali,

Unfortunately no. MooseFS is a network distributed filesystem and needs an underlying filesystem on the disks which store chunks on Chunkservers. We recommend XFS, as stated.

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

> On 18 Apr 2017, at 11:54 PM, Ali Moeinvaziri <moe...@gm...> wrote:
> Hi,
> I see chunckserver partitions need to be formatted under xfs. Is it possible to by-pass
> xfs and format the partitions under moosefs itself?
> Thanks,
> -AM
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
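In practice, adding a disk to a chunkserver therefore means formatting it with a regular filesystem (XFS as recommended), mounting it, and listing the mount point in mfshdd.cfg. A minimal sketch, with the device name and paths assumed for illustration:

mkfs.xfs /dev/sdb1
mkdir -p /mnt/mfschunks1
mount /dev/sdb1 /mnt/mfschunks1
chown mfs:mfs /mnt/mfschunks1
echo '/mnt/mfschunks1' >> /etc/mfs/mfshdd.cfg
mfschunkserver reload            # pick up the new path without stopping the daemon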
From: Ali M. <moe...@gm...> - 2017-04-18 21:54:14
|
Hi,

I see chunkserver partitions need to be formatted under XFS. Is it possible to bypass XFS and format the partitions under MooseFS itself?

Thanks,
-AM |
From: Piotr R. K. <pio...@mo...> - 2017-04-12 10:41:17
|
Cześć Marcin, Thanks for your feedback, that's great! MooseFS 3.0.91 fixes mainly this issue. The issue was connected with many threads using one descriptor: * MooseFS 3.0.91-1 (2017-04-07) - (...) - (mount) fixed reading done by many (20+) threads using one descriptor (premature removing requests from queue during cleaning) Best regards, Peter -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com > On 12 Apr 2017, at 11:54 AM, ma...@me... wrote: > > W dniu 13.01.2017 o 12:35, ma...@me... pisze: >> W dniu 12.01.2017 o 18:38, Marcin pisze: >>> W dniu 2017-01-12 18:16, Aleksander Wieliczko napisał(a): >>>> By the way I would like to add that minimum configration is: >>>> 1x MooseFS master server >>>> 3x MooseFS chunkserver. >> >> Hi! >> I've added third node with chunkserver. It doesn't fix described problem. > > After upgrading moosefs to 3.0.91, bug didn't appear. > Thank you! > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: <ma...@me...> - 2017-04-12 10:03:00
|
W dniu 13.01.2017 o 12:35, ma...@me... pisze: > W dniu 12.01.2017 o 18:38, Marcin pisze: >> W dniu 2017-01-12 18:16, Aleksander Wieliczko napisał(a): >>> By the way I would like to add that minimum configration is: >>> 1x MooseFS master server >>> 3x MooseFS chunkserver. > > Hi! > I've added third node with chunkserver. It doesn't fix described problem. After upgrading moosefs to 3.0.91, bug didn't appear. Thank you! |
From: Piotr R. K. <pio...@mo...> - 2017-04-10 16:04:24
|
Dear MooseFS Users, We are proud to annonce, that MooseFS 3.0.91 has been released <https://github.com/moosefs/moosefs/releases/tag/v3.0.88> as stable! This version includes bugfixes and improves stability. If you like MooseFS, please support it by starring it on GitHub <https://github.com/moosefs/moosefs>. Opening an issue on GitHub <https://github.com/moosefs/moosefs/issues> is also appreciated. Thanks! Please find the changes in MooseFS 3 since 3.0.88 below: MooseFS 3.0.91-1 (2017-04-07) (all) silence false warnings generated by static analyzers (clang and cppcheck) (master) fixed quota testing routine used during move/rename (all) added README.md to distribution (cs+mount) decreased delay in conncache (mount) fixed reading done by many (20+) threads using one descriptor (premature removing requests from queue during cleaning) MooseFS 3.0.90-1 (2017-03-15) (master) fixed move/rename with quota (master) fixed chunk counters shown in cli/cgi (master+tools) added option preserve hardlinks to mfsmakesnapshot (master) fixed acl copying during mfsmakesnapshot MooseFS 3.0.89-1 (2017-02-21) (mount) added fixing file length in all inodes after write (cs) split bitfiled into separate bytes (fixed potential race condition) (master) setting operation to NONE before sending status (silence unnecessary messages in some cases) (cs) increased verbosity of crc-error messages (cs) fixed invalidating preserved block in case of truncate down followed by truncate up (mount) increased master-proxy buffer sizes Best regards, Peter / MooseFS Team -- Piotr Robert Konopelko MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> > On 9 Feb 2017, at 3:12 PM, Piotr Robert Konopelko <pio...@mo...> wrote: > > Dear MooseFS Users, > > We are proud to annonce, that MooseFS 3.0.88 has been released <https://github.com/moosefs/moosefs/releases/tag/v3.0.88> as stable today! > This version includes bugfixes and improves stability (mainly concerning MooseFS Client). > > If you like MooseFS, please support it by starring it on GitHub <https://github.com/moosefs/moosefs>. Opening an issue on GitHub <https://github.com/moosefs/moosefs/issues> is also appreciated. Thanks! 
> > > Please find the changes in MooseFS 3 since 3.0.86 below: > MooseFS 3.0.88-1 (2017-02-08) > (mount) added read cache clean on write (same file access using different descriptors) > (mount) added missing cond_destroy in readdata.c (fix sent by Jakub Ratajczak) > (master) fixed initializing packet size for reading 'sustained' directory > (all) fixed zassert for printing correct statuses in case of pthread functions > > MooseFS 3.0.87-1 (2017-02-01) > (mount) fix fleng in finfo after truncate (patched by Davies Liu) > (mount) fix overlapped read (patched by Davies Liu) > (mount) fixed invalidating chunk cache after truncate > (mount) fixed fleng handling in read worker > (mount) fixed handling BREAK state in read worker > (mount) changed handling data invalidation in read module (sometimes could be less efficient, but it is much more safer) > (tools) fixed number parsing (patched by Pawel Gawronski) > (cli) fixed printed host/port options > (mount) moved pipes from requests to workers (read and write - huge decrease of descriptors used by mount) > (mount) changed signal to broadcast in rwlock (fixed very rare read/write deadlock) > (mount) fixed 'open descriptors' leak (lookup(with data for open)->open(with cached data)->close) > (mount) fixed potential 'race condition' - free csdata during access > (master) split (only internally) sustained folder into 256 subfolders (too many sustained files caused socket timeouts in master) > > > Best regards, > Peter / MooseFS Team > > -- > Piotr Robert Konopelko > MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> >> On 02 Dec 2016, at 7:54 PM, Piotr Robert Konopelko <pio...@mo... <mailto:pio...@mo...>> wrote: >> >> Dear MooseFS Users, >> >> we released the newest version MooseFS 3.0 recently: MooseFS 3.0.86. >> This version mainly contains bugfixes and improves stability. >> >> >> We strongly recommend to upgrade to the latest version, but please remember, that If you had version lower than 3.0.83 and then you upgraded your Chunkservers to v. 3.0.83 or higher, please DO NOT downgrade them! >> In MooseFS Chunkserver v. 3.0.83 we changed Chunk header from 5k to 8k (see changelog) - this is one of major changes and it means, that Chunkserver older than 3.0.83 cannot "understand" new Chunk header, which may lead to potential data loss! >> >> >> If you like MooseFS, please support it by starring it on GitHub <https://github.com/moosefs/moosefs>. Thanks! 
>> >> >> Please find the changes in MooseFS 3 since 3.0.77 below: >> MooseFS 3.0.86-1 (2016-11-30) >> (master) fixed leader-follower resynchronization after reconnection (pro only) >> (master) fixed rehashing condition in edge/node/chunk hashmaps (change inspired by yinjiawind) >> >> MooseFS 3.0.85-1 (2016-11-17) >> (mount) fixed memory leak (also efficiency) on Linux and potential segfaults on FreeBSD (negative condition) >> (mount) fixed race condition for inode value in write module >> (mount) better descriptors handling (lists of free elements) >> (mount) better releasing descriptors on FreeBSD >> (cs) fixed time condition (patch send by yinjiawind) >> >> MooseFS 3.0.84-1 (2016-10-06) >> (master) fixed setting acl-default without named users or named groups >> (master) fixed master-follower synchronization after setting storage class >> >> MooseFS 3.0.83-1 (2016-09-30) >> (cs) changed header size from 5k to 8k (due to 4k-sector hard disks) >> >> MooseFS 3.0.82-1 (2016-09-28) >> (all) silenced message about using deprecated parameter 'oom_adj' >> (mount) fixed FreeBSD delayed release mechanism >> (mount) added rwlock for chunks >> >> MooseFS 3.0.81-1 (2016-07-25) >> (mount) fixed oom killer disabling (setting oom_adj and oom_score_adj) >> (cli) fixed displaying inactive mounts >> (mount) added fsync before removing any locks >> (daemons) added disabling oom killer (Linux only) >> (all) small fixes in manpages >> (mount) fixed handling nonblocking lock commands (unlock and try) in both locking mechanisms >> (daemons+mount) changed default settings for limiting malloc arenas (Linux only) >> >> MooseFS 3.0.80-1 (2016-07-13) >> (master) fixed chunk loop (in some cases chunks from the last hash position might be left unchecked) >> (master) fixed storage class management (fixed has_***_labels fields) >> >> MooseFS 3.0.79-1 (2016-07-05) >> (master) fixed 'flock' (change type of lock SH->EX and EX->SH caused access to freed memory and usually SEGV) >> >> MooseFS 3.0.78-1 (2016-06-14) >> (cs) fixed serious error that may cause data corruption during internal rebalance (intr. in 3.0.75) >> >> >> >> Best regards, >> Peter >> >> -- >> >> <https://moosefs.com/>Piotr Robert Konopelko >> MooseFS Technical Support Engineer >> e-mail: pio...@mo... <mailto:pio...@mo...> >> www: https://moosefs.com <https://moosefs.com/> >> <https://twitter.com/MooseFS> <https://www.facebook.com/moosefs> <https://www.linkedin.com/company/moosefs> <https://github.com/moosefs> >> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email. >> >> >>> On 08 Jun 2016, at 1:04 AM, Piotr Robert Konopelko <pio...@mo... <mailto:pio...@mo...>> wrote: >>> >>> Dear MooseFS Users, >>> >>> we released the newest versions of both MooseFS 3.0 and MooseFS 2.0 recently: 3.0.77 and 2.0.89. 
>>> >>> They improve MooseFS stability and (in MooseFS 3.0) also add new features, like: >>> Storage Classes <https://moosefs.com/documentation/moosefs-3-0.html> >>> Possibility to mount MooseFS on Linux by issuing: mount -t moosefs mfsmaster.example.lan: /mnt/mfs >>> File sparsification >>> Automatic temporary maintenance mode >>> New I/O synchronisation between different mounts >>> If you like MooseFS, please support it by starring it on GitHub <https://github.com/moosefs/moosefs>. Thanks! >>> >>> >>> Please find the changes in MooseFS 3 since 3.0.73 below: >>> MooseFS 3.0.77-1 (2016-06-07) >>> (mount) added assertions after packet allocation in master communication module >>> (manpages) fixes related to storage classes >>> (all) improved error messages used for storage classes >>> >>> MooseFS 3.0.76-1 (2016-06-03) >>> (master) fixed resolving multi path for root inode >>> (master) fixed licence expiration behaviour (pro only) >>> >>> MooseFS 3.0.75-1 (2016-04-20) >>> (all) fixed cppcheck warnings/errors (mostly false negatives) >>> (cs) added file sparsification during chunk write (also chunk replication, chunk duplication etc.) >>> (mount) fixed write data inefficiency when I/O was performed by root >>> (master) fixed removing too early locked chunks from data structures >>> (tools) fixed reusing local ports (connection to master returned EADDRNOTAVAIL) >>> (all) changed default action from restart to start >>> (all) added exiting when user defined config (option -c) cannot be loaded >>> (cs) changed handling chunk filename (dynamic filename generation - cs should use less memory) >>> (netdump) changed 'moose ports only' option to 'port range' >>> (mount) limited number of files kept as open after close >>> (cs) changed subfolder choosing algorithm >>> (mount) changed mutex lock to rw-lock for I/O on the same descriptor >>> (mount) added link to mfsmount from '/sbin/mount.moosefs' (Linux only) >>> (all) introduced "storage classes" (new goal/labels management) >>> (master+cs) introduced 'temporary maintenance mode' (automatic maintenance mode after graceful stop of cs) >>> (master+mount) added fix for bug in FreeBSD kernel (kernel sends truncate before first close - FreeBSD only) >>> (cs) fixed setting malloc pools on Linux >>> >>> MooseFS 3.0.74-1 (2016-03-08) >>> (master) fixed rebalance replication (check for all chunk copies for destination - not only valid) >>> (master+mount) new mechanism for atime+mtime setting during I/O >>> (master+mount) new I/O synchronization between different mounts (with cache invalidation) >>> (master+mount) new chunk number/version cache (with automatic invalidation from master) >>> (master) added mapping chunkserver IP classes (allow to have separate network for I/O and separate for other activity) >>> (master) fixed status returned by writechunk after network down/up >>> (master) changed trashtime from seconds to hours >>> (master) added METADATA_SAVE_FREQ option (allow to save metadata less frequently than every hour) >>> (master) added using in emergency (endangered chunks) all available servers for replication (even overloaded and being maintained) >>> (master) added using chunkservers in 'internal rebalance' state in case of deleting chunks >>> (all) spell check errors fixed (patch contributed by Dmitry Smirnov) >>> >>> Please find the changes in MooseFS 2.0 since 2.0.88 below: >>> MooseFS 2.0.89-1 (2016-04-27) >>> (master+mount) added fix for bug in FreeBSD kernel (kernel sends truncate before first close - FreeBSD only) >>> >>> Best regards, >>> 
>>> -- >>> >>> <Mail Attachment.png> <https://moosefs.com/> >>> >>> Piotr Robert Konopelko >>> MooseFS Technical Support Engineer >>> e-mail : pio...@mo... <mailto:pio...@mo...> >>> www : https://moosefs.com <https://moosefs.com/> >>> >>> <Mail Attachment.png> <https://twitter.com/MooseFS><Mail Attachment.png> <https://www.facebook.com/moosefs><Mail Attachment.png> <https://www.linkedin.com/company/moosefs><Mail Attachment.png> <https://github.com/moosefs> >>> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email. >>> >>> >>>> On 08 Mar 2016, at 9:24 AM, Piotr Robert Konopelko <pio...@mo... <mailto:pio...@mo...>> wrote: >>>> >>>> Dear MooseFS Users, >>>> >>>> we released the newest versions of both MooseFS 3 and MooseFS 2 recently: 3.0.73 and 2.0.88. >>>> >>>> Please find the changes in MooseFS 3 since 3.0.71 below: >>>> >>>> MooseFS 3.0.73-1 (2016-02-11) >>>> (master) fixed restoring ARCHCHG from changelog >>>> (cli+cgi) fixed displaying master list with followers only (pro version only) >>>> (master) added using size and length quota to fix disk usage values (statfs) >>>> (master) fixed xattr bug which may lead to data corruption and segfaults (intr. in 2.1.1) >>>> (master) added 'node_info' packet >>>> (tools) added '-p' option to 'mfsdirinfo' - 'precise mode' >>>> (master) fixed edge renumeration >>>> (master) added detecting of wrong edge numbers and force renumeration in such case >>>> >>>> MooseFS 3.0.72-1 (2016-02-04) >>>> (master+cgi+cli) added global 'umask' option to exports.cfg >>>> (all) changed address of FSF in GPL licence text >>>> (debian) removed obsolete conffiles >>>> (debian) fixed copyright file >>>> (mount) fixed parsing mfsmount.cfg (system options like nodev,noexec etc. were ommited) >>>> (master) changed way how cs internal rebalance state is treated by master (as 'grace' state instead of 'overloaded') >>>> (mount) fixed bug in read module (setting etab after ranges realloc) >>>> (tools) removed obsoleted command 'mfssnapshot' >>>> >>>> >>>> Please find the changes in MooseFS 2 since 2.0.83 below: >>>> >>>> MooseFS 2.0.88-1 (2016-03-02) >>>> (master) added METADATA_SAVE_FREQ option (allow to save metadata less frequently than every hour) >>>> >>>> MooseFS 2.0.87-1 (2016-02-23) >>>> (master) fixed status returned by writechunk after network down/up >>>> >>>> MooseFS 2.0.86-1 (2016-02-22) >>>> (master) fixed initialization of ATIME_MODE >>>> >>>> MooseFS 2.0.85-1 (2016-02-11) >>>> (master) added ATIME_MODE option to set atime modification behaviour >>>> (master) added using size and length quota to fix disk usage values (statfs) >>>> (all) changed address of FSF in GPL licence text >>>> (debian) removed obsolete conffiles >>>> (debian) fixed copyright file >>>> (mount) fixed parsing mfsmount.cfg (system options like nodev,noexec etc. 
were ommited) >>>> (tools) removed obsoleted command 'mfssnapshot' >>>> >>>> MooseFS 2.0.84-1 (2016-01-19) >>>> (mount) fixed setting file length in write module during truncate (fixes "git svn" case) >>>> >>>> Best regards, >>>> >>>> -- >>>> >>>> <Mail Attachment.png> <https://moosefs.com/> >>>> >>>> Piotr Robert Konopelko >>>> MooseFS Technical Support Engineer >>>> e-mail : pio...@mo... <mailto:pio...@mo...> >>>> www : https://moosefs.com <https://moosefs.com/> >>>> >>>> <Mail Attachment.png> <https://twitter.com/MooseFS><Mail Attachment.png> <https://www.facebook.com/moosefs><Mail Attachment.png> <https://www.linkedin.com/company/moosefs><Mail Attachment.png> <https://github.com/moosefs> >>>> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email. >>>> >>>> >>>>> On 27 Jan 2016, at 5:51 PM, Piotr Robert Konopelko <pio...@mo... <mailto:pio...@mo...>> wrote: >>>>> >>>>> Dear MooseFS Users, >>>>> >>>>> today we published the newest version from 3.x branch: 3.0.71. >>>>> >>>>> Please find the changes since 3.0.69 below: >>>>> >>>>> MooseFS 3.0.71-1 (2016-01-21) >>>>> (master) fixed emptying trash issue (intr. in 3.0.64) >>>>> (master) fixed possible segfault in chunkservers database (intr. in 3.0.67) >>>>> (master) changed trash part choice from nondeterministic to deterministic >>>>> >>>>> MooseFS 3.0.70-1 (2016-01-19) >>>>> (cgi+cli) fixed displaying info when there are no active masters (intr. in 3.0.67) >>>>> (mount+common) refactoring code to be Windows ready >>>>> (mount) added option 'mfsflattrash' (makes trash look like before version 3.0.64) >>>>> (mount) added fixes for NetBSD (patch contributed by Tom Ivar Helbekkmo) >>>>> >>>>> Best regards, >>>>> >>>>> -- >>>>> Piotr Robert Konopelko >>>>> MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/> >>>> >>>> >>> >>> >> > |
From: 田忠博(Zhongbo T. <win...@gm...> - 2017-04-10 05:26:19
|
Awesome, thanks!

Piotr Robert Konopelko <pio...@mo...> wrote on Friday, 7 April 2017 at 8:41 PM:

> Hey Zhongbo,
>
> We've followed your suggestion and added this change to the MooseFS sources.
> Thanks!
>
> Please find the URL of the appropriate commit below:
>
> https://github.com/moosefs/moosefs/commit/709eba4e72c997888f7e319740dcf45cbde5acbd
>
> Best regards,
> Peter
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer | moosefs.com
>
> On 30 Mar 2017, at 5:02 AM, 田忠博(Zhongbo Tian) <win...@gm...> wrote:
>
> Hi Aleksander,
>
> Thanks for the quick reply. 3.0.90 DOES fix the issue! Thank you.
>
> And on our production cluster, we found a lot of unconsumed messages in the
> client's TCP receive queue. This led to periodic high load. After some
> investigation, we suspect the client's `conncache` is too slow to digest
> KEEPALIVE messages. So we modified the source code to decrease the sleep
> time, and it seems to be working for us. Here is our patch:
>
> """
> diff --git a/mfscommon/conncache.c b/mfscommon/conncache.c
> index 4d33c19..b7a99bf 100644
> --- a/mfscommon/conncache.c
> +++ b/mfscommon/conncache.c
> @@ -161,7 +161,7 @@ void* conncache_keepalive_thread(void* arg) {
>  		}
>  		ka = keep_alive;
>  		zassert(pthread_mutex_unlock(&glock));
> -		portable_usleep(10000);
> +		portable_usleep(5000);
>  	}
>  	return arg;
>  }
> """
>
> Finally, I am curious about the progress of MooseFS 4.0; we have been looking
> forward to the erasure-coding implementation for quite a long time. We would
> also like to know the MooseFS team's opinion on the Container Storage
> Interface (CSI); you can find more details on it here:
> https://github.com/docker/docker/issues/31923
>
> And finally, thank you for this excellent project.
>
> On Wed, Mar 29, 2017 at 6:04 PM Aleksander Wieliczko <ale...@mo...> wrote:
>
> Hi.
> Did you try the latest stable MooseFS version, 3.0.90?
>
> The MooseFS 3.0.86 client has a few bugs, but they have been fixed.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
>
> On 29.03.2017 11:46, 田忠博(Zhongbo Tian) wrote:
>
> Hi all,
>
> We encountered a weird issue after upgrading to MooseFS 3.0.86.
> When we try to run 'TMPDIR=/some/moosefs/path python -c "import ctypes"',
> we end up with a SIGBUS.
>
> After some investigation, we found it seems related to mmap, and we can
> reproduce this bug with the following C code:
>
> """
> #include <stdio.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <sys/mman.h>
>
> int main(int argc, char** argv) {
>     int fd;
>     char* filename;
>     char *c2;
>     if (argc != 2) {
>         fprintf(stderr, "usage: %s <file>\n", argv[0]);
>         return 1;
>     }
>     filename = argv[1];
>     unlink(filename);
>     fd = open(filename, O_RDWR|O_CREAT, 0600);
>     ftruncate(fd, 4096);
>     c2 = mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
>     *c2 = '\0'; // SIGBUS
>     return 0;
> }
> """
>
> Here is the strace from when we run this on a MooseFS path:
>
> """
> $ strace ./test /mfs/user/tianzhongbo/temp/test
> execve("./test", ["./test", "/mfs/user/tianzhongbo/temp/test"], [/* 52 vars */]) = 0
> brk(0) = 0x949000
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d85a000
> access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
> open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
> fstat(3, {st_mode=S_IFREG|0644, st_size=114873, ...}) = 0
> mmap(NULL, 114873, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f825d83d000
> close(3) = 0
> open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
> read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\t\2\0\0\0\0\0"..., 832) = 832
> fstat(3, {st_mode=S_IFREG|0755, st_size=1697568, ...}) = 0
> mmap(NULL, 3804928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f825d299000
> mprotect(0x7f825d430000, 2097152, PROT_NONE) = 0
> mmap(0x7f825d630000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x197000) = 0x7f825d630000
> mmap(0x7f825d636000, 16128, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f825d636000
> close(3) = 0
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83c000
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83b000
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f825d83a000
> arch_prctl(ARCH_SET_FS, 0x7f825d83b700) = 0
> mprotect(0x7f825d630000, 16384, PROT_READ) = 0
> mprotect(0x600000, 4096, PROT_READ) = 0
> mprotect(0x7f825d85b000, 4096, PROT_READ) = 0
> munmap(0x7f825d83d000, 114873) = 0
> unlink("/mfs/user/tianzhongbo/temp/test") = 0
> open("/mfs/user/tianzhongbo/temp/test", O_RDWR|O_CREAT, 0600) = 3
> ftruncate(3, 4096) = 0
> mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x7f825d859000
> --- SIGBUS {si_signo=SIGBUS, si_code=BUS_ADRERR, si_addr=0x7f825d859000} ---
> +++ killed by SIGBUS +++
> Bus error
> """
>
> Can anyone help to resolve this?
|
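For anyone who wants to check a mount for this behaviour without crashing the whole process, the sketch below is one way to do it. It is not part of MooseFS, only plain POSIX: it repeats the ftruncate-then-mmap pattern from the reproducer above, but catches the SIGBUS around the first store into the mapping with sigsetjmp/siglongjmp. The test-file path is whatever you pass in, and the 4 KiB size simply mirrors the reproducer.

"""
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf jb;

/* Jump back out of the faulting store instead of letting SIGBUS kill us. */
static void on_sigbus(int sig) {
    (void)sig;
    siglongjmp(jb, 1);
}

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file on the mount to test>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigbus;
    sigaction(SIGBUS, &sa, NULL);

    /* sigsetjmp(..., 1) saves the signal mask, so SIGBUS is unblocked
     * again after we siglongjmp out of the handler. */
    if (sigsetjmp(jb, 1) == 0) {
        p[0] = 'x';            /* the store that faulted in the report above */
        printf("mmap write OK\n");
    } else {
        printf("SIGBUS on mmap write\n");
    }

    munmap(p, 4096);
    close(fd);
    return 0;
}
"""

If the report above holds, a client with the fix (3.0.90 or later) should print "mmap write OK", while an affected 3.0.86 client should report the SIGBUS instead of dying with "Bus error".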
From: Jakub Kruszona-Z. <jak...@ge...> - 2017-04-10 05:23:11
|
mfsdirinfo shows the number of chunks not multiplied by goal. As somebody
mentioned, a number of chunks slightly lower than the number of files is
usually caused by empty files - it is quite normal. Goal is used to estimate
'realsize', so with goal 2 'realsize' should be twice as big as 'size', and
it is in this case, so everything looks pretty OK.

On 7 Apr 2017, at 14:09, Wilson, Steven M <st...@pu...> wrote:

> No, the goal is set at 2 for this file system.
>
> Steve
>
> From: Casper Langemeijer <cas...@pr...>
> Sent: Friday, April 7, 2017 4:37 AM
> To: Wilson, Steven M; MooseFS-Users
> Subject: Re: [MooseFS-Users] Fw: Number of files > number of chunks?
>
> A file of zero length does not have any chunks. But still: a number of
> chunks very similar to the number of files. Are you running a system with
> a goal of 1?
>
> On Thu, 6 Apr 2017 at 14:31, Wilson, Steven M <st...@pu...> wrote:
>
> Ah, yes, that's probably what's happening here. Thanks!
>
> Steve
>
> On Apr 5, 2017 5:50 PM, "Ricardo J. Barberis" <ric...@do...> wrote:
>
> In my case, whenever I see more files than chunks it's almost always empty
> files, like temporary locks.
>
> On Wednesday 05/04/2017 at 17:26, Wilson, Steven M wrote:
> > Hi,
> >
> > I was looking at one of our MooseFS file systems today and noticed that it
> > has more files than it has chunks. Can anyone explain how that can happen?
> > Here's the output of mfsdirinfo at the root of the file system:
> >
> > inodes: 224665855
> > directories: 6659685
> > files: 217339466
> > chunks: 215970818
> > length: 44007258442765
> > size: 57959637532672
> > realsize: 115919275065344
> >
> > Thanks,
> > Steve
>
> Cheers,
> --
> Ricardo J. Barberis
> Senior SysAdmin / IT Architect
> DonWeb
> La Actitud Es Todo
> www.DonWeb.com

--
Regards,
Jakub Kruszona-Zawadzki
- - - - - - - - - - - - - - - -
Segmentation fault (core dumped)
Phone: +48 602 212 039
|
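To make the arithmetic explicit, here is a standalone check (not anything run against MooseFS itself) of the relationship Jakub describes, using the numbers from Steve's mfsdirinfo output: with goal 2, 'realsize' is exactly 'size' times two.

"""
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* Values copied from the mfsdirinfo output quoted above. */
    const uint64_t size     = UINT64_C(57959637532672);
    const uint64_t realsize = UINT64_C(115919275065344);
    const uint64_t goal     = 2;   /* goal reported for this filesystem */

    printf("size * goal = %" PRIu64 "\n", size * goal);
    printf("realsize    = %" PRIu64 "\n", realsize);
    printf("%s\n", size * goal == realsize ? "realsize == size * goal" : "mismatch");
    return 0;
}
"""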